
Building a Fold@Home dedicated PC server which is also a space heater

This post is about a project I recently worked on which was not as successful as I'd hoped, but it's worth documenting all the same.

I wanted to build a headless (no monitor) PC server from old parts I had, one that just runs medical research workloads (Folding@home, aka FAH) and warms up my office in the process.

Background: Computers, waste heat and the data furnace

I worked in IT for 35 years, and have always been interested in how computer systems deal with waste heat. Most server rooms and data centres need to be kept cool or the computers will fail. Computers turn 90%+ of the electricity they consume into heat (probably closer to 99% now with SSD technology), and in this regard they are no different from an electric heater. We've all felt the warm air coming from a games console fan, or had a laptop feel uncomfortably warm. Computers are great electric heaters. Yet waste heat is treated as an inconvenient by-product: it's not quite usefully hot, typically around 40 degrees C, and often the combined fan noise means you can't simply vent the air into an adjacent room. I worked in an office where we had small machine rooms containing servers, which required local air conditioning to be retro-fitted to dump the waste heat outside the building. These rooms were next to open-plan office areas where some staff needed fan heaters to keep warm. Crazy.

Some data centres do use excess heat to warm nearby buildings or water for swimming pools, and most of the cloud providers have a few showcase examples. Stockholm in Sweden has a hot water grid: data centres heat water which is pumped around the city, heating buildings and recycling this 'waste' heat, treating it as an asset. However, these are exceptions rather than the norm.

Another approach is to perform the processing across distributed servers that are located where their heat can be put to useful work. A few examples I've found:

Deep Green - They make data furnaces: small portable data centres that can be placed close to municipal swimming pools. They take the waste heat, pass it through heat exchangers and heat the pool water. The cost to the swimming pools is covered by Deep Green's cloud customers.

Qarnot - This French firm offers digital heaters and boilers. They are on-premise cloud servers which perform workloads for Qarnot's customers, such as video rendering. The waste heat is conducted away from the CPU and GPU cores and used to heat homes or hot water, just like an electric space heater.

heata - Similar in concept to Deep Green, but much smaller and aimed at individual homes. They provide a computer-based hot water heater, just like an immersion heater. Overnight the unit runs workloads on behalf of heata's customers; the 'waste' heat passes through heat exchangers and heats the water in the tank. The homeowner is reimbursed for the cost of the electricity the heata unit consumes.

This concept of distributing cloud data centre workloads across servers located where their excess heat can be used is sometimes referred to as data furnaces or computational heating.


Distributed Computing


Certain problems lend themselves to an approach called distributed computing, where individual servers, PCs and laptops request a packet of work from a central service, work on it, and upload the result when complete. Often the problem is too complex for a single machine, but collectively thousands of computers can complete the overall task in a fraction of the time. The most famous example was the SETI@home project, where computers worked through the huge backlog of space radio telescope data, looking for patterns. It was implemented as a desktop screen saver which started when a computer was idle, taking over the CPU until the computer was needed again. SETI@home stopped distributing work a few years ago, but many other systems, such as BOINC and Folding@home (FAH), use this approach for medical research.

FAH works on various protein-folding problems which relate to the fight against diseases including Covid, cancer and Alzheimer's. As with SETI@home, a local client downloads a unit of work, processes it and returns the result. These tasks are highly CPU intensive, and you can control how much resource they use and when they run (see the sketch below); at maximum throughput they will use 100% of the CPU until complete. FAH can also exploit modern graphics processing units (GPUs), much as crypto-mining systems now do.
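For illustration, a minimal sketch of the kind of setting involved, assuming the v7 Linux client reads /etc/fahclient/config.xml and supports the power option (treat the path and option name as assumptions; check the documentation for your client version):

<config>
  <!-- assumed v7 option: how hard to fold; other values are light and medium -->
  <power>full</power>
</config>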

Combining the two 

With a broken heating boiler, I started to wonder if I could use old computers to work on worthy projects like FAH while also generating some heat in my house as a useful by-product. I reasoned that 350 watts of heat from an electric heater is no different from a computer consuming 350 watts. The latter could also be doing something worthwhile, i.e. medical research, rather than just warming a resistive element. Both consume 350 watts and convert 99% of it into heat, but one does something scientifically useful too. My house has a modern thermostat, so I reasoned the heating system would adapt to any extra secondary heat from other sources, as it does for fridges, TVs and other appliances that get warm.

Some years ago, wondering what to do with a 2012 Mac mini that one son had used for schoolwork and no longer needed, I set it up in my wife's office. This happens to be the coldest room in the house, and I wanted to see if the old Mac might make a difference to the temperature. I configured it for headless operation: no monitor connected, just a mains cable and a network lead (though it could use WiFi to simplify things further). I manage it using Apple Screen Sharing. It runs the Folding@home Mac client and is used for nothing else. It's been running at full power 24/7 for 4 years.


The Intel Core i5 processor is reported to be at 90 degrees C, and has been for 4 years. Impressive for a machine which still has a mechanical HDD inside - it just keeps going. The outer case of the mini is noticeably warm, and the wall opposite the slow-running fan vent is also warm. The machine is otherwise silent, an important consideration as I didn't want to generate intrusive fan noise.

Does it heat the room? No, but it clearly contributes to the warmth of the room. The Mac mini is a nice design where the aluminium case acts as a large heat sink. I run the unit on its side to improve airflow.






I wondered if I could take this concept and improve on it using an old gaming PC. I was inspired by this blog post: How to Make a Folding@Home Space Heater (and why would you want to?)




This machine was purchased as parts back around 2009. It houses an ASRock 775Dual-VSTA motherboard with 2GB of memory. It's an interesting transitional motherboard, having both PCI-E and AGP graphics slots, IDE and SATA disk connectivity, two types of memory slot and four PCI slots. It was chosen when a previous Compaq blew its non-standard ATX power supply. I chose it, rightly or wrongly, on the basis that the maximum amount of kit could be migrated from the old machine into this one.

First of all I removed all the IDE disks and cables and fitted a small 64GB SSD, which I purchased reconditioned from eBay for £9. SSDs are fast, run cool and use less power.

Next I removed the secondary case fans from the front and top of the unit. These were 'dumb' fans that lit up, ran at a fixed speed and contributed to the noise. My plan was that the machine should be as quiet as possible, and that the perforated design of the case panels would allow sufficient cooling.

I initially used the old graphics card it still had from its previous use, a BFG Nvidia GeForce 7600 GT OC. This had issues with being detected during the Peppermint Linux install, and X windows gave a corrupted image on the screen. It was also very noisy, thanks to the GPU fan.

I then remembered that we had a PCI graphics card in another PC in the attic (the Dell Latitude mentioned in my blog on how not to build a gaming machine): a Sparkle 8500GT 256MB PCI card (yes, that really is PCI, not PCI Express). I tried this and still couldn't get Linux to start X windows cleanly when booting from a live USB image.

I considered that in order to use the machine I was going to need a fanless GPU anyway, so I purchased a second-hand ASUS AMD Radeon HD 5450 1GD3 graphics card, which has a heatsink but no fan. It is also a half-sized board, so internal airflow should be better. The Peppermint 11 installer fared better here and I was able to install Linux with no issues.


Here you can see both the AMD card in its PCI-E slot (upper red board) and, below it, the Sparkle GeForce 8500, which I initially left in place.



Having got a working Linux server, I installed the Folding@home client and ran it up. It confirmed that while a 2-core CPU was detected and usable, no GPU could be found. I tried various driver installs, but the FAH forum confirmed that neither of these GPU cards would be recognised by FAH, being too old. That was a setback, but given the machine's age I decided there was little point in throwing more money at it.

So the machine will run FAH at 100% CPU on both cores, but not on the GPUs. After running for several hours, the vented air from the case fan and the ATX power supply was not noticeably warm, though it's quite hard to tell. However, 99% or so of the energy consumed by the PC is turned into heat in the process of performing FAH work, and it does this pretty silently.

I decided to measure how much power the rig was consuming. It's not possible to do this from the onboard sensors; the only way is a plug-in monitor at the socket. It stabilised at 94 watts, which is lower than I expected, but with the two GPUs doing nothing much and a single SSD, it isn't consuming very much.



I also installed the Linux lm-sensors package, and disabled the GUI as it would not be needed for normal headless operation. Once configured, lm-sensors enables me to interrogate the various temperature and fan-speed sensors in the machine with the sensors command.
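Getting to that point takes only a couple of commands (a sketch, assuming a Debian-family distribution such as Peppermint):

$ sudo apt install lm-sensors    # install the sensors tools
$ sudo sensors-detect            # interactive probe that finds the monitoring chips

sensors-detect asks a series of yes/no questions and offers to load the kernel modules it identifies; the sensors command then produces output like the following.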

$ sensors

w83697hf-isa-0290

Adapter: ISA adapter

in0:           1.30 V  (min =  +1.09 V, max =  +1.02 V)  ALARM

in2:           3.31 V  (min =  +0.26 V, max =  +0.00 V)  ALARM

in3:           2.91 V  (min =  +0.02 V, max =  +0.27 V)  ALARM

in4:           3.01 V  (min =  +0.13 V, max =  +0.00 V)  ALARM

in5:           3.01 V  (min =  +0.13 V, max =  +1.02 V)  ALARM

in6:           3.04 V  (min =  +0.00 V, max =  +1.02 V)  ALARM

in7:           3.17 V  (min =  +3.09 V, max =  +0.06 V)  ALARM

in8:           3.39 V  (min =  +0.00 V, max =  +0.06 V)  ALARM

fan1:        1577 RPM  (min = 84375 RPM, div = 8)  ALARM

fan2:        2008 RPM  (min = 1318 RPM, div = 8)

temp1:        +34.0°C  (high =  +0.0°C, hyst =  +0.0°C)  ALARM  sensor = thermistor

temp2:        +38.5°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor

beep_enable: enabled


nouveau-pci-0300

Adapter: PCI adapter

GPU core:      1.32 V  (min =  +1.20 V, max =  +1.32 V)

temp1:        +64.0°C  (high = +95.0°C, hyst =  +3.0°C)

                       (crit = +125.0°C, hyst =  +2.0°C)

                       (emerg = +130.0°C, hyst = +10.0°C)


radeon-pci-0200

Adapter: PCI adapter

temp1:        +46.0°C  (crit = +120.0°C, hyst = +90.0°C)


coretemp-isa-0000

Adapter: ISA adapter

Core 0:       +56.0°C  (high = +78.0°C, crit = +100.0°C)

Core 1:       +60.0°C  (high = +78.0°C, crit = +100.0°C)


An interesting point: while neither graphics card was being used (X is disabled, so only a non-graphical login was running), both GPUs were running warm, with the aged PCI card at 64 degrees C. I think this is just the idle current of having them plugged in, and hence powered, via the PCI interfaces. Both are passively cooled, with no local fan; only the case, CPU and PSU fans were in operation. I could remove one of the graphics cards, which would fractionally reduce the power consumption, but I initially decided to leave both in place.


So, on the plus side, I have a silent headless server that just sits there doing FAH work units for the benefit of medical science. It cost me a few pounds for the SSD and Radeon card, both second-hand from eBay. It consumes a small amount of electricity, which is all turned into heat, and must contribute to the temperature of the room. But not quite in the way my son's gaming rig can heat his room at maximum settings.

I may revisit this concept, and it may be worth trying if you have a newer-vintage gaming rig with GPUs that FAH can exploit.

So, a partial success, I reckon?


Update: 15th November 2023


Enjoying this experiment, I've tweaked a few more things:

1) I decided to remove the passively heating PCI graphics card, which drops the server's power consumption to 80 watts under full load and 45 watts when idling. It served no purpose other than consuming and radiating power, and I couldn't justify its inclusion. It will be reused in something else.

2) lm-sensors includes fancontrol, and after some configuration (generated with pwmconfig; see the note after the config listing below) I found I could run the case fan (fan2) at minimum speed and still keep the cores at reasonable temperatures. The CPU fan is under automatic control from the BIOS. I figure under 70°C is probably fine. The machine is whisper quiet even under full load.

coretemp-isa-0000

Adapter: ISA adapter

Core 0:       +63.0°C  (high = +78.0°C, crit = +100.0°C)

Core 1:       +67.0°C  (high = +78.0°C, crit = +100.0°C)


w83697hf-isa-0290


fan1:        1480 RPM  (min = 84375 RPM, div = 8)  ALARM

fan2:        1081 RPM  (min = 1318 RPM, div = 8)  ALARM




$ cat /etc/fancontrol 

# Configuration file generated by pwmconfig, changes will be lost

INTERVAL=10

DEVPATH=hwmon0=devices/platform/coretemp.0 hwmon1=devices/platform/w83627hf.656

DEVNAME=hwmon0=coretemp hwmon1=w83697hf

FCTEMPS=hwmon1/device/pwm2=hwmon0/temp3_input

FCFANS=hwmon1/device/pwm2=hwmon1/device/fan2_input

MINTEMP=hwmon1/device/pwm2=60

MAXTEMP=hwmon1/device/pwm2=75

MINSTART=hwmon1/device/pwm2=150

MINSTOP=hwmon1/device/pwm2=0
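For reference, this file is written by the interactive pwmconfig tool, and the fancontrol service then applies it (a sketch, assuming a systemd-based distribution):

$ sudo pwmconfig                          # probes the fans and writes /etc/fancontrol
$ sudo systemctl enable --now fancontrol  # start it now and at every boot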



3) I pondered whether I could throttle the unit based on the carbon intensity of the UK electricity generation mix. I've not yet implemented this, but I wrote a script using curl and jq to extract a status from the UK National Grid Carbon Intensity API. Their API has a number of features; I decided to use their regional value for the South of England, with my postcode of GU51.

curl -s -X GET https://api.carbonintensity.org.uk/regional/postcode/GU51   -H 'Accept: application/json'| jq -r '.[]| .[]| .data| .[]| .intensity.index'


This returns a word value for the carbon intensity of the electricity generation mix for my region, based on a postcode. Values range over: very low, low, moderate, high, very high. My thinking was to pause FAH for any value above moderate, and shut the machine down for high and above (sketched below). The South of England scores badly, having little local wind or nuclear generating capacity, and relying on gas, solar and the IFA interconnect with France.
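A sketch of how that throttling logic might look, run from cron every half hour. The FAHClient flags (--send-pause, --send-unpause) are from the v7 client, and the thresholds reflect my thinking above; treat the whole script as illustrative, since I haven't deployed it yet:

#!/bin/sh
# Hypothetical carbon-aware throttle for the FAH server - not yet deployed.
INDEX=$(curl -s -X GET https://api.carbonintensity.org.uk/regional/postcode/GU51 \
          -H 'Accept: application/json' | jq -r '.[]| .[]| .data| .[]| .intensity.index')

case "$INDEX" in
  "very low"|low|moderate) FAHClient --send-unpause ;;  # clean grid: keep folding
  high)                    FAHClient --send-pause ;;    # dirty grid: pause the work
  "very high")             sudo shutdown -h now ;;      # dirtiest grid: power off
esac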

4) I pared down the processes running on the machine, removing some services for WPA and wireless modems. I also set journald to log to volatile memory, reducing the amount of disk IO, by adding these lines to the journald.conf file:

[Journal]

Storage=volatile

RuntimeMaxUse=128M

ForwardToSyslog=no
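For the change to take effect without a reboot, restarting the journal daemon should be enough (assuming a systemd-based distribution):

$ sudo systemctl restart systemd-journald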




While not in the crypto-rig class of FAH servers, the machine completes an FAH work unit around every 1.6 days, taking approximately 23 minutes per 1% of progress (23 minutes × 100 ≈ 38 hours ≈ 1.6 days), which is not too bad.

******************************* Date: 2023-11-15 *******************************

06:29:07:WU00:FS00:0xa8:Completed 475000 out of 2500000 steps (19%)

06:51:17:WU00:FS00:0xa8:Completed 500000 out of 2500000 steps (20%)

07:13:26:WU00:FS00:0xa8:Completed 525000 out of 2500000 steps (21%)

07:35:36:WU00:FS00:0xa8:Completed 550000 out of 2500000 steps (22%)


You can follow my FAH results here: Team Ives-Towers

Update: 24th November 2023


I tweaked a few more things on Linux, replacing ntpd, which seemed to use quite a lot of CPU, with chrony, which I simply installed and which self-configured in ntpd's place.
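On a Debian-family system the swap is a one-liner, since the chrony package conflicts with ntp and apt removes the old daemon for you; chronyc then confirms the new daemon is synchronising:

$ sudo apt install chrony    # pulls chrony in and removes ntpd
$ chronyc tracking           # show sync status of the new daemon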

I also fitted mesh blanking plates, found very cheaply on eBay, to both rear PCI slots and to the now-vacant front DVD bay, as I needed the player for another project .....






Finally, I configured FAHControl on my desktop machine to remotely access the farm of clients I'm now running. There is a very good guide on how to do this here, by Linus Tech Tips member Gorgon.
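In outline, it involves telling each client which addresses may connect and issue commands. A sketch of the relevant config.xml lines, assuming the v7 client and a 192.168.1.0/24 home LAN (illustrative values; the linked guide covers the exact options and password-protected variants):

<config>
  <!-- assumed v7 options: let the LAN view the client and send commands -->
  <allow>127.0.0.1 192.168.1.0/24</allow>
  <command-allow-no-pass>127.0.0.1 192.168.1.0/24</command-allow-no-pass>
</config>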








