Quantum Bigfoot CY 4.3GB - 1996
The Bigfoot I had was, like all others, a large industrial-looking slab of a hard drive. It was a quarter-height 5.25" drive (desktop CD-ROM/DVD-ROM is half-height, a 5.25" floppy drive would be full height), when HDDs had been 3.5" for years and years.
A Bigfoot was an ideal storage drive, but less than ideal for programs. This came down to its somewhat extreme geometry. Its platters were 125 mm in diameter, where a 3.5" drive has 90 mm diameter platters. This meant that the Bigfoot had enormously more area on the platter: around 12,300 mm^2 versus 6,400 mm^2. Proportionally, the 3.5" also lost more of its area to the inner radius where the motor sat.
This meant the outer edges of the Bigfoot were spinning at ferocious speed, so passing more data under the head more quickly. We can even work this out. The circumference of the platter of a Bigfoot is 393 mm. Of a conventional drive, it is 283 mm. At the same rotation rate, the Bigfoot is therefore about 39% faster. A Bigfoot did not run at the same speed, however. It ran at a lower speed than most other mechanisms. A performance drive would be 5,400 RPM, standard units ran at 4,800 RPM, but the Bigfoot ran at 3,600 RPM. In the table below, I've included a column where the rotation rate is in Hertz (RPM/60), as most other units are also per second.
Drive              Rotation (RPM)   Hz   Outer track (mm)   Linear velocity (mm/s)
Bigfoot                     3,600   60                393                   23,562
Standard 3.5"               4,800   80                283                   22,620
Performance 3.5"            5,400   90                283                   25,447
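The sums are easy to reproduce. A minimal sketch in Python, using only the diameters and spindle speeds quoted above:

```python
from math import pi

# (RPM, platter diameter in mm), as quoted above
drives = {
    'Bigfoot':          (3600, 125),
    'Standard 3.5"':    (4800, 90),
    'Performance 3.5"': (5400, 90),
}

for name, (rpm, diameter) in drives.items():
    hz = rpm / 60                  # revolutions per second
    outer_track = pi * diameter    # outer circumference, mm
    velocity = outer_track * hz    # linear velocity at the rim, mm/s
    print(f"{name:<17} {rpm:>5} RPM  {hz:>3.0f} Hz  "
          f"{outer_track:>4.0f} mm  {velocity:>6.0f} mm/s")
```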
Examining the data pulls apart one of Quantum's claims, that the larger drive allows faster sequential transfers. Sure, sequential transfers are in the same class as the 4,800 RPM units, but that's all. The slow rotation was also claimed by Quantum to save energy, which perhaps it did, but nobody cared about the five watts or so being used by a hard disk drive.
The Bigfoot, spinning more slowly, was cheaper to manufacture, and also offered much higher capacity. A single Bigfoot platter had well over double the usable area of any other drive, so the last Bigfoot hit 19.3 GB in an era where normal 3.5" drives were around 5-10 GB.
So, if they could keep up with a standard 3.5" drive, and indeed had more outer tracks, so performance decayed more slowly as the heads moved inward, why were they so slow?
The answer is in access time. At this point in history, most voice coils could accurately position the head on the right track in around 10 ms: this is the seek time. Quantum made the best voice coils, so the actuator itself was not the problem, but the larger platters meant it had to move the heads a greater distance, so it lagged a little. Quantum claimed 13 ms, and we have no reason to doubt this figure.
Then, after the head has moved to the right track, we wait for the sector we want to start reading from to come under the head. Clearly a more rapidly rotating unit has an advantage here. This is rotational latency: on average half a revolution, so for a 3,600 RPM unit (16.67 ms per revolution) it is 8.33 ms. A 4,800 RPM unit has a rotational latency of 6.25 ms, and a 5,400 RPM unit 5.56 ms. This doesn't seem much, but wait until we see what it does.
At that time, most accesses were either 4 kB, 8 kB or 64 kB blocks. 4 kB and 8 kB were FAT32's default cluster sizes; 64 kB was FAT16's cluster size for the drive sizes of this era.
So, for every x kB transferred at the 2,000 kB/s media rate, we add a delay of seek time (13 ms or 10 ms) plus rotational latency (8.33, 6.25 or 5.56 ms). We'll also add in a 512 kB read block, as this was a fairly big file at the time, and useful for measuring file operations. All speeds in kB/s.
Drive              Access (ms)    4 kB    8 kB    64 kB   512 kB
Bigfoot                  21.33   171.5   315.8   1200.1   1846.2
Standard 3.5"            16.25   219.2   395.1   1326.4   1880.6
Performance 3.5"         15.56   227.8   409.0   1345.7   1885.4
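Here is that model as a minimal Python sketch, assuming the 2,000 kB/s media rate and the access figures above (expect tiny rounding differences against the table):

```python
def effective_rate(access_ms, size_kb, media_kbs=2000):
    """Effective throughput of one access: a fixed access penalty
    (seek + rotational latency), then transfer at the media rate."""
    total_s = access_ms / 1000 + size_kb / media_kbs
    return size_kb / total_s   # kB/s

# access = seek + average rotational latency, in ms
drives = {
    'Bigfoot':          13 + 8.33,
    'Standard 3.5"':    10 + 6.25,
    'Performance 3.5"': 10 + 5.56,
}

for name, access in drives.items():
    row = "  ".join(f"{effective_rate(access, kb):7.1f}"
                    for kb in (4, 8, 64, 512))
    print(f"{name:<17} {row}")
```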
Now we can see why the Bigfoot was slow for applications, games, and the operating system. The critical small block or cluster-level performance was around two generations behind. However, if we're storing bulk data, which would have been images, documents, archives, backups, and so on at the time, our large transfer performance was as good as anything else!
This underlined the truism: You never, ever wanted a Bigfoot as your only drive!
They were popular with OEMs, who could advertise a bigger number, but they degraded performance equivalent to maybe losing 4 MB RAM or dropping the CPU from 166 MHz to 133 MHz.
Retail, they were popular with people who just needed cheap giant storage barrels. It must be understood that, back then, nobody had enough storage. The size of HDDs only caught up to demand in around 1997/98; before then, the largest drives in a range always carried a little premium and would quickly fly from the store shelf.
I sit here writing this on a machine with 6 TB HDD and 1 TB SSD storage, with around 3 TB of that free, but back then, when the Bigfoot was relevant, 10 GB was a very large amount of storage. Most machines had one to four gigabytes. My first ever PC was from this time, and it had a 2 GB Seagate SCSI mechanism on an Adaptec AHA-2940. I added an extra 4 GB soon after. That's how constrained we were for space. The late Bigfoot TX upped the spin speed to 4,000 RPM, but fast spins were bad for a Bigfoot: air resistance goes up with the square of velocity and linearly with area, and the Bigfoot had an awful lot of area. I would be surprised if modern (2021) HDD technology could produce a Bigfoot-type HDD at 7,200 RPM; the air resistance would heat the platters unacceptably. The 10,000 RPM and faster drives went the other way, using smaller platters: nobody ever built one in the 5.25" form factor, and most used media smaller than even a 3.5" drive's!
The machine which first ran my personal web server as a machine distinct from my main desktop, the ancestor of hattix.co.uk, had this Quantum Bigfoot and a Pentium II-266.
IWill SIDE RAID-100
Add-in ATA-RAID cards were fairly popular between 1999 and 2004; Promise and HighPoint quickly scooped up major design wins from almost all motherboard manufacturers. While the RAID functions were host-based, the other reason many sold was simply to add more ATA channels.
HighPoint's HPT370A controller was supported across the board, from archaic Windows 95 all the way through to the latest Windows XP, Linux, BSD and so on. It even works without a hitch in Windows Vista. It was built into most Abit '-RAID' variants and was generally seen as a solid solution as well as a low cost one. The HPT370A controller and a simple BIOS EEPROM are all that make up the logic on this card. The BIOS wasn't even necessary if the user never had any intention of booting from the controller.
Seagate U5 10GB
Seagate's U series had always been, to put it simply, crap. They weren't unreliable or horribly slow or anything, they were just too 'consumer'. They had mediocre speed, mediocre seek time and were generally all-round mediocre, the sort of thing you'd expect to find in a Dell. In the "fast, cheap, good, pick two", Seagate went for "cheap and good". Later, around the time of the U9, U10 and U11 they'd go with "cheap".
Looking carefully at a Seagate U5, you'd see the top cover was held on by aluminium foil adhesive tape. This identifies it as a Conner manufactured drive. Seagate did add screws, but fewer than anyone else. The rounded edges, a Conner trademark, also identify who made this drive.
In the PCB shot, this drive has its "SeaShield" rubber housing removed, but it's in place in the top shot. Seagate knew that most HDD damage was done on the route between factory and user, so they added the rubber casing to protect the disks from damage. It worked: with a much lower failure rate than other vendors, Seagate was able to drop prices on most of their range.
For people just wanting a cheap, giant bit-barrel (who exist in surprising number), the Seagate U series was ideal. Such people do not want stellar performance, they just want a lot of gigabytes for runtime storage without having to sift through a stack of DVDRs as tall as they are. This 10GB U5 was bought for just such a purpose, it being the second drive in a system which had outgrown its 4.3GB Fujitsu.
Fujitsu MPE3173AE 17.3 GB (2000)
The drive controller (largest IC) in this unit is a MB90255A, also manufactured by Fujitsu, a 16-bit microcontroller of the F²MC family. Fujitsu's microcontroller division was sold to Spansion in 2013, and little documentation is available for these older parts. The smaller two-row chip near the ATA connector is the firmware, an STM M29F102BB, a 1 Mbit (64K x 16) flash memory. Next to it is the DRAM, used for firmware scratch and drive buffer, an OKI M54V24616, a 4 Mbit 10 ns SDRAM (66 MHz CL2, 100 MHz CL3). The chip with the heatsinking (large contacts) has a scar on the top of it - burn damage - meaning it's likely a power regulator which died in the line of duty, a very common failure mode from 1995 until the SATA power connector took over, usually caused by hot-plugging the drive's power connector. The final semiconductor component is a Cirrus Logic SH3512, a PRML signal processor (known as a "read channel"), so the bit that reads from the drive heads.
Fujitsu drives were always reliable enough, but much, much slower than they had any right to be. You'd look at the RPM, 5,400 in this case, and the seek time (9.5 ms) and have a good idea of how well it'd perform. Both were fairly middle of the line in 2000, but like with previous mechanisms, the MPE3173AE plain underperformed. It always felt slower than it should have been.
The drive was rated for 15 ms access time (seek + rotational latency), but usually performed around 18 ms. 18 ms is on the slow side of things, but not horrible. Transfer rate was about 20 MB/s at the start and 8 MB/s at the end, again not horrible. In use, however, Fujitsu drives would form a little cluster below everything else in things like ZDNet's Winbench and other tests which measured application loading time. One can only presume either awful firmware or a badly implemented ATA interface.
Maxtor D740x 80GB (2001)
Ah, a company trying to be something it isn't. Maxtor had always been regarded rather like Seagate's U series (above), mediocre acceptable junk. With the D740x, Maxtor was attempting to fly with the big boys. The seek performance was exceptional (fastest ATA drive on the planet on release and still, 7 years later, able to compare with modern drives on a level playing field), the data transfer rate was good, so where was the catch? With Maxtor there had to be a catch.
The downside was that the D740x did not use fluid bearings and, at 7,200RPM, not using fluid bearings gives you "audible feedback" or to put it more bluntly "a goddamned whining noise". D740x drives were very, very whiny. Reliable enough (this one was killed by a faulty PSU, but still 'worked') but oh so incredibly noisy.
We can't entirely blame Maxtor, however. Anyone familiar with hard disks knows that companies have a standard chassis design which hardly ever changes. This drive is not a Maxtor. The shape of the top cover and the location of the printing are all very reminiscent of Quantum's Fireball series. The D740x is actually a successor to the Quantum Fireball Plus AS, Quantum's 7,200 RPM performance line. It was renamed since Maxtor had just bought Quantum... and the fluid dynamic bearings, which the Fireball Plus AS had been known for, were removed. Hence the lack of fluid bearings alongside the excellent performance, something Quantum had made a name for themselves with.
The D740x's secret was its Fireball Plus heritage. Quantum took the actuator (voice coil and head assembly) design from their blazing fast SCSI drives and put it into the Fireball AS drives. The D740x used 40 GB platters (20 GB per surface).
I used this particular drive from the late Windows 2000 era through most of the Windows XP era, to the point when its performance was more ordinary than special and areal density enabled newer drives to catch up. Its death came about as a PSU failed. If the drive would spin up properly, it would remain working for that entire session, but could then disappear off the bus on a reboot.
Images Pending
Maxtor Fireball 3 20 GB (2002)
Produced in the former Quantum facility in Singapore, this modern-looking slimline drive was a mere 20 GB in an era when 60-120 GB drives were the norm. It was an entry-level 5,400 RPM unit, cheap as chips, slow as molasses, and used a single 40 GB platter and only the lower surface of it. The upper surface went unused.
Best avoided.
Images Pending
Seagate Barracuda SATA V 120 GB ST3120023AS (2003)
The Barracuda SATA V series of 7,200 RPM drives was Seagate's mainstream desktop performance drive of 2003, available in 60, 80 and 120 GB capacities, using 30 GB per surface platters. It was also the first widely available SATA drive, although it appears this drive wasn't natively SATA.
They were equipped with an 8 MB buffer, quite large for the time (2 and 4 MB were still common). The 60 GB model used just one platter, while the 80 and 120 GB units used two. The 80 GB unit had only 3 heads, so one side of a platter went unused, while it also dropped 10 GB of potential storage (three used recording surfaces would be 90 GB, not 80 GB).
Performance was quite moderate: 14 ms typical access, 44 MB/s linear read on the outer surface dropping to 23 MB/s on the inner.
They were significantly problematic on the SATA interface, sometimes due to immature SATA controllers (Silicon Image controllers were anathema to these drives), sometimes due to buggy firmware. They often worked better if Native Command Queueing was disabled.
The drive is based around the STMicro 100238949 microcontroller, an LSI L282368 Serial ATA adapter (can't find any data on this; it looks like a SATA to PATA bridge), and the STMicro SMOOTH 100217347, which appears to be the read channel or motor controller. Buffer and firmware scratch are handled by a Samsung K4S641632F 64 Mbit (1M x 16 bit x 4 banks) SDRAM rated for 6 ns operation, or 166 MHz at CL3.
This one died without warning around 2012 and so had a good innings of almost ten years (about six of those being power on time). It now sits clicking away, occasionally showing up for half an hour, which enabled much of the data to be copied off.
Hitachi Deskstar 60GXP IC35L060 40 GB (2003)
This guy had died in the field; when it arrived here, as a data drive in an Opteron 165-based desktop, it wasn't showing up on the ATA bus at all. The IC35L060 was current right as Hitachi bought IBM's hard disk business, so some of them were branded IBM, some branded Hitachi. This weird thing is labelled as IC35L060, but only has 40 GB capacity. The IC35L040 was the 40 GB model!
They had a capacity of 20 GB per platter so chances are good that this is a three platter drive with only two of them enabled. Talk about waste!
The 60GXP series was a little better than the infamous "Deathstar" 75GXP, but still a nightmare. I remember looking at it in 2012 and thinking "How the hell is this still working?" because practically all of them failed quickly and early. A design fault in the 75GXP had heads crashing, hard, into the recording surface, sometimes actually delaminating it from the glass platter! What actually happened was that the magnetic layer was prone to flake off; a flake would then impact the head, causing the head (which flew on aerodynamic pressure from the spinning platter) to lose its cushion of air and crash into the platter. I haven't opened this 60GXP to check whether this was the problem or not.
It was a shame because, after around 1998, IBM was the global hard disk market leader. IBM pioneered giant magneto-resistive (GMR) heads, IBM pioneered EPRML recording, IBM pioneered glass platters. Since IBM exited the HDD market, in 2002 when the sale to Hitachi was announced, the market stagnated for years. Capacities improved, but performance only rose with areal density. Access time flatlined: the same 10-ish ms access IBM first demonstrated in 1999 is the access time of 2018's 4 TB models. Since IBM, the only real innovations have been perpendicular recording in 2005, which was actually developed at IBM Almaden Research Center, then implemented commercially by Toshiba; helium-filled HDDs; and shingled magnetic recording (SMR). Helium was considered, and seemed to always be "being considered", as early as 1994 at Quantum, but was always rejected: helium will escape over time, so any HDD relying on it is essentially time-bombed. SMR had been considered obvious by the industry for years before Seagate implemented it. SMR allows a large improvement in platter capacity, but at the cost of very poor write speeds.
Images Pending
Western Digital Caviar WD2500JB 250GB (2003)
250GB in 2003 was right at the top of the line, and Western Digital knew it. The other WD2500 you're likely to see was the Caviar SE, not this unit. The Caviar SE had a black upper cover; this one was just plain metallic silver. There was also a WD2500JD model which used a Marvell 88i8030 SATA bridge but was otherwise the same drive.
It operated with three 83.3 GB platters and an 8 MB buffer, but gave merely average performance. Maxtor's DiamondMax Plus 9 generally beat it in most tests, but was only available at up to 160 GB at the time.
This series of drives tended to have its firmware chips fail after a few years, resulting in garbled detection in BIOS and general failure to do anything.
Seagate U9 ST380012A 80 GB (2004)
In the early 2000s, Seagates fell out of nearly every OEM machine you saw. Dell, HP, Packard Bell, you name it, you saw Seagate in it.
This came out of a very unlovely Pentium 4 machine and remained as a storage drive when the P4 was replaced by a recycled Opteron 165. It was working when retired in 2012, by which point 80 GB was useless.
Hitachi Deskstar 160GB (2004)
After the nightmare that was the IBM Deskstar 60 and 75 series, IBM got out of the storage business entirely and sold the division to Hitachi, to become Hitachi Global Storage Technologies. Many expected Hitachi GST to change the series name, but they kept Deskstar, Travelstar, etc.
This one, manufactured August 2004, performed exceptionally for three years before being placed into this very server to handle live content. It did so quite happily until early 2010 when it started to lose reliability. It was finally replaced with a 200 GB Seagate SATA drive after six years of constant use in systems which were almost never powered off.
Its fault is quite simple: Sometimes it just doesn't spin up and the system doesn't see it. If it does spin up, it works perfectly until the next reboot. This is a common failure mode and is simply due to the motor becoming worn.
Western Digital Caviar SE16 500 GB (WD5000AACS)
This is an external hard drive.
No, really. I'm not joking. It was a Western Digital MyBook 500. Of course it was just a cheap SATA to USB2.0 bridge and a normal hard disk: This normal hard disk, manufactured on the 11th of November 2007.
This one was a few revisions into the SE16's life. The spindle speed was dropped to 5,400 RPM as this was both cheaper and reduced power requirements. These are important attributes for external hard drives, where the casing offers little room for a fast hard drive to cool itself and performance is hardly important.
16 MB onboard buffer, SATA 150 interface and the origin of the WD Green series as slow, cheap, low power drives. You can see at the bottom the "GreenPower" sticker.
The one you see here was liberated from its stuffy enclosure and put on a SATA interface. The drive was never quite happy in the enclosure, it'd just disappear every so often. Outside the enclosure after about a year, the same thing started happening again. It was forgotten about for nearly two years, then put back into a system where it worked for about another 18 months before it again failed - this time seemingly forever.
Images Pending
Samsung SpinPoint T166 HD403LJ 400 GB
Samsung rapidly became a favourite whenever I needed a HDD in the 2005-2014 timeframe. Sure, Seagate was around the same price, but why would I buy Seagate's big, cheap, slow, when I could have Samsung's big, cheap, fast?
This particular drive was a warranty replacement for a dead SpinPoint P120 (SP2514) of 250 GB capacity in PATA. The RMA centre offered a 400 GB SATA, and I'd just upgraded to a SATA-based system. I said "Yes, please."
With Samsung's then-new 166.7 GB platters (83.3 GB per surface), the 400 GB model used three platters, and five surfaces. One surface went unused. The 500 GB model used all six surfaces and was the top model of the T166 series. It went up against Hitachi's Deskstar T7K500 and Seagate's Barracuda 7200.10 series. Typically, the Hitachi won by a small amount, and the Seagate lost by that same small amount. Samsung was middle of the pack. The Samsung was also significantly cheaper than both ($30 MSRP lower than the Hitachi), and backed by the same 3 year warranty as the Hitachi.
As of January 2020, it had logged over six and a half years of power-on hours. There are zero reallocations, zero pending sectors, and only a slightly elevated soft read error rate to even hint that this mechanism is practically ancient history. It was retired in that same January after serving as a Windows 10 File History drive for two years.
Samsung SpinPoint M80 HM080HI - 2006
8 MB buffer, 5,400 RPM, 2.5" laptop form factor, SATA 150 interface. The 8 MB buffer puts it fairly late in the original SATA's run, the 80 GB capacity was fairly small for the time, and Samsung was a big supplier to Dell laptops, though I don't recall which machine this came out of. It may have been a Dell Latitude D620.
It was decently quick, though not at the top of any rankings, and it sat at the bottom of the storage capacity tiers of the day (80-120-160 GB).
Hitachi Deskstar 7K1000.B (2008)
Hitachi's 7K1000 series was Hitachi's first to hit the 1 TB mark, which it did with five 200 GB platters, but by 2008 it was no longer modern, so it was replaced by the 7K1000.B, which was faster and ran cooler. The 7K1000.B drives likely used 333 GB (166 GB/surface) platters and were almost embarrassingly low cost.
They were not fast, mind, far from it. They were good and cheap. This one was bought new in early 2009 (manufactured November 2008; always check the Power On Hours stat), but almost immediately developed some bad sectors. Three, if I recall right. It then kept them nice and steady for eight years, before it started to gain more bad sectors as the heads and recording surface wore out. At retirement, in September 2020, it had logged 62,642 power-on hours (over seven years), had 476 reallocated sectors, and 7 pending reallocation.
When new, it was the primary system drive for a Windows Vista machine with a dual core Opteron 165, 2 GB RAM, and a Geforce 7900 GTO. It retained this status over the Windows 7 upgrade, but then faster and better came along: A Samsung SSD 850 EVO. As it had been system drive for so long, it was nearly bottom of the storage tree when it was finally relegated to bulk data.
On retirement in September 2020 it was doing Windows 10 FileHistory duties.
Images Pending
Seagate Barracuda 7200.12 ST3250316AS SATA 250 GB (HP OEM, 2009)
I bought an HP Microserver TL with four of these fitted; they were HP's OEM version of Seagate's retail/OEM ST3250318AS. It was used lightly as an HTPC and media server, with the drives often powered off. Ideal conditions for a hard drive to live forever. Four years later, three of them had died, including this one. The 7200.12 (U12) series used 500 GB platters. This uses just one of them, and only one surface of it, with a mere 8 MB buffer. It's not going to win any performance prizes.
After Seagate had horrific problems with the Barracuda 7200.11 (U11) firmware permanently corrupting, then a firmware update causing the problem it was meant to fix, the 7200.12 series was meant to restore confidence in Seagate's products.
It failed to do so. It had WD's quite good (but terribly overpriced, perhaps WD just couldn't make them very well) Caviar Black to deal with, as well as Samsung's exceptional SpinPoint F1 (which was also usually cheaper). It was simply outclassed and turned out to be an unreliable stinker. The performance issues were thanks to poor access time, while the reliability issues were probably Seagate's cost-cutting attempts. Samsung was undercutting everyone with faster, better units. Seagate was trying to keep up with shoddy, poor quality units. It was sad to see the industry's oldest player and so very often its leader stooping so low. Of the "fast, cheap, good", WD was offering "fast, good", Hitachi bounced between "fast, good" and "cheap, good", Seagate was offering "cheap" and Samsung was offering "fast, cheap AND good".
This was going up right against the Samsung SpinPoint F3, a little below: the same 7,200 RPM spindle speed and the same SATA 3Gb/s interface, though this OEM model made do with an 8 MB buffer against the F3's 32 MB. Similar performance, then?
No! The Seagate was the slower of the two. Even though access time performance was quite good (better than the Samsung), the drive generally underperformed. It was only slight, and you'd need to be benchmarking to show it, but it was there.
The other thing is that this drive was dead as of June 2020, while the Samsung was still running well! (Although the Samsung did start to weaken and was taken out of service in September 2020.)
Images Pending
Hitachi Deskstar 5K3000 HDS5C3020ALA632 2 TB
Hitachi's 5,940 RPM mechanisms were not bought by anyone for their performance; they were cheap and good, but not fast. Performance drives were all 7,200 RPM (and have been for the last decade and a half as of 2020) or 10,000 RPM. With four platters and seven heads (one side was disabled for whatever reason), this was an awful lot of storage in quite a small place in 2012. Hitachi's reliability led me to pick these up whenever Samsung was out of stock.
It's important to realise that there were two models of 5K3000. The 1.5 TB and 2 TB versions (HDS5C3015ALA632 and HDS5C3020ALA632) were rated differently to the 3 TB version (HDS5C3030ALA630), implying the 3 TB version used a different mechanism: it was slightly heavier (10 g) and used slightly more power. To me, this indicated an extra platter.
The 5K3000 series was named "CoolSpin" because, and I needed to look up Hitachi's marketing copy on this, "CoolSpin technology delivers a greater level of power efficiency and quiet operation". Focusing on non-performance metrics for 5,400 RPM-class units was common; WD's GreenPower, for example.
This one was not bought because a Samsung was out of stock. It was bought in 2011 because it was suspiciously cheap: £55 or so. As of October 2018, it had logged 47,795 power-on hours, working out to 1,991 days, or five years and five months. It is showing no reallocations, pending sectors, or even much seek time or error rate degradation. The next year, a massive lack of supply thanks to the Thailand floods saw 2 TB hard drives pass double what I paid for this. No regrets!
Performance was, well, nobody bought these for performance. They were supertankers, not speedboats. 2 TB in 2011 was near the top of the rankings...
...5,940 RPM was not. Most reviews, such as SPCR's, considered its linear read to be around 130-140 MB/s, so we can speak a little about benchmarking methodology here. A linear read is only valuable if we know what part of the drive it came from: the outer cylinders can be (are) much faster than the inner ones. Crystal Disk Mark has just picked 1 GB of free space in a random location. If we use a tool which allows us to chart performance, we see that the outer cylinders do indeed hit that kind of speed:
HD Tune paints a more accurate picture. The zig-zag shape was almost completely reproducible between runs, even down to the precise position of peaks and troughs. Maximum was 141.6, minimum 58.5 and average 104.4, all in MB/s. Average access time was 15.5 ms with AAM off. It shipped from the factory with AAM enabled, which had access time between 18 and 19 ms (awful!), presumably part of "CoolSpin".
This drive passed 80,000 hours of power on time in late 2022 without any reported surface defects.
Hitachi Global Storage Technologies (now part of Western Digital) rated these drives for 300,000 load/unload cycles, and this drive had, at the 80,000 hour mark, done 12,804 load/unload cycles. It was also rated for 50,000 start-stop cycles at 40°C, and had done 12,752. The rating at extreme ends of temperature/humidity was 10,000 start-stop cycles. WD and HGST went out of their way to ensure the drive had no specified MTBF (mean time between failures) and simply offered a 3 year warranty.
As of mid-2024, the drive had been given daily backups for three years, being expected to fail soon, but instead passed 94,000 hours of power-on time. It still has zero reallocations, zero pending, has outlived every single drive it knew when it was first added to the system, outlived several which came later, and appears to be immortal.
Samsung SpinPoint F3 HD103SJ 1 TB
Samsung was getting out of the HDD business when the SpinPoint F3 was the new and cool thing. It was lightning fast and carried 500 GB platters - two of them, using both sides. As typical for Samsung, it was very cheap, performed very well and, well, was pretty much a no-brainer. The 32 MB buffer certainly didn't hurt. The 1 TB model, as seen here, was the first widely available HDD to test at 150 MB/s sequential read.
In "business" benchmarks, it tended to underperform. The command queuing was maybe not optimal, or the host interface didn't handle heavy loads very well. Benchmarks are very heavy loads with large I/O queues which aren't often seen in practice.
Samsung's HDD business was sold to Seagate in 2011. This left Seagate and Western Digital in the market. When WD bought Hitachi Global Storage Technologies, the desktop drive business was offloaded to Toshiba to satisfy regulators. Seagate and WD had both been promising advancements in technology such as HAMR "real soon now" since around 2012. Instead, we got SMR (which is widely disliked) and helium-filled HDDs, which have a limited lifespan due to helium leakage.
This one was manufactured in April 2011, and a check in June 2020 showed it had logged 30,970 power-on hours (1,290 days, or 3.5 years!), nothing reallocated or pending, and considered itself to be in good health.
A check in September 2020 (by which point HAMR was still not commercialised, SMR was still unpopular, and helium was still for drives with a definite expiry date) told a different story. In late August 2020, between 6 and 9 pending sectors appeared on the surface. These are sectors the drive controller has tagged as poor: they take more retries than usual, or cannot be read at all. The drive was waiting for a write to each sector so it could redirect the write and remap the sector.
To combat these, reads were run against the affected files, but hundreds of retries did not work. This did, however, tell the drive firmware to mark these sectors as hard-failed. It was time to retire the drive from active service, at 32,951 hours (three years, nine months) of power-on time and nine pending sectors.
To see if it was still any good for a kids' gaming machine, it was fully overwritten with AIDA64's linear write benchmark (this is not a secure erase). The pending sectors, which would usually be remapped on write, just plain disappeared. They didn't become reallocated sectors, which was very unusual.
As of 2023, it was still in light use as a game storage drive for a 10 year old boy's "gaming" PC, a Dell Precision T5600 with two Xeon E5-2640s, 32 GB RAM, and a Geforce GTX 680 2GB.
Dell PERC H310
PERC started life as Performance Enhanced RAID Controller, but is now "PowerEdge RAID Controller". H310 is one of the entry level controllers and can handle SATA and SAS drives. It supports all the usual RAID functions such as JBOD, RAID levels 0, 1, 5, 10 and 50, can handle 32 drives (16 per RAID volume or 32 individual) and supports pass-through operation.
Now Dell doesn't actually manufacture any hardware itself, so who actually made this controller, and what is it? Under the heatsink resides an LSI Logic MegaRAID SAS2008 controller, a very powerful storage controller by desktop or workstation standards in the early 2010s.
This guy came out of a Dell Precision T5600 workstation, where it was criminally under-utilised. It had one 10K 600 GB drive, one (of a potential two) Intel Xeon CPUs, and 16 GB registered ECC memory.
The PERC H310 comes with IR (Integrated RAID) mode firmware, and it's usually a better choice for the home NAS builder to flash it to IT (initiator-target, i.e. plain HBA) mode. Dell has this firmware on its site.
If, when placed into another system, the H310 causes boot-looping before POST, the solution is actually quite simple. For whatever reason, SMBus on some consumer boards isn't implemented very well (it's unclear exactly what the problem is; this author suspects the BIOS assumes any device in the first PCIe x16 slot is a video card), so covering pins B5 and B6 on the PCIe edge connector, the SMBus pins, fixes the issue. This may stop the option ROM from working for booting, but you want to boot from a SATA or NVMe SSD anyway.
Hitachi Travelstar HTS725032A7E630 320 GB
The TT7SAE series used 125 GB/surface platters, and was commonly sold as 120 GB, 250 GB, 320 GB and 500 GB. All of them had two platters, and all platters have two surfaces; the capacity is then limited in firmware to suit the needs of the customer: in this case, Dell. This HDD came out of a Dell Latitude E6440 manufactured in 2013. The drive itself was manufactured July 2014, so is a little newer than the laptop.
A manufacture code can be seen near the top, TT7SAE500(B), while the type code is seen near the bottom, TT7SAE320. What does this tell us? It's a 500 GB HDD with 180 GB disabled, nearly a surface and a half. If Hitachi were fair about it, the missing capacity would be usable as remapping sectors. Chances are, it's not.
Hitachi's Travelstar TT7SAE series was somewhat unremarkable. They found themselves as OS drives in mid-range laptops where they did a more or less okay job.
Seagate Savvio 10K.6 ST600MM0006 600 GB
The enterprise variant of Seagate's Cheetah series used a 6 Gbps SAS interface, two 150 GB per surface platters, four heads, and a roaring 10,000 RPM rotation rate. Power draw, a long-standing problem with 10K HDDs, was rated at 7.3 watts typical operating. A generous 64 MB cache and 5 year warranty rounded out Seagate's offering.
At this time, WD's VelociRaptor was also in the SAS space as the "Enterprise Storage" (Seagate would rebrand the Savvio 10K.7 to "Enterprise Performance v7") and the two frequently duked it out.
This guy spent most of his time doing pretty much nothing in a Dell Precision T5600 workstation, which is where I got it from. By the time I had it (around 2018-ish), the world was SSD, but I needed hardware to replace the Hattix.co.uk server with something meatier, and it was the boot drive for a few years, being retired in 2023 due to impending failure.
Seagate rated its performance to a 2.9 ms average latency, 204 MB/s outer and 125 MB/s inner transfer rates. In 2012, when this drive was introduced, that was definitely in the 10K-class of performance. How has it held up?
I usually use a HDTune test here, since it's quick and does what I need to show you. HDTune does not like SAS HDDs. Not one bit. So, I had to use AIDA64.
Ouch. What's going on? The HDD is old and the heads have become degraded somewhat. The retries cause the transfer rate to fall. If you see this kind of trace on a linear read test, the HDD is either busy at the time (bad time to run the test!) or is degraded and really needs to be replaced pronto!
That's what happened to this. The degradation became so severe that the partition table couldn't be read. A full overwrite failed to improve anything, so it was retired from service.
You don't see 10K HDDs much anymore. Seagate and Toshiba still sell them with five year warranties, but they're "new old stock", no longer in active manufacture, and rated for 266 MB/s transfer. This is because SSDs have almost completely replaced them. The 10K HDDs still held a capacity advantage at the same price, but this is slowly evaporating. A 1.8 TB 10K unit was around $300 in 2023, while the embarrassingly expensive 15K RPM drives, which only Seagate made as the Exos 15K line (the "Exos" branding is now used for 7,200 and 5,400 RPM units too), were $300 for 900 GB and seemed to be out of production. An SSD which would wipe the floor with either of those in every way was around $80. The only reason you'd now want a HDD over an SSD is longevity. HDDs don't suffer wear from writes, so for extremely write-intensive tasks a HDD will outlive an SSD by a substantial margin. SSDs are also slow at extended writes (sometimes slower than HDDs!), so these applications favour HDD use. Typically, this is surveillance and other types of DVR. Storing backups is also a task HDDs do better than SSDs.
Seagate Laptop Thin SSHD 500GB ST500LM000 (Gen 3)
The point of the SSHD, or hybrid drive, was that most-used data was cached on a small onboard SSD, while the massive capacity advantage of the HDD backed it up. They sold at a small premium, but there the SSHD's story ended. Why?
Firstly, and most importantly, if we get a miss on the 8 GB flash cache, we're wholly dependent on the HDD, Seagate's 5,400 RPM mechanism. The earlier Gen 2 model used a 7,200 RPM spinner of reasonable performance, but this Gen 3 model uses the much slower 5,400 RPM mechanism. The astute may notice that 8 GB is not remotely enough (16 or 32 GB may have been) and that 5,400 RPM is really going to hurt where it matters. For synthetic benchmarks, where test files are quite small, SSHDs showed very powerful performance. For real workloads, they suffered.
Secondly, these hybrid drives were incredibly unreliable. This one came out, as hundreds of others did, of a Fujitsu LifeBook E734. Not long after, the industry realised it was a lost cause: enough SSD to realistically improve performance put the drive into too high a price bracket, at which point one might as well buy an SSD.
For everyday performance, a 7,200 RPM drive did better across the board. Seagate's own previous generation SSHD did better.
Clockwise from top left: TI SH6966 spindle motor and voice coil manager, Samsung 8 GB MLC NAND, Intel eASIC 50415 semi-programmable ASIC, LSI Logic 869000-series HDD controller, Samsung K4T511630J-BCE7 64 MB DDR2 SDRAM.
This drive wasn't showing up on boot, so it was tested on a Dell Latitude E6440, where it again didn't show up. On a desktop, it actually did show up in the BIOS, but the HDD mechanism kept making noises like it was trying to start over and over again, most likely a head-fly or read channel issue. It never progressed far enough in its startup to actually report SMART data, so there are no clues as to what the real problem might have been.
A second example was found and turned out to be properly working. This actually went into use as a big fat slow USB 3.0 drive using a JMicron USB to SATA adapter. These adapters never supply 12 V, so desktop HDDs don't work, but 2.5" drives and SSDs don't need 12 V.
Performance was not great, as this is a laptop 5,400 RPM drive, but the flash cache really helped in benchmarks and little else! Sustained write performance was about 46 MB/s, on the low side of things for this era, while sustained read was much better, around 115 MB/s.
Would you look at that? The performance looks great! Look at that random 4K write performance! Yeah, SSHDs basically cheated in benchmarks, since the benchmark file could be cached in the flash. You wouldn't see this in the real world. The HDTune result shows us what's going on here:
You see the normal curve as we start on the outer tracks and move inward, then finally the last few GBs shoot up as they're reading from the flash cache. Ignore the CPU usage, that's inaccurate on USB3.0 connected mechanisms.
Crucial MX100 512 GB
One of the early generations of mainstream SSDs, launched in 2014, this still had many of the features a modern SSD buyer would be familiar with. There's a 512 MB DRAM buffer (LPDDR2-1066, 16 bit), a Marvell controller, and a bank of Micron flash memory.
You'd see the same PCB used on the SanDisk Ultra II and the Kingston HyperX Fury. They differed in firmware.
The controller is the Marvell 88SS9189 "Renoir", which uses four flash channels (at 200 MT/s each) and, some sources claim, two ARM9 cores, though I can't find first-party validation of that. It wouldn't be unusual, however. The memory is eight packages of Micron L95B MLC flash, stacked two packages per channel, built from 128 Gbit dies; reaching the 512 GB raw capacity needs four such dies per package. Only 480 GB of that is exposed, leaving 32 GB for spare. This spare NAND was important, since the endurance of this thing was rated at only 72 TBW.
We know the erase block size of Micron L95B: 512 pages of 16 kB each, so 8 MB. Blocks are arranged into planes of 1,048 blocks, and each die has two planes.
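That geometry is easy to sanity-check. A quick sketch using the figures above:

```python
page_kb = 16            # 16 kB pages
pages_per_block = 512   # pages per erase block
blocks_per_plane = 1048
planes_per_die = 2

block_mb = page_kb * pages_per_block / 1024
die_gbit = block_mb * blocks_per_plane * planes_per_die * 8 / 1024

print(f"erase block: {block_mb:.0f} MB")  # 8 MB
print(f"die: {die_gbit:.0f} Gbit")        # ~131 Gbit: a 128 Gbit-class die plus spare blocks
```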
The MX series succeeded the M series in around 2014 and was among the first "affordable" (this thing was over £250 when new, though it quickly dropped to around £150) SSDs able to completely saturate the SATA interface.
The bank of small capacitors visible here is not to protect against host power failure; it's to protect data already present. These capacitors allow any open pages to be closed in the event of host power failure, but any write in progress will lose its data. They're just not large enough to keep the DRAM and controller powered up, then open new NAND pages to allow the DRAM to be flushed to flash.
While serving as the OS drive of the hattix.co.uk server (physical tin since 2001!), this failed with what appears to be a VRM fault, or a short somewhere after the VRM. It will power on, but gives quite severe coil whine.
Samsung SSD 850EVO 250 GB
SSDs completely changed the storage market. While early SSDs had problems with their garbage collect and firmware (and Intel's were actually time-bombed to self-destruct after a set period!), they were still incredibly more responsive than HDDs were. Swapping a HDD to an SSD was such a game changer of performance, it probably eclipses the first dual-core CPUs in how immediately noticeable the difference was. They put small, short-stroked, high power 10K HDDs like Seagate's Cheetah and WD's VelociRaptor completely out of the market.
All flash RAM (NAND, 3D V-NAND, etc.) places storage cells in blocks, and cells are erased in bulk. Cells can be individually programmed, but cannot be individually erased, so erasure is done all at once, to the entire block; writes then set the right pattern of 1s and 0s in the erased cells. Multi-level cells and triple-level cells vary the amount of charge in a cell to give different patterns, improving storage density at the expense of endurance. Each program/erase cycle does a small amount of damage to a cell: it can't quite fit as many electrons next time. Samsung rates the 250 GB model of the 850 EVO to 75 TB written.
This all means that writing a NAND block is a three-step affair: erase, program, verify. The verify catches program failures and retries them; if they still fail, the block will be retired. All SSDs carry spare blocks. This SSD is rated for 250 GB, but carries more than that. It uses packages of eight stacked 3D V-NAND dies, and two such packages. Each die is 16 GB, so Samsung uses two packages of eight stacks of 16 GB, for a total of 256 GB. Only 250 GB is exposed, and even then that's "hard drive bytes" (1 GB = 1 billion bytes): of the 256 GB provided, about 232 GB is available to the OS. The remaining 24 GB or so is spare, available to be swapped in if a block fails.
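The decimal-versus-binary bookkeeping is worth seeing worked through once; the odd gigabyte of difference against the prose above is purely rounding:

```python
raw_gib = 2 * 8 * 16           # 2 packages x 8 dies x 16 GiB per die = 256 GiB raw
exposed_bytes = 250 * 10**9    # "250 GB" in hard-drive (decimal) bytes
exposed_gib = exposed_bytes / 2**30

print(f"visible to the OS: {exposed_gib:.0f} GiB")  # ~233 GiB
print(f"spare: {raw_gib - exposed_gib:.0f} GiB")    # ~23 GiB
```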
Another important feature for an SSD is the RAM cache. Some cacheless drives are available, and they really fall over in writing. RAM cache allows the storage controller to move things around in wear levelling and garbage collection much more quickly than having to read/buffer/write. As a write is made, the "old block" is just left, and a new block programmed. Garbage collection is the process of erasing these old blocks, as well as moving data around. During SSD activity, garbage collection rapidly moves different blocks in and out of service to keep wear levels even, which needs RAM behind it if the drive is going to maintain a sustainable write speed. The 250 GB model was supplied with 512 MB of RAM, Samsung's own LPDDR3.
In 2012, a 250 GB WD VelociRaptor could sustain around 190 MB/s transfer, and around 140 sustained random IOPS (I/O operations per second). The earliest of SSDs could sustain 250 MB/s and handle 1,000 IOPS. By 2014, SSDs were saturating the SATA-III bus at 500 MB/s and topping 50,000 IOPS: this one can hit 75,000 IOPS.
We'll use CrystalDiskMark (3.0.3, x64) to demonstrate just what even this relatively old SSD can do, against a modern fast HDD, a Toshiba P300 2TB (with AAM disabled), and a few others around the workshop.
Drive                  Seq. Read   512K Read   4K Read   4K Read QD32
Samsung 850EVO             511.0       464.5     29.58          159.1
Toshiba P300               104.2       35.07     0.417          0.841
Samsung SpinPoint F3       104.1       30.82     0.350          0.860
All units in megabytes per second.
Sequential reads are a HDD's strongest suit, and it was five times slower than the SSD. The 512K random reads, not a worst case for a HDD and quite similar to how applications and games load, was thirteen times slower on the HDD. 4K random reads, a worst-case for a HDD but commonly found when "disk thrashing", was seventy times slower: Even the very, very fastest HDDs cannot reach 2 MB/s in this test. 1 MB/s is considered to be an excellent result for a HDD in 4K random reads.
4K random reads with a 32-request queue (queue-depth 32) was 189 times slower on the HDD. Such a deep queue is a pathological case, but shows that even a HDD's command queue optimisation cannot help it dig out of this level of hole. SSDs can use multiple parallel flash RAM channels and here we see the Samsung 850EVO doing just that in the 4K QD32 test.
Marvell 88SE9215 SATA Controller
When your large video card fouls the motherboard's SATA ports, but you need them for hard drives, you add in a PCI Express card. This one is based on the Marvell 88SE9215 PCIe SATA-III controller, which is built on a 55 nm CMOS process (probably at TSMC) and has one PCIe 2.0 lane. It can be addressed via standard IDE or AHCI interfaces.
Marvell's datasheet gives information on the 88SE9215 controller, which appears to be pin-compatible with the two-port 88SE9125 and other members of the 912x and 92xx families.
The board shows a further two SATA ports on the rear. What's this about? The 88SE9215 is only capable of four ports. Take a very good look at the "RJx" component positions on the board. These, as you can see, are zero-ohm resistors, or jumpers. They direct the signal lines either to the rear, or to the top ports. This isn't wired up for eSATA (which has power) but instead just plain SATA. You can tell this as SATA carries seven lines: three grounds (one for each differential pair, and a further common ground) and two differential pairs. The signal pairs and grounds are obvious on the larger image of the PCB.
The components at positions RJ1, RJ2, RJ3, RJ6, RJ8, RJ9, RJ10 and RJ11 can be shifted over one place to redirect the signals to these ports.
At the top of the PCB, between the top facing ports and inward facing ports, are two unpopulated jumper mounts. These are both diode protected, but it's unclear what they are for.
The heatsink is only just necessary, as the 88SE9215 is rated to only 1.0 watts.
Other components on the board are a 25 MHz oscillator (position Y1), which the 88SE9215 needs (oddly, as the ASIC takes in the 100 MHz PCIe REFCLK and a 1/4 divider would be trivial), and a voltage regulator (the MOSFET and inductor bottom right of the image) which I initially thought might be generating the 3.3 V supply the 88SE9215's I/O needs, but then realised this is available from the PCIe slot. It is, therefore, probably the 1.0 V VDD for the ASIC core.
The date code on this card gives the manufacture as Week 26, 2016.
Crucial MX300 525 GB CT525MX300SSD4
Shown here mounted to a USB 3.0 to M.2 SATA adapter, as I got the option on one at the kind of price you don't decline, before I had a motherboard able to handle M.2!
Crucial's MX300 was a low end SSD, from before the bottom of the barrel had been reached, so it still has a DRAM buffer. The SSD uses Micron (Crucial is a label of Micron) triple-level-cell (TLC) 3D NAND with three dies per package to hit 576 GB raw capacity, of which 525 GB is exposed. The controller is the Marvell 88SS1074 "Dean" quad-channel controller. Only two channels are in use, with two NAND packages per channel. The 1050 GB model uses all four channels.
Write cache was done with "dynamic write acceleration", where a portion of the TLC NAND, typically out of the extra capacity, is redesignated as SLC (single level cell), which is much quicker to write. Once a complete flash block was full, the write would be transferred to the waiting TLC by the drive's firmware. This helped the limited hardware of the MX300 keep up, particularly compared to its peers. It was no Samsung 850 EVO, but it also wasn't priced like one. This is also what DRAM-less SSDs do to handle smaller writes, until they choke.
On the left of the image, a bank of SMD capacitors can be seen. Contrary to popular misbelief, they are not there to flush data if power is lost during an operation; they're not remotely large enough for that. What they do, and most SSDs have them, is provide the large instantaneous current needed to erase a flash block. Flash can only be erased on a per-block basis: cells can be read and programmed in smaller units, but only erased as part of a block, which runs to several megabytes (8 MB on the Micron NAND above).
If power is lost, an SSD will make sure any open flash page is closed as quickly as it can, otherwise the entire page could be lost. Most controllers can keep dozens, sometimes hundreds, of pages open at any one time, but the SSD will only have enough capacitor to close a few.
An "Open" flash page is one tagged as "dirty", that its contents have been changed and the controller is awaiting sufficient writes to form a full block write. "Closing" the page marks it as invalid, and writes a full block in pre-erased flash somewhere else.
They were available in either 2.5" or M.2 form factors and came just a few months before DRAM-less ultra-low-end SSDs appeared. Most SSDs would easily saturate the SATA 6 Gbps link (600 MB/s tops, but 550 in real life), but a few wouldn't, and the MX300 525 GB was one of them. Poorly threaded tasks, those with large sequential reads, would normally top out around 460 MB/s. Throw some threads at the problem, build up a disk queue, and the MX300 would hit its rated 530 MB/s read and 510 MB/s write all day. Finally, the MX300 in 525 GB capacity was rated for 160 TBW endurance.
The CDM results above tell you some sort of write buffering is being used on the device. Random 4K reads at queue depth 1 (bottom row) will stink, no way around it, as every request must go to the flash and the interface adds overhead. Random 4K writes can be optimised (the drive can buffer them, accumulate a block, then commit it), and this is indeed the case, so writes come out much higher than reads.
Look more closely at the SSD, specifically its M.2 keying. There's obviously the M-key notch at the bottom, but there's also a B-key notch at the top. The slot also has a B-key position. A B-key slot can offer PCIe x2, SATA, USB 2.0, USB 3.0, audio, and I2C. This one doesn't; it just offers SATA. The SSD itself is keyed to fit in a B-key or an M-key slot: this is how you can tell it is a SATA, not NVMe, drive. An NVMe-only SSD will not have a B-key notch, so won't physically fit in an adapter like this, where only SATA is available. The different keyings of slots are carefully considered so that the slot can deny access to cards needing interfaces it doesn't have. We've started, so we might as well go into what the different keys offer in the slot. Or can. No slot is guaranteed to have all of them.
A: A key means you get 2x PCIe lanes, USB 2.0, I2C, and DisplayPort. Typically used for WiFi adapters.
B: B key has PCIe x2, SATA, USB 2.0, USB 3.0, Audio, I2C. Usually used for internal USB devices, but WiFi cards often have B keys too.
E: E key gives two PCIe lanes, USB 2.0, I2C, SDIO, UART, and PCM audio. Not often seen, in favour of A key.
M: M key offers four PCIe lanes, SATA, and SMBus. NVMe SSDs use M key, though most of them would be quite happy with just two or even one PCIe lane at the attendant cost of performance.
Slots should never have multiple keys, but devices do.
A + E: Two PCIe lanes, USB2.0 and I2C. Typically used for WiFi or BlueTooth adapters, where bandwidth isn't that important. It basically allows an E-key device to fit in an A-key slot, since A-key offers a superset of E-key for most use cases.
B + M: As seen here, two PCIe lanes and SATA. Typically used by SATA SSDs, though two PCIe lanes may be available.
Toshiba P300 2TB HDWD120
See below for details on the Toshiba P300 series as a whole. This was the first Toshiba mechanism to come into the lab and was a direct continuation of Hitachi's drive lineup.
Performance was, similar to the 3 TB model, uninteresting. HD Tune painted a maximum of 198.6, minimum of 95.5 and an average of 156.2 MB/s, with an average read access of 12.6 ms. This is entirely within the realms of expectation for a 7,200 RPM hard disk drive.
Toshiba P300 3TB HDWD130-UZSVA
Sometimes you want a data silo which is 7,200 RPM so won't embarrass itself if something has to be installed on it, but also you don't want to pay the Western Digital tax or take a risk with Seagate.
Throughout the 2000s, your good money went to Samsung or Hitachi. Then Samsung exited the market and Western Digital bought Hitachi's HDD division, Hitachi Global Storage Technologies (GST). Now, a condition of WD's cannibalism of Hitachi was that the 2.5" production be separated from the 3.5" production, and WD wasn't allowed to have the 3.5" lines. Hitachi's 3.5" drive lines were sold to Toshiba as a result.
Crystal Disk Mark 8.0.4 painted a fairly ordinary picture.
Performance was reasonable for its era, capacity and spindle speed. Maximum 205.8, minimum 141.6, average 176.8 MB/s, with an access time of 11.5 ms. By this point, the HDD market was a lot less competitive. It had undergone a lot of consolidation and there were very few vendors, each with little to differentiate them. An uncompetitive market has two major characteristics:
1. Price is usually a little higher than it would otherwise be
2. Artificial segmentation and differentiation is endemic
The first point is obvious. The second is less so, and it's a result of innovation stagnating. Seagate will sell you a "Barracuda Compute". Western Digital has drives aimed specifically at NAS, online, nearline, surveillance, etc. None of that really means anything. A few firmware settings might be a little different (e.g. a NAS drive will give up on a bad read quickly, with a short error recovery timeout, TLER, as it assumes it has redundancy). They're still, at heart, the same 5,400 and 7,200 RPM mechanisms that have been peddled since the early 2000s. Sure, stuff like SMR (nobody wants this) and helium filling (this doesn't seem to offer much either) has been tried, while HAMR has been around the corner "real soon now" for the last 10 years. Without any innovation, manufacturers fake it. Once they've faked it long enough, they start to divorce the real specification of drives from their models.
The P300 series was initially based around Hitachi's 2.4 terabit (300 GB) platters and featured both conventional magnetic recording (CMR) and the lower performance, higher density, shingled magnetic recording (SMR). Additionally, both 5,400 RPM and 7,200 RPM units were available. Some had 128 MB buffer, some had 64 MB... wow, Toshiba! You must be new!
Toshiba had an entire range of HDDs under the same "P300" name. "P300" wasn't a model of HDD, it was a meaningless brand.
The 2TB and 4TB models overlapped 64/128 MB buffers and SMR/CMR. The 2TB model additionally overlapped 5,400/7,200 RPM. They were a minefield. The 3 TB model, however, was only ever CMR, only ever 64 MB, only ever 7,200 RPM. It was a much safer purchase. You weren't risking the poor write performance of SMR, you weren't risking the poor everything of 5,400 RPM, and you paid for it with a 64 MB buffer, which you almost certainly didn't care about.
Decoding the model name of the HDDs was simple once you knew how (a toy decoder follows the list below). This is the HDWD130UZSVA. We split that into:
HDWD - Production code
1 - CMR or 2 for SMR
3 - Capacity in TB
0 - Always zero
UZSVA - Bulk channel or EZSTA for retail channel
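As promised, the decode can be scripted; a sketch (the helper name is mine, and it only knows the fields listed above):

```python
def decode_p300(model: str) -> dict:
    """Decode a Toshiba P300 model number, e.g. 'HDWD130UZSVA'."""
    return {
        'production_code': model[:4],                      # HDWD
        'recording': 'CMR' if model[4] == '1' else 'SMR',  # 1 = CMR, 2 = SMR
        'capacity_tb': int(model[5]),                      # the next digit is always 0
        'channel': 'retail' if model[7:] == 'EZSTA' else 'bulk',
    }

print(decode_p300('HDWD130UZSVA'))
# {'production_code': 'HDWD', 'recording': 'CMR', 'capacity_tb': 3, 'channel': 'bulk'}
```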
This drive was brought in to replace three storage drives: a Hitachi 750 GB 3.5" which had a small number of bad sectors cleanly remapped, a Toshiba 500 GB 2.5" which had a large swathe of bad sectors cleanly remapped but had developed another cluster of weak LBAs, and a Samsung SpinPoint F3 1 TB which was doing games storage and had started to develop bad sectors which weren't cleanly remapping. All of them were well beyond five years of power-on time. The oldest, the Hitachi, was showing 64,642 power-on hours, nearly seven and a half years.
PENDING
A-Data XPG SX8200 Pro 512 GB
This NVMe device is one of the very many using the Silicon Motion SM2262ENG controller. A-Data has paired that with Micron 64-layer 3D TLC NAND.
There were many hardware revisions of the SX8200 Pro, with different controllers (basically speed variants of the SM2262, between 575 MHz and 650 MHz) and different NAND, which varied from 525 MT/s to 800 MT/s. Some variants also shared a name with the XPG Gammix S11 Pro.
A-Data did provide a "heatsink" with this, which was a very thin piece of aluminium with a fancy logo on it. Most reasonable motherboards have SSD heatsinks anyway. SSD cooling is one of those weird things: you can certainly overheat the controller and cause it to throttle, which will reduce performance (though it is not easy to do), but NAND likes to run warm. At 25 C, NAND will wear out twice as fast as it will at 40 C. At high temperatures, small "charge traps" in the tunnel oxide of the NAND floating gate are dissipated. These charge traps are shallow or deep. Deep ones are irreversible, but shallow ones can be reversed at high temperatures, and they don't form as easily at higher temperatures either. This means that not only does NAND run better warmer, it also has an extended lifespan.
This might explain why A-Data's "heatsink" is so feeble. Those who have "proper" M.2 SSD heatsinks (including those supplied with motherboards) are advised to insulate the NAND from them and ensure they contact only the controller (insulating tape or electrician's tape will do this) or to not fit them at all.
The SX8200 Pro is rated by A-Data to 3,500 MB/s read and 2,300 MB/s write over its PCIe 3.0 x4 interface (about 4,000 MB/s of raw bandwidth). Endurance is rated at 320 TBW (terabytes written) from TLC NAND, which is better than QLC but not as good as SLC or MLC, and is good for a 500 GB unit. As far as 500 GB-class SSDs went in 2020, you could do faster and better, but prices went very high very quickly. Going cheaper would skimp on essential features like SLC caching, DRAM buffers, and a decent controller. You can skimp here for a storage drive (e.g. the Crucial P3 below), but it makes no sense for a system drive.
DRAM buffers are really important, but only on drives doing a lot of writing, like OS drives.
The real results exceeded the rating a little; this is normal. The key result here, and where a good SSD will pull away from a cheap one, is the Q32T16 test. This hits the controller heavily and allows it to leverage its smart firmware, DRAM buffer, and multiple NAND channels... if they're there. For a well-used drive from the 2019-2020 period, these results are excellent.
Why, then, if this is a good controller with a DRAM buffer, does the DRAM-less P3 below beat it? Especially in high-queue multithreaded writes, where it's supposed to do worse? Simply put, the P3 is three years newer and the SX8200 Pro has an OS installed on it. The P3's result was taken just after first copying its data over from the old mechanical HDD, so the drive was still fresh.
Why does your SSD show really low write performance when you run the test to compare? It needs a TRIM. When the OS deletes a file, the SSD's blocks are not made free until the OS specifically tells the SSD to clear them: the SSD can't understand the filesystem, so it has no idea those files have been deleted. Indeed, the SSD has no concept of what a file even is. Writes can therefore land on blocks still holding long-deleted data, which must be erased first. A "TRIM" operation clears out that data in advance. Windows 7 and upward will automatically TRIM the SSD every so often, typically once a week.
Crucial P3 2 TB
The Crucial P3 was not an SSD to boot a system from, other than maybe a lightweight laptop. To understand why, we need a little look into how SSDs work.
NAND storage, "flash RAM", is organised into "erase-blocks". An erase must be done on an entire block at once, and each block can be very large, 1 MB to 8 MB and rising. Erasing a block is done by charging it, so each cell becomes logic level high. That can be a 1 or a 0, or a 11/00, or a 111/000, depending on how many levels each cell is using. This P3, for example, uses QLC flash, so each cell stores four bits in sixteen charge levels.
To write to any cell, the entire erase block has to be erased first, so it makes sense to buffer writes until enough data is present to fill a whole block. This is what a "DRAM buffer" or "DRAM cache" does, and it stops the phenomenon of write amplification taking hold as much.
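As a toy illustration of why that buffering matters, consider the worst case where every 4 kB host write forces a full 1 MB block to be rewritten (sizes from the discussion above; real firmware is far smarter than this):

ERASE_BLOCK = 1024 * 1024   # 1 MB erase block
HOST_WRITE = 4 * 1024       # 4 kB host write
writes = 256                # 256 x 4 kB = exactly one block's worth of host data

# Worst case, unbuffered: each small write rewrites the whole block.
unbuffered_nand_traffic = writes * ERASE_BLOCK
# Buffered: gather all 256 writes, then program the block once.
buffered_nand_traffic = ERASE_BLOCK

amplification = unbuffered_nand_traffic / (writes * HOST_WRITE)
print(f"Worst-case write amplification: {amplification:.0f}x")   # 256x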
Erasing NAND is also very slow, usually 10-20 ms, comparable to the seek time of a HDD! If the SSD runs out of pre-erased storage and has to erase on demand, write speeds drop precipitously, down below 50 MB/s.
The P3 has no DRAM, so it uses a buffer area of the actual storage to take writes. Once this has been used up (say, during a game install, or any other large write operation), the drive has no option but to slow down while the buffer is copied out to main storage.
In regular operation, a DRAMless SSD will skip the "buffered write" phase, as it has no buffer, and go straight to "data folding", where a write is absorbed by pSLC cache NAND and that cache is then copied to permanent storage. Because several channels can be open at any one time, more than one pSLC block can be folding at once. It is still slow, however: data folding can usually keep up with write speeds of around 300-400 MB/s.
This shows in extended write performance, which is very poor. While copying large amounts of data to the P3 (from another SSD), write performance tanks to below 50 MB/s. When the quantity of data doesn't exceed the SLC cache size, writes are reasonably quick.
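A rough model of such a copy, assuming illustrative figures rather than Crucial's specification: a 100 GB pSLC cache absorbing writes at 1,000 MB/s, and 50 MB/s once the cache is exhausted.

def copy_time_minutes(total_gb, cache_gb=100, fast_mbs=1000, slow_mbs=50):
    # Time spent in the cache, then time at the post-cache crawl.
    fast = min(total_gb, cache_gb) * 1024 / fast_mbs
    slow = max(total_gb - cache_gb, 0) * 1024 / slow_mbs
    return (fast + slow) / 60

for size_gb in (50, 100, 300):
    print(f"{size_gb} GB copy: ~{copy_time_minutes(size_gb):.0f} min")
# 50 GB stays in cache (~1 min); 300 GB falls off the cliff (~70 min).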
However, the P3 reads at full speed, since the cache or buffer doesn't affect reads. This makes a DRAMless SSD ideal for storing games, which don't often write to their install location but need decent read performance.
Crucial has to use Micron NAND, as Crucial is a trading name of Micron Technology, but freely chooses which controller to use. In this case, it's the Phison E21T, a PCIe 4.0 four-lane, four-channel, 1,600 MT/s controller. It uses a single ARM Cortex-R5 (ARMv7-R) 32-bit microcontroller with two "proprietary" accelerators; one runs at around 900 MHz, the other at 500 MHz, and they do different things. Cortex-R is a version of Cortex-A intended for critical applications and deterministic operation, known as "realtime" in embedded systems. When first made available in 2011, it was a version of the Cortex-A5 (ARMv7-A) without a memory management unit, since realtime applications usually considered virtual memory to be an annoyance.
The single Cortex-R5 is extremely weak by MPU standards, but it doesn't need to be that meaty. It's there mostly to move data around: it isn't doing anything complex, and the payload data doesn't even enter its memory map. Having a single processor as the manager of a number of specialised accelerators is a very old-fashioned way of running a storage device; most HDDs moved from that model to "a fast enough processor does it all in software" in the early 1990s. Embedded controller manufacturers are usually quite secretive about their designs, and it would not surprise me to find out the "specialised accelerators" are just more Cortex-R5 cores running in their own memory space. This is very common for accelerators!
The P3 disables PCIe 4.0 in firmware, while the P3 Plus enables it. The 2 TB model shown here uses Micron NY161/N48R 176-layer QLC. The hardware between P3 and P3 Plus is identical.
Endurance is 440 TBW, which is very low for a 2 TB SSD, but average for QLC flash: QLC cells are good for fewer than 100 erase-program cycles.
Check the keying again: this is M-key only. It would not fit into the same adapter the MX300 was using; it requires an M-key slot and only an M-key slot. This is how the drive can ensure it is in an NVMe-capable M.2 slot.
Note the good performance. Why are we seeing excellent performance from a DRAMless SSD with a budget controller and QLC NAND?
The CrystalDiskMark test file fits within the P3's pSLC cache. This means the P3 isn't being overly stressed and doesn't expose its weaknesses. That fits most of the real-world use such a drive will see: short of installing a new game (or updating one), most writes are not large enough to exhaust the cache.
Like many SSDs, the same PCB, controller, and NAND are used under many other names, even by other vendors. The P3 is the same device as the Corsair MP600 Core XT, the Kingston NV2, the Crucial P3 Plus, the MSI Spatium M461, and the Silicon Power UD90.
What a Hard Disk Is
Of the core system components, motherboard, CPU, RAM, GPU, and hard drive, the hard drive stands out. Even as an SSD, it stands out. If it's a bad model or a slow unit, it'll drag down the entire system. It'll run slow, feel slow, load slow, operate slow. Everything else can be cut back for a particular use case: An office machine will never need a decent CPU or much of a GPU at all. RAM can be slower, if there's enough of it. The motherboard is more or less generic these days anyway. Cut back on the hard drive and everything a machine does will be slower, because the hard drive is what system architects call "primary store". It's where everything ultimately comes from.
All HDDs approach the problem of "fast, cheap, good, pick two" (project managers will recognise this as an output of their Iron Triangle) differently. Rarely, very rarely, we get all three. Usually, just two, but all too often, just one. Sometimes, none. IBM's infamous 75GXP, for example, was fast, but neither cheap nor good. Seagate tended to aim at "cheap and fast". Western Digital (WD) went for "good and fast". Samsung went after "good and cheap". Hitachi chased "fast and good" like WD did. Maxtor, before folding into Seagate, went for "cheap and good". Quantum chased "cheap and fast". Conner was "good and cheap". IBM, like its successor Hitachi, went for "fast and good".
Areal Density
Over time, all HDDs become faster. As HDDs inexorably become larger, the sectors are smaller and closer together, so even if the same linear distance of platter spins beneath the head in a given time (same rotation rate, such as 7,200 RPM), that same distance contains more data. We call this areal density. Its impact on performance is comparatively minor, because a gain in areal density is split between more bits per track and more tracks per inch: only the bits-per-track half speeds up transfers, so sequential throughput grows roughly as the square root of the density gain. It shows most when dealing with linear accesses to very large files, those measured in megabytes or more.
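A quick sketch of that square-root relationship, with an assumed 150 MB/s baseline rather than any particular drive's datasheet:

from math import sqrt

base_mbs = 150   # assumed 7,200 RPM drive, outer tracks
for density_multiple in (1, 2, 4):
    # Linear density (bits passing under the head) grows as the square root,
    # because the other half of the gain goes into tracks per inch.
    print(f"{density_multiple}x areal density -> ~{base_mbs * sqrt(density_multiple):.0f} MB/s")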
Access Time
The key to all HDD performance is access time. We can predict it using head sweep parameters, or we can just measure it using random read access times. It is made up of two distinct measures, both of which are averages.
Seek Time
The first component of access time is the seek time. This is how long it takes the head to get from where it is to where it needs to be. If that's just the next track, seek time can be very low, around 1 ms. If it's all the way across the surface (a full seek), it can be very high, 10-20 ms. Typically, average seek time is half the full-seek figure, but not always. Some drives don't do exact seeking: they may overshoot or undershoot, read the track ID to work out where they are, then move the shorter distance to the desired track. These drives can actually be faster, because they don't need a precisely controlled seek; they can just fling the head as fast as possible. Some keep careful calibrations of precisely the right voice coil power to move the head to precisely the right position. Some have built-in temperature sensors to add to this calibration. Most drives do a mixture of all of these. Access time is one of the holy grails of HDD performance.
Rotational Latency
This is always given as the average, which is half a turn. When the head arrives at the right track, chances are that the sector it needs is quite a distance away, so the head has to wait for the sector to arrive under it. It's predictable and related to the spindle speed: divide spindle speed by 30 to get half-rotations per second, then invert (1/x) to get the time per half-rotation, in seconds.
3,600 RPM = 8.33 ms
4,200 RPM = 7.14 ms
4,800 RPM = 6.25 ms
5,400 RPM = 5.56 ms
7,200 RPM = 4.17 ms
10,000 RPM = 3.00 ms
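The table is easy to reproduce; a couple of lines of Python confirm the figures:

def rotational_latency_ms(rpm):
    # Half a revolution: RPM/60 revs per second, doubled, inverted = 30/RPM seconds.
    return 30_000 / rpm

for rpm in (3600, 4200, 4800, 5400, 7200, 10_000):
    print(f"{rpm:>6} RPM = {rotational_latency_ms(rpm):.2f} ms")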
Back To Access Time
Average seek plus rotational latency gives access time. A fast 7,200 RPM drive with a 10 ms full seek will have a rotational latency of 4.17 ms and a seek time of around 5 ms, so a total access time of around 9 ms. This would be extremely fast, but we've seen drives achieve it in real life.
Sometimes, a drive vendor will quote "access time" figures which are absurd, like Seagate's ST1800MM0128 10K SAS Enterprise drive, which quotes 2.9 ms access. This is below the 3.0 ms of rotational latency at 10,000 RPM! The figure being quoted there is probably the average seek performance, usually measured as a half-seek. Real average seek time would therefore be a little more than the 2.9 ms (call it 3.1 ms) and, with the rotational latency added on, 6.1 ms: a much more reasonable figure.
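In code, using the figures from the text (and assuming, as above, that Seagate's 2.9 ms is really a half-stroke seek that works out nearer 3.1 ms):

def access_time_ms(avg_seek_ms, rpm):
    return avg_seek_ms + 30_000 / rpm   # average seek + rotational latency

print(f"Fast 7,200 RPM drive: {access_time_ms(5.0, 7200):.1f} ms")    # ~9.2 ms
print(f"ST1800MM0128 (10K):   {access_time_ms(3.1, 10_000):.1f} ms")  # 6.1 ms, not 2.9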
10,000 RPM Units
These have been around since the late 1990s, starting with Seagate's Cheetah 10K drives. Western Digital got in on the party with the small Raptor (later VelociRaptor) drives. The higher spindle speed reduces rotational latency but, due to the air resistance of the rapidly spinning platters, these drives run very hot. They also require very well balanced platters to minimise vibration, and sturdy heads to withstand the generated airflow and temperatures.
Complications!
Some drives have "Automatic Acoustic Management", or AAM. This splits seeks and slows the maximum head traversal rate to minimise the clicking or buzzing noise the head makes while moving. It also dramatically reduces seek performance: a Hitachi 1 TB unit sitting around here averages a quite poor 15.3 ms random read access time with AAM enabled, but a very good 12.4 ms with it disabled. It's best to disable AAM if performance is something one desires.
SSDs
The access time of an SSD is so small it might as well be zero. Even a very slow, poorly implemented SSD has an access time well below 1 ms. Their access times, being so low, expose overheads which other drives can safely ignore, such as command processing time, controller latency, and filesystem overhead. These are all measured in microseconds, a thousand times smaller than a millisecond, but an SSD's access time is also measured in microseconds, so the combined overheads can be a significant part of the overall access latency.
SSDs started on the SATA interface and AHCI command set, both of which were developed for hard drives and are not optimal for SSDs. SSDs work best with large command queues and high parallelism: AHCI offers only a single 32-command queue and no parallelism at all. High-end, and even some mainstream, systems as of 2018 use the M.2 interface and NVMe command set. NVMe minimises command processing time and controller latency, while offering block modes suited to NAND-based storage. NVMe offers almost zero-cost parallelism and 65,535 command queues, each of which can be 65,535 commands deep.
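The queueing gap is stark when written out; the figures below are just the limits quoted above, multiplied out:

ahci_in_flight = 1 * 32              # one queue, 32 commands
nvme_in_flight = 65_535 * 65_535     # 65,535 queues, each 65,535 deep
print(f"AHCI: {ahci_in_flight} commands in flight, maximum")
print(f"NVMe: {nvme_in_flight:,} commands in flight, maximum")  # 4,294,836,225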
Company Profiles
IBM 1957-2002
IBM scarcely needs any introduction. IBM invented the hardfile, the immediate ancestor of the hard disk drive, in 1957. In 1961, IBM invented the air bearing, still used today. In 1965, the voice coil actuator. In 1974, the "swinging arm" actuator, still in use today. In 1979, the thin-film head. In 1990, PRML recording. In 1991, magnetoresistive heads. In 1994, laser-textured landing zones. In 1997, giant magnetoresistive heads.
In the 1990s, its Deskstar and Travelstar drives were the gold standard but, after a string of bad releases culminating in the Deskstar GXP debacle, IBM exited the business during the dot-com bust, selling its storage business, IBM Global Storage Technologies, to Hitachi, where it became Hitachi Global Storage Technologies. Hitachi had no ambition to be a market leader and innovation dried up.
Fujitsu 1963-2009
Japanese conglomerate Fujitsu made its first storage device in 1963 and was a common brand in the West during the 1980s and 1990s. It was usually a little slower than the pack, but sometimes had a decent mechanism. It sold its HDD factories to Toshiba in 2009.
Control Data Corporation 1965-1989
CDC was a minicomputer manufacturer and cared nothing for consumer or business desktop-grade HDDs. Like DEC, it slid into obscurity, its storage-related technologies falling under Seagate in 1989.
Memorex 1966-1971
Memorex was first to develop a non-IBM but IBM-compatible disk pack in 1966, then a "plug compatible" disk drive in 1968.
This all woke people up to the very concept of a computer storage market; until then, storage was just part of the larger computer system, supplied with it as an integral part.
Memorex did little else, betting on removable media, and was soon one of the world's leading manufacturers of magnetic media, be it disk or tape. It exited the hard drive-related storage market in the early 1970s.
Digital Equipment Corporation 1967-1994
DEC cared about storage only insofar as it was required for its PDP and VAX series minicomputers. DEC's storage-related patents and material were acquired by Quantum in 1994.
Tandon 1982-1988
Tandon made magnetic heads for floppy disk drives in the 1970s and 1980s, then moved into actual hard disk drives in the early 1980s, releasing its first MFM drive in 1982.
At the time, Western Digital was a manufacturer of HDD controller boards (an ISA board you attached a HDD to) and wanted to sell "hard cards": an ISA board with the HDD pre-mounted. Western Digital contracted production to Tandon, then bought the company in 1988.
NEC
In the 1980s, an NEC mechanism, all the way from Japan, was very much a premium product. They were fast, well made, and reliable, but oh so spendy.
NEC did little outside Japan and it was unusual to see an NEC mechanism.
Samsung 1988-2011
The giant Korean chaebol megacorporation, making everything from armoured vehicles to heavy shipping to televisions to smartphones. It entered the storage business in 1988 and focused on the Asian and European markets. It was very unusual to see a Samsung mechanism in, say, North America or Australia.
In the early 1990s, a Samsung HDD went to the uneducated or the desperate. They were terrible. The drives were around as fast as everyone else's, around as large, and around the same price, but so very unreliable. Samsung drives just couldn't handle operating shock, even the mildest knocks, without a head crash: terminal for any HDD.
Samsung worked out the problems, after burning a lot of people, by the mid-1990s.
There was an all-too-brief golden age in the late 2000s, where drives like the Spinpoint F-series didn't just catch up with the "big two" of Seagate and Western Digital, but often beat them.
Samsung exited the hard disk business in 2011 to focus on NAND flash based storage and is now a world-leader of NAND design and general semiconductor fabrication.
SanDisk 1988-2016
Incorporated in 1988 as SunDisk, its co-founder Eli Harari had invented the floating-gate EEPROM, a forerunner of NAND flash, at Hughes Microelectronics (a subsidiary of Hughes Aircraft Company). This proved semiconductor non-volatile storage was possible and able to be commercialised. Another co-founder, Sanjay Mehrotra, invented the multi-level cell.
By 1991, SanDisk had produced the first solid state drive (SSD), a 20 MB, $1,000 device. It entered a partnership with Toshiba in 2000, producing flash memory media, primarily CompactFlash and then SD cards.
SanDisk read the direction of the market in 2006, when it bought M-Systems, inventor of the first USB flash drive, the "DiskOnKey", a successor to its less successful JEDEC-based DiskOnChip "NVRAM" of 1995. The DiskOnKey went up to 32 MB.
Western Digital, flush with cash but seeing its core HDD market shrinking, bought SanDisk in 2016 for US$19 billion. Rumours persist about Western Digital spinning SanDisk back out as its own entity, as the value to shareholders of Western Digital and SanDisk separately may be greater than that of the two as a single entity.
LaCie 1987-2014
LaCie as we know it emerged in 1993, when the original LaCie, which had got into the HDD business, was bought by Paris-based Électronique D2, which retained the LaCie name. Both companies had specialised in SCSI hard disk drives and were Apple's main suppliers of HDDs.
LaCie was one of the first vendors to produce external hard drives: SCSI was primarily an external bus, but HDDs were usually mounted internally. It continued to focus on product differentiation, employing industrial designers and making attractive devices which, while middle of the road on performance, looked good on a designer's desk. This enabled low-cost devices to sell at high prices. A Mac-using designer paid more for a LaCie because it looked good.
Seagate bought LaCie in 2014.
Seagate 1978-
The story of Seagate's origin from Shugart Technology is one of the Silicon Valley legends, and has been told many times better than this site would be able to.
Seagate made the first 5.25" bay-mountable HDD in 1980, the ST-506. Seagate appeared as shocked as anyone else when IBM selected it for the IBM PC in the early 1980s.
Through the 1980s, Seagate was as happy as anyone else just following IBM, but in 1992 it saw opportunity in the emerging workstation market and released the first 7,200 RPM HDD, the Barracuda, a SCSI mechanism. Seagate produced some 7,200 RPM IDE drives, the Barracuda mechanism with an IDE controller attached, as the Medalist 7200 in 1998, then completely exited the performance IDE market only to release the Barracuda ATA a year later, in tiny quantities, as a halo product.
This was Seagate's modus operandi. Whenever IBM hadn't done something new and awesome to leap ahead, Seagate was the market leader... So long as you bought the right model.
Seagate continued to push RPM limits: the 10,000 RPM Cheetah in 1996, then the 15,000 RPM Cheetah X15 in 2000. Cheetah 2.5" mechanisms continued in production well into the 2010s; this site was served off one from 2019 until a server rebuild in 2024, prompted by that Savvio 10K SAS HDD failing.
The first SATA drives were Seagate's, in 2002, as was the first tunnel magnetoresistive (TMR) head sensor in 2005, and the unloved shingled magnetic recording in 2013.
By the 2020s, Seagate had the undesirable reputation of being the least reliable HDD manufacturer, with AFRs (annualised failure rates) above 2% across most of its line, and some of its 12 TB models passing well over 5% AFR. HGST, WDC, and Toshiba all clustered below 1%, and usually below 0.5%.
This wasn't unusual, or even all that bad. If your storage is fault-tolerant, then you may select drives on metrics other than reliability, such as cost. Seagate was undercutting the rest of the market, so cost per terabyte was in its favour. Seagate was also very widely available from any retailer, which HGST and Toshiba could not boast.
Western Digital 1988-
WD began in the HDD business when its business strategists saw which way the wind was blowing. Western Digital had been in the market much longer, but designing hard disk drive controller cards. These were ISA cards which would implement a low-level bus (e.g. the ST412 bus).
The bus was derived from the floppy disk drive controller bus, via the Shugart SA1000, to become the ST412. It provided raw signals to a HDD: how to move its stepper motor, and the waveforms, converted from digital data, to record to the media. When reading, it would use the "cylinder, head, sector" data to position the head and decode the waveform back to the original data, piping that back to the host. The controller card did all of this and interfaced the drive to the system bus. IBM sourced the cards for the PC/XT from Xebec and for the PC/AT from Western Digital. The ST412 bus had two connectors, control and data, and could, in theory, run four drives on the same bus.
Well, Western Digital knew that the new ATA (AT Attachment) interface was for controllerless drives. These had the controlling hardware mounted on the drive itself, moving the ST412 interface actually onto the drive, and used a new interface, ATA, to connect to the system. ATA was very similar to ISA, so nobody was going to make a business of selling ATA adapters (they were not controllers).
WD was soon to be a business without a product! If those guys at Conner were right, future HDDs wouldn't even have controlling hardware; it'd all be done in software on microcontrollers. But WD was already in partnership with Tandon Corporation: WD provided the controller for Tandon's DiskCard and Business Card, while Tandon provided the mechanisms for WD's FileCard. WD then bought Tandon and became a HDD manufacturer in its own right.
This didn't change WD's business model as much as one might think, however. It still relied heavily on third-party suppliers: it now manufactured HDDs, but the controlling components had been assembled and specified by WD, not manufactured by WD. Until the late 1990s, a WD mechanism was a mish-mash of third-party components, meaning WD's drives always lagged behind Seagate and IBM, both of whom had in-house R&D.
WD progressed in fits and starts through the 1990s as its various suppliers produced new and better motor controllers, read channels, microcontrollers, and so on. Patents had to expire or be licensed before WD's suppliers could use them. WD didn't have magnetoresistive heads until 1998, and ran a weird 5,200 RPM spindle speed until the Caviar 2.1 series hit 5,400 RPM in 1998. WD slotted between Seagate and Quantum: Seagate's drives were similar performance but cheaper, Quantum's were similar price but faster. I usually preferred the Quantum Fireball EX for performance-oriented builds or, a little later, the Samsung Spinpoint V4300 where cost was a concern.
In the late 1990s, WD fell so far behind that it contracted design and production out to IBM: the IBM Deskstar 22GXP was also known as the Western Digital Expert! They were so close that it took a well-trained eye to tell the difference without looking at the label. It was an exceptional drive, however.
WD came into its own in the 2000s, when a WD Caviar Black 7,200 RPM drive was always going to be a good choice, and a WD Caviar Green Power was usually a reliable high-capacity storage or cost-conscious mechanism. WD bought Hitachi's 2.5" operations in 2011 (the 3.5" desktop drives went to Toshiba) and converted its own 2.5" platter facilities to making 3.5" units.
Quantum 1980-2001
Quantum wasn't known for any great firsts, nor any fantastic innovations. It made hard disk drives, it made good ones, bad ones, average ones, cheap ones, expensive ones. Quantum was, like Maxtor, Western Digital, and Seagate, a "full stack" manufacturer. It did everything from slow cheap things to expensive high performance units. Conner, for example, was not full stack, it focused on consumer and home PCs, where performance and reliability were less of a concern than price was.
In 1983, Quantum spun off Plus to develop "hard cards", where the HDD mechanism and its controller were mounted together on a single ISA card. It would all be pre-configured; the user didn't have to mess about with interleaving, RLL, or MFM. Purists hated them, as with a few tweaks to encoding or interleaving a tired old HDD could have its performance dramatically improved. The industry, however, saw this was the future, and Quantum had a big part in designing ATA. Plus was re-absorbed in 1992.
By the late 1990s, Quantum was not doing well. The Bigfoot failed to set the world on fire, as it was quickly abused and used in ways it wasn't intended to be: it was meant to be cheap, giant, slow storage, a secondary HDD in a system booting off a much faster one. It was meant for network attached storage and SAN arrays. It was meant for file servers. It was never meant to be anyone's boot drive!
The performance-oriented Fireball series was falling behind competitors (it was very late to fluid bearing motors) while IBM was seemingly innovating something new and awesome every year. Quantum sold its HDD division to Maxtor in 2001. Maxtor continued some of the Fireball series (as Maxtor-brand drives) but had really bought Quantum for the technology and the elimination of competition: Quantum's facilities and staff were shut down by 2004.
Conner 1986-1996
Conner was founded in 1985 by Seagate co-founder Finis Conner, then merged with CoData, also founded in 1985 by two MiniScribe founders, and produced its first HDDs in 1986.
Conner's mechanisms were Seagate-influenced in manufacture, made of precision-cast iron and aluminium, and MiniScribe-influenced in design, based on a Motorola 68HC11 microcontroller running a custom real-time OS which handled the bus interface and the track seeking and following, meaning dedicated hardware wasn't needed. This meant the first few Conner HDDs weren't quite trusted. All the hardware was missing!
Conner also pioneered the drive power-on self-test (POST), where a drive could be jumpered in the factory to test itself, needing only power, not any sort of host interface.
Conner grew so fast that, by 1990, it had set a four year sales record for any US company. It was the fastest growing manufacturing start-up in US history.
In the early 1990s, Conner got stuck at 3,600 RPM while competitors had gone to 4,200 and 4,500 RPM. It had absorbed the storage-related patents of PrairieTek in 1991.
Using SMP and snooping technologies from RISC workstations, Conner actually made a dual-HDD HDD: two actuators and two controlling microcontrollers made up the Conner Chinook. It also featured the industry's first implementation of command queuing and reordering.
Hitachi Global Storage Technologies 2003-2012 (2013-2018 semi-independently)
Hitachi itself is a giant Japanese industrial megacorporation similar to Matsushita or NEC, but Hitachi GST was formed when IBM exited the HDD business and sold its HDD division to merge with Hitachi's smaller affair. IBM GST was the final remnant of IBM's once-dominant home PC presence.
Hitachi, even before the acquisition of IBM GST, was a HDD manufacturer, though one of very little presence in the West. This author has never even seen a pre-GST Hitachi mechanism.
The early-acquisition Hitachi GST focused extremely hard on reliability and select markets. Retaining the IBM brands of Ultrastar (enterprise), Deskstar (desktop), and Travelstar (2.5"), by the mid-2000s, Hitachi GST was again at the top level of reliability. If you built with a Hitachi Deskstar, the HDD would usually outlast your build.
In the early 2010s, Hitachi made "CoolSpin" drives, which used slightly tighter tolerances on a 5,400 RPM line, and processes to identify the highest-quality units, to run them at 5,940 RPM. This reduced access time somewhat (via lower rotational latency) without the excess power and expense of a real 7,200 RPM manufacturing process. A side effect came down to the mode of selection: only very well manufactured drives became CoolSpin, at least early on, meaning that Hitachi's already market-leading reliability became the stuff of legends. A Deskstar 5K3000 this author bought in 2012 is still running, without any sign of media wear-out, head failure, controller failure, anything, as this piece is updated in 2024. It has accrued over 90,000 power-on hours in that time!
In 2012, Hitachi GST was bought by Western Digital, but Chinese commerce authorities demanded, as a condition of the sale, that not all the production facilities be owned by WD, so the Hitachi GST facility in Shenzhen (which made lower-spec 3.5" units) was traded to Toshiba in exchange for a damaged 2.5" factory in Thailand, which had never reopened after the 2011 flooding. Additionally, WD had to run Hitachi GST independently, with its own research and development and its own product lines.
In 2013, Hitachi/WD GST released the first helium-filled drives. Helium, being less dense than air, gave less "air" drag on the platters, so the drive ran cooler. It also gave less turbulence and friction from the spinning platters, allowing a thinner and less rigid platter, meaning more platters could go into the same unit. A standard air-filled HDD can support, at maximum, six platters, but a helium drive can do ten. This allowed the release of a giant 6 TB unit in 2013. Helium, however, is a nightmare to contain and will escape from practically any seal. Helium atoms (helium doesn't form molecules) are small enough to fit between the atoms of practically any material and diffuse out, so over time the helium level of such a drive declines. During manufacture they are overfilled, and a reservoir behind a pressure valve tops the drive back up but, inevitably, enough helium escapes that the pressure within the drive becomes insufficient to support head flying.
Helium Filled Hard Drives
Helium had been the "Holy Grail" of HDD design since the late 1970s. It was known that a helium-filled HDD could spin faster, could fly its heads more reliably (turbulence on the heads is a key cause of media errors, helium flows better), and fit more platters in the same space. When IBM hired Cornell physicist Barry Stipe in 1998, his task was no less than to succeed where literally the entire HDD business had been failing since the early 1980s.
The problem with helium HDDs was not making a good seal (pressure valves could easily fix a slow leak), it was manufacturing one. A 1999 IBM Deskstar had between 9 and 12 openings between the "mechanism" (platters, motor, heads, voice coil) and the outside world: motor power and control lines, read channel wiring, voice coil power, etc. Sealing these took much more than dabbing epoxy on them; epoxy is easily penetrated by helium. Metal tape was already used by Conner and Seagate to seal HDDs, but no known adhesive was able to contain helium.
IBM settled on welding the drive shut. As sealing was done at the very end of production, standard TIG welding would heat the mechanism so much that its components would be damaged, so IBM used a technique from the aerospace industry, employing powerful lasers to weld the aluminium. This caused another problem: a HDD chassis is machined from a die-cast aluminium block, usually of ADC12/A383 alloy. This is roughly 80% aluminium, 10% silicon, 4% copper, 1% iron, and 2% zinc, though it does vary, and traces of magnesium, manganese, nickel, and tin are common impurities below the 0.5% mark.
While this alloy is resistant to cracking from metal fatigue, laser welding produced such extreme temperature gradients that the material would crack. This required aluminium alloys much more resistant to temperature extremes, which also came from the aerospace industry.
The remaining problem was how to run electrical signals through the chassis. A simple hole, no matter how small, could not be filled adequately by any sealant or epoxy. The solution came from how refrigerators are made. Sealing in coolants like CFCs (e.g. Freon) or alkanes, in a loop meant to last years, is done using glass-metal feedthroughs: a tube of metal contains a tube of glass which contains the conductor, usually the same metal as the outer tube. All that remained was how to solder the steel feedthrough to the chassis. Anyone who's ever tried soldering to aluminium will tell you, quickly, that it is impossible. Stipe, at IBM, solved even this problem. A laser-etched mask could make the solder stick to the aluminium, but not to the steel, and plating the steel with nickel gave a good bond.
In 2002, IBM had done the impossible. Millions of dollars in research and development had resulted in a hard disk drive which ran in a helium atmosphere and would retain helium for a century. A few demonstration HDDs, of unknown capacity, were shown off at trade shows. Unfortunately, this exotic manufacturing process meant the helium drives offered 40% more capacity for 15x the price. Nobody in 2002 wanted to buy a 200 GB HDD for $2,000. Cost reduction was next. How could this process be adapted to the mass market?
IBM sold up to Hitachi at this time, but the moonshot continued. They had to solve problems as obscure as the laser hitting pockets of liquid or gas trapped in the aluminium, which would explode and destroy the seal! Hitachi developed edge-welding and a higher-silicon alloy. That alloy was then too soft and would gum up the CNC machines, which was solved with novel self-cleaning CNC machinery. Eventually, Hitachi decided it was not interested in technological leadership and, in 2007-8, terminated the project. The Almaden Research Center hosting this work, ex-IBM, was due to be mothballed, then sold.
Only Akihito Aoyagi remained, and it wasn't his full-time job. He argued that the project had produced more patents per year, per head, than anywhere else in Hitachi. To Hitachi, however, IBM had also failed where everyone else in the industry had failed. Why churn more money into a bottomless pit?
In 2009, however, demand for high capacity HDDs rocketed up. This was the era of "Big Data", "Cloud Computing", and "Hyperscaling". Anyone who could sell a ten platter HDD would be 50% ahead of the world and could name his price. In a world where Seagate, Samsung, and Western Digital were just pushing to 2 TB, a helium drive would be able to do 3 TB. Hitachi reactivated the project and, four years later, had a commercial helium-filled HDD. The entire HDD industry had just reached 3 and 4 TB as flagship capacities, Hitachi was selling 6 TB.
A year later, Hitachi used shingled magnetic recording to boost helium drives to 10 TB.
This was what tipped Western Digital to get rid of Hitachi GST as competition and buy the company. China was not happy at WD attempting to eliminate a competitor and set timescales on the purchase, as well as other conditions, before it would approve the "merger".
In 2015, China's Ministry of Commerce set a timescale by which Western Digital could finally destroy Hitachi GST and, by 2018, Hitachi GST was fully integrated into Western Digital. Most of it was shut down; what little production WD wanted to keep remained, notably the plants in Fujisawa, Japan, and Prachinburi Province, Thailand.
By 2021, Western Digital was selling around 10-15 million helium-filled HDDs every year. The price never came down to match conventional manufacturing and still commanded a premium, restricting helium to high-density nearline roles, but it helped WD become dominant in the hyperscale datacentre.