Georgef551

What sucks about formatting HDD's today....

Do you bother reformatting new HDD's today?  

16 members have voted

  1. Do you bother reformatting new HDD's today?

    • Yes. I want the format MY way!
      5
    • No. Takes too long, I'll live with it the way it is.
      2
    • Depends.
      8
    • I still use the 8" floppies with my desk computer.
      1


Recommended Posts

I wanted to back up my YouTube collection, as well as other important stuff, so I got myself a Western Digital 2TB external hard drive.

For those of you who can, remember back in the late 80's and 90's, when a format only took several minutes to execute? Who would ever have thought smaller capacity drives would have a leg up on today's behemoths?
My laptop has an 80GB HDD, and doing a total defrag takes a couple of hours. My old P2/450 could do it in under an hour.

You would think that with the super high capacity of today's drives, the higher RPMs, and the faster read/write times, things would be consistent, or maybe a hair slower?

Anyway, I got the 2TB drive, but it came formatted FAT32, which caps files at 4GB, and I didn't want that. I wanted a single continuous volume like my other PCs have (NTFS, was it called?), so I reformatted the thing.
First attempt failed, due to time constraints.

Second attempt: I started the formatting process at about 7:15 pm, came back from work at about 5 pm the day AFTER, and it was at 97% completion! All told, it took about 22 HOURS to format!

In the end, I'm happy, though. Got my YouTube vids backed up, but was the process ever so SLOW AS ALL HELL!
.
.
(Yeah, I sneaked in an elevator video....yeah.....it's relative to formatting HDD's today.)

Prince of Darkness

If I remember correctly, I believe it's because the size and storage capacities have gone absolutely through the roof and the read/write speeds have only kept up enough to still do their jobs.

I still remember when 4 gigs of space was adequate on a Windows 95 machine...

Maes

Time for a full format was always pretty long, and it simply grew linearly with the size of a drive. Remember, low-level formatting doesn't benefit much from the CPU, Ultra-DMA or even the interface architecture, so formatting is nowhere near as fast as reading or writing from a formatted disk. Much like how reading a floppy disk takes more or less the same amount of time on an 8088 as on a Core i7.

As for the overall speed of drives: RPM ranges are still in the same order of magnitude as 30 years ago. Maximum theoretical transfer speeds did improve by a couple of orders of magnitude, but as I said those have almost no effect on low-level formatting.
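To put numbers on that, a back-of-the-envelope check (the ~25 MB/s effective verify rate below is an assumed figure, not something measured):

```shell
# 2 TB read/verified end to end at an assumed ~25 MB/s effective rate
bytes=$((2 * 1000 * 1000 * 1000 * 1000))
rate=$((25 * 1000 * 1000))   # bytes per second (assumption)
echo "$(( bytes / rate / 3600 )) hours"   # prints "22 hours"
```

which lands right around the 22 hours George reported.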

BTW, why did you need to do a low-level format anyway? Unless you suspect physical damage, a quick format should suffice.
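For the record, the quick/full distinction exists on Linux too: a plain mkfs just writes filesystem structures, while -c adds a full surface scan. A harmless sketch against a scratch file instead of a real device (mkfs.ext3 and the -F/-c flags are from e2fsprogs; double-check any device path before running mkfs for real):

```shell
#!/bin/sh
# Use a small file-backed image so no real disk is touched (-F allows that).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
mkfs.ext3 -F -q "$img"        # "quick": writes only filesystem metadata
# mkfs.ext3 -F -q -c "$img"   # "full": -c read-tests every block first (slow)
echo formatted
rm -f "$img"
```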

Defragmentation depends on a lot of factors, first of all how badly fragmented a drive is and what defragmentation options you chose. Surely, a large AND fragmented drive will take longer to defragment than a smaller, equally badly fragmented one, but if you choose to re-sort directories too... it can turn out pretty slow even on the smaller drive. Sometimes the best option is to simply copy all files off a fragmented disk, delete them, and copy them back in an orderly manner.
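That last trick, sketched with temp directories standing in for the fragmented partition and a spare disk (point them at real mount points in actual use):

```shell
#!/bin/sh
frag=$(mktemp -d)    # stands in for the fragmented partition
spare=$(mktemp -d)   # stands in for a spare disk
echo hello > "$frag/file.txt"
cp -a "$frag/." "$spare/"   # 1. copy everything off
rm -rf "$frag"/*            # 2. wipe the fragmented space in one go
cp -a "$spare/." "$frag/"   # 3. copy back; files are rewritten contiguously
cat "$frag/file.txt"        # prints "hello": data survives the round trip
rm -rf "$frag" "$spare"
```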


Depending on what I'll use the drive for, I either reformat it as ext3 or FAT32, both of which usually take about 30 seconds to 5 minutes, depending on the size of the drive (I think the two 1TB drives in my desktop took about 4 minutes when one was formatted as ext3). That includes partitioning time, if I need it (though I leave most drives as a single partition). I also try not to reformat flash memory, since it can lead to performance degradation when done incorrectly.

As a test, I reformatted a 40GB USB drive sitting next to me a few times. NTFS took 3.783 seconds ("time mkfs.ntfs -f /dev/sdb1"), ext3 took 46.598 seconds ("time mkfs.ext3 /dev/sdb1"), and FAT32 took 1.363 seconds ("time mkdosfs /dev/sdb1"). Disclaimer: I'm on Slackware Linux 12.2.

exp(x) said:

Defragment? Here's a nickel, kid. Get yourself a better filesystem.


I see a "Linux doesn't need defragging" debate coming...

ANY disk-based medium with random reads/writes will sooner or later become fragmented, as deletion invariably leaves holes. This includes Sony MiniDisc, among others.

Yeah, I know that certain operating systems handle file allocation differently than just stuffing the first free hole, but even on the smartest setup, you can induce fragmentation by writing a fuckton of small files until you fill the disk up, deleting every other one, and then writing one or more big files that will have nowhere else to go but to fill in the gaps.
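That degenerate scenario is easy to script; a sketch with arbitrary sizes and counts (on a genuinely full disk, the big file at the end could only land in the freed single-block holes):

```shell
#!/bin/sh
dir=$(mktemp -d)
# 1. Fill the disk with lots of small files
for i in $(seq 1 100); do
  dd if=/dev/zero of="$dir/small_$i" bs=4096 count=1 2>/dev/null
done
# 2. Delete every other one, leaving a 4 KiB hole between survivors
for i in $(seq 2 2 100); do
  rm "$dir/small_$i"
done
# 3. On a full disk, this big file would have to fill in the gaps
dd if=/dev/zero of="$dir/big" bs=4096 count=200 2>/dev/null
ls "$dir" | wc -l   # 51: the 50 odd-numbered small files plus "big"
rm -rf "$dir"
```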

It's also a common misconception that fragmentation is caused by specific file systems: wrong. Neither NTFS, FAT nor ext3 specifies a file allocation policy; that's up to the OS to decide. Windows/DOS just does it the quick and easy way and stuffs the file into the first available free space on the disk (but that's not mandatory by any means).

Linux by default tries to find a better spot for a newly created file, at the cost of longer creation overheads.

All of this is explained in great detail here, so you know that I didn't pull it out of my ass:

http://geekblog.oneandoneis2.org/index.php/2006/08/17/why_doesn_t_linux_need_defragmenting


FAT does NOT specify an allocation policy. It's up to the operating system to find a good spot for a file. That means that the allocation policy is not a part of FAT but of the FAT filesystem driver. You can place files anywhere you want on a FAT volume (check Alexei Frounze's FAT driver for an example).


Regarding formatting times again: George was probably trying to do a low-level format (with physical verification and all); no wonder that takes a fuckton of time to perform with any disk technology and file system.

My approach vs fragmentation? Whenever possible, on machines that are going to handle a lot of incoming/outgoing data (downloads + burning) I create at least two partitions: one for stuff like programs and the OS, which are going to stay there for a while and will virtually never need defragging (unless I delete something big with 20000 files), and a "data" partition for downloads, movies, music and rapidly changing shit like works in progress, where I can safely not give a fuck whether it gets fragmented or not.

Georgef551 said:

Do you bother reformatting new HDD's today?

I intentionally stick to smaller HDs that won't completely paralyze my digital life when they eventually blow up, taking all that data with them.

Who the hell even needs 300GB+ disks anyway, unless all they do is warez movies and store them on their HDs?

Jodwin said:

I intentionally stick to smaller HDs that won't completely paralyze my digital life when they eventually blow up, taking all that data with them.


I've had three such occurrences to date (with differing causes), and experience showed me that only stuff that was burned onto CD-R/DVD-R made the trip safely. I only use external HDs for large but not vital data, for everything else I prefer burning.

exp(x) said:

Defragment? Here's a nickel, kid. Get yourself a better filesystem.

I'll remember that when I'm setting up my next Beowulf cluster.


Plus, I can't really fit external HDs into any consistent usage scheme, save for occasionally moving/temporarily storing large amounts of data.

They are too fragile to move around continuously, not reliable enough for long-term storage, and most enclosures are far worse suited to continuous operation than internal hard disks (poor ventilation, easily failing external power supplies and enclosure adapter circuits are quite common annoyances). Luckily, the HD inside the enclosure usually survives these common failure modes, but it's annoying nonetheless.

Using them for storing movies/music and turning them on occasionally may sound reasonable, but if you do that consistently they will soon rack up much greater start/stop stress than the PC's main hard disk(s). Leaving them on all the time kinda defeats the purpose, and is not really OK unless you have a server/workstation-grade enclosure rather than one of those ultra-compact USB-powered ones. I prefer optical media for this sort of job, which can be used in other computers and even certain DVD/DivX players. I'd rather put any wear and tear on the optical drives than on an HD.

Maes said:

Yeah, I know that certain operating systems handle file allocation differently than just stuffing the first free hole, but even on the smartest setup, you can induce fragmentation by writing a fuckton of small files until you fill the disk up, deleting every other one, and then writing one or more big files that will have nowhere else to go but to fill in the gaps.

It's also a common misconception that fragmentation is caused by specific file systems: wrong. Neither NTFS, FAT nor ext3 specifies a file allocation policy; that's up to the OS to decide. Windows/DOS just does it the quick and easy way and stuffs the file into the first available free space on the disk (but that's not mandatory by any means).

Tell that to /g/.

Bucket said:

Tell that to /g/.


They are uneducated whiggers.

Prince of Darkness said:

If I remember correctly, I believe it's because the size and storage capacities have gone absolutely through the roof and the read/write speeds have only kept up enough to still do their jobs.

I still remember when 4 gigs of space was adequate on a Windows 95 machine...


Heck, I remember when having 640K of memory was plenty, and would almost never run out, either.

Maes said:
Time for a full format was always pretty long, and it simply grew linearly with the size of a drive. Remember, low-level formatting doesn't benefit much from the CPU, Ultra-DMA or even the interface architecture, so formatting is nowhere near as fast as reading or writing from a formatted disk. Much like how reading a floppy disk takes more or less the same amount of time on an 8088 as on a Core i7.

Lame. Not you, or what you said. After 30 years, no real improvement? That's lame.

As for the overall speed of drives: RPM ranges are still in the same order of magnitude as 30 years ago. Maximum theoretical transfer speeds did improve by a couple of orders of magnitude, but as I said those have almost no effect on low-level formatting.

I'd hate to see what happens when we get into the hundreds of terabytes. Might as well forget it, then.

BTW, why did you need to do a low-level format anyway? Unless you suspect physical damage, a quick format should suffice.

I knew there was an option, but I just chose the one that sounded like it would do the job, and not a half-assed one. Gotta do it, do it right. Now I'm wondering if the fast way would've hurt.

Defragmentation depends on a lot of factors, first of all how badly fragmented a drive is and what defragmentation options you chose. Surely, a large AND fragmented drive will take longer to defragment than a smaller, equally badly fragmented one, but if you choose to re-sort directories too... it can turn out pretty slow even on the smaller drive. Sometimes the best option is to simply copy all files off a fragmented disk, delete them, and copy them back in an orderly manner.

That, I do know, and I defrag on a semi-regular basis. Does the included defrag on Vista even work? I do the old defrag.exe in the DOS box, because I can sort of actually see it do something.

Jodwin said:
Who the hell even needs 300GB+ disks anyway, unless all they do is warez movies and store them on their HDs?

I do go overkill if I can, but that means bigger files won't require getting another HDD. Space is very limited in my tiny abode.

PS: The external isn't on all the time; it's used rarely, for backup purposes on a very large scale. CDs and DVDs will not preserve the massive video databases of specific topics, unless you want a clusterfuck of discs and a place the size of an old Buick to organize them.

Prince of Darkness said:

I still remember when 4 gigs of space was adequate on a Windows 95 machine...


Pfft what? Try 200 MB. Or better yet, computers that don't have a HDD.


Being a Linux user, partitioning / formatting / reformatting the HD is pretty much mandatory for me.

Maes
Georgef551 said:

Lame. Not you, or what you said. After 30 years, no real improvement? That's lame.

I hate to see what happens when we get into the 100's of terabytes. Might as well forget it, then.


Why should a purely maintenance operation be "improved"? If there's anything that must be done securely and slowly, that's the one. And actually, if you somehow manage to measure "formatted data per second" you will still find that today's drives beat a 30 MB ST-506-era hard disk, but the disks themselves are larger and the operation itself is (deliberately) inherently slow.

BTW, there's no reason to do a full format with a new HD unless you bought it used or you suspect it to be flawed, in which case the slowness will really be the least of your concerns.

Georgef551 said:

I knew there was an option, but I just chose the one that sounded like it would do the job, and not a half-ass one. Gotta' do it, d oit right. Now I'm wondering if the fast way would've hurt.


Dunno what option/tool you used to do the format, but since we're talking about an external disk, I presume you used Windows' "Format" option in the context menu, which has a pretty evident "Quick Format" tick box. On the command line the equivalent switch is /q. Even when installing a fresh copy of Windows XP (but not 2000) you get the option to do a quick format instead of a full one.

Georgef551 said:

That, I do know, and I defrag on a semi-regular basis. Does the included defrag on Vista even work? I do the old defrag.exe in the DOS box, because I can sort of actually see it do something.


I only use JkDefrag for this sort of thing. Works on all windoze versions, has a clear visual display, and plenty of options.

Georgef551 said:

CDs and DVDs will not preserve the massive video databases of specific topics, unless you want a clusterfuck of discs and a place the size of an old Buick to organize them.


I only know that a split-second power glitch or data corruption can fuck up your entire filesystem on an HD (case in point: a 300 GB hard disk plugged into an IDE channel after its enclosure failed. Unfortunately, the BIOS had the 127 GB limit, and the filesystem was rendered irrecoverable from the moment the disk was powered up... OUCH!!!!). Plus the tens of failed HDs I've seen in my old helpdesk job...

On the other hand, killing tens or hundreds of discs scattered around is much, much harder ;-) And you can always buy another optical drive should your older ones fail.

To keep the clusterfucking down, however, I sometimes destroy older/duplicate discs, or employ rewritables for "middle-term" stuff. That being said, I can't wait for recordable/rewritable Blu-ray discs to become commonplace.

Maes said:

I only know that a split-second power glitch or data corruption can fuck up your entire filesystem on a HD

Again, get yourself a better filesystem. Modern ones (like ext3/4, ReiserFS, JFS, etc) have journaling to guard against that very case.

Maes said:

I only use JkDefrag for this sort of thing. Works on all windoze versions, has a clear visual display, and plenty of options.

I've been using PerfectDisk. Any advantages JkDefrag has over it?

Super Jamie

Hooray, let's all argue in another Georgef troll/fail thread!

I own two 1TB drives, and neither took 22 hours to format. You're not doing it right.

Maes said:

I only use external HDs for large but not vital data, for everything else I prefer burning.

Last time I burnt a massive heap of stuff to CD, I went to get it back a couple of years later and half the discs were corrupt. Now I just buy hard drives.

Maes said:

I see a "Linux doesn't need defragging" debate coming...

Anyone who says this is a fuckwit. ext2/3 are definitely more fragmentation-resistant than NTFS, but they do fragment. Last time I kept a Linux install on the same partition for ~18 months, I ended up with about 20% fragmentation. That's with constant install/uninstall of apps and upgrade/dist-upgrades, though.

All filesystems fragment, some just fragment slower than others.

Bucket said:

Tell that to /g/.

Hahaha, whenever I have the misfortune to visit that board, I realise just how complex basic technology is to so many people. And seriously, how many goddamn times do people have to ask what headphones to buy?

Dr. Zin

For a somewhat on-topic hijack: when shutting down my main PC (a Win XP setup from 2002) last night, it hung while installing an Automatic Update. I was forced to push the power button when it was clear that it wasn't going to stop on its own. When I started it back up I got a nice message saying the hard drive was unmountable. The standard procedure is apparently to put the XP installation CD in and run chkdsk. That doesn't work so well when the CD drive has been fried for the last three years.

Anyway, I was able to find a source saying that if you pull the drive out and put it in another computer running XP (for example, by taking the connectors for that computer's CD drive and hooking them to the bad drive), that computer will run chkdsk and fix the corrupted boot sector. I guess Microsoft has to do something right once in a while.

Anyway, to cut the rambling short, is there any way to avoid corrupting your hard disk if the system hangs while updating?

Dr. Zin said:

Anyway, to cut the rambling short, is there any way to avoid corrupting your hard disk if the system hangs while updating?

It really depends on what it's doing at the moment it hangs. You may just have a discontiguous entry stopping boot, or you may have some essential file(s) completely corrupted and either need to copy them over from a working install or wipe and reload entirely.

Having professionally fixed things like this for years, even if it hasn't happened to me much personally, is one of the main reasons I use Linux.

MikeRS said:

Again, get yourself a better filesystem. Modern ones (like ext3/4, ReiserFS, JFS, etc) have journaling to guard against that very case.


But NTFS uses journaling too ;-)

However, the filesystem really doesn't enter into it in most serious data corruption cases. Data corruption induced by an SB Audigy used with a VIA chipset fucked up the MFT in one case, and in another a size-limited BIOS fucked up the disk geometry so badly that merely powering up the disk made everything irrecoverable, changing the very definition of sectors/tracks and data density on the disk. Using ext3 or HPFS or FAT would have made no difference, since the corruption occurred at a very low logical/physical level, even if it was not fatal to the disk.

alexz721 said:

I've been using PerfectDisk. Any advantages JkDefrag has over it?


Lemme see... it's free, open source, small and portable (no installation required, fits even on a floppy). Cuts it for me.

Super Jamie said:

Having professionally fixed things like this for years, even if it hasn't happened to me much personally, is one of the main reasons I use Linux.


Ironically, UNIX users on desktop systems, especially those with DOS and Windows backgrounds, were taught/warned never to shut down a Unix machine from the power switch, because the file system and its thousands of open handles needed to be completely closed and unmounted, or else data loss and filesystem corruption were almost certain. By contrast, DOS and Windows (even 95 and above) were much more tolerant of this practice (as long as they weren't writing at that moment). For years, Linux distros suffered from this problem too, as journaling was not enabled by default.

Super Jamie said:

Last time I burnt a massive heap of stuff to CD, I went to get it back a couple of years later and half the discs were corrupt. Now I just buy hard drives.


Stumbling upon defective/low-quality/hard-to-read CDs (and, more often, DVDs) has happened to me too, but they were invariably no-name discount brands. On the contrary, a Verbatim gold CD-R from 1996 is not only still perfectly readable today, but even writable (multisession). Really, don't cheap out on those or you will regret it. If you are stuck in such a situation though, try different CD/DVD drives before calling it quits. In any case, even with a batch of bad CDs you're usually better off than with a single defective hard disk: with the HD it's all or nothing, while with the CDs the data is scattered and you have many more drive/disc/read-attempt combos to try.

Super Jamie said:

All filesystems fragment, some just fragment slower than others.


Well said. Unfortunately I've stumbled upon one too many Linux fanboys adamantly claiming that "Linux doesn't fragment, EVER!", even when I deliberately put them in front of a degenerate disk-filling scenario. In their defense, I'll concede that most of them were newbies who just parroted and perpetuated what they'd heard/read in some forum/page or from other newbies.

MikeRS said:

Again, get yourself a better filesystem. Modern ones (like ext3/4, ReiserFS, JFS, etc) have journaling to guard against that very case.


Put it in an OS worth a damn and I'll consider.

Csonicgo said:

Put it in an OS worth a damn and I'll consider.


AmigaOS? :-p

(The classic one, not that wannabe Linux-based "AmigaOS for PC" of today.)

Csonicgo said:

Put it in an OS worth a damn and I'll consider.



How true.

I don't care about theoretical advantages but sorry to all Linux geeks out there: My OS of choice needs to run certain software without hassle - and Linux is not that system!

Graf Zahl said:

I don't care about theoretical advantages but sorry to all Linux geeks out there: My OS of choice needs to run certain software without hassle - and Linux is not that system!

Funnily enough that's the main reason I prefer Linux over Windows :-)

fraggle said:

Funnily enough that's the main reason I prefer Linux over Windows :-)


For that same reason, DOS may be preferable in some circumstances.

fraggle said:

Funnily enough that's the main reason I prefer Linux over Windows :-)


Do you happen to need any non-standard software? 90% of what I run is Windows exclusive.

Besides, nobody I work with uses Linux, so I'd maneuver myself into a dead end if I switched OSes.

Graf Zahl said:

Do you happen to need any non-standard software? 90% of what I run is Windows exclusive.

It really depends what you consider standard.

I have a much greater need for grep/cat/awk/sed, bash scripting, a configurable window manager, dual clipboards and other Linux-centric features than I do for Microsoft Office or Doom Builder 2. Especially at work.

There's the right tool for every task. Sometimes one can get by in a less efficient manner with another tool, sometimes there is no alternative. Windows is the right tool for some things, Linux is the right tool for others. Even DOS and other older or obscure OSes have their place. OS/2 has powered ATMs for the last 25 years or so, only being replaced by Windows-based systems in the last ~5.
