Discrepancy in RAID Size

Buickman
I have a 3ware 9650SE 4-port card that I just plugged 4 × 1.5T drives into as a RAID 10. The RAID BIOS reports 2.7T, Nautilus reports 2.1T, and fdisk reports 2.999T.

Which one of these should I believe? I thought it should be around 3T (the 2.999T figure), but I'm starting to think Nautilus is reporting the actual size.

I should add that I formatted this array as XFS.
 
Good to see more XFS use. It is beautiful! I use it on all my Linux boxen, even for my / partition (everything except /boot). The xfs_fsr command is nice (it defragments files). You usually need to install the xfsprogs suite to fully administer XFS.

RAID controller BIOSes count capacity differently, typically in raw sectors or binary units. In your case the math actually works out: 4 × 1.5T in RAID 10 gives 3T usable, and 3 TB (decimal) is about 2.73 TiB (binary), which matches the 2.7T your card's BIOS shows.

fdisk just reports the size of your slices (partitions); it counts in decimal units, which is likely where your 2.999T comes from.

df reports the size of the mounted filesystem(s). The number df shows varies with the filesystem type and its block size, which is always a multiple of 512 bytes; most Linux filesystems use 4096-byte blocks, depending on the distro and the size of the drive. Filesystem metadata overhead also comes out of that total.

Nautilus reads sizes the same way df does.

Short answer: I would trust df (and Nautilus) for what you really have available.
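
If you want to see where each number comes from, compare the raw device, the partition table, and the filesystem side by side. A quick sketch, assuming the array shows up as /dev/sda and is mounted at /data (substitute your own device and mount point):
Code:
blockdev --getsize64 /dev/sda   # raw size of the block device, in bytes
fdisk -l /dev/sda               # partition sizes, as fdisk sees them
df -h /data                     # filesystem size and usage, as df (and Nautilus) see it
xfs_info /data                  # XFS geometry, including block size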
 
If that's the case and df is reporting 2.1T, then that means I'm missing around 800-900G somewhere. How do I get the full size of that array? I thought XFS was designed for large filesystems. Would xfsprogs do that?
 
If that's the case and df is reporting 2.1T, then that means I'm missing around 800-900G somewhere. How do I get the full size of that array?
I am not sure how much space overhead the 3ware card uses for RAID 10 (1+0).


If this RAID set is not in use yet, create a large file and check its size afterward.



... using /data as an example mount point for your RAID ...
Code:
# no count= given, so this runs until the filesystem fills up; Ctrl-C to stop early
dd if=/dev/zero of=/data/BIG_OLE_FILE.file bs=1M
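When dd stops (or you stop it), check what actually made it onto the disk, then clean up:
Code:
ls -lh /data/BIG_OLE_FILE.file   # size of the test file
df -h /data                      # filesystem usage afterward
rm /data/BIG_OLE_FILE.file       # remove the test file when done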
I thought XFS was designed for large filesystems.
It is, and it does that very well. Format the array as ext3 or something else and compare the difference.

Would xfsprogs do that?
No, I wish.
With ext2/3/4 you can use tune2fs to shrink the 'reserved space' (around 5% by default). I wouldn't recommend that on a system drive, though.
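
For example, to drop the reserve to 1% on a hypothetical /dev/sdb1 (ext2/3/4 only; XFS has no such reserve to tune):
Code:
tune2fs -m 1 /dev/sdb1   # set reserved blocks to 1% of the filesystem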
 
After a lot of research I found out that there is a 2T limit; however, it was supposed to have been addressed by a much earlier Ubuntu kernel (6.06). A guy at work told me they use GParted on large arrays and it works like a champ. I came home, downloaded the ISO, burned it, then booted from it. It did allow me to make a 2.8T partition, but every time I tried to format it, it would fail. I tried XFS, ext3, and ext4. I ended up creating a 900G and a 1.9T partition, and that worked.

The only other thing I can think to do is flash the BIOS on the RAID card.
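
For reference, the command-line equivalent of what GParted was doing looks something like this, with /dev/sdb standing in for the array:
Code:
parted -s /dev/sdb mklabel gpt                   # GPT is what gets past the 2T MBR limit
parted -s /dev/sdb mkpart primary xfs 0% 100%    # one partition across the whole array
mkfs.xfs /dev/sdb1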
 
It did allow me to make a 2.8T partition, but every time I tried to format it, it would fail. I tried XFS, ext3, and ext4.
When you used GParted to partition the array, it used GPT (the GUID Partition Table). You will need to make sure the kernel you boot supports GPT.
If it does not, you will need to compile that support into your kernel:
Code:
make menuconfig --> File Systems --> Partition Types --> Advanced partition selection --> EFI GUID Partition support
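Before recompiling, it is worth checking whether the running kernel already has these options; this assumes your distro ships its kernel config as /boot/config-<version>:
Code:
grep -E 'CONFIG_(EFI_PARTITION|LBDAF?)' /boot/config-$(uname -r)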
Personally, I do not care for that solution and would go for either:
A: LVM2 underneath the XFS filesystem (sketched below). LVM2 is really good stuff and brings along a lot of cool goodies.

B: Compile the kernel with "Support for large (2TB+) block devices and files" (CONFIG_LBD, or CONFIG_LBDAF in 2.6.30+).

I really thought all Linux distros enabled point B above at this point. :shrug:
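
As a rough sketch of option A, assuming the array appears as /dev/sdb and using placeholder names for the volume group and logical volume:
Code:
pvcreate /dev/sdb                      # mark the whole array as an LVM physical volume
vgcreate raidvg /dev/sdb               # build a volume group on it
lvcreate -l 100%FREE -n data raidvg    # one logical volume spanning all the space
mkfs.xfs /dev/raidvg/data              # put XFS on the logical volume
mkdir -p /data && mount /dev/raidvg/data /data
The nice part is that later you can lvextend the volume and grow the filesystem with xfs_growfs without reformatting.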


Of course, if you want some real fun with your RAID, try FreeBSD 7 or OpenSolaris with ZFS. It takes no time to learn ZFS, and it is THE ultimate in filesystems. Linux will have Btrfs finished to compete (maybe) with ZFS one of these days.
FreeNAS is easy to set up and use. IMO, the best free NAS option out there.
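
If you want a taste, here is a minimal ZFS sketch on FreeBSD, assuming the four drives show up as da0 through da3 (two mirrored vdevs is ZFS's equivalent of RAID 10; the controller would need to expose the drives individually rather than as one hardware array):
Code:
zpool create tank mirror da0 da1 mirror da2 da3   # striped mirrors, like RAID 10
zfs create tank/data                              # datasets are cheap; make as many as you like
zfs list                                          # see pool and dataset sizes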
 
