Physical Volume Create Conundrum

I want to start by saying I already resolved my issue, but I want to understand why I am seeing what I am seeing.

I have a server with a RAID controller, two 500GB drives, and six 600GB drives. The two 500GB drives are mirrored and have the OS installed on them. The six 600GB drives were requested as a RAID0 set to go into a new volume group. My issue was getting the full size out of the RAID0 virtual disk.

On boot I saw the disk:
sd 0:2:1:0: [sdb] 7025983488 512-byte logical blocks: (3.59 TB/3.27 TiB)

I work mostly with LUNs, and generally I can just run pvcreate and be ready to use the disk. But I could not issue a pvcreate on /dev/sdb; it seemed to need to be partitioned first. So I partitioned it with fdisk using this procedure, which I found to be the same as the one on Red Hat's website:
Create Linux Partition
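
For reference, the fdisk sequence was roughly this (a sketch from memory, assuming the disk is /dev/sdb; prompts differ slightly between fdisk versions):

fdisk /dev/sdb
  n        # new partition
  p        # primary
  1        # partition number 1
  <Enter>  # accept default first sector
  <Enter>  # accept default last sector
  t        # change the partition type
  8e       # 8e = Linux LVM
  w        # write the table and exit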

Once I had a partition, I ran pvcreate /dev/sdb1, and each time I ended up with only 2TB of usable space.

[root@testserv dev]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@testserv dev]# pvscan
  PV /dev/sda2   VG vg01            lvm2 [464.75 GiB / 355.75 GiB free]
  PV /dev/sdb1                      lvm2 [2.00 TiB]

I ended up just issuing a pvremove, removing the partition, and issuing a vgcreate against the raw device. I noticed that vgcreate created the physical volume for me, and it was the correct size.

[root@testserv ~]# vgcreate scratchvg /dev/sdb
  No physical volume label read from /dev/sdb
  Physical volume "/dev/sdb" successfully created
  Volume group "scratchvg" successfully created
[root@testserv ~]# pvscan
  PV /dev/sdb    VG scratchvg   lvm2 [3.27 TiB / 3.27 TiB free]
  PV /dev/sda2   VG vg01        lvm2 [464.75 GiB / 355.75 GiB free]
  Total: 2 [3.73 TiB] / in use: 2 [3.73 TiB] / in no VG: 0 [0   ]

My question is: what am I missing? I am completely fine with skipping the fdisk and pvcreate steps, but I thought those steps were required. What was I doing wrong? What is the proper procedure for taking a virtual disk presented by a RAID controller and creating a volume group on it, or adding it to one? I suppose my own answer would be "the procedure that works", but since I found so many references to partitioning the disk with fdisk, I am curious why that was not working for me. Thanks.

pvcreate has had some known issues in the past with reporting wrong sizes, especially for disks larger than 1TB.

It has been fixed in most recent RHEL releases. Which version are you using?

As for using raw devices vs. partitions, I'll quote myself from another post in this forum:

I am using RHEL 6.4.

Thanks so much for providing the info on whole-disk PVs; I had not known that was an issue.

---------- Post updated at 09:47 AM ---------- Previous update was at 07:15 AM ----------

So I have done some more reading on this, and probably the reason I had not encountered issues before is that I had never brought in a disk over 2TB; our LUNs are typically 2TB. From everything I have read, fdisk writes an MBR partition table, which tops out at 2TiB (2^32 sectors x 512 bytes), so I used parted with a GPT label instead. That way I was at least no longer using whole-disk PVs.
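
For anyone hitting the same wall, this is roughly the parted sequence (a sketch, assuming the virtual disk is /dev/sdb and you want one partition spanning the whole device):

parted /dev/sdb mklabel gpt                        # GPT label instead of MBR, so no 2TiB cap
parted -a optimal /dev/sdb mkpart primary 0% 100%  # one partition covering the whole disk
parted /dev/sdb set 1 lvm on                       # flag the partition for LVM use
pvcreate /dev/sdb1
vgcreate scratchvg /dev/sdb1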

I just wanted to add this info in the event others encounter this post in the future.

Also worth noting is that I have an additional array that provides a virtual disk over 16TB. There is a 16TB limit with mkfs.ext4, although I read about some methods for exceeding 16TB that seemed unsupported.

Yes, when you start having such large environments, ext3 and ext4 start to look limited.

reiserfs, vxfs, xfs or another "large enterprise" solution would work better in those cases.
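
As an illustration (hypothetical VG/LV names; note that on RHEL 6, XFS is delivered through the Scalable File System add-on):

mkfs.xfs /dev/bigvg/biglv                # XFS handles filesystems well past ext4's 16TB limit
mount -t xfs /dev/bigvg/biglv /mnt/big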

Under ext4 (although it's unlikely) you may even deplete the available inodes before you run out of disk space.

I have had this happen in ext2, ext3, and ext4, but always with very small filesystems, where the defaults are extremely conservative.
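
If you want to check for that, df -i reports inode usage, and mkfs.ext4's -i option sets the bytes-per-inode ratio at creation time (example device and values only):

df -i /mnt/data                  # inode usage vs. capacity for a mounted filesystem
mkfs.ext4 -i 4096 /dev/sdc1      # one inode per 4KiB of space instead of the usual 16KiB default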