Originally Posted by [GSV]Trig
It really depends on your company's needs; things like support and immediate setup are sometimes a concern, and if not, then build it yourself.
As for the specification, it really depends on the server's load. For SMP there's the choice between AMD's Opteron 2xx series chips and Intel's Xeons; personally I would forget about 64-bit and just go for the raw brute processing power Intel was so proud of. I'd say a CM Stacker chassis, an Asus NCCH-DL board, 2x 2.8GHz Xeons, and 2GB of PC3200 ECC memory would be a fine base for any job a server may have to perform. The only unfortunate issue with that board is Asus decided to use the Promise PDC20319 controller, which only has RAID modes 0, 1, 10, and JBOD.
Also, disk I/O load is another concern: how many users will access their files at once? Will it be plugged into a gigabit Ethernet port? If it's only hooked into the network via a 100Base-TX port, then even 3x WD Raptors in a RAID 5 array will be overkill, but if it's plugged into a gigabit switch and could have 20 people making high-demand disk operations at the same time, then a U320 RAID card and disks will need to be considered.
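To put some very rough numbers on that, here's a quick back-of-envelope sketch (the per-drive figure is my assumption for a Raptor's sustained read, not a benchmark result):

[code]
# Rough back-of-envelope: is the network link or the disk array the bottleneck?
# Link speeds are theoretical ceilings; the per-drive figure is an assumed
# ballpark for a 74GB Raptor's sustained read, not a measured number.

links_mbit = {
    "100Base-TX": 100,
    "Gigabit Ethernet": 1000,
}

raptor_sustained_mb = 60.0        # assumed sustained read per drive, MB/s
raid5_drives = 3
# Conservative RAID 5 read estimate: data is striped over (n - 1) drives' worth
array_read_mb = raptor_sustained_mb * (raid5_drives - 1)

for name, mbit in links_mbit.items():
    link_mb = mbit / 8.0          # bits -> bytes, ignoring protocol overhead
    bottleneck = "network link" if link_mb < array_read_mb else "disk array"
    print("%s: ~%.1f MB/s link vs ~%.0f MB/s array -> bottleneck is the %s"
          % (name, link_mb, array_read_mb, bottleneck))
[/code]

On 100Base-TX the wire saturates long before the array does; on gigabit the disks become the limiting factor, which is where faster storage starts to matter.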
So the choice comes down to high-end SATA RAID cards or U320 RAID. By the sound of it, SCSI will be overkill for this server's application, and it's not a cheap solution either. So it's:
a) Adaptec AAR-2410SA SATA RAID Controller (if you want hot-swap support)
b) Highpoint Rocket RAID 1640 4-channel SATA RAID 5 Host Adapter (If downtime for replacing a dead disk isn't a concern)
or c) Adaptec SCSI RAID 2230SLP (if you have a big budget and really do need U320 levels of throughput)
Personally I would go for a), and get the enclosure kit if possible for hot-swappable disk support in case something happens to a disk; that means no downtime. 3x 74GB WD Raptors would be enough for pretty much anything. I have two of them striped in my new rig and they cream anything else ATA- and SATA-wise.
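For the sizing side of it, the usable space and redundancy work out like this (just the standard RAID arithmetic, nothing controller-specific):

[code]
# Usable space and failure tolerance for 3x 74GB Raptors at the common RAID levels.
# Straight arithmetic from the standard layouts, nothing vendor-specific.

def raid_layout(level, drives, size_gb):
    """Return (usable GB, how many drive failures the array survives)."""
    if level == "0":          # plain stripe: all capacity, no redundancy
        return drives * size_gb, 0
    if level == "1":          # n-way mirror: one drive's worth of space
        return size_gb, drives - 1
    if level == "5":          # one drive's worth of capacity lost to parity
        return (drives - 1) * size_gb, 1
    if level == "10":         # striped mirrors, needs an even drive count
        return (drives // 2) * size_gb, 1
    raise ValueError("unsupported RAID level: %s" % level)

for level in ("0", "1", "5"):
    usable, failures = raid_layout(level, drives=3, size_gb=74)
    print("RAID %s: %d GB usable, survives %d failed disk(s)"
          % (level, usable, failures))
[/code]

RAID 5 over the three Raptors gives about 148GB usable and survives a single dead disk, which combined with hot-swap means a failed drive can be replaced without taking the box down.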
All this hardware will work with Linux, so a nice efficient server operating system to go with efficient hardware.
As far as OEM servers go, they are as likely to suffer hardware faults (if not more so) than a custom build; the only difference is that the huge extra chunk you pay is for a support contract, and with the dealings I've had with support people in the past, I would rather stress test my own build and then deploy it than plug-and-run an OEM box and hope nothing goes wrong. Skimping on time to rush a deployment simply isn't worth it and isn't good administrator practice in the first place... that's why we gotta put up with the "Intardnet".
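By stress test I mean something along these lines before the box goes into production, though a proper tool like bonnie++ or iozone is the real way to do it; this is just a rough sketch and the mount point is a placeholder:

[code]
# Crude sequential write/read pass over the new array -- a stand-in for a proper
# burn-in with tools like bonnie++ or iozone, just to show the idea of stress
# testing a build before deployment. The test path is a placeholder.

import os
import time

TEST_FILE = "/mnt/array/stress_test.bin"   # hypothetical mount point of the array
CHUNK = 1024 * 1024                        # 1 MB per write
TOTAL_MB = 512                             # keep it small for the example

def timed_pass(mode):
    start = time.time()
    if mode == "write":
        f = open(TEST_FILE, "wb")
        for _ in range(TOTAL_MB):
            f.write(os.urandom(CHUNK))
        f.flush()
        os.fsync(f.fileno())               # make sure it actually hit the disks
        f.close()
    else:
        f = open(TEST_FILE, "rb")
        while f.read(CHUNK):
            pass                           # note: reads may be served from cache
        f.close()
    return TOTAL_MB / (time.time() - start)

print("write: %.1f MB/s" % timed_pass("write"))
print("read:  %.1f MB/s" % timed_pass("read"))
os.remove(TEST_FILE)
[/code]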
But this is my opinion of course; others may think differently, including yourself.
Hope it helps..