
Thread: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

  1. #33
    Registered User
    Join Date
    Sep 2007
    Posts
    5
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

    Hence, the term "bleeding-edge." It stands to reason that higher performance goes along with higher cost. If someone wishes to buy a Lamborghini but is going to complain about the cost, they should instead go shopping for a Chevy and stop whining about how expensive Lamborghinis are or what else should be included for $100,000. You get what you pay for. Besides, prices always eventually come down, and/or features increase. The question is: Do you want to stand by and watch, or get in the game and play?

    I know people who've paid over $1000 for a CPU. Just the chip, which is nothing without a motherboard, RAM, video card, etc. Whereas, there are CPUs available for $300. It's all about priorities.

    P.S. Thanks for posting the link. I don't yet have the "privilege" to include URLs in my posts. Just a couple more to go...
    Last edited by dijitul; 30-09-2007 at 09:15 AM.

  2. #34
    Senior Member
    Join Date
    Jul 2003
    Posts
    12,116
    Thanks
    906
    Thanked
    583 times in 408 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

I don't get why these drives are so expensive; 4 GB USB flash drives are ~£20 these days. Take away the USB interface, use a SATA300 one instead, stick a few together, and you're away...

  3. #35
    Registered User
    Join Date
    Sep 2007
    Posts
    5
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

    Quote Originally Posted by [GSV]Trig View Post
    I don't get why these drives are so expensive; 4 GB USB flash drives are ~£20 these days. Take away the USB interface, use a SATA300 one instead, stick a few together, and you're away...
    I hear you, but it turns out not to be that simple. The primary issues these days with flash drives (as operating system storage) have been speed and lifespan.

    Consumer flash memory does not read/write as fast as consumer magnetic media like hard drives (ignoring seek times for this comparison), so to increase throughput, manufacturers have to build wider buses that write more data in parallel. Reaching speeds equivalent to SCSI and SATA controllers therefore takes multiple flash channels working at once. Those small cards you pop into your camera are not really high-performance when you compare them to hard drives.

    Flash also has a limited number of write/erase cycles per cell, which modern high-end controllers mitigate through wear leveling: they spread writes evenly across all cells (since fragmentation isn't much of a factor anymore). Deleting a file doesn't mean that space will be immediately overwritten.
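The wear-leveling idea can be shown with a toy model. This is an illustration only, not a real flash translation layer (which is far more elaborate), and the class name is made up: each logical write is remapped to the least-worn physical block, so no single block wears out first.

```python
# Toy wear-leveling model (illustrative only; real flash
# translation layers are far more elaborate).

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # erase cycles per physical block
        self.mapping = {}                      # logical block -> physical block

    def write(self, logical_block):
        # remap the write onto the physical block with the fewest erases
        phys = min(range(len(self.erase_counts)),
                   key=lambda b: self.erase_counts[b])
        self.erase_counts[phys] += 1
        self.mapping[logical_block] = phys
        return phys

wl = WearLeveler(4)
for _ in range(8):
    wl.write(0)        # hammer the same logical block eight times
# wear ends up spread evenly: every physical block erased twice
```

Even though the host rewrote one logical block over and over, the controller rotated the writes across all four physical blocks.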

    So, we want fast but cheap flash? Consider the amount of storage, multiplied by the number of data channels required, plus the firmware for distributing the wear evenly, striping the data across the flash, and passing it over the interface to the CPU. It's no longer cheap. As an example, a performance PCIe 12-port SATA-II RAID controller connected to twelve SATA-to-Flash adapters, each with a 4 GB SanDisk Extreme IV CompactFlash card attached, would give you about 48 GB of storage at under 300 MB/s write speeds. Just how much would something like this cost? Once you figure it out, a 40 GB FusionIO drive for $1200 doesn't seem so expensive anymore.
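The arithmetic behind that example is straightforward. The per-card sustained write rate below is a hypothetical parameter, not a measured figure; roughly 25 MB/s per card would account for the "under 300 MB/s" number quoted above.

```python
# Back-of-the-envelope arithmetic for twelve striped SATA-to-CF
# adapters. Per-card write speed is an assumed value, chosen to
# match the post's "under 300 MB/s" aggregate figure.
cards = 12
capacity_gb_per_card = 4
write_mb_s_per_card = 25        # assumption, not a measured rate

total_gb = cards * capacity_gb_per_card        # 48 GB of storage
aggregate_mb_s = cards * write_mb_s_per_card   # ~300 MB/s striped writes
```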

    Plus, these technologies, like all technologies, are getting cheaper to make. They will quickly become affordable. Bleeding-edge is bleeding-edge. There will always be something bigger and badder. No pain, no gain.

  4. #36
    Registered+
    Join Date
    Jul 2006
    Posts
    21
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

    The complaints about the high cost of the HyperDrive 4 are
    not totally unfounded.

    Look at it this way: a Tier 1 motherboard company like
    ASUS is now charging $200 to $300 for motherboards
    with tons of sophisticated on-board technology,
    only a subset -- less than 50% -- of which deals with RAM and SATA
    interfaces.

    And, you can easily find large server motherboards
    with 8 DIMM slots, as standard equipment.

    Setting engineering costs aside -- which would not be
    all that much, given that more sophisticated motherboards
    have already been designed and mass-produced --
    why should a competitor to the HyperDrive
    NOT retail somewhere between $100 and $200 (w/o RAM)
    and still support 8 x DDR2 DIMM slots?

    Is there a patent issue at stake here, perhaps?

    And, as long as the connection is via a SATA bus,
    exploiting widely available on-board RAID controllers
    is a natural evolution of these i-RAM devices:
    more speed is not achieved with more slots per bay,
    but with more i-RAM devices running in parallel
    on multiple SATA cables, because the SATA bus
    is the real bottleneck with such a RAMDISK.

    Thus, almost everyone who knows anything about
    such technology now agrees that Gigabyte should
    upgrade the 5.25" i-RAM to support DDR2 DIMMs
    and a 300 MB/second interface (at least) --
    possibly add a jumper to upgrade the interface
    to 600 MB/second when SATA-III is available.

    Word on the street is that the 5.25" i-RAM
    Project Manager quit to take a job with another
    company.


    Sincerely yours,
    /s/ Paul Andrew Mitchell
    Webmaster, Supreme Law Library

  5. #37
    Registered User
    Join Date
    Sep 2007
    Posts
    5
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

    The complaint I had responded to was about the high prices of high-speed flash storage, not the RAM disk. The FusionIO drive falls into the category of flash because it uses a solid-state technology which doesn't require power to maintain storage.

    I certainly agree the HyperDrive should be cheaper (eventually), but it's not and there are few other alternatives at this moment. Sure, there will be down the road, which just affirms that you always pay more for bleeding edge. Some people pay over $3000 for a cell phone ($600 + monthly service for two years). Why? Because it makes such a slight improvement in their life -- and maybe they feel more cool.

    Motherboards are manufactured by the millions, along with the chipsets and components those boards use. And, although motherboards are cheap, you still must buy a processor, RAM, drives, cables, cases, accessories, etc. You don't get a fully functioning system out-of-the-box, and by the time you do it's hundreds of dollars later. So I feel the motherboard comparison is a bit off-target in this particular case (regarding high-speed flash). Nobody has a template design to follow when it comes to engineering a hardware ramdisk with all the features we desire.

    Let's turn this conversation around a bit: Why won't the *motherboard* manufacturers give us this option instead? Why can't we load a motherboard with 16 GB of RAM (or 64 GB of flash) and designate in the BIOS how much of it to use as a physical disk? Wouldn't THAT be the cheapest implementation? Motherboards have every capability except an onboard battery to keep the RAM powered. Since Gigabyte designs both motherboards AND RAM disks, why haven't they done this? Patents, maybe, or is it just lack of demand? RAM disks typically have a very specific purpose in life, whereas I can see the FusionIO disk actually having more flexibility, compatibility, and reliability. The objective here is to get RAM speed with hard-disk capacity. FusionIO claims to have achieved this, and at a reasonable cost (IMHO) if we consider ALL that it might do. Then again, it might just be vaporware like so many other ideas!

  6. #38
    Registered+
    Join Date
    Jul 2006
    Posts
    21
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

    > Let's turn this conversation around a bit:
    > Why won't the *motherboard* manufacturers give us this option instead?


    Good point: I do specifically recall asking Intel's Amber Huffman
    if Intel would consider adding a BIOS option to designate a region of
    RAM as a RAMDISK -- with a native device driver that emulates
    a SATA/3G HDD. She replied that they had decided to go instead
    with the flash disk cache concept aka "Robson Technology".

    Here was her written reply on 1/28/2007:

    "The trend in the industry is towards using NAND for caches to speed disk access time.
    The major advantage of NAND is that it is non-volatile so you don't increase risk of data loss
    when using it as a cache. Intel's efforts in this space are referred to as 'Robson' and
    you can find more info on Intel's website and via Google."


    I have also suggested to ASUS that they consider
    dedicating a portion of the extra large number of
    DIMM slots on large server motherboards
    to a RAMDISK, but again this suggestion was
    not met with any noticeable enthusiasm.

    I should also clarify that I made my suggestion
    to Intel's Amber Huffman BEFORE I discovered
    the RamDisk Plus product from superspeed dot com :
    this product works great, because it saves and restores
    the contents of each RAMDISK between shutdowns and
    startups without BIOS changes.

    We configured a 512MB RAMDISK with that software and
    moved the IE7 browser cache to that partition,
    with MUCH SUCCESS!
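    The save/restore behaviour described above can be sketched in a few lines. This is an illustration of the idea only, not SuperSpeed's actual implementation, and the backing file name is made up: the volatile RAM disk image is flushed to persistent storage at shutdown and reloaded at the next startup.

```python
# Sketch of a persistent RAM disk: flush to a backing file at
# shutdown, reload at startup. (Illustrative only; the file
# name is a made-up placeholder.)
import os

BACKING_FILE = "ramdisk.img"   # hypothetical backing store

def shutdown(ramdisk: bytearray):
    with open(BACKING_FILE, "wb") as f:
        f.write(ramdisk)       # persist contents before power-off

def startup(size: int) -> bytearray:
    if os.path.exists(BACKING_FILE):
        with open(BACKING_FILE, "rb") as f:
            return bytearray(f.read())   # restore previous contents
    return bytearray(size)               # first boot: empty disk

disk = bytearray(1024)
disk[0:5] = b"cache"           # e.g. the relocated browser cache
shutdown(disk)
restored = startup(1024)       # contents survive the "reboot"
```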

    But, RamDisk Plus would NOT be suitable for loading
    Windows XP system software into such a partition;
    that partition must be enabled by system software
    that is launched after POST is completed and
    thereafter emulates a Windows letter drive.

    More to your point above, if standard PCI slots have
    a 5-Volt "stand by" ("SB") pin, there is no reason
    why a subset of DIMM slots could not be
    powered in the same manner, in order to
    prevent the loss of otherwise volatile data.

    There is also a recent IBM patent which situates
    RAM in an external case, with a ribbon-style
    cable that plugs into the motherboard's main
    DIMM slots.

    My motherboard analogy is consistent with the
    fact that all modern PCI-E motherboards have
    native support for Serial ATA hard disks,
    and market leaders now have native support
    for SATA/3G hard drives too. After stripping away
    all of the other added features on modern
    motherboards, we would be left with something
    very close to the HyperDrive4 developed by
    a firm in the UK.


    Thinking out loud for a minute, here is a
    concept which I think is worth exploring:

    (1) begin with the assumption that all
    system software is capable of using
    64-bit addressing (the obvious future
    e.g. XP x64);

    (2) populate a motherboard with a
    very large number of 2GB DIMM slots,
    allowing perhaps 32GB of physical RAM
    to be addressed in linear fashion,
    in anticipation of 4GB modules in the
    not-too-distant future (total of 64GB);

    (3) the boot procedure reads a config file
    from an SSD or DVD and literally formats a subset of RAM
    "on the fly" -- to host the C: system partition --
    beginning at physical address zero; drive image
    software like GHOST does this, BUT it writes
    that image file to a hard drive partition now;

    (4) then, the remaining RAM is made
    available to the OS kernel and
    "ring 0" OS database, as usual,
    allowing the lower RAM subset
    to operate exactly the same as if
    it were an ultra-fast C: partition;

    (5) for this concept to work best,
    registered error-checking RAM would
    be preferred, I would predict;

    (6) as long as the motherboard remains
    powered UP, the contents of the C: RAMDISK subset
    would be preserved;

    (7) in the event of any 2-bit errors
    which could NOT be corrected by the
    ECC logic built into the DIMMs,
    a corrupted code page could be
    "paged in" from a backup stored
    on something like the SSD or DVD;

    (8) if the motherboard is totally powered DOWN,
    it returns to the state at which it started
    immediately before the last startup.
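    Steps (2) through (4) above can be modelled in miniature. This is a toy sketch of the memory split only; the sizes and the image contents are stand-ins, not real values.

```python
# Toy model of steps (2)-(4): carve the lower region of physical
# RAM out as the C: "partition", restore a saved image into it at
# address zero, and hand the remainder to the OS kernel.
RAM_SIZE = 64 * 1024            # stand-in for 64 GB of physical RAM
C_PARTITION_SIZE = 16 * 1024    # lower subset hosting the C: RAMDISK

ram = bytearray(RAM_SIZE)

def boot(image: bytes):
    assert len(image) <= C_PARTITION_SIZE
    ram[0:len(image)] = image                       # step (3): restore at address zero
    os_region = memoryview(ram)[C_PARTITION_SIZE:]  # step (4): the rest goes to the kernel
    return os_region

os_mem = boot(b"NTFS-image-contents")   # placeholder for the drive image
```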

    I realize that I am jumping right in at a running OS,
    and I am assuming that there would be a
    very special, one-time "Setup" procedure
    to get all these data files initialized e.g. SSD or DVD.


    Perhaps recent virtualization logic could be exploited
    effectively to "hide" that lower C: RAM subset
    from the running version of the OS, thus "tricking"
    the latter into treating that lower RAM subset
    as a standard C: system partition.

    A hardware-enforced "ring" system might be
    applicable to such a design also.


    Just thinking out loud here.


    Sincerely yours,
    /s/ Paul Andrew Mitchell
    Webmaster, Supreme Law Library

  7. #39
    Registered+
    Join Date
    Jul 2006
    Posts
    21
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: HEXUS.reviews :: GIGABYTE GC-RAMDISK i-RAM

    Here's what I just sent off to Intel's Amber Huffman
    and David Ray at ASUS:

    [This is where a BIOS option would be most useful
    i.e. to format the lower RAM subset for the C: system partition;
    once formatted, this boot process would proceed
    to load Windows system software from something like a
    special image file and write it into that C: partition.
    Once that task is finished, from that point forward
    the boot process runs normally to completion.]


    What do you think of this sequence?

    (A) run XP x64 Setup normally to completion
    with a stable set of system software i.e.
    loading XP onto a hard drive partitioned
    with drive letter C:;

    (B) run Symantec GHOST and save a drive image
    file of C: to a DVD or existing hard drive partition e.g. D:;

    (C) enhance the BIOS to perform 2 special functions,
    which are analogous to the FLASH BIOS functions
    now available on recent ASUS motherboards
    e.g. EZ FLASH 2:

    (i) format a user-defined subset of RAM
    as an NTFS partition with drive letter C:,
    beginning at physical memory address zero;

    (ii) then, restore the drive image file to that C: partition;

    (D) as soon as that restore task is finished,
    save changes and exit the BIOS normally,
    permitting the boot-up sequence to run to completion.


    There are lots of implementation details to be addressed here,
    one of the most important of which is that we must implement
    the entire OS code and OS database so both are "relocatable", because
    we start to load XP at the next address AFTER the last
    memory address assigned to the C: system partition
    in this scheme INSTEAD OF starting to load XP at memory address zero.
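    The relocation requirement amounts to a fixed offset on every address. A minimal sketch, with a made-up symbol table and a hypothetical 1 GB C: region:

```python
# The relocation constraint in one line: if the C: RAMDISK occupies
# physical addresses [0, C_SIZE), the OS image must be fixed up to
# run from base C_SIZE instead of address zero.
C_SIZE = 0x4000_0000            # hypothetical 1 GB C: region

def relocate(symbol_addresses, base):
    # add the load base to every address in the (made-up) symbol table
    return {name: addr + base for name, addr in symbol_addresses.items()}

symbols = {"kernel_entry": 0x1000, "idt": 0x2000}
fixed = relocate(symbols, C_SIZE)   # kernel now starts above the RAMDISK
```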

    But, isn't that what virtualization is doing automatically already?


    Sincerely yours,
    /s/ Paul Andrew Mitchell
    Webmaster, Supreme Law Library

