
The SSD thread: info about and discussion around SSDs




NEWS:

[wall of text crits you for 9999dmg]

(note: links to the original articles, which are easier to read, appear below each quote)

 

Editor:- March 11, 2010 - whenever I'm asked -"What do you do for a living?" - the cutest answer I come up with is - "I waste my time so my readers don't have to waste theirs."

 

Few things are so time-wasting on the web - in my opinion - as videos which talk about the SSD market. In 99.9% of cases the same points have already been made - earlier, better, and about 30x quicker on static webpages.

 

It's several years since a reader asked - "Why isn't there a directory of SSD videos on StorageSearch.com?"

 

Well - there is now. I datamined and filtered it from the tens of thousands of hours I've spent reading and writing about SSDs. It's the smallest list of links in any directory page I've created since the web started - and at this rate of progress will struggle to reach double digits by the time the SSD market ends. Less is better - when it comes to wasting your time. ...read the article [follows below]

http://www.storagesearch.com/ssd-videos.html

 

This SSD videos page is the smallest collection of links you'll see on a directory page here on StorageSearch.com.

 

 

I avoid infomercials for the same reason I avoid pdfs. They waste my time. Typically a 10 minute video contains no more new data than I can assimilate by looking at a static web page or email for about 10 seconds. This page lists a handful of SSD related videos that (for various reasons) I did not ignore.

They're not going to win any Oscars but might rate mentions in the SSD market acceptance speeches.

 

 

HDD versus SSD boot times and performance video ... Samsung was the 1st SSD oem to use a video clip to promote the advantages of SSDs compared to HDDs in notebooks (in 2006). Although that original video has disappeared - this one follows the same theme. Samsung's marketers later produced a whole series of SSD videos.

 

 

Google's video shows what happens when an HDD PC boots up - and why their SSD based OS will be better ... "You can make a sandwich in the time it takes a hard disk based notebook to power up" - that's why Google decided its Chrome OS would be SSD based. In the opening video of the Chrome OS blog (November 2009) we learned that the architects of the new OS were "obsessed with speed". The video says - there is no room in this OS for outmoded 50 year old hard disk technology.

 

 

Fusion-io SSD serves 1024 video streams at a tech show ... This demo shows how a single PCIe SSD card made by Fusion-io can serve 1,024 simultaneous full resolution digital video streams from a single box. I looked at the technical feasibility of broadcast on demand servers in the late 1980s when working for a company which had both military and broadcast customers. So I'm very impressed - not by the content of videos on the web - but by the fact that the underlying technology is now so affordable.

 

 

ioSafe demos its disaster proof backup SSD on BBC news ... In January 2010 - ioSafe launched its disaster-proof backup SSD. This clip - taken from BBC tv news - showed the kind of stresses that the new drive could survive without needing any data recovery.

To watch the 4 videos described (recommended for a bit of perspective), follow this link and click on the images.

 

reaching for the petabyte SSD

 

Editor:- March 16, 2010 - previewing the final chapters in the long running SSD vs HDD wars - StorageSearch.com today published an industry changing new article - SSDs - reaching for the Petabyte.

 

What will the PB SSD look like? When will it appear? What technology problems do SSD designers have to solve to get there? What about the storage architecture that the PB SSD fits into? How much electrical power will it consume? And... you may be curious - how much will it cost?

 

All these questions and more - are discussed and answered in this article which - I anticipate - will inspire product managers and company founders to create completely new types of SSDs. ...read the article [follows below]

http://www.storagesearch.com/ssd-petabyte.html

 

SSDs - reaching for the petabyte

In which the author explains why he thinks users will replace their hard drives with SSDs in the last bastions of

the datacenter (the cost sensitive backup archive) even if hard drives are given away free...

And publishes the business plan for a new industry in the SSD market.

Now's a good time to get that coffee (and the headache pills).

 

by Zsolt Kerekes, editor StorageSearch.com - published March 16, 2010


 

This article maps my vision of the steps needed for the storage market to deliver affordable 2U rackmount SSDs with a PB capacity (1,000 TB) in the soonest possible time using evolutionary steps in chip technology but calling for a revolutionary change in storage architecture.

 

This can be read on its own - or viewed as a sequel to 2 preceding articles:-

 

* SSD market adoption model (2005 edition) - in which I described why users would buy SSDs. This analyzed the user value propositions for the 4 main markets in which SSDs would be adopted in the decade following publication.

 

* How solid is hard disk's future? (2007) - in which I explained that the growth of the SSD market wouldn't result in any sizable reductions in overall hard disk market revenue till about 2011 - because the HDD market was itself gaining revenue from new consumer markets such as video recorders almost as fast as it was losing revenue to SSDs in notebooks and embedded markets.

 

This new article - SSDs - reaching for the Petabyte - previews the final chapter in the SSD vs HDD wars: the elimination of hard drives from what is currently (2010) their strongest bastion - the bulk data storage market in the data center. That's been a cost sensitive market in which the cost per terabyte arguments - which I proved were irrelevant in the other parts of the SSD market penetration model - must have felt like a protective shield to hard disk makers. This article will show how the last remaining obstacles in the SSD domination roadmap will be removed - and it will describe the user value propositions whereby that transition takes place irrespective of whether SSDs intersect with the magnetic cost per terabyte curve.


This is a much more complicated market to model than you might guess (even for an experienced SSD analyst like myself) for these reasons:-

 

* MLC flash SSDs (the best looking horses in the theoretical race to achieve HDD and SSD price parity) will not play a significant part in dislodging hard drives from their use in archives for reasons explained later in this article connected with data integrity.

 

* New emerging applications and market conditions (in the 2011 to 2019 decade) will put much greater stress on archived data. These new search-engine centric applications will accelerate the growth of data capacity (by creating automatically generated data flow patterns akin to anti-virus string matching - which will speed up the anticipation and delivery of appropriate customer data services). At the same time these new apps will demand greater IOPS access into regions of storage which have hitherto been regarded as offline or nearline data. Those factors would - on their own - simply increase storage costs beyond a level which most enterprises can afford.

 

* The ability to leverage the data harvest will create new added value opportunities in the biggest data use markets - which means that backup will no longer be seen as an overhead cost. Instead archived data will be seen as a potential money making resource or profit center. This follows the Google experience - that analyzing more data makes the product derived from that data even better. So more data is good rather than bad. (Even if it's expensive.)

 

* Physical space and electrical power will continue playing their part as pressure constraining corsets on these systems. That's why new SSD architectures will be needed - because current designs are incapable of being scaled into these tight spaces.

 

Business plan for a new SSD market

 

In some ways I feel like any product manager launching any new product - nervous and hopeful that it will get a good reception. The product in this case - includes the idea of an entirely new class of SSDs.

 

By sharing this vision I'm throwing down the gauntlet to product managers in the SSD, hard disk, D2d and tape library markets and saying - "Look guys - you can do this. All the technology steps are incremental from where we are today... just start working on your aspect of the jigsaw puzzle. Because if you don't figure out which place you want to occupy in this new market (circa 2016) - then you will be unprepared for the market when it arrives - and won't have a business."

 

Because the audience for this article is technologists, product managers, senior management in storage companies and (as ever) founders of storage start ups... I'm not going to clutter up this article with things you already should know - or can easily find out - by reading any of the hundreds of other SSD related articles I've already published here in the past 11 years. Instead I'm going for a leaner style.

 

 

The 5 propositions discussed in this article are:-

 

* what kind of animal will the PB SSD be?

 

* who's going to buy it - and why?

 

* where will it fit in the datacenter storage architecture?

 

* what are the technical problems which need to be solved?

 

* looking back at the last bastions of magnetic storage

 

The easiest way to demonstrate the first 2 points is to roll forwards in time to the fictional launch press release for this type of product.

market reactions?

 

As ever - I'm expecting a lively dialog with my regular correspondents in the SSD market.

 

Following publication I'll include extracts and ideas from the most useful comments here in this column, followed in the coming years by a list of market milestones.

 

This is similar to what I've done with other classic articles in the past.

Stealth mode startup wakes petabyte SSD appliance market

 

Editor:- October 17, 2016 - Exabyte SSD Appliance emerged from stealth mode and today announced a $400 million series C funding round and immediate availability of its new Paranoid S3B series - a 2U entry level Solid State Backup appliance with 1PB (uncompressed) capacity.

 

Sustained sequential R/W speeds are 12GB/s, random performance is 400K IOPS (MB blocks). Latency is 10 microseconds (for accesses to awake blocks) and 20 milliseconds (for data accesses to blocks in sleep mode).

 

The scalable system can deliver 20PB of uncompressed (and RAID protected) nearline storage in a 40U cabinet - which can be realistically compressed to emulate 100PB of rotating hard disk storage using less than 5kW of electric power.
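A quick sanity check of those claimed cabinet figures (a sketch only - the capacities and power draw are the article's fictional numbers, not real product specs):

```python
# Sanity-check the fictional Paranoid S3B cabinet figures quoted above.
raw_capacity_pb = 20        # uncompressed, RAID-protected capacity per 40U cabinet
emulated_capacity_pb = 100  # claimed HDD-equivalent capacity after compression
power_kw = 5                # claimed electrical power per cabinet

watts_per_raw_tb = power_kw * 1000 / (raw_capacity_pb * 1000)
watts_per_emulated_tb = power_kw * 1000 / (emulated_capacity_pb * 1000)

print(f"{watts_per_raw_tb:.2f} W per raw TB")        # 0.25 W/TB
print(f"{watts_per_emulated_tb:.2f} W per emulated TB")  # 0.05 W/TB
```

Fractions of a watt per terabyte is the point of the whole sleep-mode architecture described later in the article.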

 

Preconfigured personality modules include:- VTL / RAID emulation (Fujitsu, HP and IBM), wire-speed dedupe, wire-speed compression / decompression and customer specific encryption. Exabyte SSD also offers fast-purge as an additional cost option for Federal customers or enterprises, like banks, whose data may be at higher risk from terrorist attacks. Pricing starts from $100,000 for a single PB unit with 4x Infiniband / FC or 2x Fusion-io compatible SSD ports. The company is seeking partnerships with data migration service companies.

 

Editor's comments:- the "holy grail" for SSD bulk archives is to be able to replicate and replenish the entire enterprise data set daily - while also coping with the 24x7 demands of ediscovery, satellite office data recovery, datacenter server rebuilds and the marketing department's heavy loads (arising from the new generation of Google API inspired CRM data population software toys.) The Paranoid S3B hasn't quite achieved that lofty goal - with the current level of quoted performance (because in my opinion the proportion of "static data" - mentioned in the full text of the press release is much higher than is found in most corporations). Despite those misgivings the Paranoid S3B is the closest thing in the market to the idealized SSD bulk archive library as set out in my 2012 article.

 

Internally the Paranoid uses the recently announced 50TB SiliconLibrary (physically fat but architecturally skinny) SLC flash SSDs from WD - instead of the faster (but lower capacity) 2.5" so called "bulk archive" SSDs marketed by competing vendors. In reality many of those wannabe archive SSDs are simply remarketed consumer video SSDs.

 

Exabyte SSD's president Serge Akhmatova told me - "...Sure you might use some of those other solutions on the market today if you only need to buy a few boxes and can fit all your data in a handful of Petabytes. Good luck to you. That's not our market. We're going for the customers who need to buy hundreds of boxes. Where are customers going to find the rackspace if they're using those old style, always-on SSDs? And let's not forget the electrical power. Our systems take 50x less electrical power - and are truly scalable to exabyte libraries. When you look at the reliability of the always-on SSDs it reminds me about the bad old days of the hard disk drives - when you had to change all the disks every few years."

 

The recently formed SSD Library Alliance is working on standards related to this class of SSD products - and will publish its own guidelines next year. I asked Exabyte SSD's president - was he worried that Google might launch its own similar product - because they were likely to be the biggest worldwide user for this type of system.

 

"I can't speculate on what Google might do in the future" said Serge Akhmatova "we signed NDAs with our beta customers. But it does say in our press release that the new boxes are 100% compatible with Google APIs. We worked very closely together to make absolutely sure it works perfectly. You draw your own conclusions."

 

Editor (again):- students of SSD market history may recall that one of the early pioneers in the SSD dedupe appliance market was WhipTail Technologies (who launched their 1st product in February 2009). The company, who recently announced...

 

 

In an earlier article - Why I Tire of "Tier Zero Storage" - I explained why I think that numbered storage tiers - as applied to SSDs - are a ridiculous idea - and I still hold to the view that SSD tiers are relativistic (to the application) rather than absolute.

 

But where does the Paranoid PB SSD appliance fit in?

 

In my view there are 3 distinct tiers in the SSD datacenter.

 

* acceleration SSD - close to the application server - as a DAS connection in the same or adjacent racks (via PCIe, SATA, SAS etc.)

 

* auxiliary acceleration SSD - on the SAN / NAS.

 

* bulk storage archive SSD - whose primary purpose is affordable bulk storage which is accessible mainly to SAN / NAS - but in some data farms - may connect directly to the fastest servers.

 

 

 

technical problems which need to be solved

 

...introducing a new species of storage device - the bulk archive SSD

 

This is a very strange storage animal which - although it internally uses nv memory such as flash and externally looks like a fat 2.5" SSD - is as alien architecturally to a conventional notebook or server SSD as a tape library is to a hard drive. The differences come from the need to manage data accesses in a way which optimizes power use (and avoids the SSD melting) rather than optimizing the performance of data accesses.

 

The requirements of the controllers for SSDs in bulk store applications differ from those in today's SSDs (2010) in these important respects:-

 

* optimization of electric power - the need to power manage memory blocks within the SSD so that at any point in time 98% are in sleep mode (in the powered off state). I'm assuming that the controller itself is always in the mostly awake state.

 

* architecture - internally each SSD controller is managing perhaps 20 to 50 independently power sequenceable SSDs. In this respect the 2.5" SSD architecture resembles some aspects of a mini auto MAID system.

 

* endurance - data writes to the library chips are nearly always in large sequential blocks (because it's a bulk storage appliance) therefore write amplification effects are a lesser concern than with conventional SSDs. Also the main memory is SLC - not MLC - due to the need for data integrity. That's partly because the thousands of power cycles which occur during the life of the product - which can be triggered by reads (not just by writes) - would lead to too many disturb errors - and also because the logic error bands in MLC thresholds are too small to cope with the electrical noise in these systems.

* power up / power down - for the controller this is a different environment than a notebook or conventional server acceleration SSD. It lives in a datacenter rack with a short term battery hold-up and in no other type of location - ever. The power cycling must be optimized to reduce the time taken for the 1st data accesses from the sleep state.

 

* performance - R/W throughput and IOPS are secondary considerations for this type of device and likely to lag 2 to 3 years behind the best specs seen for other types of rackmount SSDs. In the awake state sequential R/W throughput - for large data blocks - has to be compatible with rewriting the whole SSD memory space in approx 24 hours or less.
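The "rewrite the whole memory space in approx 24 hours" requirement pins down a minimum sequential throughput. A rough calculation (a sketch assuming decimal units, i.e. 1 PB = 10^15 bytes, and the capacities mentioned in this article):

```python
# Minimum sustained sequential throughput needed to rewrite a full
# device in a given number of hours (decimal units).
def min_rewrite_throughput_gbps(capacity_bytes, hours=24):
    return capacity_bytes / (hours * 3600) / 1e9  # GB/s

print(f"{min_rewrite_throughput_gbps(1e15):.1f} GB/s for a 1 PB appliance")     # ~11.6 GB/s
print(f"{min_rewrite_throughput_gbps(50e12) * 1000:.0f} MB/s for a 50 TB drive")  # ~579 MB/s
```

Note that ~11.6 GB/s for a 1 PB box is consistent with the 12 GB/s sequential figure quoted in the fictional 2016 press release earlier in the article.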

 

The design of the archive SSD presents many difficult challenges for designers and I'm not going to understate them. But in my view they are solvable - given the right economic market for this type of product and overall feature benefits. Here are some things to think about.

 

* refresh cycle - you know about refresh cycles in DRAMs - so why would flash need one? The answer is that seldom accessed data inside the SiliconLibrary could spend years in the unpowered (or powered but static data) state. That would be a bad thing - because the data retention of the memory block can decline in certain conditions, increasing the risk of data loss. So to guarantee integrity, the SSD runs a house-keeping task which ensures that ALL memory blocks in the SSD are powered up and refreshed at regular intervals - maybe once every 3 months, for example. If you're familiar with tape library management - think of it as "spooling the tape."

 

* surges and ground bounce in the SSD. If uncontrolled - these could be a real threat to data integrity. That's one of the reasons why the wake access time has been specified as a number like 20 milliseconds - instead of an arbitrary number like 200 microseconds. Soft starting the power up (by current limiting and shaping the slope) will reduce noise spikes in adjacent powered SSDs and also minimize disturb errors.

 

* awake duty cycle. I haven't said much about the nature of the duty cycle for the powered up (awake) state for memory blocks. To achieve good power efficiency I've assumed that in the long term this will be just a few per cent of the time. But how will it look on any given day? That depends on the HSM scheme used in the SSD library. My working assumption is that once an SSD block is woken - it stays in use for a period ranging from seconds to minutes. The controller which woke it needs to have a way of anticipating this. As the SSD library is the slowest tier in the SSD storage world - it would be reasonable to assume that the device which originated the request (an online SSD) can do some buffering and pack or unpack data requests into multiple GB chunks. Something for the "hello world!" members of the design team to think about.
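To illustrate why the few-percent awake duty cycle matters, here is a toy average-power model (all the per-block wattages below are made-up illustrative values, not measurements of any real device):

```python
# Toy average-power model for a bulk archive SSD whose internal memory
# blocks sleep most of the time. Wattages are hypothetical illustration values.
def avg_power_w(n_blocks, awake_fraction, p_awake_w, p_sleep_w, p_controller_w):
    awake = n_blocks * awake_fraction * p_awake_w
    asleep = n_blocks * (1 - awake_fraction) * p_sleep_w
    return p_controller_w + awake + asleep

# 50 independently power-sequenceable blocks, as suggested in the article
always_on = avg_power_w(50, 1.00, 0.5, 0.0, 1.0)   # conventional always-on design
archive   = avg_power_w(50, 0.02, 0.5, 0.01, 1.0)  # 2% awake on average
print(f"always-on: {always_on:.1f} W, archive: {archive:.2f} W")
```

Even with these arbitrary numbers the model shows an order-of-magnitude power saving; the actual ratio depends entirely on the real sleep-state and controller power draws.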

 

 

looking back at the last bastions of magnetic storage

 

I published my long term projections for HDD revenue in an earlier article - Storage Market Outlook 2010 to 2015.

 

The hard disk market doesn't have to worry about an imminent threat from bulk storage SSDs for many years.

 

Instead I think the concept opens the door to a fresh opportunity for companies in the storage market to re-evaluate themselves.

 

If they believe the PB SSD will be coming (and the details may be different to the way I have suggested) where will their own companies fit in?

 

I've started to discuss this idea with software companies too. Because the new concept opens up entirely new ways of thinking about backed up data. What is it for? What can you do with it? How can you grow new business tools from content which up to now was just out of sight and out of mind?

 

There are many exciting challenges for the market ahead.

 

(And many mistakes too in the initial draft of this article - which I'll pick up and deal with later.)

 

Thanks for taking the time to read this. If you like it please let other people know.

Key ideas to take away from this article

 

* More data is better - not worse. Data volumes will expand due to new intelligence driven apps. But the data archive will be seen as a profit center - instead of a cost overhead.

 

* the "SSD revolution" didn't end in 2007. It will not stop soon - and instead will factionalize into SSD civil wars. Some of these will overlap - but many won't.

 

* True archive SSDs using switched power management may be able to pay for themselves by saving on electrical costs, disk replacement and datacenter space - even if the competing hard drives are free. But it will be impossible for hard drives to deliver the application performance needed in the petabyte ediscovery and Google API environment anyway.

 

* The new type of PB SSDs will boost the demand for SLC flash - because it's impossible for MLC to provide adequate data integrity in the power cycled environment. This may give a new lease of life to old-style industrial SSD makers.

 

* There will be no room in the datacenter for rotating storage of any type. It will be 100% SSD - with just 3 types of distinct SSD products.

 

Footnotes about the companies mentioned in this article

 

Some of the companies mentioned in the "fictional" part of this article - to illustrate the 2016 press release - are real companies. These are the reasons why I chose the companies whose names I used to illustrate certain concepts. And I hope they won't be offended.

 

* WD Solid State Storage (SiliconLibrary) - WD's SSD business unit has been intensively testing SSD data integrity by running individual SSDs through thousands of power cycles - since 2005 - as part of verifying its PowerArmor data protection architecture. These are the longest running such test programs I know - and they have already harvested data from millions of hours of device tests (by 2010).

 

Understanding what happens to MLC SSDs when subjected to these stresses is a key factor in the confidence to design bulk storage SSDs - in which each memory block may undergo upwards of 5,000 power cycles in its operating life.

 

Although many other industrial flash SSD companies have experience in this area - WD also has experience with designing hard disks which have good power performance in sleep mode. These features have not been widely deployed in MAID systems because of the long wake access time. Will the 20 ms wake time - which I've proposed for the archive SSD - prevent its acceptance? We will see.

 

* Fusion-io (SSD port) - although SSDs currently use standard interfaces within and between racks - I speculate that in some markets there will be a cost / performance advantage to creating a new proprietary interface to "get the job done." I'm proposing that Fusion-io - already the best known brand in the PCIe SSD market (2010) - is a likely company to adapt its products for new markets when the need arises - and create a new de facto industry standard for inter-rack SSD ports.

 

* Google - It's not unreasonable to expect that in the 6 years following publication of this article Google (who already markets search appliances, is the #1 search company, and is working on an SSD based OS for notebooks) will be setting many key standards for the manipulation of large data sets within the enterprise. Google APIs will be as important to CIOs in the future as Oracle and other SQL compatible databases have been in the past as tools which support data driven businesses.

To read the article in its clearer original format, with links, follow this link.

 

 

Pliant's SSD benchmark video

 

Editor:- March 15, 2010 - Pliant Technology today published benchmark results to illustrate the capability of its 3.5" SAS SSDs when used in arrays.

 

The measurements, performed and validated by OakGate Technology, were run on an array of 16 SSDs and are summarized in a video.

 

"We tested Lightning EFDs under conditions that closely mirrored the data throughput demands of today's mission-critical data centers..." said Bob Weisickle, CEO and founder of OakGate. "..even more impressive was the fact that these phenomenal performance numbers remained stable and consistent over time, which is a critical requirement for today's mission-critical 24x7 data centers."

 

Editor's comments:- when (like me) you're used to seeing SSD IOPS that look like telephone numbers, and throughput figures that have a lot of GB/s in them, you have to ask yourself - what is this vendor really saying?

 

I think the point Pliant is making is that if you are an oem who wants to design a rackmount flash SSD which has the performance potential of a proprietary architecture such as Texas Memory Systems, or an array of PCIe SSDs such as Fusion-io, but you want to stay in the comfort zone of SAS SSDs while avoiding the "EMC use it so it must be expensive" feel associated with STEC - please take another look at their products. The tag line on their home page says "Do more for less." (I've seen worse.) I've seen better SSD videos though. It was another 6 minutes of my life wasted (compared to reading the text).

I found the video interesting enough, and it shows the following numbers. 16x 3.5" SAS SSDs, JBOD:

8.4 GB/s bandwidth @ 100% random, 50% read, 64 KB, QD 64 (x16 = 1024).

2M+ IOPS @ 100% random, 100% read, 512 B, QD 64 (x16)

860K IOPS read, 215K IOPS write, 1,075K IOPS total @ 100% random, 80% read, 4 KB, QD 64 (x16)
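Dividing those array totals by the 16 drives gives rough per-SSD figures (a sketch only - it assumes the load was spread evenly across the JBOD, which the video does not confirm):

```python
# Per-drive averages from the 16-drive JBOD totals quoted above.
drives = 16
totals = {
    "bandwidth_gbps (64 KB, 50% read)": 8.4,
    "read_iops (512 B)": 2_000_000,
    "mixed_iops (4 KB, 80% read)": 1_075_000,
}
for name, total in totals.items():
    print(f"{name}: {total / drives:g} per drive")
```

So roughly 0.5 GB/s and over 100K small-block IOPS per drive, if the scaling really is linear.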

Brilliant name for the guy in the video, by the way :p

Edited by GullLars

Considering buying an SSD.
Had initially planned to stay in the 1500-2000 NOK range.

It is, however, easy to fall in love with the great numbers from the Crucial RealSSD C300. (4K)

The Intel X25-M G2 80GB is warmly recommended everywhere here, but I don't think it's an easy choice.

I've also seriously considered going for something cheap. That way I usually don't feel as guilty about it a year later =P
Kingston SSDNow V Series V GEN 2 128GB http://www.kingston.com/ukroot/ssd/v_series.asp?id=2
at 1790 NOK. But it lists considerably lower 4K read/write than the other two mentioned here.

Does Intel have no plans for anything new until the end of the year?

Edited by Zeph

It's probably still the Intel X25-M that gives the most for the money under 2000 NOK. Or alternatively 2x X25-V in RAID 0.

Considering the price has come down to 1600 NOK, that's a lot of performance for the money. I've read rumors that an 80 GB version of the X25-V is coming, but it will hardly be revolutionary.


I don't know whether it's one controller or several. All we know is that the configuration was JBOD, which in practice can mean several controllers in pass-through. They also say the SSDs were placed in the server of the test company run by the guy in the video, so the server may contain several HBAs.

By the way, did you get my PM about SSD Wave? (sorry for nagging :p)


No, it doesn't perform similarly at all. If you read what people have written to you on the previous page, this comment thread for an article about that SSD: https://www.diskusjon.no/index.php?showtopic=1212194&st=20, and look at what you've written yourself, you'll see that it's a bad idea.

This one is overpriced to meet "security requirements" that are unreasonable for ordinary users. The security of Intel's SSDs should be more than good enough for a home user. And why do you absolutely have to buy from Komplett? Several other web shops have it in stock.

https://prisguiden.no/produkt/112606

Amentio and Netshop have them in stock at 1650 NOK.


It's important not to get hung up on the wrong numbers. Theoretical maximum sequential write speed matters less when you're looking for an SSD. This is a typical "trap" most people fall into. Take WD's SSD, for example: it has poor small-file write performance. Small files are in most cases what matters most - for example when you install programs and so on. You should instead look at what the SSD delivers in IOPS. This is where Intel comes into the picture: Intel delivers good IOPS numbers. If I'm not mistaken, this is WD's first SSD, and it has received lukewarm feedback in the reviews I've read.

Prisguiden says they have it in stock. When I go to their pages, they don't. WD does seem to have produced something of quality - it has 170 MB/s write, after all. And what it's supposedly better at is 4K random IOPS, something I don't even understand the use of.

Just because you don't understand the use doesn't mean it's useless. It only means you don't understand how it's useful. I'll try to explain a bit.

 

I like to explain response time with a shower example. Imagine stepping into the shower and turning on the water. You will certainly care whether it takes a minute for the water to get warm instead of one second. That is response time. A completely different quantity is bandwidth: water flow in liters per minute. Response time and bandwidth are two different things. In practice you stare blindly at the number of liters per minute of cold water while you wait and wait for the warm water. My point is that response time matters, not just bandwidth. Remember not to confuse the two.

 

So, back to the 4K IOPS number. All files on the PC are divided into blocks of 4 kilobytes. A block cannot be subdivided. If you store a 5 kB file, it will therefore occupy two such blocks on the drive, and thus 8 kB. An operating system has loads of files under 4 kB (in practice one 4 kB block each). Every time a program starts, a pile of such files must be read. That is why exactly these files matter for the overall experience: perceived performance. IOPS is short for input/output operations per second - that is, the number of accesses to or from the storage medium per second. This is limited by the response time, not the bandwidth. (Concentrate now, and don't mix these up.) A hard disk has such a long response time (~12 ms) that it only manages up to about 200 accesses in or out per second, no matter how small they are. When working with the OS or programs, the accesses are often 4 kB. That means up to 4 × 200 = about 800 kB/s. In other words a very different number from the one you looked at (170 MB/s). The reason is that response time, not bandwidth, limits performance. Response time is thus an enormous bottleneck when opening and closing programs. That's why you should look at response time and IOPS instead of bandwidth.
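The arithmetic above can be sketched directly (the 200 and 20,000 IOPS figures are the same ballpark numbers used in the explanation, not measurements of any specific drive):

```python
# Effective small-file throughput = block size x IOPS.
def throughput_kb_s(block_kb, iops):
    return block_kb * iops

hdd = throughput_kb_s(4, 200)     # ~800 KB/s for a hard disk on 4 kB random accesses
ssd = throughput_kb_s(4, 20_000)  # ~80,000 KB/s (~80 MB/s) for a modestly priced SSD
print(f"HDD: {hdd} KB/s, SSD: {ssd} KB/s, ratio: {ssd // hdd}x")
```

Which is exactly why a 170 MB/s sequential figure says almost nothing about how fast programs will open.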

 

So how well does an SSD perform, then? You've now seen a typical IOPS number for hard disks (200), and this is where SSDs have their great strength. They deliver several thousand IOPS. In fact, many of the affordably priced ones deliver well over 20,000 IOPS. That is 100 times better than hard disks. If that comparison doesn't impress you, I don't know what will.

 

There are admittedly several factors that explain SSD performance, but the IOPS number is the main reason SSDs perform so insanely much better than ordinary hard disks for the OS, programs, and small files.


What's the deal with Intel's controller drivers these days?

I thought Intel Rapid Storage was supposed to take over from Intel Matrix - i.e., that they just changed the name, and from now on it would be called Intel Rapid Storage instead?

But as it turns out, the latest Intel Matrix driver (8.9.6.1002) is actually newer than the latest Intel Rapid driver (9.6.0.1014).

What actually works best with ICH8 and ICH10 + an Intel SSD?



 

 

The Windows 7 Microsoft drivers work best, considering TRIM support.


I have an Intel 40GB V. After trying to run the Intel SSD Optimizer I just get Failed and Action: Intel SSD Optimizer could not run due to the presence of Volume Shadow Copy Service data.... As far as I know, I don't have any backup services running. The included PDF describes a similar situation in the chapter:

 

4.0 False Volume Shadow Copy Service Reported

 

But I have WinXP 32-bit, not 64-bit as mentioned in the article. Does anyone have any tips?

 

I've also read this:

http://communities.intel.com/thread/9474;jsessionid=DF3D0EDAE999F9EDAED6DB00DB4ACE32.node5COMS?start=60&tstart=0

 

which mentions something about regional settings etc.

Edited by SigTill
