Author Topic: Get the most out of your Ram if you have 3gig or more in boot.ini file  (Read 10882 times)


sinc

  • Guest
Well, yeah...  That basically just repeated what's already been said in this thread.

Autodesk claims that Civil-3D is large-address-aware.  The exact quote:
Quote
Autodesk has conducted some preliminary testing on this configuration and has found some improvement in performance when working with larger datasets.

This seems to be causing lots of people to set the 3GB switch, because they are under the impression that C3D will run better.  I have serious doubts about this, but that's why I was asking if anyone has actually done any benchmark testing under various conditions.

One of the unknown factors is just what is the impact of limiting the Kernel to 1GB?  After all, C3D runs just fine for most things on a system with only 2GB RAM total.  So just how big a hit do we actually notice if we limit the system to 1GB, while we let the application use 3GB?

Because of the way system resources like the Free Memory Page Table get dramatically cut in size with the 3GB switch, I am leery of this option.  But I'm still curious about actual bench results.  I've been too busy with other stuff to try doing my own testing on this point, but I might get around to it eventually.

mjfarrell

  • Seagull
  • Posts: 14444
  • Every Student their own Lesson
I think this falls under the category of:

Just because you can

Does not mean you should.


This is a red herring. The real issue is product stability.

I'll type it real slow.....and loud for those in the back of the room...

S    T     A    B   I   L   I   T   Y.


3GB Switch or no, the random crashes have to go!

Be your Best


Michael Farrell
http://primeservicesglobal.com/

numa

  • Guest
Yah, I was mainly trying to quote the portion of that article, plus the various semi-anecdotal evidence that the /3GB switch causes problems in some of the random software we use on a regular basis (Winamp comes to mind). At any moment I have quite a few silly programs running, and it's sorta better if my machine doesn't flip out when I switch songs.  :)

John Mayo

  • Guest
This is a really good and informative post. Sourdog, thanks for starting it; Michael, Sinc & the rest, thanks for adding.

With all this said, and noting that I agree the 3GB switch does nothing noticeable for me, we will be looking at new PCs.

Is the overall consensus to add a second HD for swap space? What about Don R's old trick of creating a partition on a drive and using that partition for swap space alone? With HDs now holding hundreds of GB, what would Windoze do with 250 GB for the swap file?

Forget the 3GB switch? What about all these boxes with 4 Gig and 8 Gig of RAM? Is this just a waste? Why bother with more than 2GB?

Quad Cores?

John

sinc

  • Guest
Creating a partition on a hard drive for swap space is almost always a bad idea.  In fact, I can't think of a time when it would be a good idea.

You'll see far more benefit from setting up two drives in a RAID 0 array than you'll see by using one as the primary, and one for swap.  Use 10K Raptors, or 15K SCSIs, or some of the new vertical-write drives, configured in a RAID0 as your primary drive (with a MOBO that can handle that rate of data transfer), and you'll be amazed at the difference.

If you are running a 32-bit OS, then yes, those boxes with 8GB of RAM are wasting the bulk of it.  If you want to start using large amounts of memory, you should move toward a 64-bit OS.

If you are running 32-bit Windows XP, the difference between 3GB RAM and 4GB RAM tends to be rather insignificant for most operations.  It takes special circumstances to really notice any sort of difference between 3GB and 4GB, at least for a CAD workstation.
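A quick sanity check on the 32-bit limit mentioned above, sketched in a couple of lines of Python:

```python
# Why RAM beyond 4GB is invisible to a 32-bit OS (without tricks like
# PAE): a 32-bit pointer can name at most 2**32 distinct byte addresses.
addressable_bytes = 2 ** 32
addressable_gb = addressable_bytes // (1024 ** 3)
print(addressable_gb)  # 4
```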

Swift

  • Swamp Rat
  • Posts: 596
RAID 0 is the way to go

I just built a machine using an EVGA 780i nForce board and four 10K WD Raptors in RAID 0

Loud but VERY FAST!



John Mayo

  • Guest
Thanks Sinc.

We will be buying at the end of the year so I have time. The machines we have were purchased 4 years ago for LDT.

We will more than likely get Vista 64 (perhaps a dual boot with XP Pro) unless I hear unbelievable horror stories. I have heard of some issues but have not heard any drop dead horror stories yet.

Does anyone here have any?

I agree that fast HDs are always a must, but I still wonder about the swap file. I think Michael posted that Windoze prefers the swap on a separate drive, but I don't think anyone was comparing the configurations with RAID in mind. Any other advice, knowing that our users can fill the primary drive with email, music & other essential items pretty fast? ;) Would you still go RAID 0? That's what has left me trying to send the swap file to another drive in the past. I was planning on two 250 GB 10K RPM HDs, hopefully SCSI if Dell will put a SCSI card in the box.

"If you are running 32-bit Windows XP, the difference between 3GB RAM and 4GB RAM tends to be rather insignificant for most operations." Does 3 GB help over 2 GB if the switch is not used? What is Windoze doing with the extra gig if the switch is off? I'm guessing it's just a waste then.

mjfarrell

  • Seagull
  • Posts: 14444
  • Every Student their own Lesson
As you will have the hardware either way, build one with the swap file on the second drive, and one set to RAID 0. You can always switch to the other if you don't like the first.
Put the 4GB of RAM in, as most memory prefers to travel in pairs.
Although at this juncture I would most likely get a MB that will hold 6-8GB in the future.
Run 32-bit for now; running this on 64-bit XP Pro has shown to be no better, and 32-bit avoids the driver issues.

In the near term, AutoCAD does not really know what to do with the extra headroom anyway.
Be your Best


Michael Farrell
http://primeservicesglobal.com/

numa

  • Guest
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136260&Tpk=VelociRaptor

are a pretty good deal at this point (I know, still pricey). RAID 0 is kinda neat; I run it on some machines, and it can be quite fast. I don't know that it's a super cure-all, though, or that the extra headache of having a system with such a failure point (no recovery, no redundancy) is necessarily worth it for all users.

SCSI is interesting, though it gets quite pricey even now. If you are looking at a very serious RAID install, I would recommend going for one of the new 3ware PCI-Express SATA hardware controller cards, maybe even one with some onboard memory. Figure spending $300-500 for such a card, and $300 per drive. Some of the 3ware controllers allow you to combine RAID 0 with RAID 1 (requires a minimum of four hard drives) to give you a performance and reliability advantage. Then there's RAID 5, though you need a lot of drives for it to start paying a performance dividend. This stuff really can get very expensive, and I don't know that it is worth it. Really.

Some SCSI drives are very specifically tailored to server use, and some are tailored for desktop use; there is some research to be done to figure it out (cache sizes and seek method).

Another consideration is that solid-state drives are improving in speed (which they lack), reliability (which has been questioned), and capacity (which is still dismal), and at some point in the near future they will overtake mechanical drives (2 years, 5 years, who knows). When these drives improve, they will become the 'next great thing' that everyone will want, and you might kick yourself for wasting money on a system that isn't satisfying at the moment.

I have found the best performance/value/hassle balance to date on a system with a stripped-down XP install (gutted everything I didn't need). I use nLite to tweak stuff at install, add the drivers I need for my systems, slipstream in SP3, and generally disable stuff that is obsolete or not needed for our small office. The nice thing about slipstreaming is that you can make your installs unattended, so when a machine starts to have a problem, reloading Windows at least is very fast. Of course, there's still installing your Adesk products...

And as alluded to above, most of the 'tweaks' done by these random XP Tweaker Pro Super Whatever programs are totally useless. I sorta like slipstreaming because it lets me have an up-to-date install without a bunch of 'system restore points' and whatnot. Slipstreaming also gives you a good baseline (once you get everything working) to start from at a diagnostic standpoint.

« Last Edit: July 28, 2008, 05:58:02 PM by numa »

sinc

  • Guest
OK, maybe some further explanation is in order.

Let's look at the way this is handled in WINDOWS XP 32-bit ONLY.  As we shall see, if you are running a 64-bit OS, this is all far less important.

But to start, I should point out that most people way over-think the pagefile.  If we could plot the number of hours spent on various topics, vs. the net gain in computer performance realized, I think the pagefile would rank at or near the top when it comes to wasted effort.

Now that I've gotten that out of the way, though, here are the important details (and remember, this is for 32-bit XP only):

All Windows applications run in something called a "virtual memory space".  This is ALWAYS 4GB, regardless of the amount of RAM you have in your system.  Each application has its own 4GB "virtual memory space", no matter how many other applications are running.  This is important point #1.

Now, some of this address space is "reserved" for the system.  Depending on whether or not you have the 3GB switch set, it is either split evenly between application and system, or 3GB goes to the application and 1GB goes to the system.

Now for important point #2:  the part allocated to the system is shared among every application that is running on your system.  The rest of the "virtual address space" (either 2GB or 3GB) is available to your application.

Important point #3: the system will automatically map this virtual address space to available RAM.  At any given time, only a portion of each application's virtual memory space will actually be taking up any RAM.  The rest of the virtual address space may be unused and unallocated.  Or, if it hasn't been used for a while, it can be sent from RAM into the pagefile, so the RAM can be used by another application.  Then, when that part of the virtual address space is needed again, it is swapped back from the pagefile into RAM.

Keep in mind that EACH APPLICATION has its own 4GB virtual address space.  So as you start more applications, Windows creates more 4GB virtual address spaces.  Each of these 4GB virtual address spaces gets some portion of it mapped to the physical RAM; the rest remains unallocated and unused, or gets swapped out to the pagefile.
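The arithmetic behind those points can be sketched in a few lines of Python. This is purely an illustration of the split, not a real Windows API:

```python
GB = 1024 ** 3

def address_space_split(three_gb_switch):
    """Per-process split of the fixed 4GB virtual address space on
    32-bit Windows XP: the user share vs. the kernel share that is
    mapped into every process."""
    total = 4 * GB                          # every process sees 4GB of addresses
    user = (3 if three_gb_switch else 2) * GB
    return user, total - user               # (user space, kernel space)

user, kernel = address_space_split(False)
print(user // GB, kernel // GB)   # 2 2  (default split)
user, kernel = address_space_split(True)
print(user // GB, kernel // GB)   # 3 1  (/3GB switch)
```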

Now let's look at how this impacts our system.  If we have 3GB of RAM, and we DO NOT use the 3GB switch, then each application can use a maximum of 2GB of RAM at any one time.  This means that we can theoretically have the entire allocated memory for AutoCAD in RAM all at once, and we are only using 2GB of it.  The remaining 1GB is shared between the OS and other applications.

If we raise the amount of RAM to 4GB but leave the 3GB switch OFF, then AutoCAD can still only use 2GB of memory at once.  This is because it is limited to 2GB of the virtual address space, so it doesn't matter that more RAM is available.  The OS won't let the application address more than 2GB.

Enter the 3GB switch.  This now allows the application to address up to 3GB of its VIRTUAL ADDRESS SPACE.  Remember, however, that some of this 3GB may be swapped out to the swap file at any one time, depending on how many other applications are demanding memory, and how much they are demanding.  So in theory, it would allow an application to run faster/better.  But the cost is to remove memory from the system.
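For reference, the switch the thread title refers to is just an extra flag on the OS line in boot.ini. A typical entry looks something like the following (the ARC path `multi(0)disk(0)...` varies by machine, so add the flag to your existing line rather than copying this one):

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```

Windows also supports a /USERVA switch alongside /3GB to pick an intermediate split, giving some of that 1GB back to the kernel.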

With some applications, this is not an issue.  But the memory allocated to the system is also used by device drivers, and applications such as AutoCAD hit certain devices (the graphics card, in particular) very hard.  So limiting the system's resources can have unintended side effects that keep the application from making full use of the extra RAM it now has access to.

Earlier, I mentioned some concern about the page tables and other system resources.  This may not be an issue.  PAE is designed to let 32-bit Windows Server 2003 address up to 32GB of RAM, and turning on the 3GB switch means it can then only access 16GB.  So if Windows Server 2003 can still address 16GB of RAM with the 3GB switch enabled, I don't think it would be a problem for an XP machine with only 4GB of RAM.  The real concern would probably be the effect (if any) on the graphics drivers.

Now all of this probably answers another question you had - what would Windows do with 250GB of swap space?  The answer is: waste most of it.  Remember, each application is limited to only 2GB or 3GB total of usable address space.  So, as you run more and more applications, more and more virtual address spaces get created.  But in order to impact the pagefile, the application has to actually allocate the memory, then the total RAM requested by all applications has to exceed the amount of RAM available on the system, to trigger paging.  (Well, actually, a bit of paging will happen in any case, so this isn't strictly true, but it's true enough.)  So if you keep launching more and more applications, you would gradually eat into the pagefile.  But you would have to run an awful lot of memory-intensive applications to go through 250 GB of pagefile, and you would probably run out of other system resources long before you used a significant fraction of a 250GB pagefile.
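The back-of-the-envelope reasoning above can be modeled in a couple of lines (toy numbers, assuming a 4GB machine):

```python
def pagefile_needed(app_commits_gb, ram_gb=4):
    """Toy model (made-up numbers): GB that spill into the pagefile
    once the memory committed by all running apps exceeds physical
    RAM. Ignores the small amount of background paging Windows
    always does."""
    return max(0, sum(app_commits_gb) - ram_gb)

# Four hungry 32-bit apps, each capped at 2GB of address space anyway:
print(pagefile_needed([2, 2, 1, 1]))  # 2 -- only 2GB of pagefile in use
```

Even a dozen such apps could commit only ~24GB, nowhere near a 250GB pagefile.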

sinc

  • Guest
Now for the configuration issues - multiple drives, where to put the swap, etc.

We can split disc access into three major chunks: loading/swapping application code, accessing application data, and general data swapping.

Application code undergoes swapping, just as data does.  Your application can be loaded into memory, but if a chunk of the application code goes a long time without being run, it gets dropped from RAM.  Application code is always paged back in from the place it came from on the drive, and takes up no space in the pagefile.  However, when it needs to be brought back in, you see a performance hit similar to the first time you run the application, since the code must once again be loaded from disk.

Your application data is relatively self-explanatory - this would include your actual DWG file, the temp files AutoCAD creates, etc.  This data is read from disk, or written to disk, in direct response to user operations.

Then there is data swapping.  This is the "behind-the-scenes" action, where Windows automatically moves data to/from RAM and the pagefile, in order to make sure that operations that are currently running have enough RAM.

To get absolute best performance, each of these three categories of action would hit a separate drive.  This is where we start to see Michael's recommendation of putting the swap on its own drive.  If it's on the primary drive, then application loading/swapping and data swapping will hit the same drive, possibly at the same time, causing the disk drive to rapidly jump back and forth between disparate locations on the disk.  This is what we want to avoid.  If we are also storing all our DWG files and temp files on this same drive, we see an even bigger hit in performance, as all these operations may end up in a fight for simultaneous access to the drive.

Now, in an office situation, many people have their DWG files on the server.  This off-loads one item, leading to a potential increase in performance (unless you have a slow network).  So now we're looking at the application loading/swapping, and the pagefile access.

Now, if we want to use only two drives, what is the best way to handle this?

Method one is to use one drive for the primary, the other for swap.  This means that the majority of the swap drive will be unused, because our swap file is probably less than 4GB in size, but it can double as a generic storage drive.  This gets our swap space hitting one drive, while the other is reserved for application loading/swapping and temp files.  This is an ideal situation for someone with not very much memory, such as 1GB of RAM, where swapping happens pretty much continuously.  But if you have 4GB of RAM, the benefits are much smaller, because your system doesn't need to swap anywhere near as much.

So now let's compare that to a RAID 0.  With the RAID 0, we are now hitting the same drive for application loading/swapping, temp file usage, and swap space.  But if we have a lot of RAM, we won't notice much of a difference having the swap space hit the same drive as everything else, because our computer won't swap much.  However, we'll notice a HUGE difference in application loading/swapping and temp file usage, because of the RAID 0, which typically results in about a 30% gain in overall disk performance over a single drive.
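To put a toy model behind that ~30% figure (the fractions here are made up for illustration; real gains depend heavily on workload and controller): only the transfer-bound share of disk time scales with the stripe width, while seek-bound time does not.

```python
def overall_gain(seq_fraction=0.45, n_drives=2):
    """Toy model of RAID 0 gain in overall disk performance.
    Assumes only the transfer-bound share of disk time (seq_fraction,
    a made-up number) speeds up with stripe width; seek-bound time
    does not. Real results vary by controller and workload."""
    seek_time = 1.0 - seq_fraction           # unaffected by striping
    transfer_time = seq_fraction / n_drives  # split across the stripes
    return 1.0 / (seek_time + transfer_time) - 1.0

print(f"{overall_gain() * 100:.0f}% faster than a single drive")  # 29% ...
```

The model also shows why adding more drives keeps helping transfer-bound work but never touches seek time.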

This is why I say you will get far more bang for the buck using two drives in a RAID 0, as opposed to dedicating one as a swap drive.  If you want to throw a third drive in there, then by all means, set up the first two drives in a RAID 0, then use the third drive for swap and overflow data storage or backup.  But in general, you will notice an overall increase in EVERYTHING if your primary drive is a RAID 0.

The big rule in all this is NEVER NEVER NEVER use a removable (i.e., external) drive for swap space.  If that connection gets interrupted, or you try to startup your system while the drive is powered down or disconnected, bad things will happen.

Now for the point Numa brought up:  which RAID?  First off, RAID arrays should NEVER be used for redundancy or recovery.  That's what backups are for.  A RAID 1 is useless except in very limited situations, typically only for servers that must not fail unexpectedly.  In other words, if that server MUST remain up as much as possible, you may want to use a RAID 1, RAID 10, or RAID 5.  Those configurations are designed so that the system can keep running (for a time) after a disk fails.  However, they provide only short-term redundancy: basically, just long enough for the IT person to get there, shut down the server in an "official" way, and replace the failed drive.  But rebuilding ANY sort of RAID array is more trouble than any sane person would want to deal with, so it should not be considered a path to data redundancy/recovery.  Use a good backup system instead.  And when you stop looking toward RAID as something that provides redundancy or backup, then a RAID 5 (or even a RAID 10) is obviously not worth the expense.

As far as a RAID 0 being more trouble than it's worth, I strongly disagree with that.  The argument against using a RAID 0 is basically that, with two drives involved, the chance of a drive failure is twice as great as when only one drive is involved.  However, these days, disks are good enough that drive failures happen very rarely.  I don't view this as a big concern.  Once upon a time, it was a valid concern, but not any longer (in my opinion).  If one of your drives fails, then you just deal with it the way you would deal with it if you had only the usual single-drive system, and your single drive fails.

Personally, I really like having 20-second startup times for Civil-3D.  Not to mention, software updates and installations go at blinding speed.  And that doesn't happen unless you have a fast primary drive, namely multiple fast drives in a RAID 0.

sinc

  • Guest
Oh, I forgot the final point - using a separate partition for the pagefile.

This idea came about in response to the way that Windows, by default, will auto-manage the size of the pagefile.  Typically, for most users and applications, this is the best way to go.  Most people overthink the pagefile, and create problems when they try to mess with it.  So for most people, simply leaving Windows setup in the default way is the best.

Where this causes an issue is that, as Windows resizes the pagefile, it can become "fragmented".  This is the same as what happens to normal files, as data gets appended to them - part of the file sits in one place on disk, another chunk in another place on disk, and the rest in a third place (as an example - in reality, some files may even be split up into as many as hundreds of fragments).  In order to read the file, the drive must jump around to many different places on the disk, resulting in lots of "seek time", and decaying performance.

When the pagefile gets fragmented, operations that hit the pagefile can take longer to perform, as the disk goes through lots of "seek time".

The theory behind creating a separate partition for the pagefile is that, since the pagefile is the only thing in the partition, the pagefile cannot become fragmented.  Unfortunately, creating a partition for the pagefile forces the pagefile way off to one edge of the disk.  This can result in a situation where, in order to access the pagefile, your drive must continuously bounce back and forth between the two partitions, which can increase the amount of "seek time" by A LOT.  In effect, you are constraining the hardware, telling the disk drive that you know better than it does how its resources should be allocated.  These days, disk drives are smart enough that they are undoubtedly better at this task than a person, so best performance is typically achieved by letting the drive itself figure out where to put everything.

But when we do this, we still may want to make sure that the pagefile does not get fragmented.  Now we get to another of Michael's recommendations - don't let Windows automatically resize the pagefile, and set the minimum and maximum sizes of the pagefile to the same thing.  This guarantees that the pagefile will not become fragmented.
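A toy way to see why setting min == max prevents fragmentation (worst-case model, one new fragment per growth event; real filesystems may do better):

```python
def extents_after_growth(growth_events):
    """Worst-case toy model: an auto-managed pagefile that has grown
    n times may be scattered across n extra extents (fragments),
    while a fixed-size file (min == max) never grows past one."""
    return 1 + growth_events

print(extents_after_growth(0))   # 1  -- min == max: a single extent
print(extents_after_growth(12))  # 13 -- grown 12 times: up to 13 extents
```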

Of course, the pagefile may already be fragmented by the time you make this change to your pagefile settings.  In that case, you will want to defragment it.  Unfortunately, the pagefile cannot be defragged while it is in use, and it is always in use if Windows is running.  So in order to defrag your pagefile, you CANNOT use the Windows defragmenter.  (The pagefile is one of the things that shows up as a giant green "unmovable block" in the standard Windows defragmenter.)  There are various utilities (some of them free) available on the web that will defrag your pagefile.

This alternative is FAR preferable to the option of creating a separate partition for your pagefile.

John Mayo

  • Guest
Thank-you VERY much Sinc. I know it takes time to write posts of this magnitude.

I appreciate your efforts greatly. I have a much better understanding.

John

John Mayo

  • Guest
One more thing. A Windoze user cannot turn off the pagefile to defrag and then restart it?

Not trying to be a wisenheimer, but there is a toggle to turn the pagefile off. Does Windoze still maintain a pagefile even if it is turned off?

Mark

  • Custom Title
  • Seagull
  • Posts: 28762
That's some great info Sinc, but you know I have to disagree with you when it comes to RAID 0. :-)

RAID 0 is fast, but you have to remember that if one drive goes, you're SOL! Like Sinc said, don't rely on a RAID 0 setup for data retention.
 
TheSwamp.org  (serving the CAD community since 2003)