Determining Values for Nice and Priority items in the limits.conf file

I've been looking online trying to find the correct values nice and priority can take in the limits.conf file. On the man page it says:

Does this mean priority can be any negative number and any positive?

Then

Does this mean any number between -20 and 19? Also, what does the definition of nice mean when it mentions Linux 2.6.12 or higher? Does it mean you can only set the nice value if you have Linux 2.6.12 or higher?

Sorry if this seems straightforward. I am just a little stuck on these two.

From what Googling turns up, user priority might not be a working facility in all kernels!

Try some modest numbers like +5 and -5. See if it changes the default nice of a new login process (normally 20) or allows nice --5 (permission to raise your priority to nice=15).
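For reference, here is what those two items look like in the file, as I read limits.conf(5): nice takes a value in [-20, 19] and sets the maximum nice priority a user may raise to, while priority sets the nice level the user's processes start at. The domain names below are made up for illustration:

```
# /etc/security/limits.conf
#<domain>   <type>  <item>     <value>
@audio      -       nice       -10     # audio-group users may renice down to -10
someuser    -       priority   5       # someuser's processes start at nice 5
```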

Semantically, the minus sign haunting nice comes from usage like "nice -19 ....", which does not say "set my nice to 19" or "to -19": the "-" marks an option, and 19 is what gets added to the nice value. root can run "nice --19" and get a nice=1 child.

I'm sure process priority has been a feature of Linux for much longer than that.

The way it's traditionally worked, though, is that any user can lower their own priority. If you want to create processes of higher priority, you need root access -- partly because you don't want self-important software (or users!) clogging your machine with high-priority processes, partly because ignorantly running things at extremely high priority can be dangerous -- making user mode software higher priority than, say, an interrupt handler would be a Very Bad Thing.

This 2.6.12 feature apparently allows users to raise their own priority without root access, if granted by the system limits file. What's new is this feature in the config file, not process priority itself.

It looks like a good feature to me. There are occasional things which truly need high priority (CD burning, etc) and having to run them as root all the time has always struck me as a bit dicey.

Yes, it's a bit short-sighted that a UNIX time-sharing login starts at the highest allowed priority, rather than somewhere in the middle, like 30. Then users could tweak their background or other-terminal processes up or down using nice and renice. It's all relative, after all. I guess the real trick is to get out of time sharing into real-time mode, like Windows' Realtime!

They even put a 'bgnice' option in ksh at some point, so you could make all background processes nicer by default.

I don't know, it just looks like human nature to me. If you give humans or software the option to raise their priority, they'll all abuse it. Especially since users who don't, are punished by users who do. It's not a go-faster setting after all, it's a give-everyone-else-less setting... Tragedy of the commons again.

If someone wants to lower their priority, however, their intentions are going to be honest.

So you might as well force them all to start the same, unless they can make do with less.

If you allow for special cases I think it's a good default.

In a more perfect world, the dispatcher would give the CPU to programs that do not hog it and that do I/O, on an expedited basis. I remember amazing operators who were copying tape with the difference when I made the job APRIOR, which meant real time. The drive went from "bup bup bup" to "ZZZZZZZZ", and nobody suffered, because it was I/O bound. Programs like that need to take their 1% off the top, which is no harm to the 99%, especially when there is still idle time. Writing dispatchers is a big deal some places. One sysadmin refused to kill a looping program for me, because he said their very custom dispatcher ensured that the CPU it took came off the bottom, so their policy was to just wait for the periodic reboot.

I believe many schedulers do; I've always been impressed by how well UNIX in general timeshares, compared to Windows' nonstop stuttering. (Not even quad cores helps.) But it only works when they're equal priority... High priority will be favored over low priority regardless of how polite they are. That's what priority's for.

A runaway higher-priority process can lock lower-priority ones out quite harshly; users with the ability to raise their priority can badly affect other users. Starting them at maximum relative to each other prevents them from stalling each other.

I read somewhere that low priority can actually run faster/cheaper because the time slice is larger, or some such. I find the SAs go nuts if you run everything at nice -19, even though it works fine. Appearance of impropriety.

On an idle machine quite possibly, though it sounds heavily implementation-specific and application-specific too. Big timeslices matter for CPU bound things.

On a loaded system, a nice -19'd process will get barely any time, politely behaving or not. That's not a bug or anything the scheduler can fix, that's simply the system doing what you told it to.

Unless everything else is 19'ed too, of course.

I agree that priority can be used intelligently, but think it should be up to the sysadmin to raise priorities above average. Leaving it up to the users can cause problems. Leaving it up to the sysadmin can cause problems too, but at least there's just one of them :wink:

Paging I/O being an exception -- favoring that can create thrashing. I accidentally found I could severely slow a system using mmap() to map a file and then read the data, for a long list of files in succession (an mmap()-based fgrep). Memory was full of old mapped page images, and everyone else was pushed out to swap. There should be some limit on how many pages of RAM one pid can have 'originated', something like 80%, so you can use RAM for speed, but not so you roll everyone else out, maybe invoked when too many processes are awaiting page-in. Many OSes now use mmap() for input buffering of flat data files -- no buffer needed.

For a system to be very responsive to priority, you need prioritized queues for i/o that reach out into the peripherals and networks, and that raises a lot of issues off-host. With all the buffering, NFS, remote printers, SANs and such, things tend to get democratic and ballistic early on in the flow. Getting the CPU first is not enough to keep the low guys from filling the queue with requests.

Emotionally, people think a system runs faster when everyone has more priority! :smiley: LOL!

Yes, and severe swap is particularly nasty in that it can steal time from high-priority things indirectly. Stealing their memory now means stealing their time later -- they'll go for their data and get a context switch instead. If you're burning a CD, that can mean coasters. Dealing with this in the OS itself is difficult since it adds so much overhead to each page operation, though.

Linux supports voluntary measures for cache control, though. You can use madvise to tell the kernel you're done with a page, and so avoid cluttering up the cache with it.

I thought CD burners, the new ones, have enough buffer to survive OS and app underflow. I guess it depends on how long a track is, or whether the firmware/CD hardware/media allows it to see where it left off and turn writing back on right there.

The mmap()/munmap() trick is neat, though, as it allows a 32-bit program to use unlimited RAM. Things mapped stay in memory even if unmapped and are available to a new map, so you can swap super-pages of a huge set of files through your limited address space. Of course, mmap64() allows you to leave it all mapped, at the cost of fatter code. Some sort of client-server arrangement lets tight 32-bit clients access all that data on a big 64-bit data server.

16 megs of buffer vs 600 megs of data, worst case there's never enough.

Modern buffer underrun protection isn't quite that perfect; it leaves little recoverable errors on the disc. I don't think a CD-R/DVD-R has the angular resolution to turn the laser back on exactly where it left off, so I think it leaves little markers for itself when it must. It can survive brief underruns, brief being the key.

Interesting idea.

Well, infrequent -- once you underrun and it writes a glitch-gap, you have all the time in the world to get back to writing, presumably after the buffers are full again, but too many times on one CD/DVD and the bandwidth and capacity are impacted. It might be more forgiving on data CDs, since they are internally segmented. If a music CD is a full 600 MB with 12 tracks, the average track is 50 MB, so 16 megs is a good percentage. Classical albums might have very few, larger tracks compared to rock'n'roll -- at 11:30, Movement 2 (Larghetto) of Beethoven's 2nd Symphony in D Major comes to roughly 120 MB of CD audio.