Linux ulimit related information - Zhu Jun's blog - NetEase Blog
Using linux ulimit 2010-11-08 13:30

ulimit -a displays the current resource limits for the user's processes. Linux limits the maximum number of processes each user may run; to improve performance, this limit can be raised according to the machine's resources. For example, to set the maximum number of processes for a Linux user to 10000:

ulimit -u 10000

For Java applications that need many socket connections and keep them open, it is also best to raise the number of files each process may open with ulimit -n; the default is 1024. For example, ulimit -n 4096 raises the per-process open-file limit from the default 1024 to 4096.

Other important settings that are often recommended to be set to unlimited:
Data segment length: ulimit -d unlimited
Maximum memory size: ulimit -m unlimited
Stack size: ulimit -s unlimited
CPU time: ulimit -t unlimited
Virtual memory: ulimit -v unlimited
These settings can be applied temporarily, for the duration of the current login shell session, by running the ulimit command, or permanently, by adding the corresponding ulimit statements to a file read by the login shell, i.e., the shell-specific user resource file. For example:
1) Raise the limits on the maximum number of processes and the maximum number of open files in the Linux system:
vi /etc/security/limits.conf
# Add the following lines
* soft nproc 11000
* hard nproc 11000
* soft nofile 4100
* hard nofile 4100
Note: * represents all users, nproc is the maximum number of processes, and nofile is the maximum number of open files.
2) Let SSH accept logins via the login program, so that the ulimit -a resource limits can conveniently be checked from the SSH client:
a) vi /etc/ssh/sshd_config
Uncomment the UseLogin line and change its value to yes.
b) Restart the sshd service: /etc/init.d/sshd restart
3) Modify the environment variable file for all Linux users:
vi /etc/profile
ulimit -u 10000
ulimit -n 4096
ulimit -d unlimited
ulimit -m unlimited
ulimit -s unlimited
ulimit -t unlimited
ulimit -v unlimited
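For a single user, the "shell-specific user resource file" route mentioned above is a minimal alternative. A sketch, assuming the login shell is bash so the file is ~/.bash_profile (the exact file varies by shell and distribution):

# append to ~/.bash_profile -- runs at every login for this user only
ulimit -u 10000    # max user processes
ulimit -n 4096     # max open files per process (cannot exceed the hard nofile limit)

Log out and back in, then run ulimit -a to confirm the new values.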
Linux (kernel 2.6.*) ulimit and core dump 2010-09-13 16:27

A core dump generally refers to the kernel dumping the running environment/state of a user-space process before the process is terminated by a signal; the dump is used for analyzing and debugging the program. If a bug in the program causes an error during execution, the kernel sends it a signal to terminate it, or the process can be killed manually. When a process receives a signal, it can handle it with its own signal-handling mechanism; if it has none, the system's default handling applies. There are many kinds of signals, and only for some of them is the default action a core dump. A common one is SIGQUIT, which can be triggered by pressing "Ctrl+\" while the program is running.
The ulimit command in Linux is a bash "built-in" used to control how much of the system's resources processes may use. These limits apply only to processes running in the current shell and its children, since child processes inherit the parent's resource limits; conversely, if a child process changes its resource limits, the parent process is not affected. The command "ulimit -c [num]" views or sets the maximum size of core dump files, where num is given in KB.
By default, the name of the core dump file is "core.PID". Two settings control this:
/proc/sys/kernel/core_uses_pid, which defaults to 1
/proc/sys/kernel/core_pattern, which defaults to "core" and can include a timestamp:
[root@localhost]# echo "core%t" > /proc/sys/kernel/core_pattern
With this pattern, the core dump file name becomes core1284395646.19406, where 1284395646 is the number of seconds since 1970 (the Unix timestamp) and 19406 is the process ID.
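A slightly more descriptive pattern can also be set. This is only a sketch: the /tmp/cores directory is an arbitrary example, while %e, %p and %t are standard core_pattern specifiers for the executable name, PID and dump time:

[root@localhost]# mkdir -p /tmp/cores
[root@localhost]# echo "/tmp/cores/core.%e.%p.%t" > /proc/sys/kernel/core_pattern
[root@localhost]# ulimit -c unlimited    # allow core files of any size in this shell

With the values from the example above, this would produce a name like /tmp/cores/core.sleep.19406.1284395646.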
It is easy to verify the core dump functionality with a shell script, using a script sleep.sh that sleeps:
[root@localhost]# cat sleep.sh
#!/bin/bash
echo "sleep starting"
sleep 100;
echo "sleep end"
After running this script, immediately press "Ctrl+\":
[root@localhost]# ./sleep.sh
sleep starting
./sleep.sh: line 4: 19406 Quit (core dumped) sleep 100
sleep end
If a process receives a signal that should produce a core dump but no core dump file appears, possible reasons include:
1. System resource limits: the core dump file size is set to 0, or is too small (less than 1K).
2. The process lacks the permissions needed to write the file to disk.
3. System issues: no free space? Not enough inodes?
4. A file with the same name already exists.
5. A SUID process is not running as its real owner and group.
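A quick shell checklist corresponding to these points (just a sketch; run it in the directory where the core file would be written):

ulimit -c        # 0 means core files are disabled in this shell
df -h .          # is there free space on this filesystem?
df -i .          # are there free inodes?
ls -l core*      # is an old file with the same name already in the way?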
If a program is already running and was not started from the current shell, the ulimit command cannot change its limits. The program itself can call the ulimit() function, which was later superseded by the getrlimit() and setrlimit() functions. Use getrlimit() to obtain the current core dump limit:
[root@localhost]# cat getl.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main()
{
    struct rlimit *s_core;

    /* allocate the rlimit structure that getrlimit() fills in */
    if ((s_core = (struct rlimit *)malloc(sizeof(struct rlimit))) == NULL)
        perror("malloc() error:");

    if (getrlimit(RLIMIT_CORE, s_core) != 0)
        perror("getrlimit() error:");
    else
        printf("current core limit is %ld, max core limit is %ld\n",
               (long)s_core->rlim_cur, (long)s_core->rlim_max);

    return 0;
}
Use the setrlimit() function to set the system's core dump limits:
[root@localhost]# cat setl.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main()
{
    struct rlimit *s_core;

    if ((s_core = (struct rlimit *)malloc(sizeof(struct rlimit))) == NULL)
        perror("malloc() error:");

    /* hard limit unlimited, soft limit 200 KB (values are in bytes) */
    s_core->rlim_max = RLIM_INFINITY;
    s_core->rlim_cur = 204800;

    if (setrlimit(RLIMIT_CORE, s_core) != 0)
        perror("setrlimit() error:");

    printf("the process is going to sleep now...\n");
    sleep(20);
    printf("the process totally slept 20s.\n");

    return 0;
}
Note that RLIM_INFINITY corresponds to the value -1, and rlim_cur is specified in bytes. When the kernel generates the core dump file, if the core dump limit is relatively small the kernel truncates the file, and if the limit is too small no core dump file is produced at all. Run this program, and then immediately press "Ctrl+\" to generate a core dump:
[root@localhost]# ./setl
the process is going to sleep now...
Quit (core dumped)
By default, core dump files are written to the process's current working directory. When the process is killed this way you can see the "(core dumped)" indication. Of course, a core dump records the program's running environment, which can be a security risk if the file is exploited to obtain information about the program or even root privileges on the system. A kernel panic can also produce a dump, but there are more factors involved and it is more complex; if interested, google kdump and kexec.
Linux file handle limit, ulimit 2010-06-11 16:07

When deploying applications on Linux, you may sometimes run into the problem "Socket/File: Can't open so many files". Linux has a file handle limit, and the default is not very high, generally 1024. A production server can easily reach this number, so we need to increase the value.
We can use ulimit -a to view all the limit values; here we only care about the open file count:
open files (-n) 1024
This is the limit number
Many articles about ulimit are vague on this point: is this 1024 a system-wide limit or a per-user limit? In fact it is a user limit; the precise statement is that it is the limit applied to each program the current user is about to run.
1) This limit is for a single program
2) This limit does not change the limits of programs that have already been run
3) Modifications to this value will disappear once the current shell is exited
For example, if I first run program A, then change the limit to 2048 via ulimit, then run B, and then exit the shell and log in again, only B (which is still running) can open 2048 handles.
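On newer kernels (2.6.24 and later) you can observe this directly, because /proc/<pid>/limits shows the limits an already-running process inherited. A sketch, with B standing in for the program from the example:

ulimit -n 2048
./B &                                   # B inherits the new limit
grep "open files" /proc/$!/limits       # $! is the PID of the job just started

The "Max open files" row shows the soft and hard limits the process is actually running with.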
If we need to change the overall limit value, or if the program we run is system-started, how should we handle it?
One method is to put the ulimit modification command into /etc/profile, but this approach is not good.
The correct approach should be to modify /etc/security/limits.conf
It contains detailed comments, such as
* soft nofile 2048
* hard nofile 32768
This can uniformly change the file handle limit to soft 2048, hard 32768
This involves another issue, what is a soft limit, and what is a hard limit?
The hard limit is the absolute ceiling; the soft limit is the value actually enforced, and a process may raise its own soft limit, but only up to the hard limit.
In fact, the ulimit command itself handles both: adding -H means hard, and adding -S means soft.
By default ulimit displays the soft limit; when setting a value, if neither -H nor -S is given, both limits are changed together.
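A quick illustration of the two flavors (a sketch; the numbers are only examples):

ulimit -Sn           # display the soft limit on open files
ulimit -Hn           # display the hard limit on open files
ulimit -Sn 2048      # raise only the soft limit (must stay at or below the hard limit)
ulimit -n 2048       # with no -S/-H, both soft and hard are set to 2048

Note that once an unprivileged user lowers the hard limit, it cannot be raised again in that shell.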
The first field in the configuration file is the domain; setting it to an asterisk applies the limit to all users, and you can also set different limits for individual users.
After the modification, the new limits take effect the next time you log in; programs started earlier must be restarted to pick up the new values. I am using CentOS; some systems reportedly require a reboot for the change to take effect.
Ulimit is actually a limit for a single program.
What about the system's total limit?
In fact, it is here, /proc/sys/fs/file-max
You can view the current value using cat, and echo can modify it immediately.
There is also /proc/sys/fs/file-nr
Read-only, you can see the number of file handles currently used by the entire system.
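For example (65536 is only an illustrative value):

cat /proc/sys/fs/file-max              # the system-wide maximum
echo 65536 > /proc/sys/fs/file-max     # change it immediately (lost on reboot)
cat /proc/sys/fs/file-nr               # allocated handles, free handles, and the maximum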
When looking for file handle issues, there is also a very useful program called lsof.
It can easily show which handles a certain process has opened.
It can also show which process is occupying a certain file/directory (if it cannot be unmounted, you can see who is the problem).
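A few common invocations (the PID, user name, and paths are placeholders):

lsof -p 1234          # every handle open in the process with PID 1234
lsof -u someuser      # every handle open by one user
lsof /home            # processes holding files open on the /home mount point (useful when umount fails)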
Linux ulimit max open files 2010-01-24 20:21

Sometimes a program needs to open many files for analysis. The system default is generally 1024 (which can be seen with ulimit -a); this is enough for ordinary use but too little for such programs. Modification method: modify two files, then restart (or log in again) and it takes effect.
1. /etc/security/limits.conf
vi /etc/security/limits.conf
Add:
* soft nofile 8192
* hard nofile 20480
2. /etc/pam.d/login
session required /lib/security/pam_limits.so
Check with ulimit -a OK
Another article: Adjusting ulimit values (Linux file handle count) on CentOS 5 (RHEL 5)
http://www.crazylemon.net/linux/173.html
When deploying applications on Linux, you may sometimes run into the problem "Socket/File: Can't open so many files". For example, when Squid is used as a proxy, once the number of open files exceeds about 900 its speed can drop sharply and web pages may fail to open. Linux has a file handle limit, and the default is not very high, generally 1024; a production server can easily reach this number.
Viewing method
We can use ulimit -a to view all limit values
[root@centos5 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
max nice                        (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 4096
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
max rt priority                 (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Among them, "open files (-n) 1024" is the limit on the number of file handles that the Linux operating system allows a process to open (which also includes the number of open SOCKETS, which can affect the number of concurrent connections in MySQL). This value can be modified using the ulimit command, but the value modified by the ulimit command only applies to the current user's current usage environment and will be invalid after a system reboot or user logout.
The system's total limit is here, /proc/sys/fs/file-max. You can view the current value using cat, and modify it in /etc/sysctl.conf as well.
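For example, to make a larger value persistent (65536 is only an example figure):

# /etc/sysctl.conf
fs.file-max = 65536

[root@centos5 ~]# sysctl -p     # reload sysctl.conf; or echo a value into /proc/sys/fs/file-max for an immediate change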
There is also /proc/sys/fs/file-nr
Read-only, you can see the number of file handles currently used by the entire system.
When looking for file handle issues, there is also a very useful program called lsof.
It can easily show which handles a certain process has opened.
It can also show which process is occupying a certain file/directory (if it cannot be unmounted, you can see who is the problem).