What is nginx? Features

 Basic HTTP features

  • Serving of static and index files, automatic indexing; open file descriptor cache;
  • Accelerated reverse proxying with caching; simple load balancing and fault tolerance;
  • Accelerated support for remote FastCGI servers with caching; simple load balancing and fault tolerance;
  • Modular architecture. Filters such as gzip, byte ranges, chunked responses, XSLT, SSI, and image resizing. Parallel processing of multiple SSI inclusions on a single page when served via FastCGI or proxied servers.
  • SSL and TLS SNI support.

 

Other HTTP features

  • Name-based and IP-based virtual servers;
  • Keep-alive and pipelined connection support;
  • Flexible configuration;
  • Reconfiguration and online upgrade without interrupting client processing;
  • Access log formats, buffered log writing, and fast log rotation;
  • 3xx-5xx error code redirection;
  • The rewrite module;
  • Access control based on client IP address, and HTTP basic authentication;
  • The PUT, DELETE, MKCOL, COPY, and MOVE methods;
  • FLV streaming;
  • Rate limiting;
  • Limiting the number of simultaneous connections and requests from one address;
  • Embedded Perl.

 

Mail proxy server features

  • User redirection to an IMAP/POP3 backend using an external HTTP authentication server;
  • User authentication and redirection to an SMTP backend using an external HTTP authentication server;
  • Authentication methods:
    • POP3: USER/PASS, APOP, AUTH LOGIN/PLAIN/CRAM-MD5;
    • IMAP: LOGIN, AUTH LOGIN/PLAIN/CRAM-MD5;
    • SMTP: AUTH LOGIN/PLAIN/CRAM-MD5;
  • SSL support;
  • STARTTLS and STLS support.

 

How to Configure and Test Apache for Maximum Performance

Tested by Kutay ZORLU

Apache is an open-source HTTP server implementation. It is the most popular web server on the Internet; the December 2005 Web Server Survey conducted by Netcraft [1] shows that about 70% of the web sites on the Internet are using Apache.

Apache server performance can be improved by adding additional hardware resources such as RAM, a faster CPU, etc. But most of the time, the same result can be achieved by custom configuration of the server. This article looks into getting maximum performance out of Apache with the existing hardware resources, specifically on Linux systems. Of course, it is assumed that there are enough hardware resources, especially enough RAM that the server isn’t swapping frequently. The first two sections look into various compile-time and run-time configuration options. The run-time section assumes that Apache is compiled with the prefork MPM. HTTP compression and caching are discussed next. Finally, using separate servers for serving static and dynamic content is covered. Basic knowledge of compiling and configuring Apache, and of Linux, is assumed.

2 Compile-Time Configuration Options

2.1 Load only the required modules:

The Apache HTTP Server is a modular program where the administrator can choose the functions to be included in the server by selecting a set of modules [2]. The modules can be compiled either statically as part of the ‘httpd’ binary, or as Dynamic Shared Objects (DSOs). DSO modules can either be compiled when the server is built, or added later via the apxs utility. The mod_so module must be statically compiled into the Apache core to enable DSO support.

Run Apache with only the required modules. This reduces the memory footprint, which improves the server performance. Statically compiling modules will save RAM that’s used for supporting dynamically loaded modules, but you would have to recompile Apache to add or remove a module. This is where the DSO mechanism comes handy. Once the mod_so module is statically compiled, any other module can be added or dropped using the ‘LoadModule’ command in the ‘httpd.conf’ file. Of course, you will have to compile the modules using ‘apxs’ if they weren’t compiled when the server was built.
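
As an illustration, here is a minimal httpd.conf sketch (the module names and paths are examples; adjust them to your build) that keeps one DSO module and drops another:

LoadModule deflate_module modules/mod_deflate.so
#LoadModule autoindex_module modules/mod_autoindex.so

A module that wasn’t compiled when the server was built can be compiled, installed, and activated later with apxs, e.g. “apxs -cia mod_example.c” (mod_example.c being a hypothetical module source): -c compiles, -i installs, and -a adds the LoadModule line to httpd.conf.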

2.2 Choose appropriate MPM:

The Apache server ships with a selection of Multi-Processing Modules (MPMs) which are responsible for binding to network ports on the machine, accepting requests, and dispatching children to handle the requests [3]. Only one MPM can be loaded into the server at any time.

Choosing an MPM depends on various factors, such as whether the OS supports threads, how much memory is available, scalability versus stability, whether non-thread-safe third-party modules are used, etc.

Linux systems can choose to use a threaded MPM like worker or a non-threaded MPM like prefork:

The worker MPM uses multiple child processes. It’s multi-threaded within each child, and each thread handles a single connection. Worker is fast and highly scalable and the memory footprint is comparatively low. It’s well suited for multiple processors. On the other hand, worker is less tolerant of faulty modules, and a faulty thread can affect all the threads in a child process.

The prefork MPM uses multiple child processes, each child handles one connection at a time. Prefork is well suited for single or double CPU systems, speed is comparable to that of worker, and it’s highly tolerant of faulty modules and crashing children – but the memory usage is high, and more traffic leads to greater memory usage.
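
As a build-time sketch (assuming an Apache 2.x source tree), the MPM is selected when configuring the build; substitute worker for prefork on thread-friendly setups:

./configure --with-mpm=prefork
make
make install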

3 Run-Time Configuration Options

3.1 DNS lookup:

The HostnameLookups directive enables DNS lookups so that hostnames can be logged instead of IP addresses. This adds latency to every request since the DNS lookup has to be completed before the request is finished. HostnameLookups is Off by default in Apache 1.3 and above. Leave it Off and use a post-processing program such as logresolve to resolve IP addresses in Apache’s access logfiles. logresolve ships with Apache.

When using ‘Allow from’ or ‘Deny from’ directives, use an IP address instead of a domain name or a hostname. Otherwise, a double DNS lookup is performed to make sure that the domain name or the hostname is not being spoofed.
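
A minimal sketch of both recommendations (the directory path and IP range are placeholders):

HostnameLookups Off

<Directory "/var/www/html/private">
    Order Deny,Allow
    Deny from all
    Allow from 192.0.2.0/24
</Directory>

Hostnames can then be resolved offline with the bundled logresolve tool, which reads a log on stdin:

logresolve < access_log > access_log.resolved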

3.2 AllowOverride:

If AllowOverride is not set to ‘None’, then Apache will attempt to open the .htaccess file (as specified by the AccessFileName directive) in each directory that it visits. For example:

If a request is made for the URI /index.html and DocumentRoot is /var/www/html, then Apache will attempt to open /.htaccess, /var/.htaccess, /var/www/.htaccess, and /var/www/html/.htaccess. These additional file system lookups add to the latency. If .htaccess is required for a particular directory, then enable it for that directory alone.
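
A hedged httpd.conf sketch (the paths are examples): disable overrides globally, then re-enable only what a single directory needs:

<Directory />
    AllowOverride None
</Directory>

<Directory "/var/www/html/members">
    AllowOverride AuthConfig
</Directory>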

3.3 FollowSymLinks and SymLinksIfOwnerMatch:

If the FollowSymLinks option is set, then the server will follow symbolic links in the directory. If SymLinksIfOwnerMatch is set, then the server will follow symbolic links only if the target file or directory is owned by the same user as the link.

If SymLinksIfOwnerMatch is set, then Apache will have to issue additional system calls to verify whether the ownership of the link and the target file match. Additional system calls are also needed when FollowSymLinks is NOT set.
For example:

For a request made for URI /index.html, Apache will perform lstat() on /var, /var/www, /var/www/html, and /var/www/html/index.html. These additional system calls will add to the latency. The lstat results are not cached, so they will occur on every request.

For maximum performance, set FollowSymLinks everywhere and never set SymLinksIfOwnerMatch. Or else, if SymLinksIfOwnerMatch is required for a directory, then set it for that directory alone.
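
A minimal sketch of that advice (the uploads path is an example):

<Directory />
    Options FollowSymLinks
</Directory>

<Directory "/var/www/html/uploads">
    Options -FollowSymLinks +SymLinksIfOwnerMatch
</Directory>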

3.4 Content Negotiation:

Avoid content negotiation for fast response. If content negotiation is required for the site, use type-map files rather than Options MultiViews directive. With MultiViews, Apache has to scan the directory for files, which adds to the latency.
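
For illustration, a type map is a single plain-text file served through the type-map handler (the file and variant names here are examples); Apache reads this one file instead of scanning the directory the way MultiViews does:

AddHandler type-map .var

# index.var
URI: index.en.html
Content-type: text/html
Content-language: en

URI: index.fr.html
Content-type: text/html
Content-language: fr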

3.5 MaxClients:

MaxClients sets the limit on the maximum number of simultaneous requests that can be supported by the server; no more than this number of child processes are spawned. It shouldn’t be set too low; otherwise, an ever-increasing number of connections are deferred to the queue and eventually time out while server resources are left unused. Setting it too high, on the other hand, will cause the server to start swapping, which will cause response time to degrade drastically. The appropriate value for MaxClients can be calculated as:

MaxClients = Total RAM dedicated to the web server / Max child process size [4]

The child process size for serving static files is about 2-3 MB. For dynamic content such as PHP, it may be around 15 MB. The RSS column in “ps -ylC httpd --sort=rss” shows non-swapped physical memory usage by Apache processes in kilobytes.

If there are more concurrent users than MaxClients, the requests will be queued, up to a number based on the ListenBacklog directive. Increase ServerLimit to set MaxClients above 256.
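
As a worked example (the numbers are assumptions, not measurements): with 4 GB of RAM dedicated to Apache and PHP children of about 15 MB each, MaxClients = 4096 MB / 15 MB ≈ 270. Since that exceeds the default ServerLimit of 256, both directives must be set:

ServerLimit 270
MaxClients  270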

3.6 MinSpareServers, MaxSpareServers, and StartServers:

MaxSpareServers and MinSpareServers determine how many child processes to keep active while waiting for requests. If the MinSpareServers is too low and a bunch of requests come in, Apache will have to spawn additional child processes to serve the requests. Creating child processes is relatively expensive. If the server is busy creating child processes, it won’t be able to serve the client requests immediately. MaxSpareServers shouldn’t be set too high: too many child processes will consume resources unnecessarily.

Tune MinSpareServers and MaxSpareServers so that Apache need not spawn more than 4 child processes per second (Apache can spawn a maximum of 32 child processes per second). When more than 4 children are spawned per second, a message will be logged in the ErrorLog.

The StartServers directive sets the number of child server processes created on startup. Apache will continue creating child processes until the MinSpareServers setting is reached. This doesn’t have much effect on performance if the server isn’t restarted frequently. If there are a lot of requests and Apache is restarted frequently, set this to a relatively high value.
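
A hedged prefork snippet tying these directives together (the values are illustrative starting points, not recommendations):

<IfModule prefork.c>
    StartServers      8
    MinSpareServers   5
    MaxSpareServers  20
</IfModule>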

3.7 MaxRequestsPerChild:

The MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server process will handle. After MaxRequestsPerChild requests, the child process will die. It is set to 0 by default, meaning the child process will never expire. It is appropriate to set this to a value of a few thousand. This can help prevent memory leakage, since the process dies after serving a certain number of requests. Don’t set this too low, since creating new processes does have overhead.

3.8 KeepAlive and KeepAliveTimeout:

The KeepAlive directive allows multiple requests to be sent over the same TCP connection. This is particularly useful while serving HTML pages with a lot of images. If KeepAlive is set to Off, then a separate TCP connection has to be made for each image. The overhead of establishing TCP connections can be eliminated by turning KeepAlive On.

KeepAliveTimeout determines how long to wait for the next request. Set this to a low value, perhaps between two and five seconds. If it is set too high, child processes are tied up waiting for the client when they could be serving new clients.
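
A sketch combining sections 3.7 and 3.8 (illustrative values):

KeepAlive            On
KeepAliveTimeout     3
MaxRequestsPerChild  5000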

4 HTTP Compression & Caching

HTTP compression is completely specified in HTTP/1.1. The server applies either the gzip or the deflate encoding method to the response payload before it is sent to the client. The client then decompresses the payload. There is no need to install any additional software on the client side since all major browsers support these methods. Using compression will save bandwidth and improve response time; studies have found a mean gain of 75.2% when using compression [5].

HTTP compression can be enabled in Apache using the mod_deflate module. The payload is compressed only if the browser requests compression; otherwise, uncompressed content is served. A compression-aware browser informs the server that it prefers compressed content through the HTTP request header “Accept-Encoding: gzip,deflate”. The server then responds with a compressed payload and the response header “Content-Encoding: gzip”.
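
A minimal mod_deflate sketch (the MIME types are chosen as examples; compressing already-compressed formats such as images is rarely worthwhile):

LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml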

The following example uses telnet to view request and response headers:
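
A representative session (the host and header values are illustrative):

$ telnet www.example.com 80
GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip,deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 7160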

In caching, a copy of the data is stored at the client or in a proxy server so that it need not be retrieved frequently from the server. This will save bandwidth, decrease load on the server, and reduce latency. Cache control is done through HTTP headers. In Apache, this can be accomplished through the mod_expires and mod_headers modules. There is also server-side caching, in which the most frequently accessed content is stored in memory so that it can be served fast. The module mod_cache can be used for server-side caching; it is production-stable in Apache version 2.2.
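
A hedged client-side caching sketch using mod_expires and mod_headers (the lifetimes are arbitrary examples):

LoadModule expires_module modules/mod_expires.so
LoadModule headers_module modules/mod_headers.so

ExpiresActive On
ExpiresByType image/gif "access plus 1 month"
ExpiresByType text/css  "access plus 1 week"
Header append Cache-Control "public"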

5 Separate server for static and dynamic content

Apache processes serving dynamic content take from 3MB to 20MB of RAM. The size grows to accommodate the content being served and never decreases until the process dies. As an example, let’s say an Apache process grows to 20MB while serving some dynamic content. After completing the request, it is free to serve any other request. If a request for an image comes in, then this 20MB process is serving static content – which could be served just as well by a 1MB process. As a result, memory is used inefficiently.

Use a tiny Apache (with minimal modules statically compiled) as the front-end server to serve static content. Requests for dynamic content should be forwarded to the heavy-duty Apache (compiled with all required modules). Using a light front-end server has the advantage that static content is served fast without much memory usage, and only requests for dynamic content are passed over to the big server.

Request forwarding can be achieved by using the mod_proxy and mod_rewrite modules. Suppose there is a lightweight Apache server listening on port 80 and a heavyweight Apache listening on port 8088. Then the following configuration in the lightweight Apache can be used to forward all requests (except requests for images) to the heavyweight server:
[9]
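
A sketch along these lines (the backend address and image extensions are assumptions):

ProxyRequests Off
ProxyPassReverse / http://127.0.0.1:8088/

RewriteEngine On
# proxy everything that is not an image to the backend
RewriteCond %{REQUEST_URI} !\.(gif|jpe?g|png)$
RewriteRule ^/(.*)$ http://127.0.0.1:8088/$1 [P]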

All requests, except for images, will be proxied to the backend server. The response is received by the frontend server and supplied to the client. As far as the client is concerned, all the responses seem to come from a single server.

6 Conclusion

Configuring Apache for maximum performance is tricky; there are no hard and fast rules. Much depends on understanding the web server requirements and experimenting with the various available options. Use tools like ab and httperf to measure web server performance. Lightweight servers such as tux or thttpd can also be used as the front-end server. If a database server is used, make sure it is optimized so that it won’t create any bottlenecks. In the case of MySQL, mtop can be used to monitor slow queries. The performance of PHP scripts can be improved by using a PHP caching product such as Turck MMCache, which eliminates compilation overhead by caching PHP scripts in a compiled state.

Apache Performance Tuning the Easy Way

Apache 2.x is a general-purpose webserver, designed to provide a balance of flexibility, portability, and performance. Although it has not been designed specifically to set benchmark records, Apache 2.x is capable of high performance in many real-world situations.

Compared to Apache 1.3, release 2.x contains many additional optimizations to increase throughput and scalability. Most of these improvements are enabled by default. However, there are compile-time and run-time configuration choices that can significantly affect performance. This document describes the options that a server administrator can configure to tune the performance of an Apache 2.x installation. Some of these configuration options enable the httpd to better take advantage of the capabilities of the hardware and OS, while others allow the administrator to trade functionality for speed.

For more details, read the Apache performance tuning guide at apache.org.

Hardware and Operating System Issues

The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider “fast enough”. This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children that it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes.

Beyond that the rest is mundane: get a fast enough CPU, a fast enough network card, and fast enough disks, where “fast enough” is something that needs to be determined by experimentation.

Operating system choice is largely a matter of local concerns. But some guidelines that have proven generally useful are:

  • Run the latest stable release and patchlevel of the operating system that you choose. Many OS suppliers have introduced significant performance improvements to their TCP stacks and thread libraries in recent years.
  • If your OS supports a sendfile(2) system call, make sure you install the release and/or patches needed to enable it. (With Linux, for example, this means using Linux 2.4 or later. For early releases of Solaris 8, you may need to apply a patch.) On systems where it is available, sendfile enables Apache 2 to deliver static content faster and with lower CPU utilization.

Run-Time Configuration Issues

Related Modules

  • mod_dir
  • mpm_common
  • mod_status

Related Directives

  • AllowOverride
  • DirectoryIndex
  • HostnameLookups
  • EnableMMAP
  • EnableSendfile
  • KeepAliveTimeout
  • MaxSpareServers
  • MinSpareServers
  • Options
  • StartServers

HostnameLookups and other DNS considerations

Prior to Apache 1.3, HostnameLookups defaulted to On. This adds latency to every request because it requires a DNS lookup to complete before the request is finished. In Apache 1.3 this setting defaults to Off. If you need to have addresses in your log files resolved to hostnames, use the logresolve program that comes with Apache, or one of the numerous log reporting packages which are available.

It is recommended that you do this sort of postprocessing of your log files on some machine other than the production web server machine, in order that this activity not adversely affect server performance.

If you use any Allow from domain or Deny from domain directives (i.e., using a hostname, or a domain name, rather than an IP address) then you will pay for two DNS lookups (a reverse, followed by a forward lookup to make sure that the reverse is not being spoofed). For best performance, therefore, use IP addresses, rather than names, when using these directives, if possible.

Note that it’s possible to scope the directives, such as within a <Location> section. In this case the DNS lookups are only performed on requests matching the criteria. Here’s an example which disables lookups except for .html and .cgi files:

HostnameLookups off
<Files ~ "\.(html|cgi)$">
    HostnameLookups on
</Files>


But even still, if you just need DNS names in some CGIs you could consider doing the gethostbyname call in the specific CGIs that need it.

FollowSymLinks and SymLinksIfOwnerMatch

Wherever in your URL-space you do not have an Options FollowSymLinks, or you do have an Options SymLinksIfOwnerMatch, Apache will have to issue extra system calls to check up on symlinks: one extra call per filename component. For example, if you had:

DocumentRoot /www/htdocs
<Directory />
    Options SymLinksIfOwnerMatch
</Directory>

and a request is made for the URI /index.html, then Apache will perform lstat(2) on /www, /www/htdocs, and /www/htdocs/index.html. The results of these lstats are never cached, so they will occur on every single request. If you really desire the symlinks security checking you can do something like this:

DocumentRoot /www/htdocs
<Directory />
    Options FollowSymLinks
</Directory>

<Directory /www/htdocs>
    Options -FollowSymLinks +SymLinksIfOwnerMatch
</Directory>

This at least avoids the extra checks for the DocumentRoot path. Note that you’ll need to add similar sections if you have any Alias or RewriteRule paths outside of your document root. For highest performance, and no symlink protection, set FollowSymLinks everywhere, and never set SymLinksIfOwnerMatch.

AllowOverride

Wherever in your URL-space you allow overrides (typically .htaccess files) Apache will attempt to open .htaccess for each filename component. For example,

DocumentRoot /www/htdocs
<Directory />
    AllowOverride all
</Directory>

and a request is made for the URI /index.html, then Apache will attempt to open /.htaccess, /www/.htaccess, and /www/htdocs/.htaccess. The solutions are similar to the previous case of Options FollowSymLinks. For highest performance use AllowOverride None everywhere in your filesystem.

Negotiation

If at all possible, avoid content-negotiation if you’re really interested in every last ounce of performance. In practice the benefits of negotiation outweigh the performance penalties. There’s one case where you can speed up the server. Instead of using a wildcard such as:

DirectoryIndex index

Use a complete list of options:

DirectoryIndex index.cgi index.pl index.shtml index.html

where you list the most common choice first.

Also note that explicitly creating a type-map file provides better performance than using MultiViews, as the necessary information can be determined by reading this single file, rather than having to scan the directory for files.

If your site needs content negotiation, consider using type-map files rather than the Options MultiViews directive to accomplish the negotiation. See the Content Negotiation documentation for a full discussion of the methods of negotiation, and instructions for creating type-map files.

Memory-mapping

In situations where Apache 2.x needs to look at the contents of a file being delivered–for example, when doing server-side-include processing–it normally memory-maps the file if the OS supports some form of mmap(2).

On some platforms, this memory-mapping improves performance. However, there are cases where memory-mapping can hurt the performance or even the stability of the httpd:

  • On some operating systems, mmap does not scale as well as read(2) when the number of CPUs increases. On multiprocessor Solaris servers, for example, Apache 2.x sometimes delivers server-parsed files faster when mmap is disabled.
  • If you memory-map a file located on an NFS-mounted filesystem and a process on another NFS client machine deletes or truncates the file, your process may get a bus error the next time it tries to access the mapped file content.

For installations where either of these factors applies, you should use EnableMMAP off to disable the memory-mapping of delivered files. (Note: This directive can be overridden on a per-directory basis.)

Sendfile

In situations where Apache 2.x can ignore the contents of the file to be delivered — for example, when serving static file content — it normally uses kernel sendfile support for the file if the OS supports the sendfile(2) operation.

On most platforms, using sendfile improves performance by eliminating separate read and send mechanics. However, there are cases where using sendfile can harm the stability of the httpd:

  • Some platforms may have broken sendfile support that the build system did not detect, especially if the binaries were built on another box and moved to such a machine with broken sendfile support.
  • With NFS-mounted files, the kernel may be unable to reliably serve the network file through its own cache.

For installations where either of these factors applies, you should use EnableSendfile off to disable sendfile delivery of file contents. (Note: This directive can be overridden on a per-directory basis.)

Process Creation

Prior to Apache 1.3 the MinSpareServers, MaxSpareServers, and StartServers settings all had drastic effects on benchmark results. In particular, Apache required a “ramp-up” period in order to reach a number of children sufficient to serve the load being applied. After the initial spawning of StartServers children, only one child per second would be created to satisfy the MinSpareServers setting. So a server being accessed by 100 simultaneous clients, using the default StartServers of 5, would take on the order of 95 seconds to spawn enough children to handle the load. This works fine in practice on real-life servers, because they aren’t restarted frequently, but it does really poorly on benchmarks which might only run for ten minutes.

The one-per-second rule was implemented in an effort to avoid swamping the machine with the startup of new children. If the machine is busy spawning children it can’t service requests. But it has such a drastic effect on the perceived performance of Apache that it had to be replaced. As of Apache 1.3, the code will relax the one-per-second rule. It will spawn one, wait a second, then spawn two, wait a second, then spawn four, and it will continue exponentially until it is spawning 32 children per second. It will stop whenever it satisfies the MinSpareServers setting.

This appears to be responsive enough that it’s almost unnecessary to twiddle the MinSpareServers, MaxSpareServers, and StartServers knobs. When more than 4 children are spawned per second, a message will be emitted to the ErrorLog. If you see a lot of these errors then consider tuning these settings. Use the mod_status output as a guide.

Related to process creation is process death induced by the MaxRequestsPerChild setting. By default this is 0, which means that there is no limit to the number of requests handled per child. If your configuration currently has this set to some very low number, such as 30, you may want to bump this up significantly. If you are running SunOS or an old version of Solaris, limit this to 10000 or so because of memory leaks.

When keep-alives are in use, children will be kept busy doing nothing but waiting for more requests on the already open connection. The default KeepAliveTimeout of 5 seconds attempts to minimize this effect. The tradeoff here is between network bandwidth and server resources. In no event should you raise this above about 60 seconds, as most of the benefits are lost.

Compile-Time Configuration Issues

Choosing an MPM

Apache 2.x supports pluggable concurrency models, called Multi-Processing Modules (MPMs). When building Apache, you must choose an MPM to use. There are platform-specific MPMs for some platforms: beos, mpm_netware, mpmt_os2, and mpm_winnt. For general Unix-type systems, there are several MPMs from which to choose. The choice of MPM can affect the speed and scalability of the httpd:

  • The worker MPM uses multiple child processes with many threads each. Each thread handles one connection at a time. Worker generally is a good choice for high-traffic servers because it has a smaller memory footprint than the prefork MPM.
  • The prefork MPM uses multiple child processes with one thread each. Each process handles one connection at a time. On many systems, prefork is comparable in speed to worker, but it uses more memory. Prefork’s threadless design has advantages over worker in some situations: it can be used with non-thread-safe third-party modules, and it is easier to debug on platforms with poor thread debugging support.

For more information on these and other MPMs, please see the MPM documentation.

Modules

Since memory usage is such an important consideration in performance, you should attempt to eliminate modules that you are not actually using. If you have built the modules as DSOs, eliminating modules is a simple matter of commenting out the associated LoadModule directive for that module. This allows you to experiment with removing modules and seeing if your site still functions in their absence.

If, on the other hand, you have modules statically linked into your Apache binary, you will need to recompile Apache in order to remove unwanted modules.

An associated question that arises here is, of course, what modules you need, and which ones you don’t. The answer here will, of course, vary from one web site to another. However, the minimal list of modules which you can get by with tends to include mod_mime, mod_dir, and mod_log_config. mod_log_config is, of course, optional, as you can run a web site without log files. This is, however, not recommended.

Atomic Operations

Some modules, such as mod_cache and recent development builds of the worker MPM, use APR’s atomic API. This API provides atomic operations that can be used for lightweight thread synchronization.

By default, APR implements these operations using the most efficient mechanism available on each target OS/CPU platform. Many modern CPUs, for example, have an instruction that does an atomic compare-and-swap (CAS) operation in hardware. On some platforms, however, APR defaults to a slower, mutex-based implementation of the atomic API in order to ensure compatibility with older CPU models that lack such instructions. If you are building Apache for one of these platforms, and you plan to run only on newer CPUs, you can select a faster atomic implementation at build time by configuring Apache with the --enable-nonportable-atomics option:

./buildconf
./configure --with-mpm=worker --enable-nonportable-atomics=yes

The --enable-nonportable-atomics option is relevant for the following platforms:

  • Solaris on SPARC
    By default, APR uses mutex-based atomics on Solaris/SPARC. If you configure with --enable-nonportable-atomics, however, APR generates code that uses a SPARC v8plus opcode for fast hardware compare-and-swap. If you configure Apache with this option, the atomic operations will be more efficient (allowing for lower CPU utilization and higher concurrency), but the resulting executable will run only on UltraSPARC chips.
  • Linux on x86
    By default, APR uses mutex-based atomics on Linux. If you configure with --enable-nonportable-atomics, however, APR generates code that uses a 486 opcode for fast hardware compare-and-swap. This will result in more efficient atomic operations, but the resulting executable will run only on 486 and later chips (and not on 386).

mod_status and ExtendedStatus On

If you include mod_status and you also set ExtendedStatus On when building and running Apache, then on every request Apache will perform two calls to gettimeofday(2) (or times(2) depending on your operating system), and (pre-1.3) several extra calls to time(2). This is all done so that the status report contains timing indications. For highest performance, set ExtendedStatus off (which is the default).

accept Serialization – multiple sockets

Warning:

This section has not been fully updated to take into account changes made in the 2.x version of the Apache HTTP Server. Some of the information may still be relevant, but please use it with care.

This discusses a shortcoming in the Unix socket API. Suppose your web server uses multiple Listen statements to listen on either multiple ports or multiple addresses. In order to test each socket to see if a connection is ready, Apache uses select(2). select(2) indicates that a socket has zero or at least one connection waiting on it. Apache’s model includes multiple children, and all the idle ones test for new connections at the same time. A naive implementation looks something like this (these examples do not match the code, they’re contrived for pedagogical purposes):

for (;;) {
    for (;;) {
        fd_set accept_fds;

        FD_ZERO (&accept_fds);
        for (i = first_socket; i <= last_socket; ++i) {
            FD_SET (i, &accept_fds);
        }
        rc = select (last_socket+1, &accept_fds, NULL, NULL, NULL);
        if (rc < 1) continue;
        new_connection = -1;
        for (i = first_socket; i <= last_socket; ++i) {
            if (FD_ISSET (i, &accept_fds)) {
                new_connection = accept (i, NULL, NULL);
                if (new_connection != -1) break;
            }
        }
        if (new_connection != -1) break;
    }
    process the new_connection;
}

But this naive implementation has a serious starvation problem. Recall that multiple children execute this loop at the same time, and so multiple children will block at select when they are in between requests. All those blocked children will awaken and return from select when a single request appears on any socket (the number of children which awaken varies depending on the operating system and timing issues). They will all then fall down into the loop and try to accept the connection. But only one will succeed (assuming there’s still only one connection ready), the rest will be blocked in accept. This effectively locks those children into serving requests from that one socket and no other sockets, and they’ll be stuck there until enough new requests appear on that socket to wake them all up. This starvation problem was first documented in PR#467. There are at least two solutions.

One solution is to make the sockets non-blocking. In this case the accept won’t block the children, and they will be allowed to continue immediately. But this wastes CPU time. Suppose you have ten idle children in select, and one connection arrives. Then nine of those children will wake up, try to accept the connection, fail, and loop back into select, accomplishing nothing. Meanwhile none of those children are servicing requests that occurred on other sockets until they get back up to the select again. Overall this solution does not seem very fruitful unless you have as many idle CPUs (in a multiprocessor box) as you have idle children, not a very likely situation.

Another solution, the one used by Apache, is to serialize entry into the inner loop. The loop looks like this (differences highlighted):

for (;;) {
    accept_mutex_on ();
    for (;;) {
        fd_set accept_fds;

        FD_ZERO (&accept_fds);
        for (i = first_socket; i <= last_socket; ++i) {
            FD_SET (i, &accept_fds);
        }
        rc = select (last_socket+1, &accept_fds, NULL, NULL, NULL);
        if (rc < 1) continue;
        new_connection = -1;
        for (i = first_socket; i <= last_socket; ++i) {
            if (FD_ISSET (i, &accept_fds)) {
                new_connection = accept (i, NULL, NULL);
                if (new_connection != -1) break;
            }
        }
        if (new_connection != -1) break;
    }
    accept_mutex_off ();
    process the new_connection;
}

The functions accept_mutex_on and accept_mutex_off implement a mutual exclusion semaphore. Only one child can have the mutex at any time. There are several choices for implementing these mutexes. The choice is defined in src/conf.h (pre-1.3) or src/include/ap_config.h (1.3 or later). Some architectures do not have any locking choice made, on these architectures it is unsafe to use multiple Listen directives.

The directive AcceptMutex can be used to change the selected mutex implementation at run-time.

AcceptMutex flock
This method uses the flock(2) system call to lock a lock file (located by the LockFile directive).
AcceptMutex fcntl
This method uses the fcntl(2) system call to lock a lock file (located by the LockFile directive).
AcceptMutex sysvsem
(1.3 or later) This method uses SysV-style semaphores to implement the mutex. Unfortunately SysV-style semaphores have some bad side-effects. One is that it’s possible Apache will die without cleaning up the semaphore (see the ipcs(8) man page). The other is that the semaphore API allows for a denial of service attack by any CGIs running under the same uid as the webserver (i.e., all CGIs, unless you use something like suexec or cgiwrapper). For these reasons this method is not used on any architecture except IRIX (where the previous two are prohibitively expensive on most IRIX boxes).
AcceptMutex pthread
(1.3 or later) This method uses POSIX mutexes and should work on any architecture implementing the full POSIX threads specification; however, it appears to work only on Solaris (2.5 or later), and even then only in certain configurations. If you experiment with this you should watch out for your server hanging and not responding. Static-content-only servers may work just fine.
AcceptMutex posixsem
(2.0 or later) This method uses POSIX semaphores. The semaphore ownership is not recovered if a thread in the process holding the mutex segfaults, resulting in a hang of the web server.

If your system has another method of serialization which isn’t in the above list then it may be worthwhile adding code for it to APR.

Another solution that has been considered but never implemented is to partially serialize the loop — that is, let in a certain number of processes. This would only be of interest on multiprocessor boxes where it’s possible multiple children could run simultaneously, and the serialization actually doesn’t take advantage of the full bandwidth. This is a possible area of future investigation, but priority remains low because highly parallel web servers are not the norm.

Ideally you should run servers without multiple Listen statements if you want the highest performance. But read on.

accept Serialization – single socket

The above is fine and dandy for multiple socket servers, but what about single socket servers? In theory they shouldn’t experience any of these same problems because all children can just block in accept(2) until a connection arrives, and no starvation results. In practice this hides almost the same “spinning” behaviour discussed above in the non-blocking solution. The way that most TCP stacks are implemented, the kernel actually wakes up all processes blocked in accept when a single connection arrives. One of those processes gets the connection and returns to user-space, the rest spin in the kernel and go back to sleep when they discover there’s no connection for them. This spinning is hidden from the user-land code, but it’s there nonetheless. This can result in the same load-spiking wasteful behaviour that a non-blocking solution to the multiple sockets case can.

For this reason we have found that many architectures behave more “nicely” if we serialize even the single socket case. So this is actually the default in almost all cases. Crude experiments under Linux (2.0.30 on a dual Pentium Pro 166 with 128MB RAM) have shown that serialization of the single-socket case causes less than a 3% decrease in requests per second compared to unserialized single-socket. But unserialized single-socket showed an extra 100ms latency on each request. This latency is probably a wash on long-haul lines, and only an issue on LANs. If you want to override the single socket serialization you can define SINGLE_LISTEN_UNSERIALIZED_ACCEPT and then single-socket servers will not serialize at all.

Lingering Close

As discussed in draft-ietf-http-connection-00.txt section 8, in order for an HTTP server to reliably implement the protocol it needs to shutdown each direction of the communication independently (recall that a TCP connection is bi-directional, each half is independent of the other). This fact is often overlooked by other servers, but is correctly implemented in Apache as of 1.2.

When this feature was added to Apache it caused a flurry of problems on various versions of Unix because of a shortsightedness. The TCP specification does not state that the FIN_WAIT_2 state has a timeout, but it doesn’t prohibit it. On systems without the timeout, Apache 1.2 induces many sockets stuck forever in the FIN_WAIT_2 state. In many cases this can be avoided by simply upgrading to the latest TCP/IP patches supplied by the vendor. In cases where the vendor has never released patches (i.e., SunOS4 — although folks with a source license can patch it themselves) we have decided to disable this feature.

There are two ways of accomplishing this. One is the socket option SO_LINGER. But as fate would have it, this has never been implemented properly in most TCP/IP stacks. Even on those stacks with a proper implementation (i.e., Linux 2.0.31) this method proves to be more expensive (cputime) than the next solution.

For the most part, Apache implements this in a function called lingering_close (in http_main.c). The function looks roughly like this:

void lingering_close (int s)
{
    char junk_buffer[2048];

    /* shutdown the sending side */
    shutdown (s, 1);

    signal (SIGALRM, lingering_death);
    alarm (30);

    for (;;) {
        select (s for reading, 2 second timeout);
        if (error) break;
        if (s is ready for reading) {
            if (read (s, junk_buffer, sizeof (junk_buffer)) <= 0)
                break;
            /* just toss away whatever is here */
        }
    }

    close (s);
}

This naturally adds some expense at the end of a connection, but it is required for a reliable implementation. As HTTP/1.1 becomes more prevalent, and all connections are persistent, this expense will be amortized over more requests. If you want to play with fire and disable this feature you can define NO_LINGCLOSE, but this is not recommended at all. In particular, as HTTP/1.1 pipelined persistent connections come into use, lingering_close is an absolute necessity (and pipelined connections are faster, so you want to support them).

Scoreboard File

Apache’s parent and children communicate with each other through something called the scoreboard. Ideally this should be implemented in shared memory. For those operating systems that we either have access to, or have been given detailed ports for, it typically is implemented using shared memory. The rest default to using an on-disk file. The on-disk file is not only slow, but it is unreliable (and less featured). Peruse the src/main/conf.h file for your architecture and look for either USE_MMAP_SCOREBOARD or USE_SHMGET_SCOREBOARD. Defining one of those two (as well as their companions HAVE_MMAP and HAVE_SHMGET respectively) enables the supplied shared memory code. If your system has another type of shared memory, edit the file src/main/http_main.c and add the hooks necessary to use it in Apache. (Send us back a patch too please.)

Historical note: The Linux port of Apache didn’t start to use shared memory until version 1.2 of Apache. This oversight resulted in really poor and unreliable behaviour of earlier versions of Apache on Linux.

DYNAMIC_MODULE_LIMIT

If you have no intention of using dynamically loaded modules (you probably don’t if you’re reading this and tuning your server for every last ounce of performance) then you should add -DDYNAMIC_MODULE_LIMIT=0 when building your server. This will save RAM that’s allocated only for supporting dynamically loaded modules.
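
As a hedged sketch, the define can be passed through CFLAGS when configuring the build (the prefix is an example):

CFLAGS="-DDYNAMIC_MODULE_LIMIT=0" ./configure --prefix=/usr/local/apache2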


Appendix: Detailed Analysis of a Trace

Here is a system call trace of Apache 2.0.38 with the worker MPM on Solaris 8. This trace was collected using:

truss -l -p httpd_child_pid.

The -l option tells truss to log the ID of the LWP (lightweight process–Solaris’s form of kernel-level thread) that invokes each system call.

Other systems may have different system call tracing utilities such as strace, ktrace, or par. They all produce similar output.

In this trace, a client has requested a 10KB static file from the httpd. Traces of non-static requests or requests with content negotiation look wildly different (and quite ugly in some cases).

In this trace, the listener thread is running within LWP #67.

Note the lack of accept(2) serialization. On this particular platform, the worker MPM uses an unserialized accept by default unless it is listening on multiple ports.

Upon accepting the connection, the listener thread wakes up a worker thread to do the request processing. In this trace, the worker thread that handles the request is mapped to LWP #65.

In order to implement virtual hosts, Apache needs to know the local socket address used to accept the connection. It is possible to eliminate this call in many situations (such as when there are no virtual hosts, or when Listen directives are used which do not have wildcard addresses). But no effort has yet been made to do these optimizations.

The brk(2) calls allocate memory from the heap. It is rare to see these in a system call trace, because the httpd uses custom memory allocators (apr_pool and apr_bucket_alloc) for most request processing. In this trace, the httpd has just been started, so it must call malloc(3) to get the blocks of raw memory with which to create the custom memory allocators.

Next, the worker thread puts the connection to the client (file descriptor 9) in non-blocking mode. The setsockopt(2) and getsockopt(2) calls are a side-effect of how Solaris’s libc handles fcntl(2) on sockets.

The worker thread reads the request from the client.

This httpd has been configured with Options FollowSymLinks and AllowOverride None. Thus it doesn’t need to lstat(2) each directory in the path leading up to the requested file, nor check for .htaccess files. It simply calls stat(2) to verify that the file: 1) exists, and 2) is a regular file, not a directory.

In this example, the httpd is able to send the HTTP response header and the requested file with a single sendfilev(2) system call. Sendfile semantics vary among operating systems. On some other systems, it is necessary to do a write(2) or writev(2) call to send the headers before calling sendfile(2).

This write(2) call records the request in the access log. Note that one thing missing from this trace is a time(2) call. Unlike Apache 1.3, Apache 2.x uses gettimeofday(3) to look up the time. On some operating systems, like Linux or Solaris, gettimeofday has an optimized implementation that doesn’t require as much overhead as a typical system call.

The worker thread does a lingering close of the connection.

Finally the worker thread closes the file that it has just delivered and blocks until the listener assigns it another connection.

Meanwhile, the listener thread is able to accept another connection as soon as it has dispatched this connection to a worker thread (subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn’t apparent from this trace, the next accept(2) can (and usually does, under high load conditions) occur in parallel with the worker thread’s handling of the just-accepted connection.

Permission Scheme for WordPress, and folder security of WordPress

Example Permission Modes

Mode   Str Perms    Explanation
0477   -r--rwxrwx   owner has read only (4), group and others have rwx (7)
0677   -rw-rwxrwx   owner has rw only (6), group and others have rwx (7)
0444   -r--r--r--   all have read only (4)
0666   -rw-rw-rw-   all have rw only (6)
0400   -r--------   owner has read only (4), group and others have no permission (0)
0600   -rw-------   owner has rw only, group and others have no permission
0470   -r--rwx---   owner has read only, group has rwx, others have no permission
0407   -r-----rwx   owner has read only, others have rwx, group has no permission
0670   -rw-rwx---   owner has rw only, group has rwx, others have no permission
0607   -rw----rwx   owner has rw only, group has no permission, others have rwx
See full list 0000 to 0777.
/
|- index.php
|- wp-admin
|  `- wp-admin.css
|- wp-blog-header.php
|- wp-comments-post.php
|- wp-commentsrss2.php
|- wp-config.php
|- wp-content
|  |- cache
|  |- plugins
|  |- themes
|  `- uploads
|- wp-cron.php
|- wp-includes
`- xmlrpc.php


Permissions will be different from host to host, so this guide only details general principles. It cannot cover all cases. This guide applies to servers running a standard setup (note, for shared hosting using “suexec” methods, see below).

Typically, all files should be owned by your user (FTP) account on your web server, and should be writable by that account. On shared hosts, files should never be owned by the webserver process itself (sometimes this is the www, apache, or nobody user).

Any file that needs write access from WordPress should be owned or group-owned by the user account used by WordPress (which may be different from the server account). For example, you may have a user account that lets you FTP files back and forth to your server, but your server itself may run using a separate user, in a separate usergroup, such as dhapache or nobody. If WordPress is running as the FTP account, that account needs to have write access, i.e., be the owner of the files, or belong to a group that has write access. In the latter case, that would mean permissions are set more permissively than default (for example, 775 rather than 755 for folders, and 664 instead of 644).

The file and folder permissions of WordPress should be the same for most users, depending on the type of installation you performed and the umask settings of your system environment at the time of install.

Typically, all core WordPress files should be writable only by your user account (or the httpd account, if different). (Sometimes though, multiple ftp accounts are used to manage an install, and if all ftp users are known and trusted, i.e., not a shared host, then assigning group writable may be appropriate. Ask your server admin for more info.) However, if you utilize mod_rewrite Permalinks or other .htaccess features you should make sure that WordPress can also write to your /.htaccess file.

If you want to use the built-in theme editor, all files need to be group writable. Try using it before modifying file permissions; it should work. (This may be true if different users uploaded the WordPress package and the Plugin or Theme. This wouldn’t be a problem for Plugins and Themes installed via the admin. When uploading files with different FTP users, group writable is needed. On shared hosting, make sure the group is exclusive to users you trust; the apache user shouldn’t be in the group and shouldn’t own files.)

Some plugins require the /wp-content/ folder be made writable, but in such cases they will let you know during installation. In some cases, this may require assigning 755 permissions. The same is true for /wp-content/cache/ and maybe /wp-content/uploads/ (if you’re using MultiSite you may also need to do this for /wp-content/blogs.dir/).

Additional directories under /wp-content/ should be documented by whatever plugin / theme requires them. Permissions will vary.

Shared Hosting with suexec

The above may not apply to shared hosting systems that use the “suexec” approach for running PHP binaries. This is a popular approach used by many web hosts. For these systems, the php process runs as the owner of the php files themselves, allowing for a simpler configuration and a more secure environment for the specific case of shared hosting.

Note: suexec methods should NEVER be used on a single-site server configuration, they are more secure only for the specific case of shared hosting.

In such an suexec configuration, the correct permissions scheme is simple to understand.

  • All files should be owned by the actual user’s account, not the user account used for the httpd process.
  • Group ownership is irrelevant, unless there’s specific group requirements for the web-server process permissions checking. This is not usually the case.
  • All directories should be 755 or 750.
  • All files should be 644 or 640. Exception: wp-config.php should be 600 to prevent other users on the server from reading it.
  • No directories should ever be given 777, even upload directories. Since the php process is running as the owner of the files, it gets the owner’s permissions and can write to even a 755 directory.

In this specific type of setup, WordPress will detect that it can directly create files with the proper ownership, and so it will not ask for FTP credentials when upgrading or installing plugins.
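
A shell sketch of this scheme (the install path is a placeholder):

find /path/to/wordpress -type d -exec chmod 755 {} \;
find /path/to/wordpress -type f -exec chmod 644 {} \;
chmod 600 /path/to/wordpress/wp-config.php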

Using an FTP Client

FTP programs (“clients”) allow you to set permissions for files and directories on your remote host. This function is often called chmod or set permissions in the program menu.

In a WordPress install, two files that you will probably want to alter are the index page, and the css which controls the layout. Here’s how you change index.php – the process is the same for any file.

In the screenshot below, look at the last column – that shows the permissions. It looks a bit confusing, but for now just note the sequence of letters.

Initial permissions

Right-click ‘index.php’ and select ‘File Permissions’
A popup screen will appear.

Altering file permissions

Don’t worry about the check boxes. Just delete the ‘Numeric value:’ and enter the number you need – in this case it’s 666. Then click OK.

Permissions have been altered

You can now see that the file permissions have been changed.

Unhide the hidden files

By default, most FTP Clients, including FileZilla, keep hidden files, those files beginning with a period (.), from being displayed. But, at some point, you may need to see your hidden files so that you can change the permissions on that file. For example, you may need to make your .htaccess file, the file that controls permalinks, writeable.

To display hidden files in FileZilla, it is necessary to select ‘View’ from the top menu, then select ‘Show hidden files’. The screen display of files will refresh and any previously hidden files should come into view.

To get FileZilla to always show hidden files – under Edit, Settings, Remote File List, check the Always show hidden files box.

In the latest version of Filezilla, the ‘Show hidden files’ option was moved to the ‘Server’ tab. Select ‘Force show hidden files.’

 

Using the Command Line

If you have shell/SSH access to your hosting account, you can use chmod to change file permissions, which is the preferred method for experienced users. Before you start using chmod, it is recommended that you read some tutorials to make sure you understand what you can achieve with it. Setting incorrect permissions can take your site offline, so please take your time.

  • Unix Permissions
  • Apple Chmod Reference

You can make all the files in your wp-content directory writable in two steps, but before making every single file and folder writable you should first try safer alternatives like modifying just the directory. Try each of these commands first, and if they don’t work then go recursive, which will make even your themes’ image files writable. Replace DIR with the folder you want to write in:
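
A representative progression, from least to most permissive (try one at a time):

chmod -v 746 DIR
chmod -v 756 DIR
chmod -v 766 DIR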

If those fail to allow you to write, try them all again in order, except this time replace -v with -R, which will recursively change each file located in the folder. If after that you still can’t write, you may now try 777.

 

About Chmod

chmod is a unix command that means “change mode” on a file. The -R flag means to apply the change to every file and directory inside of wp-content. 766 is the mode we are changing the directory to; it means that the directory is readable and writable by WordPress and any and all other users on your system. Finally, we have the name of the directory we are going to modify, wp-content. If 766 doesn’t work, you can try 777, which makes all files and folders readable, writable, and executable by all users, groups, and processes.

If you use Permalinks you should also change permissions of .htaccess to make sure that WordPress can update it when you change settings such as adding a new page, redirect, or category, which requires updating the .htaccess file when mod_rewrite Permalinks are being used.

  1. Go to the main directory of WordPress
  2. Enter chmod -v 666 .htaccess
NOTE: From a security standpoint, even a small amount of protection is preferable to a world-writeable directory. Start with low permissive settings like 744, working your way up until it works. Only use 777 if necessary, and hopefully only for a temporary amount of time.

 

The dangers of 777

The crux of this permission issue is how your server is configured. The username you use to FTP or SSH into your server is most likely not the username used by the server application itself to serve pages.

Often the Apache server is ‘owned’ by the dhapache or nobody user accounts. These accounts have a limited amount of access to files on the server, for a very good reason. By setting your personal files and folders owned by your user account to be World-Writable, you are literally making them World Writable. Now the dhapache and nobody users that run your server, serving pages, executing php interpreters, etc.. will have full access to your user account files.

This provides an avenue for someone to gain access to your files by hijacking basically any process on your server; this also includes any other users on your machine. So you should think carefully about modifying permissions on your machine. I’ve never come across anything that needed more than 767, so when you see 777 ask why it’s necessary.

 

The Worst Outcome

The worst that can happen as a result of using 777 permissions on a folder or even a file is that if a malicious cracker or entity is able to upload a devious file or modify a current file to execute code, they will have complete control over your blog, including your database information and password.

 

Find a Workaround

It’s usually pretty easy to have the enhanced features provided by the impressive WordPress plugins available, without having to put yourself at risk. Contact the plugin author or your server support and request a workaround.

 

Finding Secure File Permissions

The .htaccess file is one of the files accessed by the owner of the process running the server. So if you set the permissions too low, your server won't be able to access the file and will throw an error. Therein lies the method for finding the most secure settings: start too restrictive and increase the permissions until it works.

 

Example Permission Settings

The following example uses a custom-compiled php-cgi binary and a custom php.ini file located in the cgi-bin directory for executing PHP scripts. To prevent the interpreter and php.ini file from being accessed directly in a web browser, they are protected with a .htaccess file.

Default Permissions (umask 022)
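(A sketch of what this listing looks like, reconstructed from the before/after modes described in the sections below; the paths are illustrative.)

644 -rw-r--r--  /home/user/cgi-bin/.htaccess
644 -rw-r--r--  /home/user/cgi-bin/php.ini
755 -rwxr-xr-x  /home/user/cgi-bin/php.cgi
755 -rwxr-xr-x  /home/user/cgi-bin/php5.cgi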

Secured Permissions
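(Again a sketch, reconstructed from the modes discussed below; paths are illustrative.)

604 -rw----r--  /home/user/cgi-bin/.htaccess
600 -rw-------  /home/user/cgi-bin/php.ini
711 -rwx--x--x  /home/user/cgi-bin/php.cgi
100 ---x------  /home/user/cgi-bin/php5.cgi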

.htaccess permissions

644 > 604 – The read permission bit for the group owner of the .htaccess file was removed. 644 is normally required and recommended for .htaccess files.

 

php.ini permissions

644 > 600 – Previously, every group and every user with access to the server could read php.ini, even by just requesting it from the site. The tricky thing is that because the php.ini file is only used by php.cgi, we only needed to make sure the php.cgi process had access. php.cgi runs as the same user that owns both files, so that single user is now the only user able to access this file.

 

php.cgi permissions

755 > 711 – This file is a compiled php-cgi binary used instead of mod_php or the default vanilla PHP provided by the hosting company. The default permissions for this file are 755.

 

php5.cgi permissions

755 > 100 – Because the user account is the owner of the process running the PHP CGI, no other user or group needs access, so we disable all access except execute access. This is interesting because it really works: you can try reading the file, writing to the file, etc., but the only access you have is to run PHP scripts. And as the owner of the file you can always change the permission modes back again.

5 useful URL rewriting examples using .htaccess

If you are looking for examples of URL rewriting, this post might be useful for you.

In this post, I've given five useful examples of URL rewriting using .htaccess.

If you don't know anything about URL rewriting, please check my older post about URL rewriting using .htaccess.

 

Now let’s look at the examples

1) Rewriting product.php?id=12 to product-12.html

It is a simple rewrite in which the .php extension is hidden from the browser's address bar and the dynamic URL (containing the "?" character) is presented as a static URL.

RewriteEngine on
RewriteRule ^product-([0-9]+)\.html$ product.php?id=$1

2) Rewriting product.php?id=12 to product/ipod-nano/12.html

SEO experts always suggest displaying the main keyword in the URL. With the following URL rewriting technique you can display the name of the product in the URL.

RewriteEngine on
RewriteRule ^product/([a-zA-Z0-9_-]+)/([0-9]+)\.html$ product.php?id=$2

3) Redirecting a non-www URL to the www URL

If you type yahoo.com in a browser, it is redirected to www.yahoo.com. If you want to do the same with your website, put the following code in your .htaccess file. What is the benefit of this kind of redirection? Please check the post about SEO-friendly (301) redirects in PHP and .htaccess.

RewriteEngine On
RewriteCond %{HTTP_HOST} ^optimaxwebsolutions\.com$
RewriteRule (.*) http://www.optimaxwebsolutions.com/$1 [R=301,L]

4) Rewriting yoursite.com/user.php?username=xyz to yoursite.com/xyz

Have you checked zorpia.com? If you type http://zorpia.com/roshanbh233 in a browser, you can see my profile over there. If you want to do the same kind of rewriting, i.e. http://yoursite.com/xyz to http://yoursite.com/user.php?username=xyz, then you can add the following code to the .htaccess file. Note that a pattern this broad will also match real files and folders, so you may want to guard it; see the sketch after the rules below.

RewriteEngine On
RewriteRule ^([a-zA-Z0-9_-]+)$ user.php?username=$1
RewriteRule ^([a-zA-Z0-9_-]+)/$ user.php?username=$1
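(A sketch of those guard conditions — an addition, not part of the original post: the two RewriteCond lines skip the rewrite whenever the request matches an existing file or directory.)

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([a-zA-Z0-9_-]+)/?$ user.php?username=$1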

5) Redirecting the domain to a new subfolder inside public_html

Suppose you've redeveloped your site and all the new development resides inside the "new" folder under the root folder, so the new version of the website can be accessed at "test.com/new". Moving these files to the root folder can be a hectic process, so instead you can place the following code in the .htaccess file under the root folder of the website. As a result, www.test.com will point to the files inside the "new" folder.

 

RewriteEngine On
RewriteCond %{HTTP_HOST} ^test\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.test\.com$
RewriteCond %{REQUEST_URI} !^/new/
RewriteRule (.*) /new/$1


 

OpenCV setup and library configuration on Ubuntu/Debian

Installing OpenCV 2.3 on Ubuntu: I had a lot of problems initially installing and porting the OpenCV library to Python, but after reading a lot of blogs I finally found a solution, which I have posted below.
1) Install all prerequisites:
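(The package list was missing from the original; the following is a sketch of the usual build dependencies for OpenCV 2.3 on Ubuntu — adjust the package names for your release.)

sudo apt-get install build-essential cmake pkg-config libgtk2.0-dev libavcodec-dev libavformat-dev libswscale-dev libjpeg-dev libpng-dev libtiff-dev python-numpy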

 

2) Install the Python development headers:
sudo apt-get install python-dev

3) Download the source code:
http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.3/

4) In a terminal, go to the directory where OpenCV was downloaded and unzip the package:
(Note: it is recommended that you move the downloaded OpenCV package to the home/ directory.)
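(The extract command itself was omitted; assuming the 2.3.1 tarball from the link above — the exact filename depends on your download — it would be something like:)

tar -xvf OpenCV-2.3.1a.tar.bz2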

5) Now make a new directory called build inside the extracted source folder and go into it:
mkdir build && cd build

6) Run CMake:
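(The CMake invocation was missing from the original; a typical configuration for a system-wide install with Python support — the flags here are an assumption for the 2.3 series — is:)

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON ..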

7) Now run make:

make

8) Install it system-wide (make it permanent):

sudo make install

9) Configure OpenCV to use shared libraries:

sudo gedit /etc/ld.so.conf.d/opencv.conf

Add the following line at the end of the file (it may be an empty file; that is OK) and then save it:

/usr/local/lib

Close the file and run the following command to configure the library:

sudo ldconfig

Alternatively, run: export LD_LIBRARY_PATH=/usr/local/lib
10) Open your .bashrc file and add the following:
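(The lines to add were missing from the original; a common choice for OpenCV installs — an assumption here — is to extend the pkg-config search path:)

export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig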

Save and close the file

11) Reboot the system


Code::Blocks is a GPL-licensed, cross-platform IDE. This is a tutorial on using Code::Blocks with OpenCV.

There are 2 different ways to configure OpenCV (shown below), but first you should create a new project:

Create a simple console project.

We can use the project wizard to create a simple console project. Here are the steps.

Give this project the name "test_opencv".

Then copy some sample code into main.cpp, such as the contents of "C:\OpenCV\samples\cpp\kutayzorlu.cpp".
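(If you don't have a sample handy, a minimal stand-in for main.cpp — the image filename is just a placeholder — looks like this:)

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load an image from disk; replace "test.jpg" with any image you have.
    cv::Mat img = cv::imread("test.jpg");
    if (img.empty())
        return 1; // bail out if the file could not be read

    // Show the image in a window and wait for a key press.
    cv::imshow("test_opencv", img);
    cv::waitKey(0);
    return 0;
}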

Configuring Code::Blocks for OpenCV v2.4

Now you need to configure the compiler to find the OpenCV header files and libraries. There are 2 different ways you can configure Code::Blocks for OpenCV v2.4. OpenCV was originally made of just 4 libraries (until v2.1), but now there are many more library files, so you are highly recommended to use the tool "pkg-config" (as mentioned on CompileOpenCVUsingLinux); but if you are using MinGW on Windows, you might prefer the manual method.

  1. Automatically with the pkg-config tool (easier with Linux or Mac).
  2. Manually adding the OpenCV library (easier with MinGW on Windows).

Configuring Code::Blocks for OpenCV using pkg-config

pkg-config is a free command-line tool (available on Windows, Mac and Linux) that should have been set up automatically if you built OpenCV with CMake. Open a command line and enter pkg-config opencv --cflags or pkg-config opencv --libs; it will display the compiler and linker arguments needed to compile your own OpenCV projects without worrying about where OpenCV is installed, which version you have installed, or how to link to the library on each operating system. If you get an error that pkg-config is unknown, install pkg-config yourself (Linux or Mac can use the official release, but for Windows (MinGW) you should use the tool pkg-config-lite instead).

If you were compiling your project on the command line you could type:
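(The command was omitted in the original; with the project above it would be something along these lines:)

g++ main.cpp -o test_opencv `pkg-config opencv --cflags --libs`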

(Note: pkg-config is surrounded by back-tick characters, not quote or apostrophe characters; the back-tick is usually on the same key as the tilde ~, next to the 1 and Esc keys.)

To set up Code::Blocks to use pkg-config, first right-click on your project and open the "Build options …" dialog.

Now you can simply put this into "Linker settings -> Other linker options":

`pkg-config opencv --libs`

And put this into "Compiler -> Other options":

`pkg-config opencv --cflags`

(Remember to include the back-tick characters, by copy-pasting those lines directly into Code::Blocks.)

The beauty of this method is that it should work for Linux, Windows and Mac, and for all versions of OpenCV, whereas the old manual method has different lib filenames for different Operating Systems and different versions of OpenCV.

Troubleshooting

If you have tried the pkg-config method above but Code::Blocks does not find the OpenCV headers and libs, yet you have verified that pkg-config works for OpenCV on a command line, then Code::Blocks probably doesn't know where pkg-config is installed. You should place a link in "/usr/bin/" (Linux & Mac) or "C:\Windows\" (Windows) so that it will be found. For Linux or Mac, you can find the path to pkg-config by typing this into a terminal:

which pkg-config

This might print something like "/opt/local/bin/pkg-config".

So then type this in a terminal (adjusting the first path to whatever was printed above):

sudo ln -s /opt/local/bin/pkg-config /usr/bin/pkg-config

This makes pkg-config accessible from the common program folder "/usr/bin/" as well as its actual location.

Configuring Code::Blocks for OpenCV v2.4 manually

In this tutorial I will be using OpenCV v2.4.2 and Code::Blocks v10.05 with the GNU compiler (MinGW) on Windows 7. To work with OpenCV in Code::Blocks you just need to add some OpenCV libraries and the folder with the OpenCV header files. Following are the simple steps that I followed:

1. Download OpenCV for Windows from http://sourceforge.net/projects/opencvlibrary/files/opencv-win/.

2. Install OpenCV to "C:\OpenCV" (adjust the paths below if you installed somewhere else).

3. In Code::Blocks, go to the menu "Settings > Compiler and Debugger > Search Directories". Then click "Add" and add the directory "C:\OpenCV\include\opencv". This will let your project find the OpenCV header include files when compiling (before linking):

4. In the Linker tab, add the directory "C:\OpenCV\build\x86\mingw\lib":

5. Now click on "Linker Settings". Add all the .lib files from "C:\OpenCV\build\x86\mingw\lib" (many files). This will let your project link to the OpenCV libraries:

6. That’s it!! Now you are ready to run your first OpenCV program. I tested a sample program “kutayzorlu” in the folder “C:\OpenCV\samples\cpp\kutayzorlu.cpp”.

Alternative method, for OpenCV 2.2

In order to make OpenCV 2.2 (containing C++ code) work under Windows with Code::Blocks and MinGW, Matthew (mazeus12 on the mailing list) did the following:

As mentioned in "OpenCV-2.2.0-win-README.txt", "OpenCV-2.2.0-win32-vs2010.exe" does not contain binaries for MinGW, so they need to be built from the contents of "OpenCV-2.2.0-win.zip".

Steps to build OpenCV 2.2 with Code::Blocks and MinGW:

1. Install Code::Blocks (10.05) with the (MinGW) C++ compiler option. This should, among other things, install the C++ compiler and mingw32-make to "C:\Program Files\CodeBlocks\MinGW\bin". (I also tried to install the latest MinGW using "mingw-get install gcc g++ mingw32-make" from www.mingw.org, but I got an error extracting some files…)

2. Add "C:\Program Files\CodeBlocks\MinGW\bin" to the system PATH. (At your own judgment, remove any other paths to MinGW; somehow the Dev-Cpp MinGW paths, with probably different versions, messed up the build process.)

3. Install CMake (2.8)

4. Extract "OpenCV-2.2.0-win.zip" to "C:\OpenCV-2.2.0-win" (It creates a second folder so the final destination looks like that: "C:\OpenCV-2.2.0-win\OpenCV-2.2.0")

5. Run CMake (cmake-gui)

6. Set the source code: "C:\OpenCV-2.2.0-win\OpenCV-2.2.0"

7. Set where to build the binaries: e.g. "C:\OpenCV2.2MinGW"

8. Press configure

9. Let CMake create the new folder

10. Specify the generator: MinGW Makefiles

11. Select “Specify native compilers” and click next

12. For C set: C:/Program Files/CodeBlocks/MinGW/bin/gcc.exe

13. For C++ set: C:/Program Files/CodeBlocks/MinGW/bin/g++.exe

14. Click finish

15. In the configuration screen, type "RELEASE" (or "DEBUG" if you want to build a debug version) for "CMAKE_BUILD_TYPE". Select BUILD_EXAMPLES if you want. (I didn't change anything else here, like "WITH_TBB" or "WITH_QT"; I'll try those when the time comes to use TBB or Qt.)

16. Click configure again

17. Click generate

18. Close cmake

19. Go to the command prompt and inside the folder "C:\OpenCV2.2MinGW" type "mingw32-make" and hit enter (takes some time)

20. Then type "mingw32-make install" and hit enter again

21. Open Code::Blocks and create a new C++ project (Configuration similar to the guide: CodeBlocks).

22. In menu: “Project/Build options/Linker settings/Link libraries” add "C:\OpenCV2.2MinGW\lib\libopencv_calib3d220.dll.a" and all the other *.dll.a files in this folder

23. In menu: "Project/Build options/Search directories/Compiler" add "C:\OpenCV2.2MinGW\include" (includes in a new program should then look like "#include <opencv2/opencv.hpp>", "#include <opencv2/core/core.hpp>", etc.)

24. In menu: "Project/Build options/Search directories/Linker" add "C:\OpenCV2.2MinGW\lib". The options in steps 22 – 24 can also be added as global options in menu: Settings/Compiler and Debugger/Global compiler settings/…, so they will apply to any project and opened *.cpp file.

25. If necessary, in menu: "Settings/Compiler and Debugger/Global compiler settings/Toolchain executables", specify "C:\Program Files\CodeBlocks\MinGW" as the compiler's installation directory.

26. Open a sample file (or import it into the project), e.g. "C:\OpenCV2.2MinGW\samples\cpp\dft.cpp", and build it. (If the *.cpp files do not exist in this folder, you can find them in the initial folder where you extracted "OpenCV-2.2.0-win.zip", i.e. "C:\OpenCV-2.2.0-win\OpenCV-2.2.0\samples\cpp".)

27. Add "C:\OpenCV2.2MinGW\bin" to the system path

28. Run the program

Configuring Code::Blocks using the old method, for OpenCV v1.1

Set the include header file path (OpenCV v1.1):

Set the library path (OpenCV v1.1):

Add the libraries (OpenCV v1.1) directive:

Here is another way you can add the include path and lib path in Code::Blocks

First, you need to add a global variable in Code::Blocks. You can open this dialog via the "Settings > Global variables" menu.

Then you can define the #cv.include and #cv.lib paths. On my system:

fill the include field (this is where your OpenCV include path is located) with: $(#cv)\OpenCV-2.1.0\include\opencv

fill the lib field (this is where your OpenCV libraries are located) with: $(#cv)\opencv_build\lib

Later, in your OpenCV project, you can change the build options as below:

Also, you can add the libraries by using these linker options:
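(The original screenshot of these options is missing; for an OpenCV 2.1 build the linker options would typically be something like the following — the exact library names are an assumption and depend on your build:)

-lcv210 -lcxcore210 -lhighgui210 -lcvaux210 -lml210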