www.logicsupport.com

Recent Updates

LogicSupport Review (20/09/09)

Please check out the new series of LogicSupport reviews.

New Expansion (03/04/09)

LogicSupport is proud to announce that we have completed 5 years in operation and, with an overall staff strength of over 50, we continue to expand and offer some of the best support services available!

New Semi Dedicated Plans (04/01/09)

Not ready to hire your own dedicated techs yet? Please check out our new semi-dedicated plans. They are sure to meet your needs and help reduce your support costs without compromising quality.


July 20, 2010

UltraMonkey – Heartbeat Configuration

Linux HA Solution

This one took a touch longer than we anticipated. Though the configurations were set up and running, testing was a crucial aspect of implementing this HA solution. Thanks to Nikhil for completing the UltraMonkey project successfully. If you are looking to set up a high-availability cluster using open source software, you can contact our team, which has the skill and experience to implement this in any cluster.

Ultra Monkey is a project to create load-balanced and highly available network services; for example, a cluster of web servers that appears as a single web server to end users. The service may be for end users across the world connected via the internet, or for enterprise users connected via an intranet. Ultra Monkey makes use of the Linux operating system to provide a flexible solution that can be tailored to a wide range of needs, from small clusters of only two nodes to large systems serving thousands of connections per second.

The advantage of using a load balancer over round robin DNS is that it takes care of the load on the web server nodes and tries to direct requests to the node with the least load, and it also takes care of connections/sessions. Many web applications (e.g. forum software, shopping carts, etc.) make use of sessions, and if you are in a session on Apache node 1, you would lose that session if node 2 suddenly served your requests. In addition, if one of the Apache nodes goes down, the load balancer notices this and directs all incoming requests to the remaining node, which would not be possible with round robin DNS.

Configuring UltraMonkey in Ubuntu 9.10

For this setup, we need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that will be shared by the load balancer nodes and used for incoming HTTP requests.

* Apache node 1: apachenode1.com (webserver1) – IP address: 192.168.1.101; Apache document root: /var/www

* Apache node 2: apachenode2.com (webserver2) – IP address: 192.168.1.102; Apache document root: /var/www

* Load Balancer node 1: loadb1.com (loadb1) – IP address: 192.168.1.103

* Load Balancer node 2: loadb2.com (loadb2) – IP address: 192.168.1.104

* Virtual IP Address: 192.168.1.105 (used for incoming requests)

1 Enable IPVS On The Load Balancers

First we must enable IPVS on our load balancers. IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching.

loadb1/loadb2:

echo ip_vs_dh >> /etc/modules

echo ip_vs_ftp >> /etc/modules

echo ip_vs >> /etc/modules

echo ip_vs_lblc >> /etc/modules

echo ip_vs_lblcr >> /etc/modules

echo ip_vs_lc >> /etc/modules

echo ip_vs_nq >> /etc/modules

echo ip_vs_rr >> /etc/modules

echo ip_vs_sed >> /etc/modules

echo ip_vs_sh >> /etc/modules

echo ip_vs_wlc >> /etc/modules

echo ip_vs_wrr >> /etc/modules

Then we do this:

loadb1/loadb2:

modprobe ip_vs_dh

modprobe ip_vs_ftp

modprobe ip_vs

modprobe ip_vs_lblc

modprobe ip_vs_lblcr

modprobe ip_vs_lc

modprobe ip_vs_nq

modprobe ip_vs_rr

modprobe ip_vs_sed

modprobe ip_vs_sh

modprobe ip_vs_wlc

modprobe ip_vs_wrr

If you get errors, then most probably your kernel wasn’t compiled with IPVS support, and you need to compile a new kernel with IPVS support (or install a kernel image with IPVS support) now.
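A quick sanity check after the modprobe calls: listing the loaded modules should now show the IPVS modules (plain lsmod usage, nothing UltraMonkey-specific):

loadb1/loadb2:

lsmod | grep ip_vs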

2 Install Ultra Monkey On The Load Balancers

loadb1/loadb2:

To install Ultra Monkey, we must now edit /etc/apt/sources.list and add these two lines:

vi /etc/apt/sources.list

deb http://www.ultramonkey.org/download/3/ sarge main

deb-src http://www.ultramonkey.org/download/3 sarge main

apt-get update

If you encounter an update error due to a missing key, installing the key with the following steps will fix it:

vi /home/key.asc

-----BEGIN PGP PUBLIC KEY BLOCK-----

Version: SKS 1.1.0

mQGiBEOBaH4RBACmZGWUpGxm6fOtwvrlOu38+cSh6iwdr9KMeNc9+qna8Ena3sEqV2FX9qxj

aALm/VnrfbtUu823VpjpiNy7rMjvOl17omDS8v4S6HSBRFuOFxmIoPi4BLDPlTWhfTYZJ3rH

Gx/2J3JFjvzkVoM1Htq1jbeZzzqPaHAvGJrnIY+6DwCg7aCaaKfOQzwaVeTq1jhu4zH0MMsD

/00cd61uRttV97JBGBDx4Zwp5CBzlnra24kZkHf43iQiexI7JzOx0xvqmbAhz2Aei3UcQXM7

c90EaA6FgFJplaRFUcwYJkdxu3+quK0R1EmKmDA50ugiUzu8qt30ODBoJav7ZM8/pq4DvcrT

mHLBK/hlKOcAhAVrAqwTztynILPdA/0YRs8GYGIqrAEKfX4/8mhBbCCrD6yJViohbSJ3lRTD

jKnMI3vNj/+s6eeT9B1vF8CyEuzZufmEBXL329CnhDRNWaVbTgpBl/GLI1FSXTiDSnDGdReP

mF7h5HHlL0IFwWTYl+oiXz5RF2hAOUALIxfX+RQTU6mH/Raz59VH2j7pW7QfU2ltb24gSG9y

bWFuIDxob3Jtc0BkZWJpYW4ub3JnPohGBBARAgAGBQJDgW2HAAoJENUjpuZgBiiEx2AAn0dk

OefzVqNYB+CmQgOgRjS5A/fnAJsHVdO3i/8NckSuQeJ9z/MyO6iCE4hGBBARAgAGBQJDgW+R

AAoJEBcFOQ7mbJuwAGQAoJDv1/chu75aQOIxK5DIQEZ09jRVAJwJtwDyqgdUnd1AEc//OrNZ

1kIuwohGBBARAgAGBQJDgXq1AAoJEFPlmVtRVTMKiCYAn1KlsPSPSdWFeAiHf3Qm54gXH9Yp

AJ9DIJ4jy1Z9IXfp9i1fVnwKHicZxohGBBARAgAGBQJIQuQaAAoJEPaNV/uq5zQnFQEAn0cZ

cG/XW3nZn7XJkmTQ7BhCct7wAKCFofIDm0ecW2LIku5W3ByRpwpTYIhGBBARAgAGBQJJt0yC

AAoJEBjYpOLQ7bZNKBsAn15fxX5vOVPxcCjG0MTkKq1KL+6uAJ9NBuT48l0lZrHoodrEK7HJ

yBATAIhGBBARAgAGBQJJt2+7AAoJEGykGndDuNbIWVsAnjgyEhiDo5Mje4EbPoX7BYQlYq+5

AJ47SpquAYY6zLB6u2lmzQ8DaHbm4IhMBBMRAgAMBQJDlAzMBYMSuV6yAAoJEAgFz2XePm2T

uMQAn1fWqSGo1EMJiglmWV+AQypVaz09AKCUH6rBS7sbPboBV2Xy+1I1oZdJMIhmBBMRAgAm

BQJDgWi5AhsDBQkSzAMABgsJCAcDAgQVAggDBBYCAwECHgECF4AACgkQA8ACPgVBDpew7wCg

273GZJS2311rS3/3C8VJa8cM9LwAn3QzO5DFhKN/3XOGI99vxaK+x0zCtCFTaW1vbiBIb3Jt

YW4gPGhvcm1zQHZlcmdlLm5ldC5hdT6IRgQQEQIABgUCQ4FthQAKCRDVI6bmYAYohM+7AJ4n

3bKIpEaejv/1pbHKLL11fse1DQCfXjFRojcLeRIXXkwvSWBDY/xpGvCIRgQQEQIABgUCQ4Fv

jQAKCRAXBTkO5mybsAjoAJ41mgdVAz23U51NJ8cgpWp+KaWYWwCfdE1q/K/DNbx+Pix8kkOL

NhQ2T6qIRgQQEQIABgUCQ4F6rwAKCRBT5ZlbUVUzCsoiAJ9GZNyA1XDcCdotS1qTFlp33tnA

OQCfWYzBttAL7etuutb4CDJqPwOh4j+IRgQQEQIABgUCSELkGgAKCRD2jVf7quc0J3OXAJ4/

nzcUCOYJKNtjYZtPaqVj0cW7BgCfb/Trs4uvS3vooJizh0cJyPQqjymIRgQQEQIABgUCSbdM

ggAKCRAY2KTi0O22TVM/AJ4yWpNFyqo67oYYOgr8OqQfMJnijACfbF0TqBE3xJYraCK3MIW5

Z1Yr+AuIRgQQEQIABgUCSbdvuwAKCRBspBp3Q7jWyEgjAJ98Zu4GfDiOylz+EDbm2T7P1BE7

kQCfbECHQARujttCnPxy2q3vXwL0yrWITAQTEQIADAUCQ5QMzAWDErlesgAKCRAIBc9l3j5t

kwVzAJ4wywabgeuYCDMWZuGAsC2hDSFlFQCfQbdeCT0VoUveXfh3e58f8GJV4h6IZgQTEQIA

JgUCQ4FofgIbAwUJEswDAAYLCQgHAwIEFQIIAwQWAgMBAh4BAheAAAoJEAPAAj4FQQ6XyuEA

n3jEhRhdOW2sftDoYVN4yxxwihw6AKDgTV6czvp1mp1n1/3IPRNdzHXqwIhpBBMRAgApAhsD

BQkSzAMABgsJCAcDAgQVAggDBBYCAwECHgECF4AFAkOBa6sCGQEACgkQA8ACPgVBDpeYsACg

6YLLgyAExRHb7/HoY9kHFsG1W+QAn3D/G7LJJgKv+Ak623nzyhhu/grVuQINBEOBaIoQCADP

7yaGkyu2a5A5clGb7sD6nDnKjz/rJrpFkluQ0Rwz73KxajjLzQeNGp2EV91UDRXbg1NCkf2E

F97maJO00SYR7ZbQdyBKORgQZuLU/aSJn8UR15ChsuWxe874p/QGylzwEea/JVJUltud3Aep

Dm8sB1nErPzz++iDZSFSdGqM5ltcv3geP0Ax+SDNqe7HWdmgv/EKaFgRUN62yS05V+HWylN+

wuj/SJJiKUhtiZQ26NiawQNDUcxUPgb8oYgGyZRL1GcGWs45RADiEUNSHifqm+CcRSkQFXex

GFHDDnHKFFnyVHKaWrwG6Ty9TFlx7+jo8gwDyphBLV0nsTYQcJvjAAMFCACSMCA/aewrLvVo

C63o90CKgftkpwt53lp/1vqBvRzuZlCTLQ5ijbA2Pn/9oqPfWUqeIObr2YCRlpwnw5jfOKah

wknlujA9nAzruqA2xwkDp2jpyEOoh1meDhbeaPa8lx4A2sxYwfSMCDDZ3Jwm/c0lFdhmlUEA

dm9tnXL024evFr5BJVmmgbQsHErSwfyI6CDWjVXbySO60j3K3GVIvTm55iB0rlsMjwhRD4Sy

5PO5aaCktmICqBtSDBHZjje7PWPKN92ZhHr24g11Xc3u42U1bQ+J9oNmuTJMI20NcR0lgJNR

d8oQvqHyU7B5mRMBalCKNDdtVLbTuh2Cs6vrS6jFiE8EGBECAA8FAkOBaIoCGwwFCRLMAwAA

CgkQA8ACPgVBDpfxKACeIfmJdm0wyb6FNyfP9/yYlSak1T8AoJjH/Pc+Uwq5T0kUEdcurSEm

MZd1

=5NNR

-----END PGP PUBLIC KEY BLOCK-----

Run the following command to import the public key

gpg --import /home/key.asc

and add the key to the apt keyring:

apt-key add /root/.gnupg/pubring.gpg

Then do:

apt-key update

apt-get update

Install UltraMonkey

apt-get install ultramonkey

dpkg-reconfigure ipvsadm

Answer as follows:

Do you want to automatically load IPVS rules on boot?

No

Select a daemon method.

None

3 Enable Packet Forwarding On The Load Balancers

The load balancers must be able to route traffic to the Apache nodes. Therefore we must enable packet forwarding on the load balancers. Add the following lines to /etc/sysctl.conf:

vi /etc/sysctl.conf

# Enables packet forwarding

net.ipv4.ip_forward = 1

Then do this:

sysctl -p

4 Configure heartbeat And ldirectord

Now we have to create three configuration files for heartbeat. They must be identical on loadb1 and loadb2.

vi /etc/ha.d/ha.cf

logfacility local0

bcast eth0 # Linux

mcast eth0 225.0.0.1 694 1 0

auto_failback off

node loadb1.com

node loadb2.com

respawn hacluster /usr/lib/heartbeat/ipfail

apiauth ipfail gid=haclient uid=hacluster

Important: As node names we must use the output of uname -n on loadb1 and loadb2.

vi /etc/ha.d/haresources

loadb1.com \

ldirectord::ldirectord.cf \

LVSSyncDaemonSwap::master \

IPaddr2::192.168.1.105/24/eth0/192.168.1.255

The first word is the output of uname -n on loadb1, no matter whether you create the file on loadb1 or loadb2! After IPaddr2 we put our virtual IP address 192.168.1.105.

vi /etc/ha.d/authkeys

auth 3

3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other.
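If you need inspiration for the random string, one convenient way to generate it (an optional sketch, assuming OpenSSL is installed; any sufficiently random string will do):

openssl rand -hex 16

Use the same output in place of somerandomstring on both nodes.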

/etc/ha.d/authkeys should be readable by root only, therefore we do this:

chmod 600 /etc/ha.d/authkeys

ldirectord is the actual load balancer. We are going to configure the two load balancers (loadb1.com and loadb2.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby and becomes active if the active one fails. To make it work, we must create the ldirectord configuration file /etc/ha.d/ldirectord.cf which again must be identical on loadb1 and loadb2.

vi /etc/ha.d/ldirectord.cf

checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual=192.168.1.105:80
        real=192.168.1.101:80 gate
        real=192.168.1.102:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="ldirector.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
        checktype=negotiate

Afterwards we create the system startup links for heartbeat and remove those of ldirectord because ldirectord will be started by the heartbeat daemon:

update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .

update-rc.d -f ldirectord remove

Finally we stop any ldirectord instance that may already be running and start heartbeat, which from now on starts ldirectord itself:

/etc/init.d/ldirectord stop

If you get the following error while stopping the ldirectord service:

Can't locate Socket6.pm in @INC (@INC contains: /etc/perl

/usr/local/lib/perl/5.10.0 /usr/local/share/perl/5.10.0 /usr/lib/perl5

/usr/share/perl5 /usr/lib/perl/5.10 /usr/share/perl/5.10

/usr/local/lib/site_perl .) at /usr/sbin/ldirectord line 815.

run:

apt-get install libsocket6-perl

Then start heartbeat:

/etc/init.d/heartbeat start

5 Test the Load Balancers

On both load balancers, run:

ip addr sh eth0

The active load balancer should list the virtual IP address (192.168.1.105):

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000

link/ether 00:16:3e:40:18:e5 brd ff:ff:ff:ff:ff:ff

inet 192.168.1.103/24 brd 192.168.1.255 scope global eth0

inet 192.168.1.105/24 brd 192.168.1.255 scope global secondary eth0

The hot-standby should show this:

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000

link/ether 00:16:3e:50:e3:3a brd ff:ff:ff:ff:ff:ff

inet 192.168.1.104/24 brd 192.168.1.255 scope global eth0

ldirectord ldirectord.cf status

Output on the active load balancer:

ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1455

Output on the hot-standby:

ldirectord is stopped for /etc/ha.d/ldirectord.cf

ipvsadm -L -n

Output on the active load balancer:

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.1.105:80 rr

-> 192.168.1.101:80 Route 0 0 0

-> 192.168.1.102:80 Route 0 0 0

-> 127.0.0.1:80 Local 1 0 0

Output on the hot-standby:

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

/etc/ha.d/resource.d/LVSSyncDaemonSwap master status

Output on the active load balancer:

master running

(ipvs_syncmaster pid: 1591)

Output on the hot-standby:

master stopped

6 Configure The Two Apache Nodes

Finally we must configure our Apache cluster nodes apachenode1.com and apachenode2.com to accept requests on the virtual IP address 192.168.1.105.

apt-get install iproute

Add the following to /etc/sysctl.conf:

vi /etc/sysctl.conf

# Enable configuration of arp_ignore option

net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is

# configured on eth0. In particular, do not respond if the address is

# configured on lo

net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces

#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option

net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0, always use an address that

# is configured on eth0 as the source address of the ARP request. If this

# is not set, and packets are being sent out eth0 for an address that is on

# lo, and an arp request is required, then the address on lo will be used.

# As the source IP address of arp requests is entered into the ARP cache on

# the destination, it has the effect of announcing this address. This is

# not desirable in this case as addresses on lo on the real-servers should

# be announced only by the linux-director.

net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces

#net.ipv4.conf.eth1.arp_announce = 2

Then run this:

sysctl -p

Add this section for the virtual IP address to /etc/network/interfaces:

vi /etc/network/interfaces

auto lo:0

iface lo:0 inet static

address 192.168.1.105

netmask 255.255.255.255

pre-up sysctl -p > /dev/null

Then run this:

ifup lo:0

Finally we must create the file ldirector.html. This file is requested repeatedly by the two load balancer nodes so that they can see whether the two Apache nodes are still running. I assume that the document root of the main Apache web site on webserver1 and webserver2 is /var/www, therefore we create the file /var/www/ldirector.html:

vi /var/www/ldirector.html

Test Page
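You can then verify from each load balancer that the health-check page is reachable and returns the expected string (a quick check, assuming curl is installed; wget -qO- works just as well):

curl http://192.168.1.101/ldirector.html

curl http://192.168.1.102/ldirector.html

Both should print "Test Page", which is exactly what the negotiate check in ldirectord.cf looks for.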

7 Further Testing

You can now access the web site that is hosted by the two Apache nodes by typing http://192.168.1.105 in your browser.

Now stop Apache on either webserver1 or webserver2. You should still see the web site at http://192.168.1.105 because the load balancer directs requests to the working Apache node. Of course, if you stop both Apache nodes, your requests will fail.
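On Ubuntu, stopping and restarting Apache for this test typically looks like this (assuming the stock init script name apache2; adjust for your distribution):

/etc/init.d/apache2 stop

/etc/init.d/apache2 start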

Now let’s assume that loadb1 is our active load balancer, and loadb2 is the hot-standby. Now stop heartbeat on loadb1:

loadb1:

/etc/init.d/heartbeat stop

Wait a few seconds, and then try http://192.168.1.105 again in your browser. You should still see your web site because loadb2 has taken the active role now.

Now start heartbeat again on loadb1:

/etc/init.d/heartbeat start

loadb2 should still have the active role.



July 5, 2010

Installation of Plesk in CloudLinux

We are glad to share the results of our recent workshop with CloudLinux. The key is to disable SELinux.

1. Log in to the CloudLinux server as root and check the architecture:

# uname -i

i586

2. Check the SELinux status. We have it disabled, but still, execute "getenforce"; it should report "Disabled" or "Permissive". If it reports "Enforcing", disable it by executing "setenforce 0".

# getenforce

Disabled
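Note that setenforce 0 lasts only until the next reboot. To keep SELinux disabled permanently, set it in /etc/selinux/config (the standard location on RHEL-derived systems such as CloudLinux) and reboot:

# vi /etc/selinux/config

SELINUX=disabled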

3. Update the system with yum:

# yum update --exclude=xorg* --exclude=kernel* --exclude=cloudlinux-release*

4. Change your working directory to where you want to download the Auto-installer utility:

# mkdir /root/plesk

# cd /root/plesk

5. Once done, download the Auto-installer utility that suits your operating system and save it on your server's hard drive:

# wget http://download1.parallels.com/Plesk/PPP9/CloudLinux5/parallels_installer_v3.6.0_build100407.15_os_CloudLinux_5_i386

6. Set the execution permission for the Auto-installer:

# chmod +x parallels_installer_v3.6.0_build100407.15_os_CloudLinux_5_i386

7. Run the Auto-installer:

# ./parallels_installer_v3.6.0_build100407.15_os_CloudLinux_5_i386

8. Read the installation notes displayed on the screen and type 'n' to proceed to the next screen, then press ENTER.

Select the product version that you want to install: type the number corresponding to the product version you need and press ENTER, then type 'n' and press ENTER to continue. The packages will be downloaded and installed. When the installation is finished, Parallels Plesk Panel will start automatically.

9. Now, to complete the initial configuration, log in to the Parallels Plesk Panel running on your host at https://machine.domain.name:8443/ or https://IP-address:8443/. Use the username 'admin' and the password 'setup' (both are case sensitive). For security reasons, change the password upon initial login.

Next week, we will have something interesting to show in the Linux HA segment.



June 28, 2010

LVE – Lightweight Virtual Environment

Filed under: Technical — visiondream3 @ 8:01 am


LVE is a kernel-level technology developed by the CloudLinux team. The technology has common roots with container-based virtualization, yet it is lightweight and transparent. The goal of LVE is to make sure that no single web site can bring down your web server.

Today, a single site can consume all CPU resources, IO resources or Apache processes and bring the server to a halt. LVE prevents that.

This is done via collaboration between Apache modules and the kernel. mod_hostinglimits is the Apache module that:

1. Detects the VirtualHost from which the request came.
2. Detects whether it was meant for a CGI or PHP script.
3. Switches the Apache process used to serve that request into the LVE for the user determined via SuexecUserGroup for that virtual host.
4. Lets Apache serve the request.
5. Removes the Apache process from the user's LVE.

The kernel makes sure that all LVEs get a fair share of the server's resources. This means, for example, that 20 Apache processes serving a heavy site will use the same amount of CPU as one Apache process serving a smaller site.

Each LVE limits the number of entry processes (Apache processes entering the LVE) to prevent a single site from exhausting all Apache processes. If the limit is reached, mod_hostinglimits will not be able to place the Apache process into the LVE and will return error code 503 (Service Unavailable).

By using isolation technology to create lightweight virtual environments (LVEs), CloudLinux can:

1. Limit the amount of resources one site can use, so that no single account can slow down or take down a whole server
2. Provide better security by running all processes under the correct user and in their own container
3. Protect the server from hackers and poorly written scripts that drain resources from other tenants

Installation process of LVE

In order to install LVE services, we need to set up CloudLinux. We can either install CloudLinux by downloading it or migrate from CentOS 5.

You can download the latest CloudLinux CD and DVD ISOs from:

x86_64 version: http://repo.cloudlinux.com/cloudlinux/5.5/iso/x86_64
i386 version: http://repo.cloudlinux.com/cloudlinux/5.5/iso/i386

Version 5.4 is available here:

x86_64 version: http://repo.cloudlinux.com/cloudlinux/5.4/iso/x86_64
i386 version: http://repo.cloudlinux.com/cloudlinux/5.4/iso/i386

Switch From CentOS to CloudLinux today

It is easy to switch from CentOS 5 to CloudLinux. The process takes a few minutes and replaces just a handful of RPMs.

Get an <activation_key> either by requesting a trial subscription or by purchasing a subscription, then:

Download the script: centos2cl
Execute: sh centos2cl -k <activation_key>
Reboot

# wget http://repo.cloudlinux.com/cloudlinux/sources/cln/centos2cl
# sh centos2cl -k <activation_key>
# reboot

Enabling Apache LVE support

If you have no control panel, or you run one of the following control panels: Webmin, Plesk or Interworx, do:

# yum install mod_hostinglimits

If you run Apache with the worker MPM, do:

# yum install mod_sucgid

If you have cPanel, use the install-lve script:

# wget http://repo.cloudlinux.com/cloudlinux/sources/install-lve

# sh install-lve -a
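Afterwards you can confirm that the module is actually loaded (a quick check; the binary is httpd on RHEL-derived systems like CloudLinux):

# httpd -M | grep hostinglimits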


May 6, 2010

Apache tuning : Beyond the configuration files

Filed under: Technical — visiondream3 @ 6:49 am

When we think of Apache tuning, our focus goes straight to httpd.conf: how do I tune the configuration file to achieve that perfect setting? Wrong approach.

An inverted-pyramid approach is the best way to analyze the factors that bring maximum performance out of your web server. Various factors affect the performance of a server application, and fine-tuning the configuration file should be the last step, taken only after considering the parameters that affect it externally.

Processor/CPU :

If you install a default configuration of Apache on a 386 and on a dual-core P4, the performance difference is huge. Therefore, whenever possible, give your server the best equipment to function. If the cost model permits, run Apache on a dedicated server; running it alongside other applications will affect its performance. There are control panels that allow you to adopt this clustered model.

RAM :

Having considered the processor capabilities, the next factor that stands out is RAM. RAM is precious; it is the real estate of the server. Every process that runs is battling to utilize RAM in order to complete its task. So, if you have many static pages to serve, performance is higher if you allow Apache to cache them in RAM, thereby avoiding read/write access to the hard disk. Disk or RAM, caching definitely helps to improve performance by avoiding continuous reads and writes. Use mod_disk_cache if you are short of RAM, or mod_mem_cache at best if you have enough RAM to play with.
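As an illustration, caching static content in RAM with Apache 2.2's mod_mem_cache takes only a few lines of httpd.conf (a sketch with made-up sizes; tune them to the RAM you can actually spare):

LoadModule cache_module modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

# Cache objects for everything under / in memory
CacheEnable mem /
# Total cache size, in KBytes
MCacheSize 65536
# Largest single object to cache, in bytes
MCacheMaxObjectSize 1048576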

Hard drive :

The hard drive is next on the list. No doubt, we need to use as fast a disk as possible and, where feasible, hardware RAID, improving the speed at which files are served. The conclusion is that the entire hardware stack contributes to how quickly and efficiently Apache requests are processed.

Nature of files served :

We need to keep in mind that the type of files served is a determining factor. If Apache needs to do a lot of thinking before a file is served, you are asking it to utilize more CPU and memory. This means the more dynamic your web pages, the more resource-intensive they are. So, consider what is best and really essential before deciding the trade-off.

Server environment :

Even if you run Apache on a dedicated server, there are plenty of other applications that come along with a default installation. Be sure to disable them. For instance, NFS, print servers and the like are generally enabled by default. Realize that each of these small processes takes up enough memory to eventually run the RAM out. Disable every single process that you don't require to serve the web pages.

Don’t use the system for anything else!

Yes, don't log in to the system unless it is really essential. Use automated scripts to alert you of changes happening on the server, and choose the interval for such checks carefully. It is recommended not to log in frequently, test-compile new applications, run tests that involve editing or moving files around, or check the load on the server every other hour. All these activities will affect web-serving performance. If necessary, employ remote monitoring plugins and set an interval for the checks that doesn't compromise the server resources.

Keep the software versions up to date :

Newer stable versions of any software are likely to perform better. So, whenever possible, upgrade your server to the latest stable version with proper planning. Most control panels that provide hosting automation do these updates automatically; still, it is always good to cross-check.

Apache application environment :

You make compromises on which modules to configure along with your initial compilation of Apache. However, these compromises can prove costly in the long run if not carefully selected. Many custom scripts that support this build configure it with default settings. It is extremely important to understand the environment in which the web pages are going to be served before deciding which modules should be compiled into the initial build.

An important factor to consider during the build is whether to compile a given module as static or dynamic (shared). Granted, dynamic is always convenient, but it utilizes more CPU each time it is loaded into and out of memory, which incurs a performance hit. So, if you think you can accommodate a commonly and most frequently used module, compiling it as static makes more sense. In all other cases, compile the module as shared.

However, choose the static modules carefully, as they eat into the server's RAM. When multiple child processes run serving the same type of module, consider the amount of memory utilized. Even though statically loaded modules are served faster, enable them carefully so that they don't create a RAM crunch which may keep processes in queue and eventually degrade performance.
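To make the choice concrete: when building Apache 2.2 from source, the same module can be compiled either way (illustrative configure invocations, not a complete build recipe):

# mod_rewrite compiled statically into the httpd binary
./configure --enable-rewrite

# mod_rewrite built as a shared (DSO) module, loaded via LoadModule
./configure --enable-so --enable-rewrite=shared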

Configuring the httpd.conf file :

a) Remove everything you don’t need!

Yes, it is normal practice to leave everything we don't utilize as 'comments'. Not only is this disruptive and confusing, it unnecessarily adds unused data, working against the principle of optimization.

b) Make the configuration simple

In trying to achieve high performance, we tend to add a host of features without looking at the conflicting nature of these settings. These eventually add extra CPU cycles, affecting performance.

c) MaxClients :

The MaxClients setting determines how many simultaneous connections Apache can handle. On a high-traffic server, a MaxClients value set well below what the hardware can handle will definitely hurt the performance of your web server: it puts a lot of requests in a wait queue even though the processor and RAM could serve them simultaneously. Study the maximum number of requests expected before changing this setting.
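For example, a prefork MPM block might look like this (illustrative numbers only; derive MaxClients from your free RAM divided by the resident size of one Apache process):

<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>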

d) ServerSignature Off, Server status and info :

Disable these features, as they add extra overhead to your server by serving such requests, unless you are monitoring, deploying applications or debugging an error.
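A minimal httpd.conf sketch (ServerTokens is a closely related directive worth setting at the same time; leave the status handler commented out until you actually need it):

ServerSignature Off
ServerTokens Prod
#<Location /server-status>
#    SetHandler server-status
#</Location>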

e) HostnameLookups :

HostnameLookups will slow down the server as it tries to look up the hostname of each client IP as it requests a page. It is not necessary, since you always have the option to process the logs later for more information even if you disable it. Turn it off by using the "HostnameLookups Off" directive.
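The directive itself, together with the standard offline alternative: logresolve ships with Apache and resolves the IPs in an existing log after the fact (the log path below is illustrative):

HostnameLookups Off

logresolve < /var/log/apache2/access.log > access_resolved.log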

f) FollowSymLinks :

Whenever FollowSymLinks is not enabled, Apache has to make an extra system call for every component of the requested path to check whether it is a symlink (and the same applies to SymLinksIfOwnerMatch). So, unless you specifically need that protection, keep FollowSymLinks enabled for the web directory to spare Apache those extra checks.

g) Custom settings for DirectoryIndex :

This decides which page (the index page) should be displayed, in order of priority, when someone accesses a web directory. As far as possible, avoid fancy custom pages that override the main configuration; let the server use the default settings. index.html and index.php are the standard.

h) Put all CGI components into one single directory and configure it accordingly, to prevent Apache from spending time determining how each request should be handled.

Server Logs :

This is an area which involves a lot of disk read/write. It is not recommended to disable logging, but you would be surprised by the performance gain when you do. For security reasons you probably cannot, so at least make sure you have disabled hostname lookups.

.htaccess files :

.htaccess provides everyone a convenient way to make changes specific to a user account. This has several drawbacks, though. Each time, the server needs to check the .htaccess file in the requested directory and all its parent directories to decide which configuration overrides which. This is a time-consuming process, and hence it will definitely affect performance. Therefore, as far as possible, include the changes you require in the main configuration file itself.

.htaccess can be disabled by using AllowOverride None in the main configuration file; the directive's name speaks for itself.
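A minimal sketch in context: disable overrides globally, then re-enable them only for the few directories that genuinely need per-directory configuration:

<Directory />
    AllowOverride None
</Directory>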

I think the above list comprehensively covers most areas that affect server performance. When you look at the Apache server itself, it is unfortunate that the main reasons for server slowness are actually factors outside these configuration files. In most cases, Apache spends time waiting to serve the data because the script or dynamic content that is supposed to produce it takes time to execute. So changes like moving from CGI to mod_perl increase the overall speed of execution and can drastically improve performance, by as much as 70%. It is interesting to note that once the data is ready, it takes only milliseconds for Apache to serve the request :)



May 5, 2010

What is Ogg Vorbis?

Filed under: General — visiondream3 @ 10:59 am

Ogg Vorbis is an open source audio codec used for compressing and decompressing audio files. It is an alternative to the MP3 codec, patented by Fraunhofer IIS, AT&T-Bell Labs, Thomson-Brandt, CCETT, and others. Ogg Theora is the open source alternative to H.264, administered by MPEG LA.

In 2001, the man who is now director of the Xiph.org Foundation worked for Green Witch, an online company that competed with Music Match. Fraunhofer, one of the MP3 patent holders along with Thomson, bought a stake in Music Match and charged Green Witch $60m to license MP3 for the year. Green Witch couldn't pay and was sold to a company that owned another web radio provider, iCAST. Ogg Vorbis was created to escape the MP3 noose and avoid a repeat of history.

Several software application providers use Ogg Theora as their video codec; Opera 10.5, which offers HTML5 video, is one of them. The popularity of this codec is increasing, since the license fee for MP3 is controlled by a group of companies and, for several businesses, an open source version makes better business sense.

If you want MP3, you have to pay Thomson, which helped create MP3 along with three other companies. Decoding costs $0.75 per unit for a patent and software license, but if you want to encode the media, which of course you have to, then that's priced at up to $5.00 per unit.



April 30, 2010

WordPress Redirection

Filed under: Technical — visiondream3 @ 10:57 am

Whenever I go to www.domain.com it is redirected to non-www and the browser shows domain.com. There are no redirect rules set; I checked my .htaccess file. I knew it was something to do with my WordPress. My WordPress is installed in the root directory, not in its own subdirectory such as /blog or /wordpress.

Here is how I fixed it. Consider my domain name is domain.com and my username: doma

# mysql --user=doma --password=mypass

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 399164
Server version: 5.0.88 Source distribution

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| doma               |
+--------------------+
2 rows in set (0.00 sec)

mysql> use doma;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-----------------------+
| Tables_in_doma        |
+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+
11 rows in set (0.00 sec)

mysql> desc wp_options;
+--------------+---------------------+------+-----+---------+----------------+
| Field        | Type                | Null | Key | Default | Extra          |
+--------------+---------------------+------+-----+---------+----------------+
| option_id    | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| blog_id      | int(11)             | NO   |     | 0       |                |
| option_name  | varchar(64)         | NO   | UNI |         |                |
| option_value | longtext            | NO   |     | NULL    |                |
| autoload     | varchar(20)         | NO   |     | yes     |                |
+--------------+---------------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)

mysql> select * from wp_options where option_name='siteurl';
+-----------+---------+-------------+-------------------+----------+
| option_id | blog_id | option_name | option_value      | autoload |
+-----------+---------+-------------+-------------------+----------+
|         2 |       0 | siteurl     | http://domain.com | yes      |
+-----------+---------+-------------+-------------------+----------+
1 row in set (0.00 sec)

There you can see the siteurl is given without www. Below I'm going to change it to www.

mysql> update wp_options set option_value='http://www.domain.com' where option_name='siteurl';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0

mysql> select * from wp_options where option_name='home';
+-----------+---------+-------------+-------------------+----------+
| option_id | blog_id | option_name | option_value      | autoload |
+-----------+---------+-------------+-------------------+----------+
|        38 |       0 | home        | http://domain.com | yes      |
+-----------+---------+-------------+-------------------+----------+
1 row in set (0.00 sec)

mysql> update wp_options set option_value='http://www.domain.com' where option_name='home';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
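As an aside, both options can be updated in one statement; this is equivalent SQL against the same wp_options table:

mysql> update wp_options set option_value='http://www.domain.com' where option_name in ('siteurl','home');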

Done. Isn't that cool? :)

Submitted by,

Manjula Kesavan,
System Administrator,
LogicSupport.com



April 21, 2010

Backup your MySQL Databases

Filed under: Technical — visiondream3 @ 9:58 am

How many times have you come across situations where a corrupt or old database eventually lost you an entire list of leads and emails collected over several years? It is heartbreaking indeed. There have also been instances where the monthly backup got overwritten by the latest one and there was no way to return to the old settings. It is best to have an easy, alternate backup method in place to avoid losing that precious data on your server.
You can automate the backup process with a small shell script which will create a daily backup file without overwriting the older backups.

#!/bin/sh
date=`date -I`
mysqldump --all-databases | gzip > /var/backup/backup-$date.sql.gz
Change the above directory to your backup drive location.

1) Save the above script to a file, e.g. /root/dailybackup.sh
2) Make the file executable: chmod +x /root/dailybackup.sh
3) Add the following line to cron to take a backup daily at 01:00 am:

0 1 * * * /root/dailybackup.sh
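To keep the backup directory from growing without bound, you could also prune old dumps from cron (an optional sketch; the 30-day retention is an arbitrary choice):

0 2 * * * find /var/backup -name 'backup-*.sql.gz' -mtime +30 -delete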

Since mysqldump sends its output to the console, we can pipe the output through gzip or bzip2 and send the compressed dump to the backup file. Here's another way of doing it, with bzip2:

mysqldump --all-databases -p | bzip2 -c > databasebackup.sql.bz2

Enter password:
Suppose you wish to transfer the file to another server and decompress it there; say my backup file is

backup-2010-02-14.sql.bz2

You can decompress it as follows:

bzip2 -d backup-2010-02-14.sql.bz2

It will now be in the form

backup-2010-02-14.sql

Restore your backup using the mysql command:

mysql -p < backup-2010-02-14.sql

Done!



March 29, 2010

Plesk User Detail

Filed under: Plesk — Manjula @ 8:37 am

Here is how you can find your users' email, database and FTP details from the psa database.

mysql> use psa;

Database changed

EMAIL ACCOUNT DETAILS

mysql> SELECT mail.mail_name, accounts.password, domains.name FROM mail, accounts, domains WHERE domains.id=mail.dom_id AND mail.account_id=accounts.id;

DATABASE USER DETAILS

mysql> SELECT domains.name, data_bases.name, data_bases.type , db_users.login,accounts.password FROM domains, data_bases, db_users, accounts WHERE domains.id=data_bases.dom_id AND data_bases.id=db_users.db_id AND db_users.account_id=accounts.id;

FTP / SYSTEM USER DETAILS

mysql> SELECT domains.name, sys_users.login, accounts.password, sys_users.home, sys_users.shell, sys_users.quota FROM domains, accounts, hosting, sys_users WHERE domains.id=hosting.dom_id AND hosting.sys_user_id=sys_users.id AND sys_users.account_id=accounts.id;
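On a typical Plesk server the admin's MySQL password is stored in /etc/psa/.psa.shadow, so the queries can also be run non-interactively from the shell (a convenience sketch; quote the SQL carefully):

# mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -e "SELECT mail.mail_name, accounts.password, domains.name FROM mail, accounts, domains WHERE domains.id=mail.dom_id AND mail.account_id=accounts.id;"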

Enjoy!



February 19, 2010

Smarterstats

Filed under: Technical — visiondream3 @ 4:22 am

SmarterStats is web log analytics and SEO software from www.smartertools.com that provides complete management and control of website statistics and SEO marketing efforts in a single application. SmarterStats helps businesses make informed decisions that drive them toward quantifiable objectives for website traffic and sales.

Issues with SmarterStats accounts, especially after a migration, are not uncommon, sometimes prompting a re-import and a reset of the entire log files. Recently we faced an issue with one SmarterStats account that kept displaying the error "Information about the last 7 days is not available in your statistics. Logs have been imported for 10/1/2008 through 10/31/2009."

Well, every time the admin attempted to re-import the log files, it gave a 'cannot read' error.

Upon checking, he found that the settings matched exactly those of a working domain account, so he wasn't able to identify the error.

'Path to the log file' is what confused the admin. Though the log files were in the path /x/y/domainname/, almost all of the working domains showed the setting as /x/y. Missing the logic, the admin kept trying with the /x/y base directory until he finally decided to re-import using the path /x/y/domainname/, which finally worked.

Just a tip for all those who troubleshoot by comparing settings: keep your senses open for the obvious logic, and do not be misguided by an incorrect interface provided by the programmer.



February 10, 2010

PCI compliance

Filed under: General — visiondream3 @ 4:49 am

The Payment Card Industry Data Security Standard (PCI DSS) is a worldwide information security standard assembled by the Payment Card Industry Security Standards Council (PCI SSC). The standard was created to help organizations that process card payments prevent credit card fraud through increased controls around data and its exposure to compromise. The standard applies to all organizations that hold, process, or pass cardholder information from any card branded with the logo of one of the card brands.

If you are a host who sells business-critical web environments that process credit card data, you need to know how to help your customers achieve compliance. LogicSupport helps its customers achieve PCI compliance by working directly with their data centers and approved scanning vendors (ASVs). Only certified ASVs can perform PCI-sanctioned compliance audits.

A detailed ASV list is given below for reference:

https://www.pcisecuritystandards.org/pdfs/asv_report.html

Most of these ASVs provide you with a report to work on, or help you clear the findings by providing technical assistance. If at any point you need consultation with a security expert, you can always approach us and we will guide you with our experience. We can even assist you by providing engineers who are experts at helping you clear the vulnerable scores.

An ASV bases the audit result on the Common Vulnerability Scoring System (CVSS), Version 2, score that is calculated for every vulnerability. Scores range from 0 to 10.0, with 4.0 or higher indicating failure to comply with PCI standards.

Any asset that contains at least one vulnerability with a CVSS score of 4.0 or higher is considered non-compliant. And if at least one asset is non-compliant, the entire organization is considered non-compliant.

Also, any vulnerability that exposes an asset to XSS or SQL injection indicates failure to comply with PCI standards, regardless of CVSS score.

The CVSS system rates all vulnerabilities on a scale of 0.0 to 10.0, with 10.0 representing the greatest security risk.

A moderate vulnerability, which ranges from 0.0 to 3.9 on the CVSS scale, can only be exploited locally and requires authentication. A successful attacker has little or no access to unrestricted information, cannot destroy or corrupt information, and cannot cause outages on any systems. Examples include default or guessable SNMP community names and the OpenSSL PRNG Internal State Discovery vulnerability.

A severe vulnerability, which ranges from 4.0 to 6.9 on the CVSS scale, can be exploited with a moderate level of hacking experience and may or may not require authentication. A successful attacker has partial access to restricted information, can destroy some information, and can disable individual target systems on a network. Examples include 'Anonymous FTP Writeable' and 'Weak LAN Manager hashing permitted'.

A critical vulnerability, which ranges from 7.0 to 10.0 on the CVSS scale, can be exploited with easy access and requires little or no authentication. A successful attacker has access to confidential information, can corrupt or delete data, and can cause a system outage. Examples include anonymous users being able to obtain a Windows password policy.

Though compliance is not the final word in web security, it goes a long way toward keeping our payments and card data secure on the web.


