AWS – relationship between Elastic Load Balancing, CloudWatch, and Auto Scaling

March 20th, 2014

Introduction

The monitoring, auto scaling, and elastic load balancing features of the Amazon EC2 services give you easy on-demand access to capabilities that once required a complicated system architecture and a large hardware investment.
Any real-world web application must have the ability to scale. This can take the form of vertical scaling, where larger and higher capacity servers are rolled in to replace the existing ones, or horizontal scaling, where additional servers are placed side-by-side (architecturally speaking) with the existing resources. Vertical scaling is sometimes called a scale-up model, and horizontal scaling is sometimes called a scale-out model.

Vertical Scaling

At first, vertical scaling appears to be the easiest way to add capacity. You start out with a server of modest means and use it until it no longer meets your needs. You purchase a bigger one, move your code and data over to it, and abandon the old one. Performance is good until the newer, larger system reaches its capacity. You purchase again, repeating the process until your hardware supplier informs you that you’re running on the largest hardware that they have, and that you’ve no more room to grow. At this point you’ve effectively painted yourself into a corner.
Vertical scaling can be expensive. Each time you upgrade to a bigger system you also make a correspondingly larger investment. If you’re actually buying hardware, your first step-ups cost you thousands of dollars; your later ones cost you tens or even hundreds of thousands of dollars. At some point you may have to invest in a similarly expensive backup system, which will remain idle unless the unthinkable happens and you need to use it to continue operations.

Horizontal Scaling

Horizontal scaling is slightly more complex, but far more flexible and scalable in the long term. Instead of upgrading to a bigger server, you obtain another one (presumably of the same size, although there’s no requirement for this to be the case) and arrange to share the storage and processing load across two servers. When two servers no longer meet your needs, you add a third, a fourth, and so on. This scale-out model allows you to add resources incrementally and economically. As your fleet of servers grows, you can actually increase the reliability of your system by eliminating dependencies on any particular server.
Of course, sharing the storage and processing load across a fleet of servers is sometimes easier said than done. Loosely coupled systems tied together with SQS message queues, like those we saw and built in the previous chapter, can usually scale easily. Systems that rely on a traditional relational database or other centralized storage can be more difficult.

Monitoring, Scaling, and Load Balancing

We’ll need several services in order to build a horizontally scaled system that automatically scales to handle load.
First, we need to know how hard each server is working. We have to establish how much data is moving in and out across the network, how many disk reads and writes are taking place, and how much of the time the CPU (Central Processing Unit) is busy. This functionality is provided by Amazon CloudWatch. After CloudWatch has been enabled for an EC2 instance or an elastic load balancer, it captures and stores this information so that it can be used to control scaling decisions.
Second, we need a way to act on that performance data, making decisions to add more EC2 instances (because the system is too busy) or to remove some running instances (because there’s too little work for them to do). This functionality is provided by the EC2 auto scaling feature. Auto scaling uses a rule-driven system to encode the logic needed to add and remove EC2 instances.
Third, we need a method for routing traffic to each of the running instances. This is handled by the EC2 elastic load balancing feature. Working in conjunction with auto scaling, elastic load balancing distributes traffic to EC2 instances located in one or more Availability Zones within an EC2 region. It also uses configurable health checks to detect failing instances and to route traffic away from them.
Figure 7-1 depicts how these features relate to each other.
An incoming HTTP load is balanced across a collection of EC2 instances. CloudWatch captures and stores system performance data from the instances. This data is used by auto scaling to regulate the number of EC2 instances in the collection.
As you’ll soon see, you can use each of these features on its own or you can use them together. This modular model gives you a lot of flexibility and also allows you to learn about the features in an incremental fashion.
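To make the rule-driven idea concrete, here is a toy sketch of the kind of scale-out/scale-in logic an auto scaling rule expresses. This is purely illustrative and not how AWS implements it: the thresholds, fleet sizes, and the get_average_cpu() helper are made up for the example; in practice CloudWatch supplies the metric and auto scaling applies the rule for you.

#!/usr/bin/perl
use strict;
use warnings;

# Purely illustrative sketch of threshold-based scaling logic.
# get_average_cpu() and all numbers here are invented for the example.
my $min_instances = 2;
my $max_instances = 8;
my $running       = 2;

sub get_average_cpu { return 75 }    # pretend the monitoring system reported 75% average CPU

my $cpu = get_average_cpu();
if ( $cpu > 70 && $running < $max_instances ) {
    $running++;    # too busy: scale out by one instance
}
elsif ( $cpu < 30 && $running > $min_instances ) {
    $running--;    # mostly idle: scale in by one instance
}
print "average CPU ${cpu}%, fleet size is now $running\n";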

[Figure 7-1: ELB_CloudWatch_AutoScale]

PS:

This article is from the book “Host Your Web Site in the Cloud: Amazon Web Services Made Easy”.

 

Categories: Clouding Tags:

Resolved – print() on closed filehandle $fh at ./perl.pl line 6.

March 19th, 2014

You may find that print sometimes doesn’t work as expected in Perl. For example:

[root@centos-doxer test]# cat perl.pl
#!/usr/bin/perl
use warnings;
open($fh,"test.txt");
select $fh;
close $fh;
print "test";

You may expect “test” to be printed, but instead you get this error message:

print() on closed filehandle $fh at ./perl.pl line 6.

So how did this happen? Here’s the explanation:

[root@centos-doxer test]# cat perl.pl
#!/usr/bin/perl
use warnings;
open($fh,"test.txt");
select $fh; # $fh becomes the default output filehandle
close $fh;  # $fh is closed here, but it is still the selected output handle, so the print below goes to a closed filehandle; you need to select STDOUT again first
print "test";

Now here’s the updated script:

#!/usr/bin/perl
use warnings;
open($fh,"test.txt");
select $fh;
close $fh;
select STDOUT; # restore STDOUT as the default output filehandle
print "test";

This way, you’ll get “test” as expected!
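As a side note, you can avoid this whole class of problem by never changing the default output handle at all: open with the three-argument form, check for errors, and print to an explicit filehandle. A minimal sketch (out.txt is just an example file name):

#!/usr/bin/perl
use strict;
use warnings;

# Three-argument open with error checking
open(my $fh, ">", "out.txt") or die "cannot open out.txt: $!";

# Print to an explicit filehandle instead of changing the default with select
print {$fh} "this line goes to out.txt\n";
close $fh;

# STDOUT was never deselected, so this prints to the terminal as expected
print "test\n";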

 

Categories: Perl, Programming Tags:

set vnc to not ask for the OS account password

March 18th, 2014

As you may know, vncpasswd (part of the vnc-server package) is used to set the password that users enter when connecting to VNC with a VNC client (such as TightVNC). When you connect to the VNC server, it will ask for that password:

[screenshot: vnc-0]

After you connect to the host using VNC, you may also find that the remote server asks again for the OS account password (the one set by passwd):

[screenshot: vnc-01]

In some cases you may not want this second prompt. Here is how to disable that behavior:

[screenshots vnc-1 and vnc-2 show the steps]

 

 

Categories: Linux, Systems Tags:

stuck in PXE-E51: No DHCP or proxyDHCP offers were received, PXE-M0F: Exiting Intel Boot Agent, Network boot canceled by keystroke

March 17th, 2014

If you installed your OS and tried to boot it up, but got stuck with the following messages:

[screenshot: stuck_pxe]

Then one possibility is that the configuration of your host’s storage array is not right. For instance, it should be JBOD but you configured it as RAID6.

Please note that this is only one possible cause of this error; you may search for the PXE error codes you encountered for more details.

PS:

  • Sometimes DHCP snooping may prevent PXE from functioning; you can read more at http://en.wikipedia.org/wiki/DHCP_snooping.
  • STP (Spanning Tree Protocol) makes each port wait up to 50 seconds before data is allowed to be sent on the port. This delay can in turn cause problems with some applications/protocols (PXE, Bootworks, etc.). To alleviate the problem, PortFast was implemented on Cisco devices; the terminology may differ between vendors. You can read more at http://www.symantec.com/business/support/index?page=content&id=HOWTO6019
  • ARP caching: http://www.networkers-online.com/blog/2009/02/arp-caching-and-timeout/
Categories: Hardware, Storage, Systems Tags:

Oracle BI Publisher reports – send mail when filesystems are getting full

March 17th, 2014

Let’s assume you have an Oracle BI Publisher report that checks filesystem usage, and you now want to write a script that checks that report page and sends mail to the system admins when filesystems are getting full. The default output of an Oracle BI Publisher report needs JavaScript to render, and as you may know wget/curl cannot execute JavaScript. So after logging on, the next step is to find the URL of the HTML version of that report to use in your script (the HTML page also contains all the records, while the JavaScript one shows only part of them):

[screenshot: BI_report_login]

[screenshot: BI_export_html]

 

Let’s assume that the HTML version’s URL is “http://www.example.com:9703/report.html”, and that it displays like the following:

[screenshot: bi report]

Then here is the script that checks this page for hosts that have less than 10% available space and sends mail to the system admins:

#!/usr/bin/perl
use HTML::Strip;

# fetch the HTML version of the BI Publisher report (log in via POST)
system("rm -f spacereport.html");
system("wget -q --no-proxy --no-check-certificate --post-data 'id=admin&passwd=password' 'http://www.example.com:9703/report.html' -O spacereport.html");
open($fh,"spacereport.html");

# or just @spacereport=<$fh>;
foreach(<$fh>){
    push(@spacereport,$_);
}

# change array to hash (line number => line)
$index=0;
map {$pos{$index++}=$_} @spacereport;

# get locations of <table> and </table>
# sort numerically ascending
for $char (sort {$a<=>$b} (keys %pos))
{
    if($pos{$char} =~ /<table class="c27">/)
    {
        $table_start=$char;
    }

    if($pos{$char} =~ /<\/table>/)
    {
        $table_end=$char;
    }
}

# get contents between <table> and </table>
for($i=$table_start;$i<=$table_end;$i++){
    push(@table_array,$spacereport[$i]);
}
$table_htmlstr=join("",@table_array);

# get plain text between <table> and </table>
my $hs=HTML::Strip->new();
my $clean_text = $hs->parse($table_htmlstr);
$hs->eof;

@array_filtered=split("\n",$clean_text);

# remove empty array elements
@array_filtered=grep { !/^\s+$/ } @array_filtered;

system("rm -f space_mail_warning.txt");
open($fh_mail_warning,">","space_mail_warning.txt");
select $fh_mail_warning;
for($j=4;$j<=$#array_filtered;$j=$j+4){
    # put lines that have less than 10% free space into space_mail_warning.txt
    if($array_filtered[$j+2] <= 10){
        print "Host: ".$array_filtered[$j]."\n";
        print "Part: ".$array_filtered[$j+1]."\n";
        print "Free(%): ".$array_filtered[$j+2]."\n";
        print "Free(GB): ".$array_filtered[$j+3]."\n";
        print "============\n\n";
    }
}
close $fh_mail_warning;

system("rm -f space_mail_info.txt");
open($fh_mail_info,">","space_mail_info.txt");
select $fh_mail_info;
for($j=4;$j<=$#array_filtered;$j=$j+4){
    # put lines that have less than 15% free space into space_mail_info.txt
    if($array_filtered[$j+2] <= 15){
        print "Host: ".$array_filtered[$j]."\n";
        print "Part: ".$array_filtered[$j+1]."\n";
        print "Free(%): ".$array_filtered[$j+2]."\n";
        print "Free(GB): ".$array_filtered[$j+3]."\n";
        print "============\n\n";
    }
}
close $fh_mail_info;

# send mail
# select STDOUT;
if(-s "space_mail_warning.txt"){
    system('cat space_mail_warning.txt | /bin/mailx -s "Space Warning - please work with component owners to free space" [email protected]');
} elsif(-s "space_mail_info.txt"){
    system('cat space_mail_info.txt | /bin/mailx -s "Space Info - Space checking mail" [email protected]');
}
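One caveat: the loop above walks the stripped text with a fixed stride of four fields, so it breaks silently if the report ever gains or loses a column. If you prefer to parse the table structure itself, a sketch along these lines with HTML::TableExtract should work; note that the "c27" class name and the Host/Part/Free(%)/Free(GB) column order are assumptions carried over from the script above:

#!/usr/bin/perl
use strict;
use warnings;
use HTML::TableExtract;

# Slurp the report page that wget saved earlier
open(my $fh, "<", "spacereport.html") or die "cannot open spacereport.html: $!";
my $html = do { local $/; <$fh> };
close $fh;

# Pull out the table whose class is "c27" (assumed, as in the script above)
my $te = HTML::TableExtract->new( attribs => { class => "c27" } );
$te->parse($html);

for my $ts ( $te->tables ) {
    for my $row ( $ts->rows ) {
        # Assumed column order: Host, Part, Free(%), Free(GB)
        my ( $host, $part, $free_pct, $free_gb ) = @$row;
        next unless defined $free_pct && $free_pct =~ /^[\d.]+$/;    # skip headers and odd rows
        if ( $free_pct <= 10 ) {
            print "Host: $host\nPart: $part\nFree(%): $free_pct\nFree(GB): $free_gb\n============\n\n";
        }
    }
}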

Categories: Perl, Programming Tags:

wget and curl tips

March 14th, 2014

Imagine you want to download all files under http://www.example.com/2013/downloads, but no files elsewhere under http://www.example.com/2013 (only the ‘downloads’ directory); then you can do this:

wget -r --level 100 -nd --no-proxy --no-parent --reject "index.htm*" --reject "*gif" 'http://www.example.com/2013/downloads/' # --level 100 is large enough, as I've seen no site with more than 100 levels of sub-directories so far.

# log in by POSTing form data and save the resulting page (-p fetches page requisites, -k converts links for local viewing)
wget -p -k --no-proxy --no-check-certificate --post-data 'id=username&passwd=password' <url> -O output.html

# save cookies to a file for later requests
wget --no-proxy --no-check-certificate --save-cookies cookies.txt <url>

# reuse the saved cookies
wget --no-proxy --no-check-certificate --load-cookies cookies.txt <url>

# HTTP basic authentication with curl (-k skips certificate verification)
curl -k -u 'username:password' <url>

# POST form fields and follow redirects (-L)
curl -k -L -d id=username -d passwd=password <url>

# the same, with the form fields passed as a single --data string
curl --data "loginform:id=username&loginform:passwd=password" -k -L <url>

 

Categories: Linux, Programming, SHELL Tags: