In this article we will see how to automate backups from our local machine to AWS S3 using the sync command.

Requirements:
- AWS CLI configured on the local Linux machine
- Internet connectivity (of course)

In our previous posts we saw how to install and configure the AWS CLI on a local machine. If you don't know how to do that, please check my previous posts.

Steps to automate the backup:
1. Create the S3 bucket where we will send our data for backup
2. Use the AWS CLI to issue the backup command
3. Use cron to schedule the backup

So let's start. If you remember from our last post, we already created an S3 bucket named tekco2020, as you can see in the image below, so we can use it to back up our data. The folder on our local computer that we would like to back up is /opt/tekco-backup:

```
root@Red-Dragon:/opt/tekco-backup# pwd
/opt/tekco-backup
root@Red-Dragon:/opt/tekco-backup# ls
fb-https.txt             itpings-curl.txt            password-curl-header.txt  tekco.net-https-info.txt
itpings-curl-header.txt  password-2-curl-header.txt  tekco.net-https-info2.txt
```

As we can see above, there is data in our tekco-backup folder. Now issue the command below to start backing up the data to the AWS S3 bucket:

```
root@Red-Dragon:/opt/tekco-backup# aws s3 sync . s3://tekco2020
upload: ./itpings-curl.txt to s3://tekco2020/itpings-curl.txt
upload: ./itpings-curl-header.txt to s3://tekco2020/itpings-curl-header.txt
upload: ./password-curl-header.txt to s3://tekco2020/password-curl-header.txt
upload: ./password-2-curl-header.txt to s3://tekco2020/password-2-curl-header.txt
upload: ./tekco.net-https-info2.txt to s3://tekco2020/tekco.net-https-info2.txt
upload: ./tekco.net-https-info.txt to s3://tekco2020/tekco.net-https-info.txt
upload: ./fb-https.txt to s3://tekco2020/fb-https.txt
```

The files were uploaded successfully; let's confirm it from the AWS S3 console. Great, we can see that our data is in the bucket. Now let's create a new file and rerun the same command.
```
root@Red-Dragon:/opt/tekco-backup# touch sal.txt
root@Red-Dragon:/opt/tekco-backup# aws s3 sync . s3://tekco2020
upload: ./sal.txt to s3://tekco2020/sal.txt
```

As we can see, this time only the new file was copied to the S3 bucket, because sync only uploads files that are new or have changed. Let's confirm it from the AWS S3 console. Great! Now, to automate the backup task, we will set up cron. For this demo I will schedule the backup to run every minute. So let's do it:

```
root@Red-Dragon:/opt/tekco-backup# crontab -e
```

Add the following line to the crontab:

```
*/1 * * * * /usr/local/bin/aws s3 sync /opt/tekco-backup/ s3://tekco2020
```

Save and quit, then list the crontab with the following command:

```
root@Red-Dragon:/opt/tekco-backup# crontab -l
# m h  dom mon dow   command
*/1 * * * * /usr/local/bin/aws s3 sync /opt/tekco-backup/ s3://tekco2020
```

Now restart cron, copy some files to the tekco-backup folder, and wait a minute to see whether the backup starts automatically:

```
root@Red-Dragon:/opt/tekco-backup# systemctl restart cron
root@Red-Dragon:/opt/tekco-backup# systemctl status cron
● cron.service - Regular background program processing daemon
     Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2020-10-06 16:40:11 +06; 5s ago
       Docs: man:cron(8)
   Main PID: 18618 (cron)
      Tasks: 1 (limit: 18972)
     Memory: 520.0K
     CGroup: /system.slice/cron.service
             └─18618 /usr/sbin/cron -f

Oct 06 16:40:11 Red-Dragon systemd[1]: Started Regular background program processing daemon.
```
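One caveat with a one-minute schedule: if a sync run ever takes longer than a minute (a large upload, a slow link), a second run can start on top of it. A common hardening, shown here as a hypothetical crontab entry (the lock-file and log paths are placeholders, not from the original post), is to wrap the job in flock and log its output:

```
# Run at most one sync at a time; append output to a log instead of mailing root.
*/1 * * * * /usr/bin/flock -n /tmp/s3-backup.lock /usr/local/bin/aws s3 sync /opt/tekco-backup/ s3://tekco2020 >> /var/log/s3-backup.log 2>&1
```

With -n, flock simply skips the new run if the previous one is still holding the lock, which is usually what you want for a backup job.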
```
Oct 06 16:40:11 Red-Dragon cron[18618]: (CRON) INFO (pidfile fd = 3)
Oct 06 16:40:11 Red-Dragon cron[18618]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
root@Red-Dragon:/opt/tekco-backup# touch mynewfile-aftercron.txt
```

Now, after a minute, let's check our bucket:

```
root@Red-Dragon:/opt/tekco-backup# aws s3 ls s3://tekco2020
                           PRE .aptitude/
                           PRE .aws/
                           PRE .cache/
                           PRE .config/
                           PRE .dbus/
                           PRE .local/
                           PRE .ssh/
                           PRE .synaptic/
2020-10-06 16:39:05      11147 .bash_history
2020-10-06 16:39:05       3106 .bashrc
2020-10-06 16:39:10         31 .lesshst
2020-10-06 16:39:10        161 .profile
2020-10-06 16:39:11         75 .selected_editor
2020-10-06 16:39:13      12103 .viminfo
2020-01-04 21:39:25      70522 Kubernetes-Install.pdf
2020-10-06 15:27:26     148119 fb-https.txt
2020-10-06 15:27:26        384 itpings-curl-header.txt
2020-10-06 15:27:26      53772 itpings-curl.txt
2020-10-06 16:59:06          0 mynewfile-aftercron.txt
2020-10-06 15:27:26        242 password-2-curl-header.txt
2020-10-06 15:27:26        187 password-curl-header.txt
2020-10-06 16:29:05          0 sal.txt
2020-10-06 15:27:26      71968 tekco.net-https-info.txt
2020-10-06 15:27:26      71968 tekco.net-https-info2.txt
```

Perfect, we can see that our cron job is working and the file has been copied. Let's add one more file, dragon.txt, and check the AWS S3 console after a minute:

```
root@Red-Dragon:/opt/tekco-backup# touch dragon.txt
```

Great, our automatic backup is running every minute. You can adjust the schedule as per your requirements.

Thanks,
Salman A. Francis
https://www.youtube.com/linuxking
https://www.tekco.net
Connecting S3 from AWS CLI Version 2
In this post we will see:
- What S3 is
- How to create a user in AWS
- How to configure AWS CLI version 2 to list S3 buckets and their content

So let's start.

S3 stands for Simple Storage Service. It is a cloud-based storage service that can hold an unlimited amount of data, and the data can be retrieved at any time over the web. This data is stored in geographic locations called regions. Regions contain Availability Zones (AZs), which are isolated facilities.

**Note: Data in S3 is replicated across three AZs, and Amazon handles this automatically.

S3 is object-based storage: it stores files such as JPEGs, pictures, and PDFs, but it cannot run applications. For applications we use block devices such as EBS. Objects are stored in buckets, and buckets are created within a region. An object can be up to 5 TB in size. We can set permissions on objects as well as on buckets.

Storage classes:
- S3 Standard
- S3 IA (Infrequent Access)
- S3 One Zone-IA
- Glacier

Now, to set up the AWS CLI to access S3 content:
1) Create an IAM user
2) Give this user S3 access; let's make him S3Admin
3) Get his access key and secret key from Security Credentials

Creating an IAM user: To create an IAM user, log in to the AWS console, type IAM in the search bar as shown below, and click on IAM. Now click on Users as shown below, then click on Add user and fill in the details as shown below. Note: you can check the Access Key box here or later. Now click Next and then click on Create group. Give the group a name, e.g. "tekco-s3", as shown below, then type S3 in the policy filter and select AmazonS3FullAccess. Click Next and then Create group. Once you have created the group, you will be presented with the following screen; if everything is as you want it, click Finish. Click Next and you will receive a success message for your new user. Once the user is created, click on Security Credentials and then create an access key. Once you click Create access key, you will see the access key and the secret key.
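For reference, the AmazonS3FullAccess managed policy we attach here boils down to JSON along these lines (a simplified sketch; in production you would normally scope Resource down to specific buckets instead of "*"):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

Seeing the JSON makes it clear why this user can list, create, and delete any bucket in the account, which is exactly what we rely on later in this post.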
Please save these keys and move to the next step, configuring AWS CLI version 2.

Now let's configure our AWS CLI version 2. To configure the AWS CLI to access S3, go to your Linux terminal and type the following:

```
root@Red-Dragon:/home/salman# aws configure
```

It will ask you for the access key, the secret key, and the region (you got all of this information when you created the user earlier):

```
AWS Access Key ID [None]: AKIAYQYYFIDYUMHTUMHC
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-1
Default output format [None]:
root@Red-Dragon:/home/salman#
```

That's it, our AWS CLI is configured for S3. To confirm, issue the following commands:

```
root@Red-Dragon:/home/salman# aws s3 ls
2019-11-28 00:08:55 itpings2000
2020-01-04 21:37:18 tekco2020
root@Red-Dragon:/home/salman# aws s3 ls s3://tekco2020
2020-01-04 21:39:25      70522 Kubernetes-Install.pdf
```

We can see that the content is listed and that our bucket tekco2020 contains Kubernetes-Install.pdf. Let's verify this from the AWS S3 console. As we can see above, it shows the same content as the CLI. Now let's create a bucket from the AWS CLI with the `aws s3 mb` command:

```
root@Red-Dragon:/home/salman# aws s3 mb s3://tekco-demo-2020
make_bucket: tekco-demo-2020
root@Red-Dragon:/home/salman# aws s3 ls
2019-11-28 00:08:55 itpings2000
2020-10-01 23:47:17 tekco-demo-2020
2020-01-04 21:37:18 tekco2020
```

Let's go to our AWS S3 console and check that the bucket has been created. Perfect, we have successfully created our bucket from the AWS CLI.

Thanks,
Salman Francis
https://www.youtube.com/linuxking
https://www.tekco.net
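Under the hood, `aws configure` just writes two plain-text files under ~/.aws, so you can also set this up by hand or keep several named profiles side by side. The sketch below shows the layout (the key values are placeholders, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json
```

Knowing where these files live is also useful for scripted setups and for rotating keys without re-running the interactive prompt.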
Installing AWS CLI Version 2 in Linux
Hey all, today I decided to install AWS CLI version 2 on my Linux Mint machine, so I would like to share how to do that in a few simple steps. Let's start.

Prerequisites: curl, unzip

Download:

```
curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip
```

Unzip:

```
unzip awscliv2.zip
```

Install:

```
sudo ./aws/install
```

Confirm:

```
# aws --version
```

**Note: If the aws command is not in your PATH, you can add it by issuing the following command:

```
[root@ip-192-168-200-84 ~]# export PATH=/usr/local/bin/:$PATH
```

Done.

Thanks,
Salman Francis
https://www.youtube.com/linuxking
Renewing an AWS EC2 Key Pair
In this post I will explain how easy it is to renew, or add a new, key pair for an AWS EC2 instance. So let's start.

Requirements: You must still be able to log in to the EC2 instance from your PC's terminal. This article is not for you if you have already lost your existing key pair.

Step 1: Creating a new key pair. Log in to your Amazon console and click on EC2. Once in, scroll down and you will find Key Pairs under the Network & Security section, as shown below. Click on Key Pairs and you will see a window similar to the one below. Give your key pair a name and click "Create Key Pair". Once you click Create Key Pair, the key will download to your system. Change the permissions of the key file:

```
salman@Red-Dragon:~/Pictures/AWS$ chmod 400 tekco-key-omen.pem
```

Now retrieve the public part of your key pair and copy it, as shown below. Connect to your EC2 instance with your old key, then edit the authorized_keys file as shown below. Paste the public key you retrieved earlier into the authorized_keys file, then save and exit. Now, from your PC, connect to the EC2 instance with your new key pair. Perfect, we have successfully logged in to our EC2 instance with the new key pair. Once you are satisfied, you can delete the old key from authorized_keys and start using the new one.

Thank you.
Salman A. Francis
https://www.youtube.com/linuxking
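The "retrieve the public part" step can be done from the terminal with `ssh-keygen -y`. Below is a sketch you can run anywhere: it generates a throwaway key if the .pem file is not present, so the file name here is only a stand-in; with a real AWS key pair you would point KEY at the file you downloaded from the console.

```shell
# Extract the public half of a private key with ssh-keygen -y.
# KEY is a placeholder; substitute the .pem downloaded from AWS.
KEY=${KEY:-demo-key.pem}
# Generate a throwaway PEM-format key if none exists (demo only).
[ -f "$KEY" ] || ssh-keygen -t rsa -b 2048 -N "" -m PEM -f "$KEY" -q
chmod 400 "$KEY"
# Prints a single "ssh-rsa AAAA..." line; this is what you paste
# into ~/.ssh/authorized_keys on the instance.
ssh-keygen -y -f "$KEY"
```

This avoids copying the public key out of the console by hand and works for any OpenSSH-compatible private key.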
Rainloop Webmail Server Connect Issue
So today my Rainloop webmail gave me the error shown below.

Solution: The fix was simple, but it took me some time to figure out. It was related to SSL. The quick fix is to go to the admin panel of your Rainloop server, e.g. https://<yourdomain>/?Admin. After logging in, click on Domains as shown below. Click on your domain and you will see the window shown below. Now change SSL/TLS under IMAP to None for the time being and click Update. You should be able to log in now. I think the issue is with the SSL certificate: it needs to be renewed, and after renewing it you must revert the change from None back to SSL/TLS.

Thanks,
Salman A. Francis
https://www.youtube.com/linuxking
Setup PHP 7.4 in CentOS 8
In this post we will see how to easily install PHP 7.4 on CentOS 8.

Update the system:

```
[root@sal-test ~]# yum -y update
```

Install the Extra Packages for Enterprise Linux (EPEL) repository:

```
[root@sal-test ~]# yum -y install epel-release
```

Install the Remi repository for CentOS 8:

```
[root@sal-test ~]# yum -y install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
```

Enable the Remi module stream for PHP 7.4:

```
[root@sal-test ~]# yum module enable php:remi-7.4
Remi's Modular repository for Enterprise Linux 8 - x86_64  506 kB/s | 577 kB  00:01
Safe Remi's RPM repository for Enterprise Linux 8 - x86_64 1.1 MB/s | 1.5 MB  00:01
Dependencies resolved.
Enabling module streams: php  remi-7.4
Transaction Summary
Is this ok [y/N]: y
Complete!
```

Install PHP:

```
[root@sal-test ~]# yum install php php-cli php-common
Last metadata expiration check: 0:01:42 ago on Mon Sep  7 15:19:00 2020.
Dependencies resolved.
Installing:
 php         x86_64  7.4.10-1.el8.remi  remi-modular  3.0 M
 php-cli     x86_64  7.4.10-1.el8.remi  remi-modular  4.6 M
 php-common  x86_64  7.4.10-1.el8.remi  remi-modular  1.2 M
```

Verify the installation:

```
[root@sal-test ~]# php -v
PHP 7.4.10 (cli) (built: Sep  1 2020 13:58:08) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
    with Zend OPcache v7.4.10, Copyright (c), by Zend Technologies
```

Done.
Setup Nginx Reverse Proxy for Docker
In this post we will take a look at how to set up Nginx as a reverse proxy for Docker on CentOS 8. You can also find my YouTube video on the same topic at the end of this post. So let's start.

First question: why do we need a reverse proxy? The answer is simple. If we need to run many applications or web servers serving different pages on the same machine, we need a reverse proxy. The reverse proxy runs on port 80 (the default port of a non-secure web server) and forwards requests to web servers or applications listening on other ports, since the default port is already taken by the Nginx reverse proxy itself.

The steps are:
1. Install Nginx
2. Install Docker
3. Run an Apache container
4. Set up and configure Nginx as a reverse proxy
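To give a feel for what step 4 looks like, here is a minimal sketch of a proxy server block. It assumes the Apache container publishes port 8080 on the host (e.g. `docker run -p 8080:80 httpd`); the domain and file path are placeholders, not from the original setup.

```nginx
# Hypothetical /etc/nginx/conf.d/apache-proxy.conf
server {
    listen 80;
    server_name app.example.com;          # placeholder domain

    location / {
        # Forward everything to the container published on 8080.
        proxy_pass       http://127.0.0.1:8080;
        # Preserve the original host and client address for the backend.
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
}
```

Each additional application gets its own server block with a different server_name, all forwarding from port 80 to that application's own backend port.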
Top 10 Linux Commands Every IT Admin Should Know!
Hey guys, today I would like to post an article about the most important commands every admin or engineer must know in order to understand what's going on in their Linux machine. So let's start.

The top Command

This command is one of the most important for any Linux user: it gives you full system info at a single glance. To run it, type:

```
salman@Linux:~> top
```

Let me explain a few of the things shown in the screenshot above:
1) 00:36:22 is the current time; "up 34 days, 12:59" is how many days and hours the system has been up; "2 users" means two users are logged in.
2) Load average: 0.76 is the system's load average over the last 1 minute, 0.77 over the last 5 minutes, and 0.77 over the last 15 minutes.
3) %CPU = the percentage of CPU being used. Press 1 to see all the cores of the system.
4) Mem = memory size; the rest is self-explanatory. Press M to sort processes by memory usage.
5) Zombie = processes whose entry remains in the process table even after they have terminated.
6) wa = the time the CPU waits for I/O to complete (wait time).
7) id = the time the CPU remains idle.
8) hi = hardware interrupts.
9) si = software interrupts.
10) st = steal time, the time lost waiting to get CPU resources, mostly seen in virtual environments. If it stays above 10% for 25 minutes, it needs attention, as it means the machine is performing slower than it should.

Now for the lower part, the process table:
6) PID = process ID.
7) USER = the user who started the process.
8) PR/NI = priority and nice value; the nice value affects the priority of a process.
A lower nice value means higher priority, and vice versa.
9) VIRT = the total amount of virtual memory used by the task.
10) RES = resident memory, the non-swapped physical memory the task has used.
11) SHR = memory shared with other processes.
12) S = status of the process, which can be one of:
   D = uninterruptible sleep
   R = running
   S = sleeping
   T = traced or stopped
   Z = zombie
13) TIME+ = total CPU time used by the process since it started.
14) COMMAND = name of the process.

*Note: to get details of a specific process, say chrome, you can type:

```
salman@Linux:~> top | grep chrome
 3278 salman  20  0  969348 223536 43460 S 12.50 5.723  4:23.19 chrome
 2539 salman  20  0 1475352 246336 90532 S 6.250 6.307 23:33.54 chrome
 2577 salman  20  0  657672 124012 50900 S 6.250 3.175 31:00.94 chrome
16330 salman  20  0  723356  59792 32524 S 6.250 1.531  1:27.85 chrome
```

To get information about the processes run by a specific user, run the following command:

```
salman@Linux:~> top -u root
```

The above shows the processes run by the user root.

Note: On a single core, the load average should stay under 1; for a single core, 1 means 100% load. If the load stays high continuously, you must find the cause and fix it before it's too late (DoS attacks, viruses, and corrupt scripts can all overload a machine).

The lsblk Command

lsblk, or "list block devices", is another useful command for Linux admins. When typed, it displays all block devices along with their mount points.
The lsblk command reads the sysfs filesystem (commonly mounted at /sys) and the udev database (dynamic device management) to gather its information. An example:

```
salman@Linux:~> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
└─sda1   8:1    0 931.5G  0 part /home
sdb      8:16   0 223.6G  0 disk
├─sdb1   8:17   0     8G  0 part [SWAP]
├─sdb2   8:18   0   502M  0 part /boot/efi
└─sdb3   8:19   0 214.6G  0 part /boot/grub2/x86_64-efi
```

The df Command

This command is very important for sysadmins to get information about the space available. df stands for "disk free", and it reports filesystem disk space usage:

```
Linux:/home/salman/Pictures # df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G  130M  1.8G   7% /dev/shm
tmpfs           1.9G  2.7M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sdb3       215G   28G  188G  13% /
/dev/sdb3       215G   28G  188G  13% /var/log
```

-h means human-readable. Use the -BM option, as in df -BM, to display sizes in megabytes.

The du Command

This command gives you information about disk usage. Let's say you want to see the size of a directory including its contents. Use the following command:

```
Linux:/home/salman # du -sh Downloads/
822G    Downloads/
```

In the example above, -s means show a summary and -h means human-readable. For more information about the command, use the man page or type du --help.

The stat Command

This command displays file or filesystem status and is usually overlooked, yet it is very useful while working with files and filesystems. One example is displaying the octal permissions of a file or directory (usually they are shown in symbolic form, such as rwx for read, write, execute):

```
Linux:/home/salman/Pictures # stat -c %a userlist.txt
644
Linux:/home/salman/Pictures # stat -c %a WP-PICS/
755
```

As you can see in the example above, we used the stat command to show the access rights in octal format instead of the human-readable form.
-c specifies the format we would like to use, and the %a format converts the human-readable permissions to octal. The stat command without any options displays a lot of information, including the type of file, when it was created and accessed, its size, and its permissions in both octal and human-readable form, and much more. Let's take a look at it now:

```
salman@Linux:~> stat Courses/
  File: 'Courses/'
  Size: 62         Blocks: 0          IO Block: 4096   directory
Device: 801h/2049d
```
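A quick way to convince yourself how the %a format behaves is the round trip below: set permissions with chmod, then read them back with GNU stat. This is a self-contained sketch using a temporary file, not part of the original session.

```shell
# Set permissions explicitly, then read them back in octal form.
f=$(mktemp)          # temporary file for the demo
chmod 640 "$f"       # rw-r----- in symbolic form
stat -c '%a' "$f"    # prints: 640
rm -f "$f"           # clean up
```

The same idea works in reverse: whatever octal value stat reports can be fed straight back into chmod.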