Category Archives: Linux

HOWTO: Create an Active/Passive TEAM device in RHEL7

Let’s say you have a few spare NICs and want to put them together in an active/passive team.  What do you do?

This write-up is VERY similar to HOWTO: Create a BOND with RHEL7 – but it’s for teaming.

This is also pretty straightforward.

First, connect via SSH to an IP on a NIC that WILL NOT be part of the team.

Using 'nmcli', delete the existing connections for the NICs you want IN the team and reload the connections:

[root@rhce ~]# nmcli con del ens224 ens256
Connection 'ens224' (92d6456d-16bd-4eae-9ecb-386cb4ce4d29) successfully deleted.
Connection 'ens256' (e52eca2f-8c84-428d-8959-93e85f4b03f3) successfully deleted.
[root@rhce ~]# nmcli con reload

Next, with nmcli, create the team:

[root@rhce ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
Connection 'team0' (01567cc6-b2da-42ac-adf8-b9085c3f4309) successfully added.

Time to add the two NICs (ens224 & ens256) in:

[root@rhce ~]# nmcli con add con-name team0-ens224 ifname ens224 type team-slave master team0
[root@rhce ~]# nmcli con add con-name team0-ens256 ifname ens256 type team-slave master team0

Optional (I think, but I still do it): modify the teamed device to have a static IP and DNS:

[root@rhce ~]# nmcli connection modify team0 ipv4.address "192.168.1.50/24"
[root@rhce ~]# nmcli connection modify team0 ipv4.dns "192.168.1.1,8.8.8.8"
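
One extra note from my own runs (hedging a bit here, since defaults vary): for a static address you may also need to flip the method to manual, and if NetworkManager doesn't bring things up on its own, activate the ports & the team:

[root@rhce ~]# nmcli con mod team0 ipv4.method manual
[root@rhce ~]# nmcli con up team0-ens224
[root@rhce ~]# nmcli con up team0-ens256
[root@rhce ~]# nmcli con up team0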

 

Check the state of the team & see which NIC is active:

# teamdctl team0 state
setup:
  runner: activebackup
ports:
  ens224
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  ens256
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens224

Looks like ens224 (the first NIC) is active; let’s start a PING test from another host, pull the cable on ens224, & see what happens:

[root@rhce ~]# ping 192.168.1.50
PING 192.168.1.50 (192.168.1.50) 56(84) bytes of data.
64 bytes from 192.168.1.50: icmp_seq=1 ttl=64 time=0.140 ms
64 bytes from 192.168.1.50: icmp_seq=2 ttl=64 time=0.167 ms
64 bytes from 192.168.1.50: icmp_seq=3 ttl=64 time=0.167 ms
64 bytes from 192.168.1.50: icmp_seq=4 ttl=64 time=0.172 ms
64 bytes from 192.168.1.50: icmp_seq=5 ttl=64 time=0.166 ms
64 bytes from 192.168.1.50: icmp_seq=6 ttl=64 time=0.171 ms
64 bytes from 192.168.1.50: icmp_seq=7 ttl=64 time=0.153 ms
64 bytes from 192.168.1.50: icmp_seq=8 ttl=64 time=0.181 ms
64 bytes from 192.168.1.50: icmp_seq=9 ttl=64 time=0.172 ms
64 bytes from 192.168.1.50: icmp_seq=10 ttl=64 time=0.194 ms
64 bytes from 192.168.1.50: icmp_seq=11 ttl=64 time=0.172 ms
64 bytes from 192.168.1.50: icmp_seq=12 ttl=64 time=0.172 ms
64 bytes from 192.168.1.50: icmp_seq=13 ttl=64 time=0.172 ms
64 bytes from 192.168.1.50: icmp_seq=14 ttl=64 time=0.191 ms
64 bytes from 192.168.1.50: icmp_seq=15 ttl=64 time=0.165 ms
64 bytes from 192.168.1.50: icmp_seq=16 ttl=64 time=0.155 ms

Nothing strange there … what about the team?

[root@rhce ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  ens256
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  ens224
    link watches:
      link summary: down
      instance[link_watch_0]:
        name: ethtool
        link: down
        down count: 1
runner:
  active port: ens256

It switched to ens256 & didn’t drop a ping.

Looks like you got an active/passive team working successfully!

Migrating to a new GIT server

Scenario:

You’re decommissioning an old server, which just so happens to have your GIT repo on it.

How can you migrate it without losing the history?

From the NEW SERVER:

Make your new directory & set the permissions to your GIT_USER:

mkdir /opt/git/qa.git
chown git.git /opt/git/qa.git

‘su’ to GIT_USER on the server, and initialize the newly created directory

su - git
cd /opt/git/qa.git
git init --bare

From the EXISTING CLIENT (with the most recent copy of the repo), make a new directory, ‘cd’ into it and initialize it:

mkdir ~/git/qa
cd ~/git/qa
git init .

Now, clone the existing repository INTO here as a bare clone.  It won’t give you a checked-out working copy; it’s the repository data itself (all history, branches, and tags).  Again, the command will be pointing at the ORIGINAL (soon to be decommissioned) server:

git clone --bare ssh://git@243.268.8.11/opt/git/qa.git

You’ll see something like:

Cloning into bare repository 'qa.git'...
git@243.268.8.11's password:
remote: Counting objects: 841, done.
remote: Compressing objects: 100% (595/595), done.
remote: Total 841 (delta 292), reused 605 (delta 208)
Receiving objects: 100% (841/841), 21.52 MiB | 23.68 MiB/s, done.
Resolving deltas: 100% (292/292), done.

The clone creates another <repo>.git directory in your “new” local directory (~/git/qa).  ‘cd’ into that:

cd ~/git/qa/qa.git

Now, it’s your job to PUSH that config to the NEW server:

git push --mirror ssh://git@991.11.78.221/opt/git/qa.git

You’ll see something like:

Counting objects: 841, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (511/511), done.
Writing objects: 100% (841/841), 21.52 MiB | 0 bytes/s, done.
Total 841 (delta 292), reused 841 (delta 292)
To ssh://991.11.78.221/opt/git/qa.git
 * [new branch]      master -> master

When that’s done, you can ‘cd’ back a level & delete the <repo>.git directory that the clone created.

cd ..
rm -rf qa.git

Now, you can either re-clone from the NEW server, seen HERE, or modify the existing .git directory’s config file to point to the new location:

vi ~/git/original_qa_repo_directory/includes/.git/config

and change the OLD server’s IP …

[remote "origin"]
        url = ssh://git@243.268.8.11/opt/git/qa.git

… to the NEW server’s IP:

[remote "origin"]
        url = ssh://git@991.11.78.221/opt/git/qa.git

Now, issue a ‘git pull’ and you should be all set!

$ git pull
Already up-to-date.
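
If you took the config-edit route, it’s worth one more check that the remote really points at the new box before relying on it:

$ git remote -v

Both the fetch and push lines should show the new server’s URL (ssh://git@991.11.78.221/opt/git/qa.git).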

 

Piping debug output through grep

Would seem straight-forward, but it gave me a google challenge tonight.

iscsiadm -m node -d 8

Run it.  Well, if you use iSCSI, that is; that was my test tonight.

I was looking for the selective, ‘grepped’ output of:

node.session.err_timeo.lu_reset_timeout

But, when I ran iscsiadm -m node -d 8 | grep timeo.lu — it didn’t give me just those matches; the debug output kept scrolling right past the grep.

The man page failed me, so I found a reference to “|&” — it pipes stderr along with stdout, and iscsiadm writes its debug messages to stderr, which is why the plain pipe missed them.  So, gave it a go, with success!

# iscsiadm -m node -d 8 |& grep timeo.lu
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
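
If your shell doesn’t have “|&” (it’s bash 4 shorthand), the long form does the same job: send stderr into stdout before the pipe:

# iscsiadm -m node -d 8 2>&1 | grep timeo.lu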

 

I changed the password for my git repo and now it’s failing authentication on a pull

On my primary machine that’s always locked, I cache my password for some repos.  I just do.

So, after being forced to change the password for a (https!) repo, I tried a pull and it just happened to me.  This doesn’t impact ssh public-key authentication.

$ git pull
remote: Invalid username or password. If you log in via a third party service you must ensure you have an account password set in your account profile.
fatal: Authentication failed for 'https://bitbucket.org/me/repo/'

You have to reset your credential helper cache, like so:

$ git config --global credential.helper cache
$ git pull

Ah, now — prompted for username & password.  #verynice

Username for 'https://bitbucket.org': fakeusername_1
Password for 'https://fakeusername_1@bitbucket.org': 
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.
....
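
One extra tidbit: if it doesn’t re-prompt because the old password is still sitting in the cache daemon, git can flush it directly:

$ git credential-cache exit

That stops the cache daemon, so the very next pull asks for credentials fresh.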

And, scene.

git clone config global reset author –what?

Ah, cloning a git repo again, for the first time.   Here’s me using bitbucket.org; it’s free for slackers like me.

OK, so first:

$ mkdir -p ~/git/bitbucketrepo

$ git init ~/git/bitbucketrepo

$ cd ~/git/bitbucketrepo

$ git clone https://full-address-as-seen-in-bitbucket

Cool, now I add a few scripts & am ready to ‘stage’ them with ‘add.’

$ git add .

Unfortunately, your commits on this machine will get an auto-assigned author name & email based on your login & some FQDN stuff.  I think we should change that.

$ git config --global user.name "Tom's Fedora 24 Workstation"
$ git config --global user.email tomblog@personalemail.email
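
Side note: --global writes those values to ~/.gitconfig, so they apply to every repo you touch as this user. If you’d rather scope them to just this one checkout, drop the --global and git stores them in the repo’s own .git/config, for example:

$ git config user.email tomblog@personalemail.email
$ git config --list | grep user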

Now, kick off a commit:

$ git commit -m "testing for blog"
[master 0e08355] testing for blog
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode ....

Now, time to ‘push’ it to bitbucket:

$ git push

AWW Crap, more stuff:

$ git push
warning: push.default is unset; its implicit value has changed in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the traditional behavior, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

When push.default is set to 'matching', git will push local branches
to the remote branches that already exist with the same name.

Since Git 2.0, Git defaults to the more conservative 'simple'
behavior, which only pushes the current branch to the corresponding
remote branch that 'git pull' uses to update the current branch.

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Then you’re prompted for your password & everything works.

HOWEVER — for the next ‘push’ – let’s adapt for the ‘new behavior:’

$ git config --global push.default simple

Make a test file & test again:

$ echo "BLOG TEST" > new_stuff.txt
$ git add .
$ git commit -m "blog test"
[master xxx] blog test
 1 file changed, 2 insertions(+)
 create mode ....
$ git push
Password for 'https://.....
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.

Looks good!

See you in a year when you need to do it again.

 

SUNNOVA … why doesn’t NOPASSWD work in /etc/sudoers in Fedora 24?

I’m used to just copy/pasting root & adding in my username, then tacking on NOPASSWD: ALL at the end, like so:

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
tbizzle     ALL=(ALL)       NOPASSWD: ALL

Then, running a sudo command, I STILL had to enter the password:

[tbizzle@f24-mac ~]$ sudo date
[sudo] password for tbizzle: 
Tue Jun 28 01:11:55 EDT 2016

CRAP.  That’s not what I wanted.

 

But NOW, it’s different.  The “fix” was to add the entry AFTER the wheel rule: sudo applies the last matching entry, so with tbizzle (presumably) also in the wheel group, the %wheel rule (password required) was winning over the NOPASSWD line that came before it:

[tbizzle@f24-mac ~]$ sudo grep -A4 -B4 bizzle /etc/sudoers | grep -A4 -B4 NOPASS
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
tbizzle     ALL=(ALL) NOPASSWD: ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL

and now:

[tbizzle@f24-mac ~]$ sudo date
Tue Jun 28 01:10:26 EDT 2016
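
Two quick checks I’d tack on after any sudoers edit: a syntax check (visudo can do that without opening the editor) and a listing of your privileges to confirm the NOPASSWD rule is the one that wins:

[tbizzle@f24-mac ~]$ sudo visudo -c
[tbizzle@f24-mac ~]$ sudo -l

visudo -c should report the file parses cleanly, and sudo -l should list the (ALL) NOPASSWD: ALL entry.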

 

Hope it helps

Repos and Subscriptions needed to install RHEV 3.5

After some fighting, here’s what you have to do ..

Install a RHEL 6 VM

First:
# subscription-manager register
Registering to: subscription.rhn.redhat.com:443/subscription
Username: your new shiny name
Password:
The system has been registered with ID: XXXXXXXX

Then:
# subscription-manager attach
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

(Does the above look familiar?)

 

Once that’s done, go to your RHN account & click on the VM you just ‘attached’ and pick ‘Attach a subscription’ and select your Virtualization Entitlement.

Once that’s done, issue:

subscription-manager repos --enable rhel-6-server-rhevm-3.5-rpms ; sleep 1 ; subscription-manager repos --enable jb-eap-5-for-rhel-6-server-rpms ; sleep 1 ; subscription-manager repos --enable rhel-6-server-supplementary-rpms ; sleep 1 ; subscription-manager repos --enable jb-eap-6-for-rhel-6-server-rpms ; sleep 1 ; subscription-manager repos --enable rhel-6-server-rhevh-rpms
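
Before kicking off the big install, a quick repolist confirms the channels actually turned on:

# yum repolist enabled | grep -E 'rhevm|rhevh|jb-eap|supplementary'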

THEN, you can install RHEV & the hypervisor (to get the ISOs):

yum -y install rhevm "rhev-hypervisor*"

Enjoy!

HOWTO: APACHE – permanent redirect to another server & port

I’m using CentOS 7.2 & the corresponding layout as seen here.

So, I have a few VMs that host sites, and I elected *not* to move on with AWS due to my very strained budget and the fact that it uses Ubuntu and Docker.
That being said, I kept an Ubuntu VM; with only a single Internet connection inbound, it can’t share port 80, so I was forced to make changes.

Here’s what I did to get around it (mind you, none of this is actual):

In your /etc/httpd/sites-enabled directory, create a .conf file for the site with the following:

# cat blog-toloughlin.conf

<VirtualHost *:80>
    ServerName blog.toloughlin.com
    ServerAlias blog.toloughlin.com
    RedirectPermanent / http://www.blog.toloughlin.com:81
    # optionally add an AccessLog directive for
    # logging the requests and do some statistics
</VirtualHost>
Next time you visit that domain, it’ll push the traffic back to port 81 (translated by your router).
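
A quick way to confirm the redirect without a browser (same not-actual domain as above):

$ curl -sI http://blog.toloughlin.com/ | grep -iE '^(HTTP|Location)'

You should get a 301 plus a Location: header pointing at the :81 URL.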

Caveat: you’ll see :81 in your URL bar and some of your site may not work correctly (things coded to use the domain & no port numbers).

It’s hacky, but it works … fairly well.

HOWTO: MOSH – when you need to SSH and there’s intermittent connectivity problems

Read about it here: https://mosh.mit.edu/

I loaded it up on RHEL 7.2, and here’s the process that I went through …

Add pre-requisite packages:
yum -y install git protobuf-c autoconf automake wget bzip2 gcc-c++ zlib-devel libutempter ncurses-devel openssl-devel net-tools

Run all of these commands:

PREFIX=$HOME
wget http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.bz2
tar -xf protobuf-2.4.1.tar.bz2
cd protobuf-2.4.1
./configure --prefix=$PREFIX
make
make install

export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/root/lib/pkgconfig

$ git clone https://github.com/mobile-shell/mosh
$ cd mosh
$ ./autogen.sh
$ ./configure
$ make
# make install

echo "export LD_LIBRARY_PATH=/root/lib" >> ~/.bashrc ; source ~/.bashrc

firewall-cmd --add-port=60000-61000/udp
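
From a client that also has mosh installed, connecting is the usual one-liner (user/host are placeholders); and if mosh-server didn’t land somewhere in the remote PATH, the --server option lets you point straight at the binary (here assuming the default /usr/local prefix from the build above):

$ mosh user@your-rhel-server
$ mosh --server=/usr/local/bin/mosh-server user@your-rhel-server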

Have you heard that RHEL is available ‘free’ for your Development Environment?

It sure is – woo hoo!

Dance on over to https://developer.redhat.com, sign up and accept their terms.

You can then download the latest ISO (7.2 at the time of this writing) and load it up on a server or VM. Make sure you select “Developer Tools” during the installation.

If you selected Basic (no GUI), you’ll need to run a few extra steps after installing, in order to get your yum updates.

First:

# subscription-manager register

Registering to: subscription.rhn.redhat.com:443/subscription
Username: your new shiny name
Password:
The system has been registered with ID: XXXXXXXX

Then:

# subscription-manager attach

 

Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

Finally:

# subscription-manager repos --enable=rhel-server-rhscl-7-rpms
# subscription-manager repos --enable=rhel-7-server-optional-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms

 



Now don’t be a jerk and try to use it in production; all it takes is one support call and accidentally outing yourself to cause your entire company to be forced to conduct a licensing audit.  That won’t be fun. 

HOWTO: firewalld – allowing individual host access

So, you’re rolling out a new webserver and want only certain people to take a look at the content? Here’s how you do it.
CentOS 7.2 is the OS being used.

What zone are you in?
[root@blog-test ~]# firewall-cmd --get-default-zone
public

OK, let’s make a new zone:

firewall-cmd --permanent --new-zone=blog
systemctl reload firewalld

Now, let’s add your IP & a friend’s IP to start testing … given you’re using apache & it’s still on port 80:

firewall-cmd --permanent --zone=blog --add-source=YOUR_IP/32
firewall-cmd --permanent --zone=blog --add-source=FRIENDS_IP/32
firewall-cmd --permanent --zone=blog --add-port=80/tcp

NOTE:  If you are using that port in another zone, remove it from that other zone first, because it can’t be in 2 zones at once.
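
Since everything above was added with --permanent, one last reload makes it live, and listing the zone confirms the sources & port landed where you expect:

firewall-cmd --reload
firewall-cmd --zone=blog --list-all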

That’s all there is. Move along now.

 

Need WordPress to send email, but you’re on Comcast?

Sending mail with Comcast as your ISP – this is on CentOS 7.2.

Install:
# yum install cyrus-sasl{,-plain}

Edit /etc/postfix/main.cf and insert the following below the other ‘relayhost’ references:
relayhost = [smtp.comcast.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/smtp_password
smtp_sasl_security_options =

Note: smtp_sasl_security_options = … is intentionally blank.

Edit:
/etc/postfix/smtp_password and insert:
[smtp.comcast.net]:587
username@comcast.net:password

Lock down the perms:
# chmod 600 /etc/postfix/smtp_password

Run:
postmap hash:/etc/postfix/smtp_password

Create a localhost-rewrite rule. This must be done, or else the Comcast SMTP server will reject your mail as coming from an invalid domain. Insert the following into
/etc/postfix/sender_rewrite:
/^([^@]*)@.*$/ $1@<your_domain_here>.com

Allow SELinux to accept apache’s access to send mail:
# setsebool -P httpd_can_sendmail 1

Restart postfix:
# systemctl restart postfix

Test. If it fails, tail /var/log/maillog!
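
For a quick test from the shell (swap in a real recipient; postfix’s sendmail wrapper is already on the box):

# printf "Subject: comcast relay test\n\ntest body\n" | sendmail -v you@example.com
# tail /var/log/maillog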

** NEW INFO **
I had some troubles with this (mail still showing root@localhost in the maillog) – here are a few more steps, if the above doesn’t completely work.

vi /etc/postfix/sender_canonical

… and insert the following, to make “root” appear to be the “wordpressuser” on outbound mail. This should have been rewritten by the rule up above, but it wasn’t doing it.

root wordpressuser@yourdomain.com

Create /etc/postfix/sender_canonical.db file
postmap hash:/etc/postfix/sender_canonical

Add sender_canonical variable to /etc/postfix/main.cf
postconf -e "sender_canonical_maps=hash:/etc/postfix/sender_canonical"

Restart postfix:
# systemctl restart postfix

Do you want to build a WordPress …… (site)?

Welcome.

Here’s a build-out on CentOS 7.2.

Install just the core, then add packages as needed – as you see below:

[root@wordpress-server ~]# yum update -y
[root@wordpress-server ~]# yum install bash-completion -y
[root@wordpress-server ~]# systemctl reboot

[root@wordpress-server ~]# yum install httpd php php-gd mariadb mariadb-server php-mysql rsync wget -y
[root@wordpress-server ~]# systemctl start httpd mariadb
[root@wordpress-server ~]# systemctl enable httpd mariadb

[root@wordpress-server ~]# firewall-cmd --add-service=http
[root@wordpress-server ~]# firewall-cmd --add-service=http --permanent

Set passwords for MySql / MariaDB:

[root@wordpress-server ~]# mysql_secure_installation

Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!

Remove anonymous users? [Y/n] Y
… Success!

Disallow root login remotely? [Y/n] n
… skipping.

Remove test database and access to it? [Y/n] Y
– Dropping test database…
… Success!
– Removing privileges on test database…
… Success!

Reload privilege tables now? [Y/n] Y
… Success!

[root@wordpress-server ~]# mysql -u root -p
Enter password:

MariaDB [(none)]> create database wp_site_1;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> create user wordpressadmin@localhost identified by 'pass_from_above';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on wp_site_1.* to wordpressadmin@localhost identified by 'pass_from_above';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
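
Quick sanity check that the new account and its grants actually work before wiring up WordPress (it prompts for the pass_from_above password; an empty result is fine, an access-denied error is not):

[root@wordpress-server ~]# mysql -u wordpressadmin -p wp_site_1 -e "show tables;"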

[root@wordpress-server ~]# groupadd wp

[root@wordpress-server ~]# wget http://wordpress.org/latest.tar.gz

[root@wordpress-server html]# tar zxvf latest.tar.gz

in /var/www/html:
[root@wordpress-server html]# mkdir site_1

Copy software to the new directory:
[root@wordpress-server ~]# rsync -aP /root/wordpress/ /var/www/html/site_1/

Fix ownership:
[root@wordpress-server html]# chown -R apache.wp *
drwxr-xr-x. 5 apache wp 4096 Feb 2 12:12 site_1

[root@wordpress-server site_1]# cp wp-config-sample.php wp-config.php

Edit wp-config.php file, then copy to the other site_ directories:
define('DB_NAME', 'wp_site_1');
define('DB_USER', 'wordpressadmin');
define('DB_PASSWORD', 'password_from_mysql_secure_installation');

Again:
[root@wordpress-server html]# chown -R apache.wp *

Edit PHP.INI:
[root@wp-srv-001 html]# vi /etc/php.ini
change the line to this: upload_max_filesize = 25M

Add the following as the last line in /etc/httpd/conf/httpd.conf:
IncludeOptional sites-enabled/*.conf

in /etc/httpd, make these directories:
[root@wordpress-server httpd]# mkdir sites-available
[root@wordpress-server httpd]# mkdir sites-enabled

in sites-available, make config files for each domain:
[root@wordpress-server sites-available]# ll
total 12
-rw-r--r--. 1 root root 203 Feb 4 23:37 yourdomain.conf

The file should have:

<VirtualHost *:80>
    DocumentRoot /var/www/html/site_1
    ServerName www.yourdomain.com
    ServerAlias yourdomain.com
    ErrorLog logs/yourdomain_error.log
</VirtualHost>

 

Create the following symlinks to the .conf files:
ln -s /etc/httpd/sites-available/yourdomain.conf /etc/httpd/sites-enabled/yourdomain.conf
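
Before bouncing Apache, a syntax check catches typos in the new vhost files; it should come back with "Syntax OK":

[root@wordpress-server httpd]# apachectl configtest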

RESTART APACHE!

[root@wordpress-server httpd]# apachectl restart

Go to your domains!

HOWTO: Back-up your MariaDB and then restore later?

This is with CentOS 7.2.

Dump the Database you want to backup:
mysqldump mariadb_name -u root > /backup/dir/db_name.$(date +%m%d).sql

Make a tarball with the newly created database dump & the /var/www/html/ directory:
tar czf /backup/dir/wp_site_1_backup_$(date +%m%d).tgz /backup/dir/db_name.$(date +%m%d).sql /var/www/html/site_1

Remove the database dump that was just tar’d up:
rm -f /backup/dir/db_name.$(date +%m%d).sql

In use:

[root@websites ~]# mysqldump mariadb_name -u root > ~/backups/mariadb_name/mariadb_name.$(date +%m%d).sql

[root@websites ~]# tar czf ~/backups/mariadb_name/mariadb_name_full_$(date +%m%d).tgz ~/backups/mariadb_name/mariadb_name.$(date +%m%d).sql /var/www/html/mariadb_name

[root@websites ~]# rm -f ~/backups/mariadb_name/mariadb_name.$(date +%m%d).sql

[root@websites ~]# ll ~/backups/mariadb_name
total 9164
-rw-r--r--. 1 root root 9383639 Feb 7 17:30 mariadb_name_full_0207.tgz

[root@websites ~]# tar tzvf mariadb_name_full_0207.tgz | head -n 3
-rw-r--r-- root/root 1211612 2016-02-07 17:30 root/backups/mariadb_name/mariadb_name.0207.sql
drwxr-xr-x apache/wp 0 2016-02-07 13:31 var/www/html/mariadb_name/
drwxr-xr-x apache/wp 0 2016-02-02 12:11 var/www/html/mariadb_name/wp-admin/

To script it, in root’s home directory (or whichever user runs the backups), create a .my.cnf so mysqldump can authenticate without prompting, and lock down the perms:
touch .my.cnf ; chmod 600 .my.cnf

In the file, have the following:
[mysqldump]
password=
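
With the password tucked into .my.cnf, the three commands above drop nicely into a little script you can cron. Here’s a rough sketch (backup_wp.sh and the paths are just examples, adjust to taste):

#!/bin/bash
# Hypothetical nightly backup: dump the DB, tar it with the site files, drop the loose dump.
DB=mariadb_name
DEST=/root/backups/$DB
STAMP=$(date +%m%d)
mysqldump "$DB" -u root > "$DEST/$DB.$STAMP.sql"
tar czf "$DEST/${DB}_full_$STAMP.tgz" "$DEST/$DB.$STAMP.sql" /var/www/html/$DB
rm -f "$DEST/$DB.$STAMP.sql"

Something like 30 2 * * * /root/backup_wp.sh in root’s crontab runs it nightly at 2:30.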

Need to restore?

[root@websites ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 65
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

MariaDB [(none)]> create database databasename;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> \q
Bye
[root@websites ~]# mysql -u root -p -h localhost mariadb_name < backup_file.sql
Enter password:

cd ../../../ … ugh.

I picked a shortcut, called: aa

Then, in ~/.bashrc (my .bashrc), I created a function:

function aa
{
    # Build "../" $1 times, cd to the result, and print where we landed
    cd $(for ((i=0 ; i<$1 ; i++)); do printf "../" ; done)
    pwd
}

This will allow you to ‘cd’ back X number of directories by issuing: aa X (where X is a number of directories you wanna go backwards).

So … say I’m in /var/www/html/ and I want to go back 2 levels to /var
I could:

$ cd ../../
or:
$ cd /var
or now:
$ aa 2

Example:

[admin@linux1 html]$ pwd
/var/www/html

[admin@linux1 html]$ aa 2
/var

So handy.

Copy / Paste issues with Synergy?

Regardless of whether it’s cross-OS or Windows to Windows … you may run into issues copying & pasting in either direction: Server -> Client … or Client -> Server.

If you opened up the Server Log window, you may have noticed disconnects … but you don’t lose synergy connectivity (the lightning bolt stays attached).

If you’re running a server on Linux or Mac, then you can skip the next step – which talks about Windows config.

Grab WinSSHD from Bitvise (www.bitvise.com/winsshd) & install it on the server. Ensure you allow port forwarding.

Once it’s running, on the CLIENT side, use SecureCRT (or PuTTY) & configure port forwarding to the newly installed Windows SSH server.

Port forwarding with SecureCRT:

Once configured – on the CLIENT, login & fire up Synergy.

Enter ‘localhost’ for the hostname:

The established SSH connection will be the transport for port 24800 from the Client to the Server.

If using Mac or Linux, use command-line SSH to setup the port forwarding. Again, the Server must be configured to allow port forwarding:
# ssh -L 24800:localhost:24800 user@ssh_server

If setup properly, you’ll get your lightning bolt & all will be well. Since using this setup, I have not had a single issue copying/pasting between my two windows machines.

EDIT:

This started failing about 8 hours later & hasn’t worked since — even going back to older releases & up to the latest Betas.

It’s GOTTA be the x64 OS —

Automounter – how I didn’t miss you

I really haven’t touched automount on Linux since my RHCE exam, but decided to refresh some of my outdated abilities & set up automounting home directories.

I used 2 VMs with CentOS 5.6 with the latest updates.

CentOS1:
CentOS1 needed NFS enabled & running, along with portmap and autofs.

Additionally, since we aren’t using NIS, each user account has to be present on both machines, and have the same UID & GIDs. If not, you’ll get into permission hell like this later on:

# su - test2
su: warning: cannot change directory to /home/test2: Permission denied
-bash: /home/test2/.bash_profile: Permission denied

Now — NFS needed some config, where I exported a “home” directory (which was the system’s home directory):

# cat /etc/exports
/home *(rw,sync)

Then, I ran exportfs -a to update the exported directory & tested that it was available locally by doing:
# showmount -e localhost …. which got me:

Export list for localhost:
/home *

Once it was available, I had to add in IPTables rules to allow the other VM the ability to mount from it. But before I could do that, I needed to know the ports required.
I ran rpcinfo -p localhost & filtered out the tcp/udp ports for autofs, nfs & portmap (edited for content):

# rpcinfo -p localhost

program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100003 2 udp 2049 nfs
100003 2 tcp 2049 nfs
100005 1 udp 976 mountd
100005 1 tcp 979 mountd

Then, I added in the IPTables rules & restarted iptables:


-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 976 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 979 -j ACCEPT

CentOS2:
Now that the ports are open, I needed to be sure that everything truly was open.
From CentOS2, I ran:
# rpcinfo -p centos1 … which got me (edited for content):

program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100003 2 udp 2049 nfs
100003 2 tcp 2049 nfs
100005 1 udp 976 mountd
100005 1 tcp 979 mountd

BOOM, I was on my way to configuring automount on CentOS2 now.

On CentOS2, I needed nfs, portmap and autofs enabled & running as well.

** NOTE** If you want the mount to be /home – you’ll need to move the original /home to a new name & create a new /home directory.

Edit /etc/auto.master & include the following:
/home /etc/auto.home --timeout 600

Edit /etc/auto.home & include the following (all on one line):
* -fstype=nfs,soft,intr,rsize=8192,wsize=8192,nosuid,tcp centos1:/home/&

Restart autofs & ‘su -’ to a user that exists, with a home directory in /home – and you’ll have a shared home directory via automount & NFS.

If you want to be sure — run df -h & check out the mount points.
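
To be specific, something like this from CentOS2 (test2 is the same example user from earlier; any account that exists on both boxes with matching UID/GID will do):

# su - test2
$ df -h ~

The filesystem shown for the home directory should be centos1:/home/test2, mounted on demand by autofs.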

`sudo -l` returns – nothing?

FYI – All aliases, commands, users etc. have been changed.

I added a new Host_Alias (DB_SRVRS) for a server group that I’ve rolled out.

I tacked it onto the end of a really long User_Alias [_] Host_Alias [_] Cmnd_Alias entry at the end of the sudoers file (yeah, I used visudo).
Here’s what came next:

[otech@server1 ~]$ sudo -l
[sudo] password for otech:
User otech may run the following commands on this host:
[otech@server1 ~]$

Nuthin’. What? I’ve configured the file to give me some [fake] privs.

Solution:
There was too much on the long User_Alias [_] Host_Alias [_] Cmnd_Alias line.

Again, I’m making the command aliases’ up, but it looked something like:
ENG ENG_SRVRS = PIDS, PROCS, SERVICE : IT_SRVRS = PIDS, PROCS, SERVICE : QA_SRVRS = PIDS, PROCS, SERVICE : DB_SRVRS = TC, PGCTL, MYSQL

Once I dropped DB_SRVRS to a new line, it all worked.
Final:
ENG ENG_SRVRS = PIDS, PROCS, SERVICE : IT_SRVRS = PIDS, PROCS, SERVICE : QA_SRVRS = PIDS, PROCS, SERVICE
ENG DB_SRVRS = TC, PGCTL, MYSQL

Now, I get:
[otech@server1 ~]$ sudo -l
[sudo] password for otech:
User otech may run the following commands on this host:
(tomcat) /usr/tommycat/bin/startup.sh, /usr/tommycat/bin/shutdown.sh
(pgusr) /usr/postgrass/bin/postgrassql restart
(mysqlusr) /usr/mahsql/bin/mahsqld restart
[otech@server1 ~]$