Migrating to a new GIT server


You’re decommissioning an old server, which just so happens to have your GIT repo on it.

How can you migrate it without losing the history?

From the NEW SERVER:

Make your new directory & set the permissions to your GIT_USER:

mkdir /opt/git/qa.git
chown git.git /opt/git/qa.git

‘su’ to GIT_USER on the server, and initialize the newly created directory

su - git
cd /opt/git/qa.git
git init --bare

From the EXISTING CLIENT (with the most recent copy of the repo), make a new directory, 'cd' into it and initialize it:

mkdir ~/git/qa
cd ~/git/qa
git init .

Now, clone the existing repository INTO here.  It will not have a working copy of the code; it'll have the full repository history and configuration.  Again, the command will be pointing at the ORIGINAL (soon to be decommissioned) server:

git clone --bare ssh://git@

You’ll see something like:

Cloning into bare repository 'qa.git'...
git@'s password:
remote: Counting objects: 841, done.
remote: Compressing objects: 100% (595/595), done.
remote: Total 841 (delta 292), reused 605 (delta 208)
Receiving objects: 100% (841/841), 21.52 MiB | 23.68 MiB/s, done.
Resolving deltas: 100% (292/292), done.

Then, the clone will create a <repo>.git directory in your "new" local directory (~/git/qa).  'cd' into that:

cd ~/git/qa/qa.git

Now, it's your job to PUSH that repository to the NEW server:

git push --mirror ssh://git@991.11.78.221/opt/git/qa.git

You’ll see something like:

Counting objects: 841, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (511/511), done.
Writing objects: 100% (841/841), 21.52 MiB | 0 bytes/s, done.
Total 841 (delta 292), reused 841 (delta 292)
To ssh://991.11.78.221/opt/git/qa.git
 * [new branch]      master -> master

When that's done, you can 'cd' back a level & delete the <repo>.git directory that the clone created.

cd ..
rm -rf qa.git

Now, you can either re-clone from the NEW server, or modify the existing .git directory's config file to point to the new location:

vi ~/git/original_qa_repo_directory/includes/.git/config

and change the OLD server’s IP …

[remote "origin"]
        url = ssh://git@

… to the NEW server’s IP:

[remote "origin"]
        url = ssh://git@991.11.78.221/opt/git/qa.git

Now, issue a ‘git pull’ and you should be all set!

$ git pull
Already up-to-date.
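If you'd rather not hand-edit the config file, the same change can be made with a single command (a sketch, assuming the same new-server URL as the example above):

```shell
# point the existing clone's 'origin' remote at the new server
git remote set-url origin ssh://git@991.11.78.221/opt/git/qa.git

# verify the change took
git remote get-url origin
```

Either way, the next 'git pull' talks to the new server.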


HOWTO: Add a new NIC into RHEL7 and configure it for use via ‘nmcli’

So, I have a VM and just added another NIC.  When I run ‘ip a’ – I see it, but there is no info:

[root@rhce-prep-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno16780032: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:0d:22:d0 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic eno16780032
       valid_lft 86236sec preferred_lft 86236sec
    inet6 2601:191:8380:f23d:20c:29ff:fe0d:22d0/64 scope global noprefixroute dynamic 
       valid_lft 189151sec preferred_lft 189151sec
    inet6 fe80::20c:29ff:fe0d:22d0/64 scope link 
       valid_lft forever preferred_lft forever
3: eno33559296: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:0d:22:da brd ff:ff:ff:ff:ff:ff

‘nmcli con show’ – leaves me hanging as well:

[root@rhce-prep-1 ~]# nmcli con show
NAME         UUID                                  TYPE            DEVICE      
eno16780032  41003ce7-fd00-41e4-8524-67e1e9418179  802-3-ethernet  eno16780032

Querying the device status, I get some more info:

[root@rhce-prep-1 ~]# nmcli device status
eno16780032  ethernet  connected     eno16780032 
eno33559296  ethernet  disconnected  --          
lo           loopback  unmanaged     --

A little finagling with 'nmcli --help' & 'man nmcli' gets me this syntax and a positive result:

[root@rhce-prep-1 NetworkManager]# nmcli con add type ethernet ifname eno33559296 con-name eno33559296
Connection 'eno33559296' (b3e327b9-538e-4b95-b729-4daaa4b56ddc) successfully added.

Re-running 'nmcli con show' gives me the new interface & 'nmcli device status' shows 'connected' now:

[root@rhce-prep-1 NetworkManager]# nmcli con show
NAME         UUID                                  TYPE            DEVICE      
eno33559296  b3e327b9-538e-4b95-b729-4daaa4b56ddc  802-3-ethernet  eno33559296 
eno16780032  41003ce7-fd00-41e4-8524-67e1e9418179  802-3-ethernet  eno16780032 

[root@rhce-prep-1 NetworkManager]# nmcli device status
eno16780032  ethernet  connected  eno16780032 
eno33559296  ethernet  connected  eno33559296

‘ip a’ registers a new DHCP address:

[root@rhce-prep-1 NetworkManager]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno16780032: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:0d:22:d0 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic eno16780032
       valid_lft 85265sec preferred_lft 85265sec
    inet6 2601:191:8380:f23d:20c:29ff:fe0d:22d0/64 scope global noprefixroute dynamic 
       valid_lft 188178sec preferred_lft 188178sec
    inet6 fe80::20c:29ff:fe0d:22d0/64 scope link 
       valid_lft forever preferred_lft forever
3: eno33559296: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:0d:22:da brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic eno33559296
       valid_lft 86098sec preferred_lft 86098sec
    inet6 2601:191:8380:f23d:20c:29ff:fe0d:22da/64 scope global noprefixroute dynamic 
       valid_lft 188178sec preferred_lft 188178sec
    inet6 fe80::20c:29ff:fe0d:22da/64 scope link 
       valid_lft forever preferred_lft forever


HOWTO: Configure SELinux to use non-standard ports

First, you need to be able to tune the parameters, so you need some packages:

[root@rhce ~]# yum -y install setroubleshoot-server selinux-policy-devel

Wait, I want to use a port other than 80 for apache/http – how do I know what to use?

[root@rhce ~]# semanage port -l | grep http
http_cache_port_t              tcp      8080, 8118, 8123, 10001-10010
http_cache_port_t              udp      3130
http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000
pegasus_http_port_t            tcp      5988
pegasus_https_port_t           tcp      5989

OK.  80 is “http_port_t”

Now, I need to choose the port I want to use (25000) & see if it’s in use:

[root@rhce ~]# sepolicy network -p 25000
25000: tcp unreserved_port_t 1024-32767
25000: udp unreserved_port_t 1024-32767

NICE!  It's not claimed by any service-specific type.  Now, I need to allow apache/httpd to use it:

[root@rhce ~]# semanage port -a -t http_port_t -p tcp 25000

If you want to remove the port later, replace -a with -d & run it again.

Check to see that it’s been applied appropriately:

[root@rhce ~]# sepolicy network -p 25000
25000: tcp http_port_t 25000
25000: tcp unreserved_port_t 1024-32767
25000: udp unreserved_port_t 1024-32767


Next, open the firewall to allow the port & then make it permanent:

[root@rhce ~]# firewall-cmd --add-port 25000/tcp
[root@rhce ~]# firewall-cmd --add-port 25000/tcp --permanent

Make your httpd.conf / vhosts.conf changes, restart apache & you’re IN with the new port:
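Concretely, the Apache side is just a Listen directive on the new port plus a matching vhost port (a config fragment; the ServerName and DocumentRoot values are made-up examples):

```apache
# /etc/httpd/conf/httpd.conf
Listen 25000

# /etc/httpd/conf.d/vhosts.conf
<VirtualHost *:25000>
    ServerName rhce.example.com
    DocumentRoot /var/www/html
</VirtualHost>
```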


HOWTO: Add Virtual Hosts in Apache on RHEL7

This isn’t terrible.  Install httpd & open up the firewall:

[root@rhce ~]# yum -y install httpd
[root@rhce ~]# firewall-cmd --add-service http
[root@rhce ~]# apachectl start

Test that the webpage responds (use the bond you just set up!) and when it does, enable the service and make the firewall permanent:

[root@rhce ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@rhce ~]# firewall-cmd --add-service http --permanent

Now, make some directories in /var/www/html and echo some content into an index.html file:

[root@rhce html]# mkdir tom base blog
[root@rhce html]# ll
total 0
drwxr-xr-x. 2 root root 23 Sep 15 22:45 base
drwxr-xr-x. 2 root root 23 Sep 15 22:43 blog
drwxr-xr-x. 2 root root 23 Sep 15 22:43 tom

[root@rhce html]# for i in $(ls); do echo "you are at $(pwd)/$i" >> $i/index.html; done

Fix the SELinux contexts for this location like so:

[root@rhce html]# restorecon -R *

Now, create & edit the /etc/httpd/conf.d/vhosts.conf file.  You want the directories made above to be the DocumentRoot values and the site URLs to be the ServerName directives:

<VirtualHost *:80>
ServerName tom.rhce.com
DocumentRoot /var/www/html/tom
</VirtualHost>
<VirtualHost *:80>
ServerName blog.rhce.com
DocumentRoot /var/www/html/blog
</VirtualHost>
<VirtualHost *:80>
ServerName rhce.com
DocumentRoot /var/www/html/base
</VirtualHost>

Once saved, restart httpd:

[root@rhce html]# systemctl restart httpd

- or - 

[root@rhce html]# apachectl restart

And browse to your site & test.  You can see that the loop above inserted the "you are at …" text into the index.html file, which is shown when you browse the site.


HOWTO: Create a BOND with RHEL7

Let’s say you have a few spare NICs and want to put them together in a (active/passive) bond.  What do you do?

Well, this is pretty straight-forward.

First, connect via SSH to an IP on a NIC that WILL NOT be part of the bond.

Using ‘nmcli’ – remove references to the NICs you want IN the bond and reload nmcli:

[root@rhce ~]# nmcli con del p4p1 p4p2
Connection 'p4p1' (92d6456d-16bd-4eae-9ecb-386cb4ce4d29) successfully deleted.
Connection 'p4p2' (e52eca2f-8c84-428d-8959-93e85f4b03f3) successfully deleted.
[root@rhce ~]# nmcli con reload

Next, with nmcli, create the bond:

[root@rhce ~]# nmcli con add type bond ifname bond0 con-name bond0 mode active-backup miimon 100 ip4
Connection 'bond0' (5886c4c3-6ed7-4785-be41-7ef4c6f29373) successfully added.

Now, the bond is just an IP at this point in time; there are no NICs associated with it.  Time to add the two NICs (p4p1 & p4p2) in:

[root@rhce ~]# nmcli connection add type bond-slave ifname p4p1 con-name p4p1 master bond0
Connection 'p4p1' (ef7fd007-af66-43f2-a769-a8916dbf09c9) successfully added.
[root@rhce ~]# nmcli connection add type bond-slave ifname p4p2 con-name p4p2 master bond0
Connection 'p4p2' (d40213dd-4fd7-4a62-ac4c-1cc2d7480284) successfully added.

Optionally (I think it's optional, but I still do it), modify the bond to have DNS:

[root@rhce ~]# nmcli connection modify bond0 ipv4.dns ","

Now, ‘up’ the bond:

[root@rhce ~]# nmcli con up bond0

It’ll take about 30 seconds to configure behind the scenes, so set up a continuous ping and wait for it to reply.

The last part of this is testing the functionality.  Start a PING test, pull a cable & see what happens:

$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=52 ttl=64 time=0.357 ms
64 bytes from icmp_seq=53 ttl=64 time=0.301 ms
64 bytes from icmp_seq=54 ttl=64 time=0.359 ms
64 bytes from icmp_seq=55 ttl=64 time=0.339 ms
<pull the active cable>
Request timeout for icmp_seq 6
<10-45 more times>
Request timeout for icmp_seq 51
64 bytes from icmp_seq=82 ttl=64 time=0.607 ms
64 bytes from icmp_seq=83 ttl=64 time=0.339 ms
64 bytes from icmp_seq=84 ttl=64 time=0.361 ms
64 bytes from icmp_seq=85 ttl=64 time=0.276 ms
64 bytes from icmp_seq=86 ttl=64 time=0.306 ms

Looks like you got an active/backup bond working successfully!

HOWTO: Set up an iSCSI target on RHEL7

Install targetcli:

[root@rhce ~]# yum install targetcli -y

I used a USB drive as the soon-to-be-block device, so I had to prep it:

[root@rhce ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-31285247, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-31285247, default 31285247): 
Using default value 31285247
Partition 1 of type Linux and of size 14.9 GiB is set

Command (m for help): p

Disk /dev/sdb: 16.0 GB, 16018046976 bytes, 31285248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31285247    15641600   83  Linux

Command (m for help): w
The partition table has been altered!

Start & enable (start on boot) target  (not targetd or targetcli):

[root@rhce ~]# systemctl start target
[root@rhce ~]# systemctl enable target

Enter targetcli and go to the backstores/block directory:

[root@rhce ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> cd /backstores/block 

Now, we create a block backstore (named lun0 here) from the newly carved-out USB partition:

/backstores/block> create lun0 /dev/sdb1 
Created block storage object lun0 using /dev/sdb1.

Now,  go to the /iscsi directory & create an official target name:

/backstores/block> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.rhce.x8664:sn.dd8b652b6367.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (, port 3260.

Now, 'cd' into the IQN, then into its target portal group (TPG 1, created above):

/iscsi> cd iqn.2003-01.org.linux-iscsi.rhce.x8664:sn.dd8b652b6367/

/iscsi/iqn.20....dd8b652b6367> cd tpg1

Add your ACL; it could be an IP or IQN of another machine.  I elected to use the Microsoft Initiator, mainly because I had a Windoze VM running at the time:

/iscsi/iqn.20...652b6367/tpg1> cd acls
/iscsi/iqn.20...367/tpg1/acls> create iqn.1991-05.com.microsoft:whoosiewhatsit
Created Node ACL for iqn.1991-05.com.microsoft:whoosiewhatsit

Without a target IP, initiators can't get here … so, let's set a listener:

/iscsi/iqn.20...367/tpg1/acls> cd ../portals
/iscsi/iqn.20.../tpg1/portals> create
Using default IP port 3260
Binding to INADDR_ANY (
This NetworkPortal already exists in configFS

Now, we have to map the LUN created earlier, into this portal.  You’ll see that it carries across and maps the ACL.

/iscsi/iqn.20.../tpg1/portals> cd ../luns 
/iscsi/iqn.20...367/tpg1/luns> create /backstores/block/lun0 
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.1991-05.com.microsoft:whoosiewhatsit

‘cd’ back to the beginning & save the config:

/iscsi/iqn.20...367/tpg1/luns> cd /
/> saveconfig

The (second to) last thing you need to do, is open up the firewall to allow iSCSI port 3260:

[root@rhce ~]# firewall-cmd --add-port 3260/tcp

Now, test the iSCSI initiator, using the IP of the system and see if your SEND_TARGETS request comes back with your new “target”:


SUCCESS!  Now, you must make your firewall change permanent:

[root@rhce ~]# firewall-cmd --add-port 3260/tcp --permanent

You’re now free to connect, initialize, assign a drive letter & sector-align that bad-boy.



HOWTO: Configure ddclient to update your Dynamic DNS entries with Google Domains

In a continuation of Google Domains includes Dynamic DNS for your self-hosted websites, I set up my RaspberryPi (running Raspbian) to be the DDNS daemon to make sure that if my IP changes, there’s a semi-quick update.

The first thing I did was install ddclient.

sudo apt-get install ddclient

Then, I used the Dynamic DNS entry created during the other blog posts – and grabbed the user/password info for that ddns entry.

I edited the /etc/ddclient.conf file, adding in the necessary info from domains.google.com.

pi@raspberrypi:~# sudo cat /etc/ddclient.conf 
# Configuration file for ddclient generated by debconf
# /etc/ddclient.conf

protocol=dyndns2
use=web
server=domains.google.com
ssl=yes
login=<from the domains page>
password='<in single quotes on purpose>'
www.blogdomain.dom

The www in www.blogdomain.dom above must match the DDNS record within Google Domains DNS settings.

Once that’s done, execute the ddns update:

pi@raspberrypi:~# sudo ddclient -verbose -foreground
CONNECT:  checkip.dyndns.org
SENDING:   Host: checkip.dyndns.org
SENDING:   User-Agent: ddclient/3.8.2
SENDING:   Connection: close
RECEIVE:  Content-Type: text/html
RECEIVE:  Server: DynDNS-CheckIP/1.0
RECEIVE:  Connection: close
RECEIVE:  Cache-Control: no-cache
RECEIVE:  Pragma: no-cache
RECEIVE:  Content-Length: 104
RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: <redacted></body></html>
INFO:     forcing updating www.blogdomain.dom because no cached entry exists.
INFO:     setting IP address to <redacted> for www.blogdomain.dom
UPDATE:   updating www.blogdomain.dom
CONNECT:  domains.google.com
SENDING:  GET /nic/update?system=dyndns&hostname=www.blogdomain.dom&myip=<redacted> HTTP/1.0
SENDING:   Host: domains.google.com
SENDING:   Authorization: Basic <redacted>
SENDING:   User-Agent: ddclient/3.8.2
SENDING:   Connection: close

Now, add it to cron to run every 2 hours on the half-hour & log the results:

30 */2 * * * /usr/sbin/ddclient -verbose >> /var/log/ddclient_updates.out


Google Domains includes Dynamic DNS for your self-hosted websites

Oh, the good-old-days, when DYNDNS was free – and so was zoneedit.  Move forward to today, where everything costs (and it should!).

Since I’m so heavily invested in Google from a tech perspective, I opted in to try their registrar service: Google Domains.

Transferring in was easy (see ya 1and1!) and setting up custom resource records (DNS entries) was simple.

Then, I read about Dynamic DNS being included with the service (along with free domain privacy) and I was intrigued.

Google offers their 'help' page – but it left me questioning something.

I’m not going to re-write their doc, however, if you have any subdomains (www, blog, etc), THOSE are your synthetic Dynamic DNS records.

Piping debug output through grep

It would seem straightforward, but it gave me a Google challenge tonight.

iscsiadm -m node -d 8

Run it (well, if you use iSCSI, that is); that was my test tonight.

I was looking for the selective, ‘grepped’ output of:


But, when I ran iscsiadm -m node -d 8 | grep timeo.lu — it didn’t give me just those matches.

The man page failed me, but I found a reference to "|&", which pipes stderr along with stdout (the debug messages go to stderr, which a plain pipe doesn't capture).  Gave it a go, with success!

# iscsiadm -m node -d 8 |& grep timeo.lu
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
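To see why the plain pipe missed, note that `|` only carries stdout, while `|&` is bash shorthand for `2>&1 |`.  A minimal sketch you can run without iscsiadm (the `emit` function is just a stand-in for a command that writes to both streams):

```shell
# stand-in for a command that writes to both stdout and stderr
emit() { echo "to stdout"; echo "to stderr" >&2; }

emit 2>/dev/null | grep -c "to"   # plain pipe sees only stdout: prints 1
emit |& grep -c "to"              # |& carries stderr too: prints 2
```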


HOWTO: Run Chrome in Incognito Mode on the Mac, with its own shortcut

Step 1, install Chrome.

Step 2 and the rest of them:

Open up your “Applescript Editor” App.

** NOTE ** You can search for it by hitting command+space on the keyboard (aka, opening up Spotlight) and typing: applescript editor **

When it opens, select the: New Document button & paste in:

tell application "Google Chrome"
  close windows
  make new window with properties {mode:"incognito"}
end tell

Then click:  File/Export.

Change the File Format from “Script” to “Application”

Name the application, select the “Applications” Location on your system & run it.


I changed the password for my git repo and now it’s failing authentication on a pull

On my primary machine that’s always locked, I cache my password for some repos.  I just do.

So, after being forced to change the password for an (https!) repo, I tried a pull and hit exactly this.  This doesn't impact ssh public-key authentication.

$ git pull
remote: Invalid username or password. If you log in via a third party service you must ensure you have an account password set in your account profile.
fatal: Authentication failed for 'https://bitbucket.org/me/repo/'

You have to reset your credential helper cache, like so:

$ git config --global credential.helper cache
$ git pull

Ah, now — prompted for username & password.  #verynice

Username for 'https://bitbucket.org': fakeusername_1  
Password for 'https://fakeusername_1@bitbucket.org': 
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.

And, scene.
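As an aside, if the helper's daemon is still holding the old password, you can also tell it to drop everything immediately rather than waiting out the cache timeout (a sketch; it's harmless to run when no daemon is up):

```shell
# ask the credential-cache daemon to exit, flushing any cached credentials
git credential-cache exit
```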

git clone config global reset author –what?

Ah, cloning a git repo again, for the first time.   Here’s me using bitbucket.org; it’s free for slackers like me.

OK, so first:

$ mkdir -p ~/git/bitbucketrepo

$ git init ~/git/bitbucketrepo

$ cd ~/git/bitbucketrepo

$ git clone https://full-address-as-seen-in-bitbucket

Cool, now I add a few scripts & am ready to ‘stage’ them with ‘add.’

$ git add .

Unfortunately, this machine will get auto-assigned a name & email based on your login & some FQDN stuff.  I think we should change it.

$ git config --global user.name "Tom's Fedora 24 Workstation"
$ git config --global user.email tomblog@personalemail.email
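If you don't want those values applied machine-wide, the same settings work per-repository by dropping --global (a sketch in a throwaway repo; the name & email are just the examples above):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo

# local settings live in .git/config and win over --global ones
git config user.name "Tom's Fedora 24 Workstation"
git config user.email tomblog@personalemail.email

git config user.name    # prints the per-repo value
```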

Now, kick off a commit:

$ git commit -m "testing for blog"
[master 0e08355] testing for blog
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode ....

Now, time to ‘push’ it to bitbucket:

$ git push

AWW Crap, more stuff:

$ git push
warning: push.default is unset; its implicit value has changed in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the traditional behavior, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

When push.default is set to 'matching', git will push local branches
to the remote branches that already exist with the same name.

Since Git 2.0, Git defaults to the more conservative 'simple'
behavior, which only pushes the current branch to the corresponding
remote branch that 'git pull' uses to update the current branch.

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Then you’re prompted for your password & everything works.

HOWEVER, for the next 'push,' let's adapt to the new behavior:

$ git config --global push.default simple

Make a test file & test again:

$ echo "BLOG TEST" > new_stuff.txt
$ git add .
$ git commit -m "blog test"
[master xxx] blog test
 1 file changed, 2 insertions(+)
 create mode ....
$ git push
Password for 'https://.....
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.

Looks good!

See you in a year when you need to do it again.


SUNNOVA … why doesn’t NOPASSWD work in /etc/sudoers in Fedora 24?

I’m used to just copy/pasting root & adding in my username, then tacking on NOPASSWD: ALL at the end, like so:

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
tbizzle     ALL=(ALL)       NOPASSWD: ALL

Then, running a sudo command, I STILL had to enter the password:

[tbizzle@f24-mac ~]$ sudo date
[sudo] password for tbizzle: 
Tue Jun 28 01:11:55 EDT 2016

CRAP.  That’s not what I wanted.


But NOW, it's different.  sudoers is last-match-wins, and my user is in the wheel group, so the %wheel rule further down was overriding my entry and still demanding a password.  The "fix" was to add the entry AFTER wheel for it to work:

[tbizzle@f24-mac ~]$ sudo grep -A4 -B4 bizzle /etc/sudoers | grep -A4 -B4 NOPASS

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
tbizzle     ALL=(ALL) NOPASSWD: ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL

and now:

[tbizzle@f24-mac ~]$ sudo date
Tue Jun 28 01:10:26 EDT 2016


Hope it helps

HOWTO: Wipe your Nexus’ Cache Partition?

This is an easy one … but I forgot enough times that I needed to add it here.

  • Turn off your Nexus device
  • Press and hold the Volume Down and Power keys at the same time for about 7 seconds
  • An image of an Android lying on its back will appear
  • Press the Volume Down key twice – you'll see "Recovery mode" appear on the screen
  • Press the Power key to restart in “Recovery mode”
  • An image of an Android and a red triangle will appear
  • Press the Power key and Volume Up keys simultaneously (it may take a few tries)
  • Use the Volume keys to scroll to "wipe cache partition" and press the Power key

Repos and Subscriptions needed to install RHEV 3.5

After some fighting, here's what you have to do ..

Install a RHEL 6 VM

# subscription-manager register
Registering to: subscription.rhn.redhat.com:443/subscription
Username: your new shiny name
The system has been registered with ID: XXXXXXXX

# subscription-manager attach
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

(Does the above look familiar?)


Once that’s done, go to your RHN account & click on the VM you just ‘attached’ and pick ‘Attach a subscription’ and select your Virtualization Entitlement.

Once that’s done, issue:

subscription-manager repos --enable rhel-6-server-rhevm-3.5-rpms
subscription-manager repos --enable jb-eap-5-for-rhel-6-server-rpms
subscription-manager repos --enable rhel-6-server-supplementary-rpms
subscription-manager repos --enable jb-eap-6-for-rhel-6-server-rpms
subscription-manager repos --enable rhel-6-server-rhevh-rpms

THEN, you can install RHEV & the hypervisor (to get the ISOs):

yum -y install rhevm "rhev-hypervisor*"


HOWTO: Linksys – EA9500 / AC5400 & hosting your own Websites

I have an EA9500 Smart Router, and when I activated it and turned my ASUS RT-N66U into an Access Point, I found myself unable to access the websites I hosted myself.

Well, on the Linksys there's a FEATURE, enabled by default, that stops this from working properly; aka, it breaks NAT loopback.

Here’s the symptom.

On the network at home, you can’t get to your website.

Pop over to your phone / LTE – there’s your site.

Strip carriage returns and then add a break after X characters

Work in Progress ….

Take the file that has ^Ms in it and …

Strip the newlines:
tr -d '\n' < filename > output_filename

Open the output filename in vi & strip the ^Ms:

:%s/^M//g

** NOTE ** the ^M is created by doing a Control-V Control-M

Then, add the returns:

sed -e "s/.\{71\}/&\n/g" < output_filename > final_file.out

** NOTE ** the 71 is the # of characters you want a newline after.
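The whole thing can also be done in one pipeline: tr drops both the carriage returns and the newlines in a single pass, then fold re-wraps at the desired width (a sketch; 71 is the assumed wrap width, and the filenames are the placeholders used above):

```shell
# delete \r and \n in one pass, then wrap at 71 characters
tr -d '\r\n' < filename | fold -w 71 > final_file.out
```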

HOWTO: APACHE – permanent redirect to another server & port

I’m using CentOS 7.2 & the corresponding layout as seen here.

So, I have a few VMs that host sites, and I elected *not* to move forward with AWS due to my very strained budget; that setup used Ubuntu and Docker.
That said, I kept an Ubuntu VM, and since there's just a single Internet connection inbound, it can't share port 80, so I was forced to make changes.

Here’s what I did to get around it (mind you, none of this is actual):

Take your /etc/httpd/sites-enabled directory and add a conf file:

# cat blog-toloughlin.conf

ServerName blog.toloughlin.com
ServerAlias blog.toloughlin.com
RedirectPermanent / http://www.blog.toloughlin.com:81
# optionally add an AccessLog directive for
# logging the requests and do some statistics

Next time you visit that domain, it'll push the traffic back to port 81 (translated by your router).

Caveat: you’ll see :81 in your URL bar and some of your site may not work correctly (things coded to use the domain & no port numbers).

It’s hackey, but it works … fairly well.

corrupted or tampered with during downloading ???

Well, I guess it’s common now to see this when trying to install OS X. My example happened when I tried to install El Capitan, fresh (no upgrade) on a newly formatted SSD – and had me scratching my bean.

I got this:
This copy of the Install OS X El Capitan application can't be verified. It may have been corrupted or tampered with during downloading

People have identified the need to set the clock back via Terminal, right before you install the OS after boot-up.

I checked my time & it was spot on (although it thought I was on the Left Coast, which I’m not).

I COULD have run the infamous date command (date MMDDHHmmYY), but elected not to.

I deleted the installer and downloaded El Capitan yet again. Guess what? It worked.

Here is what I’m thinking. If you download & set it aside for a while, you need to roll your clock back. If not, you’re good to go.

So if you don't have the luxury of downloading the OS again, see what the installer's time/date stamp shows, set the system date to about a week later than that, and you should be all set.

So, if you see this (for example):

But it’s April, 2016 now … run:
date 0401101016

Exit the Terminal App and try the install again.