ddclient to update your Dynamic DNS entries

In a continuation of Google Domains includes Dynamic DNS for your self-hosted websites, I set up my Raspberry Pi (running Raspbian) as the DDNS client, so that if my IP changes, the record gets updated semi-quickly.

The first thing I did was install ddclient.

sudo apt-get install ddclient

Then, I used the Dynamic DNS entry created in that post and grabbed the username/password for the DDNS record.

I edited the /etc/ddclient.conf file, adding in the necessary info from domains.google.com.

pi@raspberrypi:~# sudo cat /etc/ddclient.conf 
# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf

protocol=dyndns2
ssl=yes
use=web
server=domains.google.com
login=<from the domains page>
password='<in single quotes on purpose>'
www.blogdomain.dom

The www in www.blogdomain.dom above must match the DDNS record within Google Domains DNS settings.

Once that's done, run a DDNS update by hand to confirm it works:

pi@raspberrypi:~# sudo ddclient -verbose -foreground
CONNECT:  checkip.dyndns.org
CONNECTED:  using HTTP
SENDING:  GET / HTTP/1.0
SENDING:   Host: checkip.dyndns.org
SENDING:   User-Agent: ddclient/3.8.2
SENDING:   Connection: close
SENDING:   
RECEIVE:  HTTP/1.1 200 OK
RECEIVE:  Content-Type: text/html
RECEIVE:  Server: DynDNS-CheckIP/1.0
RECEIVE:  Connection: close
RECEIVE:  Cache-Control: no-cache
RECEIVE:  Pragma: no-cache
RECEIVE:  Content-Length: 104
RECEIVE:  
RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: <redacted></body></html>
INFO:     forcing updating www.blogdomain.dom because no cached entry exists.
INFO:     setting IP address to <redacted> for www.blogdomain.dom
UPDATE:   updating www.blogdomain.dom
CONNECT:  domains.google.com
CONNECTED:  using SSL
SENDING:  GET /nic/update?system=dyndns&hostname=www.blogdomain.dom&myip=<redacted> HTTP/1.0
SENDING:   Host: domains.google.com
SENDING:   Authorization: Basic <redacted>
SENDING:   User-Agent: ddclient/3.8.2
SENDING:   Connection: close

Now, add it to cron to run every 2 hours on the half-hour & log the results:

30 */2 * * * /usr/sbin/ddclient -verbose >> /var/log/ddclient_updates.out

 

Google Domains includes Dynamic DNS for your self-hosted websites

Oh, the good old days, when DynDNS was free (and so was ZoneEdit). Fast-forward to today, where everything costs money (and it should!).

Since I’m so heavily invested in Google from a tech perspective, I opted in to try their registrar service: Google Domains.

Transferring in was easy (see ya 1and1!) and setting up custom resource records (DNS entries) was simple.

Then, I read about Dynamic DNS being included with the service (along with free domain privacy) and I was intrigued.

Google offers their ‘help’ page here – but it left me questioning something.

I'm not going to rewrite their doc; however, if you have any subdomains (www, blog, etc.), THOSE are your synthetic Dynamic DNS records.

Piping debug output through grep

It would seem straightforward, but it gave me a Googling challenge tonight.

iscsiadm -m node -d 8

Run it (well, if you use iSCSI, that is). That was my test tonight.

I was looking for the selective, ‘grepped’ output of:

node.session.err_timeo.lu_reset_timeout

But, when I ran iscsiadm -m node -d 8 | grep timeo.lu — it didn’t give me just those matches.

The man page failed me, so I found a reference to “|&” — so, gave it a go, with success!

# iscsiadm -m node -d 8 |& grep timeo.lu
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
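Why the plain pipe missed: iscsiadm writes its debug output to stderr, and a plain pipe only carries stdout. Here's a quick sketch (the err function below is just a stand-in for iscsiadm's debug chatter):

```shell
# Debug/trace output usually goes to stderr; a plain '|' only carries
# stdout, so grep never saw the iscsiadm debug lines.
err() { echo "node.session.err_timeo.lu_reset_timeout updated" >&2; }

err 2>/dev/null | grep -c timeo.lu || true   # prints 0: stderr never reached grep
err 2>&1 | grep -c timeo.lu                  # prints 1: stderr merged into the pipe

# '|&' (bash 4+) is simply shorthand for '2>&1 |':
bash -c 'err() { echo timeo.lu >&2; }; err |& grep -c timeo.lu'   # prints 1
```

So `iscsiadm -m node -d 8 2>&1 | grep timeo.lu` would have worked too; `|&` is just less typing.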

 

Run Chrome in Incognito Mode on the Mac, with its own shortcut

Step 1, install Chrome.

Step 2 and the rest of them:

Open up your “Applescript Editor” App.

NOTE: You can find it by hitting command+space (i.e., opening Spotlight) and typing: applescript editor

When it opens, click the New Document button and paste in:

tell application "Google Chrome"
  close windows
  make new window with properties {mode:"incognito"}
  activate
end tell

Then click:  File/Export.

Change the File Format from “Script” to “Application”

Name the application, select the “Applications” Location on your system & run it.

Enjoy.
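If you'd rather skip AppleScript entirely, a shell one-liner with open does the same thing. Here's a sketch (wrapped in a guard so it only actually launches Chrome on a Mac):

```shell
#!/bin/sh
# Launch Chrome straight into incognito from the shell:
#   -n      start a new instance even if Chrome is already running
#   -a      the application to open
#   --args  everything after this is passed to Chrome itself
cmd='open -na "Google Chrome" --args --incognito'
if [ "$(uname)" = "Darwin" ]; then
  eval "$cmd"
else
  # not on a Mac; just show the command
  echo "$cmd"
fi
```

You could also drop that one-liner into an Automator "Run Shell Script" action if you want a Dock icon without touching AppleScript.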

I changed the password for my git repo and now it’s failing authentication on a pull

On my primary machine, which is always locked, I cache my passwords for some repos.  I just do.

So, after being forced to change the password for an (https!) repo, I tried a pull and this happened.  (This doesn't impact ssh public-key authentication.)

$ git pull
remote: Invalid username or password. If you log in via a third party service you must ensure you have an account password set in your account profile.
fatal: Authentication failed for 'https://bitbucket.org/me/repo/'

You have to reset your credential helper cache, like so:

$ git config --global credential.helper cache
$ git pull

Ah, now — prompted for username & password.  #verynice

Username for 'https://bitbucket.org': fakeusername_1  
Password for 'https://fakeusername_1@bitbucket.org': 
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.
....

And, scene.
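One postscript: that git config line just (re)enables the cache helper; it doesn't flush anything by itself. If the helper daemon is still holding the old password, you can make it forget immediately instead of waiting out the timeout. A sketch, run in a throwaway HOME so it can't touch your real ~/.gitconfig:

```shell
# Use a scratch HOME so the --global writes land in a temp file
export HOME="$(mktemp -d)"

# Tell any running cache daemon to exit, dropping stored credentials
git credential-cache exit 2>/dev/null || true

# Re-enable the helper, optionally with a longer timeout (default is 900s)
git config --global credential.helper 'cache --timeout=3600'
git config --global credential.helper   # prints: cache --timeout=3600
```

The --timeout is optional; without it the cache forgets credentials after 15 minutes anyway.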

git clone config global reset author –what?

Ah, cloning a git repo again, for the first time.   Here’s me using bitbucket.org; it’s free for slackers like me.

OK, so first (note that git clone creates the repo directory itself, so there's no need to git init one beforehand):

$ mkdir -p ~/git

$ cd ~/git

$ git clone https://full-address-as-seen-in-bitbucket

$ cd bitbucketrepo

Cool, now I add a few scripts & am ready to ‘stage’ them with ‘add.’

$ git add .

Unfortunately, commits from this machine will get an auto-assigned name & email based on your login & some FQDN stuff.  I think we should change it.

$ git config --global user.name "Tom's Fedora 24 Workstation"
$ git config --global user.email tomblog@personalemail.email
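A side note: --global writes to ~/.gitconfig and applies to every repo on the machine. Drop --global and the identity applies to just one repo (it lands in that repo's .git/config instead). A quick sketch in a scratch repo:

```shell
# Make a throwaway repo to demonstrate
cd "$(mktemp -d)"
git init -q .

# Repo-local identity: overrides any --global values, for this repo only
git config user.name "Tom's Fedora 24 Workstation"
git config user.email tomblog@personalemail.email

git config user.email   # prints: tomblog@personalemail.email
```

Handy if you commit to work repos with one address and personal repos with another.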

Now, kick off a commit:

$ git commit -m "testing for blog"
[master 0e08355] testing for blog
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode ....

Now, time to ‘push’ it to bitbucket:

$ git push

AWW Crap, more stuff:

$ git push
warning: push.default is unset; its implicit value has changed in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the traditional behavior, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

When push.default is set to 'matching', git will push local branches
to the remote branches that already exist with the same name.

Since Git 2.0, Git defaults to the more conservative 'simple'
behavior, which only pushes the current branch to the corresponding
remote branch that 'git pull' uses to update the current branch.

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Then you’re prompted for your password & everything works.

HOWEVER, for the next 'push', let's adopt the 'new behavior':

$ git config --global push.default simple

Make a test file & test again:

$ echo "BLOG TEST" > new_stuff.txt
$ git add .
$ git commit -m "blog test"
[master xxx] blog test
 1 file changed, 2 insertions(+)
 create mode ....
$ git push
Password for 'https://.....
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.

Looks good!

See you in a year when you need to do it again.

 

SUNNOVA … why doesn’t NOPASSWD work in /etc/sudoers in Fedora 24?

I'm used to just copying root's line, swapping in my username, and tacking NOPASSWD: ALL onto the end, like so:

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
tbizzle     ALL=(ALL)       NOPASSWD: ALL

Then, running a sudo command, I STILL had to enter the password:

[tbizzle@f24-mac ~]$ sudo date
[sudo] password for tbizzle: 
Tue Jun 28 01:11:55 EDT 2016

CRAP.  That’s not what I wanted.

 

But NOW, it's different.  The "fix" was to add the entry AFTER the wheel line: sudoers is parsed top to bottom and the last matching entry wins, so the later %wheel ALL=(ALL) ALL line (my user is in wheel) had been overriding my earlier NOPASSWD rule.

[tbizzle@f24-mac ~]$ sudo grep -A4 -B4 bizzle /etc/sudoers | grep -A4 -B4 NOPASS
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
tbizzle     ALL=(ALL) NOPASSWD: ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL

and now:

[tbizzle@f24-mac ~]$ sudo date
Tue Jun 28 01:10:26 EDT 2016

 

Hope it helps

Need to Wipe your Nexus’ Cache Partition?

This is an easy one … but I forgot enough times that I needed to add it here.

  • Turn off your Nexus device
  • Press and hold the Volume Down and Power keys at the same time for about 7 seconds
  • An image of an Android lying on its back will appear
  • Press the Volume Down key twice – you'll see "Recovery mode" appear on the screen
  • Press the Power key to restart in “Recovery mode”
  • An image of an Android and a red triangle will appear
  • Press the Power key and Volume Up keys simultaneously (it may take a few tries)
  • Use the Volume keys to scroll to "wipe cache partition" and press the Power key

Repos and Subscriptions needed to install RHEV 3.5

After some fighting, here's what you have to do:

Install a RHEL 6 VM

First:
# subscription-manager register
Registering to: subscription.rhn.redhat.com:443/subscription
Username: your new shiny name
Password:
The system has been registered with ID: XXXXXXXX

Then:
# subscription-manager attach
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

(Does the above look familiar?)

 

Once that’s done, go to your RHN account & click on the VM you just ‘attached’ and pick ‘Attach a subscription’ and select your Virtualization Entitlement.


Once that’s done, issue:

for repo in rhel-6-server-rhevm-3.5-rpms \
            jb-eap-5-for-rhel-6-server-rpms \
            rhel-6-server-supplementary-rpms \
            jb-eap-6-for-rhel-6-server-rpms \
            rhel-6-server-rhevh-rpms ; do
    subscription-manager repos --enable=$repo ; sleep 1
done

THEN, you can install RHEV & the hypervisor (to get the ISOs):

yum -y install rhevm "rhev-hypervisor*"

Enjoy!

Linksys – EA9500 / AC5400 & hosting your own Websites

I have an EA9500 Smart Router and when I activated this and turned my ASUS RT-N66U into an Access Point, I found myself unable to access the websites I hosted myself.

Well, on Linksys routers there's a FEATURE, enabled by default, that stops this from working properly: it breaks NAT loopback, so clients inside your network can't reach your own public IP.

Here’s the symptom.

On the network at home, you can’t get to your website.

Pop over to your phone / LTE – there’s your site.

APACHE – permanent redirect to another server & port

I’m using CentOS 7.2 & the corresponding layout as seen here.

So, I have a few VMs that host sites.  I elected *not* to move on with AWS due to my very strained budget, and one of those sites lives on an Ubuntu VM running docker.
I kept that Ubuntu VM, but it can't share port 80 with the others since there's just a single Internet connection inbound, so I was forced to make changes.

Here’s what I did to get around it (mind you, none of this is actual):

Drop a new conf file into the /etc/httpd/sites-enabled/ directory:

# cat blog-toloughlin.conf

<VirtualHost *:80>
    ServerName blog.toloughlin.com
    ServerAlias blog.toloughlin.com
    RedirectPermanent / http://www.blog.toloughlin.com:81
    # optionally add a CustomLog directive for
    # logging the requests and doing some statistics
</VirtualHost>

Next time you visit that domain, it'll push the traffic back to port 81 (translated by your router).

Caveat: you’ll see :81 in your URL bar and some of your site may not work correctly (things coded to use the domain & no port numbers).

It's hacky, but it works … fairly well.

corrupted or tampered with during downloading ???

Well, I guess it’s common now to see this when trying to install OS X. My example happened when I tried to install El Capitan, fresh (no upgrade) on a newly formatted SSD – and had me scratching my bean.

I got this:
This copy of the Install OS X El Capitan application can't be verified. It may have been corrupted or tampered with during downloading

People have identified the need to set the clock back via Terminal, right before you install the OS after boot-up.

I checked my time & it was spot on (although it thought I was on the Left Coast, which I’m not).

I COULD have run the infamous date command (date MMDDHHmmYY), but elected not to.

I deleted the installer and downloaded El Capitan yet again. Guess what? It worked.

Here is what I’m thinking. If you download & set it aside for a while, you need to roll your clock back. If not, you’re good to go.

So if you don't have the luxury of downloading the OS again, check the installer's date stamp and set your clock to roughly a week after it; you should be all set.

So, if you see this (for example):
(screenshot: the installer's date stamp)

But it’s April, 2016 now … run:
date 0401101016
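For reference, that argument packs MMDDHHmmYY; a quick shell sketch to pull it apart:

```shell
# Decode a 'date MMDDHHmmYY' argument, two digits per field
ts=0401101016
printf 'month=%s day=%s hour=%s minute=%s year=20%s\n' \
  "$(echo $ts | cut -c1-2)" "$(echo $ts | cut -c3-4)" \
  "$(echo $ts | cut -c5-6)" "$(echo $ts | cut -c7-8)" \
  "$(echo $ts | cut -c9-10)"
# → month=04 day=01 hour=10 minute=10 year=2016
```

So 0401101016 means April 1, 10:10, 2016.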

Exit the Terminal App and try the install again.

MOSH – when you need to SSH and there’s intermittent connectivity problems

Read about it here: https://mosh.mit.edu/

I loaded it up on RHEL 7.2, and here’s the process that I went through …

Add prerequisite packages:
yum -y install git protobuf-c autoconf automake wget bzip2 gcc-c++ zlib-devel libutempter ncurses-devel openssl-devel net-tools

Run all of these commands:

PREFIX=$HOME
wget http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.bz2
tar -xf protobuf-2.4.1.tar.bz2
cd protobuf-2.4.1
./configure --prefix=$PREFIX
make
make install

export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/root/lib/pkgconfig

$ git clone https://github.com/mobile-shell/mosh
$ cd mosh
$ ./autogen.sh
$ ./configure
$ make
# make install

echo "export LD_LIBRARY_PATH=/root/lib" >> ~/.bashrc ; source ~/.bashrc

firewall-cmd --add-port=60000-61000/udp

(Add --permanent and run it again if you want the mosh port range to survive a reboot.)

Have you heard that RHEL is available ‘free’ for your Development Environment?

It sure is – woo hoo!

Dance on over to https://developer.redhat.com, sign up and accept their terms.

You can then download the latest ISO (7.2 at the time of this writing) and load it up on a server or VM. Make sure you select “Developer Tools” during the installation.

If you selected Basic (no GUI), you’ll need to run a few extra steps after installing, in order to get your yum updates.

First:

# subscription-manager register

Registering to: subscription.rhn.redhat.com:443/subscription
Username: your new shiny name
Password:
The system has been registered with ID: XXXXXXXX

Then:

# subscription-manager attach

 

Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

Finally:

# subscription-manager repos --enable=rhel-server-rhscl-7-rpms
# subscription-manager repos --enable=rhel-7-server-optional-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms

 



Now don't be a jerk and try to use it in production; all it takes is one support call and accidentally outing yourself to force your entire company into a licensing audit.  That won't be fun.

firewalld – allowing individual host access

So, you’re rolling out a new webserver and want only certain people to take a look at the content? Here’s how you do it.
CentOS 7.2 is the OS being used.

What zone are you in?
[root@blog-test ~]# firewall-cmd --get-default-zone
public

OK, let’s make a new zone:

firewall-cmd --permanent --new-zone=blog
systemctl reload firewalld

Now, let's add your IP & a friend's IP to start testing … given you're using apache & it's still on port 80:

firewall-cmd --permanent --zone=blog --add-source=YOUR_IP/32
firewall-cmd --permanent --zone=blog --add-source=FRIENDS_IP/32
firewall-cmd --permanent --zone=blog --add-port=80/tcp

NOTE:  If you are using that port in another zone, remove it from that other zone first, because it can’t be in 2 zones at once.

That’s all there is. Move along now.

 

Windows 7 – Can’t Check for Updates

So, I booted up a Win7 VM that hasn’t been online in 11 months — Windows Update won’t work!

Microsoft was nice enough to give me this message:

Windows Update Cannot Check For Updates, Because The Service Is Not Running

I tried letting Microsoft “fix it for me” from this page, but it didn’t work:
https://support.microsoft.com/en-us/kb/2730071

Here’s the fix.

Start -> type cmd
Right-click on cmd and click on: Run as administrator
Type the following lines, hitting enter after each one:

net stop wuauserv
cd %systemroot%
ren SoftwareDistribution SoftwareDistribution.bad
net start wuauserv

Launch Windows Update again – and — let the updates begin!

Need WordPress to send email, but you’re on Comcast?

Sending mail with Comcast as your ISP – this is on CentOS 7.2.

Install:
# yum install cyrus-sasl{,-plain}

Edit /etc/postfix/main.cf and insert the following below the other ‘relayhost’ references:
relayhost = [smtp.comcast.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/smtp_password
smtp_sasl_security_options =

Note: smtp_sasl_security_options = … is intentionally blank.

Edit:
/etc/postfix/smtp_password and insert:
[smtp.comcast.net]:587
username@comcast.net:password

Lock down the perms:
# chmod 600 /etc/postfix/smtp_password

Run:
postmap hash:/etc/postfix/smtp_password

Create a localhost-rewrite rule. This must be done, or else the Comcast SMTP server will reject your mail as coming from an invalid domain. Insert the following into:
/etc/postfix/sender_rewrite:
/^([^@]*)@.*$/ $1@<your_domain_here>.com
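That rule captures everything before the @ and swaps in your domain. You can sanity-check the pattern with sed before trusting it to postfix (yourdomain.com is a stand-in here):

```shell
# Same capture-and-rewrite idea as the postfix regexp table:
# ([^@]*) grabs the local part, and the whole domain is replaced.
echo 'root@localhost.localdomain' | sed -E 's/^([^@]*)@.*$/\1@yourdomain.com/'
# → root@yourdomain.com
```

If the sed version rewrites your test address correctly, the postfix table entry should too.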

Allow SELinux to accept apache’s access to send mail:
# setsebool -P httpd_can_sendmail 1

Restart postfix:
# systemctl restart postfix

Test. If it fails, tail /var/log/maillog!

** NEW INFO **
I had some troubles with this (mail still showing root@localhost in the maillog), so here are a few more steps if the above doesn't completely work.

vi /etc/postfix/sender_canonical

… and insert the following, to make "root" appear as the "wordpressuser" on outbound mail. The rewrite rule above should have handled this, but it wasn't being applied; note that the sender_rewrite table only takes effect if it's actually referenced from main.cf, which the steps above never did.

root wordpressuser@yourdomain.com

Create /etc/postfix/sender_canonical.db file
postmap hash:/etc/postfix/sender_canonical

Add the sender_canonical_maps setting to /etc/postfix/main.cf:
postconf -e "sender_canonical_maps=hash:/etc/postfix/sender_canonical"

Restart postfix:
# systemctl restart postfix
