Playing with AI Art Generators

Artificial Intelligence is escaping the bounds of purely technical applications such as code completion algorithms and more ambitious efforts like self-driving cars. The floodgates have opened on new creative-minded AI art generators that create art from something as simple as a short phrase. While there is some controversy over the technology being trained on the work of real-life human artists (borrowing? stealing?) and all the existential and legal debates that brings, the results are intriguing.

Regardless of where the chips fall on the “is this moral” and “is this legal” debate, it is clear the technology is fully out in the wild. In my opinion it is going to have a big impact on the creative community, hitting hardest on new artists trying to earn some extra cash on sites like Fiverr.

NightCafe.Studio AI-generated image for the phrase “Deep computer memory wiring tower view 3d”
One of my first attempts at AI art turned out OK – from NightCafe.Studio

Fixing Postfix After An Ubuntu Upgrade

As part of a set of security updates one of the servers in a cluster was upgraded from Ubuntu 16.04LTS to 20.04LTS. Everything seemed to be fine until we found that Postfix was no longer sending email via the Amazon AWS SES service. The past few hours have been spent fixing Postfix after an Ubuntu upgrade.

Ubuntu Changed How Postfix Works

Turns out this upgrade wasn’t completely innocuous when it comes to Postfix and the ability to send mail. It didn’t take long to uncover multiple errors in the server’s /var/log/mail.log file.

Name service error for type=MX

Turns out this is a bit of a red herring. Running through the Postfix configuration did uncover some oddities that needed to be fixed, however.

A local lookup shows there are no MX records for that host available from the DNS servers (substitute your own hostname):

host -t mx yourhost.example.com

Using IPv6 To Access Lightsail WordPress Instances


AWS Lightsail makes it easy to spin up new WordPress instances and get a completely custom site online in minutes. However, the default configuration can lead to dynamic IP addresses that are difficult to map to a domain name. The quick option is to assign the instance a static IP address, which doesn’t change, then point the DNS A record for your website URL to that static IP.

The problem is you get a maximum of five static IPs with your Lightsail account. However, there is another way – using IPv6 to access Lightsail.

The downside to this approach is that many ISPs still run subpar DNS services that do not support IPv6 connectivity. As such, an IPv6-only setup is still only truly viable for semi-private or private resources where you know your audience is on an IPv6-friendly ISP.

Using IPv6 To Access Lightsail

By default the Lightsail WordPress template by Bitnami enables both IPv4 and IPv6 networking.

With IPv6 the auto-assigned address for the instance never changes. It only goes away when you delete the instance, or may change if you disable IPv6 and then re-enable it (don’t do this). IPv6 addresses are also always public, which means you don’t have a separate “internal IP for the network” and “published public IP” that are different.

The nice thing about IPv6 addresses with AWS Lightsail is that you get as many as you want with no limit. Every single resource in your AWS cloud presence can have its own IPv6 address.

How do you point your domain name to the Lightsail instance using IPv6?

Enter the AAAA record – this has been part of DNS for years specifically to support IPv6 traffic. Any respectable domain hosting provider will support AAAA records. To get your web traffic going to your Lightsail instance, click on the instance you started and go to the Networking tab. Copy the IPv6 address listed there. Then go to your DNS provider and create a new hostname entry with an AAAA record.
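In raw zone-file terms the entry is a single AAAA record mapping the hostname to the instance’s IPv6 address. Here is a sketch with a made-up hostname and address; swap in your own values:

```
; Hypothetical example: replace the name and address with your own values
www.example.com.    300    IN    AAAA    2600:1f18:1234:5678::1
```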

Using the right DNS provider

Even today, more than 5 years after the IETF ratified IPv6 as an Internet Standard in 2017, you’ll find plenty of DNS providers that do not properly support IPv6 and AAAA records. My current ISP, WoW, is one of those providers.

Using IPv6 to access Lightsail may require overriding your ISP’s DNS provider
Overriding the default DNS provider with OpenDNS on a MacBook

You can check your current service provider by using the IPv6 test tool listed in resources below. If you need to change your DNS provider, as I did, you have two ways to do it.

The first method – changing network settings on your device – means you’ll have IPv6 access whenever you are using that device.

If you want IPv6 throughout your home you can manually override the DNS provider on your router. This means every device in your home will have IPv6 connectivity as long as it connects through your router. If you take a mobile device or laptop to another network, the WiFi at your local coffee shop for instance, you will lose access to IPv6-only web content. Thankfully (or sadly) IPv6 is still so poorly supported in the United States that nearly every major online presence still operates over a horribly outdated IPv4 connection (as I do on this blog).

Configuring a router to use OpenDNS

Personally, I set up my mobile devices to use IPv6 by overriding the default network settings to use OpenDNS, while also setting my home router to use IPv6. This means I don’t have to configure the dozen-plus devices at home, like the PlayStation or FireTV, and I still have IPv6 on my laptop when I’m at another location.

If you find you have to use a service like OpenDNS to get IPv6 access, please send your ISP or DNS provider a note asking them to fully implement IPv6.

Using the right router

Not all routers are created equal. Many older, heck even some newer, routers do not properly support IPv6. We found that out when configuring the DNS provider at a shared workspace, where a high-end Nighthawk router by Netgear would not pass through IPv6 traffic. That router always converted IPv6 addresses to IPv4 or required the upstream pass-through routers to provide the IPv6 configuration information. Neither was an option.

What routers work? Here is what is being used at one location. If you know of others, please share.

Using the right browser

The latest version of Firefox has IPv6 DNS lookups enabled by default.

Getting Your SSL Cert

IPv6 will require you to use proper HTTPS certs. Add one to your Lightsail instance after setting up your record pointers.

Log in via SSH from the Lightsail console, then type:

sudo /opt/bitnami/bncert-tool

Awwww… snap… that does not work with IPv6. See this post about it.

The easiest workaround for the Bitnami IPv6 SSL cert tool’s shortcoming is to point an A record at your temporary IPv4 address. Once that is live, usually within 5 minutes under standard TTL propagation rules, you can re-run the cert tool.

Once your site has the SSL certificate it will work for the IPv6 address. Certs are bound to domain names, not IP addresses; the IP address is only used during the certificate generation process to validate ownership.

Related Resources

Enabling and disabling IPv6 for your resources in Amazon Lightsail

  • IPv6 is enabled by default for Lightsail instances, container services, CDN distributions, and load balancers.
  • The IPv6 address for an instance persists when you stop and start your instance. It’s released only when you delete your instance, or disable IPv6 for your instance.

Test Your IPv6 Connectivity

  • This service helps check IPv6 connectivity from your current device and the ISP providing your Internet access at your current location.
  • Many users, especially in the United States, are still dealing with half-implemented IPv6 services, typically through a misconfigured 6to4 service or a DNS service that does not support IPv6 (hello WoW… we are way past 1999 here).

Cisco OpenDNS with IPv6 Support

  • Change your DNS provider on your router or on your device by overriding the default network settings and get full access to IPv6 sites.
  • The OpenDNS servers are at

IPv6 on Wikipedia

  • IPv6 was created by the IETF to deal with IPv4 address exhaustion
  • IPv6 is meant to replace IPv4
  • IPv6 was introduced in 1998
  • IPv6 became an Internet Standard in 2017
  • IPv6 uses 128-bit addresses, resulting in about 3.4 × 10^38 possible addresses
    • IPv4 uses 32-bit addresses, providing roughly 4.3 billion addresses
    • All 4.3 billion addresses were allocated and in use by April 2011
    • Many tricks, like the sharing of IP addresses, have allowed IPv4 to live on for another decade, but they cause lots of problems for security and performance
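The jump from 32 to 128 bits is easy to underestimate. You can check the raw numbers yourself; this quick sketch uses a python3 one-liner for the IPv6 count, since 2^128 overflows native shell arithmetic:

```shell
# IPv4: the 32-bit address space fits in 64-bit shell arithmetic
echo $((1 << 32))                      # 4294967296, about 4.3 billion

# IPv6: the 128-bit address space needs arbitrary precision
python3 -c 'print(f"{2**128:.1e}")'    # about 3.4e+38
```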

Removing Already Committed gitignore Files

Already committed files to your git repository that you now realize should not be part of the project? This is a common issue as projects grow from concept to ongoing development. In some cases it is nothing more than a “janitorial” concern. In other cases, such as including a node_modules directory in a Node-based project, it can lead to larger repositories that slow down the setup and tear-down phases of a development cycle.

The biggest issue comes into play if a developer happens to include private keys or passwords in a file that got pushed to a public repository; in this case removing the file is a must. The steps below will help start the mitigation process to ensure that doesn’t happen on future commits.

Thankfully there is an easy way to remove the files and stop them from being committed on future updates.

First – update your .gitignore file and ensure the file you want to skip is on the list. If it is a single file use the full filename. The file name needs to be fully qualified as a relative path based on where your .gitignore file lives.

The example below assumes the file resides in the same directory as the .gitignore file. The file name is the same as what you’d enter in the .gitignore file.

Then remove the file from the git index. This stops tracking the file while leaving your local copy on disk untouched:

git rm --cached notthisfile.txt
git commit -m "Stop tracking notthisfile.txt"
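Whether you are un-tracking a single file or a whole directory, you can sanity-check the result in a throwaway repository. The sketch below uses hypothetical file names and a temp directory, and assumes git is installed:

```shell
# Throwaway repo to demonstrate un-tracking an already committed file
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email "demo@example.com" && git config user.name "Demo"

echo "data" > notthisfile.txt
git add notthisfile.txt && git commit -qm "oops: committed file"

# The fix: list the file in .gitignore, then drop it from the index only
echo "notthisfile.txt" > .gitignore
git rm -q --cached notthisfile.txt
git add .gitignore && git commit -qm "stop tracking notthisfile.txt"

git check-ignore notthisfile.txt   # prints the path: git now ignores it
ls notthisfile.txt                 # still on disk, just untracked
```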

Removing a file in a subdirectory

In this example you’d add mysubdir/notthisfile.txt to the .gitignore file as well.

git rm --cached mysubdir/notthisfile.txt
git commit -m "Stop tracking mysubdir/notthisfile.txt"

Removing Already Committed gitignore Directories

This example shows how to stop tracking an entire build directory using git CLI commands.

git rm -r --cached build/
git commit -m "Stop tracking build/"

git Documentation

Looking for more git commands to manage your repositories? Go directly to the official git SCM guide or follow my other git tips & tricks posts.

Have more git tips and tricks? Share them in the comments.


Aurora MySQL Storage Monitoring & Reduction

Using Aurora MySQL is a great way to host a MySQL database at low cost while providing a scalable and fault-tolerant environment. Once you’ve set up your multi-zone cluster, AWS takes care of much of the scaling for the database server. One of the components it manages automatically is the volume, or disk storage, up to a maximum of 128TB. However, this does not mean volume storage is fully managed.

All AWS RDS implementations have several components that can impact the monthly billing. While things like CPU and memory are based on the workload and often require application restructuring to reduce consumption, disk space — or volume storage — can often be reduced with routine maintenance.

Implementing routine maintenance to reduce volume consumption should be part of a monthly health-check for your AWS services.

Checking The Aurora MySQL Volume Size

The easiest way to check the amount of storage being used by the nodes in the data cluster is to use CloudWatch. You can follow this AWS article on How To View Storage to view the volume sizes. For monthly reviews, add a Number widget for this metric to your main CloudWatch dashboard.

The Store Locator Plus® CloudWatch dashboard.

Checking MySQL Database and Table Sizes

You can check table sizes with this MySQL command:

SELECT table_name, `Size MB`
FROM   (SELECT table_name,
               ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS `Size MB`
        FROM   information_schema.tables
        GROUP  BY table_name) AS tmp_table
ORDER  BY `Size MB` DESC;

Checking database sizes in MySQL can be done with this command:

SELECT table_schema, `Size MB`
FROM   (SELECT TABLE_SCHEMA AS table_schema,
               ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS `Size MB`
        FROM   information_schema.tables
        GROUP  BY TABLE_SCHEMA) AS tmp_table
ORDER  BY `Size MB` DESC;
Our database sizes

Additional File & Table Size Commands

Temp files in Aurora MySQL

SELECT file_name, ROUND(SUM(total_extents * extent_size)/1024/1024/1024, 2) AS "TableSizeinGB" FROM information_schema.files GROUP BY file_name;

Table and fragmented space (GB)

SELECT table_schema AS "DB_NAME",
       SUM(size) AS "DB_SIZE",
       SUM(fragmented_space) AS APPROXIMATED_FRAGMENTED_SPACE_GB
FROM   (SELECT table_schema,
               table_name,
               ROUND((data_length + index_length + data_free)/1024/1024/1024, 2) AS size,
               ROUND((data_length - (AVG_ROW_LENGTH * TABLE_ROWS))/1024/1024/1024, 2) AS fragmented_space
        FROM   information_schema.tables
        WHERE  table_type = 'BASE TABLE'
          AND  table_schema NOT IN ('performance_schema', 'mysql', 'information_schema')
       ) AS TEMP
GROUP  BY DB_NAME
ORDER  BY APPROXIMATED_FRAGMENTED_SPACE_GB DESC;

Reducing Volume Size

Yes, Amazon did add automatic volume reduction to the Aurora MySQL and PostgreSQL implementations a couple of years ago, but there are some caveats. First, deleting records from a table does not release the disk space; MySQL keeps that space for future inserts and updates. Second, OPTIMIZE TABLE should help, but it temporarily increases volume usage as it copies the table during the process. Also, InnoDB tables (the default unless overridden) should be auto-optimized, so you won’t gain much, if anything, from this step.

This Stack Overflow post has some great details about MySQL and volume storage.

Dropping unused tables will reduce volume size IF innodb_file_per_table is on. It should be for default Aurora MySQL instances. You can check that setting with this MySQL command.

show variables like "innodb_file_per_table";

Cloning a database with dump/restore and dropping the original will reduce volume size and may be the only way to “reclaim” space if a lot of records have been deleted. You’ll double volume use during the operation, but once the original source database is removed you should see a reduction in space.
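The dump/restore cycle itself can be sketched with the standard mysql client tools. The hostnames, user, and database name below are hypothetical, and a guard variable keeps the sketch from running against a real server by accident:

```shell
# Hypothetical endpoints and names; set RUN=1 to actually execute
RUN=${RUN:-0}
DB="mydb"
SRC="old-cluster.cluster-abc.us-east-1.rds.amazonaws.com"
DST="new-cluster.cluster-def.us-east-1.rds.amazonaws.com"

if [ "$RUN" = "1" ]; then
  # Dump the source database, then create and load the clone
  mysqldump -h "$SRC" -u admin -p --single-transaction "$DB" > "$DB.sql"
  mysql -h "$DST" -u admin -p -e "CREATE DATABASE \`$DB\`"
  mysql -h "$DST" -u admin -p "$DB" < "$DB.sql"
else
  echo "dry run: would clone $DB from $SRC to $DST"
fi
```

Once the clone is verified you would drop the original database to release the volume space.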

You may be interested in following this AWS article on performance and scaling for Aurora DB clusters as well.

Or this article from AWS on troubleshooting storage space.
