Playing with AI Art Generators

Artificial Intelligence is escaping the bounds of purely technical implementations such as code completion algorithms and more ambitious goals like self-driving cars. The floodgates have opened on a new wave of creative-minded AI art generators that produce art from something as simple as a short phrase. While there is some controversy over the technology being trained on the work of real-life human artists (borrowing? stealing?) and all the existential and legal debates that brings, the results are intriguing.

Regardless of where the chips fall on the “is this moral” and “is this legal” debates, it is clear the technology is fully out in the wild. In my opinion it is going to have a big impact on the creative community, hitting hardest the new artists trying to earn some extra cash on sites like Fiverr.

One of my first attempts at AI art turned out OK – an image generated by NightCafe.Studio from the phrase “Deep computer memory wiring tower view 3d”.

Fixing Postfix After An Ubuntu Upgrade

As part of a set of security updates, one of the servers in a cluster was upgraded from Ubuntu 16.04 LTS to 20.04 LTS. Everything seemed to be fine until we found that Postfix was no longer sending email via the Amazon AWS SES service. The past few hours have been spent fixing Postfix after an Ubuntu upgrade.

Ubuntu Changed How Postfix Works

Turns out this upgrade wasn’t completely innocuous when it comes to Postfix and the ability to send mail. It didn’t take long to uncover multiple errors in the server’s /var/log/mail.log file.

Name service error for name=email-smtp.us-east-1.amazonaws.com type=MX

It turns out this is a bit of a red herring. Running through the Postfix configuration did, however, uncover some oddities that needed to be fixed.

A local lookup shows there are no MX records for that host available from the DNS servers.

host -t mx email-smtp.us-east-1.amazonaws.com
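That is expected; the SES SMTP endpoint does not publish MX records, so Postfix should not be attempting an MX lookup on it in the first place. One common way to force that behavior (a sketch of a likely fix, not necessarily the exact change made on this server) is to wrap the relayhost value in square brackets in /etc/postfix/main.cf, which tells Postfix to skip the MX lookup and connect to the host directly:

# /etc/postfix/main.cf (sketch; adjust the endpoint and port for your SES region and setup)
# The square brackets tell Postfix to use the A record and skip the MX lookup.
relayhost = [email-smtp.us-east-1.amazonaws.com]:587

Reload Postfix after editing main.cf (for example, systemctl reload postfix) and watch /var/log/mail.log for any remaining errors.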

Removing Already Committed gitignore Files

Already committed files to your git repository that you now realize should not be part of the project? This is a common issue as projects grow from concept to ongoing development. In some cases it is nothing more than a “janitorial” concern. In other cases, such as including the node_modules directory for a Node-based project, it can lead to larger repositories that slow down the setup and teardown phases of a development cycle.

The biggest issue comes into play if a developer happens to include private keys or passwords in a file that gets pushed to a public repository; in this case removing the file is a must. The steps below start the mitigation process and ensure the file is not included in future commits. Keep in mind they do not scrub the file from the repository’s history, so any leaked credentials should still be rotated.

Thankfully there is an easy way to remove the files and stop them from being committed on future updates.

First, update your .gitignore file and ensure the file you want to skip is on the list. If it is a single file, use the full file name, written as a path relative to the directory where your .gitignore file lives.

The example below assumes the file resides in the same directory as the .gitignore file. The file name is the same as what you’d enter in the .gitignore file.
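For the notthisfile.txt example used below, the .gitignore entry is simply the file name on its own line:

# .gitignore
notthisfile.txt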

Then update the git cache and index for that file.

git rm --cached notthisfile.txt
git update-index --skip-worktree notthisfile.txt
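Once the index has been updated, commit the change so the removal is recorded in the repository (the commit message is just an example):

git commit -m "Stop tracking notthisfile.txt"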

Removing a file in a subdirectory

In this example you’d also add mysubdir/notthisfile.txt to the .gitignore file.

git rm --cached mysubdir/notthisfile.txt
git update-index --skip-worktree mysubdir/notthisfile.txt

Removing Already Committed gitignore Directories

This example shows how to remove an entire build directory using git version 2.35+ CLI commands.

git rm -r --cached build/
git update-index --skip-worktree build/
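To confirm the new .gitignore rule actually matches the paths you expect, git’s check-ignore command will report the matching rule; the file name below is a hypothetical example:

git check-ignore -v build/output.js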

git Documentation

Looking for more git commands to manage your repositories? Go directly to the official git SCM guide or follow my other git tips & tricks posts.

Have more git tips and tricks? Share them in the comments.


Aurora MySQL Storage Monitoring & Reduction

Using Aurora MySQL is a great way to host a MySQL database at low cost while providing a scalable and fault tolerant environment. Once you’ve set up your multi-zone cluster, AWS takes care of much of the scaling for the database server. One of the components it manages automatically is the volume, or disk storage, up to a maximum of 128 TB. However, this does not mean volume storage is fully managed.

All AWS RDS implementations have several components that can impact the monthly billing. While things like CPU and memory are based on the workload and often require application restructuring to reduce consumption, disk space — or volume storage — can often be reduced with routine maintenance.

Implementing routine maintenance to reduce volume consumption should be part of a monthly health-check for your AWS services.

Checking The Aurora MySQL Volume Size

The easiest way to check the amount of storage being used by the nodes in the data cluster is CloudWatch. You can follow this AWS article on How To View Storage to view the volume sizes. For monthly reviews, add a Number widget for this metric to your main CloudWatch dashboard.

The Store Locator Plus® CloudWatch dashboard.
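If you prefer the command line over the dashboard, the same VolumeBytesUsed metric can be pulled with the AWS CLI. This is only a sketch; the cluster identifier, time range, and dimension name are assumptions you will need to adjust for your own cluster:

# Sketch: report the maximum Aurora volume usage (in bytes) per hour over one day.
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name VolumeBytesUsed \
  --dimensions Name=DBClusterIdentifier,Value=my-aurora-cluster \
  --start-time 2022-08-01T00:00:00Z \
  --end-time 2022-08-02T00:00:00Z \
  --period 3600 \
  --statistics Maximum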

Checking MySQL Database and Table Sizes

You can check table sizes with this MySQL command:

SELECT *
FROM   (SELECT table_name, TABLE_SCHEMA, ENGINE,
           ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS `Size MB`
        FROM   information_schema.tables
        GROUP  BY table_name, TABLE_SCHEMA, ENGINE) AS tmp_table
ORDER  BY `Size MB` DESC;

Checking database sizes in MySQL can be done with this command:

SELECT *
FROM   (SELECT TABLE_SCHEMA,
           ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS `Size MB`
        FROM   information_schema.tables
        GROUP  BY TABLE_SCHEMA) AS tmp_table
ORDER  BY `Size MB` DESC;
Our database sizes

Additional File & Table Size Commands

Temp files in Aurora MySQL

SELECT file_name,
       ROUND(SUM(total_extents * extent_size) / 1024 / 1024 / 1024, 2) AS "TableSizeinGB"
FROM   information_schema.files
GROUP  BY file_name;

Table and fragmented space (GB)

SELECT table_schema AS "DB_NAME",
       SUM(size) AS "DB_SIZE",
       SUM(fragmented_space) AS APPROXIMATED_FRAGMENTED_SPACE_GB
FROM   (SELECT table_schema, table_name,
               ROUND((data_length + index_length + data_free) / 1024 / 1024 / 1024, 2) AS size,
               ROUND((data_length - (AVG_ROW_LENGTH * TABLE_ROWS)) / 1024 / 1024 / 1024, 2) AS fragmented_space
        FROM   information_schema.tables
        WHERE  table_type = 'BASE TABLE'
          AND  table_schema NOT IN ('performance_schema', 'mysql', 'information_schema')
       ) AS TEMP
GROUP  BY DB_NAME
ORDER  BY APPROXIMATED_FRAGMENTED_SPACE_GB DESC;

Reducing Volume Size

Yes, Amazon did add automatic volume reduction to the Aurora MySQL and PostgreSQL implementations a couple of years ago, but there are some caveats. First, deleting records from a table does not release the disk space; MySQL keeps that space for future inserts and updates. Second, OPTIMIZE TABLE should help, but it temporarily increases volume usage because it copies the table during the process. Also, InnoDB tables (the default unless overridden) are largely self-optimizing, so you won’t gain much, if anything, from this step.
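If you do want to try it on a specific table anyway, it is a single statement; on InnoDB the optimize is carried out as a rebuild of the table (the table name below is just a placeholder):

OPTIMIZE TABLE wp_posts;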

This Stack Overflow post has some great details about MySQL and volume storage.

Dropping unused tables will reduce volume size IF innodb_file_per_table is on. It should be for default Aurora MySQL instances. You can check that setting with this MySQL command.

show variables like "innodb_file_per_table";

Cloning a database with dump/restore and dropping the original will reduce volume size and may be the only option to “reclaim” space if a lot of records have been dropped. You’ll be doubling volume use during the operation, but once the original source database is removed you should see a reduction in space.
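A minimal sketch of that dump-and-restore clone, assuming a database named wp_db_name and replacing the cluster endpoint and user with your own (all names here are placeholders):

# Dump the source database to a file (placeholder endpoint, user, and database names).
mysqldump -h my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com -u admin -p wp_db_name > wp_db_name.sql
# Create a fresh schema and load the dump into it.
mysql -h my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com -u admin -p -e "CREATE DATABASE wp_db_name_copy"
mysql -h my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com -u admin -p wp_db_name_copy < wp_db_name.sql
# After verifying the copy, drop the original to release its space.
mysql -h my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com -u admin -p -e "DROP DATABASE wp_db_name"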

You may be interested in following this AWS article on performance and scaling for Aurora DB clusters as well.

Or this article from AWS on troubleshooting storage space.

Testing MySQL Connectivity From WordPress Docker Containers


Moving forward with feature development for the Store Locator Plus® SaaS service also means moving forward with our tool kits and environments. Docker containers have become a bigger part of our development toolkit. They allow rapid deployment of isolated environments for testing various concepts without the overhead of virtual machines. While we have worked with VirtualBox virtual machines for years, Docker is far faster to spin up and far less resource intensive.

Along the way we have found the need to take deeper dives into the interplay of Docker, WordPress, and MySQL. The latest endeavor: discovering why our WordPress container is not communicating with our AWS-hosted Aurora MySQL test data set.

While we can connect to our Aurora MySQL database using the credentials in our docker-compose YAML file, the WordPress instance running in Docker on localhost is throwing connectivity errors.

Assumptions

For this article we assume a rudimentary knowledge of Docker, a baseline container with WordPress installed (likely from the default WordPress image), and a GUI interface such as Docker Desktop for managing instances and providing quick access to the container’s shell command line.

We also assume you have a docker-compose.yaml config file, used to start up the container, that looks similar to this:

version: '3'

services:

  wp:
    image: wordpress:6.0.1-php7.4-apache
    container_name: wp-slpsaas
    platform: linux/amd64
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: 'some_rds_instance.c0glwpjjxt7q.us-east-1.rds.amazonaws.com'
      WORDPRESS_DB_USER: wp_db_user
      WORDPRESS_DB_PASSWORD: '9aDDwTdJenny8675309'
      WORDPRESS_DB_NAME: wp_db_name

And you’ve started the container either via Docker Desktop or the command line with
docker-compose up -d
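Before digging into the container itself, it can help to confirm those environment variables actually made it into the running instance; wp-slpsaas is the container_name from the compose file above:

docker exec wp-slpsaas printenv | grep WORDPRESS_DB_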

Installing MySQL Client On Docker WordPress Containers

Connect to the container’s shell after the container has started. The easiest way to do this is with Docker Desktop, clicking the shell icon for the instance.

Our SLP SaaS container running WordPress in the Docker Desktop. The red arrow is the icon to connect to the container’s shell command line.

By default the WordPress Docker image does not contain very many tools. That is to be expected, as the whole idea behind containers is a lightweight environment. As such we’ll need to install the MySQL client with apt:

apt-get update
apt-get install default-mysql-client

Checking Connectivity To The WordPress Database

With the MySQL client installed we can now test connectivity from within the container to our AWS database directly. This ensures our network configuration and docker-compose environment variables are set properly to communicate with the database.

mysql -A -h $WORDPRESS_DB_HOST -u $WORDPRESS_DB_USER --password=$WORDPRESS_DB_PASSWORD $WORDPRESS_DB_NAME

mysql> show tables;

If you get a list of WordPress tables the connectivity is working.
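The same check can also be run non-interactively from the container shell, which is handy for a quick pass/fail test; this simply uses the MySQL client’s -e option with the same environment variables:

mysql -A -h $WORDPRESS_DB_HOST -u $WORDPRESS_DB_USER --password=$WORDPRESS_DB_PASSWORD $WORDPRESS_DB_NAME -e "SHOW TABLES;"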
