The Job Interview

(This is an old joke I remember from childhood.)

A guy was in a job interview, answering every question correctly and showing an impressive level of knowledge and skill. After nearly an hour of tough questions, the interviewer said, “You seem like a great fit, but I see there was a 7 year gap since your last job. What happened there?”

Guy: “Oh, I went to Yale”.

Employer: “Oh great! Well, you’re hired, and you start Monday. What did you say your name was again?”

Guy: “Yim Yohnson.”

The 80/20 Rule Is Going To Ruin Your Life

I’ve touched on this before, but it bears elaboration.

With the rise of AI, which most people call “Artificial Intelligence” but I call “Artificial Ignorance”, you are absolutely going to have problems if you are not normal, or are in the minority in any particular situation.

This is because AI systems use large databases to generate “rules” or guidelines about the world. Because they’re based on math and averages, there will always be a margin of error.

Let’s go with some fake statistics that I made up for the sake of this post.

  • 88% of people in Michigan like vanilla ice cream.
  • 91% of people in Michigan follow professional hockey.
  • 78% of people in Michigan own a pair of skis.

Well, if you don’t like vanilla ice cream, don’t follow hockey, and don’t own skis, it might be pretty safe to assume you’re not from Michigan. After all, if those traits were independent, only about 0.2% of people in Michigan would fail all three criteria (0.12 × 0.09 × 0.22 ≈ 0.24%), so it’s “safe” to assume that anyone who does is a faker.

When these 80/20-rule-type things start making decisions about your life, you are absolutely going to run into problems. Why?

  • Nobody is completely, entirely, 100% normal. Everyone is unique in their own little way. You will inevitably be in a situation where you don’t match the math that the computer has decided is correct.
  • As Google has shown, companies can get away with having almost exactly zero support for their products, so there will probably not be a Human you can talk to to solve your problem.
  • If there ARE Humans available for support, they probably won’t have the power to fix your problem, or will be overwhelmed with the 20,000 other people (0.2% of the population of Michigan) who have been miscategorized and need to have their issues fixed.

The AI apocalypse will not be robots shooting Humans. It will be dumb computers denying us food and housing because we don’t match their badly-calculated templates.

Google Is Not Very Smart (or Incredibly Smart)

To the layperson, Google might seem like the smartest company in the world.

Once you understand technology, it’s obvious that they’re not very smart. Or, depending on how you look at it, that they’re incredible geniuses.

If you’ve spent more than a few years building websites, you know very well the types of ignorant, robot-stupid mistakes that Google can make. An example from this very blog: I spent quite a few years working on multi-user dungeons (MUDs for short). There are a lot of posts on the subject, so Google’s ad system has decided that one of the main topics of this blog is digging holes in wet soil, and it’s not uncommon to see ads for excavation equipment on this site.

Google is PHENOMENALLY good at miscategorizing things. If there are two meanings to a phrase or topic, and 80% of the discussions in the world focus on the more popular meaning and 20% are about the less-popular meaning, the less-popular topic will be buried in the noise, because the systems will assume you mean “computer keyboards” when you meant “music keyboards”.

Combine this with what I call “computational laziness”, and it means that if your site happens to be categorized in a certain manner, you’re basically stuck there. Google doesn’t put much effort into revising its categorization models or re-analyzing sites based on new information.

What does that mean for web developers? Well, for a newer site, if Google puts you in a place you don’t want to be, you’re probably better off starting over from a different angle.

For people searching the web, it’s a little more complicated. Google is NOT designed to be the best search engine on the planet. Now that the search engine wars are over, the design has changed to focus on revenue maximization. What does that mean?

It means that Google as a search engine is designed to show you just-barely-adequate results that kind-of-but-not-really satisfy the question you were asking. It’s designed to be mostly-accurate but slightly frustrating, so that you are tempted to click on ads that seem likely to answer your question.

In a perfect world, a search engine would give you exactly the information you were looking for, as quickly and as accurately as possible. In THIS world, search engines are designed to give you the plausibly-relevant information that will benefit the search engine the most.

In a world where the profit motive rules over everything, product quality must necessarily suffer for the purpose of maximizing margins.

Search is something most of the people in the world use pretty much daily, so when does it make sense for it to become a public utility? It’s an interesting question worth pondering, but the mathematics and economics are far too complicated for a sound-bite answer.

The number of websites in existence has been relatively flat since 2017, not growing any faster than the world’s population, but processing power has effectively tripled. If Moore’s Law were still in effect, it would have grown 16x, but that ship has sailed. What this means is that, even though sites today have many more pages and much more data on average than six years ago, the ability to organize that information has grown faster than the actual quantity of information.

Google is not special. They’re just another business. And, with their original core patents expired or expiring soon, there’s a lot of room to build something of higher quality at lower cost. Given the level of mind-share they have (as a quick look at Bing’s market share will confirm), is it worth it to build a competitor?

It depends.

Blocking Twitter and Other Nuisance Sites with BlockSite

It’s very common for news articles to contain links to Twitter. Some lazy writers even create articles that are nothing but a page of Twitter embeds.

I think Twitter is trash and never want to visit the site, and sometimes it’s not obvious that a link leads to Twitter, especially with the widespread use of URL shorteners.

I found a quick solution that works well via a plugin called BlockSite. I use both Firefox and Chrome, and it’s available for both.

BlockSite for Chrome

BlockSite for Firefox

There’s a lot more to it than just blocking garbage sites like Twitter, including things like setting up distraction-free hours to block social media during your workday, so check out their website for more info.

Here’s an example of the plugin preventing a visit:

BlockSite Plugin Preventing Twitter Access


PostgreSQL Command History

The command-line PostgreSQL database client keeps a command history.

Much like on a Linux console, you can use the up arrow to scroll back through previous commands. This can be handy if you want to repeat a command, or to change a parameter on a long query.

One other thing that is less widely-known is that you can view the ENTIRE history by typing “\s”. Backslash-s will function pretty much the same as the Linux “history” command, and can give you multiple pages of commands.
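
If you want to keep a copy of your history outside of psql, “\s” also accepts a filename argument, which writes the history to that file instead of printing it to the screen. The filename here is just an example:

 \s query_history.txt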

Much like the history of the Bash shell, the Postgres command history is stored in your home directory in the hidden file .psql_history. Although this is handy, it can also be a security risk to keep old commands where prying eyes can find them. You can easily delete this file, or clear it using the command:

 truncate -s 0 .psql_history

I like the truncate command because it’s a nice, clean, one-step way to clear a file without deleting it.

Full Support for Ukraine

I don’t say much about politics here, because on the internet nobody cares and nobody’s opinion is important.

BUT

I am a US citizen. I am 100% in favor of military support for Ukraine. I don’t care how much it costs, and I’m totally OK with my taxes going up if need be. Russia is a terrorist state and must be defeated, no matter the cost. That includes nukes.

Russia is the aggressor. They were not provoked. The war in Ukraine is nothing short of imperialist ambition. The idea that “We were great and will be again” has killed tens of thousands of people, and is nonsense at its core. There will never be another Roman Empire. Give up.

Russia must be defeated, dismantled, and demilitarized. It’s the only path forward if you want world peace.

If you visit any of my websites or consume any of my content, you should know that a not-exactly-specified (because it varies depending on my monthly budget) amount of my earnings is going toward supporting Ukraine.

Russia has nothing to offer this world until and unless it decides to withdraw from Ukraine and give back all that it has stolen. Anything short of this requires the complete destruction of the Russian “empire”. I use quotes because the Russian “empire” is a has-been and its military is a sad joke.

How to Insert Values Into a Table from a Select Query in PostgreSQL

I recently had to do a data migration where I needed to populate a new table with some data from another table.

Rather than perform some sort of export, transform, and import, it was much easier to do this via a select query within Postgres itself.

Although you can use the star operator, it’s safer and smarter to name fields specifically, in order. In addition, you should manually verify your select query first.

In my case, there was a set of fields I wanted to insert, which, converted into a select query, looked like this:

SELECT date_added, description, last_updated, name, type FROM information;

This gave me all of the fields I wanted, and I verified that the data looked good.

Next, I added this to an insert query:

INSERT INTO information_subset (date_added, description, last_updated, name, type) SELECT date_added, description, last_updated, name, type FROM information;
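
If you only want to copy a subset of the rows, the same pattern works with a WHERE clause on the select. The filter value here is just a hypothetical example using the same tables and columns as above:

INSERT INTO information_subset (date_added, description, last_updated, name, type) SELECT date_added, description, last_updated, name, type FROM information WHERE type = 'article';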

Although it’s something that seems like it might be complex, inserting into one table using data from another is quite simple in Postgres and most other databases.

Using Location to Assume User Language

My first language is English. I live in Uruguay, and while I’m comfortable and nearly fluent in Spanish, sometimes I’d rather use a website in English.

It is EXTREMELY common for a website to see the location of my IP address and assume they should send me a page in Spanish. This is perfectly fine, because everyone in Uruguay speaks Spanish. Not everyone has it as their first (or preferred) language given the number of immigrants here (especially from Brazil), but everyone speaks at least SOME Spanish here.

The other thing that sites may use is the language setting of your web browser. This is also a reasonable approach, although less common. I have one computer set to English and one set to Spanish, and I’ll sometimes get different versions of a site depending on which computer I use. This is mostly OK, but there are an unreasonable number of computers that have the language set to English (the most common default) even though the users don’t prefer English. There are also computers used by multi-lingual households, so using the computer’s language setting is an assumption at best, and probably less accurate than using someone’s location.
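
For what it’s worth, reading the browser’s preference is not complicated. Here’s a minimal sketch in Python of picking a page language from the Accept-Language header that browsers send; the header value, the supported-language set, and the function name are all made up for illustration, and a real site would pull the header from the incoming request (and would also respect the q-value weights):

def pick_language(accept_language, supported, default="en"):
    # Walk the comma-separated entries, e.g. "es-UY,es;q=0.9,en;q=0.8",
    # and return the first language the site actually supports.
    for part in accept_language.split(","):
        lang = part.split(";")[0].strip().lower()  # drop any ";q=" weight
        primary = lang.split("-")[0]               # "es-UY" -> "es"
        if primary in supported:
            return primary
    return default

print(pick_language("es-UY,es;q=0.9,en;q=0.8", {"en", "es", "pt"}))  # prints "es"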

Of course, the right thing to do after you’ve made an assumption about the user’s language is to allow them to change it. Typically this is a link or button somewhere along the top of the site. Although sometimes this will be a flag icon, that’s less than ideal when you’re dealing with countries that have multiple languages (Belgium, for example). People from those countries have learned to adjust, and can pick the flag for France, the Netherlands, or even Germany if they need to.

The problem comes when a website doesn’t give someone the chance to change the language they’re viewing. Not only are geolocation services not 100% accurate, but just because you are IN a place doesn’t mean you are OF a place.

Sure, auto-translation tools are nice, and they get you 80% of the way there, but they rarely understand context, words with multiple meanings, regional word usage, or any of the other Human nuances. Maybe some day, but not today. So it’s important to allow a user to select their preferred language.

Sometimes it’s easy to change the language by hand-editing the URL. For example, these URLs:

es.wikipedia.org/page-name
example.com/es/page-name
example.com/page-name?lang=es

Can easily be changed to:

en.wikipedia.org/page-name
example.com/en/page-name
example.com/page-name?lang=en

Where it gets really ugly, and I’ve seen this with media/streaming sites more than anything else, is when you deliberately visit a page in a specific language, and you are force-redirected back to the language you didn’t want.

The only hope for sites like that is to use a proxy server. Or close the browser tab. You should never force a language on a visitor if your site is available in other languages.

And if you find a site with this bad behavior, please send them a message letting them know the error of their ways. Maybe they’ll eventually learn.

MusicSrch Improvements

It’s been a long time since I’ve worked on MusicSrch.

For the past few years, I really didn’t have any time for side projects, and a few searches were very broken on the site due to changes in the various third-party websites. It needed some serious rehab.

I spent a few days fixing things and improving various features, and now it’s a lot better.

Give it a try at https://musicsrch.com

I plan to spend a lot more time working on it this year, and there are more sites I want to add to search.

Why Do So Few People Develop Apps for Windows?

There are a lot of reasons so few people develop for Windows anymore.

We’ll skip the obvious ones:
– There’s a whole lot more growth in mobile apps.
– Most of the apps that people could want or need on Windows have already been written.
– Many apps make more sense as websites, so every platform can access them.

For apps that make more sense on a desktop PC, such as audio/video production and 3D design, and all of the other sorts of things that are not optimal on a 5-inch screen or in a stateless system, there’s a much more painful reason:

MICROSOFT’S DEVELOPER ECOSYSTEM IS HORRIBLE.

For example, consider this question:

What framework should I build my app with?

It is almost impossible to answer, and the official answer changes every year or two.

Win32?
WPF?
WinForms?
.NET MAUI?
WinUI?
Xamarin?
WinRT?
MFC?
Qt?
wxWidgets?
Silverlight?
Universal Windows Platform (UWP)?
Windows App SDK?
Progressive Web App (PWA)?
MSIX, MSI, or EXE installer?
Are any of these things different names for the same thing? (yes)

Good luck. Microsoft has come out with many new technologies that were “almost finished”, and each one has had glaring omissions or dropped important technologies and features, gaps that will probably never be filled before the system is replaced with something else. I know this pain all too well, since I’ve worked extensively with text-to-speech and audio applications.

And, of course, when one system supersedes another, they don’t put a disclaimer on the docs saying “you should use this new thing instead” and provide a migration guide, and many of the docs don’t even have a date so you at least have some idea whether it’s current.

In some of the latest nonsense: In the Windows Store, you can submit an app with an MSIX installer or an MSI/EXE installer. You can set an app with an MSI/EXE installer as paid, but you CANNOT ACTUALLY CHARGE MONEY FOR IT. I don’t know who designed a system like that, but they were probably drunk at the time. Of course, I didn’t learn this until after I went through the entire process of getting an app approved for the store.

Another aborted effort was enabling ad-based monetization of free apps distributed via the Windows store.

Why is it that every mobile operating system (even the ones with only a handful of users) has managed to get this right, and Microsoft just can’t figure out how to build a system that people actually want to build a business around?

I like building desktop apps, but Microsoft has done everything they possibly can to make it a nightmare to support Windows. Nowadays you can probably only make money writing Windows apps if you’re writing malware.

You’re Going to Have to Get a Lot Weirder

With the growth of artificial intelligence, you’re going to have to get a lot weirder.

AI systems like GPT-3, Stable Diffusion, DALL-E, and others are trained on gigantic collections of data. They analyze this data and find things that are in common, and use them to synthesize “new” things based on the rules and styles they’ve learned.

Essentially, AI art, writing, and music manages to plagiarize thousands of works simultaneously.

What this means is that, generally, things produced by AI are more “average” and a lot closer to the middle ground. “Interesting” and “weird” will gradually be eroded and replaced with things that are more and more average over time. Nuance will be replaced by uniformity and commonality.

Your only hope is to always be weirder, more interesting, and more original than the robots.

Twitter: I Was Off By Four Years

Back in early 2018, I posted that 2018 would be the year that Twitter “ceases to be relevant”.

It turns out that I was off by four (or maybe five) years.

Twitter is certainly relevant TODAY, but much of its relevance is as a lesson in “how to destroy a company in under six months”. Whether it takes a few weeks or a year for the majority of users to leave and the site to become a mostly-forgotten memory is yet to be seen, but it’s toast.

When Elon purchased Twitter, I finally deleted my personal account. I had only been using the site to keep track of what was going on in Ukraine, and wasn’t interested in “joining the circus” after the transition.

Today I deleted links to Twitter from all of my websites. It’s just not somewhere worth participating in, and it’s not a useful place for communicating/publishing news about new things to an audience.

I don’t see it recovering, and not much of value will be lost.

Fixing OSError: cannot write mode RGBA as JPEG in Python

I have an application that was originally written in Python 2.7 and used the Python Imaging Library (PIL). Now it uses Python 3.x and Pillow (the PIL replacement). One of the things it does is automatically generate .jpg thumbnails of uploaded files.

It does this by resizing the uploaded image, and then saving it with the filename extension changed.

I started getting an error when uploading a .png file:

OSError: cannot write mode RGBA as JPEG

A bit of research told me that the behavior has changed. Now you need to discard the alpha (transparency) channel of an image before you’re allowed to save it as an image type that doesn’t have one. Since PNG files normally have a transparency layer, it makes sense that I would hit that error. Pillow is trying to protect me from losing data I might potentially want to keep. However, I don’t want or need alpha channel data for my thumbnails.

The solution is just to convert the image before saving.

# im is the Pillow Image object for the uploaded file
thumbnail = im.convert('RGB')  # discard the alpha channel
thumbnail.save("my_image_filename.jpg", 'JPEG', quality=92)

It’s a simple fix, but slightly annoying that the behavior has changed over the years.
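
One caveat, in case it matters for your images: a plain convert('RGB') usually leaves fully-transparent pixels looking black in the resulting JPEG. If you’d rather flatten them onto white, you can paste the image onto a white background using the alpha channel as a mask before saving. This is just a minimal sketch, with made-up filenames matching the example above:

from PIL import Image

im = Image.open("my_uploaded_file.png").convert("RGBA")
background = Image.new("RGB", im.size, (255, 255, 255))  # white canvas, same size
background.paste(im, mask=im.split()[3])                 # alpha channel as the paste mask
background.save("my_image_filename.jpg", "JPEG", quality=92)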

Creating Album Art Videos with Wondershare Filmora

It’s been a couple years since I last created an album art video, and I normally create them for an entire album all at once. It looks like the tool I used to use is no longer available, so I decided to figure out how to use Wondershare Filmora to do it.

Screenshots are from Filmora 11, but this should be pretty much the same on older and newer versions since it’s a basic operation that’s not likely to change much.

The first step is to create a new project and import the media. In addition to the .wav files (or .mp3 files, though those are lower quality), you should also have created an album art image that is 1920×1080 pixels, the standard dimensions of an HD video. If you want to create a 4K video, you’ll need a larger image.

Once the media is imported, drag the image to the video track.

Next drag a song to the audio track.

Now you need to change the length of the video track to match the audio track. This can be tricky because the single image is too small to grab at the default zoom level. If you click “zoom in” three or four times you should be able to grab it and drag it longer.

You’ll need to drag this to match the length of your audio track. Filmora makes this easy because it wants to “snap” the video length to the audio length.

An album art project in Wondershare Filmora.


Next select Export -> Create Video from the menu. Edit the name of the video and select a folder to export to.

Under “Preset” you’ll want to click “Settings”. I have settings I’ve saved as a preset to make things faster. The video encoding bit rate isn’t very important, but don’t set it too high or you’ll have a huge video file for no reason. More important is to set the audio to a high bit rate so you have good-quality music encoding.

Wondershare Filmora Encoding Settings for Album Art Video


After you have the settings configured, click “Export” and wait a bit for it to finish.

You don’t need to create a new project for each video, and it’s a bunch of extra work to do so.

After the first video is exported, delete the audio track from the audio timeline, drag in the next one, drag the video track slider to match the length, and then click Export -> Create Video again. Remember to change the name. It’s that simple.

Fixing Error LNK2038 In a Visual Studio Project

While building a Visual Studio C++ project in Visual Studio 2022, I received this error while linking to a library:

Error LNK2038 mismatch detected for '_MSC_VER': value '1800' doesn't match value '1900' in MidiPlayer.obj MIDIPlayer E:\code\MIDIPlayer\wxmsw30ud_core.lib(dlgcmn.obj)

In case you’re not familiar with this error, Visual Studio wants to link against libraries that were built with the same version of the Microsoft C++ compiler (the same toolset) as the code you’re compiling. In older versions of Visual Studio, that simply meant libraries built with the same version of Visual Studio. Now that toolset versions aren’t as tightly coupled to Visual Studio releases, you can hit this error even with libraries built using the same version of Visual Studio that you’re using.

The solution is to rebuild the library with the same MSC version as your project.

For reference, here are the different MSC, MSVC, and toolset versions:

Microsoft C 6.0: _MSC_VER == 600
Microsoft C/C++ 7.0: _MSC_VER == 700
Microsoft Visual C++ 1.0: _MSC_VER == 800
Microsoft Visual C++ 2.0: _MSC_VER == 900
Microsoft Visual C++ 4.0: _MSC_VER == 1000
Microsoft Visual C++ 4.1: _MSC_VER == 1010
Microsoft Visual C++ 4.2: _MSC_VER == 1020
Microsoft Visual C++ 5.0: _MSC_VER == 1100 (Visual Studio 97)
Microsoft Visual C++ 6.0: _MSC_VER == 1200 (Visual Studio 6.0)
Microsoft Visual C++ 7.0: _MSC_VER == 1300 (Visual Studio 2002)
Microsoft Visual C++ 7.1: _MSC_VER == 1310 (Visual Studio 2003)
Microsoft Visual C++ 8.0: _MSC_VER == 1400 (Visual Studio 2005 – v80)
Microsoft Visual C++ 9.0: _MSC_VER == 1500 (Visual Studio 2008 – v90)
Microsoft Visual C++ 10.0: _MSC_VER == 1600 (Visual Studio 2010 – v100)
Microsoft Visual C++ 11.0: _MSC_VER == 1700 (Visual Studio 2012 – v110)
Microsoft Visual C++ 12.0: _MSC_VER == 1800 (Visual Studio 2013 – v120)
Microsoft Visual C++ 14.0: _MSC_VER == 1900 (Visual Studio 2015 – v140)
Microsoft Visual C++ 14.1 – 14.16: _MSC_VER == 1910 – 1916 (Visual Studio 2017 – v141)
Microsoft Visual C++ 14.2 – 14.29: _MSC_VER == 1920 – 1929 (Visual Studio 2019 – v142)
Microsoft Visual C++ 14.3: _MSC_VER == 1930 (Visual Studio 2022 – v143)

It’s interesting that, starting with Visual Studio 2017, Microsoft began incrementing the MSC version with each “micro” toolset update.
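
If you want a build to fail early with a clearer message when it’s compiled with an older toolset than you expect, you can also check _MSC_VER yourself in a common header. This is just a sketch; the 1900 threshold is an example value, not something specific to my project:

#if defined(_MSC_VER) && _MSC_VER < 1900
#error "This project expects the Visual Studio 2015 (v140) toolset or newer."
#endif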

Telling Git To Ignore File Mode Changes

It’s possible for Git to think there have been changes to files when only permissions have changed. This is something that is pretty common if you work across multiple operating systems, and it can be annoying.

For example, running “git status” showed that two of my icon files had changed, but I hadn’t edited them. Running “git diff” showed that it was just the file mode properties (permissions, or “props”) that had changed.

$ git diff
diff --git a/images/samplitron_largeicon.png b/images/samplitron_largeicon.png
old mode 100644
new mode 100755
diff --git a/images/samplitron_largeicon.xcf b/images/samplitron_largeicon.xcf
old mode 100644
new mode 100755

To make Git stop treating this as a change, there’s a command you can run:

git config --global core.fileMode false

Now when you run “git status” or “git diff”, file mode changes will be ignored.
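
Note that the command above sets this globally, for every repository on your machine. If you only want to ignore file mode changes in one particular repository, run the same command inside that repository without the --global flag:

git config core.fileMode false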

Fixing Error MSB3843 in a Visual Studio Project

I updated my DrumPads code project to the latest version of Visual Studio 2022 and received this error when trying to build:

Error MSB3843 Project “DrumPads” targets platform “Windows”, but references SDK “Visual C++ 2015-2019 Runtime for Universal Windows Platform Apps v14.0” which targets platform “UAP”. C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin\amd64\Microsoft.Common.CurrentVersion.targets

I’m not sure whether the project was actually set to target UAP, or whether the migration set it that way.

Visual Studio Solution Explorer Showing "Universal Windows"


To fix the error, I edited the DrumPads.vcxproj file manually. In the PropertyGroup sections for the build configurations I changed this:

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
  <ConfigurationType>Application</ConfigurationType>
  <UseDebugLibraries>false</UseDebugLibraries>
  <WholeProgramOptimization>true</WholeProgramOptimization>
  <CharacterSet>Unicode</CharacterSet>
  <PlatformToolset>v143</PlatformToolset>
  <WindowsAppContainer>true</WindowsAppContainer>
</PropertyGroup>

I removed the <WindowsAppContainer>true</WindowsAppContainer> line from all of the build configurations and reloaded the project. It no longer showed up as “Universal Windows” and I was able to build without the error.

Solution Explorer Without "Universal Windows"


Changing the Remote URL for a Git Repository

Maybe the repository has been moved, or maybe you have an old repository that was checked out with username and password authentication, and you can’t push to it anymore because GitHub no longer accepts password authentication for Git operations. That was true in my case, since I was going back to work on code I hadn’t touched for a few years.

Although you could just re-check-out the repository, that is unnecessary, and can be annoying if you have changes that you want to push.

Changing the remote URL for a git repository is a simple thing to do, but it’s not necessarily obvious. It’s done using the “git remote” command.

To display the URL of the remote repository:

git remote get-url origin
https://github.com/xangis/SigmaTizm

To change it to a new URL:

git remote set-url origin git@github.com:Xangis/Sigmatizm.git

With this one-line change I was able to continue working as normal without needing to re-check-out the repository.
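
If you want to confirm the change, “git remote -v” lists the fetch and push URLs for every configured remote:

git remote -v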

New Bloodless Mushroom Release – Hydropus

I recently released some new Bloodless Mushroom music, an album titled Hydropus. It’s a collection of glitchy ambient instrumental tracks composed on various vintage synthesizers. The name comes from a genus of mushroom that grows in tropical forests.

The album art is a painting I created early in the Coronavirus lockdowns.

It’s available for listening on YouTube here:

It’s also available on Spotify and other streaming services. Enjoy. 🙂