Category Archives: Websites

Why You Shouldn’t Use Google’s Programmable Search Engine

Google’s Programmable Search Engine, formerly known as the Custom Search Engine (or CSE), is a simple solution to integrate basic site search into a website. In just a few clicks, you can add an easy search box that more-or-less works for site-wide search.

The search box looks like this, complete with Google branding:

Google Programmable Search Engine Search Box



In addition, you can integrate this search with AdSense so that you get some ad revenue when someone clicks on an ad or sponsored link in the search results. It’s not much, only a tiny fraction of what you might get from other ads on your site, but it’s something.

One big drawback: you don’t have any control over how pages rank in search results.

Here’s an example of what the search results looked like for one of my sites:

Google Custom Search Engine Results With Ads

If you don’t really care about the user experience or look and feel of your site and want to give away all of the valuable information that search data can tell you, then you’re done.

If, however, you care about the user experience for your site and want to make it better over time, you’re better off creating a different solution.

If you’re using a framework or CMS like WordPress, there are plenty of built-in and plugin solutions that you can use that will let you see what people actually searched for.

Let’s compare two pieces of data.

First, here is the Google Programmable Search report for my website:

Google Programmable Search Engine Data

Note that there is not a lot of useful information here. You can see the text of queries that people made more than 10 times, but that doesn’t amount to much. You lose ALL unique search queries and uncommon variations, and are left with almost nothing you can use to improve your site content or user experience. This is all I get from more than 30,000 queries over more than a decade!

Second, here’s an example of what I saw in the first 24 hours after switching to my own site search:

Homegrown Site Search Data


In my case, I coded something from scratch, but you probably don’t have to. Like I said, there will be options and plugins for whatever you’re using.

Although some of these searches are from me doing testing, I already have some useful information. For example, I had some information on the “triton” synthesizer, but nothing about the “triton extreme”. This was a clue that I should add that to my site (which I did). I would never have realized I had left that out if I didn’t know what people were looking for. Even better, because I built this custom, I can see what the top search result was for each query, which tells me at a glance whether people are getting what they want.

The value of this information is FAR more than the paltry few pennies that you’d get from Google’s custom search, and those pennies come at a cost — directing people away from your site.

In addition, I now have full control of how pages rank on my site. Because I know my site better than Google and have more context about my particular niche, I can provide more relevant prioritization of results.

Hosting your own search is more work overall (although maybe just a few minutes), and you may need to do a bit of work to filter out bot and crawler traffic, but this is valuable information worth having access to. You shouldn’t just give it away.
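As a rough sketch of what that logging can look like (the function and field names here are hypothetical, not the code I actually run), a homegrown search handler only needs to record the query, the top result, and enough request metadata to filter out bots:

```python
import re
from datetime import datetime, timezone

# Substrings that commonly identify crawlers; extend this as new bots show up.
BOT_PATTERN = re.compile(r"bot|crawler|spider|curl|python-requests", re.IGNORECASE)

def log_search(log, query, top_result, user_agent):
    """Append a search record to `log`, skipping obvious bot traffic."""
    if BOT_PATTERN.search(user_agent or ""):
        return False  # keep crawler noise out of the stats
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "query": query.strip().lower(),  # normalize for easy aggregation
        "top_result": top_result,        # shows whether the search found anything useful
    })
    return True
```

A database table works just as well as an in-memory list; the point is simply that query, top result, and user agent are enough to reproduce the kind of report shown above.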

Goodnight MusicSrch

I’ve had MusicSrch for about seven years now.

Although it was an interesting curiosity, I never really figured out what to do with it, and it never managed to have more than a few dozen visits per month.

It was super helpful when writing music reviews because I could just dig up all the streaming links for a band with a quick search.

About a year ago I added a sort of “directory browse” feature, where it would save the data for a band, keep track of some historical numbers, and let you browse by genre. That was kind of interesting, but not really useful, especially given the plethora of music sites out there in the world.

If I haven’t figured out what to do with the site in seven years, seven more won’t help.

So, goodnight, MusicSrch.

If you’d like to see the historical site, it’s been saved on the Wayback Machine over the years:

Seasonal Website Advertising Income Differences (Monthly RPM Changes)

I’ve been running advertising-supported websites for about 15 years now, primarily music-related.

I typically notice a decline in income at the beginning of the year, and a spike in income toward the end of the year. My audience is international, but the majority of visitors are from the USA and to a lesser extent other English-speaking countries.

I gathered up the data on my historical earnings for the last three years and charted how much income changes throughout the year on average. Using January as a baseline, this is how things change throughout the year:

Month      Earnings vs. January  Pageviews vs. January
January      0%       0%
February    -2.2%    -9.9%
March       -2.0%    -3.0%
April       -6.1%    +6.6%
May        -15.0%    -0.4%
June       -18.0%   -15.7%
July       -16.1%   -17.0%
August      -6.8%   -21.0%
September   -4.4%   -38.6%
October     +2.5%    -8.0%
November   +16.3%    -4.1%
December   +26.6%    -3.2%
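For anyone who wants to build the same table from their own numbers, each cell is just a percent change relative to the January baseline. A quick sketch of the calculation (the month names and rounding are my own choices):

```python
def change_vs_january(monthly_totals):
    """Express each month's total as a percent change from the January baseline."""
    base = monthly_totals["January"]
    return {
        month: round(100.0 * (value - base) / base, 1)
        for month, value in monthly_totals.items()
    }
```

Feed it earnings totals and you get one column of the table; feed it pageview totals and you get the other.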

What’s clear and obvious is that income is heavily impacted by the holiday season.

Something else that I didn’t realize is how much both traffic and earnings drop during the North American summer and early fall. I think this is a sort of “go out and play” influence — I tend to get less traffic on days when people are out with friends in the evening or otherwise on vacation. People tend to spend less time working on music and looking for tools to make that music when they’re out playing music in clubs or at parties.

That’s my theory, at least.

Of course, this is a sample size of one person (across two sites) over three years, so there will be a significant margin of error. Nevertheless, I still find it interesting.

Google Is Not Very Smart (or Incredibly Smart)

To the layperson, Google might seem like the smartest company in the world.

Once you understand technology, it’s obvious that they’re not very smart. Or incredible geniuses.

If you’ve spent more than a few years building websites, you know very well the types of ignorant, robot-stupid mistakes that Google can make. An example from this very blog: I spent quite a few years working on multi-user dungeons (MUDs for short). There are a lot of posts on the subject, and Google’s ad system thinks that one of the main topics of this blog is digging holes in wet soil, so it’s not uncommon to see ads for excavation machinery on this site.

Google is PHENOMENALLY good at miscategorizing things. If a phrase or topic has two meanings, and 80% of the discussions in the world focus on the more popular meaning while 20% are about the less-popular one, the less-popular topic gets buried in the noise: the systems will assume you mean “computer keyboards” when you meant “music keyboards”.

When you combine this with what I call “computational laziness”, this means that if your site happens to be categorized in a certain manner, you’re basically stuck there. Google doesn’t put much effort into revising its categorization models or re-analyzing sites based on new information.

What does that mean for web developers? Well, for a newer site, if Google puts you in a place you don’t want to be, you’re probably better off starting over from a different angle.

For people searching the web, it’s a little more complicated. Google is NOT designed to be the best search engine on the planet. Now that the search engine wars are over, the design has changed to focus on revenue maximization. What does that mean?

It means that Google as a search engine is designed to show you just-barely-adequate results that kind-of-but-not-really satisfy the question you were asking. It’s designed to be mostly-accurate but slightly frustrating, so that you are tempted to click on ads that seem likely to answer your question.

In a perfect world, a search engine would give you exactly the information you were looking for, as quickly and as accurately as possible. In THIS world, search engines are designed to give you the plausibly-relevant information that will benefit the search engine the most.

In a world where the profit motive rules over everything, product quality must necessarily suffer for the purpose of maximizing margins.

Search is something most of the people in the world use pretty much daily, so when does it make sense for it to become a public utility? It’s an interesting question worth pondering, but the mathematics and economics are far too complicated for a sound-bite answer.

The number of websites in existence has been relatively flat since 2017, not growing any faster than the world’s population, but processing power has effectively tripled. If Moore’s Law were still in effect, it should have grown 16x, but that ship has sailed. What this means is that, even though sites today have many more pages and much more data on average than six years ago, the ability to organize that information has grown faster than the actual quantity of information.
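The 16x figure is just the doubling-every-18-months form of Moore’s Law applied to the roughly six years since 2017:

```python
years = 6.0             # roughly 2017 to the time of writing
doubling_period = 1.5   # Moore's Law in its doubling-every-18-months form

moores_law_growth = 2 ** (years / doubling_period)
print(moores_law_growth)  # 16.0, versus the roughly 3x actually observed
```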

Google is not special. They’re just another business. And, with their original core patents expired or expiring soon, there’s a lot of room to build something of higher quality at lower cost. Given the level of mind-share they have (as a quick look at Bing’s market share will confirm), is it worth it to build a competitor?

It depends.

Blocking Twitter and Other Nuisance Sites with BlockSite

It’s very common for news articles to contain links to Twitter. Some lazy writers even create articles that are nothing but a page of Twitter embeds.

I think Twitter is trash and never want to visit the site, and sometimes it’s not obvious that a link leads to Twitter, especially with the widespread use of URL shorteners on the web.

I found a quick solution that works well via a plugin called BlockSite. I use both Firefox and Chrome, and it’s available for both.

BlockSite for Chrome

BlockSite for Firefox

There’s a lot more to it than just blocking garbage sites like Twitter, including things like setting up distraction-free hours to block social media during your workday, so check out their website for more info.

Here’s an example of the plugin preventing a visit:

BlockSite Plugin Preventing Twitter Access

Using Location to Assume User Language

My first language is English. I live in Uruguay, and while I’m comfortable and nearly fluent in Spanish, sometimes I’d rather use a website in English.

It is EXTREMELY common for a website to see the location of my IP address and assume it should send me a page in Spanish. That’s a reasonable guess, because everyone in Uruguay speaks at least SOME Spanish. Not everyone has it as their first (or preferred) language, though, given the number of immigrants here (especially from Brazil).

The other thing that sites may use is the language setting of your web browser. This is also a reasonable approach, although less common. I have one computer set to English and one set to Spanish, and I’ll sometimes get different versions of a site depending on which computer I use. This is mostly OK, but there are an unreasonable number of computers that have the language set to English (the most common default) even though the users don’t prefer English. There are also computers used by multi-lingual households, so using the computer’s language setting is an assumption at best, and probably less accurate than using someone’s location.
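For the curious, the browser preference travels in the Accept-Language request header, complete with weights. A minimal parser (a sketch, not production-grade header handling) looks something like this:

```python
def preferred_languages(accept_language):
    """Return language tags from an Accept-Language header, sorted by q-value."""
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            lang, _, q = piece.partition(";q=")
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0
        else:
            lang, weight = piece, 1.0  # no q-value means full weight
        prefs.append((lang.strip(), weight))
    # Highest weight first; Python's sort is stable, so ties keep header order.
    return [lang for lang, weight in sorted(prefs, key=lambda p: -p[1])]
```

A browser set up like mine might send “es-UY,es;q=0.9,en;q=0.8”, which is exactly the ranked preference list a site should respect before falling back to geolocation.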

Of course, the right thing to do after you’ve made an assumption about the user’s language is allow them to change it. Typically this is a link or button somewhere along the top of the site. Although sometimes this will be a flag icon, that’s less than ideal when you’re dealing with countries that have multiple languages (Belgium, for example). People from that country have learned to adjust, and can pick the French, Netherlands, or even German flag if they need to.

The problem comes when a website doesn’t give someone the chance to change the language they’re viewing. Not only are geolocation services not 100% accurate, but just because you are IN a place doesn’t mean you are OF a place.

Sure, auto-translation tools are nice, and they get you 80% of the way there, but they rarely understand context, words with multiple meanings, regional word usage, or any of the other human nuances. Maybe some day, but not today. So it’s important to allow a user to select their preferred language.

Sometimes it’s easy to change the language manually by hand-editing the URL. For example, these URLs:

Can easily be changed to:
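The example URLs didn’t survive in this archive, but the pattern is usually a language code embedded in the path. As a purely hypothetical illustration (the /es/ and /en/ prefix convention is an assumption, not universal):

```python
def swap_language(url, old="/es/", new="/en/"):
    """Swap a language path segment, assuming the site uses /xx/ URL prefixes."""
    return url.replace(old, new, 1)
```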

Where it gets really ugly, and I’ve seen this with media/streaming sites more than anything else, is when you deliberately visit a page in a specific language, and you are force-redirected back to the language you didn’t want.

The only hope for sites like that is to use a proxy server. Or close the browser tab. You should never force a language on a visitor if your site is available in other languages.

And if you find a site with this bad behavior, please send them a message letting them know the error of their ways. Maybe they’ll eventually learn.


MusicSrch Improvements

It’s been a long time since I’ve worked on MusicSrch.

For the past few years, I really didn’t have any time for side projects, and a few searches were very broken on the site due to changes in the various third-party websites. It needed some serious rehab.

I spent a few days fixing things and improving various features, and now it’s a lot better.

Give it a try at

I plan to spend a lot more time working on it this year, and there are more sites I want to add to search.

Twitter: I Was Off By Four Years

Back in early 2018, I posted that 2018 would be the year that Twitter “ceases to be relevant”.

It turns out that I was off by four (or maybe five) years.

Twitter is certainly relevant TODAY, but much of its relevance is as a lesson in “how to destroy a company in under six months”. Whether it takes a few weeks or a year for the majority of users to leave and the site to become a mostly-forgotten memory is yet to be seen, but it’s toast.

When Elon purchased Twitter, I finally deleted my personal account. I had only been using the site to keep track of what was going on in Ukraine, and wasn’t interested in “joining the circus” after the transition.

Today I deleted links to Twitter from all of my websites. It’s just not somewhere worth participating in, and it’s not a useful place for communicating/publishing news about new things to an audience.

I don’t see it recovering, and not much of value will be lost.

What Was was a software download site that I ran (with some interruptions) from 2013-2021.

It offered software downloads for Windows and had a few hundred listings. Listings could be added by submitting a Portable Application Description (PAD) file.

PAD files were an interesting idea that made it much easier for shareware and freeware authors to distribute software, but over time they were co-opted by spammers (especially affiliate link spam) and people who wanted to distribute viruses, so filtering out the bad/useless things was an ever-increasing chore.

My site was an interesting experiment and it got a bit of traffic, but ultimately there’s no real demand for Windows software download sites now that Windows has a proper app store. Even once-massive sites like are struggling. That’s why I shut it down in August 2021.

Now that you’re here, feel free to explore the blog a bit. I have a bunch of websites and music projects I’ve created, and you might find some of them interesting (under the “My Stuff” section of the sidebar).

Thoughts on Bidvertiser

Bidvertiser stands alone as the only AdSense alternative that never made me angry.

It just worked. Setup was easy. There was no shady business, malicious javascript, browser hijacking, popover or popunder ads, push notification nonsense, or other user-alienating tomfoolery.

It’s the only ad network that I experimented with that didn’t make me want to turn on adblock on my own site, which had pretty minimal ads in the first place.

Earnings were terrible, as shown below. My site was a general-purpose, non-niche site with global traffic based mostly in India, Pakistan, Turkey, and other countries that aren’t high-revenue for advertising. In addition, the traffic volume was not high enough to attract anyone seeking to buy ads specifically on the site. Given that, take these earnings with a grain (or a bowl) of salt; they would be much higher if your traffic were entirely from the U.S. Even so, the same traffic with AdSense would likely have earned around $8 or so.

If I had another site to monetize that was not compatible with AdSense, I’d use them again. I just wouldn’t expect to buy a fancy yacht with the earnings.

Outrage Is Useless

Before the algorithms took over the internet entirely, the news was obsessed with fear. Making people afraid was their goal, and it was what kept eyeballs glued to the screen.

As the internet evolved, fear was still in heavy circulation, and it benefited those who knew how to wield it. It was not just the news, but politicians and products meant to make you feel “safe”. But it started to change.

Over the past few years, thanks to “the algorithms”, I’ve noticed a shift more toward outrage than fear. When I visit a site, something is invariably presented that is meant to outrage me. Twitter does its best to make most of its trends political or “what celebrity did which outrageous thing you should get mad about”. Facebook shows me memes and news stories meant to make me mad, get my hackles up, and bring forth the fires of righteous indignation. So do all the other news and social media outlets.

When you’re presented with something, it’s worth taking a step back and looking at what sort of outrage it’s intended to provoke. Do you really want to waste your time yelling at some celebrity or making some 15-year-old kid cry for saying something ignorant on camera?

And when someone wants to direct their outrage your way, it’s best to just ignore it and let it blow over. It’s only temporary and in 10 minutes they’ll be outraged at someone or something else. Someone will always be offended by what you do or who you are. Don’t walk on eggshells for fear that someone might say something mean to you. Their opinions don’t matter, and that’s no way to live.

Outrage is useless. Don’t let it control you.

The WbSrch Experiment

Off and on over the last 8 years I’ve worked on an independent search engine called WbSrch. It got as far as being as good as the late-1990s search engines, which is great, because the original goal was to build something much like AltaVista, my first “main” search engine.

At one point I tried to turn it into a real business. That went poorly and I shut it down. Then I brought it back to work on as a hobby/fun project. That was interesting and fun for a while, but it’s run its course. I’ve done all the things I set out to do and learned all the things I wanted to learn. I’ve had my fun, so there’s no need to tinker with web search anymore. It did keep me busy toward the end of the pandemic as I was starting to go stir crazy, and I’m grateful for that.

If you’d like to see what it looked like when I finished with it, take a look at this capture on

If you’d like to use a pretty good alternative search engine, I suggest Mojeek or Yandex. The MusicSrch music search engine is still going, too.

And if you’d like to get a copy of some of the data I collected, there are a few inexpensive data downloads available.


PoSSE and Facebook

One core idea of the “Indie Web” is “Publish on Your Own Site, Syndicate Elsewhere” (PoSSE). The idea is that you post content on your own website first and foremost, and then mirror it to social networks such as Facebook. This gives you more control over the original content, keeping it from being hidden behind a walled garden and preventing it from disappearing if you are banned from a site, the site shuts down, the algorithms decide you’re not interesting, or the site just decides to hide things older than X years.

It’s a good idea, and I think I’ll be implementing it a bit more in my own life. Don’t be surprised if you see more posts showing up and backfilling the site with non-recent publication dates. Most of my activity is on Facebook, but there is a little on Instagram, and even less on Twitter.

The one obvious drawback to publishing things publicly on your own site is that it lacks visibility controls like “friends only”. That control is valuable, though not foolproof, since anyone can screenshot and forward anything; it does help keep down the number of randos sea-lioning into your conversations.

Since this blog intentionally does not allow comments, there’s little worry about that. There is still a little privacy concern, but as an Extremely Online Person, I don’t care much about privacy and everything is pretty much out there anyway.

Removing Politics From Twitter

My disdain for Twitter is no secret. It is a cesspool of the worst people on Earth. But it does have some redeeming qualities if you can manage to filter out all the political nonsense.

Here’s how I filter out most of the crap (there are a few more that go off the screen, but not that many).

I should really turn off trends, but instead I either click “Not interested in the topic” or “This trend is harmful or spammy” when I see anything political. Anecdotally, clicking “not interested” seems to have more effect. I also mark sports topics as “not interested”, since I’m genuinely not interested in any sports. They don’t make me angry, though.

I also block everyone who looks even remotely annoying and have built a block list of around 1000 people over the past 10 years or so. My block list is insane and is about 90% MAGA idiots (and there seems to be a deep supply of them) and about 10% always-outraged liberals. Most of the MAGA scum on Twitter are either bots or morons who are indistinguishable from bots. This does of course mean that I’m missing out on the finer details of the United States’ inevitable descent into totalitarian fascism, which is a real loss.

All in all, it is a LOT of effort to de-politicize your Twitter feed, and it’s probably not worth it. If Twitter had any sense, which they don’t, they’d add an option to filter out political nonsense. I think they know that if they added that option, there would be almost nothing left and most of the wingnuts would leave, destroying their monthly active user numbers. So, instead of making it a decent place where you can find useful information, they made it a place full of angry assholes always getting angrier about things. That’s the thing with social media — the algorithms LOVE to keep people outraged and angry because that results in more eyeballs-glued-to-the-site time.

My feed is for the most part now a mix of cute capybara pictures, 3D art, and pictures of Spain. You should probably follow CAPYBARA_MAN.

Or just don’t bother wasting your time with Twitter. That’s always an option. Fear not, you’re missing out on nothing.

New Web Browser: Scleroglossa

For quite a while I’ve wanted to build a web browser based on the Gecko engine by Mozilla, which is what powers Firefox. Until recently I never had the right combination of time and motivation to dig in.

Well, now that I have, here’s the result – the Scleroglossa browser for Windows.

It’s available for download on the Lambda Centauri website.

Cleaner URLs Without Tracking Nonsense

Have you ever seen a link with a bunch of extra stuff on it? Facebook URLs with “fbclid=<big string of letters>”, links with a bunch of “utm_medium=<whatever>” parameters, or those horrendously long product links you get from Amazon?

They’re used for tracking behavior, and handy for people getting marketing and attribution data. If you don’t mind them, that’s cool. They annoy me a little because I like clean, readable URLs.

There’s a browser extension to get rid of them, called ClearURLs:
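Under the hood, a cleaner like this does roughly the following (a sketch using Python’s standard library; the parameter list is illustrative, not ClearURLs’ actual rule set):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# A few of the usual suspects; real blockers maintain much longer lists.
TRACKING_PARAMS = {"fbclid", "gclid", "utm_source", "utm_medium",
                   "utm_campaign", "utm_term", "utm_content"}

def clean_url(url):
    """Drop common tracking parameters from a URL's query string."""
    parts = urlsplit(url)
    kept = [(key, value)
            for key, value in parse_qsl(parts.query, keep_blank_values=True)
            if key.lower() not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```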



I Don’t Care About Cookies

I’m tired of websites showing me cookie warnings that I have to click through to remove some sort of overlay obscuring part of the site. I have not, nor will I ever, care about cookies. They’re a built-in part of the browser that should just work invisibly, and they’re an important part of making web apps work.

There’s an extension that’s called, appropriately, “I Don’t Care About Cookies”. Here it is:



Windows Software by Lambda Centauri

I’ve written a lot of apps for Windows (and other) PCs. Originally I published everything as Zeta Centauri, but that made for a weird combination: utility apps (calculators, word processing, an image viewer, a browser) that didn’t mesh well with the audio apps. I’ve launched a new website for the utility apps to keep them separate from the audio apps.

Check it out here:

Blocking Spammy or Malicious IPs with Nginx

Over the years I’ve added a bunch of sites and networks to my IP block list. They’re sources of spam, hack scripts, and traffic that just wastes my time and resources.

Not all of these networks are spam or malicious at this very moment (some had viruses or were otherwise compromised as part of a botnet); it’s just what I’ve found useful to block. The names are optional; I just added comments with what each network looked up as (or why it was blocked).

Here’s what I’ve blocked. To implement this in Nginx, save the list below as blockips.conf in the same directory as nginx.conf, then add “include blockips.conf;” inside the main block of nginx.conf.


# Every one of these networks has been blocked because they have
# behaved in a spammy, botlike, no-account hackscript way.
# Add this to your /etc/nginx/nginx.conf like so:
# include blockips.conf;
# (assuming it's in the same directory)
deny; # CHINANET Hubei province network
deny; # CHINANET hebei province network
deny; # China Unicom Fujian Province Network
deny; # China Unicom Shandong province network
deny; # China Unicom Henan province network
deny; # OVH SAS - For spamming searches on WbSrch.
deny; # >30k spam queries from
deny; # >15k spam queries from
deny; # China Unicom HuNan province network
deny; # China Unicom FuJian province network
deny; # China Unicom Jiangsu province network
deny; # China Unicom Guangdong province network
deny; # CHINANET fujian province network
deny; # CHINANET fujian province network
deny; # China Unicom Hebei Province Network
deny; # China Unicom Hebei Province Network
deny; # China Unicom Heilongjiang Province Network
deny; # China Unicom Zhejiang province network
deny; # CHINANET anhui province network
deny; # CHINANET Zhejiang province network
deny; # Beijing Bitone United Networks Technology Service Co., Ltd.
deny; # China Unicom Shanxi Province Network
deny; # Daqing Zhongji Petroleum Communication
deny; # CHINANET Guangdong province network
deny; # CHINANET Guangdong province network
deny; # CHINANET jiangsu province network
deny; # China TieTong Telecommunications Corporation
deny; # China Unicom Hebei province network
deny; # UNICOM ZheJiang Province Network
deny; # GoDaddy site spamming domain searches (perhaps acting like they're a site submit service)
deny; # CHINANET xinjiang province network
deny; # China Unicom Hebei province network
deny; # China Unicom Hebei province network
deny; # China Mobile Communications Corporation
deny; # China Mobile Communications Corporation
deny; # China Unicom Fujian Province Network
deny; # North Star Information Ltd. Co. (China)
deny; # China Unicom Shandong province network
deny; # CHINANET Guangdong province network
deny; # CHINANET Guangdong province network
deny; # China Unicom Henan province network
deny; # CHINANET Zhejiang province network
deny; # CHINANET Zhejiang province network
deny; # >50k Spam queries from
deny; # CHINANET Fujian province network
deny; # CHINANET Shanxi(SN) province network
deny; # China Mobile Communications Corporation
deny; # Building D, No.2 Shangdi Xinxi Road Pioneering Park,
deny; # CHINANET Qinghai Province Network
deny; # China Unicom SiChuan province network
deny; # China Unicom HuNan province network
deny; # China Unicom Jilin province network
deny; # Xiamen Broadcasting & TV Network Transmit Co.Ltd
deny; # China Unicom Heibei Province Network
deny; # China Unicom Heibei Province Network
deny; # China Mobile Communications Corporation
deny; # CHINANET Fujian province network
deny; # CHINANET jiangsu province network
deny; # China Unicom Henan province network
deny; # China Unicom Shandong Province Network
deny; # China Unicom Shandong province network
deny; # CHINANET Guangdong province network
deny; # China Unicom Jiangsu province network
deny; # China Unicom Fujian Province Network
deny; # China Unicom Fujian Province Network
deny; # Beijing Baidu Netcom Science and Technology Co., Ltd.
deny; # Chinanet Jiangsu Province Network
deny; # CHINANET Sichuan province network
deny; # CHINANET Chongqing Province Network
deny; # CHINANET Zhejiang province network
deny; # China Mobile Communications Corporation
deny; #
deny; # Beijing Blue Ocean information technology co.LTD
deny; # CHINANET fujian province network
deny; # China Unicom Shandong province network
deny; # CHINANET Shanghai province network
deny; # China Mobile Communications Corporation
deny; # CHINANET Guangdong province network
deny; # CHINANET Fujian province network
deny; # CHINANET Hunan province network
deny; # China Unicom Shandong province network
deny; # China Unicom SiChuan province network
deny; # CHINANET jiangsu province network
deny; # CHINANET fujian province network
deny; # China Unicom Henan province network
deny; # CHINANET Chongqing province network
deny; # CHINANET Hunan province network
deny; # China Mobile Communications Corporation
deny; # CHINANET Guangxi province network

PageRank Lives: OpenPageRank by Domcop

In the early days of Google, PageRank was a very important piece of information about a website. It let you know the general authority level of a site and how well it would tend to rank against similar content on another site. The PageRank toolbar, released in 2000, became an important tool in the SEO world.

Over time, Google de-emphasized PageRank, partly because people were gaming the system and partly because they switched to emphasizing other ranking factors. They eventually stopped updating public PageRank data, and in 2016 they finally shut off the toolbar.

There have been other metrics created, such as Domain Authority by Moz, but nothing has quite been a proper replacement.

Now that the PageRank patent has expired, companies are free to implement their own versions. Domcop has done just that, using the Common Crawl data to calculate PageRank for the top 10 million domains on the web.
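For anyone curious what “implementing PageRank” actually means, the core of the expired patent is a simple iterative calculation. Here’s a toy version over an in-memory link graph (Domcop’s pipeline over Common Crawl is the same idea at vastly larger scale, presumably with many refinements):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank. `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with uniform rank
    for _ in range(iterations):
        new = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: distribute its rank evenly to everyone.
                for target in pages:
                    new[target] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank
```

Run it on {"a": ["b", "c"], "b": ["c"], "c": ["a"]} and “c” ends up with the highest rank, since it collects links from both of the other pages.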

You can use it here:

As of the time of this post, has an OpenPageRank of 3.40.