In which, I play seasonally-appropriate video games.

I don’t normally treat “spooky season” as a reason to do anything other than eat my own weight in pumpkin spice-flavored snacks, so it’s not common for me to bust out the horror games this time of year.

That said, there is a new Silent Hill game, and I preordered it, and I was NOT going to do the thing where you preorder games and play the intro when it comes out and then put it aside “for later” and it’s been marked down to 20 bucks or less multiple times by the time you finally see any of the game past the tutorial.

Not that I’ve done this.  But, uh, a friend has.  On several occasions.  This friend is so silly, and so bad with money! Let’s take a moment to laugh at my friend.

So, now that we’ve established that: while I didn’t technically pick it out of the Steam library BECAUSE it was a horror game and this is horror season, it’s certainly thematic.

Also it’s a really good experience.  I know there was a lot of drama around having a Silent Hill game that is not set in, well, Silent Hill, but even if this just had the branding slapped on it to move more copies it still delivers everything I want out of a Silent Hill game.

I did have the WEIRDEST damn sensation several times playing it, though.  See, for a good chunk of the game you are wandering through a very rural 1960s Japanese town, and if you ignore the faceless flesh monstrosities and the rot that is slowly consuming everything it’s actually a very pleasant little town.  Like, I would 100% visit this place.  Take pictures home to show the friends and family.  That would be a slide show for the ages.  “Here’s the little corner store that is still selling 100% hand-made dagashi, never mind the mysterious fleshy lumps, and here’s a rice field that has gone through 12 generations of farmers, and if you look in the background you’ll see that there is a scarecrow wearing a school uniform, holding a knife and covered in blood…”

Overall, a bit of a disturbing experience but still recommended.

The ending recommends a second playthrough now that you know The Big Twist, and I think it’s justified… but I have another thing taking my time before I get back to that.

Specifically, this:

I don’t know why Shift Up decided to do a Nikke/Resident Evil crossover.  I’m not entirely sure it makes sense.  Nikke is pretty much about robot butts, and Resident Evil is not normally considered a sexy game.

HOWEVER.

Previously, I have gone into their crossover events not knowing much about the other characters in the event, and the storylines have fallen a little flat as a result.  But, I looked up the Resident Evil characters featured in this and discovered that they are all from the first two RE games.  So catching up on the series enough to know a little about the characters seemed doable.

It’s not that I’ve never played a Resident Evil game, mind you!  In fact, I’ve owned the first Resident Evil for several platforms, going all the way back to the “Director’s Cut DualShock Version” for the PS1.  It’s just that, well, I’ve never really gotten very far in the game.

OK, cards on the table, I’ve never made it to the first item box.  Mostly I’ve just wandered around the first couple of rooms of the mansion, completely lost because of the camera angles and fighting the controls, and I get eaten by birds or something and rage quit.

But it turns out that the PC remaster of the GameCube remake of the original game features a “very easy” mode which effectively neuters all of the enemies and throws ammo and save ribbons at you, meaning that you are not so much fighting the zombies as much as you are fighting the camera and the controls and the godawful inventory system and that stupid one way door near the east wing item box and the miserable miserable experience of needing to backtrack to the last item box because you didn’t bring the correct quest item to rub against the correct shiny spot on the screen and…

…ahem.

Let me just say, if you made it through this game at its intended difficulty level, you have my respect.  That goes double if you made it through the game in any of the American releases, where they bumped up the difficulty to screw over the rental market.  There is a good game here, but man it demands a level of patience that I absolutely did not have in the late 1990s and probably could not command even nearly 30 years later.

Taking a look through the achievements for the game, it looks like there are achievements for completing the game in three hours or less, and for doing it without saving, and for limiting yourself to melee combat, and…

…and I will never be seeing any of these achievements.  Ever.

Anyway, next up is Resident Evil 2 Remake.  Technically, I’ve played a little of this because it was free on Amazon Luna for a month and I put a few hours into it.  I don’t know if I can download my Luna save and apply it to the Steam version, and I’d be completely lost even if I did, so I’ll be starting from scratch.  The game actually lends itself pretty well to that, though, with its two-protagonist system.  I can pick Claire this time, instead of Leon, and it should be at least a slightly different experience – and Claire is in the Nikke crossover, so that works out as well.

 

Posted in PC Gaming, videogames

Why do I do this to myself? Adventures in Apple Mail.

Sometimes, I manage to shoot myself in the foot badly enough that I feel compelled to document it, mostly out of the hope that someone, someday, will find themselves in the same situation and may be sent here by a search engine.

Today is one of those times.

Let’s start with the background.  About 20 years ago, I bought a Mac Mini and made an iTunes account.  For this, I used a gmail address.

A few years later, I subscribed to Apple’s MobileMe service, which gave me an @me.com mail address.  I haven’t been subscribed to that for a long time.  I kinda wonder if it still exists out there?  I know some people have legacy me.com addresses.  But that’s not important.

What IS important is that Apple would not let me change the email address for my iTunes account to the @me.com address.  I’ve never been entirely clear why, but the gist of it was that you could CREATE an iTunes account with an @me.com address, and you could change the email address of an existing iTunes account, but changing an existing account over to the me.com domain, specifically, was off the table.

Eventually, I got an @icloud.com address, and have moved the majority of my accounts over to that address.  Part of this is because I’m not a huge fan of having my email on google servers, and part of it is because my iCloud email address is considerably shorter.

But I still couldn’t move my iTunes account over to icloud.com.  By this time, it wasn’t an iTunes account any more and had become an Apple ID.  Then an Apple Account.  They’ve renamed it a couple of times.

Then, quite recently, Apple did something on their side and made this possible.  I had a moderate amount of trepidation about the whole thing, but finally went ahead with it last month, to great effect.  As of now, the gmail address is still there but gets a tiny fraction of my daily email and I log in to all Apple services with the iCloud account.

Great success!

However.

See, I am one of those old people who still uses a mail client instead of web mail.  Specifically, I use mail.app since it comes with every Mac and is decent enough.  I’ve occasionally used Outlook, but it’s never seemed to be better enough to be worth having two mail clients installed.

AND, because I get a lot of mail that is not important enough to see in my inbox but not annoying enough to be really called spam, I have a bunch of mail rules in mail.app to send things to folders.  I set these up ages ago and they are a great help in keeping my notification count down.

MOSTLY down.  Look, 152 unread emails is nothing compared to what some of you animals have on your phone RIGHT NOW.

All of this is coming to a point, I promise you.

After I changed my Apple Account to use the iCloud address, these rules broke.  I didn’t know the two things were connected, at first.  It was just that I started seeing a lot of not-quite-spam in my inbox instead of in folders.  Not enough to be annoying…

…until the most recent Prime Day sale, where I bought a bunch of junk that I probably don’t need, and suddenly realized that one specific class of emails I was NOT seeing was any emails from Amazon.  They weren’t showing up in my inbox, and weren’t in the folder labeled Amazon.

So obviously, something had changed with my rules that was responsible for both the unwanted mails in my inbox AND the missing emails from Amazon.  My next thought was to go into the email settings on my phone to fix the rules…

…this was not what I had expected to see.  Suddenly I had no email rules?  But I hadn’t deleted them.  And, to make matters more confusing, I had recently (a) bought a new phone and (b) upgraded all of the Apple devices in the house to iOS 26 / iPadOS 26 / tvOS 26 / macOS 26 / watchOS 26 / did they make any more OS 26s? What do HomePods run?  Whatever those run, if there’s a ’26 version they’re on it now.

So, not remembering that I had recently changed my Apple Account away from gmail, I spent quite a while falling down rabbit holes trying to figure out whether the OS update or the new device had broken things for me, where the heck my Amazon emails were, and – rather more critically – whether this was affecting anything else.

Eventually, I found an enormously-useful thread on Apple’s support forums, and things started falling into place.

See, Mac mail.app doesn’t USE iCloud’s mail rules.  It uses its own, and they’re stored in a plist file at ~/Library/Mail/V10/MailData/SyncedRules.plist.  And thankfully this is a human-readable plist file, because I was able to read it and see that all of the email rules were trying to send emails to an IMAP URI of imap://<my gmail address>/<mailbox name>.
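If you want to peek at yours, the file is easy to inspect from a terminal.  Something like this works, with the caveat that the V10 directory name varies between macOS versions:

# print the synced rules in human-readable form and pull out the delivery targets
plutil -p ~/Library/Mail/V10/MailData/SyncedRules.plist | grep -i "imap://"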

And that led me to go looking in my gmail account, where I found that there was an Amazon folder – but the Amazon emails weren’t in it.  They were, instead, in gmail’s archived email.

I recreated all of my email rules, confirmed that the URIs in SyncedRules.plist now looked like imap://<my iCloud address>/<mailbox name>, and dragged all of the emails from the gmail archive back into my iCloud inbox.  Where they were promptly sorted into my iCloud’s Amazon folder.

So what happened, as near as I can figure, is that new email was coming in that met the criteria for “move this to a mailbox”.  Most of the time, this was erroring out because there wasn’t, for example, a “Nextdoor” folder on my gmail account.  Those emails landed in my inbox.

The Amazon emails were being processed because there WAS a folder for them in gmail… but they were being dropped in there with a To: address of my iCloud email, and gmail was moving them to the archive because they weren’t addressed to a gmail account.  The end result was that they appeared to disappear.

Also, quite surprisingly, mail.app can move email BETWEEN imap servers.  I did not know this was possible.  It looks like it does it with a copy-and-delete, but I’m not entirely sure.  It fails when trying to move email back from gmail, though – rather than having only a single copy in iCloud, I got a duplicate and there is still a copy in gmail.

Oh, side note.  iCloud’s mail rules are very rudimentary, and I wouldn’t recommend using them at all, but definitely DO NOT use them if you are also using mail.app rules.  There is apparently a high chance of a Great Disagreement over which takes precedence.

Posted in iOS, mac, random

One more homelab post. It works, and I’m not sure why.

There is a particular sort of despair that hits you when you have finally solved a problem, you are looking at all of the notes you took during the process and all of the sources you reviewed looking for solutions… and you come to the conclusion that you have no idea which of the random things you threw at the problem was the determining factor in making it finally work.

That’s where I’m at today.

The short version is that, after I installed Bazzite onto an AMD-based mini PC I had and determined that it worked surprisingly well, I moved on to step two of the project – virtualizing Bazzite so I could run games off my home server without having a computer devoted solely to the task.

Spoiler: It worked!  Eventually.

I was following this as my template, which was close to but not quite a tutorial.

I even wound up using the same GPU he recommended, the AMD W5500.  It’s basically a single slot, lower-power version of the RX580 – not the world’s most up-to-date GPU, but more than enough for my purposes.  It was also only $65 on eBay.

(This is not mine.  It’s a random picture from the internet because I didn’t think to take any photos before putting the side panels on)

The biggest challenge of the whole process was trying to understand PCIe passthrough.  In really basic terms, it’s telling the computer hosting the virtual machines that, while the computer has a GPU plugged into it, it shouldn’t use it – rather, it should let a virtual machine take it over.  That way, Bazzite running in the VM has a real GPU and can actually play 3D games.

I had to set up PCIe passthrough when I initially created my Proxmox server, because I wanted to pass through the SATA controllers to the instance of Unraid I was running and I also needed to pass through a USB port so Unraid would see the USB drive that it uses as its license.  That was actually really easy!  I didn’t have to do any special configuration other than selecting the SATA controller in the system settings for the Unraid VM.

GPU Passthrough was not so forgiving, for reasons that eventually made sense.  See, Proxmox is just Linux.  And, like any operating system, it wants display hardware to put up a user interface, even if the UI in question is nothing more than a black screen with a login prompt.

So, it grabs the GPU and doesn’t want to release it – and, while it’s in this state, nothing else can take it over.  The real problem, then, becomes convincing Proxmox to not take ownership of the GPU.

I am not sure how I eventually accomplished this.  There’s a whole set of steps where you blacklist drivers and unload kernel modules and… in the end it was working, but I could not tell you why.  It may have even come down to needing a monitor plugged into the computer’s onboard video.  (Normally I would run this server in headless mode, with no monitor)  I’m not certain and right now I don’t want to try to undo anything to see when it breaks.
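For anyone retracing those steps, the usual recipe looks roughly like the following.  This is a sketch of the standard Proxmox GPU passthrough setup, not necessarily the exact combination that did it for me, and the PCI IDs are examples you would swap out for whatever lspci reports on your own machine:

# find the GPU (and its HDMI audio function) and note the [vendor:device] IDs
lspci -nn | grep -iE "vga|audio"

# turn on the IOMMU at boot by editing /etc/default/grub, then rebuild the config:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
update-grub

# make sure the vfio modules load at boot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# hand the card to vfio-pci instead of the host driver (example IDs, use your own)
echo "options vfio-pci ids=1002:7341,1002:ab38" > /etc/modprobe.d/vfio.conf
echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-gpu.conf

# bake it all into the initramfs and reboot
update-initramfs -u -k all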

I do get one warning when launching the VM.

I don’t like this warning.  Mostly because I don’t know what it’s telling me.  But it doesn’t seem to affect anything.

Anyway, adding a GPU to my main server did increase the power it sucks down a bit.  It idles at like 80 watts now.  Thanks to living in the Pacific Northwest, that’s only about 3 cents a day in energy costs and I think I can deal with that.

Posted in homelab, videogames

Practical Applications of Withered Technology

Another day, another adventure / experiment in self hosting something.

FreshRSS interface, installed as a web app in macOS

I’ve recently gotten a new job, so I’m faced with the bleak prospect of transitioning from a happy unemployed life into one where I need to at least pretend to be beholden to the whims of others for 8 hours a day.  So I’m about to lose a lot of free time and should probably look into ways to use what I will be left with in more intelligent ways.

Starting with: I realized that I spend a lot of time opening various news sites to see if there are new articles that I’m interested in, and then I wind up scrolling through old articles if there aren’t new ones and… well, this isn’t a great use of time.

Enter RSS, possibly the oldest web technology (it dates back to 1999!) that I’ve never really interacted with.  I was vaguely familiar with it as a way of aggregating news feeds, and I have one friend who has sung its praises in the past, so naturally I went to him for advice.

He recommended something called InoReader, and that led me to my first discovery regarding RSS.  I had assumed that you just installed an RSS client, like you would install a web browser or FTP client, and it went out and checked for articles on your news feeds and served them up to you in a handy list.

This turns out not to be the case.  Rather, the RSS readers I found all assume that you are using a third party service that does the checking and aggregation and then you log into that third party service to read them, and that wasn’t quite what I was looking for.

A few months ago, I probably would have kinda fumed about this and either signed up for one of these services or – more likely – have given up on the whole idea.  That was before I got into the habit of wondering whether there was a way to do it with my own server infrastructure, and less than a minute after having this thought I was looking at the “Apps” tab in Unraid, which is where you can find all of the pre-built docker containers.

A few minutes after that, I had downloaded FreshRSS, done some very minimal configuration (selected a different port since there was a port conflict with another container) and was happily adding feeds… sorta.  Not every site has a handy RSS button – in fact, they seem pretty uncommon, probably for reasons I’ll get into in a minute.
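Outside of Unraid’s template system, the same setup boils down to a single container.  A rough equivalent using the official image, with the host port remapped to dodge the conflict (the port number and the appdata paths here are just examples):

# FreshRSS listens on port 80 inside the container; remap it on the host side
docker run -d --name freshrss \
  -p 8082:80 \
  -e TZ=America/Los_Angeles \
  -v /mnt/user/appdata/freshrss/data:/var/www/FreshRSS/data \
  -v /mnt/user/appdata/freshrss/extensions:/var/www/FreshRSS/extensions \
  freshrss/freshrss:latest

The second volume is where third-party extensions end up, which comes into play a little further down.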

On the other hand, a surprising number of sites allow you to just append a /feed to the end of their main URL to get the RSS feed.  So even if there’s no handy button for it, most sites DO support RSS, if only to a point – and that point is usually how much actual content they are willing to present via RSS.  Many sites only give you a tiny article abstract or the first few lines of an article, and some just give you a list of article titles.  This makes sense, though, since naturally they aren’t just putting news up as a public service.  They want you to read their articles on their sites so you see all of the ads surrounding them, and so they get all sorts of exciting click-through metrics and can drop tracking cookies on you and so on and so forth.

Which is fine!  It’s good to want things.  Personally, I want to read all of their content for free and never see an ad.  There’s obviously a conflict here.

I haven’t found a total solution for this, but I did find a couple of plugins for FreshRSS that will do things like load webcomics (“Comics in feed”) or try to load complete articles rather than abstracts (“Af_Readability”). Honestly, they haven’t been super effective, but they have improved the experience a bit.  And even for sites where I’m stuck getting tiny excerpts instead of full articles, it’s made it so I can decide whether it’s really important enough to actually click through to the site or whether the info in the excerpt is enough.

This has saved me a LOT of time, and it’s helping me clean up my browser bookmarks as well.

Also, since RSS isn’t giving me the “comments” sections on these articles, I am avoiding getting sucked into rage-induced spirals where I keep reading the absolute dumbest takes, one after another.  Big win!

Now I just need something like this for Reddit, and I need to find a way to keep myself from clicking on the “For You” tab on X.  Both of those would give me a lot of time back.

Update:  It turns out you can add “.rss” to the end of virtually any Reddit URL to create a feed from it.  So, http://www.reddit.com/r/macgaming.rss gives me an RSS feed for that particular subreddit.  It doesn’t make for a great reading experience, but it lets me click through to the main web site only if a post looks interesting.

 

Posted in homelab

Cracking open the EDID black box.

OK, so.  History time!

I’m typing this while in front of a high-resolution monitor.  It displays a picture that is 5120 pixels wide and 2880 pixels high, and it can update the screen up to 60 times per second.  I don’t really need to know ANY of this, because I plug it into my computer and it “just works”.

This wasn’t always the case, though.

Way back in the day, when dinosaurs roamed the earth, getting your computer to recognize specific monitor resolutions and refresh rates used to be a bit of a pain.  There were a few common ones that MOST monitors would support, so connecting your display to your computer would generally give you a picture, but you wouldn’t always get the full capabilities of your display.  Like, maybe you’d get a picture at 640×480 or 800×600 but Windows wouldn’t recognize that your monitor was capable of 1280×960.

(For simplicity and sanity’s sake I am going to completely ignore fixed-resolution displays like CGA and just talk about multisync monitors here.)

Anyway, the answer for Windows was usually to install a driver for your monitor.  These weren’t really drivers in the sense of code, but more a small text file that explained the capabilities of the display in a way that let Windows adjust.  For Linux it was a bit more complicated and involved editing text files in a way that could – in theory – actually damage the monitor if you got them wrong.  I never had this happen, and it was probably mostly scaremongering, but it always felt a little risky tinkering with it.

Anyway, this was abject nonsense.  In the mid 90s, display manufacturers recognized that it was nonsense and we got the EDID standard, which was a way for displays to communicate their capabilities to the computer so the computer and monitor could hash things out with minimal input from the lump of semi-sentient meat that owned both of them.

I’ve never really understood EDID.  It’s been a sort of void, or a black box if you will, that I’ve never been able to see inside of.  I’ve just had to accept that it’s there and that it probably works.

Enter my recent experiments with Bazzite and with game streaming, which have honestly worked pretty well.  I have two computers that are usable for streaming now, actually – both the original mini PC running Bazzite and a far beefier and far more power-hungry Windows gaming PC.

Neither of these machines has a display connected.  Rather, they have little dummy plugs that pretend that a monitor is present, and they do this by presenting EDID data to the computer that describes a monitor.  It’s a handy bit of fiction.

There’s just been one mild frustration with the mini PC.  See, it doesn’t have a GPU.  It has an AMD APU that delivers stunning CPU numbers but is somewhere around an Xbox 360 or PS3 in terms of what it can do visually.  That’s not necessarily BAD, because it plays a ton of really good games, but the issue is that the monitor that the dummy plug pretends is connected is one that can do 1920 x 1080 at 60 hz, and this is a pretty rough resolution for the system to render at.  Things are much better when I play games at or around 720p, and most games can be configured for that… but there are some games that see the potential resolution of the monitor and really want to use the higher resolution.  It was MUCH better when I had it hooked up to an older Sony flatscreen that only went up to 1360×768.  Games wouldn’t try to go above that and the result was a much more pleasant experience.

So, I went looking for a dummy plug that would present itself to the computer as a lower resolution.  My ideal solution would have had some sort of way to physically manipulate this, maybe via DIP switches.

I couldn’t find anything of the sort, but I stumbled across this amazing blog post.  It was put up like six weeks ago and honestly the timing could not have been better:

Modifying an HDMI dummy plug’s EDID using a Raspberry Pi

I happen to have a Raspberry Pi 3 just hanging around!  It was purchased several years ago for a project that failed to materialize, but thankfully I’ve never completely given up on the thought of someday going back to that project.  It no longer had an OS installed and in fact didn’t even have a microSD card in it, and there was a minor hiccup when I realized I didn’t have any spare microSD cards to PUT an OS on… but I eventually stole one out of a digital audio recorder I haven’t been using and got it booted.

Side note: buy some microSD cards so I have them around for the next time.

Once I got it up and running, Doug’s tutorial was dead simple to follow.  I kept mistyping a variable name, which gave me no end of grief, but things got much better once I realized that I was being held back by a typo, and I was eventually able to clone the EDID data from the TV onto the new dummy plug.  I have the EDID data from both backed up for future use, too – I may someday want to restore the original configuration, after all.

Anyway, plugging the reprogrammed dummy plug into the Bazzite box gave the hoped-for results:

Games now automatically run at the lower resolution rather than trying to push for 1080p, the APU in the mini PC can actually keep up with it, and the overall effect is so much more enjoyable.

His page also introduced me to a site that lets me paste in EDID data and explains it all in a very friendly format.  So, like now I know how to read the EDID data from a monitor into a binary file and then look at what the monitor actually has to say about itself.  It’s probably not something I will OFTEN need to do, but it’s just very satisfying to see behind the curtains.
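On a Linux box, the reading part is a one-liner plus a decode step.  The connector name under /sys/class/drm will differ from machine to machine:

# dump the EDID of whatever is on the first HDMI connector, then decode it
# (edid-decode is its own small package on most distros)
cat /sys/class/drm/card0-HDMI-A-1/edid > my-display.bin
edid-decode my-display.bin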

Sadly the Raspberry Pi doesn’t have a DisplayPort connector.  I have another dummy plug that uses DisplayPort instead of HDMI and I would like to get into its head as well. 🙂


Posted in linux gaming

In which, I am annoyed by copy protection

I’ve mentioned a few times recently that my wife has been coming up with reasons for me to create scripts.  Mostly, these are because she wants to have local copies of fan-translated Chinese novels.

Personally, I don’t have a lot of interest in the content… but it’s great motivation.  I want her to be happy, I get to learn new stuff, I justify the existence of the new server that materialized in my server closet a couple of months ago, it all works out.

The last tool I made is one that crawls a web site and generates a list of links from the site.  She can import this list into a program on her iPad and it downloads all of the linked pages as local copies.  I’m not certain how they’re stored and I have no idea how to back them up or view them outside of this specific program, but I’m not really invested and as long as she’s happy then I’m happy.

Not long after I created this tool, she told me she was having trouble downloading links from a specific fan translator’s site.  I took a quick look at the site and could immediately understand why – it was purposefully obtuse.  There was virtually no HTML – rather, everything was being rendered via Javascript.  They REALLY don’t want their content to be read anywhere but their approved page.

I looked at this, and it looked like, well, work.  Which, whenever possible, I try to avoid.

So I told her that, well, I was very sorry but there was not going to be a way to get the content from this page.

And we left it at that.

Then, a few days later, I was cleaning up some chaff in my development folder and noticed the temporary files that had been created when I was trying to parse the offending page.

And, for no real reason, I opened one up in a text editor.  I just wanted to get a second look at what it was doing.

And it made me angry.  

Like, fan translations are inherently illegal.  You can’t “own” them.  It’s polite to acknowledge that translating something from one language into another is an awful lot of effort, and to appreciate the fans that do this, but it doesn’t convey any sort of ownership of the original work.

The pages on this site were incredibly wasteful, and all of it done in the name of trying to prevent anyone from downloading the text.  We’re talking, a single page with 20k of text on it was a 335k file, and the specific translated novel that she wanted to read from this site was over 80 of these files.  It was just a crazy amount of wasted space and excess processing, and the more I looked at it the more I wanted to break it just for spite.

So, I did.

The first thing I considered was trying to use curl or wget to download the site’s pages, but this just gave me the raw data and wasn’t very helpful.

Enter Lynx.

Lynx, for people who may not have ever had to browse the web over a telnet connection – I imagine that is most people – is a text-only web browser designed for non-graphical connections.  It has a couple of interesting ways in which it can be used, as well – you can use it to get a list of all links on a page, and you can tell it to download the contents of a page and save it as a text file.

It’s also open-source!  So if, for some reason, a site were to try to detect it by user agent and block it, it’s easy to tell it to pretend that it’s Chrome or something.
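The disguise is a single switch, something along these lines (the user-agent string is just whatever browser you feel like impersonating):

# list a page's links while claiming to be desktop Chrome instead of Lynx
lynx -useragent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" \
     -dump -listonly -nonumbers "https://example.com/"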

Once I decided that it would make a good tool for the task, about an hour of shell scripting gave me a script that blows the site’s silly protection mechanisms out of the water and gives me a large text file containing the entire text of the fan translation.

And I feel very satisfied.

I will share the script here, though I warn you in advance that the total guarantee I give you is that it “works on my machine!” and if it does not work for you or does something horrible then I do not accept any responsibility.

#!/bin/bash
# getbody.sh - gets the content of a web page and its subpages and outputs it all into a text file.
# Only crawls one level deep.
# Takes a URL as input. If a second command line parameter is specified, it only follows links that include that parameter as a substring.
# If the second command line parameter is the word "links" then it does not crawl, just prints the links to stdout and exits.

# Check for a command line parameter. No validation, we're sending this to lynx as-is.
if [ -z "$1" ]; then
    echo "Usage: getbody.sh <url> [substring|links]"
    exit 1
fi

url="$1"

# Set up a unique filename with a time stamp. I should probably do this for my temp files as well, but lazy.
filename="web_scrape_$(date +"%Y%m%d%H%M%S").txt"

echo "Processing site: $url"

if [ -z "$2" ]; then
    lynx "$url" -dump -hiddenlinks=ignore -listonly -nonumbers > raw_list_of_links.xyzzy
elif [ "$2" = "links" ]; then
    lynx "$url" -dump -hiddenlinks=ignore -listonly -nonumbers
    exit 0
else
    lynx "$url" -dump -hiddenlinks=ignore -listonly -nonumbers | grep "$2" > raw_list_of_links.xyzzy
fi

# Drop anything with a ? in it because I probably don't want whatever it returns.
grep -v "?" raw_list_of_links.xyzzy > list_of_links.xyzzy

touch "$filename"
while IFS= read -r url; do
    echo "$url"
    lynx "$url" -dump -nolist >> "$filename"
done < list_of_links.xyzzy

rm list_of_links.xyzzy raw_list_of_links.xyzzy

 

Posted in shell scripts

Homelab updates: Bazzite is pretty cool!

A few days ago, I mentioned that one of the services I was adding to my home lab was game streaming, with me repurposing a 5700U-based micro PC as a Bazzite box.

Side note: Serious props to the Bazzite team for this video explaining how to install onto a Windows box while maintaining a small Windows partition for dual booting:

I had some trouble resizing the Windows disk, because there were all sorts of immovable files that wanted to prevent me from shrinking the main partition, but eventually I got over that hump and managed to get the Windows partition down to 120GB, leaving the rest of a 500GB SSD open for my new Bazzite install.

Anyway.  The 5700U APU doesn’t have a particularly beefy GPU, but it’s plenty for running… well, mostly games from the PS3 / Xbox 360 era.  I played some Bioshock on it, and some Arkham City, and both were flawless.  I then moved to streaming those to another computer and they were STILL flawless.  Bazzite comes with the “Sunshine” half of the “Sunshine/Moonlight” game streaming software installed, and configuring it was like 2 minutes worth of going through menus and then needing to look up the port to get into the Web UI to authorize the Moonlight client running on my Mac.

For the record, it’s https://localhost:47990 – and, yes, the https is significant.  I’m not sure why it mandates an encrypted connection, especially since it uses a self-signed certificate that makes web browsers freak out, but you gotta have the S in there.

With Sunshine in place, I disconnected the mini PC from its display and moved it into my closet, adding a simple HDMI dummy plug so it would think it still had a monitor.  This is important, but also became a source of problems.

The first thing I noticed was that, while I had been using an old TV as my test monitor, which had a native resolution of 1366×768, the HDMI dummy plug was detected as a 1920×1080 monitor with a 120hz refresh rate – and this was a problem because the 5700U REALLY can’t run a modern – or even semi-modern – game at 1080p.  Arkham City, for example, was barely able to break 30fps and something like Atelier Ryza was a slide show.

The obvious answer was to configure games to run at 720, sacrificing visual fidelity for performance, but this had its own problem:

720P Ryza Fullscreen

…games would get shoved up into the top left corner of the screen, with the other three quadrants full of color banding as shown.  Using Steam streaming rather than Moonlight wasn’t AS bad, but the game was still only in one corner of the screen.  The other three quadrants were just black.

Doing some digging into the graphics settings for Ryza gave me a solution, though:

Ryza graphics settings

Configuring the game to run in Borderless mode got me a full-screen 720P image, and we were back on track:

720P Ryza borderless

That was one problem solved, and then I hit the next one:

Shantae is a bit of a troublemaker

I’d used Shantae: 1/2 Genie Hero as one of my test cases for game streaming, because it’s a platform game and I find those to be very unforgiving when it comes to latency.  Believe it or not, I was able to play it just fine over remote play… though admittedly with a wired network.  I wouldn’t try it on wifi.

It even runs smoothly on the little Ryzen 5700U PC at 1080P!  It is not a very demanding game, and it’s one of my favorite platformers.  You do feel very squishy until you buy a few upgrades for your character, but that’s a pretty minor complaint.

It’s also, for some reason, a game where the game speed is tied to the refresh rate of your primary monitor.  Remember when I said that my dummy HDMI plug identified itself as a 1920×1080 monitor with a 120hz refresh rate?  Well, Shantae really wants to run at 60fps and if you give it a 120hz monitor it runs at double speed.

Which is, to put it simply, Hard Mode.

I didn’t realize this was what was happening at first, of course.  No, first I spent several hours troubleshooting the streaming client.  It wasn’t until I found a thread on the GoG forums talking about the issue that I realized that it was a problem with this specific game – and despite my best efforts to tell Bazzite to run at 60hz, the game saw that 120hz monitor and ran with it.

Eventually, the solution was to buy a different dummy monitor dongle, one that advertises itself to the system as a 1080p 60Hz monitor.  Fortunately these things are like five bucks.

Long-term, I plan to move Bazzite to a VM hosted on my Proxmox box, and I’ll give it a real GPU to use at that time.  That should considerably improve the visuals by opening up the option of 1080P gaming.

The only quirk I’m still dealing with is that I can’t see the Steam Overlay while I’m in a game.  Like, if I press the Xbox button on my controller I hear the SOUND of the Steam Overlay opening, and I hear navigation sounds from the Steam Overlay when I move the thumbstick, but the overlay itself never actually appears, so I have no idea what I’m highlighting.

Still… progress!  Very solid progress, too.

Posted in homelab, linux gaming, videogames

I am the alpha nerd

I do not make this claim lightly.

OK, I make it pretty lightly.  “Alpha Nerd” is a pretty high bar to hit, after all, and should really be reserved for people who hand roll binary patches for their custom linux kernels.

But I’m feeling pretty good about today’s project, so I will brag a bit.

I am fortunate enough to be married to a woman with very good taste in media.  She consumes an ungodly amount of manga and light novels from all over Asia, but in particular has been REALLY into Chinese stuff recently.

There’s been just one problem.  While people who translate Japanese content tend to just put it up for download, the culture is completely different when it comes to translations of Chinese content.  It’s much more common to see them published as individual blog posts, and they often disappear from the web.

Naturally, she wanted local copies of her favorites so they couldn’t just poof, and I thought I’d solved this last year when I found a scraper program that would download fan translations as ePubs.

I had not, to be clear, solved it.  There were still a lot of sites that it didn’t address.

She came up with a solution on her own for these – a program called Goodlinks which allows you to archive local copies of web pages.  It’s not very automated, though, and some of these novels are made up of hundreds of individual web pages.  So saving a single novel locally was a process of opening each of these pages, one at a time, and saving them.

A few months ago, my answer would have been “wow, that sounds rough” because I did not have a solution.

Today, I had a solution.

Well, Copilot helped.  Really, it did almost all of the work to start.

The first thing I asked Copilot for was a python script that could be passed a URL and that would return a list of all the links on the referenced page.  This was easy enough, but you couldn’t import the resulting list into Goodlinks.  Goodlinks WOULD take a bookmarks.html file, though, so I did some hacking at an exported bookmarks.html file until I figured out what format it wanted its URLs in.

For the record, it wants one link per line in the file – and for some reason, every one of them needs to be prefaced with <DT>.
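Put another way, once you have a plain list of URLs, building an importable file is only a few lines of shell.  This is a sketch rather than the exact script (list_of_links.txt stands in for wherever your URLs came from); the anchor markup is the standard Netscape-bookmark style, and the <DT> prefix is the part that actually seems to matter:

# wrap each URL from list_of_links.txt in bookmark markup, one <DT> entry per line
while IFS= read -r link; do
    printf '<DT><A HREF="%s">%s</A>\n' "$link" "$link"
done < list_of_links.txt > bookmarks.html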

All of this was 100% Copilot, with a little “And could it do this instead?” from my side.

It didn’t take long.  Like, 20-30 minutes from “I can do this better” to “Here’s your completed python script!”

Thing is, though, handing someone a python script is not super helpful.  And my wife doesn’t really like to turn on her computer.  So I needed a solution that could work from a phone.

After considering a few options, I decided that I would set up an email address that would accept emails including a URL in the body of the email, throw that URL at the python script that Copilot had given me, and email the resultant html file back to the email address that the URL had come from.  And, because I enjoy self harm, I decided to do this on one of my Linux VMs.

My first assumption was that it would be easy to have an email client on Linux watch for emails of a specific format and send the emails to an external script for processing.  This turned out to be my first, but not my worst, assumption because… well, I guess if you’re a Linux guy you are expected to use webmail for stuff.  It took me a few clients before I stumbled onto Evolution, which lets you set up an email filter that will pipe the body of the email through an external command.
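The script on the receiving end of that pipe doesn’t need to be anything fancy.  A minimal sketch, assuming the filter hands the command the whole message (headers included) on stdin:

# Evolution pipes the message in on stdin; fish out the sender and the first URL
msg=$(cat)
sender=$(printf '%s\n' "$msg" | grep -m1 -i '^From:' | grep -oE '[[:alnum:]._%+-]+@[[:alnum:].-]+')
url=$(printf '%s\n' "$msg" | grep -oE 'https?://[^[:space:]">]+' | head -n 1)
echo "Would scrape $url and mail the result back to $sender"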

I was in business!  It turned out that it was actually really easy to take the output from Evolution and send it through a simple shell script to parse out the sender’s email address and the URL from the incoming email, and to put the URL through the Python script I’d generated earlier, and to…

…well, now I had an html file but I needed to mail it back.

I had THOUGHT that you could do this from Thunderbird, and it turns out that you can!

Almost.

Kinda.

Sorta.

Well.

…from a command line, you can tell Thunderbird to generate an email, and it will populate an email message, and then it will sit there and wait for you to manually click the Send button.  It won’t go that last step.  There are workarounds, of course, but they involve using desktop control software to simulate a mouse click on the pixel on the screen that should be over the Send button.

OK.  So how do I send an email from the command line?

Some googling led me to a program called sSMTP, and then I spent probably two hours just trying to get it to authenticate to a gmail account.  gmail has some pretty strong authentication requirements, though, and I could not figure out how to jump through all of the required hoops.

Thankfully, The ISP Formerly Known As Comcast isn’t quite as picky.  You need to go into your email settings and tell it to accept email from third party applications, but once you’ve done that you can use any email client.

Despite being able to authenticate, though, I still couldn’t get it to send an email.  This may be because, unbeknownst to me, I at some point managed to get my email flagged for spam by iCloud and so all of my test emails were being dumped into the ether.  It may also have been because sSMTP was deprecated!  We’ll never know which it was, because I eventually moved to a mail program named msmtp which is apparently the replacement for sSMTP.  That was the first point where I could actually send emails to myself from the command line, and where I thought I had really turned a corner…

…except I couldn’t attach a file.

Some further research, and I found that I would need to install an email client that understood how to MIME-encode an email and attach a file to it.  There’s one called “mutt” that will do this, and it will even use msmtp as its mail-sending program, so all of the work it took me to get msmtp configured wouldn’t be wasted.
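The glue turned out to be small: point mutt at msmtp with a single line in ~/.muttrc (set sendmail="/usr/bin/msmtp"), and then sending a file back is a one-liner.  The address and filename below are placeholders:

# MIME-encode bookmarks.html and send it; mutt hands the finished message to
# msmtp (via the sendmail setting in ~/.muttrc) for the actual delivery
mutt -s "Your bookmarks file" -a bookmarks.html -- "recipient@example.net" < /dev/null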

And I got that configured.  And I tried my script again.

And finally, after about six hours of staring at terminal windows and willing them to work, I got to a point where I could send myself an email, with a URL in the email, and Evolution would receive the email and send it off to my python script for parsing, and the script would download the referenced web page and make a bookmarks.html file out of it, and pass that to mutt, and mutt would bundle it up and send it to msmtp for mailing, and the bookmarks.html file would land in my inbox and could be easily imported into Goodlinks.

I mean, really it is just so obvious! I don’t know what took me so long.

I kinda don’t know whether I should actually brag about this or not.  I am fully prepared for someone to stumble across this and point me at a one-click solution for the whole dang thing.  Please be gentle if that someone is you.

 

For later reference, here are some of the sites I found to help me get through this whole nightmare:

https://arnaudr.io/2020/08/24/send-emails-from-your-terminal-with-msmtp/

https://linsnotes.com/posts/sending-email-from-raspberry-pi-using-msmtp-and-mutt/

https://hostpresto.com/tutorials/how-to-send-email-from-the-command-line-with-msmtp-and-mutt/

https://www.baeldung.com/linux/send-emails-from-terminal

 

Posted in homelab, shell scripts

Prepping for the next homelab project

So, my homelab experiment is running generally quite well.  I keep running into an annoying Proxmox bug that makes the ethernet controller hang up, and hopefully they will fix that soon, but I have a lot of self-hosted services now.  File sharing, media sharing, comics and manga servers, so on and so forth.

Naturally I can’t stop there.

Neither my wife nor I are what you’d call vehemently anti-Windows, but we’ve both been a little annoyed by the direction Windows 11 has been going.  Not the operating system itself – that’s fine – but I’m pretty tired of the constant notifications and hints to install “suggested applications” and in general it feels less like a desktop OS and more like a cheap carrier-subsidized smartphone.

And, to be clear, I’m running Windows 11 Professional.  Not home.  I should not be getting a “hey did you want to install Telegram?” in my start menu.

So, that was one big reason we replaced her desktop PC with an M4 Pro Mac Mini – and it’s been working out very well for her!  But she’d still like access to some of her games that don’t have Mac versions.

Hence, I am delving into the Linux dark arts.

A little while ago, I bought an Ayaneo AM-01 “Retro Mini PC” which is basically just an AMD APU in a box that is heavily reminiscent of a classic Macintosh.  I did this based solely on how it looked, with no real idea what I was going to use the thing for, and thus far I have not been able to justify its existence.

Thankfully, it’s perfect for this project.  My goal is to install Bazzite on it, set it up with Steam remote play, and have it as a network-accessible game streaming device.  It’s only a 5700U so heavy duty gaming is right out – but lightweight stuff will be fine.

Assuming it works – and my guess is that it will, Bazzite has a reputation for being solid and Proton is very mature these days – the step after THAT is to slap a GPU into my Proxmox server and pass it through to a virtual Bazzite box.  Which will mean that the Ayaneo box will again be left without anything to do but also that any system in the house can just boot up Steam and run games from it.

In theory.  Lotta “in theory” in this plan.

Updates as I have them.

 

Posted in homelab, linux gaming, videogames

More home server stuff. Reading is FUNdamental!

It’s been a good week for projects.  Maybe a good couple of weeks, actually – I kinda forgot when I started working on this particular one.

I mentioned last month that I’d set up a home server for the purpose of actually learning new things, and that’s been working pretty well.  It did eventually get moved from the card table into the server closet, though there was a bit of a misunderstanding on my part when it came to the question of whether or not it would fit in my rack.  I thought it would, but physics disagreed and physics always gets the final word in this house.

Oh man.  Now I want a rainbow yard sign that starts with “in this house, we follow the laws of thermodynamics” and just goes from there.

It can go next to our Litany Against Fear sign.

(Note: We do not actually have this sign.  I live in the Pacific Northwest and our neighbors have no sense of humor.)

But, I digress.

At any rate, this most recent project has been setting up a self-hosted server to host our collection of comic books, manga, and assorted eBooks, and I’ve discovered that there is no such thing as one server that does everything.

I mean, first things first – we have a couple thousand books purchased through the Kindle and Apple Books stores.  There’s no real way to integrate those into anything self-hosted.  But we also have a lot of stuff that has just kind of accumulated, whether that’s ePub-format comics from Humble Bundles or PDFs from DriveThruComics or, let’s be honest, a WHOLE lot of pirated comic books, most of which were acquired long enough ago that Demonoid was still under its original management.

Side note: Those books have been the reason I’ve been doing a lot of shell scripting recently, and abusing the heck out of generative AI.   There were about 30,000 files and many of them were duplicates and there was no standard for naming and some were rar files and some zips and it was a big old mess.  I’ve deleted about 10,000 and normalized about another 13,000 but I have a long way to go.  Being able to ask Copilot for a script that, say, descends into a directory tree and removes all instances of # from file names and pads all numbers to three digits with leading 0s has been a huge help.
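For the curious, the output is usually something along these lines.  This is a sketch of that exact ask (strip the #s, pad numbers to three digits), not a polished tool, so test it against a copy of the files first:

#!/bin/bash
# for every file under the current directory: remove '#' from the name and
# left-pad every run of digits to at least three digits
find . -type f | while IFS= read -r f; do
    dir=$(dirname "$f")
    base=$(basename "$f")
    new=$(printf '%s' "$base" | sed -E 's/#//g; s/[0-9]+/000&/g; s/0*([0-9]{3})/\1/g')
    if [ "$base" != "$new" ]; then
        mv -n "$f" "$dir/$new"
    fi
done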

Anyway, I’ve been experimenting with three different self-hosting solutions: Komga, Kavita and Suwayomi.

Komga

First, Komga.  Amazing for comics and manga – it handles the 2-page spreads in ePub files from Humble Bundle, which neither of the other two does well with.  It also doesn’t much care what format things are in and doesn’t have any particular mandates as far as naming conventions are concerned.  While I am still putting a considerable amount of time into normalizing filenames, if I just wanted to point a server at a bunch of unsorted folders to give me access to them via an internet browser, this would win.

I haven’t messed around with its metadata features at all.

Biggest downside:  Importing new media is SLOW.  Like, I assume it is doing some serious processing of each file but it seems to take an inordinate amount of time to do so.

Kavita

Next, Kavita.  I actually like Kavita a lot, but it falls down with the 2-page spreads in Humble Bundle ePubs and is very picky about how it wants its content organized on the disk.  It’s much faster than Komga when importing new content but that isn’t something you do much after you have your library set up.

It is, however, the absolute best for reading ePubs with words in them, as it has a ton of font size and line spacing options.

Suwayomi

Finally, Suwayomi.  Suwayomi is a single purpose app – it does manga, and that’s it.  This is a weird one, because it really doesn’t want to work with local copies of manga and doesn’t like stuff organized by volume rather than chapter.  It wants to read stuff off web sites, with optional downloads, and anything it does beyond that falls into “happy accidents”.

Really, it’s a piracy app that also works as a dang good aggregator for reading questionable manga translations.  My wife has absolutely taken to it, so big props to the Suwayomi team for making something that justifies all of my work.

If I could have one wishlist item for Suwayomi, it would be multi-user support.  We have pretty different tastes in manga and don’t really want to see each other’s manga libraries.  Fortunately, since I’m running it in an Unraid Docker container, it took basically no work to spin up a second instance of Suwayomi, and now we have separate libraries based on the port number you connect to.
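In Docker terms the whole trick is just a second container with its own name, its own appdata folder, and a different host port.  A sketch of the idea only; the image name, internal port, and data path here are assumptions, so crib the real values from whatever the Unraid template uses:

# second Suwayomi instance: same image, different host port and config folder
docker run -d --name suwayomi-hers \
  -p 4568:4567 \
  -v /mnt/user/appdata/suwayomi-hers:/home/suwayomi/.local/share/Tachidesk \
  ghcr.io/suwayomi/tachidesk:stable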

It ALSO made me figure out Tailscale so she can read manga while she’s on the go.  So that was a huge win!  I’m still kinda uncomfortable with allowing external access to our network, but Tailscale is big enough and reputedly secure enough that I figure I can trust them.

So, which one will I be going with?  Well, that’s the neat thing.  I can’t pick.  None of them do EVERYTHING I want, but each of them has one thing they are just really amazingly good at.

So, in the end… I’m running all three.  Thank God for Docker.

Once I get all of the media actually sorted out, I may want to look into some sort of tablet app for these, for offline reading.  That will probably be its own sort of fun.

 

Posted in comics, homelab, organization