A couple of years ago – still during my apprenticeship and in the middle of the pandemic – I bought a base model M1 MacBook Air with some bonus money I got for finishing a project that was important (to my boss, at any rate), and it has been my main laptop, used basically daily, ever since. However, the limitations of the base model have been a bit problematic and, therefore, I decided to upgrade to the new M4 MacBook Pro this year. That leaves me with a still more than capable MacBook Air, and I’ve been thinking about what to do with it. Then I remembered that I’ve always wanted to try out Asahi Linux; that, coupled with the very low power consumption of these M-series chips, made me think it would probably make for a really good low-powered server that still has enough oomph for heavier workloads (definitely more than my actual server, which runs an Intel N100). And so that’s exactly what I did!
The installation of Asahi is pretty straightforward in general, but having only a base model Mac (with a measly 256 GB SSD) does pose a bit of a problem: Asahi cannot be installed as the sole operating system; it can only be installed alongside macOS. This means that even after completely erasing my previous Sonoma install and then running only the Asahi Linux installer, I have about 150 GB of usable disk space on my Asahi partition. That’s not terrible, but it’s definitely not a lot of disk space in today’s world.
Speaking of the installation, it’s pretty much as simple as running `curl https://alx.sh | sh` in a terminal window and following the on-screen prompts (insert the usual disclaimer about running shell scripts without checking their contents first). If you’re even remotely familiar with the command line (and are able to read), the installation shouldn’t pose any significant problems. Still, I would not recommend installing this on your main machine that you use for other things as well, as I’m sure it can quite easily render your macOS install useless if something goes wrong.
During the installation process, you’ll be given the opportunity to set a new size for the macOS partition and a size for the to-be-created Asahi Linux partition. Additionally, you can choose between either GNOME or KDE (if I remember correctly) as your desktop environment. Then, after some rebooting and changing of Mac security features, you’ll be greeted by an installer that you’re going to be very familiar with if you’ve ever installed a Linux distribution. There you’ll choose a timezone and a username and password. Afterwards you can reboot and you should be greeted by a login window.
As a side note: Asahi used to be based on Arch Linux, but they have since moved over to Fedora, and the Fedora-based Asahi Linux is, as far as I can tell, the only official version. There are some community-maintained flavours, but I haven’t tried out any of those yet; I’m also guessing their installation process differs quite significantly from that of the official Fedora-based version.
I was honestly quite surprised at how usable it is in general. The only thing I would still very much like to see implemented is the ability to connect an external monitor through one of the USB-C ports of the MBA – that has (as of the publishing of this post) not yet been implemented. Other than that, however, everything worked pretty much as you’d expect: you can change the brightness of the keyboard and the screen; the trackpad works (and even has force feedback); the speakers work and sound as you would expect them to; the keys on the keyboard all work (including things like the media keys for playing/pausing videos or music); closing the lid reliably puts the laptop to sleep and opening it wakes it up quite quickly … you get the idea. Even the battery life is fantastic (KDE estimates 9–12 hours) despite this MacBook being four-ish years old now. WiFi also works perfectly and at the expected speed.
There are some strange behaviours here and there, though. For example, whilst the trackpad does work, palm rejection is seemingly non-existent, especially if you’re used to how well it works on macOS. Oh, and speaking of the trackpad, it feels strangely laggy, almost as though it were connected through a terrible Bluetooth connection (I’m pretty sensitive to input delay in general, though, so you might not notice this at all). Also, you’re going to have to use regular Windows-style shortcuts, i.e. `⌃C` instead of `⌘C` for copying things, as an example.
Additionally, you might not have access to all the packages as, obviously, this isn’t an `x86_64`-based system but rather `aarch64`. I wanted to install Ruby on my system, and I generally use `rbenv` to manage my Rubies. However, I first had a bit of trouble getting `rbenv` itself working, and once I did, I had even more trouble getting it to actually compile Ruby 3.3.6 for me. At first there were some problems with `openssl` that I managed to somehow fix by running `sudo dnf groupinstall "Development Tools"`, but then it complained about `libffi` apparently missing (even though it wasn’t, as far as I could tell), so I just ended up using `rvm` instead – that compiled Ruby 3.3.6 without any problems.
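For anyone hitting the same wall, this is roughly what I’d try on Fedora before giving up on `rbenv` – the `-devel` package names here are my best guess for Fedora Asahi (not something from my original troubleshooting), so adjust them if `dnf` complains:

```shell
# Toolchain plus the headers ruby-build usually needs to compile Ruby.
# Package names are assumptions for Fedora; verify with `dnf search`.
sudo dnf groupinstall -y "Development Tools"
sudo dnf install -y openssl-devel libffi-devel libyaml-devel zlib-devel readline-devel

# Then retry the build; pointing ruby-build at the system OpenSSL can help.
RUBY_CONFIGURE_OPTS="--with-openssl-dir=/usr" rbenv install 3.3.6
```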
Most of the Flatpaks that I’ve tried worked without any problems, except, for some reason, the Telegram Flatpak – but that might’ve just been me doing something wrong. LibreWolf, the browser I generally use on Linux, works fine and runs as expected, also through a Flatpak. Tokodon, KDE’s own Mastodon / Fediverse client, also works quite well. The GPU apparently works too, but I haven’t really tried that out yet. Watching YouTube videos, even at 4K, was no problem, however, and I couldn’t detect any dropped frames.
A colleague of mine – who is also quite interested in both ARM and RISC-V – told me about Box64, which allows you to run normal `x86_64` programs on an ARM-based processor. I haven’t tried this out myself yet, but I definitely want to see if I can get some non-native programs running through it.
I still haven’t done too much with regards to trying it out as a server, but I still feel like it should work quite well as long as all the software does too. I really wanted to get something like Proxmox or maybe even just YunoHost working, but I haven’t found a way to do that yet. One thing I already did was change the charging limit from 100% down to 80%. I had to do this in the terminal by running `echo 80 | sudo tee /sys/class/power_supply/macsmc-battery/charge_control_end_threshold`, as changing the charging limit through KDE’s GUI settings did not seem to work.
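If the threshold doesn’t survive a reboot on your machine, a small systemd unit can reapply it at boot. This is purely my own sketch – the unit name is made up and not something Asahi ships – but the sysfs path is the one from above:

```
# /etc/systemd/system/battery-limit.service (hypothetical unit name)
[Unit]
Description=Cap battery charging at 80%

[Service]
Type=oneshot
ExecStart=/usr/bin/sh -c 'echo 80 > /sys/class/power_supply/macsmc-battery/charge_control_end_threshold'

[Install]
WantedBy=multi-user.target
```

Enable it once with `sudo systemctl enable --now battery-limit.service`.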
I also installed `lm-sensors`, mostly to see the temperatures at which the computer was running, but I was very surprised to find that it also provides a nice way of seeing how much power the machine is using at a given time. This showed me that the power usage, even with the screen turned on, is rather low! I therefore enabled SSH and SSH’d into the machine to see what its power usage would be with the screen completely turned off: the idle power consumption appears to be around 1–2 W. That’s a really low value and something you would probably have a hard time noticing on your monthly power bill.
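For reference, this is all it takes to replicate those readings – assuming the Fedora package is called `lm_sensors` (with an underscore), which is my recollection rather than something I re-checked:

```shell
sudo dnf install lm_sensors   # package name on Fedora uses an underscore
sensors                       # temperatures plus, on Asahi, power readings
watch -n 1 sensors            # refresh every second to watch the idle draw
```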
If I were to actually start using this machine as a server, I would probably install a more minimal version of Asahi (probably without any DE whatsoever) and I would also need to get some sort of USB-C to Ethernet adapter and hope that it works; though depending on what exactly I end up using the server for, a decent WiFi connection might not be all that problematic either.
I was pleasantly surprised to see how well it works out of the box, and I will definitely be keeping Asahi on my MBA. I’ll try out some more stuff, especially with regard to running it as a server, or I might just keep it around as a nice Linux machine in general.
Therefore, I’ll just be skipping that portion of my blog post. If you’re in Germany and a customer of Vodafone’s, then you should have been assigned a /59 IPv6 subnet and you can quite simply follow the instructions in the official documentation that I linked above.
Before I start this off, it is important to note that this will only work if you’re using Cloudflare’s proxy. I have not found any other DNS provider that allows you to do this, unfortunately. I know there are some who have quite strong (and often negative) opinions about Cloudflare, so if you’re one of those, then you will probably not be able to do this. If you’re not sure what I’m talking about, you should probably read up on Cloudflare and how their proxy functions first and try to form your own opinion on this matter. If you know of another (free!) way to do this without using Cloudflare, then I’d be happy to hear about it.
Also, I’m not claiming that anything I’m about to explain is necessarily the best way of going about this; it’s simply what I found works quite well for me. If I wrote something that’s terrible advice or if you found something that I could improve, you are more than welcome to contact me about that, too!
And lastly, please note that a lot of ISPs do not technically allow the hosting of webservers if you only have a consumer contract and you might have to pay for a (usually more expensive) business contract instead. Or they might just straight up block certain ports from working in the first place on consumer contracts. Therefore, before you do anything, I urge you to check your ISP’s terms of service.
With that out of the way, let’s get started!
Before we start, a quick rundown of my setup. I have a FRITZ!Box 6660 Cable (my main router) to which my server running Proxmox is connected. The FRITZ!Box gets a /59 IPv6 prefix but no public IPv4 (CGNAT). Running as a VM on the Proxmox host is an OPNsense installation. Its WAN network is connected to the LAN network of my FRITZ!Box (it therefore gets an IPv4 address in my FRITZ!Box’s LAN, `192.168.178.0/24`), and the OPNsense’s LAN network is a virtual network that all other VMs on my Proxmox installation are connected to. Additionally, I have assigned a /64 IPv6 prefix to the LAN network of my OPNsense (see the OPNsense documentation above), and all VMs get both a private IPv4 address (in the OPNsense’s `10.10.10.0/24` network) via DHCP and an IPv6 address via either SLAAC or DHCPv6.
For my webserver in particular I made a separate and really small (/30) IPv4 subnet with a virtual IP in OPNsense, mostly so this public-facing LXC is in a different network from the VMs and LXCs that are not open to the public. (I’ll probably switch that over to a VLAN instead of a virtual IP soon.) I feel like this is a bit overkill and probably doesn’t add that much security anyway, but I wanted to do it regardless. This means that my webserver has a static IPv4 in a different network, namely `10.11.10.2/30`, with `10.11.10.1/30` being the virtual IP I assigned to the OPNsense installation, and it cannot talk to any other VM or LXC.
I don’t want to share the exact IPv6 prefix I get from my ISP, so let’s just pretend it’s `2001:db8:0:e280::/59`, where `2001:db8:0:e280::/64` is used by the FRITZ!Box itself and `2001:db8:0:e291::/64` has been delegated to the OPNsense’s LAN interface. I have assigned a static IPv6 to the LXC running my webservers, namely `2001:db8:0:e291::1000:1/128`.
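As a quick sanity check on those numbers: a /59 delegation contains 2^(64−59) = 32 distinct /64 networks, so using one for the FRITZ!Box and one for OPNsense barely dents the pool. In shell arithmetic:

```shell
# Number of /64 subnets that fit into a /59 prefix: 2^(64-59) = 32
echo $((1 << (64 - 59)))   # prints 32
```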
My webserver is running Caddy, and I’m using a module for Caddy called `dns.providers.cloudflare` so that Caddy can create an SSL certificate even when it’s behind Cloudflare’s proxy.
Okay, that was probably quite a bit of information. The best tl;dr I can think of is: the public IPv6 my webserver gets is `2001:db8:0:e291::1000:1/128` (the prefix is not my actual prefix; it’s just an example).
I’ll assume that you’re already somewhat familiar with Cloudflare and how it works, especially after what I mentioned earlier in this post, and that you have already added your domain to Cloudflare. If you have not yet done so, please refer to Cloudflare’s own documentation on how to do this.
What you have to do is go into your domain’s DNS settings and create only a single `AAAA` record with the proxy enabled. Do not add another `AAAA` record or even an `A` record; simply add one `AAAA` record pointing to the IPv6 address of your server. This should look as follows:
This is probably the most important aspect of this entire thing if you want your website to be reachable even from networks that do not support IPv6. If you only set an `AAAA` record and no `A` record, Cloudflare will automatically translate requests from IPv4 networks so that your website can be reached even from those networks.
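As an aside, the same record can also be created through Cloudflare’s API instead of the dashboard. A rough sketch – `$ZONE_ID` and `$CF_API_TOKEN` are placeholders you’d fill in yourself, `example.com` stands in for your domain, and the address is the example one from above:

```shell
# Create a proxied AAAA record via Cloudflare's v4 API (placeholders: ZONE_ID, CF_API_TOKEN)
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"AAAA","name":"example.com","content":"2001:db8:0:e291::1000:1","proxied":true}'
```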
You may also have to change the SSL settings of your domain. By default, the SSL mode is set to `flexible`, which ended up not working for me; I had to set it to `full` instead:
While you’re here, you might as well also create an API key either for your entire account or only for a particular zone / domain. For more information about what permissions need to be set, you can look at the GitHub page for Caddy’s Cloudflare module.
The first things you have to set up properly are the firewall rules, especially the WAN rules. Since the only thing running on my LXC that needs to be accessed from the Internet is a webserver, it only really needs port `443` and maybe also port `80` open to the public. I created an alias that includes both ports so that I don’t have to create two rules, and I simply named it `allowed_ports_default`.
However, we can refine this rule a bit further: since all the traffic going to our webserver should come from Cloudflare (as we’re using their proxy), you can change the rule so that only traffic from Cloudflare’s network is accepted.
To do this, you can simply create yet another alias that includes all the networks Cloudflare uses. Luckily, Cloudflare publishes the list of their IPv6 subnets, which you can find here: https://www.cloudflare.com/ips-v6/#. So all we need to do is create an alias that includes all seven (at the time of writing) subnets and put that alias into the `Source` field of our WAN rules. The alias should end up looking as follows:
And the rule should end up looking as follows:
Additionally, you also have to set up the rules on the LAN interface. I created two LAN rules, one for the IPv6 and one for the IPv4 address of my webserver, and allowed only ports `443, 80, 123, 53` for both IPv4 TCP/UDP and IPv6 TCP/UDP. I also set up a LAN rule that blocks access from my webserver’s LAN network to all of my other LANs.
I’m assuming you know how to get a website up and running with Caddy. If not, I highly recommend looking at their documentation, it’s really quite simple!
However, getting Caddy to work with the Cloudflare DNS module was a little annoying at first, because the Debian 12 LXC that I’m running apparently did not have the newest version of Caddy in its repositories, and the version that was available did not have the `add-package` command, which is needed to install the Cloudflare DNS module. So I simply downloaded the newest `.deb` file from Caddy’s GitHub, installed that, and installed the Cloudflare DNS module using `sudo caddy add-package github.com/caddy-dns/cloudflare`. Afterwards, simply follow the instructions on their GitHub page on how to add the API key to your configuration.
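For reference, a minimal Caddyfile using the module looks something like this – the domain and site root are placeholders, and the `tls` / `dns cloudflare` syntax follows the module’s README (here it reads the API key from a `CF_API_TOKEN` environment variable):

```
example.com {
	root * /var/www/example
	file_server

	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
}
```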
If you then restart Caddy after adding your configuration (or simply starting it for the first time), it should automatically generate an SSL certificate for you and your website should become reachable from both IPv6- and IPv4-only networks.
Your website should now be accessible from the Internet! I hope you enjoyed reading this and I hope it will end up helping someone in the future. If you have any further questions, critique or whatever, feel free to reach out to me. This is the first blog post I have written in a long time, so if there’s anything you think could be improved in the next one, I would love to hear about it.