So, all this time doinking around with adb and rmnet0/rmnet1/ipa on Android when I could have just gotten an iPhone where the magic rvi0 device gives me all packets, everywhere.
All. Packets.
Yeesh.
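For reference, the rvi0 dance on macOS is short; this is a sketch from memory (the UDID is a placeholder, and tcpdump will want root):

```shell
# Attach the iPhone over USB, then create the Remote Virtual Interface
rvictl -s 00008110-XXXXXXXXXXXXXXXX   # starts rvi0 mirroring the phone's traffic

# Capture everything the phone sends and receives -- cellular included
sudo tcpdump -i rvi0 -w iphone.pcap

# Tear the interface down when finished
rvictl -x 00008110-XXXXXXXXXXXXXXXX
```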
27.5.2025 03:42

All I want is a SIP Text Messaging client with support for EAP-AKA authentication where the implementation is simple enough for me, an idiot, to understand.
24.5.2025 03:51

Ok, I need a rooted Android phone of recent vintage for some mischief. I don't want to root any of our current devices, to avoid triggering the wrath of the Play Store.
Maybe eBay for a working Pixel {6, 7, 8} with a cracked screen?
20.5.2025 16:04

IPsec connection: established.
There was much rejoicing.
...and it turns out that I do need a writable /sys:
# ip netns exec ims ip link set tun23 up
mount of /sys failed: Permission denied
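The failure is `ip netns exec` itself: it creates a new mount namespace and remounts a namespace-local /sys, which the container forbids. Joining only the network namespace with nsenter skips that remount entirely; a sketch, untested in this particular container:

```shell
# nsenter --net enters the named network namespace without touching mounts,
# so no /sys remount is attempted (path assumes iproute2-created namespaces)
nsenter --net=/var/run/netns/ims ip link set tun23 up
```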
So systemd requires that if I want to use DHCP, I must use VMs. Containers considered harmful.
I don't want to redo everything in a new VM, I'll set it to use a static IP and go on.
I hope it doesn't need systemd-networkd to add the static IP, but deep down I already know that it does.
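If it does come to networkd, the static config is at least small. A sketch with a hypothetical interface name and RFC 5737 placeholder addresses:

```
# /etc/systemd/network/20-eth0.network  (filename and addresses are examples)
[Match]
Name=eth0

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.1
```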
16.5.2025 12:00...and it turns out that I do need a writable /sys:# ip netns exec ims ip link set tun23 upmount of /sys failed: Permission deniedSo systemd...systemd starting with v245 requires that /sys be read-only in LXCs.
The LXC in question had nesting=1 set. I did not set it deliberately on this LXC; either that is the Proxmox default, or it propagated from the last time it was set on some other LXC.
Removing nesting=1 makes /sys read-only.
I believe this means it is no longer possible to run Docker within an LXC. Since Proxmox doesn't natively support Docker containers, I guess running Docker now needs a full VM.
--------
However, after removing nesting=1, systemd-networkd doesn't start at all: `systemd-networkd.service: Main process exited, code=exited, status=226/NAMESPACE`
It has an IP address for now, shortly after booting, but will presumably lose it when the lease expires as nothing will renew it.
16.5.2025 11:39

Just as I was beginning to accept that maybe systemd wasn't completely awful...
My most recent LXCs don't get IP addresses. networkctl says, "Interface eth0 is not managed by systemd-networkd"
https://github.com/systemd/systemd/issues/15101 says that /sys must be read-only within containers or systemd starting with v245 fails in quiet and subtle ways.
The "why" can be summarized as, "fuck you, that's why."
So I guess now I figure out how to make Proxmox start containers in the way that systemd now demands.
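The checks I'd start with, sketched here with a placeholder container ID; `pct` is Proxmox's container tool, and `findmnt` runs inside the container:

```shell
# Inside the container: is /sys mounted read-only, as networkd now insists?
findmnt -no OPTIONS /sys

# On the Proxmox host: nesting=1 is what makes /sys writable in the guest;
# dropping it should restore the read-only mount after a restart
grep features /etc/pve/lxc/123.conf
pct set 123 --features nesting=0
pct reboot 123

# Back inside: confirm networkd now claims the interface
networkctl status eth0
```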
16.5.2025 11:21

Ok, so now I just have to find a wireless carrier whose VoWifi ePDG either has no allowlist of allowed clients or allows non-Android Linux clients.
I'd hoped to use a Twilio SIM, but it looks like they have no ePDG: epdg.epc.mnc026.mcc310.pub.3gppnetwork.org does not resolve. The IMSI from my Twilio SIM card is 31026XXXXXXXXXX.
Not too surprising; they want you to use their APIs, not VoWifi.
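The ePDG name comes straight out of the IMSI: MCC is the first three digits, MNC the next two or three, zero-padded to three in the FQDN. A small sketch assuming a 2-digit MNC, as with this Twilio IMSI (the sample IMSI below is made up):

```shell
#!/bin/sh
# Build the public ePDG FQDN from an IMSI, per the 3GPP naming scheme.
# Assumes a 2-digit MNC (digits 4-5), which the name pads to three digits.
epdg_fqdn() {
    imsi="$1"
    mcc=$(printf '%s' "$imsi" | cut -c1-3)
    mnc=$(printf '%s' "$imsi" | cut -c4-5)
    printf 'epdg.epc.mnc0%s.mcc%s.pub.3gppnetwork.org\n' "$mnc" "$mcc"
}

# Hypothetical IMSI with the same 31026 prefix:
epdg_fqdn 310260000000000
# A non-resolving name, as here, suggests the carrier publishes no ePDG:
#   host "$(epdg_fqdn 310260000000000)"
```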
14.5.2025 22:19

Tomorrow should be zfs.rent initiation day! How fun!
1.5.2025 01:36

Do I know anyone using Tello? Want to send me a referral code, before I sign up in the next couple days?
1.5.2025 01:31

Maybe a microsaas aimed at Europe... yeah, that would be a good thing right now.
I'd need an idea though.
Got nothin'.
17.4.2025 00:30

I have officially entered information for my taxes this year. Please clap.
14.4.2025 00:41

USPS delivered the drive today.
They rack new drives near the end of each month.
8.4.2025 03:08

Espressif ESP32-S3-BOX-3 looks pretty neat. If I'm going to use these I guess I'd better order soon before tariffs make them unaffordable.
https://heywillow.io/ appears to have enough functionality to get started.
So... maybe?
6.4.2025 13:21

I think the drive is ready to ship to zfs.rent. I've done burn-in testing for a couple of weeks, configured zrepl with it attached to a local VM, replicated all of the current filesystems, ran zpool scrub one last time, and removed it from the system.
It is back in the static bag, inside the bubble wrap cocoon, waiting to go to the post office tomorrow.
3.4.2025 23:57

I don't even really remember what that project was. It suddenly became active again on March 16, and has accrued charges since.
Maybe I left something at the previous employer which could activate it again? Not sure why it started up; I don't recall doing anything which might.
3.4.2025 23:55

Ok, great. AWS says I owe them money due to App Runner, but the console shows no App Runner services.
So I get to figure out what this bill is really for, and turn it off.
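The console only shows the currently selected region, so the usual first suspect is a service hiding somewhere else. A sketch of a region sweep with the AWS CLI (assumes configured credentials; App Runner isn't available in every region, hence the discarded errors):

```shell
# List App Runner services in every region, not just the one the console shows
for r in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
    echo "== $r =="
    aws apprunner list-services --region "$r" \
        --query 'ServiceSummaryList[].ServiceName' --output text 2>/dev/null
done
```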
2.4.2025 07:53

I've concluded that this was indeed a network error which got magnified into a false array degradation event.
That is honestly not cool for ZFS to do, but I've decided to keep calm, use zpool clear, and carry on.
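The keep-calm-and-carry-on sequence, sketched with this pool's names; if the errors really were a transient network blip, the scrub should come back clean and the pool return to ONLINE:

```shell
# Reset the error counters on the flagged device
zpool clear pool1 sdb

# Re-verify all data; any real damage would resurface as checksum errors
zpool scrub pool1
zpool status pool1
```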
1.4.2025 13:28

Hmm. https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P/ says:
"For example, the following cases will all produce errors that do not indicate potential device failure: 1) A network attached device lost connectivity but has now recovered"
The kernel error messages all happened at 07:16:33 this morning.
07:16:33 is the precise moment that I sent a kill signal to the vzdump running on the other host, the Proxmox server where I'm using zrepl to replicate to this new system that saw disk errors.
I think maybe this isn't a hardware failure. It might just be the most terrifying possible way to report a network interruption.
1.4.2025 02:31

Well, crappit. I ran badblocks tests for *weeks* on this device.
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368396833 op 0x1:(WRITE) flags 0x700 phys_seg 32 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368399137 op 0x1:(WRITE) flags 0x700 phys_seg 32 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368397089 op 0x1:(WRITE) flags 0x700 phys_seg 131 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368401697 op 0x1:(WRITE) flags 0x700 phys_seg 3 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368401441 op 0x1:(WRITE) flags 0x700 phys_seg 29 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368399393 op 0x1:(WRITE) flags 0x700 phys_seg 200 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368402721 op 0x1:(WRITE) flags 0x700 phys_seg 32 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368402465 op 0x1:(WRITE) flags 0x700 phys_seg 32 prio class 2
Mar 31 07:16:33 zfsrent kernel: I/O error, dev sdb, sector 23368402209 op 0x1:(WRITE) flags 0x700 phys_seg 3 prio class 2
Mar 31 07:16:34 zfsrent kernel: I/O error, dev sdb, sector 23368401953 op 0x1:(WRITE) flags 0x700 phys_seg 32 prio class 2
root@zfsrent:~# zpool status
  pool: pool1
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       DEGRADED     0     0     0
          sdb       DEGRADED     0    35     0  too many errors