Wednesday, December 14, 2016

A Partial Workaround for NBNCo's SkyMuster satellite outages

Like many users of NBN's long-term satellite service, we're discovering it is rather flaky. The satellite link itself is pretty solid, but the NBN-supplied routing upstream of the ISPs has frequent irregular outages, leaving end users offline for periods ranging from minutes to hours, a situation which has been ongoing for months.

The symptoms when things go sour are that the modem is still online: it has a happy blue ring on it, and usually you can get DHCP from your ISP. Since the DHCP service is on the far side of the satellite link, the satellite itself is exonerated, as is the ISP. However, any IP traffic sent to the public network never receives any responses. This state can persist for some time; eventually whatever is confused in NBN's network becomes unconfused (or gets reset).

One thing we have found is that even after NBN's network gets unconfused, it remains unusable until you renew your DHCP lease. And as soon as you do this, the network becomes usable. (This only works once NBN have fixed their side; before that, there is a period where no DHCP activity does any good, and DHCP queries may even receive no response.)

Therefore, when your NBN goes away, try renewing your router's DHCP lease. For us, this can lead to immediate restoration of service.

Until NBN sort themselves out, we've told our router to keep leases for only 15 minutes, so that it issues DHCP renewals regularly and hopefully brings our service back automatically.
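
In practice this looks something like the following on the router. A sketch only, assuming a dhclient-style DHCP client and an em0 WAN interface - both placeholders for whatever your own router runs:

  # Renew the WAN lease by hand; "em0" is a placeholder interface name.
  dhclient em0

  # Or automate it: a crontab entry re-running dhclient every 15 minutes.
  */15 * * * * /sbin/dhclient em0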

We're inferring from this that the DHCP action somehow triggers some form of ISP or NBN routing action, because until the DHCP renewal happens no route is in place. Also, when NBN are still confused, doing a DHCP renewal can lead to successful connectivity for something like 10 seconds. This suggests to me that through some screwup the NBN-side state that routes to our service is in conflict with something, possibly another customer's service; after the DHCP our route is in place, and then it is quickly trashed.

I am posting this in the hope that other NBN satellite users will find the workaround useful.

Footnote: the advice from NBN or an ISP tends to be "turn off your modem for 2 minutes, wait for it to reconnect, then restart your computer". Restarting one's computer or router causes it to do a DHCP request on startup. In our experience of NBN's current outages, the only step in this process which achieves anything is the DHCP request. Everything else is just witchcraft (well, "reset all the client's systems to clear any state").


Tuesday, December 03, 2013

Mac and iOS connection issues on satellite internet, and my apparent workaround

This is an attempt at a more coherent writeup of my post on Whirlpool.

I've been debugging a problem in a rural location using an NBN interim satellite internet connection, which was encountering frequent unreliability, particularly with SSL connections (HTTPS, POP3S) but also apparent with ssh and, when tried, even telnet. It was, of course, sporadic. It was not so obvious with cleartext web browsing (plain HTTP), but I believe this is because browsers will silently retry failed connections.

Poking around on Whirlpool found little information; I currently put that down to the fairly small share of iOS/MacOSX equipment in rural locations - Windows is very prevalent.

My current hypothesis, supported by packet traces, is that this unreliability is a combination of TCP acceleration in the modem (a Gilat SkyEdge IP II), an overly aggressive SYN resend timing in MacOSX and iOS, and satellite latency.

The current workaround is to divert all outbound TCP traffic through a proxy on the firewall/router, so that the Mac connects to the proxy on the firewall (low latency local LAN connection) and the firewall makes the outbound TCP connection to the target host using a far saner SYN resend timing, achieving reliable connection.

Implementation


Most people will do this via a web proxy (eg squid or an equivalent program, possibly built into their router). This requires each client machine to be configured to use it, and will only deal with HTTP and HTTPS traffic.

I'm doing it with relayd on the firewall to forward any TCP connection and a PF rule in pf.conf to divert outbound connections to it.

The network looks like this:

Mac --(wifi)-> airport -> switch -> firewall -> sat-modem -> internet

The PF rule reads like this:
  pass in log quick on $if_lan inet proto tcp to !<local_nets> divert-to 127.0.0.1 port 8888

which causes TCP connections arriving on the local LAN interface and directed to non-local networks to be diverted to the relayd listening on 127.0.0.1, port 8888.
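
For completeness, the rule assumes an $if_lan macro and a <local_nets> table defined earlier in pf.conf. A sketch with placeholder values - the interface name and the RFC1918 ranges will differ per site:

  if_lan = "em1"
  table <local_nets> const { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }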

The relayd.conf looks like this:

  relayd_addr="127.0.0.1"
  relayd_port="8888"
  protocol mytcp {
    tcp nodelay
    ##tcp no sack
    tcp no splice
  }
  relay proxy {
    protocol mytcp
    listen on $relayd_addr port $relayd_port
    forward to destination retry 3
  }
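
To put that into service on a stock OpenBSD box, something like the following; the rc.d details vary a little by release, so treat this as a sketch:

  relayd -n -f /etc/relayd.conf                   # check the config parses
  echo 'relayd_flags=""' >> /etc/rc.conf.local    # enable relayd at boot
  /etc/rc.d/relayd start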

Packet Traces and Discussion


The following traces are taken from the firewall on the local LAN interface. As mentioned, the network looks like this:

Mac --(wifi)-> airport -> switch -> firewall -> sat-modem -> internet

The firewall runs OpenBSD with stateful rules.

The satellite modem performs both HTTP and TCP acceleration. Gilat offers a little description here. The HTTP acceleration supports an upstream driven prefetch, but I believe it to be irrelevant here. The same Gilat page indicates that the TCP acceleration passes the SYN:SYN/ACK:ACK TCP setup packets as-is, but then collects the data for established connections and sends it over the satellite portion of the link using a more satellite-optimised protocol. Also, by ACKing data locally, the modem encourages the client (my Mac) to send more data promptly instead of doing a slow ramp up against high latency ACKs from the remote host. Conversely, the remote host can also send data more rapidly.

The TCP acceleration is all very cool (and surprisingly effective) but has a misfeature/bug in its implementation as shown below.

Here is a successful POP3S connection:
16:47:47.896426 {MACBOOK}.50142 > X.X.X.X.995: S 2233798023:2233798023(0) win 65535  (DF)
16:47:48.597903 {MACBOOK}.50142 > X.X.X.X.995: S 2233798023:2233798023(0) win 65535  (DF)
16:47:48.643500 X.X.X.X.995 > {MACBOOK}.50142: S 0:0(0) ack 2233798024 win 13312 
16:47:48.644794 {MACBOOK}.50142 > X.X.X.X.995: . ack 1 win 65535 (DF)
16:47:48.645639 {MACBOOK}.50142 > X.X.X.X.995: P 1:307(306) ack 1 win 65535 (DF)

This shows an initial SYN packet, then a resend about 700ms later, then a SYN/ACK response from the far end to the first SYN, about 750ms after that first SYN. And then our ACK and normal data traffic.

Round trip latency over satellite is at best about 650ms.

Here is an unsuccessful POP3S connection:
16:48:11.094536 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:11.797141 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:12.099393 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:12.400934 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:12.702356 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:13.003581 X.X.X.X.995 > {MACBOOK}.50144: S 0:0(0) ack 2181400206 win 13312 
16:48:13.005832 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:13.006593 X.X.X.X.995 > {MACBOOK}.50144: R 1:1(0) win 0
16:48:13.007379 {MACBOOK}.50144 > X.X.X.X.995: . ack 1 win 65535 (DF)
16:48:13.007778 {MACBOOK}.50144 > X.X.X.X.995: P 1:307(306) ack 1 win 65535 (DF)
16:48:13.008437 X.X.X.X.995 > {MACBOOK}.50144: R 1:1(0) win 0
16:48:13.009220 X.X.X.X.995 > {MACBOOK}.50144: R 1:1(0) win 0

This shows an initial SYN and resends at roughly 700ms, 1000ms, 1300ms and 1600ms. We may infer that the network is congested. Finally the SYN/ACK arrives from the far end at about 1900ms. At this point (I believe) the TCP acceleration in the satellite modem has established state for the connection.

In an unfortunate (but annoyingly common) situation, there is another SYN resend from the Mac, arriving at the firewall 2ms after the SYN/ACK was dispatched. Notably, it will have been dispatched by the Mac before it has seen the SYN/ACK. And herein lies the Gilat modem bug.

The modem believes the connection is ready; it has seen the SYN/ACK. It now sees the extra SYN resend, and decides that this is an invalid attempt to make a new connection from the same source on the Mac to the same target host. In response it declares the connection invalid and sends an RST in reply to the extra SYN. Meanwhile the Mac sees the SYN/ACK and sends an ACK as normal, following up with a PSH of the first data packet. Both of these also get RST packets sent back. The Mac reports "connection reset by peer". Badness.

Better behaviour from the modem would be to accommodate a SYN resend from the client Mac, at least during a small window after connection setup.

Better behaviour from the Mac would be to send SYNs far less often. A more normal TCP stack (such as that in the firewall) sends its first resend a whole 6 seconds after the first SYN. The only use for a faster resend is snappier recovery in the event of the first SYN being dropped. In these days of buffer bloat, dropping seems no longer to be the common response to congestion; instead, SYNs are clearly being delayed rather than lost.

By diverting outbound TCP through relayd we are effectively replacing the Mac's overly eager SYN timing with the saner timing of the OpenBSD host, and the problem basically goes away.



Saturday, May 13, 2006

UNIX versus iTunes and the iPod, via MacOSX

Our household has been, um, blessed with an iPod for some months now. Currently it is being managed by iTunes from a Mac. However, our music collection lives on our main machine that runs Linux, and that has caused some problems.

The standard procedure is: rsync the music directory from the server to the Mac (over the wireless LAN, ouch!), then import the directory into iTunes, and then maintain the iPod from there. This is tedious but has until recently worked: the rsync is slow over the wi-fi, importing into iTunes seems to involve keeping a complete copy of all the imported music, and then you have to plug in the iPod and sync.

Then we stepped out of our cosy English niche and ripped an African CD, with French song titles.

It ripped just fine, and we have these nicely named files in the main music tree. So we blithely went to rsync the files to the Mac. No go. Rsync's attempts to create the files on the Mac failed. It turns out that the Mac's HFS Plus filesystem uses Unicode UTF-8 Normal Form D encoding for its filenames and prevents creation of files whose names' byte sequences are not in this form.

UNIX filenames are simply byte sequences. There is no specified encoding. Since we were creating filenames from the FreeDB index, I expect these names are in Latin-1 encoding. The FreeDB Database Format Specification states that ``Database entries must be in the US-ASCII, ISO-8859-1 (the 8-bit ASCII extension also known as "Latin alphabet #1" or ISO-Latin-1) or UTF-8 (Unicode) character set'' but unhelpfully provides no indication as to how one should identify the encoding in use.

Our current tree contains Latin-1 encoded filenames, so we will be sticking with that for now. What is then needed is a way to make a tree we can rsync to the Mac. So the plan is to keep a hard-linked directory tree, with UTF-8 NFD encoded names, beside the "master" music directory. To this end I have made a small tool called macify which will maintain such a link tree.
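
The macify tool itself isn't reproduced here, but the core of the idea is easily sketched in shell. This assumes GNU libiconv's "UTF-8-MAC" encoding for Apple-style NFD (glibc's iconv lacks it), and the paths are placeholders:

  #!/bin/sh
  # Sketch of the link tree idea: hard-link each file from the Latin-1
  # master tree into a parallel tree under a UTF-8 NFD name for rsync.
  src=$HOME/music         # master tree, Latin-1 filenames
  dst=$HOME/music-mac     # parallel link tree, NFD filenames
  cd "$src" || exit 1
  find . -type f | while read -r f
  do
    g=$( printf '%s\n' "$f" | iconv -f LATIN1 -t UTF-8-MAC )
    mkdir -p "$dst/$( dirname "$g" )"
    ln -f "$f" "$dst/$g"
  done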

Friday, March 31, 2006

asynchronous replies in mutt

David Woodhouse talks about Jack's rhetorical post "why did we ever abandon Mutt and Pine?", and says he still uses pine on handheld devices. The second of his two main reasons for defecting to Evolution is:
Secondly, I very much like composing new email in separate windows, rather than in the main mail reader window. I have a habit of hitting 'reply', perhaps making a half-hearted attempt to respond to an email, and then getting distracted and leaving it for days before I eventually find the composer window on my desktop again, finish it off and send it. If I didn't do that, then stuff would just get lost and I'd never reply to it.
Well, I'm a committed mutt user and I have a similar problem with replies; I often defer them because I can't do them justice right now, and never return. The message gets buried.

There was a recent thread on the mutt lists, started by Jamie Rollins, about "pseudo multi-threading", wanting to dispatch mail composition in a separate window. There turn out to be a few people who do things like that, and a few scripts.

For myself, I don't want a separate window. I want always to start in-line with my reply, but perhaps abandon it and resume later. So... screen! Now I have a script called muttedit that I use as the mail editor; my .muttrc now says "set editor=muttedit". It copies the temp file mutt makes and runs screen, invoking "mutt -H" with the copy. If I complete the reply and quit, I'm still in my original mutt as if I had used a plain editor. To defer it, I detach from screen. I'm back in mutt, and there is a screen session lying around holding the pending reply. Muttedit takes a bit of care to give the screen session a nice fat title with the reply subject in it.
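
The real muttedit isn't shown here, but the flow it implements might look roughly like this; the temp file handling and the session titling are my guesses at the details:

  #!/bin/sh
  # Sketch of the muttedit flow: copy mutt's draft file, then compose
  # the reply via "mutt -H" inside a screen window titled with the
  # draft's subject, so a detached session is easy to find later.
  draft=$1
  copy=$( mktemp ) || exit 1
  cp "$draft" "$copy"
  subject=$( sed -n 's/^Subject: *//p' "$draft" | sed 1q )
  exec screen -t "reply: $subject" mutt -H "$copy"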

We'll see how it goes.

Saturday, March 18, 2006

how to scuttle a counter-terrorism hotline

I see in today's ABC News "Terrorism hotline callers may be monitored". I presume it's the same hotline touted in the frequent TV advertisements for reporting suspicious stuff, reassuringly promising "and you can remain anonymous". Clearly that's voided now. Very clever. It can only dampen the response from anyone with insider knowledge of some operation.

moving from fetchmail to getmail

As mooted previously, my mail collection is largely bound by the bandwidth from my ISP to me. However, it's not entirely bound by that. Some of it is bound by the delivery cost of each message at my end.

Until this morning I had been using fetchmail to collect my email. Generally I'm very happy with it. It has a concise and human-friendly configuration file. It is fairly easy to use. It flexibly delivers via the program of your choice, typically procmail for people working this way. However, it only delivers via an external program. The default is the local sendmail on your system, and most people choose procmail if they override that default. This basically means a program fire-up per message. It is not a big cost, but it's noticeable. Because fetchmail is quite careful about ensuring delivery before deleting something from your POP mail spool, this cost is inline with the data transfer, and thus makes for a little latency.

My mail delivery is a bunch of decoupled programs; something fetches from my ISP and drops messages into a spool Maildir folder (core documentation); another script collects messages from there and files the spam in a spam folder and the ham in the spool-in folder; a third script scatters things from there to the final destination folders according to my rules. It may seem overly complex but it keeps the tasks nicely separated for easy tinkering and works quite well. For example I can refile messages simply by copying them from whatever folder I'm reading into the spool-in folder; there is no fear of having them miscategorised as spam and I don't need to invoke some special program or incantation to kick off the refile.

Still, that's not today's point. The issue here is that the initial delivery goes unconditionally into a single mail folder. I have been using a trite procmailrc with no rules and a DEFAULT=$MAILDIR/spool/ line. However, procmail must still be invoked once per message. If fetchmail could deliver directly to a Maildir I'd just be doing that. However, fetchmail restrains itself to collecting from POP or IMAP and delivering to a local mail agent such as sendmail or procmail, and does not sully itself with direct mail folder delivery.

In consequence, and only because my fetch delivers to a mail folder without any other smarts, I have moved to using getmail. Like fetchmail, it can hand to a mail agent for delivery but it can also do direct delivery to a mail folder. So now I do that. The fetches are now slightly quicker, enough to notice.
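
A minimal getmailrc along these lines; the server and credentials are placeholders:

  [retriever]
  type = SimplePOP3Retriever
  server = pop.example.com
  username = me
  password = xxxxxxxx

  [destination]
  type = Maildir
  path = ~/mail/spool/

  [options]
  delete = true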

bogofilter and circumventing fsync()

For some time now I have been using bogofilter to categorise incoming email as spam or non-spam (ham). It is quite effective and generally I have been well pleased. However, it was not fast. It does have a batch mode, avoiding the need to fire it up per-message, but even so it was managing about 1 message per second.

Why is this so? Bogofilter usually uses the dbm library for its back end, and for consistency reasons with shared use of the db it seems to call fsync() frequently. Very frequently. It's easily seen using strace. This requires the OS to commit the data to the disc, not merely to the I/O queue, and basically slows performance to that of a hard drive, which is as nothing compared to memory.
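
If you want to see this for yourself, something like the following will tally the calls; the message file name is a placeholder:

  # Count fsync() calls while bogofilter classifies a single message.
  strace -c -e trace=fsync bogofilter < message.txt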

I am not very concerned about this degree of data integrity for a database that is in essence a collection of junk mail keyword frequencies. High value data indeed!

I was loath to patch bogofilter, and more loath to dig into dbm or change it, and so for many months I have simply lived with the slowdown. Email delivery rarely needs to be truly instant; if I get a message a minute after dispatch instead of a few seconds I will usually not care. Besides, there's plenty of delivery latency already in the inter-poll delays of my fetches from the POP mail spool.

There are, however, two circumstances where I care about bogofilter's speed; one annoying and the other probably a showstopper.

The annoying one is the morning catchup delay. I often turn off the regular POP fetch when I sleep or at other times when I'll be away from my laptop for several hours. Why load my ISP or the laptop's hard drive with wasted work? Therefore, on return to The World I kick off a large mail fetch. It is typically over 1000 messages after a night of inactivity. The POP fetch itself takes as long as it takes - it's largely constrained by bandwidth and there is little that can be done about it (not entirely true - see the next post, as yet unwritten). However, the bogofilter run then takes quite a long time and that adds substantially to the subsequent email filing.

The showstopper stems from a probably-failing hard drive. Our main machine has a pair of decent sized hard drives in RAID1 supplying the /home area. I embarked on moving my mail storage from my laptop to the main machine the other day, and soon after the machine crashed. It's done it three times so far, and the error messages suggest one of the drives in the RAID1 is to blame, possibly coupled with a Linux kernel bug (well let's be frank - definitely a kernel bug - the whole point of RAID1 is to be able to have a drive failure without loss of the machine). Anyway, the third crash seemed circumstantially tied to my morning bogofilter run; the very high disc seek numbers suggest to me that the DMA timeouts stem from the hard drive simply taking too long to do things or internally getting out of step with the kernel to the point that they no longer talk to each other.

What to do, what to do?

Obviously we'll be running some diagnostics on the drive this weekend, and probably returning it (it's nearly new).

Secondarily, this is the spur to me to make bogofilter less hard on physical hardware and hopefully faster into the bargain.

So last night I wrote this script, "bogof". We use a RAM disc. Modern Linuxes tend to ship with one attached as /dev/shm, and of course Solaris has run /tmp that way for years. I'm using Linux, hence the script's defaults.

The bogof script wants a copy of the wordlist.db file on the RAM disc and makes such a copy at need. Naturally, the first run of the script incurs the cost of copying the db to the RAM disc, but that runs at disc read speed. Even on a laptop that's 4 or 5 MBps, so it's 20-25s for my current db, equivalent to the cost of 20-25 new messages - well below the size of the big mail fetch.

After the first run the copy already exists, so bogof just sets $BOGOFILTER_DIR to point at the copy and execs bogofilter. Since fsync() on a RAM disc is probably close to a no-op, bogofilter runs much faster. Close to 2 orders of magnitude faster, in fact.
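
The script is small; here is a sketch of the idea (the paths are mine, not necessarily the real bogof's):

  #!/bin/sh
  # Sketch of bogof: keep wordlist.db on the RAM disc and run
  # bogofilter against that copy instead of the on-disc master.
  master=$HOME/.bogofilter            # usual bogofilter directory
  ramdir=/dev/shm/bogofilter-$USER    # Linux RAM disc, per the text above
  if [ ! -s "$ramdir/wordlist.db" ]
  then
    mkdir -p "$ramdir" || exit 1
    cp "$master/wordlist.db" "$ramdir/" || exit 1
  fi
  BOGOFILTER_DIR=$ramdir
  export BOGOFILTER_DIR
  exec bogofilter ${1+"$@"}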

Still, the data do need to get back to the hard drive at some point or bogofilter will never learn about new spam. I run my POP fetch in a small shell script that basically runs a fetch delivering to a spool Maildir folder and then sleeps, repeating. Another script pulls stuff from the spool folder and spam filters it, running bogofilter in batch mode over a chunk of messages at once. This is now quite fast courtesy of the bogof script. Once categorised, the messages are then either filed in the spam folder for final sanity checking or passed to the main mail filer that parcels things out to my many mail folders. After the spam run I just copy the wordlist.db file back to the master copy. This runs much faster than disc read speed because it's coming from a copy in RAM and going to the buffer pool, also in RAM. In due course the OS will get it to the disc.
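
The write-back itself is just a copy in the other direction, something like this (same placeholder paths as the sketch above):

  # Push the updated db back to the master copy; the OS will flush it
  # to disc in its own time via the buffer pool.
  cp "/dev/shm/bogofilter-$USER/wordlist.db" "$HOME/.bogofilter/wordlist.db"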

This simple change has greatly sped my mail processing and greatly eased the burden on my hard drive activity light. I'm happy!