Friday, May 11, 2018

scheduling iTunes downloads

We obtain a lot of our video via iTunes, not least because we can download it and watch at our convenience instead of trying to stream and hoping the internets are uncluttered.

However, we need to watch our ISP data quota. We're on a plan with a generous off peak quota, but the off peak period is 1am to 7am. Since iTunes has no scheduling facility this generally involves me getting up in the middle of the night at some point. I've seen a few recipes on the net to work around this, but they all seem cumbersome and/or baroque. Now I have a better way.

I wrote a small helper script called itunes which can tell an open iTunes app to commence a download. It also has a command to fetch a list of the currently selected media items from an open iTunes.

Combined, my workflow is now as follows:

  1. Open iTunes.
  2. Locate the items to download and select them, or some of them.
  3. From a shell, issue the command: itunes download selected
  4. Check back in the morning.
If my items are scattered about, I need to iterate steps 2 and 3 a few times; because the script will wait until the off peak time I need to keep a few shells open, one for each little batch. Because the selection itself is collected when I dispatch the script, this works fine: select, start script, repeat. Conversely, if the items are together, such as the episodes of a television series, I can select them all and issue the script just once.

The nitty gritty:

The scheduling: my laptop's crontab includes these lines:

  30 1 * * * . $HOME/.profile; flag ISP_OFF_PEAK 1
  30 6 * * * . $HOME/.profile; flag ISP_OFF_PEAK 0
which turn on and off the ISP_OFF_PEAK flag, a persistent boolean value maintained by my flag script.
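
The flag script itself needn't be fancy. A minimal Python sketch of the idea (the flag directory and the file-per-flag storage are my assumptions here, not the real script):

```python
import os

# Assumed location for flag files; the real script's storage is not shown here.
FLAG_DIR = os.path.expanduser("~/var/flags")

def set_flag(name, value):
    """Persist a boolean as the presence or absence of a file."""
    os.makedirs(FLAG_DIR, exist_ok=True)
    path = os.path.join(FLAG_DIR, name)
    if value:
        open(path, "w").close()      # flag on: create the file
    elif os.path.exists(path):
        os.remove(path)              # flag off: remove the file

def get_flag(name):
    """True if the flag file exists."""
    return os.path.exists(os.path.join(FLAG_DIR, name))

# The cron line "flag ISP_OFF_PEAK 1" would map to set_flag("ISP_OFF_PEAK", True).
```

A file-per-flag scheme keeps the state trivially inspectable from a shell: ls the directory and you see which flags are on.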

The itunes helper script waits for a particular flag state before issuing the download request: the defaults are ROUTE_DEFAULT ISP_OFF_PEAK !DOWNLOAD_DISABLE.

The ROUTE_DEFAULT flag is maintained by some other automation I run, and is true when the laptop has a default route, which I use to infer that it is online, with access to the internet.

The ISP_OFF_PEAK is maintained by cron as described above.

The DOWNLOAD_DISABLE flag is entirely notional; I could set it to true to prevent downloads happening during the night.
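
Putting the flags together, the waiting logic in the itunes helper can be sketched like this (again the file-per-flag storage is an assumption, and the actual iTunes control step is elided):

```python
import os
import time

# Assumed flag location; matches the file-per-flag sketch above.
FLAG_DIR = os.path.expanduser("~/var/flags")

def flag_state(name):
    """A flag is true when its file exists in the flag directory."""
    return os.path.exists(os.path.join(FLAG_DIR, name))

def flags_ready(wanted):
    """True when every wanted flag matches: "NAME" must be set, "!NAME" clear."""
    for spec in wanted:
        negated = spec.startswith("!")
        if flag_state(spec.lstrip("!")) == negated:
            return False
    return True

def wait_for_flags(wanted, poll=60):
    """Block until all wanted flag states hold, checking every `poll` seconds."""
    while not flags_ready(wanted):
        time.sleep(poll)

# The helper would then do something like:
#   wait_for_flags(["ROUTE_DEFAULT", "ISP_OFF_PEAK", "!DOWNLOAD_DISABLE"])
#   ... tell the open iTunes to commence the download ...
```

Because each invocation captures the selection up front and then simply sleeps until the flags line up, several of these can sit in separate shells without interfering.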

Wednesday, December 14, 2016

A Partial Workaround for NBNCo's SkyMuster satellite outages

Like many users of NBN's long term satellite service we're discovering it is rather flakey. The satellite link itself is pretty solid, but the NBN supplied routing upstream of the ISPs has frequent irregular outages, leaving end users offline for periods ranging from minutes to hours, a situation which has been ongoing for months.

The symptom when things go sour is that the modem is still online: it has a happy blue ring on it, and usually you can get DHCP from your ISP. Since the DHCP service is on the far side of the satellite link, the satellite itself is exonerated, as is the ISP. However, any IP traffic sent to the public network never receives any responses. This state can persist for some time; eventually whatever is confused in NBN's network becomes unconfused (or gets reset).

One thing we have found is that even after NBNCo gets unconfused, the network remains unusable until you renew your DHCP lease; as soon as you do, the network becomes usable. (Provided NBN have fixed their side, that is; while they are still confused there is a period where no DHCP activity does any good, and DHCP queries may even receive no response.)

Therefore, when your NBN goes away, try renewing your router's DHCP lease. For us, this can lead to immediate restoration of service.

Until NBN sort themselves out we've told our router to keep leases for only 15 minutes, so that it issues DHCP renewals regularly and hopefully will bring our service live again automatically.
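
The renewal itself lives in our router, but the same idea can be scripted on any box. A minimal connectivity-watchdog sketch (the probe host, the interface name and the Linux-style ping flags are all assumptions):

```python
import subprocess
import time

PING_TARGET = "8.8.8.8"   # assumed public host to probe
INTERFACE = "em0"         # assumed WAN-facing interface name

def online():
    """True if a single ping to the probe host succeeds (Linux-style ping flags)."""
    try:
        return subprocess.call(
            ["ping", "-c", "1", "-W", "5", PING_TARGET],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL) == 0
    except OSError:
        return False

def watchdog(poll=60):
    """When the link looks dead, ask dhclient for a fresh lease, then re-check."""
    while True:
        if not online():
            subprocess.call(["dhclient", INTERFACE])
        time.sleep(poll)
```

The short router lease achieves the same effect with no extra moving parts, which is why we went that way.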

We're inferring from this that the DHCP action somehow triggers some form of ISP or NBN routing action, because until the DHCP renewal happens no route is in place. Also, when NBN are still confused, doing a DHCP renewal can lead to successful connectivity for something like 10 seconds. This suggests to me that through some screwup the NBN-side state that routes to our service is in conflict with something, possibly another customer's service; after the DHCP our route is in place, and then it is quickly trashed.

I am posting this in the hope that other NBN satellite users will find the workaround useful.

Footnote: the advice from NBN or an ISP tends to be "turn off your modem for 2 minutes, wait for it to reconnect, then restart your computer". Restarting one's computer or router causes it to do a DHCP request on startup. In our experience of NBN's current outages, the only step in this process which achieves anything is the DHCP request. Everything else is just witchcraft (well, "reset all the client's systems to clear any state").


Tuesday, December 03, 2013

Mac and iOS connection issues on satellite internet, and my apparent workaround

This is an attempt at a more coherent writeup of my post on Whirlpool.

I've been debugging a problem in a rural location using an NBN interim satellite internet connection, which was encountering frequent unreliability, particularly with SSL connections (HTTPS, POP3S) but also apparent with ssh and, when tried, even telnet. It was, of course, sporadic. It was not so obvious with cleartext web browsing (normal HTTP), but I believe this is because browsers will silently retry failed connections.

Poking around on Whirlpool found little information; I currently suspect that is because of the fairly small share of iOS/MacOSX equipment in rural locations - Windows is very prevalent.

My current hypothesis, supported by packet traces, is that this unreliability is a combination of TCP acceleration in the modem (a Gilat SkyEdge IP II), an overly aggressive SYN resend timing in MacOSX and iOS, and satellite latency.

The current workaround is to divert all outbound TCP traffic through a proxy on the firewall/router, so that the Mac connects to the proxy on the firewall (low latency local LAN connection) and the firewall makes the outbound TCP connection to the target host using a far saner SYN resend timing, achieving reliable connection.

Implementation


Most people will do this via a web proxy (eg squid or an equivalent program, possibly built into their router). This requires each client machine to be configured to use it, and will only deal with HTTP and HTTPS traffic.

I'm doing it with relayd on the firewall to forward any TCP connection and a PF rule in pf.conf to divert outbound connections to it.

The network looks like this:

Mac --(wifi)-> airport -> switch -> firewall -> sat-modem -> internet

The PF rule reads like this:

  pass in log quick on $if_lan inet proto tcp to !<local_nets> divert-to 127.0.0.1 port 8888

which causes TCP connections arriving on the local LAN interface and directed to non-local networks to be diverted to the relayd listening on 127.0.0.1, port 8888.

The relayd.conf looks like this:

  relayd_addr="127.0.0.1"
  relayd_port="8888"
  protocol mytcp {
    tcp nodelay
    ##tcp no sack
    tcp no splice
  }
  relay proxy {
    protocol mytcp
    listen on $relayd_addr port $relayd_port
    forward to destination retry 3
  }

Packet Traces and Discussion


The following traces are taken from the firewall on the local LAN interface. As mentioned, the network looks like this:

Mac --(wifi)-> airport -> switch -> firewall -> sat-modem -> internet

The firewall runs OpenBSD with stateful rules.

The satellite modem performs both HTTP and TCP acceleration. Gilat offers a little description here. The HTTP acceleration supports an upstream driven prefetch, but I believe it to be irrelevant here. The same Gilat page indicates that the TCP acceleration passes the SYN:SYN/ACK:ACK TCP setup packets as-is, but then collects the data for established connections and sends it over the satellite portion of the link using a more satellite-optimised protocol. Also, by doing data ACKs locally the client (my Mac) is encouraged to send more data promptly instead of doing a slow ramp up with high latency ACKs from the remote host. Conversely, the remote host can also send data more rapidly.

The TCP acceleration is all very cool (and surprisingly effective) but has a misfeature/bug in its implementation as shown below.

Here is a successful POP3S connection:
16:47:47.896426 {MACBOOK}.50142 > X.X.X.X.995: S 2233798023:2233798023(0) win 65535  (DF)
16:47:48.597903 {MACBOOK}.50142 > X.X.X.X.995: S 2233798023:2233798023(0) win 65535  (DF)
16:47:48.643500 X.X.X.X.995 > {MACBOOK}.50142: S 0:0(0) ack 2233798024 win 13312 
16:47:48.644794 {MACBOOK}.50142 > X.X.X.X.995: . ack 1 win 65535 (DF)
16:47:48.645639 {MACBOOK}.50142 > X.X.X.X.995: P 1:307(306) ack 1 win 65535 (DF)

This shows an initial SYN packet, then a resend about 700ms later, then a SYN/ACK response from the far end to the first SYN at about 747ms after the first SYN. And then our ACK and normal data traffic.

Round trip latency over satellite is at best about 650ms.

Here is an unsuccessful POP3S connection:
16:48:11.094536 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:11.797141 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:12.099393 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:12.400934 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:12.702356 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:13.003581 X.X.X.X.995 > {MACBOOK}.50144: S 0:0(0) ack 2181400206 win 13312 
16:48:13.005832 {MACBOOK}.50144 > X.X.X.X.995: S 2181400205:2181400205(0) win 65535  (DF)
16:48:13.006593 X.X.X.X.995 > {MACBOOK}.50144: R 1:1(0) win 0
16:48:13.007379 {MACBOOK}.50144 > X.X.X.X.995: . ack 1 win 65535 (DF)
16:48:13.007778 {MACBOOK}.50144 > X.X.X.X.995: P 1:307(306) ack 1 win 65535 (DF)
16:48:13.008437 X.X.X.X.995 > {MACBOOK}.50144: R 1:1(0) win 0
16:48:13.009220 X.X.X.X.995 > {MACBOOK}.50144: R 1:1(0) win 0

This shows an initial SYN and resends at roughly 700ms, 1000ms, 1300ms and 1600ms. We may infer that the network is congested. Finally the SYN/ACK arrives from the far end at about 1900ms. At this point (I believe) the TCP acceleration in the satellite modem has established state for the connection.

In an unfortunate (but annoyingly common) situation, there is another SYN resend from the Mac, arriving at the firewall 2ms after the SYN/ACK was dispatched. Notably, it will have been dispatched by the Mac before it has seen the SYN/ACK. And herein lies the Gilat modem bug.

The modem believes the connection is ready; it has seen the SYN/ACK. It now sees the extra SYN resend, and decides that this is an invalid attempt to make a new connection from the same source on the Mac to the same target host. In response it declares the connection invalid and sends an RST in reply to the extra SYN. Meanwhile the Mac sees the SYN/ACK and sends an ACK as normal, following up with a PSH of the first data packet. Both of these also get RST packets sent back. The Mac reports "connection reset by peer". Badness.

Better behaviour from the modem would be to accommodate a SYN resend from the client Mac, at least during a small window after connection setup.

Better behaviour from the Mac would be to send SYNs far less often. A more normal TCP stack (such as that in the firewall) sends its first resend a whole 6 seconds after the first SYN. The only use for a faster resend is snappier recovery in the event of the first SYN being dropped; in these days of buffer bloat, dropping SYNs no longer looks like a common response to congestion - instead, SYNs can clearly be delayed rather than lost.

By diverting outbound TCP through relayd we are effectively replacing the Mac's overly eager SYN timing with the saner timing of the OpenBSD host, and the problem basically goes away.
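
For the curious, the essence of what relayd does for us here can be sketched in a little Python. This is only an illustration of the proxying idea, not a replacement: the target below is a placeholder, whereas relayd's "forward to destination" recovers the original destination of each diverted connection for itself.

```python
import socket
import threading

LISTEN = ("127.0.0.1", 8888)    # matches the divert-to rule
TARGET = ("example.com", 995)   # placeholder: relayd recovers the real destination

def pump(src, dst):
    """Copy bytes one way until EOF, then shut down the write side."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve():
    srv = socket.create_server(LISTEN)
    while True:
        client, _ = srv.accept()
        # This outbound connect uses the proxy host's (saner) SYN retransmit timing.
        upstream = socket.create_connection(TARGET)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```

The point is simply that the SYN handshake the satellite modem sees is now made by the proxy host's TCP stack, not the Mac's.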



Saturday, May 13, 2006

UNIX versus iTunes and the iPod, via MacOSX

Our household has been, um, blessed with an iPod for some months now. Currently it is being managed by iTunes from a Mac. However, our music collection lives on our main machine that runs Linux, and that has caused some problems.

The standard procedure is: rsync the music directory from the server to the Mac (over the wireless LAN, ouch!), then import the directory into iTunes, and then maintain the iPod from there. This is tedious but has until recently worked: the rsync is slow over the wi-fi, importing into iTunes seems to involve having a complete copy of all the imported music, and then you have to plug in the iPod and sync.

Then we stepped out of our cosy English niche and ripped an African CD, with French song titles.

It ripped just fine, and we have these nicely named files in the main music tree. So we blithely went to rsync the files to the Mac. No go. Rsync's attempts to create the files on the Mac failed. It turns out that the Mac's HFS Plus filesystem uses Unicode UTF-8 Normal Form D encoding for its filenames and prevents creation of files whose names' byte sequences are not in this form.

UNIX filenames are simply byte sequences. There is no specified encoding. Since we were creating filenames from the FreeDB index, I expect these names are in Latin-1 encoding. The FreeDB Database Format Specification states that ``Database entries must be in the US-ASCII, ISO-8859-1 (the 8-bit ASCII extension also known as "Latin alphabet #1" or ISO-Latin-1) or UTF-8 (Unicode) character set'' but unhelpfully provides no indication as to how one should identify the encoding in use.

Our current tree contains Latin-1 encoded filenames, so we will be sticking with that for now. What is then needed is a way to make a tree we can rsync to the Mac. So the plan is to keep a hard-linked directory tree beside the "master" music directory which contains UTF-8 NFD encoded names. To this end I have made a small tool called macify which will maintain a link tree.
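
The core of such a tool is small. A Python sketch of the idea follows; the real macify would need to walk the tree as raw bytes to cope with the latin-1 names, so this just shows the normalisation and hard-link mechanics:

```python
import os
import unicodedata

def nfd_name(name, src_encoding="latin-1"):
    """Decode a raw UNIX filename (latin-1 per our rip tree) and return
    it in Unicode Normal Form D, as HFS Plus stores names."""
    if isinstance(name, bytes):
        name = name.decode(src_encoding)
    return unicodedata.normalize("NFD", name)

def macify(master, linktree):
    """Maintain a hard-linked shadow of `master` whose filenames are NFD,
    suitable for rsync to a Mac."""
    for dirpath, dirnames, filenames in os.walk(master):
        rel = os.path.relpath(dirpath, master)
        shadow = linktree if rel == os.curdir else os.path.join(linktree, nfd_name(rel))
        os.makedirs(shadow, exist_ok=True)
        for name in filenames:
            dst = os.path.join(shadow, nfd_name(name))
            if not os.path.exists(dst):
                os.link(os.path.join(dirpath, name), dst)
```

Hard links mean the shadow tree costs almost nothing on disk, and rsync from it to the Mac sees only acceptably-named files.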

Friday, March 31, 2006

asynchronous replies in mutt

David Woodhouse talks about Jack's rhetorical post "why did we ever abandon Mutt and Pine?", and says he still uses pine on handheld devices. His second of two main reasons for defecting to Evolution is:
Secondly, I very much like composing new email in separate windows, rather than in the main mail reader window. I have a habit of hitting 'reply', perhaps making a half-hearted attempt to respond to an email, and then getting distracted and leaving it for days before I eventually find the composer window on my desktop again, finish it off and send it. If I didn't do that, then stuff would just get lost and I'd never reply to it.
Well, I'm a committed mutt user and I have a similar problem with replies; I often defer them because I can't do them justice right now, and never return. The message gets buried.

There was a recent thread on the mutt lists, started by Jamie Rollins, about "pseudo multi-threading", wanting to dispatch mail composition in a separate window. There turn out to be a few people who do things like that, and a few scripts.

For myself, I don't want a separate window. I want always to start in-line with my reply, but perhaps abandon it and resume later. So... screen! Now I have a script called muttedit that I use as the mail editor; my .muttrc now says "set editor=muttedit". It copies the temp file mutt makes and runs screen, invoking "mutt -H" with the copy. If I complete the reply and quit, I'm still in my original mutt as if I had used a plain editor. To defer it, I detach from screen. I'm back in mutt, and there is a screen session lying around holding the pending reply. Muttedit takes a bit of care to give the screen session a nice fat title with the reply subject in it.
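
For illustration, the essential move muttedit makes can be sketched like this (the session title format and the exact screen invocation are guesses at the real script):

```python
import os
import shutil
import subprocess
import tempfile

def screen_command(draft, subject):
    """Build the screen invocation: a session whose window title carries
    the reply subject, running "mutt -H" on the copied draft."""
    return ["screen", "-t", "reply: %s" % subject, "mutt", "-H", draft]

def muttedit(mutt_tmpfile, subject="(no subject)"):
    """Copy mutt's temp file (mutt removes its own copy when the editor
    returns) and hand the copy to mutt -H inside screen."""
    fd, draft = tempfile.mkstemp(prefix="muttedit-")
    os.close(fd)
    shutil.copy(mutt_tmpfile, draft)
    subprocess.call(screen_command(draft, subject))
```

The copy is the crucial step: mutt deletes its own temp file when the "editor" exits, so the pending reply must live in a file the deferred session owns.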

We'll see how it goes.

Saturday, March 18, 2006

how to scuttle a counter-terrorism hotline

I see in today's ABC News "Terrorism hotline callers may be monitored". I presume it's the same hotline touted in the frequent TV advertisements for reporting suspicious stuff, reassuringly accompanied by the words "and you can remain anonymous". Clearly that's voided now. Very clever. It can only dampen the response from anyone with insider knowledge of some operation.

moving from fetchmail to getmail

As mooted previously, my mail collection is largely bound by the bandwidth from my ISP to me. However, it's not entirely bound by that. Some of it is bound by the delivery cost of each message at my end.

Until this morning I had been using fetchmail to collect my email. Generally I'm very happy with it. It has a concise and human-friendly configuration file. It is fairly easy to use. It flexibly delivers via the program of your choice, typically procmail for people working this way. However, it only delivers via an external program. The default is the local sendmail on your system, and most people choose procmail if they override that default. This basically means a program fire-up per message. It is not a big cost, but it's noticeable. Because fetchmail is quite careful about ensuring delivery before deleting something from your POP mail spool, this cost is inline with the data transfer, and thus makes for a little latency.

My mail delivery is a bunch of decoupled programs; something fetches from my ISP and drops messages into a spool Maildir folder (core documentation); another script collects messages from there and files the spam in a spam folder and the ham in the spool-in folder; a third script scatters things from there to the final destination folders according to my rules. It may seem overly complex but it keeps the tasks nicely separated for easy tinkering and works quite well. For example I can refile messages simply by copying them from whatever folder I'm reading into the spool-in folder; there is no fear of having them miscategorised as spam and I don't need to invoke some special program or incantation to kick off the refile.

Still, that's not today's point. The issue here is that the initial delivery goes unconditionally into a single mail folder. I have been using a trite procmailrc with no rules and a DEFAULT=$MAILDIR/spool/ line. However, procmail must still be invoked once per message. If fetchmail could deliver directly to a Maildir I'd just be doing that. However, fetchmail restrains itself to collection from POP or IMAP and delivering to a local mail agent such as sendmail or procmail, and does not sully itself with direct mail folder delivery.

In consequence, and only because my fetch delivers to a mail folder without any other smarts, I have moved to using getmail. Like fetchmail, it can hand off to a mail agent for delivery, but it can also deliver directly to a mail folder. So now I do that. The fetches are now slightly quicker, enough to notice.
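
For reference, a getmail configuration along these lines looks roughly like the following; the server, username, password and folder path are placeholders, and the option names should be checked against the getmail documentation:

```ini
[retriever]
type = SimplePOP3Retriever
server = pop.example.com
username = me
password = secret

[destination]
type = Maildir
path = ~/mail/spool/
```

The Maildir destination is the whole point: messages land straight in the spool folder with no per-message program fire-up.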