Quoting property names in JavaScript

JavaScript is relatively lax when it comes to valid object syntax. Property names can be quoted or unquoted in most circumstances. These two examples are identical.
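For instance (the property names here are arbitrary):

```javascript
// Unquoted property names…
const unquoted = { type: 'object', maxLength: 5 };
// …and quoted property names produce exactly the same object.
const quoted = { 'type': 'object', 'maxLength': 5 };
```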

I’ve noticed recently that at some point I developed my own quoting style, one that seems to be rather uncommon, so I decided to finally share it and explain why I’ve come to use it. I’m sure I’m not the first, nor will I be the last, to come up with this system, but it’s probably worth a blog post anyway.
A lot of the time in my job I find myself having to work with JavaScript objects derived from JSON (or JSON Schema). Functions consume them, or construct them, and output a modified version of them based on user input. These objects have very strict rules over what properties they can have, but certain property keys are user configurable (in JSON Schema, these are ‘additional properties’).
NB: By user input here I mean any input into your function. It could be user input, or it could be input from an external component that interfaces with your functions in some manner.
Consider the following example JSON Schema where quoted property names are never used.
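An illustrative schema along those lines (the field names filed, somethingElse and count stand in for user-chosen names; everything else is a JSON Schema keyword):

```javascript
const schema = {
  type: 'object',
  properties: {
    filed: {
      type: 'object',
      properties: {
        somethingElse: { type: 'string' }
      }
    },
    count: { type: 'integer' }
  },
  additionalProperties: false
};
```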

The same example, but this time with every property name quoted.
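Which looks like this (with the same stand-in user-chosen names):

```javascript
const schema = {
  'type': 'object',
  'properties': {
    'filed': {
      'type': 'object',
      'properties': {
        'somethingElse': { 'type': 'string' }
      }
    },
    'count': { 'type': 'integer' }
  },
  'additionalProperties': false
};
```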

Now tell me. At a glance, is it obvious which property names are open to manipulation by the user, rather than being names defined by the schema (in this case JSON Schema)? At a glance, what is the type of the somethingElse property under filed?
Contrast that with the following example, where I quote fields as I would under my system.
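Under this system only the user-chosen names carry quotes, while JSON Schema’s own keywords stay bare (the names themselves are again stand-ins):

```javascript
const schema = {
  type: 'object',
  properties: {
    'filed': {
      type: 'object',
      properties: {
        'somethingElse': { type: 'string' }
      }
    },
    'count': { type: 'integer' }
  },
  additionalProperties: false
};
```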

To me it makes it much clearer which properties the user chose, and also makes it easier to visually navigate, e.g. if this is in a test file and I need to update a particular field’s type to reflect new behavior, because syntax highlighting now colors them differently.
This example happened to be an instance of JSON Schema, but I feel this technique works equally well for any object where the user can control the structure of the object, as opposed to just the content. Every object your code is working with will have some structure to it.
More generally I guess I’d phrase this as quoting significant property names, or ‘Significant Quoting’ as opposed to always quoting or never quoting.
Obviously there are a couple of drawbacks to representing objects this way in test files. JavaScript quoting rules are somewhat complex, and so mandating that a property name be quoted or not sometimes has to depend on the name itself rather than on the source of the property name. The other drawback is that there is no linting rule I’m aware of to enforce this. Which, as an advocate of automated rule enforcement and correction, bugs me. A linting system to enforce this could probably be devised with enough investment and time defining TypeScript-style rules or annotations for the objects you work with.
Despite those drawbacks though, I really do feel that in code and especially in test files, this quoting system can make it much easier and quicker to comprehend the structure and navigate down a complex object.

Questions on the Bethesda reselling drama

For context, the drama I’m referring to is described over in this Polygon article. It’s been updated a couple of times since original publication, so you may not have read the most recent version.
A brief summary, however, is that a firm hired by Bethesda demanded that a third-party Amazon seller take down their listing for a still-sealed game that they no longer wanted and had listed as ‘new’. The reasoning given by the firm is that they are an unauthorised seller, and thus the game comes without a [manufacturer] warranty (I assume this is because the warranty only applies when bought through an authorised retailer). This, they claim, creates a material difference which means that first-sale doctrine (remembering we’re talking about the US here) does not apply. Notably they are not, as far as we can tell, going after products listed as ‘used’. Presumably because a ‘used’ listing makes clear that any manufacturer warranty likely wouldn’t apply.
Now with the background set this raises some interesting (to me) questions.
  • Is ‘new’ an accurate description of the product in this case?
  • Does it being listed as new imply that it comes with a manufacturer warranty?
  • Should a manufacturer warranty be able to be limited to just purchases through an authorised seller, bearing in mind that in this case it’s unopened, unused software and thus should be in factory-perfect condition?
Let’s imagine this wasn’t a marketplace order (a marketplace seller, afaik, is never the default seller shown), but instead an order from a formal business who can be the default seller.
Would a consumer know it’s an unauthorised seller? As it was listed on Amazon, Amazon might default to showing this seller, depending on their Algorithms, when a consumer searches for the game. The only indication would be the ‘sold by’ tagline.
What about if this merchant uses Fulfilled by Amazon, where the seller ships goods to an Amazon warehouse and, when an item sells, it ships from an Amazon warehouse (which means Prime delivery/shipping fees for the consumer)? This seems straightforward until you remember that Amazon will commingle inventory from different sellers, including their own.
(Slight segue: what this means is that if Amazon, Seller A and Seller B all sell Widget X, Amazon will put their own inventory and the inventory shipped to them by Seller A and Seller B all in the same bins. When you buy Widget X ‘sold by Seller A, shipped by Amazon’, you might get the Widget X that Seller A shipped in to Amazon, or one that Seller B shipped in, or even one Amazon themselves directly received. Sellers can opt out of commingling like this – but it involves paying Amazon more money.)
In this commingled case someone may have bought a copy ‘sold by Amazon’ – that will be printed on their receipt, and they’re presumably entitled to the manufacturer warranty, Amazon being an authorised seller… but they might actually receive a copy stocked into an Amazon warehouse by an unauthorised seller (who, for instance, may have bought their copies in bulk in a sale, and then listed them once the sale ended at a small markup on the sale price, below the current non-sale price). Does this matter? Does it even make sense to have a concept of ‘authorised sellers’ when commingling is involved?
I ask these questions because the discussion around this seems to be getting caught up in the implications for selling used copies – which is, to me, a much less interesting discussion given that this case is specifically around something being sold as ‘new’.

I think my caching problem is fixed

A post will follow shortly (hopefully) outlining the pain I went through to get it working again.
(And yes, this does serve as a test post for the changes I just battled with)
EDIT: Turns out it isn’t yet. Will keep trying.
EDIT 2: Only problem left seems to be invalidating the main page cache on posting / editing posts. It automatically expires fine though.
EDIT 3: Debug might be helpful?

Suddenly posts from months ago

So you may have noticed that there are suddenly three new posts since March, none of which are from now.
The short answer is caching fail.
For whatever reason the caching plugin I was using didn’t expire its caches when I published posts. So although navigating directly to the post itself worked (if you followed my Twitter or Facebook links to the exact post), it wouldn’t show up on the main feed, nor presumably in RSS.
So to those of you who rely on RSS for keeping up to date, welcome back! At least you only have three posts to catch up on though!
*Will manually verify that caching is working properly when I post this post*

Google+ Names

So, Google have decided to abandon their requirement that you use real names, or (if you’re some kind of celebrity) your stage name, on your Google+ profile.
This is now better than Facebook.
They do at the moment still require a ‘first name’ and a ‘second name’. But unlike Facebook, their policy about how often you can change your name is somewhat… vague.

Limited number of name changes : After you’ve created or edited your name, you may need to wait for up to three months to change it again. It will depend on how recently you created your profile and when you last changed your name.

Up to three months is a bit longer than Facebook’s 60 days, but it sounds as though it might be less depending on what Google’s Super Secret Inflexible Authoritative Algorithms say. Also the bold section implies that there might be an upper limit beyond which the algorithm will just say no.
Confusingly, in another place it says:

The solution : You can change your name three times every 90 days. If you’ve recently changed your name three times, you may need to wait for up to three months to change it again.

So…. yeah. I think what it means is that when you change your name, if you’ve had two other names within 90 days, you’re stuck with it until 90 days from the first name change have passed. But I’m honestly not sure.
I’d like the name change restrictions to be much clearer. (Or ideally have no restrictions). They also say that

Some names aren’t allowed. For example, we don’t allow names that are too long, include symbols or numbers,

So… yeah. Apologies to those with hyphenated names. I do wonder what happens if you try putting Katakana in there though.
Still, it’s an improvement, and I think qualitatively they’re now on par with, or maybe slightly behind, Facebook. Although I haven’t actually done a full review of Google’s policies in totality when it comes to names.
(I would apologise for how this blog has turned into ‘bash the name requirements of social networks’, but some can get it right, e.g. emoj.li – where your username, and all messages, must be emoji. Names are very, very important to me at the moment. So I’m not sorry, and I’m not going to apologise if this kind of stuff bores you.)

Facebook's Real Name Policy — Revisited

So, the post I made about Facebook and their Real Name Policy.
There’s been an update
Where it used to say you could only change it four times,
it now says that you can change your name once every 60 days.
(A screenshot from the help center, because I changed my name before taking a screenshot)
Their username policy hasn’t changed though – only one change allowed, and it should contain your real name. Hopefully they’ll update that policy when they realise how stupid it is to allow multiple real-name changes, but only one change of a must-contain-real-name username. They also still seem to be very restrictive about what they consider a ‘real name’.
Still, it’s an improvement.

On dismantling online identities

I’ve written before how I wanted to start killing my unified online identity.
And, just over a year since I posted that, it has begun.
The death knell has been struck for the previous linchpin – my Twitter. It was the easiest account of mine to find online (a work colleague found it with a 30-second Google search). It made it easy to follow links to numerous other online identities of mine, as I cross-posted. It was public.
Well, no more.
Yesterday I took the drastic step of just leaving that account. I’ve made a new Twitter. A protected Twitter, so that my tweets cannot be seen by all and sundry. I’ve added some of those that came first to mind as people who I trust with my new account. Hopefully this will enable me to censor myself less on there. There have been things I’ve wanted to retweet, things I’ve wanted to @ reply to, that I haven’t been able to because my tweets were public. And that was getting uncomfortable for me.
I’ll keep the old account lying around, I’ll use it mostly to tweet links to IRL things, e.g. if I get a new job or promotion; or tweet links to these posts. But my real Twitter is now only for my close friends. Those who have been let past my second level of barriers. It’s a shame. There are those I really do enjoy interacting with on Twitter – people from work, and those I am friends with, but I need to draw the line somewhere on the new account otherwise I will end up back in the same situation as before.
So, with my Twitter now mostly inactive, I’ve taken one of the biggest steps towards splitting up my online identities. Obviously, my ISP and suchlike will still be able to correlate (I’m not yet using Tor for all accounts – it seems particularly pointless using it for accounts where people know who I am in real life), and there will always be the possibility of my social graphs causing a link. But that kind of seepage is a lot harder for a generally interested person to find than scrolling down my Twitter history to where I’ve linked to my other accounts.
In addition, as my Twitter was the source for most of my FB posts, that is also going to be going rather quiet. I mostly use FB for messages now anyway, so I guess not much has changed in that regard.
Now I just need to swap out my other accounts for new ones as well. That’s a lot easier when they don’t enforce real name policies. But that can wait until this new Twitter has settled down.
Speaking of which, it’s likely that there are people reading this who aren’t yet following my new Twitter, but could be. Contact me via private means if you’re interested (Twitter DM on my original account, FB message, Email etc). I do reserve the right to not share it though – as I said, I don’t want to start censoring myself on there like I have been before.

Dear Facebook, (On Real Names)

Sort your policies around ‘names’ and ‘usernames’ out. Please. Just please fix it. Here I’ll provide a source for everything I’m about to hurl at your stupid artificial petty little system that’s aimed entirely at trying to make our data seem more ‘accurate’ and ‘valuable’ and ‘easily analysable’.
So, here’s a list of 40 Falsehoods Programmers Believe About Names.
I’ll wait here while you go and peruse it.
*twiddles thumbs*
Done? Excellent. Now, let me go and grab a few screenshots, and we’ll start looking at how your system falls foul of it, and is causing me unnecessary pain.
Firstly, the Name section of the ‘General Account Settings’ screen.
So, here we are. Firstly, let’s click on the little ‘Learn More’ link to find out exactly what you mean by ‘real name’ (I’m hoping you won’t be falling foul of falsehoods 3 and 4. That wouldn’t be a good start).

Names can’t include:

  • Symbols, numbers, unusual capitalisation, repeating characters or punctuation
  • Characters from multiple languages
  • Titles of any kind (ex: professional, religious, etc)
  • Words, phrases, or nicknames in place of a middle name
  • Offensive or suggestive content of any kind

Other things to keep in mind:

  • The name you use should be your real name as it would be listed on your credit card, student ID, etc.
  • Nicknames can be used as a first or middle name if they’re a variation of your real first or last name (like Bob instead of Robert)
  • You can also list another name on your account (ex: maiden name, nickname, or professional name), by adding an alternate name to your Timeline
  • Only one person’s name should be listed on the account – Timelines are for individual use only
  • Pretending to be anything or anyone is not allowed

Oh. Oh. So that’s falsehoods 3 and 4 right there (one canonical name, and one full name at this point in time). You at least let us change the name, which gets you out of falsehoods 1, 2, and 7; although you limit the number of changes, which puts you foul of falsehood 5.
For those keeping count they’ve so far fallen to falsehoods 3, 4, 5.
By specifically calling out multiple languages as not being allowed, I’m going to give you a pass on falsehood 9, as I’m going to assume you’d be OK with me using a Japanese, non-Romanised name. But you call out ‘symbols’ and ‘unusual capitalisation’, among other things, as not OK. Would Japanese characters fall foul of this? It’s unclear, and arbitrary. Who are you to decide if capitalisation is unusual (falsehoods 12, 13, 16)? You call out as well that you don’t allow mixing languages – I think that might possibly be a violation of falsehood 10.
No titles of any kind? Even religious ones? Interesting. Titles are a kind of prefix. Falsehood 14. And for some would arguably be a large part of their identity that it could be disrespectful to not use it when communicating with them.
No numbers? Falsehood 15.
Falsehoods violated so far: 3, 4, 5, 10, 12, 13, 14, 15, 16
The Display As box offers First Middle Last, First Last, Last First. That sounds suspiciously like thinking there’s an order to people’s names. Falsehood 8.
No suggestive or offensive language? Can I find this objective list? Are you assuming that you can know what is suggestive or offensive in every single country and culture in which you operate (or which you allow users to select as their country from that nice drop-down list you have)? Falsehood 31.
On the off chance someone does have a name that falls foul of this, then they must be a weird outlier. Falsehood 39. (Bonus points for alienating a user.)
I’m going to give you a pass on falsehood 40, that people have names, as even I recognise that is a far larger battle than just Facebook.
So, there we have violations of: 3, 4, 5, 8, 10, 12, 13, 14, 15, 16, 31, 39
There’s probably more there, but I’m not going to start trying to use up my limited name changes to see if you fall foul of the internationalisation related falsehoods.
You see, Facebook, you also have this nice little thing called ‘usernames’ that dictates what address a Facebook user’s profile can be accessed at (facebook.com/USERNAME). Let’s take a little look, shall we?
Oh. Oh dear. Oh dear dear dear.

Should include your real name

I can change my username once. I hereby revoke your exemption from falsehoods 1, 2 and 7. You’re also trying to ram people’s names into a URL. That’s… probably going to end badly. I have better things to do, though, than waste my one username change to check if this is the case, so I’m not going to say you fall into any falsehood there.
Let’s click that little question mark icon, to see if that has any wisdom for us.

  • You can’t claim a username someone else is already using.
  • Choose a username you’ll be happy with for the long term. Usernames are not transferable and you can only change your username once.
  • Usernames can only contain alphanumeric characters (A-Z, 0-9) or a period (“.”).
  • Periods (“.”) and capitalisation don’t count as a part of a username. For example, johnsmith55, John.Smith55 and john.smith.55 are all considered the same username.
  • Usernames must be at least 5 characters long and can’t contain generic terms.
  • You must be manager-level admin to choose a username for a Page.
  • Your username must adhere to Facebook’s Statement of Rights and Responsibilities.

Oh. oh dear. This is worse than I thought.
Let’s try and reconcile this shall we?

Usernames can only contain alphanumeric characters (A-Z, 0-9) or a period (“.”).

Combined with

Should include your real name

Well. That’s EVERY. SINGLE. INTERNATIONALISATION. THING. FAILED. Falsehoods 9. 10. 11. 24. 25. 26.
And, as you force everyone to use a username with their real name, that means you think that the number of duplicate names is low enough that the amount of crap people will have to add to get a unique username is low. That’s Falsehood 23.
Bang-up job there, Facebook. A field that shouldn’t even have anything to do with someone’s real name single-handedly fails 10 falsehoods, and that’s before I go back over the earlier ones related to the real name policy (such as offensive names).
So Facebook.
Your grand total of failures here: 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 23, 24, 25, 26, 31, 39.
That’s 21 failures, on a list of 40. And that’s with me being generous, because I’m not wasting my precious name changes on checking out your validation.
– Sincerely,
A User Who Just Wanted A New Username So My Profile Can’t Be Easily Found.
A User Who Really Wants To Get Rid Of Their ‘Legal’ Name From Facebook.
A User Who Hates You More Than Ever.

Stripe CTF3 write-up

I’ve been kinda distracted throughout work for about a week now, because of the third Capture the Flag competition hosted by Stripe. The first CTF was on application security – you had a ‘locked down’ account on a computer, possibly with various programs available. And you had to break out of the account to read a file you weren’t meant to be able to access, containing the password for the next level. I wasn’t aware of that first competition.
The second one was on web application security – you were presented with web access to a web app, and the source code for the apps (each level in a different language), and had to exploit various vulnerabilities in order to get the password for the next level. The web apps ranged from basic HTML form processing, to rudimentary AJAX Twitter clones, to APIs for ordering pizza. The vulnerabilities ranged from basic file upload validation, to SHA-1 length extension attacks, to JavaScript injection, all culminating in a level that involved using port numbers to dramatically reduce the search space for a 12-character password. I completed that one and won a t-shirt.
The third one, the one that has just happened, was different. It was ‘themed’ around distributed systems, rather than security. You’d be given some sample code that you’d have to speed up, either by finding bottlenecks in the code or by making the system distributed and fault tolerant. Spoilers will follow. Although the contest is now over, the level code (and associated test harness) is available here if you still want a go. I will note that it’s entirely possible to rewrite each level into a language you’re familiar with (I didn’t take that approach though, given that half the fun is not knowing the language).
So. To details.
I didn’t manage to finish it, although I made it to the last level, into which I sunk far more time than was healthy – I’m fairly certain my tiredness at work for a couple of days was due to this level.
Level 0
Level 0. A basic Ruby program that reads in text, and if a word appears in a dictionary file it will enclose the word in angle brackets.
I didn’t know Ruby, but I had a good inkling of where to tackle this level, given how simple the program was. A quick google of Ruby data structures, and Ruby’s String split() method, confirmed my idea. The original code did a string.split() on the dictionary and then repeatedly looked up each word against the Array that function returns. By transforming that array into Ruby’s notion of a Set, I could gain the speed boost from super-fast hash-based checking.
I also modified the comparison to do an in place replacement as it saved the cost of duplicating the entire string. I’m unsure how much weight that had against the Array->Set change.
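The Array-versus-Set idea can be sketched in JavaScript rather than the level’s Ruby (the dictionary contents here are made up):

```javascript
// Checking membership against an Array scans its elements on every
// lookup; a Set membership check is a single hash lookup per word.
const dictionaryText = 'the\ncat\nsat'; // stand-in dictionary file
const dictionary = new Set(dictionaryText.split('\n'));

// Enclose dictionary words in angle brackets, as the level required.
function tagWords(text, dict) {
  return text
    .split(' ')
    .map(word => (dict.has(word) ? `<${word}>` : word))
    .join(' ');
}
```

With the stand-in dictionary above, `tagWords('the dog sat', dictionary)` yields `'<the> dog <sat>'`.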
Level 1
A bash script that tries to mine the fictional currency of Gitcoin. Gitcoin is essentially like Bitcoin. You “mine” gitcoins by adding a valid commit to the repository. That commit must modify the ledger file to add one to your own total of gitcoins. A valid commit is one whose commit hash is lexicographically less than the value contained in difficulty – that is to say, if the difficulty contained 00001 your commit hash would have to start with 00000[0-F]. Because of how git works you have to find such a commit before anyone else mining against the same repository finds a valid commit.
There was one main thing in this level to fix. And that’s the call out to git that hashes the candidate commit object to see if it’s valid. If it isn’t, it alters the commit message text in some way, and then hashes again. This is slow, for a couple of reasons. Git likes to lock its repository files during operations, so you can’t do parallel searches for valid commits. But also git objects have to have a very specific format, which git takes time to go and generate before returning the hash. The final thing is that each commit contains the hash of the parent commit as part of it, so naturally, should another miner find a gitcoin before you, you have to start the search over again.
To speed this up, I moved the SHA1 testing over to Python. I formed the commit object that git creates manually – the header consisting of the term “commit ” and the length of the commit body, followed by a null byte. I left the body (which itself has to have a very specific format) as it was in the original script. I called Python’s SHA1 library to do the work, which is a non-blocking operation, thus meaning I could set 8 separate processes going at once, each trying a unique set of commit messages. Upon success they then spat out a commit message into a file.
Annoyingly my solution then became quite clunky, with myself manually changing the filename to read in a copy of the original script that bypassed the searching. That pushed the correct commit. Ideally I’d have automated that into the first script, but it was enough to get me a valid commit pushed to Stripes git servers, thus meaning the next level was unlocked.
Incidentally this level had a bonus round, where instead of competing against four Stripe bots mining, you’d be competing against the other players who had completed the level. Needless to say, people very quickly started throwing GPU-based SHA1 tools at it, and I was outclassed by a wide margin.
Level 2
Node.js – again I had no experience (although I do know JavaScript). You were given skeleton code that had to mitigate a DDoS attack. Your code would be placed in front of an under-attack web service, and it had to ensure all legitimate requests got through and, strangely enough, let through enough illegitimate requests to keep the servers busy, but not falling over. (You lost points in the scoring system for how long the target servers were idle.)
In practice this was rather simple, as the requests were easily differentiated – each legitimate IP would only make a few requests, relatively far apart. Simply keeping a log of the IPs seen and when they were last seen was enough to differentiate the legitimate mice from the illegitimate elephants. You also had to load balance between the two servers that were available – they would fall over if they had to process more than 4 requests at a time. You knew how long each request would have before the backend servers timed the connection out, so by keeping a log of when each request was proxied, and to which server, you could check how many requests were likely to still be on the server.
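Both ideas can be sketched like so (the thresholds and names here are invented; the real level’s numbers differed):

```javascript
const REQUEST_TIMEOUT_MS = 5000; // how long a backend holds a request
const MAX_IN_FLIGHT = 4;         // more than this and a backend falls over

const lastSeen = new Map();      // ip -> time of that ip's previous request
const inFlight = [[], []];       // per-backend timestamps of proxied requests

// Legitimate mice make few requests, far apart; elephants hammer away.
function isElephant(ip, now) {
  const previous = lastSeen.get(ip);
  lastSeen.set(ip, now);
  return previous !== undefined && now - previous < 1000;
}

// Route to the less-loaded backend, shedding load beyond the cap.
function pickBackend(now) {
  for (const list of inFlight) {
    // Requests older than the timeout are no longer on the server.
    while (list.length && now - list[0] > REQUEST_TIMEOUT_MS) list.shift();
  }
  const idx = inFlight[0].length <= inFlight[1].length ? 0 : 1;
  if (inFlight[idx].length >= MAX_IN_FLIGHT) return null; // both full
  inFlight[idx].push(now);
  return idx;
}
```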
Pretty simple.
Level 3
The penultimate level. Scala. I had great trouble with the language on this one, I suspect partly because it’s close enough to Java that I get confused mentally when translating what I want to do into the Scala syntax.
You were given four servers – a master one, and three slave servers that would never be contacted by the test harness. You were provided with a directory, and you had to index all the files under it. Then you had to respond to a barrage of search requests (for which you were also expected to return substring matches).
The default code was incredibly poor, so some immediate optimisations were obvious. Firstly, the master server only ever sent search requests to the first of the slave nodes, which also had to index and search the entire corpus. There are two approaches here – split the corpus and send each search to all nodes, or split the searches but make each node index the entire corpus. I went with the former. I split the corpus based on root subdirectory number, so slave0 would index a subdirectory when subDir % 3 == 0. Any files directly under the root directory would be indexed by all nodes.
The second obvious improvement was that the index was an object containing a list of files that the searcher needed to search. That object was serialised to disk, and the searcher would read it back in. Then for each query it would go off and load the file from disk before searching it. My first change was to never serialise the object out, but keep it in memory. That didn’t make much of a difference. Then two options presented themselves. I could construct an inverted index – one that would contain each trigram (as I had to handle substring searches) and a list of the files and lines where that trigram was found. Or I could take the lazy option of reading all the files in at indexing time (you had 4 minutes until the search queries would start) and storing those directly in the in-memory index. I took the lazy option. I transformed the index list into a HashMap of FilePath to Contents. And that pretty much got me to pass. Somehow. I don’t feel like that was enough work myself, but that was more than made up for by the last level.
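The lazy option amounts to this (a JavaScript stand-in for the Scala; file contents are passed in directly for illustration):

```javascript
// The 'index' is just a Map of file path -> contents, built once up
// front so no disk reads happen while queries are arriving.
function buildIndex(files) {
  return new Map(Object.entries(files));
}

// Substring search, reporting path:line for every matching line.
function search(index, needle) {
  const hits = [];
  for (const [path, contents] of index) {
    contents.split('\n').forEach((line, i) => {
      if (line.includes(needle)) hits.push(`${path}:${i + 1}`);
    });
  }
  return hits;
}
```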
Level 4
I couldn’t crack this one. I tried for days. I think it was from Sunday through Wednesday, excepting some time out for the day job.
The language was Go. I know no Go. The challenge: a network of servers, each with a SQLite database. The network is unreliable, with lag and jitter randomly added, and network links broken for seconds at a time. Search queries will be directed to any of the nodes for 30 seconds. All answers they give as a network must be correct. You are disqualified instantly should you return an inconsistent answer. You gain points for each correct response. You lose points for every byte of network traffic. Oh, and unlike in the other levels, the sample code they provided you with doesn’t pass the test harness – it gets disqualified for inconsistent output.
So. This level was about distributed consensus – how to get multiple nodes to agree on the order of operations given communication problems. I’m just thankful we didn’t also have to contend with malicious nodes trying to join or modify the traffic. If you could get traffic through it was unmodified.
The starter help text contained pointers to a Distributed Consensus Protocol called Raft. Vastly simplifying the intricacies: Nodes elect a leader. Only the leader can make writes to the log (in this case an SQLite Database). The leader will only commit a log once a majority of nodes have confirmed that they have written to the log themselves. If the leader goes missing, the remaining nodes will elect a new leader.
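The commit rule at the heart of that can be sketched as follows (a heavily simplified illustration, not go-raft’s actual API):

```javascript
// The leader tracks, per follower, the highest log index known to be
// replicated there. An entry may commit once a majority of the
// cluster (counting the leader itself) holds it.
function majorityHasEntry(matchIndexes, entryIndex, clusterSize) {
  const replicas = 1 + matchIndexes.filter(i => i >= entryIndex).length;
  return replicas > clusterSize / 2;
}
```

In a five-node cluster, for example, an entry needs the leader plus at least two followers before it can be committed.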
There’s a library already written for Go, go-raft. This seemed like a sure-fire winner. Just drop in Raft, right? Although dropping the library in was very easy, it wasn’t that simple. Raft is a very chatty protocol, requiring heartbeat signals, leader elections and, in our case, request forwarding to the leader, as followers do not have the authority to commit to the log.
Beyond that though, the go-raft library had issues. It didn’t work out of the box with Unix sockets (which the test harness required), although Stripe had had a commit merged into go-raft’s master branch that made fixing that extremely simple. It could fail to elect a leader. It also had a bug that seemed to bite a lot of people in IRC – I only saw it once, and I’m still not sure what exactly the cause is – I suspect a missing/misplaced lock() that caused a situation with the log that is fatal for the Raft consensus algorithm.
After battling with Unix sockets and getting an excellent passing score locally – at one point I got 150 points normalised, whilst you only needed 50 to pass – I pushed to remote. And it fell over horrendously. I ended up with a negative point score before normalisation. Needless to say that was demoralising. It turns out that reading the original Raft protocol paper, understanding it theoretically, and getting it to work with some easier test cases is very different from getting it to work in a much more hostile set of conditions.
My problems on this level were compounded by the infrastructure regularly falling over and needing the Stripe guys to give the servers a kick or 10.
But beyond that, I feel that there’s something I failed to grok. When my connections could get through, it worked fine – SQL was always consistent, leaders were always elected, requests were forwarded properly (barring one case that I have since read about, where the request is forwarded and executed successfully but the response is lost due to jitter). And yet when running on remote I either suffered from End of File errors (i.e. socket closed), or requests timing out. Although I eventually managed to reproduce those issues locally by manually downloading the test case, it didn’t help me in diagnosing the problem – I regularly had a case where one node, in an entire 30-second test run, never managed to join the consensus (which takes a grand total of one successful request to do). And I didn’t know what to do. I think that the most valuable thing this level taught me, beyond the theory of distributed systems, is how bad I am at fixing problems when there are no errors directly caused by my code. As far as I could tell everything was fine – if I ran it manually without the test harness in the middle, it all worked. But when I put the test harness in, it all fell over. My own logic tells me that the problem must therefore be with the test harness. Except I know people managed to pass the level with go-raft. I need to go and look at some solutions people have posted to see how they coped.
At the end of the day, however fun this was overall, the last level left a bad taste in my mouth – the infrastructure problems were pretty endemic especially nearer the end, and the difference between local and remote in the last level was absolutely disheartening. I can accept some difference, but a score that locally is three times higher than the threshold (after normalisation) shouldn’t get negative points on remote. I just wanted the T-Shirt!

Choosing a new phone

In the vein of a previous post exploring why I chose to move my email over to Office 365, I shall today be exploring how I chose my new phone.
Or, more specifically, the OS of the phone (given that hardware doesn’t interest me as a thing – one black fondleslab is much like another black fondleslab).
As that previous post indicated, I currently have a BlackBerry. Not one of the new BBOS10 ones, but an older one (although it was new when I took my contract out).
The phone market today is radically different from the one in which I first switched to BlackBerry (4 going on 5 years ago). BlackBerry has essentially died a death (in the consumer market anyway; we’ll see if their refocus on enterprise, and the opening of BBM to other phone OSs, makes a difference). Android has risen to become the dominant phone OS – although the device manufacturers haven’t quite got the hang of OTA updates and multi-year support (I’ll get to the issue of so-called bloatware in a minute). The iPhone and its iOS have seen a more sedate rise, but have figured out OTA updates that cut the carrier out of the picture. Windows Phone has also emerged as a serious contender.
Between them, these four OSs have the overwhelming majority of the market – few people could name any other OS that is still going today. This post will take each in turn to weigh the advantages and disadvantages, for me, according to my needs and desires. I make no claims that my answer is the one true answer, or even that my disadvantages won’t be someone else’s advantages. (Although I am right, and everyone else is wrong.)
BlackBerry 10
There’s no denying that BlackBerry has had a rocky road recently. Their latest OS 10 is a major shift from their previous direction. A major UI overhaul, coupled with keeping their excellent security features, should stand them in good stead in this battle. But alas, I don’t want another BlackBerry – their troubles don’t bode well for their being around much longer, or at the very least suggest that consumers will not have the focus they once did. BBM is something I rarely use, and even if I did, there’s no longer any need for a BlackBerry itself. Even email, the killer feature it handled exceedingly well, is no longer a differentiator – the competition has caught up, and BlackBerry hasn’t advanced. Their attempt to boost their App Store by making their OS ‘compatible’ with Android apps speaks to me of desperation. A last gasp, as it were. Perhaps it will be enough, perhaps not. But I don’t want to take the risk that I’ll be left with an unsupported brick a couple of years down the line (phones to me are at least a two-year investment, if not more).
Windows Phone 8
A relatively recent contender, Windows Phone 8 is Microsoft’s latest attempt to break into the mobile market – a successor to the previous Windows Phone 7 and the Windows Mobile OS family. It inherits a lot of its look from Windows 8 and its Metro UI, and this certainly makes it the most distinctive of the OSs out there. Yet it hasn’t been a massive success, although it is showing steady growth. Perhaps it came too late to the market, or perhaps it hasn’t been marketed well – a common feature of Microsoft’s mobile attempts. One thing is certain though – app developers haven’t gone crazy for it. I only use a core set of apps on my phone regularly (mostly social media), but I do like to try apps out, and part of me wonders if that urge is due in large part to BlackBerry’s abysmal app selection.
iOS 7 on iPhone
I already have many Apple devices. I use a MacBook Pro at home, I have an iPod Touch which is my media centre, and I have an iPad which sees infrequent use. I have a large collection of apps on my iPod, although again only a core set that I actually use. So surely an iPhone is a natural next step? Well, maybe not. iPhones are expensive (I know, that’s hardware – but unlike the other OSs, device and OS are tied together here). I already have an iPod Touch for all my Appley needs. I know of no one who uses iMessage or FaceTime – so those have no appeal. My apps are already on my iPod Touch, and I don’t hate its wifi-only nature. There’s also Apple’s iCloud, which is very much a walled garden as far as syncing services go. I use it as minimally as I can for my needs right now (mostly to save connecting via cable to transfer photos).
Android
Oh, Android. Google’s attempt at a mobile OS. Phenomenally successful. Open source, except for when it’s not. Android. It came onto the scene with what was, at the time, a terrible UI, although the UI has improved dramatically with recent revisions. But then, with Android, the UI is kind of moot. It’s open source (except when it isn’t); people have written entirely separate launchers and themes – see many of the carrier/manufacturer-branded versions for examples. In fact, this makes it very hard to talk about Android in any meaningful detail. Google’s Android is very different from the open-source Android – the keyboard with the cool swipey-pathy-typey thing? Closed source. Google’s Mail app? Closed source. It’s well documented that Google has been closing down Android slowly but surely. And although you have the possibility of side-loading apps, very few are actually distributed like this; they almost all go through the Google Play Store. It seems that open source is a flag Google waves for community support, to blind the community to just how hard it would actually be to create a successful Android fork – look at what Amazon has to go through to clone the APIs provided by the closed-source Google Play Services. CyanogenMod also have to dance around the redistribution of the closed APIs that many apps assume are present, by backing up the original Google Apps and then reloading them after their version is flashed. And how meaningful waving the open-source flag is, when the core platform APIs of the project are developed in private… yeah.
I make no secret that I don’t trust Google these days. You are an advertising target to them. Everything they do that is intended for consumers will eventually feed back into their advertising algorithms. Which is why it may surprise you that I went with Android as my next phone OS. I’m not sure yet how I’ll remove or limit Google’s tendrils on the device. Running stock AOSP? Possibly, if I can get my social media apps to work without Google Play Services. Using a separate account for Play Store things? Possibly. I’ll most certainly be limiting app permissions as much as possible. I was surprised to learn that Android only recently got the ability to limit GPS access on a per-app basis – iOS has had Location Services controls for ages. Perhaps I’ll put CyanogenMod on it, although frustratingly I can’t find a full description on their site of what changes they actually make to AOSP. I’ll certainly disable Google Now and its always-listening “Ok Google”. I’d better buckle up, because this is going to be an interesting ride. Especially as I find apps I just want to try, if only for 5 minutes.