Dear Facebook, (On Real Names)

Sort your policies around ‘names’ and ‘usernames’ out. Please. Just please fix it. Here I’ll provide a source for everything I’m about to hurl at your stupid artificial petty little system that’s aimed entirely at trying to make our data seem more ‘accurate’ and ‘valuable’ and ‘easily analysable’.
So, here’s a list of 40 Falsehoods Programmers Believe About Names.
I’ll wait here while you go and peruse it.
*twiddles thumbs*
Done? Excellent. Now, let me go and grab a few screenshots, and we’ll start looking at how your system falls foul of it, and is causing me unnecessary pain.
Firstly, the Name section of the ‘General Account Settings’ screen.
[Screenshot: the Name section of General Account Settings]
So, here we are. Firstly, let’s click on the little ‘Learn More’ link to find out exactly what you mean by ‘real name’ (I’m hoping you won’t be falling foul of falsehoods 3 and 4. That wouldn’t be a good start).

Names can’t include:

  • Symbols, numbers, unusual capitalisation, repeating characters or punctuation
  • Characters from multiple languages
  • Titles of any kind (ex: professional, religious, etc)
  • Words, phrases, or nicknames in place of a middle name
  • Offensive or suggestive content of any kind

Other things to keep in mind:

  • The name you use should be your real name as it would be listed on your credit card, student ID, etc.
  • Nicknames can be used as a first or middle name if they’re a variation of your real first or last name (like Bob instead of Robert)
  • You can also list another name on your account (ex: maiden name, nickname, or professional name), by adding an alternate name to your Timeline
  • Only one person’s name should be listed on the account – Timelines are for individual use only
  • Pretending to be anything or anyone is not allowed

Oh. Oh. So that’s falsehoods 3 and 4 right there (one canonical name, and one full name at this point in time). You at least let us change the name, which gets you out of falsehoods 1, 2 and 7; although you limit the number of changes, which puts you foul of falsehood 5.
For those keeping count, you’ve so far fallen foul of falsehoods 3, 4 and 5.
By specifically calling out multiple languages as not being allowed, I’m going to give you a pass on falsehood 9, as I’m going to assume you’d be OK with me using a Japanese, non-Romanised name. But you call out ‘symbols’ and ‘unusual capitalisation’, among other things, as not OK. Would Japanese characters fall foul of this? It’s unclear, and arbitrary. Who are you to decide what counts as unusual capitalisation? (Falsehoods 12, 13 and 16.) You also call out that you don’t allow mixing languages – I think that might well be a violation of falsehood 10.
No titles of any kind? Even religious ones? Interesting. Titles are a kind of prefix. Falsehood 14. And for some people a title is arguably such a large part of their identity that it would be disrespectful not to use it when communicating with them.
No numbers? Falsehood 15.
Falsehoods violated so far: 3, 4, 5, 10, 12, 13, 14, 15, 16
The Display As box offers First Middle Last, First Last, Last First. That sounds suspiciously like thinking there’s an order to people’s names. Falsehood 8.
No suggestive or offensive language? Where can I find this objective list? Are you assuming that you can know what is suggestive or offensive in every single country and culture in which you operate (or that users can select as their country from that nice drop-down list you have)? Falsehood 31.
And on the off chance someone does have a name that falls foul of this, then they must be a weird outlier. Falsehood 39. (Bonus points for alienating a user.)
I’m going to give you a pass on falsehood 40 – that people have names – as even I recognise that is a far larger battle than just Facebook.
So, there we have violations of: 3, 4, 5, 8, 10, 12, 13, 14, 15, 16, 31, 39
There’s probably more there, but I’m not going to start trying to use up my limited name changes to see if you fall foul of the internationalisation related falsehoods.
BUT WAIT THERE’S MORE.
You see, Facebook, you also have this nice little thing called ‘usernames’ that dictates what address a Facebook user’s profile can be accessed at (facebook.com/USERNAME). Let’s take a little look, shall we?
[Screenshot: the Username section of account settings]
Oh. Oh dear. Oh dear dear dear.

Should include your real name

I can change my username once. I hereby revoke your exemption to falsehoods 1, 2 and 7. You’re also trying to ram people’s names into a URL. That’s… probably going to end badly. I have better things to do, though, than waste my one username change to check if this is the case, so I’m not going to say you fall into any falsehood there.
Let’s click that little question mark icon, to see if that has any wisdom for us.

  • You can’t claim a username someone else is already using.
  • Choose a username you’ll be happy with for the long term. Usernames are not transferable and you can only change your username once.
  • Usernames can only contain alphanumeric characters (A-Z, 0-9) or a period (“.”).
  • Periods (“.”) and capitalisation don’t count as a part of a username. For example, johnsmith55, John.Smith55 and john.smith.55 are all considered the same username.
  • Usernames must be at least 5 characters long and can’t contain generic terms.
  • You must be manager-level admin to choose a username for a Page.
  • Your username must adhere to Facebook’s Statement of Rights and Responsibilities.

Oh. Oh dear. This is worse than I thought.
Let’s try and reconcile this shall we?

Usernames can only contain alphanumeric characters (A-Z, 0-9) or a period (“.”).

Combined with

Should include your real name

Well. That’s EVERY. SINGLE. INTERNATIONALISATION. THING. FAILED. Falsehoods 9, 10, 11, 24, 25 and 26.
And, as you force everyone to use a username with their real name, that means you think that the number of duplicate names is low enough that the amount of crap people will have to add to get a unique username is low. That’s Falsehood 23.
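To make that concrete, here’s a tiny Python sketch of the kind of ASCII-only check those rules describe – the regex and the example names are my own illustration, not Facebook’s actual code:

```python
import re

# Hypothetical sketch of the ASCII-only rule described above.
USERNAME_RE = re.compile(r"^[A-Za-z0-9.]{5,}$")

def is_valid_username(name: str) -> bool:
    """Accept only five or more characters drawn from A-Z, 0-9 and periods."""
    return bool(USERNAME_RE.match(name))

# Perfectly real names that such a rule rejects outright:
for name in ["björk.guðmundsdóttir", "小林薫", "O'Brien", "Jean-Luc"]:
    print(name, is_valid_username(name))  # prints False for every one of them
```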
Bang-up job there, Facebook. A field that shouldn’t even have anything to do with someone’s real name single-handedly fails 10 falsehoods, and that’s before I go back over the earlier ones related to the real-name policy (such as offensive names).
So Facebook.
Your grand total of failures here: 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 23, 24, 25, 26, 31, 39.
That’s 21 failures, on a list of 40. And that’s with me being generous, because I’m not wasting my precious name changes on checking out your validation.
– Sincerely,
A User Who Just Wanted A New Username So My Profile Can’t Be Easily Found.
A User Who Really Wants To Get Rid Of Their ‘Legal’ Name From Facebook.
A User Who Hates You More Than Ever.

Stripe CTF3 write-up

I’ve been kinda distracted throughout work for about a week now, because of the third Capture the Flag competition hosted by Stripe. The first CTF was on application security – you had a ‘locked down’ account on a computer, possibly with various programs available. And you had to break out of the account to read a file you weren’t meant to be able to access, containing the password for the next level. I wasn’t aware of that first competition.
The second one was on web application security – you were presented with web access to a web app, and the source code for the apps (each level in a different language), and had to exploit various vulnerabilities in order to get the password for the next level. The web apps ranged from basic HTML form processing, to rudimentary AJAX Twitter clones, to APIs for ordering pizza. The vulnerabilities ranged from basic file upload validation to SHA-1 length extension attacks and JavaScript injection, all culminating in a level that involved using port numbers to dramatically reduce the search space for a 12-character password. I completed that one and won a t-shirt.
The third one, the one that has just happened, was different. It was ‘themed’ around distributed systems, rather than security. You’d be given some sample code that you’d have to speed up, either by finding bottlenecks in the code or by making the system distributed and fault-tolerant. Spoilers will follow. Although the contest is now over, the level code (and associated test harness) is available here if you still want a go. I will note that it’s entirely possible to rewrite each level in a language you’re familiar with (I didn’t take that approach though, given that half the fun is not knowing the language).
So. To details.
I didn’t manage to finish it, although I made it to the last level, into which I sank far more time than was healthy – I’m fairly certain my tiredness at work for a couple of days was due to this level.
Level 0
Level 0. A basic Ruby program that reads in text, and if a word appears in a dictionary file it will enclose the word in angle brackets.
I didn’t know Ruby, but I had a good inkling of where to tackle this level, given how simple the program was. A quick google of Ruby data structures and Ruby’s String split() method confirmed my idea. The original code did a string.split() on the dictionary and then repeatedly looked up each word against the Array that function returns. By transforming that array into Ruby’s notion of a Set, I could gain the speed boost of super-fast hash-based checking.
I also modified the comparison to do an in-place replacement, as it saved the cost of duplicating the entire string. I’m unsure how much weight that had compared to the Array->Set change.
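For illustration, here’s roughly the same idea expressed in Python (the actual level was Ruby, so treat this as a sketch of the concept rather than my submitted code):

```python
# Membership tests against a list scan the whole list;
# membership tests against a set are a single hash lookup.
words_list = open("/usr/share/dict/words").read().split()   # assumed dictionary path
words_set = set(words_list)

def highlight(text: str) -> str:
    # Wrap dictionary words in angle brackets, as the level required.
    return " ".join(
        f"<{w}>" if w in words_set else w   # swap in words_list here and it crawls
        for w in text.split()
    )
```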
Level 1
A bash script that tries to mine the fictional currency of Gitcoin. Gitcoin is essentially like Bitcoin. You “mine” gitcoins by adding a valid commit to the repository. That commit must modify the ledger file to add one to your own total of gitcoins. A valid commit is one whose commit hash is lexicographically less than the value contained in difficulty – that is to say, if the difficulty contained 00001 your commit hash would have to start with 00000[0-F]. Because of how git works you have to find such a commit before anyone else mining against the same repository finds a valid commit.
There was one main thing in this level to fix, and that’s the call out to git that mock-hashes the commit object to see if it’s valid. If it isn’t, it alters the commit message text in some way and then hashes again. This is slow, for a couple of reasons. Git likes to lock its repository files during operations, so you can’t do parallel searches for valid commits. But also git objects have to have a very specific format, which git takes time to generate before returning the hash. The final thing is that each commit contains the hash of the parent commit as part of it, so naturally, should another miner find a gitcoin before you, you have to start the search over again.
To speed this up, I moved the SHA1 testing over to Python. I formed the commit object that git creates manually – the header consisting of the term “commit ”, the length of the commit body, and a null byte. I left the body (which itself has to have a very specific format) as it was in the original script. I called Python’s SHA1 library to do the work, which doesn’t need git’s repository locks, meaning I could set 8 separate processes going at once, each trying a unique set of commit messages. Upon success they then spat out a commit message into a file.
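The hot loop I moved to Python looked something like this sketch – details such as the `{nonce}` placeholder and the exact worker numbering are illustrative rather than the precise script I ran:

```python
import hashlib

def commit_hash(body: bytes) -> str:
    # Git hashes "commit <length-of-body>\0<body>" with SHA-1.
    header = b"commit " + str(len(body)).encode() + b"\0"
    return hashlib.sha1(header + body).hexdigest()

def mine(body_template: bytes, difficulty: str, start: int = 0, step: int = 8):
    """Vary a counter embedded in the commit body until the hash beats the difficulty.

    Each of the 8 worker processes gets a different `start` so their
    search spaces don't overlap.
    """
    counter = start
    while True:
        body = body_template.replace(b"{nonce}", str(counter).encode())
        digest = commit_hash(body)
        if digest < difficulty:  # lexicographic comparison, as the level defines validity
            return body, digest
        counter += step
```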
Annoyingly, my solution then became quite clunky, with me manually changing the filename read in by a copy of the original script that bypassed the searching and just pushed the correct commit. Ideally I’d have automated that into the first script, but it was enough to get a valid commit pushed to Stripe’s git servers, thus unlocking the next level.
Incidentally, this level had a bonus round where, instead of racing four Stripe bots, you’d be competing against the other players who had completed the level. Needless to say, people very quickly started throwing GPU-based SHA1 tools at it, and I was outclassed by a wide margin.
Level 2
Node.js – again, a language I had no experience with (although I do know JavaScript). You were given skeleton code that had to mitigate a DDoS attack. Your code would be placed in front of an under-attack web service, and it had to ensure all legitimate requests got through, and, strangely enough, let through enough illegitimate requests to keep the servers busy without falling over. (You lost points in the scoring system for however long the target servers were idle.)
In practice this was rather simple, as the requests were easily differentiated – each legitimate IP would only make a few requests, relatively far apart. Simply keeping a log of the IPs seen, and when they were last seen, was enough to differentiate the legitimate mice from the illegitimate elephants. You also had to load balance between the two servers that were available – they would fall over if they had to process more than 4 requests at a time. You knew how long each request would have before the backend servers timed the connection out, so by keeping a log of when each request was proxied, and to which server, you could check how many requests were likely to still be on the server.
Pretty simple.
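In rough Python pseudocode (the real level was Node.js, and the thresholds here are placeholders rather than the level’s real numbers), the bookkeeping amounted to something like this:

```python
BACKEND_TIMEOUT = 2.0   # assumed: how long a backend holds a request before timing out
MAX_IN_FLIGHT = 4       # the level's limit of concurrent requests per backend

seen = {}                    # ip -> number of requests seen so far
in_flight = {0: [], 1: []}   # backend index -> timestamps of requests proxied to it

def is_legitimate(ip: str) -> bool:
    """Mice make a handful of well-spaced requests; elephants hammer away."""
    seen[ip] = seen.get(ip, 0) + 1
    return seen[ip] <= 4     # threshold is a placeholder

def pick_backend(now: float):
    """Return a backend with headroom, or None to shed the request."""
    for b in in_flight:
        # Forget requests old enough that the backend must have finished or timed out.
        in_flight[b] = [t for t in in_flight[b] if now - t < BACKEND_TIMEOUT]
    backend = min(in_flight, key=lambda b: len(in_flight[b]))
    if len(in_flight[backend]) < MAX_IN_FLIGHT:
        in_flight[backend].append(now)
        return backend
    return None
```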
Level 3
The penultimate level. Scala. I had great trouble with the language on this one, I suspect partly because it’s close enough to Java that I get confused mentally when translating what I want to do into the Scala syntax.
You were given four servers – a master one, and three slave servers that would never be contacted by the test harness. You were provided with a directory which you had to index all the files under. Then you had to respond to a barrage of search requests (for which you were also expected to return substring matches).
The default code was incredibly poor, so some immediate optimisations were obvious. Firstly, the master server only ever sent search requests to the first of the slave nodes, which also had to index and search the entire corpus. There are two approaches here – split the corpus and send each search to all nodes, or split the searches but make each node index the entire corpus. I went with the former. I split the corpus based on root subdirectory number, so slave0 would index a subdirectory when subDir % 3 == 0. Any files directly under the root directory would be indexed by all nodes.
The second obvious improvement was that the index was an object containing a list of files that the searcher needed to search. That object was serialised to disk, and the searcher would read it back in. Then, for each query, it would go off and load each file from disk before searching it. My first change was to never serialise the object out, but keep it in memory. That didn’t make much of a difference. Then two options presented themselves. I could construct an inverted index – one that contained each trigram (as I had to handle substring searches) and a list of the files and lines where that trigram was found. Or I could take the lazy option of reading all the files in at indexing time (you had 4 minutes until the search queries would start) and storing those directly in the in-memory index. I took that option. I transformed the index list into a HashMap of file path to contents. And that pretty much got me a pass. Somehow. I don’t feel like that was enough work myself, but that was more than made up for by the last level.
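A rough Python sketch of the scheme (the real solution was Scala, and details like the numeric subdirectory names are assumptions from memory):

```python
import os

NODE_ID = 0      # which slave this is: 0, 1 or 2 (assumed numbering)
NODE_COUNT = 3

index = {}       # path -> file contents, read once during the indexing window

def build_index(root: str):
    for entry in sorted(os.listdir(root)):
        full = os.path.join(root, entry)
        if os.path.isfile(full):
            # Files directly under the root get indexed by every node.
            index[full] = open(full, errors="ignore").read()
        elif int(entry) % NODE_COUNT == NODE_ID:   # assumes numeric subdirectory names
            # Shard subdirectories: this node only owns subDir % 3 == NODE_ID.
            for dirpath, _, files in os.walk(full):
                for name in files:
                    path = os.path.join(dirpath, name)
                    index[path] = open(path, errors="ignore").read()

def search(query: str):
    # Brute-force substring scan over the shard held in memory.
    return [(path, n)
            for path, text in index.items()
            for n, line in enumerate(text.splitlines(), 1)
            if query in line]
```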
Level 4
I couldn’t crack this one. I tried for days. I think it was from Sunday through Wednesday, excepting some time out for the day job.
The language was Go. I know no Go. The challenge: a network of servers, each with a SQLite database. The network is unreliable, with lag and jitter randomly added and network links broken for seconds at a time. Queries will be directed to any of the nodes for 30 seconds. All answers they give as a network must be correct – you are disqualified instantly should you return an inconsistent answer. You gain points for each correct response. You lose points for every byte of network traffic. Oh, and unlike in the other levels, the sample code they provided you with doesn’t pass the test harness – it gets disqualified for inconsistent output.
So. This level was about distributed consensus – how to get multiple nodes to agree on the order of operations given communication problems. I’m just thankful we didn’t also have to contend with malicious nodes joining, or modification of the traffic – if you could get traffic through, it arrived unmodified.
The starter help text contained pointers to a distributed consensus protocol called Raft. Vastly simplifying the intricacies: nodes elect a leader. Only the leader can make writes to the log (in this case an SQLite database). The leader will only commit a log entry once a majority of nodes have confirmed that they have written it to their own logs. If the leader goes missing, the remaining nodes will elect a new leader.
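Reduced to a toy Python sketch – my simplified mental model of the commit rule, not go-raft’s actual interface – the core idea is:

```python
# Toy model of the commit rule only; it ignores terms, elections and log repair entirely.
class ToyLeader:
    def __init__(self, follower_count: int):
        self.cluster_size = follower_count + 1   # followers plus the leader itself
        self.log = []                            # committed SQL statements

    def try_commit(self, sql: str, follower_acks: int) -> bool:
        """Commit only once a majority of the whole cluster has stored the entry."""
        if follower_acks + 1 > self.cluster_size // 2:   # +1: the leader's own copy
            self.log.append(sql)
            return True
        return False
```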
There’s a library already written for Go, go-raft. This seemed like a sure-fire winner. Just drop in Raft, right? Although dropping the library in was very easy, it wasn’t that simple. Raft is a very chatty protocol, requiring heartbeat signals, leader elections and, in our case, request forwarding to the leader, as followers do not have the authority to commit to the log.
Beyond that though, the go-raft library had issues. It didn’t work out of the box with Unix sockets (which the test harness required), although Stripe had had a commit merged into go-raft’s master branch that made fixing that extremely simple. It could fail to elect a leader. It also had a bug that seemed to bite a lot of people in IRC – I only saw it once, and I’m still not sure exactly what the cause is – I suspect a missing or misplaced lock() that caused a situation with the log that is fatal for the Raft consensus algorithm.
After battling with Unix sockets and getting an excellent passing score locally – at one point I got 150 points normalised, whilst you only needed 50 to pass – I pushed to remote. And it fell over horrendously. I ended up with a negative point score before normalisation. Needless to say, that was demoralising. It turns out that reading the original Raft protocol paper, understanding it theoretically, and getting it to work with some easier test cases is very different from getting it to work under a much more hostile set of conditions.
My problems on this level were compounded by the infrastructure regularly falling over and needing the Stripe guys to give the servers a kick or 10.
But beyond that, I feel that there’s something I failed to grok. When my connections could get through, it worked fine – SQL was always consistent, leaders were always elected, requests were forwarded properly (barring one case that I have since read about, where the request is forwarded and executed successfully but the response is lost due to jitter). And yet when running on remote I either suffered from end-of-file errors (i.e. socket closed) or requests timing out. Although I eventually managed to reproduce those issues locally by manually downloading the test case, it didn’t help me in diagnosing the problem – I regularly had a case where one node, across an entire 30-second test run, never managed to join the consensus (which takes a grand total of one successful request to do). And I didn’t know what to do. I think that the most valuable thing this level taught me, beyond the theory of distributed systems, is how bad I am at fixing problems when there are no errors directly caused by my code. As far as I could tell, everything was fine – if I ran it manually without the test harness in the middle, it all worked. But when I put the test harness in, it all fell over. My own logic tells me that the problem must therefore be with the test harness. Except I know people managed to pass the level with go-raft. I need to go and look at some solutions people have posted to see how they coped.
At the end of the day, however fun this was overall, the last level left a bad taste in my mouth – the infrastructure problems were pretty endemic, especially nearer the end, and the difference between local and remote in the last level was absolutely disheartening. I can accept some difference, but a score that is locally three times higher than the threshold (after normalisation) shouldn’t get negative points on remote. I just wanted the t-shirt!

2013 in review

As the last moments of 2013 fade into the past, I thought I’d look back at what has been.
The largest change is that I did get my 2:1, and I did start working.
In terms of other things, I’ve actually started blogging more over the course of 2013. In 2012 I managed 7 posts. In 2013 I managed 19 posts (not including this one) on various topics of law, gender and technology. I’m really enjoying doing semi-regular blogs, although I’m still relatively bad at finishing posts.
I’ve also, naturally, had a lot of time spent exploring what gender means to me. Quite a bit of this has happened over on a mostly private tumblr I have. I’m glad I made it and separated it out, but it is leaving me somewhat adrift when it comes to integrating it back in to my main online identity – if that’s something I even wish to do.
In fact, Tumblr generally has had a surprisingly large presence in my life this year. There’s been several things posted or asked on my various tumblrs that have really made me think and evaluate my own opinions, and my own situation. I’d argue that, subjectively, my tumblrs have caused the largest amount of introspection of any online service I use. I’m more open on them than I ever thought I would be – especially in the past few months. With that said, there is one large piece of me I still haven’t confronted online, and I don’t really have plans to, for reasons I’m not going to get in to.
I feel as well that this year has marked the tipping point in my views of online privacy – see the numerous blog posts I’ve made over the course of this year. My views have gotten stronger, but not unreasonably so, I feel. It leaves me in an odd situation between my very public internet presences on this blog and twitter, and the other internet presences I try and keep low key and/or private.
And so, as 2014 rolls in I hope for many things, but most of all, I think, is finding a way to reconcile my desire for privacy with my desire to have a public internet presence.

Choosing a new phone

In the vein of a previous post exploring why I chose to move my email over to Office 365, I shall today be exploring how I chose my new phone.
Or, more specifically, the OS of the phone (given that hardware doesn’t interest me as a thing – one black fondleslab is much like another black fondleslab).
As that previous post indicated, I currently have a BlackBerry. Not one of the new BBOS10 ones, but an older one (although it was new when I took my contract out).
The phone market today is radically different from the one where I first switched to BlackBerry (4 going on 5 years ago). BlackBerry has essentially died a death (in the consumer market anyway; we’ll see if their refocus back on enterprise, and the opening of BBM to other phone OSs, makes a difference). Android has risen to become the dominant phone OS – although the device manufacturers haven’t quite got the hang of OTA updates and multi-year support (I’ll get to the issue of so-called bloatware in a minute). The iPhone and its iOS have seen a more sedate rise, but have figured out OTA updates that cut the carrier out of the picture. Windows Phone has also emerged as a serious contender.
Between them, these 4 OSs have the overwhelming majority of the market – few people could name any other OS that is still going today. This post will take each in turn to weigh the advantages and disadvantages, for me, according to my needs and desires. I make no claims that my answer is the one true answer, or even that my disadvantages won’t be someone else’s advantages. (Although I am right, and everyone else is wrong.)
BlackBerry
There’s no denying that BlackBerry has had a rocky road recently. Their latest OS 10 is a major shift from their previous direction. A major UI overhaul, coupled with keeping their excellent security features, should stand them in good stead in this battle. But alas, I don’t want another BlackBerry – their troubles don’t speak well for their being around much longer, or at the very least suggest that consumers will not have the focus they once did. BBM is something I rarely use, and even if I did, there’s no longer any need for a BlackBerry itself. Even email, the killer feature it handled exceedingly well, is no longer a differentiator – the competition has caught up, and BlackBerry hasn’t advanced. Their attempt to boost their app store by making their OS ‘compatible’ with Android apps speaks to me of desperation. A last gasp, as it were. Perhaps it will be enough, perhaps not. But I don’t want to take the risk that I’ll be left with an unsupported brick a couple of years down the line (phones to me are at least a two-year investment, if not more).
Windows Phone 8
A relatively recent contender, Windows Phone 8 is Microsoft’s latest attempt to break into the mobile market – a successor to the previous Windows Phone 7 and the Windows Mobile OS family. It inherits a lot of its look from Windows 8 and its Metro UI, and this certainly makes it the most distinctive of the OSs out there. Yet it hasn’t been a massive success, although it is showing steady growth. Perhaps it came too late to the market, or perhaps it hasn’t been marketed well – a common feature of Microsoft’s mobile attempts. One thing is certain though – app developers haven’t gone crazy for it. Despite the fact that I only use a core set of apps on my phone regularly (mostly social media), I do like to try out apps, and part of me wonders if that’s due in large part to the fact that BlackBerry’s app selection is abysmal.
iOS 7 on iPhone
I already have many Apple devices. I use a MacBook Pro at home, I have an iPod Touch which is my media centre, and I have an iPad which sees infrequent use. I have a large collection of apps on my iPod, although again I only have a core set that I actually use. So surely an iPhone is a natural next step? Well, maybe not. iPhones are expensive (I know, that’s hardware – but unlike the other OSs, device and OS are tied together here). I already have an iPod Touch for all my Appley needs. I know of no-one who uses iMessage or FaceTime – so those have no appeal. My apps are already on my iPod Touch, and I don’t hate the wifi-only nature of it. There’s also Apple’s iCloud, which is very much a walled garden as far as syncing services go. I use it as minimally as I can for my needs right now (mostly to save connecting via cable to transfer photos).
Android
Oh, Android. Google’s attempt at a mobile OS. Phenomenally successful. Open source, except for when it’s not. Android. It came onto the scene with a UI that was terrible for the time, although the UI has improved dramatically with recent revisions. But then, with Android, the UI is kind of moot. It’s open source (except when it isn’t), and people have written entirely separate launchers and themes – see many of the carrier/manufacturer-branded versions for examples. In fact, this really makes it very hard to talk about Android in any meaningful detail. Google’s Android is very different from the open-source Android – the keyboard with the cool swipey-pathy-typey thing? Closed source. Google’s Mail app? Closed source. It’s well documented that Google has been closing down Android slowly but surely. And although you have the possibility of side-loading apps, very, very few are actually distributed like this. They almost all go through the Google Play Store. It seems that open source is a flag Google waves for community support, to blind the community to just how hard it actually would be to create a successful Android fork – look at what Amazon has to go through to clone the APIs provided by the closed-source Google Play Services. CyanogenMod also have to dance around the redistribution of the closed APIs that many apps assume are present, by backing up the original Google apps and then reloading them after their version is flashed. And how meaningful waving the open-source flag is when the core platform APIs of the project are developed in private is… yeah.
I make no secret that I don’t trust Google these days. You are an advertising target to them. Everything they do that is intended for consumers will eventually feed back into their advertising algorithms. Which is why it may surprise you that I went with Android as my next phone OS. I’m not sure yet how I’ll remove or limit Google’s tendrils on the device. Running stock AOSP? Possibly, if I can get my social media apps to work without Google Play Services. Using a separate account for Play Store things? Possibly. I’ll most certainly be limiting apps’ permissions as much as possible. I was surprised to learn that Android only recently got the ability to limit GPS access on a per-app basis – iOS has had Location Services controls for ages. Perhaps I’ll put CyanogenMod on it, although frustratingly I can’t find a full description on their site of what changes they actually make to AOSP. I’ll certainly disable Google Now, and its always-listening “OK Google”. I’d better buckle up, because this is going to be an interesting ride. Especially as I find apps I just want to try, if only for 5 minutes.

It's been a year

This time last year I came out to my friends, and the readers of this blog, as transgender/genderqueer. And what a year it has been. I’ve learned a lot about myself, and about transgender people generally. I’ve graduated university, and started a full-time job, although I’m not ‘out’ there yet.
Side-note: if you’re a colleague from work reading this, feel free to keep reading and ask me any questions privately, in person or over Sametime / Notes etc. Just don’t mention this to other colleagues – I’ll decide when and how to do that. Thank you!
So I’ve taken several steps over this year: my wardrobe has expanded in several areas recently; I’ve experimented with nail varnish and with hair dye and shaving. The support from my core group of university friends was amazing.
I’ve also refined my own views on where I’d like to go with this, and on what it means to be transgender and/or genderqueer. It’s really hit home that, despite what I just said about wardrobes, nail varnish and so on, they are all simply gender stereotypes. Even the idea that all men have penises with XY chromosomes and all women have vaginas with XX chromosomes is wrong – it erases intersex people whose sex chromosomes may not be of the traditional XY/XX configurations; transgender people who don’t want surgery down there because of the costs, or the risks of SRS; and transgender people who don’t hate their genitals with a passion. The only real way of knowing is to know your inner self. It’s much more a question of “How does being called and read as male make me feel? How does that compare with being called and read as female?”. Indeed, any other question ultimately comes back to gender stereotypes – variously erasing intersex people, tomboys, and ‘effeminate’ men. For some people, myself included, there’s not a major dislike of being classed as one of male or female, but one is preferable to the other – the opposite of the one assigned at birth. It’s the distinction between gender identity – what you know inside; and gender expression – how you express that identity and how that interacts with the stereotypes that society places on gender.
How do I know this? I’ve chosen a name. I’ve started using it online with several accounts. It feels right. More right than my birth name, and being read as my birth sex. But being called male doesn’t usually provoke an intense negative reaction. I hope to expand my use of it online, and filter it through to the real world over time.
I’ve started down the NHS route (it’s long and slow and I won’t see a specialist for about a year), and am looking into exactly what my private insurance covers, if anything. I do know I don’t have a hope in hell of passing without some… help.
I’ve become far more aware of exactly how the gender binary, and the resultant gender stereotypes are ingrained deeply into our culture – you only need to look at the questions the consultant psychiatrist asked before agreeing to refer me to a specialist. And naturally that leads to a much deeper understanding of the various “labels” that people use to describe their exact gender identity. For now I’ll stick with the labels of transgender and genderqueer – the latter being an umbrella term for anyone outside the gender binary.
It hasn’t all been progress and rainbows however – I haven’t come out at work, although I feel confident that the majority would be supportive. I also haven’t come out to my parents. That’s something that has caused me a great deal of pain. It’s something I won’t be able to hide forever, but… Well…. I have my reasons, despite wanting to tell them.
It’s been a year. I wonder what the next 12 months will bring?

What is Legal Tender?

You often hear of someone declaring their money legal tender.
A bus driver going “Sorry, can’t take twenties” to which the person responds “but it’s legal tender!”
A person trying to pay in ASDA in England with a Scottish £20 note, the shopkeeper going “what’s this then?” “That’s legal tender that is, you gotta take it, it’s Sterling and everything”.
But Legal Tender has a very very specific meaning.
Firstly: None of the situations above were concerned in any way with legal tender. The following situation however is:
Someone at the end of a cab ride, trying to pay a £15 fare with a £20 note.
So, what’s the difference between that situation, and the bus fare situation presented above?
Debt. In the earlier situations, you were paying for goods or a service in advance of receiving them. In the cab ride example, however, you were paying for the service after having received it. In essence you were attempting to discharge the debt you had incurred.
This brings us to our first conclusion:
Legal Tender applies only when attempting to discharge a debt.
Now, to refine this a bit: credit cards are not legal tender. Vouchers are not legal tender. A bank transfer is not legal tender. If it were, then cabs would have to accept a bank transfer. They (usually) don’t. In fact, legal tender applies only to cash – notes and coins.
Picking apart the term itself: To tender is “to offer or present formally”. A legal tender is a legal offer. Thus, Legal Tender is strictly:
An offer (of cash) to discharge a debt that cannot be refused.
Strictly this applies only to paying money into a court to discharge a debt – it prevents you being sued for non-payment – but the end result is that cash effectively becomes a required way of accepting payment for a debt. There is an important caveat: there is no obligation to give change in a legal tender situation, so you’d better pay with exact money!
Now, there’s a few restrictions here. So don’t try and pay off a £100 restaurant meal with 5 pence pieces. Because that offer wouldn’t actually be legal tender.
Why? Because there are restrictions on how much each denomination of currency is valid legal tender for.
  • 50p – up to £10
  • 20p – up to £10
  • 10p – up to £5
  • 5p – up to £5
  • 2p – up to 20p
  • 1p – up to 20p
Coins and notes higher than 50p are valid legal tender for any amount.
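If you prefer it spelled out, here’s a small Python sketch of my reading of those limits, working in pence (the limits are as listed above; the code itself is just my own illustration):

```python
# None means "legal tender for any amount".
LIMITS = {1: 20, 2: 20, 5: 500, 10: 500, 20: 1000, 50: 1000, 100: None, 200: None}

def is_legal_tender(coins: dict, debt_pence: int) -> bool:
    """coins maps denomination (in pence) to the number of coins offered."""
    if sum(d * n for d, n in coins.items()) != debt_pence:
        return False   # no obligation to give change, so the offer must be exact
    return all(
        LIMITS[d] is None or d * n <= LIMITS[d]
        for d, n in coins.items()
    )

# A £100 meal paid entirely in 5p pieces: 2000 coins, way over the £5 limit.
print(is_legal_tender({5: 2000}, 100 * 100))   # False
```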
So far everything I’ve said has been true… For England.
When you consider Scotland (in particular) things get funky.
Firstly, no Scottish notes are legal tender in England – only notes from the Bank of England are. Similarly, no English notes are legal tender in Scotland, although this is for a different reason: Scotland has no concept of legal tender! Instead the creditor must accept any reasonable offer to discharge the debt, so if, for some reason, metres of bubble-wrap became a standard way to pay for things, then it could be construed as a reasonable offer, and the creditor would be obliged to accept it to pay off the debt!
There’s one aside I’m going to make here – technically, when I talk about “Scottish notes” I’m conflating several different notes, because seven retail banks (across Scotland and Northern Ireland) can print notes that are backed by Bank of England funds. The system for this is that the Bank of England essentially says to the banks “you can print this much money”, with the amount carefully chosen to keep the value of Scottish notes and BoE notes on par.
And now, as a final aside to that aside: I said that the Scottish notes are “backed” by the Bank of England. This probably conjured up images of a huge gold vault somewhere underneath central London. Well… I’m sorry to burst that bubble, but the BoE only has a fraction of the gold needed to “back” the amount of money in circulation. Technically there is no currency on Earth that is backed by anything physical – all the money on Earth is “backed” by the value that other people believe it has. But the loopy idea of fiat money – money without any physical backing to its value – is something for another time. And that, reader, is a nice, comforting note to end the article on… right?

"The World Inside" – Robert Silverberg

As many of you know, I’m an avid reader. Although my intake is predominately fantasy, I do read a healthy dose of crime, YA and other fiction.
In this case, it’s a work of Speculative Fiction – a collection of short stories by Silverberg published as “The World Inside”.
The question the work poses is: “What if we encouraged overpopulation? What if we held breeding to be the best activity there is?” It is with this premise that the world of the Urban Monad is born. The year is 2381. The world population is over 75 billion (up from today’s population of 7 billion). To feed that many people, vast amounts of food are needed, so vast amounts of land have been given over to farmland, with small communities maintaining the vast agricultural operation. But where does everyone else go? Into arcology-inspired “Urban Monads”, 1000-floor skyscrapers housing over 800,000 people that, as communities, are almost entirely self-sufficient – they recycle everything obsessively, they capture body heat, and every area of space is used to its full potential. The only thing they need is food from the farms – in exchange they maintain the machinery that the small farming communities use.
Because of the limited space, each apartment is one small room. The beds inflate and deflate down into the floor to ensure effective utilisation of the space. There’s no internal partitions (there is a privacy screen for the toilet, but it’s never used). There’s certainly no separate kitchen.
In such a situation you can imagine tensions forming easily between people, so the rules are strict to ensure that peace is maintained – to ensure that no-one goes “flippo”. Sex (between any persons) is the main way of releasing tension – and because minimising conflict is essential, it is seen as impolite (or even sinful) to refuse when someone asks, although a refusal is always accepted. The people in the UrbMon live in a ‘city’ – a group of about 50 or so floors, organised roughly by class – administrators live up the top, whilst labourers live in the bottom-most city. The city forms the basis of their social group – it is rare that they interact with people outside of their own city; each ‘city’ even has its own entertainment complex.
Naturally, because of the belief that reproduction is the highest good (the most “blessworthy”), it is common for marriages to happen at 12-13, as soon as the person is able to reproduce. Small families are also seen as a sign of failure – in Prague (one of the cities) there is an average of 9.9 children per family. This is not seen as freaky or weird, but as something to be praised and upheld as a model for the family. This, combined with the sexual permissiveness hinted at earlier, has given rise to the culture of “nightwalking”, where men will roam the corridors of their city and go into an apartment (locks are forbidden as creating tension and untrustworthiness) to have sex with the woman (or man) there. There is no obligation for the other partner to leave, although they will usually go nightwalking themselves. Interestingly, there is a social expectation that women don’t nightwalk – a source of tension for one of the characters in a later story when his wife leaves the room to seek out sex, not even waiting until night. (The horror!)
Due to the lack of privacy, sex happens within full view of the children. Indeed, one of the opening parts of the book is a song sung by children upon waking in the morning

God bless, god bless, god bless!
God bless us every one!
God bless Daddo, god bless Mommo,
god bless you and me!
God bless us all, the short and tall,
Give us fer-til-i-tee!

This opens the first short story in the collection – which tells, from the viewpoint of the father in this family, of the arrival and tour of a “sociocomputator” from a colony on Venus, where the social structure resembles our own much more closely – private dwellings on their own plots of land. He is somewhat shocked at the lack of privacy, at the invitation to “share” his host’s wife, and at the age at which children are married.
To elaborate on that last point, once children become fertile they move into group dorms until a unit is available for them. Naturally, given the large number of people, space is scarce – there is a constant expansion programme of new UrbMons being built. One of the stories involves the conflict within one of those couples randomly chosen to go and live in a new UrbMon. They are so used to staying within their city that they are terrified of leaving all they know (no-one ever leaves an UrbMon by choice). But this is countered with the possibility of rising in social status; in established UrbMons the highest-status jobs are rarely available for anyone to take, with people being groomed for the roles. In a new UrbMon, however, social mobility is much, much greater.
Another of the key stories for understanding their culture is the one where someone, defying all social convention, leaves the UrbMon to explore. As he steps outside he is struck by how big the UrbMons are. But what is really telling is how he views the small agricultural commune that he encounters as he explores. At first he expects to be able to communicate with them, but as language is an issue he starts to view them as savages – especially when it appears that they are going to kill a pregnant member (which, given his culture, is possibly the worst crime one could commit). But the distrust is mutual – the villagers believe he is there to spy on them.
One particularly noteworthy exchange is with the old woman: when he tries to convince her that procreation is the best thing, he can’t understand why they stay so small – even as she points out that if they grew and expanded their commune, there wouldn’t be enough food for the UrbMons.
The final scene I’m going to highlight in this review is from the same story – it shows, from our point of view, his attempt to rape a woman of the commune; whilst from his point of view he thought she was just playing a game, having never been told no to his sexual advances. It wasn’t an easy read, but it was very valuable for showing how our experiences and social influences inform our beliefs and our actions.
And with that thought, I’m going to leave the review here. Suffice it to say, I highly recommend the book as an excellent read that provokes some serious thought about cultural norms, as well as alternative models of sustainability.

Safe Spaces

Quick note before the post proper: I do have an analysis of The World Inside in the works, but it’s proving rather troublesome to tame into coherence. For now, this post.

I regularly see people declaring that somewhere or other is a “safe space”. So let’s pick this concept apart, and see just how easy or hard it is to create a safe space.
What is a “Safe Space”? The idea behind it is simple – a place where someone (usually of a minority group, though not necessarily) can write, talk and discuss their beliefs without any mockery, without trolls, and without a risk of being offended (or in some cases distressed – known as being triggered) by content within the area. For example, a safe space for a homosexual person would be a space where you aren’t condemned for being homosexual and won’t be mocked with homophobic slurs; such a person can talk frankly about their experience. A transgender person, meanwhile, would have a space where they won’t be called any number of the transphobic slurs, nor would they be confronted with such slurs unexpectedly.
Sometimes you see people claiming that a particular tumblr tag is a “safe space” and that people should keep their hate out of the “transgender” tag, for example. This is a futile request. The nature of tagging (on most sites, including tumblr) is that tags are public and unmoderated (beyond generic site-level moderation). Such tags will naturally be used by anyone who wishes to. And whilst sites may have “community guidelines” and so forth against homophobic material, such policies tend to rely on user reporting, and notably tend not to be as strict in their moderation as safe spaces require.
Another angle is, for example, the /r/lgbt subreddit, which claims itself as a safe space for any and all gender, sexual and romantic minorities, and it does work. Kind of. Reddit provides subreddit moderators (subreddits are essentially forum boards) with tools to remove any posts they wish. And this subreddit in particular has very proactive moderators ensuring that any (even slightly) anti-LGBT material is removed quickly. So they have a safe space. Great. Except, as is common in the case of highly active moderators, anything that doesn’t fit with their world-view is also removed. As such it creates a community that is perceived to be ‘all on the same page’. Even posts that aren’t anti-LGBT, but question, for example, the ever-expanding alphabet soup, are removed.
Moving into the real world, a “safe space” tends to be a meeting area where there are people in authority with the power to remove people from the space – such as university LGBT societies. These tend to be less prone to the ‘heavy-handedness’ of internet community moderation – by virtue of the fact that, without the online disinhibition effect (something I learned a great deal about for a university coursework), the number of trolls and “extreme” views tends to be minimised.
That said, online “safe spaces” are needed – providing people who experience homophobia, transphobia, and even things such as sexual assault or have attempted suicide, an area where they can pseudonymously communicate with others in the same boat is vital. It encourages the community to connect, to network, and thus to become stronger. And it insulates them from the problems that they face elsewhere in life (sometimes frighteningly regularly).
Safe Spaces need to be actively moderated, otherwise they are impossible to maintain. But it is important to recognise that this moderation can go too far, which can cause a narrowing world-view and even rejection, not acceptance, from the wider society (or even from within the same minority group – see the split from /r/lgbt to /r/ainbow).

On Pornography

Welp. Cameron’s done it. Bent over backwards to introduce unworkable, unrelated policies in a confused mess designed to appeal to traditional Tory, middle-class, Daily Mail-reading idiots – I mean, voters.
So let’s look at the proposals he has outlined.

  1. A ‘crackdown’ on those accessing child pornography/ child abuse images.
  2. Internet filters that will, by default, block access to all pornography for those using residential ISPs.
  3. The criminalisation of simulated rape pornography.

The crackdown.
I don’t think many people would disagree that child sexual abuse is absolutely disgusting. My mum was a Special Educational Needs teacher, and she has worked with children who have been abused. The damage it can do to them is appalling. With that out of the way, let’s have a look at this.
The way this is currently handled is that you have CEOP, a branch of the police, who track down the people committing the abuse, rescue children, and find people who are viewing the content. You have the IWF, an independent charity, who handle reports of child abuse images submitted by the public. They create the blacklist of URLs that is passed to search engines and ISPs to block access and filter out the pages containing the content. They also forward information to CEOP and equivalent agencies worldwide after deeming content to be potentially illegal.
The proposals include getting search engines to redirect results, so someone searching for “child sex” for example, might get results for “child sex education”. There will also be pages displayed when someone tries to access a page blocked under this scheme that will warn them that looking for such material is a criminal offence. I imagine it would look similar to the ICE notice placed on seized domains by the US Government.
The thing here, though, is that Google (and most other search engines) already remove results pointing to child abuse imagery. My thoughts on the IWF being the determiners of what gets blocked (which they already are) are long enough for another blog post – but suffice it to say, I’m not sure that an independent, unaccountable charity should have “special permission” to view and classify the images without any form of oversight – especially as it’s generally hard to work out that something has been blocked at all – see the Wikipedia blocking fiasco. I have another point about the effectiveness of blocking content – but that will be the main thrust of the next section.
Blocking of Pornography
So, the second issue is the implementation of filters on residential UK broadband connections that will prohibit access to porn should the account holder not opt out of the blocks. This is a further example of how our internet use is getting more and more restricted over time. First they had Cleanfeed, which blocked the IWF’s list. Then they blocked The Pirate Bay and other similar sites. Now they want to block pornography (albeit on an opt-out basis for the moment).
So, firstly what is pornography? Images of oral, anal or vaginal sex? How about “Kink” images of bondage, where no genitalia are visible? Pictures of female breasts? Cameron has already announced that Page 3 won’t be blocked.
How about the written word – many fan-fiction pieces get very steamy, not to mention the entire erotica bookcase at your local bookshop (or Sainsbury’s).
Of course, our mobile internet connections are already filtered by default – so we can look at those to see what will be blocked. “User-generated content sites”. Oh yes, I suppose they could contain pornography. Reddit in fact has many subreddits dedicated to such things. ISPs have even indicated that categories such as “anorexia”, “web forums” and even “esoteric content” may be blocked. Of course, one natural side effect of that will be the (accidental) blocking of sexual education resources. No filter is 100% perfect, so it’s inevitable that legitimate sites will get blocked. We can look at what mobile operators have blocked “by mistake” in the past – a church website blocked as adult, a political opinion blog(!), and even eHow – a site that posts tutorials and educates people on how to do everyday things.
This is to say nothing of the LGBT websites that might be blocked – vital resources for any person questioning their gender or sexuality – but especially for young people who may not feel comfortable talking with their parents about these things. This by itself will actively cause harm (if these proposals didn’t cause harm I wouldn’t be so strongly against them), but there is further harm to come from these – parental complacency.
There are bad parents. There are parents who don’t communicate with their children. We all know they exist. And any right-minded parent would fear their children seeing something on the internet that they weren’t ready to see. But these filters will make parents think their kids are “safe”. That they don’t need to talk with their kids about sex, or about things they might see on the internet, and that they don’t need to use the internet with their children. So when children do stumble across adult content, they’ll be even less prepared to talk about it. And these filters suppose one thing – that the children are less tech-savvy than those writing the filters. Anyone who has worked with children, or works in computer software, will know how fast kids adapt to new technology. Those older children who do want to seek out this material aren’t stupid. They’ll know how to get around these filters – unless you want to block searches for proxies (or VPNs, for the more technically inclined). And all the while the parents will think their kids are safe, wrapped securely in cotton wool. This is possibly one of the most damaging effects.
Simulated Rape Pornography
The final measure announced in this slate of news was the criminalisation of simulated rape pornography – aiming to close a loophole in Section 63 of the Criminal Justice and Immigration Act, affectionately known as the “extreme porn law”. To be clear, this proposal is talking about banning consensual, fictional “rape-play” images. For context: studies from the late 70s and 80s have shown that the idea of forced sex is one of the most common fantasies. Somewhat amusingly, this announcement came shortly after the Crown Prosecution Service had adjusted the prosecution guidelines for offences under this act.
To try and criminalise images of consensual, legal acts is utter madness. My objections to this are very much the same as my objections to the original section of the act. It makes the assumption that we are unable to distinguish between fantasy and reality. It makes the assumption that there is evidence of harm in looking at consensual images. We’re happy to let people run around and kill simulated people, but watching a consensual act is somehow damaging. To me this stems from our culture’s attitude towards sex in general – which is that it’s something to be done behind closed doors, without disturbing the neighbours, and without discussing it afterwards. For something so natural, that’s a very weird attitude. It, incidentally, is the same reason I believe the pornography-blocking proposals will cause harm.
Summary
Overall, these proposals are terrible. They won’t work, they’ll cause actual harm, and they’ll make people with common fantasies feel victimised.
You can sign the OpenRightsGroup petition here, and a DirectGov ePetition here – although neither addresses the criminalisation of simulated rape.

Tor, Freedom Hosting, TorMail, Firefox and recent events

So, there’s been… a lot of panic in the Tor community over the last 24 hours. Let’s have a look at some facts, shall we?
Firstly, it would be good if you knew some basics of Tor – I have a previous article on it here. Secondly, forgive the number of Reddit comments I’ve linked to – given the lack of mass media coverage of this news, there’s not much choice.
News broke that the FBI had issued an arrest warrant and extradition request to Ireland for Eric Marques. The article frames him as a large distributor of Child Abuse Images. Whether that is accurate or not remains to be seen in court, but one thing that is (now) known is that he was the man behind “Freedom Hosting” which provided hosting for Tor Hidden Sites. A number of those sites apparently hosted Child Abuse Images or videos. It’s not yet known if he had any connection with any of those sites beyond being their hosting provider.
One immediate question that presents itself is: how did they find out that this guy was operating Freedom Hosting? I haven’t seen any evidence on how this happened. It’s possible that they used a server exploit to find out the machine’s real IP address, or that they tracked him down via other means (financial records etc.) and then happened to find out he was behind it. Incidentally, the only evidence the Tor community has that he ran it is the timing of all these events.
So, all the sites hosted by Freedom Hosting disappeared from the Tor network. Then, a few days later, they showed up again. But this time, some (but not necessarily all) of the hosted sites included an extra iframe that ran some JavaScript code (the link is to a pastebin, so it is safe to click). Needless to say, this JavaScript code is an attempt to break anonymity.
Now, a small amount of background. Tor (for end users) is mostly run through the Tor Browser Bundle these days. This combines Tor with a patched version of Firefox – to fix some anonymity leaks – as well as some Firefox extensions such as HTTPS Everywhere and NoScript. NoScript is a Firefox extension that prevents JavaScript from running according to the user’s preferences (block all, whitelist domains, blacklist domains, block none). Great, so the JavaScript wouldn’t run? Well… no. The Tor Browser Bundle ships with NoScript in the “run all scripts” mode. Tor have had an FAQ about this setting up for a while. The short answer is that because Tor tries to look like a normal machine – always reporting a Windows NT kernel (even on other OSs), for example – disabling JS would leave you in a minority, as well as making it harder to actually use the normal, JavaScript-reliant internet. Needless to say, Tor are re-evaluating this trade-off. This is especially true as their patches to Firefox should, in theory, make it harder for JavaScript to break out and find the user’s normal IP.
So, this script can run. What does it do? Well, it specifically targets Firefox 17 on Windows. Firefox 17 is the Extended Support Release of Firefox, which is what the Tor Browser Bundle is based on. Claims abounded that this was a 0-day attack, but further examination revealed that it had in fact already been patched in Firefox 17.0.7 – which had been packaged into a Tor Browser Bundle at the end of June/early July. When you put this together, it means that the script only affects users of old Tor Browser Bundles on Windows. The script appears to use the vulnerability above to try and send your real IP to another server. It also tries to set a cookie, presumably to track you as you browse the internet and onion land.
Notably, TorMail (a service which provides public email facilities over Tor) was also apparently hosted by Freedom Hosting, so far more than just people accessing child abuse images are potentially affected – anyone who wanted a truly anonymous email account has been affected. This makes it likely (although not guaranteed) that the FBI now have access to every e-mail stored on that server.
Freedom Hosting, whilst not the only Tor hosting service, was certainly one of the largest and best known. And TorMail was unique in its service. What this will mean for whistleblowers and others who used TorMail remains to be seen.