GitLab 8.5 released

The open-source GitHub competitor GitLab has shipped a new version of their software which, among other things, adds light project management in the form of a feature they call “Todos”:

GitLab is where you do your work, so being able to get started quickly is very important. Therefore, we’re now introducing Todos.

Todos is a chronological list of to-dos that are waiting for your input. Whenever you’re assigned to an issue or merge request or have someone mention you, a new to-do is created automatically.

Then when you’ve made a change, like replying to a comment or updating an issue, the to-do is automatically set to Done. You can also manually mark to-dos as done.

I bet GitHub are really feeling the heat. I’ve long thought it silly that many projects keep their source code and their issues/milestones/bug-tracking separate. I’m very impressed by GitLab, and the version-control host for my next project will be a choice between them and Bitbucket.

Smartwatches out-shipped Swiss watches

What relationship does the smartwatch have with the traditional watch, and is it anything like the one between smartphones and “traditional” phones? Here’s Jacob Newman, reporting for Macworld, on smartwatches out-shipping traditional watches:

Traditional watches aren’t likely to go away, as there will always be some appeal in a timepiece that’s simpler, more dependable, and not at risk of obsolescence. It’s also unclear if high-end smartwatches like the $10,000-and-up Apple Watch Edition can truly compete with the luxury Swiss watch business. Still, the explosion in smartwatch shipments shows how much opportunity exists to reach people who don’t care for mechanical watches, and should serve as a wakeup call to the big Swiss brands.

The watch market is complicated: you can win a $1 watch in a carnival game, and some watchmakers’ cheapest watches are $100,000. My point is that I would never wear a traditional watch for utility (because my phone tells the time, of course) and I’d never wear a smartwatch for fashion (because even though Apple are trying, it’s still a nerd’s toy), and I think many people share this experience (but feel free to express dissent here). Both of these points are diluted by two facts, however:

  • a mechanical watch in the same price range as a smartwatch will likely last forever, giving it way more utility in one crucial metric; and
  • Apple are pushing the Apple Watch as a fashion and luxury device.

Perhaps this will hurt Apple in the long run as fashions change and people realize their $20,000 Apple Watch is only good for a year (or maybe that’s a benefit for people in that stratum of wealth). However, as market leader and beautiful-product-maker, they’ve really led in setting the fashion of technology in the past, so perhaps it will not hurt them.

High-end watchmakers shouldn’t fear encroachment from smartwatches, because people don’t buy a $10,000 watch for its utility: much like what Apple fans are often mocked for by fans of other products, with high-end mechanical watches it’s about the brand.

Apple Pay launches in China

Apple has launched Apple Pay in China:

You can now support Apple Pay for your customers in China, providing an easy, secure, and private way for them to pay using their China UnionPay credit and debit cards. Apple Pay lets users buy physical goods and services within your app without having to enter payment or contact information.

The online-to-offline (O2O) market in China is massive, and if Apple release a Venmo-like service for Apple Pay, this could change the way people do business.

Stand with Apple

Tim Cook has published a heroic defense of Americans’ right to privacy in the face of a court order Apple has been served by the FBI:

The government would have us remove security features and add new capabilities to the operating system, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by “brute force,” trying thousands or millions of combinations with the speed of a modern computer.

The implications of the government’s demands are chilling. If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone’s device to capture their data. The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone’s microphone or camera without your knowledge.

Apple is doing this because it is the right thing to do: there may not be a lot at stake in unlocking this particular phone, but the precedent the government wants to set is clear. There’s a lot of excellent journalism on this topic, and I may publish a round-up post with some analysis later. But for now, I want to be absolutely clear about my support for Apple and my condemnation of any technology company which doesn’t stand with Apple on this.

Where there's no software problem: betas

Writing about Apple’s software-quality woes, Michael Simon makes some really good points in his latest piece for Macworld. There’s something really problematic about the opening paragraph, however:

Twice over the past month I’ve had to erase and restore my iPhone. Both times were related to an attempted install of the iOS 9.3 Public Beta; instead of upgrading my phone with Night Shift, secure Notes, and better News, I got stuck in an endless Apple logo loop that required plugging into the dreaded iTunes and wiping my drive.

Craig Federighi and Eddy Cue were recently on The Talk Show with John Gruber and argued that more people than ever are installing the software on day one, and that this is one of the challenges Apple has to contend with on software quality. That was nonsense, because it’s Apple themselves who are releasing more than ever, being more aggressive with upgrade prompts than they’ve ever been, and arguably seeking more users than ever. What Simon has to say about the betas being an indication of software quality is equally nonsense, because they’re betas: surfacing failures like his botched install is exactly what betas are for. It’s fine that these problems crop up in the betas; the problem is that they also make it to the final build.

The Walking Dead S06E09 "No Way Out" Review

Spoilers ahead. In the mid-season premiere of AMC’s The Walking Dead, the writers killed off characters with story left to tell, protected characters who met their narrative end episodes ago, and wrote in at least one absurdity. Let me explain.

But before I get started, I must admit I’m never sure what to expect from The Walking Dead. Sometimes it appears to be a critique of our culture, sometimes it feels like a soap opera, and sometimes it’s clearly an unrelenting gore-fest. Zack Handlen of The Onion’s A.V. Club has a similar conundrum:

My problem, I think, is I keep expecting The Walking Dead to have a consistent narrative philosophy. I don’t mean in some kind of high-minded, “what does this all really have to say about America?” kind of way. I just want there to be a point behind the misery and death and seemingly endless stream of gore.

Perhaps it’s a strength of the show that it can take on different tones. In any case, here’s what I mean when I say the wrong characters died.

Negan’s people

Whoever played the character who accosts Daryl, Sasha, and Abraham was awesome. The delivery of his lines was menacing and comedic. The voiceless goons around him I won’t miss, but I do think it’s a shame he met such a quick end. However, if the show is willing to kill off a character this good this early in the Negan storyline (I haven’t read any comics), I’m excited for what’s in store. Especially considering that whoever this Negan is, he’s unlikely to take too kindly to having his people blown to bits: that was a declaration of war.

It’s still a shame he died, however, and for a reason I think many fans may disagree with: it was Daryl who should have died. For a crew of on-guard, in-control goons not to realize Daryl had disarmed their buddy, and then to let him grab a rocket launcher, is very unlikely. It happened not so much because Daryl has narrative potential left or because it’s a likely occurrence (not that this matters much in a zombie-apocalypse TV show), but because Daryl is a fan favorite and it makes a great opening. It was stupid, but man was it a surprise and wholly entertaining (a theme Daryl repeats later in the episode).

Jessie and her family

The Walking Dead is mainly the story of the Grimes family, and so when Pete (i.e. “Porchdick”) began fighting with Rick and Rick began flirting with Jessie, the ensuing death was inevitable. The decision to stop Pete was a morally tough one for Rick because while it was the right thing to do, it would strain his political capital with Alexandria. It was made even more morally murky because of his feelings for Jessie, which themselves were hard because of what happened with Lori. One of the ways that Rick could grow as a character was to learn to love again, and this is what was interesting about the Jessie storyline, especially considering the relationship Rick had with her children with Pete.

Unfortunately, I feel, this all came to a screeching halt within the first five minutes of the mid-season premiere, when the rest of Jessie’s family meet their end. Her youngest son, Sam, was absolutely going to bite the dust; Carol assured that very early in the season. Ron, however, had a tense but interesting relationship with both Rick and Carl, and I’m sorry to see that end. I’m also surprised to see Michonne so unrelenting in ending the life of a teenager just like Carl. I would be shocked if this doesn’t have an effect on her later.

Ultimately, I, at least, don’t think Jessie should have died: it cuts short what could have been an amazing way to develop Rick’s character, and she had a lot of potential in her own right. Her brutal coming-of-age in the bloody murder of a Wolf to defend her family showed that she had strength and resilience. I find it much more likely that other Alexandrians would have died than Jessie, and I think it would have served the plot better to get rid of some of the less tough characters who clearly haven’t grown like Jessie has.

Carl

The younger Grimes is the natural successor as the narrative center of The Walking Dead. In this episode, we see this fact cemented as a plot armor which keeps him alive despite being shot (albeit accidentally) in the face. I don’t follow the comics, but I understand that he received a similar injury there. However, despite my being a bit cynical about his plot armor, I appreciate how what happened to Carl grew other characters: first, Michonne really shows her love for Carl by murdering someone his age to defend him and then giving him a kiss on the forehead before leaving his side to kill some zombies; secondly, Rick’s soliloquy at his son’s almost-deathbed was incredibly moving.

Denise and the Wolf

I don’t know how this fits in with the rest of the story. Both Denise and that Wolf had interesting character development left, especially considering that the Wolf validated Morgan in the end by saving Denise despite it resulting in his being bitten. Here’s Vox’s Todd VanDerWerff on Denise and the Wolf:

The Wolf’s eventual death is particularly notable for the way that the spirit of trying to save others filters out first to the Wolf (who turns back to help Denise when she’s almost certainly dead) and then to Denise (who offers to save his life). Ultimately, Carol shoots the Wolf, and he falls prey to the horde.

I think the one who really should have died here was Carol. Sure, she has some conflict left to settle with Morgan with regard to the KILL KILL KILL philosophy vs. the “all life is precious” point of view, but I don’t think there’s as much there as there was in seeing whether the Wolf could turn out to actually be good, like Morgan said, or ultimately validate Carol’s attitude. Carol went from abused wife to distraught mother to vicious survivor, but her takedown of Terminus was almost comical in its ruthlessness, and I’m just not sure there’s anything left for her to do.

Glenn

Bryan Bishop at The Verge has some spot-on analysis of what’s wrong with Glenn’s story in this episode:

But then, for some inexplicable reason, Glenn started going a little nuts, and (apparently) decided to sacrifice himself even though he could have easily kept running. After all the nonsense last year, it looked like Glenn was going to die after all — just a huge, flaming middle finger to the audience. But THEN! In came Sacha and Abraham, miraculously saving Glenn with a hail of automatic weapons fire and a goofy one-liner. TWD managed to take an already cheap, eye-rolling moment and make it even cheaper.

With this in mind, I don’t think the unlikeliness or cheesiness of this sequence is the biggest writing crime here; rather, it’s putting Glenn in such a silly situation so soon after the dumpster fiasco at all. In my opinion, it would have been better for this part of the story to be mostly about reuniting Glenn with Maggie, which we don’t really get to see because of all the silliness. The look that Maggie gives Glenn, with the audience knowing that she’s pregnant, was absolutely heart-wrenching, and this was cheapened by an unnecessary action sequence in an already action-packed episode. So while I’m glad they didn’t kill Glenn, if they’re going to keep putting him in these situations, they should just do it.

Daryl

I said at the beginning of this piece that Daryl should have died in lieu of Negan’s snarky associate, and Daryl’s actions later in the episode only validate this further. Sure, it’s damn awesome to pour gasoline out of a big tank, fire a rocket into that gasoline, and sit back while all of your zombie problems burn away. Cinematographically and narratively, this was a welcome and exciting surprise. But practically, what a joke: Daryl pours valuable gasoline onto a lake and then fires a rocket into it when literally a match would have sufficed.

The reason this happened is obvious: Daryl is played out as a character. He started as the wild and unfriendly but good-hearted survivor, and we really saw that good-heartedness grow and develop over the seasons. Throughout, he always had little to say but let his actions speak for him. But because his character has nowhere to go from being good and “cool”, the writers have had him become more brooding and more “cool”. While he’s a fan favorite, I think his “awesomeness” in the latest episode only goes to show he has no more story left to be told.

Perhaps I’m wrong, though, and all in all, I wholeheartedly enjoyed the latest episode.

Machine Learning in Swift: Linear Regressions

I’m going to implement, in Swift, a simple linear regression on a data set which maps square footage onto housing values in Portland, Oregon. With this algorithm, I’ll be able to predict a housing value given square footage. For this exercise, I’m going to use pure Swift and keep my only dependency as Darwin. If you want to skip right to the code, here’s a Playground.

First, I’m going to need to define a point, with $x$ and $y$ values. If I were using CoreGraphics I could make use of CGPoint, but I won’t add that dependency and there doesn’t appear to be a Swift Point, which I find a bit surprising. Because Swift value types are much more efficient, I’m going to make my point a struct.

struct Point {
 var x: Float
 var y: Float
}

Great. Now I’d like to define a collection of points as an object so that I can perform operations on it. I’ll use a Swift Set because my data isn’t ordered.

typealias Data = Set<Point>

Unfortunately this is where I run into my first problem with Swift: my Point cannot go into a set because it’s not Hashable; and to be Hashable, the struct must also be Equatable. So let’s do some stuff to make the compiler happy:

func ==(lhs: Point, rhs: Point) -> Bool {
 return lhs.x == rhs.x && lhs.y == rhs.y
}

extension Point : Hashable {
 internal var hashValue : Int {
  get { return "\(self.x),\(self.y)".hashValue }
 }
}

Now that I have all the preliminaries done, I’d like to define an extension on my new custom Point type which adds all of the functions I’ll need to perform a linear regression.

extension Data { }

This causes my second run-in with the Swift compiler: it seems that constrained extensions must be declared on the unspecialized generic type, with the constraints specified by a where clause. This means that instead of extending my custom Data typealias, I’ll have to extend Set and constrain its Element to my Point structure. Let’s see what happens:

extension Set where Element : Point { }

Unfortunately this also does not work: the compiler complains that I’m constraining Element to the non-protocol type Point, which is true. I cannot quite tell, but it seems this feature may be coming in a future version of Swift, along with the following syntax (which also did not work for me this time):

extension Set where Generator.Element == Point { }

In any case, I’ve now found the winning combination to get the functionality I want while keeping the compiler happy: a PointProtocol which defines an x and a y, a Point struct which implements PointProtocol, and an extension on Set whose Element conforms to (the admittedly superfluous) PointProtocol:

protocol PointProtocol {
 var x: Float { get }
 var y: Float { get }
}

struct Point : PointProtocol {
 var x: Float
 var y: Float
}

extension Set where Element : PointProtocol { }

Now it’s time to implement the derived values I’ll need to fit a linear regression to my Set of Points. With Andrew Ng’s first three lectures fresh in my mind and a little help from Salman Khan, I came up with the following implementation:

extension Set where Element : PointProtocol {
 var size: Float {
  get { return Float(self.count) }
 }

 var avgOfXs: Float {
  get { return self.reduce(0) { $0 + $1.x } / self.size }
 }

 var avgOfYs: Float {
  get { return self.reduce(0) { $0 + $1.y } / self.size }
 }

 var avgOfXsAndYs: Float {
  get { return self.reduce(0) { $0 + ($1.x * $1.y) } / self.size }
 }

 var avgOfXsSquared: Float {
  get { return self.reduce(0) { $0 + pow($1.x, 2) } / self.size }
 }

 var slope: Float {
  get { return ((self.avgOfXs * self.avgOfYs) - self.avgOfXsAndYs) / (pow(self.avgOfXs, 2) - self.avgOfXsSquared) }
 }

 var yIntercept: Float {
  get { return self.avgOfYs - self.slope * self.avgOfXs }
 }

 func f(x: Float) -> Float {
  return self.slope * x + self.yIntercept
 }
}
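For reference, the slope and yIntercept properties above are the standard closed-form least-squares solution. Writing $\bar{x}$, $\bar{y}$, $\overline{xy}$, and $\overline{x^2}$ for avgOfXs, avgOfYs, avgOfXsAndYs, and avgOfXsSquared:

$$m = \frac{\bar{x}\,\bar{y} - \overline{xy}}{\bar{x}^2 - \overline{x^2}}, \qquad b = \bar{y} - m\,\bar{x}, \qquad f(x) = mx + b$$

Multiplying the numerator and denominator by $-1$ gives the perhaps more familiar form $m = (\overline{xy} - \bar{x}\,\bar{y}) / (\overline{x^2} - \bar{x}^2)$.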

I have been trying to find a way to generalize the averaging functionality and pass in just the value I want to use in the summation, but I have yet to find a good way to do that.
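Here is one sketch of that generalization (the average(of:) helper and its name are my own, not part of the original Playground): pass the projection in as a closure and let a single reduce do the summing.

```swift
protocol PointProtocol {
 var x: Float { get }
 var y: Float { get }
}

struct Point : PointProtocol, Hashable {
 var x: Float
 var y: Float
 var hashValue: Int { return "\(x),\(y)".hashValue }
}

func ==(lhs: Point, rhs: Point) -> Bool {
 return lhs.x == rhs.x && lhs.y == rhs.y
}

extension Set where Element : PointProtocol {
 // One generic averaging function replaces the four avgOf… properties:
 // the caller supplies the value to be summed for each point.
 func average(of value: (Element) -> Float) -> Float {
  return self.reduce(0) { $0 + value($1) } / Float(self.count)
 }
}
```

With this in place, avgOfXs collapses to `average { $0.x }` and avgOfXsAndYs to `average { $0.x * $0.y }`.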

Now that I have all the tools I’ll need, it’s just a matter of plugging in some data and running the regression:

var data = Data([
 Point(x: 2104, y: 400),
 Point(x: 1600, y: 330),
 Point(x: 2400, y: 369),
 Point(x: 1416, y: 232),
 Point(x: 3000, y: 540)
])

import XCPlayground

for i in [0, 1000, 2000, 3000, 4000, 5000] {
 XCPlaygroundPage.currentPage.captureValue(data.f(Float(i)), withIdentifier: "")
}

This creates a beautiful graph in Xcode’s Playground, which reveals to me the profound insight that a house with 0 square footage should be worth $26,790 in Portland, Oregon. More interestingly, at the 3,000-square-foot mark, just as we might expect by cross-referencing with our original data set, my linear regression says the house should cost $522,146. Take a look for yourself:

[Screenshot: the linear regression graphed in an Xcode Playground]
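As a quick sanity check on those outputs, plugging the five points into the averages by hand (my own rounded arithmetic; values are in thousands of dollars):

$$\bar{x} = 2104, \quad \bar{y} = 374.2, \quad \overline{xy} = 840{,}742.4, \quad \overline{x^2} = 4{,}750{,}374.4$$

$$m \approx 0.1651, \qquad b \approx 26.8, \qquad f(3000) \approx 0.1651 \times 3000 + 26.8 \approx 522.1$$

which lines up with the regression’s $522,146 prediction at 3,000 square feet.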


Watch apps worth making and the enterprise

Everyone, even Apple, still seems to be trying to figure out what people want or need to do on their wrist. Prominent WatchKit developer David Smith muses:

What doesn’t work is easiest to say. Apps that try to re-create the functionality of an iPhone app simply don’t work. If you can perform a particular operation on an iPhone, then it is better to do it there. The promise of never having to take your iPhone out of your pocket just isn’t quite here yet. The Apple Watch may advance (in hardware and software) to a point where this is no longer true but the platform has a ways to grow first.

In response, Federico Viticci:

[…] As I tweeted yesterday, my favorite Watch apps aren’t trying to mimic iPhone apps at all. If the same task can be completed on the iPhone, I don’t see why I would try on a smaller, slower device.

Something you might not hear elsewhere: I’m rather interested in the possibilities of fleets of watchOS devices in the enterprise. I’ve heard of a real, albeit crazy, case of a company deploying a fleet of iPhones that workers wear on their wrists to inform them of certain events as they happen. Of course, the Apple Watch would be perfect for this, but it’s been billed and tooled to be such a personal device that I don’t think the platform is quite ready for enterprise needs like multi-user support or deploying many devices at once.

But perhaps one day.

Sync is still hard

Sync is still hard. Versioning documents, resolving conflicts, and issues of connectivity still cause every cloud storage solution trouble, even high-profile software like iCloud and Dropbox. Consider what Federico Viticci just tweeted:

Just lost 1.5k words I had prepared for tomorrow because I wanted to try iCloud sync instead of Dropbox this week.

In response, Manton Reece writes that iCloud is too opaque:

I hear that people love iCloud Photo Library and Notes, and that the quality of these apps and companion services has significantly improved. That’s great. (I also think that CloudKit is clearly the best thing Apple has built for syncing yet.)

But to me, it doesn’t matter if it’s reliable or fast, or even if it “always” works. It only matters if I trust it when something goes wrong. Conceptually I’m not sure iCloud will ever get there for me.

This is absolutely right. I used to be “all-in” on Apple’s software back when iPhoto was around, because I could back up the managed folder and still access that data in a directory structure that made sense to me. I migrated to a Dropbox- and Adobe Lightroom-based workflow because of performance, reliability, power, and predictability. Perhaps Photos is simpler and more convenient for most consumers, but it’s just too risky and too opaque for me.

This discussion reminds me somewhat of why Marco Arment and David Smith use their own Linux servers instead of BaaS.

The smart-home's future

The “smart-thing” pattern is coming for all of our stuff: homes, cars, toasters, and more. Still, the smart home is too confusing and expensive for many consumers. Dan Moren writes for Macworld:

My home is dumb.

Part of the reason is that I don’t have a house—I have an apartment, which I rent. That limits the investment I can make into smart home technology: No rewiring thermostats or installing smoke detectors for me.

But the other part of it is that right now, the smart home industry is disjointed, fragmented. There are a ton of disparate gadgets and more competing and wackily-named protocols than I can shake a (smart) stick at.

I disagree. The Philips Hue (and its “Friends of Hue”) program seems to me to be a popular standard, and it works very well. Apple’s HomeKit and Siri integration work very well as a hub. And while I also rent and cannot touch my thermostat, there are plenty of consumer fire alarms, scales, AI assistants, locks, and blinds that work with iOS. In fact, because of the difference Dan alludes to between a “smart home” and a “smart room”, I’d argue renting makes it easier and cheaper to get into home automation.

The biggest issue with the smart home is the price and the “long upgrade cycle” on things like locks.

Creative professionals and the Apple Pencil

Amanda Summers reviewed the iPad Pro on Medium, in a piece titled “A UX Designer’s Review of iPad Pro”:

We are confident in saying we are able to sit down with iPad Pro and Apple Pencil and create something just as good, if not better, than sketching traditionally using pencil and paper.

Apple Pencil feels completely natural in our hands. There’s no latency and the shading and pressure points feel all too real. The palm rejection technology works great, allowing us to rest our palm on the screen without worrying if it will mess up our drawing.

This is the most consequential review of the iPad Pro I’ve read; all of the reviews which came out on the first day were mostly to the tune of “Yeah, it’s a big iPad, and the Pencil is cool but rather expensive.” If creatives are successfully using professional-grade software to get real work done, that’s an excellent sign of the potential of the platform. I do wonder, though: is this a good sign for the form factor of the iPad Pro, or for the utility of a stylus on tablets generally? I suspect the latter, and we’ll see what Apple have to say about it in March.

Unrelatedly, I found the article’s placement and production interesting as a published piece. It’s under Amanda Summers’ name, but it’s “published in” her employer’s “organization” entity. I suspect that what MindSea Development get from having their employees publish to Medium is status and marketing; it’s yet to be seen how Medium will make money from this.

Twitter is in fact changing the timeline

Twitter is changing their timeline to be algorithmic instead of chronological, rather like Facebook did some time ago.  Matthew Lynley reporting for TechCrunch:

Twitter today is unveiling a new Twitter timeline that shows tweets at the top that the service recommends, instead of the most recent tweets. They’re designed to be the best tweets that users may have missed based on what Twitter thinks you care about. […]

All this comes on the heels of a massive backlash against the move, which was first reported by BuzzFeed, in the trending topic aptly named #RIPTwitter.

If you don’t like changes a service you do not pay for makes, you don’t have any recourse other than leaving. I encourage you to do so.

Get off of GitHub

In recent months, we’ve seen the slow, gradual growth of open-source alternatives to quell the irony of GitHub: GitLab, a FOSS GitHub-style piece of software that also offers a subscription model, has been getting a lot of press. Perhaps relatedly, GitHub have been making organizational changes which do not strike me as innocuous. This would not be the first time that FOSS’s free (as-in-beer) hosting sweetheart has turned its back on the community: SourceForge, once a place much like GitHub is today, made changes that arguably opened the market for GitHub. From Wired:

In the years since career services outfit DHI Holdings acquired [SourceForge] in 2012, users have lamented the spread of third-party ads that masquerade as download buttons, tricking users into downloading malicious software.

There’s nothing inherently wrong with profit-seeking behavior: it’s only natural for a company to grow, to return value to its investors, and to, I suppose, conquer the world. The problem is when that company’s product is, at least in part, software created with the utmost good intention: unconditional sharing. GitHub’s business of selling per-repository subscriptions to enterprise is great because programmers know and love GitHub, and enterprise IT can manage permissions and source code long after programmers move elsewhere. This was a magnificent aligning of incentives by GitHub: programmers get to share their code unconditionally (or do whatever nerdy stuff they want), and their employers will pay to keep the lights on because the employees ask to use the service at work. Superb.

A turn for the worse

What’s going to happen next with GitHub, however, I fear will not be so win-win. From Business Insider:

We also understand that [GitHub CEO] Wanstrath is working extremely closely with Andreessen Horowitz’s Peter Levine and Sequoia’s Jim Goetz, talking with one or both of them almost daily. These are two of the industry’s most respected VC investors.

One person familiar with Wanstrath’s relationship with these VCs told us they are “thrilled” with him and with the changes he’s been making at the company.

“Chris wanted to change leadership structure and he made a set of changes. You’re going to see a bunch of announcements where new folks are joining,” this person said.

The rest of the article discusses how GitHub are cleaning house of longtime employees and removing the existing “meritocracy” in favor of “hierarchy.” In and of themselves, I wouldn’t fear either of these changes: companies regularly undergo change, and hierarchy is perhaps necessary at a certain scale. The reason this worries me with GitHub, however, is that they, in the eyes of the FOSS community and in the words of Obi-Wan Kenobi, were supposed to be the chosen one. Despite not being open-source themselves, GitHub have always (in my eyes) been a programmer’s company and a company of programmers, with great community outreach and geektastic stuff like Hubot. These changes and this cosiness with VC people make me suspect that GitHub is looking to take on another round of funding and to monetize its “social” aspect.

At risk of self-aggrandizing, I’m going to quote myself from earlier today, where I was talking about Facebook:

I cannot imagine that a highly technically literate consumer base would be willing to subject themselves to the policies of many of today’s Internet giants. In particular, Facebook, and social media with similar business models, sell your attention to advertisers. In the early days, these services are great: they’ve usually received a huge amount of capital and provide a service users want to get their attention. When the capital runs dry, the investors want their 100x return, and the service has the user’s attention, they sell the user’s attention using information the users inputted themselves.

I may have to eat these words, because precisely what we have is a “highly technically literate consumer base” that’s “willing to subject themselves” to some VC-backed giant looking for a “100x” return: me and developers like me. When I first began programming, I was fickle enough to buy fully into GitHub’s marketing of being a merit-based community of programmers, isolated from all the bullshit of corporate politics. Of course, there were problems with GitHub: it felt like the world stopped turning when it went down, and there’s a whole slew of annoyances that maintainers of large projects have to deal with. But I looked past these because of the goodwill of the community.

The need to grow

The reason things have changed now, and that it’s past time to get off of GitHub, is that it’s not enough for groups like Andreessen Horowitz or Sequoia that companies be merely profitable: an agency making Flash-based websites can be profitable. Growth has to be exponential. Here’s Matt Henderson, founder of Makalu, on what it’s like to take on venture capital:

[VCs are] not looking for a profitable business; instead, they’re looking for growth that provides the opportunity for a 100x exit. And their expectation is that you, the founder, will work to achieve that at any cost. And since their investment also brings the expectation of participation and inclusion in the running of the business, any company owner considering taking on investment would be well advised to make sure at the outset that everyone’s on the same page in terms of objectives.

Perhaps GitHub is seeing a downturn in the growth of their enterprise business. Perhaps they fear competition from GitLab and need investment to fend it off. Perhaps they’re greedy and want to be as big as Facebook one day. I don’t know. But I’m certain that if the recent trend continues, we’re going to see GitHub positioned as a product with the worst parts of LinkedIn and SourceForge. A programmer’s professional status will be tied to their open-source contributions and public side-projects, but when anyone downloads them, the software will come bundled with some browser plugin or “GitHub installer” or something ludicrous. Or maybe not; maybe GitHub will find a way to grow profit without selling out its users. I’m just not going to wait around to find out.

My thanks to Michael Tsai for the links.

Publishing to the open web

When it was difficult to publish writing, music, and video, publishers naturally came to exist because widespread distribution had a high barrier to entry. Today no such physical or monetary barrier exists; the barriers that remain are legal or technical. Perhaps the legal barriers are a deliberate scheme to protect old-money business models, but I find it more likely they’re a matter of inertia.

The technical barriers will not last. Many people neither have nor desire the technical skills to publish their own media on the Internet. I suspect this is at least part of the appeal of mainstream social media: you don’t have to be a nerd to interact with people online; it’s easy enough that anyone can do it. I’m hopeful that as more people grow up with the Internet and technical education becomes more prominent, a DIY spirit of publishing will flourish. While this doesn’t ease today’s problem of massive publishers owning the rights to what might justifiably be considered public domain, the next Mickey Mouse or “Happy Birthday” song might not have the same problem.

This is rather like the infamous quip that “if you’re not paying, you’re not the customer, you’re the product.” I cannot imagine that a highly technically literate consumer base would be willing to subject themselves to the policies of many of today’s Internet giants. In particular, Facebook, and social media with similar business models, sell your attention to advertisers. In the early days, these services are great: they’ve usually raised a huge amount of capital and provide a service users want in order to win their attention. When the capital runs dry, the investors want their 100x return, and the service has the users’ attention, so it sells that attention using information the users entered themselves. To the degree that the service doesn’t want to lose users, it tries to keep them happy, but the incentives have become misaligned, especially once the company goes public: it’s time to make profit grow. There’s nothing wrong with public companies seeking growth, but it does run the risk of harming user privacy and the Open Web.

A microblog of your very own

This is the raison d’être of Sudophilosophical: my attempt to live in the future, to self-publish my thoughts, lofty and tiny alike, to the World Wide Web. There’s no pressure for 100x profits or to sell out my readers. I’m rather hopeful that direct sponsorship and individual contributions will enable individuals to run sustainable content production now and in the future. Platforms like Facebook Instant Articles, Google AMP, and Apple News may be eating publishers, but I hope that technically savvy users will eat platforms next.

With this in mind, I’ve re-thought the structure of this website to accommodate an important type of media: the microblog. As of today, in the sidebar, there are a couple of options for readers who would like to subscribe via RSS:

  • All – a feed of everything,
  • Articles – a feed of all long-form,
  • Microblog – a feed of all short-form,
  • Podcast – a feed of all spoken-word.
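Under the hood, each of those feeds is just RSS, a small XML format anyone can generate themselves. As an illustration (the feed title, URL, and item below are placeholders, not this site’s actual configuration), here’s a minimal sketch in Python using only the standard library:

```python
# Sketch: build a minimal RSS 2.0 feed for a microblog.
# The channel metadata and the single item are illustrative placeholders.
import xml.etree.ElementTree as ET

def build_feed(title, link, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item in items:
        node = ET.SubElement(channel, "item")
        # RSS 2.0 requires an item to have a title OR a description,
        # so a short-form post can carry just its text in <description>.
        ET.SubElement(node, "description").text = item["text"]
        ET.SubElement(node, "pubDate").text = item["date"]
    return ET.tostring(rss, encoding="unicode")

feed = build_feed(
    "Example Microblog",
    "https://example.com/microblog",
    [{"text": "Hello, open web.", "date": "Mon, 22 Feb 2016 12:00:00 GMT"}],
)
print(feed)
```

Titleless items are exactly why RSS suits microblogging: a tweet-length post needs nothing but a description and a date.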

Dave Winer published a note about publishing long-form content where he muses:

Last night I posted a tweet: “Next time you want to post an essay to Medium, do the open web a favor and post it elsewhere. Anywhere. Tumblr. WordPress.com.”

I think he’s absolutely right, but not quite radical enough. It seems to me obvious to retort: Next time you want to post under 140 characters to Twitter, do the open web a favor and post it elsewhere. Specifically, your own website, not Tumblr or WordPress.com.

Publish-it-yourself

Detractors might argue that this is all well and good for individuals, but maybe not bigger organizations with loftier journalistic goals. John West argues for supporting publishers directly in “Death by a thousand likes”:

We need to stop pretending that content is free. Publications need to ask readers to pay for their content directly, and readers need to be willing to give up money, as opposed to their privacy and attention.

To the contrary, we need to stop pretending that publishers are necessary. I agree that the production of content can be quite costly, but the distribution of content in the digital age is basically free. Hosting this site costs about $25.00/month, and it weathered a Hacker News hug-of-death (more on that later) of 52,000 visitors in 24 hours. I don’t see why publishers are necessary when everyone on Earth has access to the World Wide Web. Paul Krugman and disenfranchised minorities alike would be able to publish their thoughts at near-zero cost, and monetize directly by selling sponsorships, merchandise, speeches, or a variety of other goods and services.

These arguments are very similar to those trotted out when Parse shut down. I’d like to appropriate what Marco Arment said about Parse for use here:

For whatever it’s worth, running your own Linux servers today with boring old databases and stable languages is neither difficult nor expensive. This isn’t to say “I told you so” — rather, if you haven’t tried before, “You can do this.”

Getting WordPress running on a Linode today, I admit, is beyond many people. But I look forward to the near future when people who want to express themselves don’t reach for a venture-backed advertising-platform-in-the-making, but rather their very own cloud server.
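In that spirit, the whole “publish” step for a static site is small enough to sketch. This isn’t how this site actually works, and the directory, slug, and post text below are made up; it’s only meant to show that producing a page any plain web server (nginx, Apache, or even Python’s built-in http.server) can serve takes a few lines:

```python
# Sketch: the entire "publish" step for a self-hosted static post.
# The output directory, slug, and post text are illustrative placeholders.
from pathlib import Path
from string import Template

PAGE = Template(
    "<!DOCTYPE html>\n"
    "<html><head><title>$title</title></head>\n"
    "<body><article><h1>$title</h1><p>$body</p></article></body></html>\n"
)

def publish(out_dir, slug, title, body):
    # One HTML file per post; point any web server at out_dir and it's live.
    out = Path(out_dir) / f"{slug}.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(PAGE.substitute(title=title, body=body), encoding="utf-8")
    return out

page = publish("public", "hello-open-web", "Hello, open web",
               "Published from my very own server.")
print(page)
```

Serving the resulting directory is one command away (`python -m http.server --directory public`), which is rather the point: boring files on a boring server.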