hacker news with inline top comments - 23 Jun 2015
Commit messages are not titles antirez.com
120 points by cocoflunchy  3 hours ago   89 comments top 28
1
stared 2 hours ago 5 replies      
Dots, periods, or starting with caps - it makes no difference (except for aesthetics, perhaps). What does make a difference (or at least did for me) is Angular-style commit messages: https://github.com/angular/angular.js/blob/master/CONTRIBUTI....

> <type>(<scope>): <subject>

> [...]

> feat: A new feature

> fix: A bug fix

> docs: Documentation only changes

> style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)

> refactor: A code change that neither fixes a bug nor adds a feature

> perf: A code change that improves performance

> test: Adding missing tests

> chore: Changes to the build process or auxiliary tools and libraries such as documentation generation

There is a big difference in knowing if someone added a feature, fixed something, or did some refactoring.
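
For example, a made-up subject line in this style:

  fix(http): ignore trailing slash when matching routes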

2
BillinghamJ 3 hours ago 10 replies      
Does anyone actually care whether commit messages contain a period or not? Whether they're lower case, title case, or sentence case is completely irrelevant.
3
knocte 2 hours ago 1 reply      
As for me, I don't care about the style at all.

What I care about is the content. And the thing I hate the most is commit messages that explain the what, but not the why! (It's the equivalent of comments in the code that explain something obvious.) To me, this kind of commit message expresses the same as no commit message at all.

4
kashyapc 1 hour ago 0 replies      
Speaking of Git commit messages, it's worth pointing out this wiki[1], written and reviewed by some long time open source contributors. It's a bit verbose, but rewarding to the patient reader.

On writing Git commit subject lines, I've come to appreciate why the imperative mood is nicer (as opposed to the indicative mood). Some related reasoning here[2].

[1] https://wiki.openstack.org/wiki/GitCommitMessages

[2] http://chris.beams.io/posts/git-commit/

5
nandemo 2 hours ago 1 reply      
In a previous gig, we had a simple rule: there must be a ticket number with a certain format at the beginning of the commit message. A summary follows.

The ticket has pretty much all the info you need, so there's usually no need to write a very detailed commit message.

This essentially mandates tickets for every single commit, which might sound too process heavy for some people, but ticketing was part of our process anyway for other reasons (change management, compliance, etc).

In the rare event that a commit is not originated by a ticket, you can still cheat by writing a fake ticket number.

6
zvrba 39 minutes ago 0 replies      
How about: if your message doesn't fit into a single sentence, the commit is too big. Split it. If it's a huge architectural change merged from another branch, describe it in a separate document referenced in the message.
7
VeejayRampay 2 hours ago 1 reply      
The first comment under the post is frankly outrageous.

This is precisely the reason why the rest of the world abhors programmers and the software community. Random strangers contributing nothing to the debate but their angry and immature bile. No wonder content creators are disabling comments one after another, why wouldn't you?

8
chrisan 1 hour ago 1 reply      
I really like bitbucket and JIRA for this in a business use case.

We use the format "JIRA-XXX commit message/title/synopsis/whatever you wish to call it", where JIRA is our project name in JIRA and XXX is the ticket number. Bitbucket can then turn that into a link to your JIRA project.

All of the who/what/why is already in the JIRA ticket as more often than not there are non-coders who have input/insight around the issue. When I am looking at the commit log I see a brief summary and if I need more details I can go to JIRA and get the full history with screenshots, designs, reproducible steps, business cases, etc.

Another side effect of this is that when someone is searching in JIRA and finds a ticket, they can easily see all of the commits related to that ticket. This also works with GitHub.

The commit simply becomes the "how" behind "JIRA-XXX brief desc".

9
teh_klev 2 hours ago 1 reply      
> How many emails or articles you see with just the subject or the title? Very little, I guess

I hate to go OT and "well actually", but, well actually, this is more common than you'd think. I first learned about "subject only" emails back in 1998, and most places I've worked at use them to convey succinct messages where there's no need for extraneous words in the body. It's a real time saver when you have a busy inbox.

I've also worked in places that "EOM" (or some variant thereof) their message subjects to be explicit that there's nothing in the body [0].

WRT the remainder of the article, I don't have any strong feelings either way.

[0]: http://lifehacker.com/5028808/how-eom-makes-your-email-more-...

10
sambe 3 hours ago 0 replies      
I also prefer treating them as sentences and prickle at the level of enthusiasm for omitting dots. In the end, though, consistency with others is more valuable. Still, the article is entirely unconvincing to me, despite my being on his side - removing a dot and writing succinctly are not mutually exclusive.
11
tempodox 1 hour ago 0 replies      
It seems obvious that inside any given organisation, it makes sense to have a convention regarding the form of commit messages, so you know how to look for certain things.

However, pretending there is a one-size-fits-all rule is taking it too far.

12
m0skit0 1 hour ago 1 reply      
We use Redmine and just put the relevant issue number and the title of the issue. If more information is needed, you can go to Redmine to check more details or how to reproduce.
13
StavrosK 3 hours ago 1 reply      
I was recently advised to use the "commit messages are subjects" style, but I get a pang of... something, every time I have to omit the full stop (I always type it out of habit and then press backspace).

I agree completely with Antirez, messages are not subjects or titles, and we should make them as succinct as possible, not force ourselves to write something that leads into a full-text piece only to omit the latter.

If it can fit in one line, make it fit, otherwise summarize the change as well as you can. Our goal should be to have to read the least amount of text to understand what the change is doing, and the hierarchy is: short commit message > long commit message > diff.

14
przemoc 2 hours ago 0 replies      
It always felt natural to put this dot at the end of (what antirez nicely called) the commit synopsis, so AFAIR I have always put it there (or at least since I started using git, and when not forced by a particular project's rules to do otherwise), and I found advice to avoid it quite strange. The only exception I have (besides forgetting the full stop yet pushing to a public repo, which does happen sometimes) is when the commit is a sole version-bumping/releasing one, because then I usually go with the following style:

 PROJECT_NAME [v]VERSION
There are some half-broken SCM managers that always display the whole commit message (sometimes even without respecting paragraphs), and then such a synopsis without a full stop, followed by further sentences, looks rather awful. YMMV.

As some commenters here have already written, it's not a grave matter, but it's good to have a consistent style. It goes without saying that having a relatively short commit summary in one sentence, separated by a blank line from further details (if they are necessary), is a far more important matter than the period or lack thereof, as it immensely eases grokking the commit log.

15
doe88 59 minutes ago 0 replies      
I think the two most important rules are:

1- The first line should give a good description of the commit

2- Avoid long lines (reading truncated commit messages is painful)

Other than that, I think it's all common sense/good judgment, and I'm not attached to a particular writing style or the kind of sentences one should write.

16
jgrahamc 2 hours ago 2 replies      
I think the body of the commit message is way more important than the title: http://blog.jgc.org/2013/07/write-good-commit-messages.html
17
anton_gogolev 3 hours ago 2 replies      
Is it just me, or did the entire "Kosher Commit Messages" movement really gain traction with the widespread adoption of DVCSes?

I really cannot recall contributors obsessing that much over SVN, CVS or VSS commit messages. Now, you could change these after the fact, but still.

18
callmekatootie 2 hours ago 0 replies      
Commit messages need to convey what will happen if you merge that commit in. Plain and simple. That's all that is needed of them. Of course, you need to tag the ticket number (if you are using cloud-based solutions like GitHub), but I am more interested in what the changes will do if I merge them in.
19
VieElm 3 hours ago 1 reply      
The 50-character limit on the first line of a commit message really bugs me. I try to stay within 50 characters, but sometimes I don't care. I can't always fit what I want in 50 chars, and adding a second line can be too much. This is the worst kind of thing around tooling, these types of conventions - in this case because that's how Torvalds wants kernel git commits formatted. I am not committing to anything Torvalds cares about.
20
necrodawg 2 hours ago 0 replies      
Commit messages are like tabs and spaces. However you write them, keep them the same across your repository.

But obviously lower case, present tense, no dot, 50 char limit, and usually starting with add/fix/refactor/update/remove is the way to go. ;)

21
mkawia 1 hour ago 0 replies      
My commits are not only titles, they are milestones - I have lolcommits installed and pose for the selfie and everything.

Could be because I am new to git.

22
jkot 2 hours ago 0 replies      
Commit messages could have titles :-)

Most tools only show the first line of the message. The rest of the message can contain whatever you want.

23
jsmthrowaway 3 hours ago 1 reply      
When using Git's mail features -- remember, not everyone uses GitHub -- the first line of a commit is a subject line. It's good practice to just think of it that way. If you write sentences in your subject lines in e-mail, I guess, that's up to you. Don't take it from me, read the Tim Pope counterpoint, with eight extremely good reasons for concise subjects in the closing paragraph:

http://tbaggery.com/2008/04/19/a-note-about-git-commit-messa...

Strong disagree with antirez on this one. Even vim filetype gitcommit disagrees. (Try it on the "smart synopsis" example.)

24
velco 2 hours ago 0 replies      
Right. Commit messages are not titles.

Commit messages ought to consist of a title, followed by an empty line and a short(-ish) summary.
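
For instance (a made-up example of that shape):

  Fix crash when the config file is missing

  Fall back to built-in defaults instead of dereferencing a
  null config handle when the file does not exist.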

25
jlouis 1 hour ago 1 reply      
I have the following in my git config for handling these kinds of discussions:

  [jlouis@lady-of-pain ~]$ cat .gitconfig | grep phk
  phk = commit -a -m 'upd'
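
(Presumably an [alias] entry, so "git phk" commits everything with the message 'upd' - sidestepping the whole debate.)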

26
Traut 1 hour ago 0 replies      
why is this even important?
27
seanhunter 2 hours ago 1 reply      
How can anyone take the author's word on whether or not to put a full stop at the end of something when he puts a full stop at the end of the bullet points in his list? That's far worse usage than anything he's doing in the title/summary/smart synopsis. https://www.oxforddictionaries.com/words/bullet-points
28
hasteur 1 hour ago 0 replies      
While I commend the poster's ideals, the method only works in the glass-cathedral model of development - one in which no commit gets made without having followed the process from inception, through requirements gathering, through unpressured development, through appropriate and thorough QA/code review, and deployment. In the street-market bazaar you're left with thousands of micro-commits, and you don't have time to sit down and create an illuminated manuscript page for each commit explaining its small description and its narrative history.
Who'll be the next president Google Search google.com
131 points by sssilver  4 hours ago   83 comments top 22
1
raus22 2 minutes ago 0 replies      
They have not nailed "Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources" yet.

Paper here (by Google): http://arxiv.org/pdf/1502.03519v1.pdf

2
joelrunyon 3 hours ago 6 replies      
Another unintended consequence of Google doing the things that Google recommends you don't do (in this case, scraping sites).

Can we talk about how terrible a piece of pro-hillary propaganda that was?

Namely this:

> America is ready for the leadership of a Hillary Clinton. A new history will be made when she becomes the leader of the free world. The world of women everywhere will change

Sure, this sounds very reminiscent of 8 years ago, but what's even scarier is the opening paragraph:

> She has paid her dues as a political candidate

Do people really think like this? All you have to do is "pay your dues" and you "deserve" it instead of voting on issues, track record & potential to make real change?

This is why we can't have nice things.

3
richmt 0 minutes ago 0 replies      
Interestingly enough, the top link for me just below the "in the news" headline is this comment thread.
4
kattuviriyan 3 hours ago 2 replies      
Some will need a screenshot of the query at some point:

http://i.imgur.com/SEq4GW9.png

5
MrDosu 3 hours ago 3 replies      
"Because the other side doesn't have a viable candidate" Glorious two party system, how is that even remotely called democracy by anyone?
6
HarrietJones 2 hours ago 1 reply      
Two points. Google answers questions; it sometimes gets those questions wrong, but it's getting better. I'm not that worried about the odd wrong answer, because that's a consequence of fledgling deep-learning software. Things will get better. Of course, some idiot's going to listen to Google one time and do something disastrous with the "advice", and then they might have to tone it down a bit, but I for one welcome our question-answering overlords.

Of more concern is the possibility that Google is increasingly correct, and the algorithms and heuristics it uses know us better than we know ourselves. Nothing's going to fuck with democracy more than the knowledge that free will is a myth and the outcomes for Very Important Things are already decided.

Media in the UK are stopped from reporting on Exit Polls before elections and there's a move to ban all polling because knowing how people will vote is enough to adversely affect elections. I can see this being applied to issues like this, but even that's problematic. I don't know if a machine accurately telling everybody who will win is better or worse than a machine only telling Important People who will win.

7
vixen99 2 hours ago 1 reply      
300 million people and (as is all too possible) another Clinton, another Bush? Someone should do justice to the sickness this demonstrates. Good example for Kim in N. Korea.
8
KhalilK 2 hours ago 0 replies      
I like how the second search result points to this thread. We've gone meta HN!
9
hsshah 3 hours ago 1 reply      
Nice example of how much ground "AI" has to cover before it can be considered "intelligent". OTOH, a lot of humans struggle with differentiating facts from opinions, so we can't be too hard on these systems.
10
Ciantic 2 hours ago 0 replies      
Interesting - I find the answers from DuckDuckGo more relevant: http://i.imgur.com/Dg96e42.png (all hits are on the title).

I didn't get the result the parent intended from Google, though.

11
bsbechtel 1 hour ago 0 replies      
To me this is a great example of how far we have yet to go in machine learning and AI.
12
glogla 3 hours ago 1 reply      
Given that google shows different people different results for the same query, to tell people what they want to hear, I wonder if others see the same result I do.
13
AJAr 4 hours ago 1 reply      
Are there other known instances of this bug? Funny, but could be pretty impactful with the audience being the whole general public.
14
rajathkm 2 hours ago 0 replies      
I think it can confuse a lot of people at first glance because Google generally displays the best answer to the query in that space. Funny to see it in this context.
15
EU_hacker_nrd 1 hour ago 1 reply      
..err.. so you guys get identical search results? Are you all on Tails over Tor with en_US? I get: "Who will replace Sepp Blatter? - ESPN FC", "Who will be Zambia's next president? | - sardc", etc.

Is this an American problem?

16
kriro 3 hours ago 0 replies      
I love that this thread is #1 and "Who will replace Sepp Blatter" is #7 at the time of this post :D
17
decafbad 1 hour ago 1 reply      
Which country?
18
popeshoe 3 hours ago 0 replies      
While it's kinda funny, I think it's made pretty clear that the text is a summary generated from the first search result, which is clearly linked below; only a moron would consider this an endorsement of Hillary by Google.
19
DanielBMarkham 1 hour ago 0 replies      
Okay, this is weird. I'm almost certain that within the last year or two one of the main Googlers said something like "People just want to be told what to do." It was quite a controversy at the time, and people argued that he was just being flip.

I thought it would make a nice addition to this story, juxtaposing people's desire for easy answers with an example of the type of easy answers Google provides folks. It could have kicked off a discussion about the very, very fine line between "helping people" and "telling them what to do".

Only now I can't find that quote anywhere on the internet. Huh?

20
higherpurpose 3 hours ago 0 replies      
The dangers of having a machine give you the "right answer". This isn't that different from that other "right answer" in Google about dinosaurs:

http://thenextweb.com/insider/2015/05/26/why-is-google-givin...

All this teaches us is to take Google's first answer with a huge grain of salt, if not almost immediately discard it in favor of further research, even when there's a "simple" question like "how tall is something" (remember Google got Stephen Colbert's height wrong as well: http://www.huffingtonpost.com/2014/10/21/google-stephen-colb...).

21
ExpiredLink 2 hours ago 0 replies      
As always, the candidate with the higher advertising budget wins.
22
exo762 3 hours ago 0 replies      
I would prefer seeing the answer to this question from prediction markets, not analysis of websites aka "experts' opinions". It's much easier to understand the mechanism that generated such a prediction, and you know that those people actually bet their own money on the result.
Clean Thesis is a clean, simple, and elegant LaTeX style for thesis documents der-ric.de
66 points by noqqe  5 hours ago   25 comments top 6
1
gjm11 2 hours ago 1 reply      
If the creator is reading this: Your web pages would be immediately improved 100% by adding a link to a PDF file generated using this style, showing most of what goes into a typical thesis. (You could use your own thesis for which it was developed, or just a pile of lorem ipsum.)

I see that you have links to JPEGs of sample pages, but this would be much better. [EDITED to add: I am not suggesting that you remove the JPEGs. They don't do any harm.]

2
ginko 3 hours ago 7 replies      
Isn't the LaTeX stylesheet for your thesis usually dictated by your university? It was for me at least..
3
johnchristopher 3 hours ago 4 replies      
I thought the default style(s) were the result of years of micro-tweaking and deep studies of the impact of character positioning and flow by Knuth, and that only the default \LaTeX{} style could give you that 2% head start, or that A instead of a B++, for any essay.

Now we add blue titles? And a sans serif font?

Sarcasm apart, it looks good. Some links on the page are 404, though (classic thesis, etc.).

4
stared 1 hour ago 0 replies      
5
cies 1 hour ago 1 reply      
A quick look on GitHub told me this is not using the memoir package. Could the author please elaborate on why it doesn't use memoir?
6
ccannon 1 hour ago 1 reply      
Stop wasting time coming up with elegant thesis formats and finish your thesis!
Mobile Changes Everything a16z.com
215 points by ryanb  13 hours ago   126 comments top 19
1
haxel 6 hours ago 15 replies      
"Everyone gets a pocket supercomputer" - Slide 8

I see this idea repeated so often, but it's unfortunate that we don't also have the _value_ of a supercomputer in our pocket. The sole purpose of a supercomputer is to advance the interests of its owner, who has exclusive control over it. Whether the purpose is prediction, or simulation, or to advance the state of the art, the benefit goes to the owner.

Yet we seem to have less and less control over our smart-phones. With so much information about each of us being siphoned off through the Internet, it's easy to wonder whose interest they serve.

With all of the advances in computing power, you'd think we'd put a bit more imagination into capturing more value for each individual smart-phone user, and less into centralized capture and analysis of our digital activity.

2
jimduk 4 hours ago 1 reply      
One comment from experience on Slide 21, "The mobile supply chain dominates all tech" / "Flood of smartphone components - Lego for technology":

 The shiny Lego is only available for the major players. 
For the current key components - GPU/CPU/camera sensor - you can't order them, get support, or get docs unless you have scale or amazing connections. If you are a hardware startup, your Lego is 2-3 yrs behind the big players, and behind public perception.

This makes complete sense once you look at chip fab costs/profit models and the industry structure, but is not great for disruption from smaller players.

NB this was from a European perspective of doing things officially - it's possible there might be more 'unofficial' components and support if this is done in China with strong local support

3
choppaface 6 hours ago 1 reply      
The presentation touches on smartphone penetration and communication behaviors of teens but really doesn't grapple with that phenomenon with any novel amount of rigor. The talk is aimed at making us believe in (i.e. want to invest in) tech. The argument is that the opportunity is so big, even fools who just throw money in the pot stand to make money.

The trouble is that we're mostly aware of how awesome mobile penetration is and how vital social networks are. I'd much rather see the preso that brings new evidence and rigor to the table than a last-ditch effort to pick up conservatives who have been ignoring tech for the past 7 years.

If a16z wants to chart progress, would love to see some of these graphs posted online and live-updated daily/monthly.

4
whysonot 7 hours ago 0 replies      
In case anybody is interested in the google books chart of the word "mobile":

https://books.google.com/ngrams/graph?content=mobile&year_st...

After the context is set, the most important page appears to be 44. It suggests that the next blessing[1] of unicorns will tackle enormous markets by building products around mobile. Didn't this shift already happen? I couldn't think of many major industries that don't already have mobile-first contenders.

Maybe I'm missing the point of the "tech is outgrowing tech" sentiment?

[1] http://www.answers.com/Q/What_is_a_group_of_unicorns_called

5
gavanwoolery 3 hours ago 0 replies      
Just because mobile is more prevalent does not make it more valuable; in fact, quite the opposite: the fact that it reaches more classes dilutes the spending power of the average user. What we have ended up with is a segment with extreme competition AND low app prices (the average PC app sells for at least 10x more than a mobile app).

But this is only the tip of the iceberg. Although we have tried to app-ify everything, I still prefer doing 99 percent of tasks on a device with a real keyboard and enough horsepower to prevent lag (in spite of rapid improvements, I still find the lag on my mobile device (1 year old now - HTC One M8) to be frustrating).

6
CodingGuy 6 hours ago 0 replies      
I love mobile first - all the idiots are leaving the web again!
7
Animats 6 hours ago 2 replies      
Mobile puts users back in their proper role as consumers, where they belong. The personal computer, and the Internet, were originally seen as subversive tools of empowerment. Remember those "manifestos of cyberspace" from the 1990s? Remember cyberpunk? Well, that didn't happen. Most Internet traffic today goes to the top 10 sites. None of those sites are even run by companies with a broad shareholder base. The billionaires are firmly in charge.

"If you want a vision of the future, imagine a boot stamping on a human face - forever." - Orwell.

8
z3t4 4 hours ago 0 replies      
Mobiles and personal computers are different, but the difference will eventually blur.

What worries me, though, is that currently mobiles are not as great as PCs when it comes to learning and creating. And that will slow technology growth, as today's youth are consumers rather than hackers/creators.

To make mobile software you still need a PC :P

9
quantisan 7 hours ago 0 replies      
The $30 fully featured smartphone is already here. I've been using a Microsoft Lumia 635 ($30 with no contract, $50 unlocked in the US) as my main driver for a month. Sure, it doesn't take epic photos or play the latest mobile games, but everything you'd expect works surprisingly well. Compare that to a couple of years ago, when even a $100 Android phone felt castrated.
10
cstuder 6 hours ago 0 replies      
Additional food for thought: The newest statistics about landline and mobile adoption from the CDC: http://www.theverge.com/2015/6/23/8826159/wireless-only-hous...
11
danblick 6 hours ago 0 replies      
Great presentation. This focuses on new business opportunities enabled by technology... but what kinds of political changes would you expect as half the world's population gets access to cheaper information and communication? Which institutions would you expect to gain or lose?
12
mdpopescu 4 hours ago 0 replies      
Seriously? "Microsoft is dead" again? Hasn't he learned from the last time he said that? :)
13
zargath 5 hours ago 0 replies      
I think it is very "dangerous" to say mobile = smartphone = iOS + Android, at least that is what I hear people say.

What about all the billions of devices we'll get in clothes, toys, tracking, etc.?

You can make insanely fast and small hardware today, and it will be used for awesome stuff. That is not just because you have a smart-phone in your nasty little pocketses. .-)

14
mathattack 6 hours ago 0 replies      
I'm a huge Benedict Evans fan, but is this news anymore?
15
hathym 4 hours ago 0 replies      
Supercomputers to Facebook and play Candy Crush - what a wonderful change.
16
pjmlp 7 hours ago 3 replies      
Changes everything and yet they have a website that displays like crap on my mobile, forcing me to zoom in page sections.
17
graycat 6 hours ago 4 replies      
=== Overview

"Mobile" -- an astoundingly popular collection of new products? Yes.

"Changes everything"? No.

Mobile is new and popular? Yes, and at one time in the US so were tacos.

New and popular are not nearly the same as changing everything.

=== Use a Smartphone?

Could I use a smartphone to buy from Amazon? Yes. Would I? Very definitely, no!

Why not?

(1) If the user interface (UI) is a mobile app instead of a Web page, no thanks.

Why? Because with a Web page and my Web browser and PC, I get to keep a copy of the relevant Web pages I used in the shopping and buying. And I very much want to keep that data for the future.

(2) Want to keep those copies of Web pages on a mobile device? Not a chance.

Why? Because for such data, I want my PC with its hardware and software. I want the Windows file system (NTFS), my text editor and its many macros, and my means of finding things in the file system.

My PC also gives me a large screen, a good keyboard, a good printer, a mouse (I don't want to keep touching the screen -- in fact, my PC screen is not quite close enough for me to touch), ability to read/write CDs and DVDs, backup to a USB device, etc.

Do I want to backup to the cloud? Not a chance. I backup to local devices.

Why? Because for cloud backup, money, a cloud bureaucracy, the Internet, spooks, and lawyers could get involved.

=== Business

My business is a Web site. I'm developing that on my PC, and will go live on a PC -- in both cases, a PC, not a mobile device.

Mobile users of my Web site? Sure: My Web pages should look and work fine on any mobile device with a Web browser up to date as of, say, 10 years ago.

=== New Business for A16Z

It sounds like A16Z likes mobile because for 2+ billion poor people smartphones are their first computer and are new and popular.

Okay, then, A16Z, here's another business you should like -- bar soap. Also, of course, just from the OP, tooth brushes. No way should we forget -- salt. Okay, of course -- sugar. Sure, one more -- toilet paper. Naw, got to have one more, plastic knives, forks, spoons, and drinking cups.

Not to forget -- sell them batteries for their smartphones. Maybe even solar panel recharging for their smartphones!

Especially for A16Z, got to have one more -- sure, Kool Aid.

=== Summary

A computer is the most important tool in my life. Currently my PC is my computer.

A smartphone most definitely does not replace my computer.

Actually, at present I have no use for a smartphone, a cell phone, or a mobile device and, so, have none.

Actually some years ago a friend gave me a cell phone. Once I turned it on, and some complicated dialog came up about my reading some contract and sending money. I turned the thing off and haven't turned it back on since.

Or, my PC has a network effect: It has all my data and means of entering, storing, processing, communicating, and viewing data, all in one place. A mobile device cannot be that one place, and, due to the network effect, I don't want to split off some of my data into a mobile silo.

=== Denouement

This post was written, spell checked, etc. with my favorite text editor, using my favorite spell checker, on my PC, and no way would I have wanted to have done this post on a smartphone.

18
lucian 4 hours ago 0 replies      
slide 31 / video min 16:07

------------------------------

Global SMS: 20 bn messages a day

WhatsApp: 30 bn messages a day.

(with just 40 engineers)

------------------------------

should be refactored:

------------------------------

Global SMS: 20 bn messages a day

(10.000 Engineers using C/C++/Java - just guessing)

WhatsApp: 30 bn messages a day

(with just 40 engineers using Erlang)

------------------------------

19
paulgayham 7 hours ago 0 replies      
Mobile is irrelevant and "apps" are mostly crap. My phone has Flappy Bird and that's it.
The man with 1,000 klein bottles under his house [video] youtube.com
106 points by cevn  10 hours ago   49 comments top 15
1
noonespecial 2 hours ago 0 replies      
His tiny robotic warehouse in the crawlspace of his house is the coolest thing I've seen in a long time.

Pure distilled mad science. He's like a living cartoon. I want to be him when I get old so very much!

2
slyall 3 hours ago 4 replies      
Cliff Stoll in case people were wondering:

https://en.wikipedia.org/wiki/Clifford_Stoll

I have his original book on my shelf; it was fascinating when it came out.

His Klein Bottle company is here:

http://www.kleinbottle.com/

3
acron0 3 hours ago 0 replies      
This is incredible. I LOVE stuff like this. It makes me so happy that people like this still exist in the world. I say 'still' because I feel like they're a dying breed. The rationale behind everything he's done is laid bare and it makes sense! People like this don't ever let a detail like "I have no idea how to achieve this" stand in their way. Everything is a challenge or problem that needs solving.
4
guard-of-terra 59 minutes ago 1 reply      
Is it glass wool hanging down from his underfloor ceiling in insane quantities?

I was raised to believe it's extremely dangerous to health, so people crawling around in there scares me. Of course, it's easier with the forklift.

5
jay-saint 1 hour ago 0 replies      
Check out his job postings for his Acme Klein Bottle company http://www.kleinbottle.com/jobs.html

My favorite: PENTIUM PROCESSOR. Must know all pentium processes, including preprocessing, postprocessing, and past-pluperfect processing. Ideal candidate pent up at the Pentagon, penthouse, or penitentiary. Pays pennies. Penurious benefits include Pension, Pencil. Pentel, Pentax, and Pentaflex. Write to pensive@kleinbottle.con

6
bambax 3 hours ago 4 replies      
Real life Gyro Gearloose!!!

In the next video he talks about how a Klein bottle is made, and how, unlike a regular bottle, it has no edge.

I'm not sure I understand why a bottle "has to" have an edge? Surely it's possible to make a bottle with no edge? For example, if one takes a sphere and progressively turns it into a bowl (by punching into it), and then makes the bowl deeper, it still has just one surface and no edge, no?

7
DanBC 6 hours ago 0 replies      
The klein bottles are awesome, but they are the least awesome thing about his under-floor space. That was amazing.
8
tempodox 2 hours ago 2 replies      
I got one of those Klein bottles from him, and it's really beautifully done. If you are careful, you can even store some liquid or other stuff in it. I wonder if the mathematical original could do that, too.
9
harel 1 hour ago 2 replies      
What would be a practical use for such a feat of inter-dimensional glass containerism?
10
chii 2 hours ago 0 replies      
> Now with a LIFETIME GUARANTEE - we guarantee that you will live your entire life OR YOUR MONEY BACK!

ROFL - that site is awesome. You don't need no fancy javascript, graphics or special effects. Just pure humor and good content!

11
dmd 1 hour ago 1 reply      
Wow, synchronicity -- I just received a klein bottle order in the mail from him YESTERDAY.
12
nedwin 2 hours ago 0 replies      
"Acme Klein Bottles - where yesterday's future is here today!" http://www.kleinbottle.com/
13
wodenokoto 3 hours ago 0 replies      
I want a reality show about this guy, this is amazing!
14
onion2k 3 hours ago 0 replies      
He is awesome.
15
DrScump 9 hours ago 1 reply      
a warning: the volume on this is REALLY. LOUD.
Tell HN: I wish "we use cookies" messages could be globally turned off
25 points by hoodoof  1 hour ago   10 comments top 7
1
jvdh 13 minutes ago 2 replies      
The idea behind this proposal was a noble one: companies should ask permission before invading the privacy of consumers.

Unfortunately, companies collectively decided that their business model does not need changing at all, and simply implemented a "cookie wall" for all their consumers. This led consumers, like you, to quickly get "cookie wall fatigue" and try to click 'OK' as soon as possible, without thinking at all.

And all the while, we complain that privacy on the Internet is going to hell, and there is nothing we can do about it, because consumers don't care enough, or have given up...

2
SmellyGeekBoy 16 minutes ago 0 replies      
> ...because of some European law I have no interest in.

Nobody in Europe does either. The whole thing is generally considered completely pointless and unenforceable.

3
blfr 7 minutes ago 0 replies      
There is a list for Adblock[1], with our own mike-cardwell contributing, but it doesn't have the kind of coverage you're used to from ad lists.

[1] https://github.com/r4vi/block-the-eu-cookie-shit-list

4
pmontra 3 minutes ago 0 replies      
As a web developer and a user of web sites I feel your pain, but for small web sites it's a matter of better safe than sorry. Some countries have pretty hefty fines against sites that send cookies without notice.
6
theandrewbailey 21 minutes ago 0 replies      
Using NoScript works wonders. In addition, I rarely see the annoying "Sign up for our newsletter" popups that assault you immediately on most sites.
7
dudul 19 minutes ago 0 replies      
I guess we should spend more time tailoring the Internet to your needs and preferences.
Fourier series codepen.io
120 points by guiambros  12 hours ago   30 comments top 16
1
aswanson 12 minutes ago 0 replies      
These kinds of visualizations are awesome. I remember a site back in the day during undergrad that helped visualize Fourier components using image processing. I learned so much more from that site than from watching a professor scribble out the formulae on a chalkboard.
2
burger_moon 2 hours ago 0 replies      
Here's another project that was posted here a while back that goes into detail explaining different waveforms with examples. It's titled "Seeing Circles, Sines, and Signals: A Compact Primer on Digital Signal Processing". It's really well put together.

http://jackschaedler.github.io/circles-sines-signals/dft_int...

3
line-o 3 hours ago 2 replies      
Wow. Beautiful math. Such insight. Thank you for sharing.

 var PI2 = Math.PI * 2.0;
That should be Math.TAU, right?

4
somberi 57 minutes ago 2 replies      
One of my favorite questions for math candidates who are potential employees:

In the Fourier transform, what is being transformed to what? And what is wrong with how it is that it needs to be transformed?

I am amused at the variety of answers this produces.

6
hemmer 1 hour ago 0 replies      
It's interesting that the Gibbs phenomenon appears visually as a whiplash effect. You can see how it would only get worse as higher-order terms are included.
7
cturhan 34 minutes ago 0 replies      
Here is another version[1] made with D3.js.

[1] http://bl.ocks.org/jinroh/7524988

8
jcpst 45 minutes ago 0 replies      
Very cool. My only concern is that if you have a screen with a lower resolution (1366x768), the drawer with the source code causes the user controls to be hidden, so it's not immediately obvious that those controls exist.
9
rtpg 4 hours ago 0 replies      
I loved a GIF version of this I saw recently - so fascinating.

I think this would be neater if it had a small text field where you could put in a JS function, and it would compute an approximate Fourier series based on a segment of the function.

Kind of feel like someone should make a page that fully explains all the levels of this graphic:

- the circles let you graph f(x)=a*sin(nx+phase) (cosine on the horizontal projection)

- attaching the circles is the same as adding, so you actually have a way of drawing out the first parts of a Fourier series

- any periodic function can be represented through all these spinning circles
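
As a rough, unofficial illustration of that last point, here is a partial Fourier sum for a square wave in plain JS (the number of terms is arbitrary):

  // Square wave built from "spinning circles": odd harmonics only,
  // each circle's radius shrinking as 1/n.
  function squareWave(x, terms) {
    var sum = 0;
    for (var n = 1; n <= terms; n += 2) {
      sum += Math.sin(n * x) / n; // one circle per harmonic
    }
    return (4 / Math.PI) * sum;
  }

  console.log(squareWave(Math.PI / 2, 9)); // ~1, and closer as terms grow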

10
agumonkey 1 hour ago 0 replies      
Pseudo-mechanical representations of Fourier series are everywhere these days; it's impressive how dry college math can hide the simplicity of an 'object' - a composition of rotations - turned into algebra.
11
joe563323 2 hours ago 1 reply      
This is just awesome, thank you so much. I really wish I could read more blog posts by the author of this code; sadly, it has been published anonymously.
12
bshimmin 1 hour ago 0 replies      
I'd be seriously impressed if someone could do this just using CSS.

(I was pretty impressed already, to be fair.)

13
lancefisher 5 hours ago 2 replies      
So are people dropping the data- prefix in custom attributes now? e.g. input[frequency]
14
vruiz 4 hours ago 0 replies      
Woah. Where were all these cool visualizations when I was studying physics?!
15
otoolep 4 hours ago 0 replies      
This is really cool.
16
sotaan 3 hours ago 0 replies      
nice work!
Second U.S. Agent Agrees to Plead Guilty to Bitcoin Theft bloomberg.com
91 points by sidko  7 hours ago   30 comments top 4
1
swombat 5 hours ago 2 replies      
Looking at the other side of the coin, it's perhaps also worth asking (even if the answer is a negative):

Given what we know about the coercive nature of the plea system in the US, which frequently forces people to plead guilty even when they're innocent, to avoid a very expensive, long, and likely "unhappy-ending" trial, is it possible that these FBI agents are actually innocent and simply being coerced into filing guilty pleas to turn 20 year sentences into 5 year sentences?

I mean, this is almost a banana republic we're talking about here after all. What was the percentage of people putting in guilty pleas who are innocent? I can't find the article but I recall it was quite high. If we apply this line of thinking to people we like, we should also apply it to people we don't like.

2
cm2187 22 minutes ago 0 replies      
It is useful to get these regular reminders that the authorities to which we give more power every day are as human as they have ever been. Reading this, it is not exactly outside the realm of possibility that an FBI/NSA/GCHQ/etc agent would sell private personal data collected in the name of anti-terrorism!
3
ipsin 6 hours ago 1 reply      
I'll be interested to see what a gross abuse of power gets you, in terms of sentencing.
4
trhway 7 hours ago 3 replies      
>In addition, prosecutors allege Force used his supervisor's signature stamp on a subpoena to unfreeze one of his own accounts that had been blocked for suspicious activity.

...

>Bridges allegedly filed an affidavit for a warrant to seize the Mt. Gox Co. exchange, where Silk Road bitcoins were stored, and its owners' bank accounts two days after he had taken his own money out of it, according to the criminal complaint.

this couple of bad FBI apples obviously can't cast any doubt onto the integrity of the rest of the FBI Silk Road team (consisting mainly of pure-bred white knights in shiny Lexuses) nor onto the integrity of the whole investigation and conviction....... The trial was a 3-ring circus beyond reasonable doubt. F&cking clowns. It isn't the issue whether DPR was this guy or not, or whether he really hired a killer. The issue is that my taxes are extorted from me in order to, supposedly, finance the justice process, and what I've got for my money was a show so lame that it wasn't worth even a rotten tomato to throw at it.

In light of the crimes by the FBI agents that have surfaced so far, the epic heroic story http://www.wired.com/2015/05/silk-road-2/ reads like a different story:

"DPR believed that Nob was a Puerto Rican cartel middleman named Eladio Guzman, but he was in fact DEA agent Carl Force. Force had spent more than a year developing his undercover identity on Silk Road in an effort to get close to DPR. Theyd become confidants, spending nights chatting at such length that DPR trusted Nob when he needed enforcement muscle.

It was Nob whom DPR hired to kill his employee, Curtis Green. Force then coerced Green into faking his own death as a ruse. Force was surprised to see DPR's moral collapse up close, but then again, he'd seen this kind of thing before, during his younger DEA years in undercover. He too had experienced the temptations that came with a double identity. In fact, his secret life as a hard-partying operator had nearly destroyed his regular life. He'd left all that behind and recommitted himself to Christ. The Silk Road case was his first undercover role since those days, and it was a big one. Because of his tenure online as Nob, Force was able to carry out the supposed hit on Green, setting DPR up for a murder conspiracy indictment while at the same time cementing their relationship. Nob and DPR had become comrades-in-arms."

I mean, obviously, this heroic Force, whose court testimonies and affidavits sent the guy away for life, couldn't be the same Force who stole the money. I mean, we're talking about commitment to Christ after all... Did I mention clowns?

And this one is the icing - http://www.forbes.com/sites/sarahjeong/2015/03/31/force-and-...:

"A states witness took the fall for an agents theft, thus becoming the target for a murder-for-hirea murder that was then faked by the same agent. The Silk Road case was compromised again and again as Force and Bridges allegedly took every opportunity to embezzle and steal money. With so much bitcoin on their hands, the two had to coax various bitcoin and payments companies to help convert their ill-gotten gains to dollars. When companies resisted, investigations were launched, subpoenas were issued, and civil forfeitures were sought in retaliation."

I just released 0.2 of Hypatia, a 2D adventure game engine
33 points by lillian-lemmer  1 hour ago   5 comments top 3
1
reedlaw 53 minutes ago 1 reply      
I like how this is aimed at beginners (non-programmers). Although I don't think it's really possible for a complete neophyte to pick up this project and make something, it seems like it could become part of an even more basic course. I find a lot of "beginner" resources make far too many assumptions. Today I was working with some students who had never used a keyboard before, and I tried to find a typing tutorial they could use, but most of them assumed too much. It's OK to have intermediate resources, but they should at least point to more basic information. For example, a typing program is far more useful with a picture of a hand over the keyboard in the home position. It's even better with an animation to show the proper fingering.
2
trishume 51 minutes ago 1 reply      
Neat! I like the idea of having both programmer and non-programmer customizability: it invites collaboration as well as people who start as non-programmers to expand their skills.

Side note: clever website domain (http://about.lillian.link/), I like it. The site itself is nice too; in particular, the embedded video of Hypatia is much better than a long description.

3
hundunpao 56 minutes ago 0 replies      
Adventure and Action Adventure are completely different things in my book.
The Unix Philosophy and Elixir as an Alternative to Go lebo.io
257 points by aaron-lebo  15 hours ago   102 comments top 19
1
vezzy-fnord 12 hours ago 6 replies      
Erlang (and by extension Elixir) are good languages for web development, I agree. Frameworks like Nitrogen and N2O even let you do frontend work directly from base Erlang constructs like records and functions to represent templates and JS callbacks, respectively.

However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework. It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up.

For this reason, Go will remain a dominant choice for infrastructure and systems programmers. It definitely is replacing C/C++ for lots of things above kernel space, contrary to your statement.

I'd honestly dispute the characterization that Elixir is a small language. Erlang is easier to fit into your head, not least because of how essential pattern matching on tuples and lists is. It's a consistent and homoiconic design, more warty constructs like records aside.

Finally, Erlang/Elixir aren't Unix-y at all. Don't get me wrong, BEAM is a wonderful piece of engineering. But it's very well known for being opinionated and not interacting with the host environment well. This reflects its embedded heritage. For example, the Erlang VM has its own global signal handlers, which means processes themselves can't handle signals locally unless they talk to an external port driver.

2
odiroot 3 hours ago 0 replies      
> Somewhere, however, that has been extrapolated to the idea that Go is a good language for writing web apps. I don't believe that's the case.

I have always had this feeling, and wondered why more people aren't speaking up about it.

For me it's a good improvement over C++ (and maybe C as well), and well-suited to systems programming.

I never understood how people can claim it to be a natural "upgrade" from Python or Ruby. I have a feeling the opposite is actually the case. This is all in the web-development context, of course, not the general one.

3
prasoon2211 1 hour ago 0 replies      
I am utterly baffled by people who use the phrase "The Unix Philosophy" to justify a piece of software but don't understand that the Unix philosophy is about software that can be composed together. That is why the other tenet of the Unix philosophy is to use text streams for all I/O.
4
aikah 10 hours ago 1 reply      
> That being said, have you tried writing a web app in Go? You can do it, but it isn't exactly entertaining. All those nice form-handling libraries you are used to in Python and Ruby? Yeah, they aren't nearly as good. You can try writing some validation functions for different form inputs, but you'll probably run into limitations with the type system and find there are certain things you cannot express in the same way you could with the languages you came from. Database handling gets more verbose, models get very ugly with tags for JSON, databases, and whatever else. It isn't an ideal situation. I'm ready to embrace simplicity, but writing web apps is already pretty menial work, Go only exacerbates that with so many simple tasks.

This.

And don't even try to work around these limitations and share your work for free, or you're going to get trashed by the Go community (i.e. the Martini fiasco).

But think of Go more as a blueprint. Go has great ideas (CSP, goroutines, ...).

Hopefully it can inspire a new wave of languages that don't rely on a VM, that are statically typed and safe, without a fucked up syntax.

The language that succeeds in striking the right balance between minimalism and features, with excellent support for concurrency, will be the next big thing.

5
dschiptsov 7 hours ago 6 replies      
Erlang is a well-researched, pragmatic language (read the Armstrong thesis at last) - this is why it is "great". A user-level runtime trying to do the kernel's job is a problem, but by relying on message-passing and being mostly functional it at least has no threading/locking problems - so it scales. Nothing much to talk about.

Go is also a well-researched language, with an emphasis on keeping it simple, being good enough, and doing things the right way (utf8 vs. other encodings) - a philosophy from Plan 9. Go has a lot of fine ideas in it, attention to detail, and good-enough minimalism - the basis for success. It is also pragmatic - that is why it is imperative and "simply" statically typed.

Criticism about the lack of generics is not essential, especially considering that generics in a statically typed language are an awkward mess. The complexity of its user-space runtime is a problem, of course, but runtime is hard, especially when it is not mostly functional.

Go is in some sense a "back to the basics/essentials" approach, not just in programming but also in running the code, and even this is enough to be successful.

BTW, its syntactic clumsiness and shortcomings (that hipsters are blogging about) come from being statically typed - just admit it. On the other side, being C-like and already having C-to-Go translators opens up the road to static analysis and other tools.

Go is the product of old-school (Bell Labs) minds (like Smalltalk or Lisps), not of a bunch of punks.)

6
jordan0day 12 hours ago 0 replies      
I think Elixir and Go solve different problems -- but for the problem space the author mentions here, web services, Elixir is clearly superior.
7
oldpond 12 hours ago 2 replies      
Thanks for that interesting viewpoint. I have also done the same survey, and I agree. I did a ChicagoBoss PoC last year, and Elixir and Phoenix are my go-to for the next one.

It should also be noted that both are young languages and they are evolving rapidly, which makes them even more exciting to me. Also, I think Go's sweet spot may be a new approach to client/server, i.e. applications that work across the network. This is quite different from the browser/app-server model. Elixir has similar plumbing that fits this model as well, but what really excites me about Elixir and Phoenix is the ability to build a RESTful MVC application that can take advantage of Erlang/OTP's scalability.

8
andyl 12 hours ago 0 replies      
Elixir and Phoenix are great for web apps, REST APIs and the like. Phoenix channels make real-time push super simple and scalable.
9
jtwebman 12 hours ago 2 replies      
I have been using Elixir for a while now! I'll never go back to Go or Node.js by choice.
10
applecore 13 hours ago 1 reply      
> However, as we more and more turn servers into REST APIs that are supposed to deliver JSON to numerous clients, for many it is time to find an alternative.

Perhaps the alternative isn't more REST APIs that deliver JSON; given its inherent problems, it's more likely we'll be using a challenger to that entire model, like GraphQL.

https://facebook.github.io/react/blog/2015/05/01/graphql-int...

11
swagmeister 5 hours ago 0 replies      
>Finally, JavaScript handles the modern web well, but it isn't perfect. If a request does something that is especially CPU-heavy, every single user of your application will be waiting.

One thing you might want to take into account is that ES2015 generators allow you to write long, blocking functions as generators that partially compute the result before deferring further computation to a later turn of the message queue. This lets you spread out blocking computations so that you can still serve requests.
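
A minimal sketch of the idea (the work and data here are stand-ins, and setImmediate assumes Node):

  // A long computation written as a generator that yields between
  // chunks, so the event loop can serve other requests in between.
  function expensiveStep(x) { return Math.sqrt(x); } // stand-in CPU work
  var data = new Array(1e6).fill(1);                 // stand-in input

  function* bigJob(items) {
    var total = 0;
    for (var i = 0; i < items.length; i++) {
      total += expensiveStep(items[i]);
      if (i % 1000 === 0) yield; // hand control back periodically
    }
    return total;
  }

  function run(gen, done) {
    var step = gen.next();
    if (step.done) return done(step.value);
    setImmediate(function () { run(gen, done); }); // resume on a later tick
  }

  run(bigJob(data), function (result) { console.log(result); });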

12
fiatjaf 2 hours ago 0 replies      
Code samples side-by-side comparison: http://rosetta.alhur.es/compare/go/elixir/#
13
georgeg 4 hours ago 0 replies      
I feel that this discussion would be incomplete without mentioning D (http://dlang.org) and the vibe.d framework (http://vibed.org) as alternatives to the technology platforms that the author mentioned.
14
EdwardDiego 8 hours ago 2 replies      
> We've known for years that Ruby and Python are slow, but we were willing to put up with that. However, as we more and more turn servers into REST APIs that are supposed to deliver JSON to numerous clients, for many it is time to find an alternative.

> You see a lot of competitors trying to fill this void. Some people are crazy enough to use Rust or Nim or Haskell for this work, and you see some interest in JVM based languages like Scala or Clojure (because the JVM actually handles threading exceptionally well), but by and far the languages you both hear discussed and derided the most are JavaScript via node and Go.

Meanwhile the Java programmers just keep on delivering with Jersey, Spring, Restlet, and so forth. Less blogging, more doing.

15
bsaul 7 hours ago 1 reply      
What about deployment? How does it compare to "scp and I'm done" like in Go?

Also, what about memory usage? My latest Go service was a 4 MB file with one single JSON config file. It consumed less than 4 MB of RAM, which let me deploy it on a basic 500 MB instance with plenty of memory to spare.

16
4ydx 10 hours ago 0 replies      
Models get ugly with tags? You don't have to have them unless the names are different.

This post is all about familiarity and the degree to which you are interested in using a language (just like every other post of its kind).

17
joshbuddy 11 hours ago 1 reply      
Web workers are real concurrency and, AFAIK, there is no GIL that spans the main thread and the worker threads in any of the browser implementations of web workers.
18
stefantalpalaru 12 hours ago 2 replies      
Surprisingly favorable benchmark: https://github.com/mroth/phoenix-showdown

and related HN discussion: https://news.ycombinator.com/item?id=8672234

19
jbverschoor 5 hours ago 0 replies      
Important to note is that Elixir was created by José Valim from Plataformatec, an important person and company in the Ruby community.
Top mathematical achievements of the last 5ish years, maybe richardelwes.co.uk
70 points by rndn  10 hours ago   7 comments top 3
1
aruss 4 hours ago 0 replies      
I'm somewhat surprised that Gentry's fully homomorphic encryption wasn't mentioned. It's definitely more applied math than anything else, but if you're going to mention RSA and factoring you might as well mention the biggest crypto breakthrough in the last 5-6 years.
2
claudius 1 hour ago 0 replies      
The list at the bottom regarding computational achievements is possibly slightly more relatable and also contains this nice gem:

 The most impressive feat of integer factorisation using a quantum computer is that of 56,153 = 233 × 241. The previous record was 15.

3
hessenwolf 5 hours ago 1 reply      
Four extra lines in a nice IMRaD format would have been a nice addition to start the discussion. It looks like the reader still has a lot of work to do.
Process the whole Wikidata in 7 minutes with your laptop (and Akka streams) intenthq.com
4 points by ArturSoler  17 minutes ago   discuss
Dollar Shave Club Is Valued at $615M wsj.com
102 points by chermanowicz  9 hours ago   135 comments top 35
1
nobleach 8 hours ago 7 replies      
I liked their branding. It had that hip internet company feel. The little leaflets and stuff that they sent with each razor pack were just fun. Over time, their razors began to hurt and feel like they were pulling hair more than I remember at the beginning. One Saturday I did an A/B test with a store-bought Gillette 5 blade against the DSC ultimate 6 blade. I lathered up and shaved with a brand new DSC blade and it was just a little painful. Brand new Gillette, smooth as silk. I decided that even though the Dollar Shave Club was cheaper (Gillette from a drugstore is 15-20 bucks for refills) I couldn't have a painful shave experience. I had my phone in hand to take pics of the experience and thought... hey, I'll just cancel right now. I could not find a way to do it. I could only pause my delivery. Finally a couple of months later, I hunted around their website a bit and didn't immediately see anything related to "quitting the club". After a google search, I finally met with success.... only to start receiving plenty of email from them. I unsubscribed and so far I haven't heard from them again. I doubt I'll try Gillette's shave club as I've heard they just send out their "not as sharp" razors for the reduced price. Maybe I should just grow a beard....
2
jewel 8 hours ago 3 replies      
I've never understood the appeal of the dollar shave club. Razors are very cheap when bought in bulk, and they take up very little space to store, even an entire year's supply. Wouldn't it make more sense and save money to ship a year's supply at a time?

For example, here are 64 razors for $16: http://amazon.com/dp/B00XKVH4O6. The comparable dollar shave two-blade option would cost you $48, since it's $1/mo for blades and $2/mo for shipping.

On the high end six-blade model, here's 24-cartridges for $30.49: http://www.dorcousa.com/pace-6-cartridges-6-pack-sxa1040/ The same thing from dollar shave would cost $54.

3
paulpauper 11 minutes ago 0 replies      
Everyone says it's a bubble and yet seven years later, since 2008, I can only think of three major VC implosions off the top of my head: fab.com, zynga, and groupon. Fab was a train-wreck that anyone with a pulse could have seen coming. Zynga... way too dependent on Facebook and a fad. Groupon is the better of the three, only seeing its valuation fall from $30 billion to just $4 billion - but still well over $1 billion. These valuations, as lofty as they seem, ain't goin' lower.
4
DiabloD3 9 hours ago 2 replies      
Honestly, I like their branding.

It is unique enough to stand apart, but not so far out there that it's unapproachable or too hipster, and manly enough without scaring off the metrosexual or gay crowd (and please, don't take this as offense, it's just that they gravitate to certain brands that are more "gentlemanly" than flat-out "manly").

Their website portrays their brand extremely well, and their famous video (https://www.youtube.com/watch?v=ZUG9qYTJMsI) is absolutely amazing for how low budget it is, and, well, how fail it is and how they just keep rolling with it. It's genius.

I just wish they'd veer into the double edged safety razor market, which in my opinion, needs a bit of disruption too.

5
WizzleKake 9 hours ago 7 replies      
I remember when Dollar Shave Club popped up on my radar. I thought to myself, "You know what? They are right. It's ridiculous that I spend so much money on Gillette razor blades. I only get 2-3 good shaves out of them."

I ended up buying an $80 Philips electric razor and haven't looked back. Takes way less time, I don't have to use shaving cream, and I don't have to be careful either.

6
toxican 38 minutes ago 0 replies      
I don't have to shave very often (once or twice a week), so the price and convenience of not having to remember to buy something I already don't think about very much is great. They deliver the 6 pack of blades once every other month or so and I'll often pause it if I get a backlog of blades.

I can't really speak to the quality of said blades or the shave, since I'm pretty sure 14 year old boys shave more than I do. But I've never experienced a painful shave as others have.

7
totalrobe 8 minutes ago 0 replies      
If for some reason you love DSC's razors, you can buy direct from the manufacturer, Dorco USA, for even cheaper.

dorcousa.com

8
belugashave 13 minutes ago 0 replies      
At Beluga Shave Co. we make single edge shaving like a professional barber easy for the first time with our Beluga Razor, and it looks like for many of you we have the perfect solution. Why pay a few bucks when you can pay a few cents, right? Happy to answer any questions regarding shaving, especially for those who might be having some issues.

Quick highlights on our Beluga Razor:

1. We don't even sell blades, but they cost as little as 10 cents each and are available in 75+ different varieties.
2. A single edge cuts easier and closer than multiple blades, while eliminating irritation, because a single edge generates far less friction than multiple blades.
3. No clogging issues, because our Beluga Razor is designed to eliminate them.

9
apapli 2 hours ago 0 replies      
Neat concept, but they didn't get my money. I assumed they would be cheap (per their name), but quickly realised that my local Aldi is cheaper. Furthermore, I shave daily and use disposable razors but simply don't need them at the frequency DSC ships them to you. So DSC's ad got me thinking, and all I ended up doing was swapping from Gillette to Aldi.

Furthermore, it isn't more convenient for me - I still need to go and buy other things from the store, so as long as I remember once every 2-3 months to pick up blades DSC certainly isn't making my life simpler or easier.

This is just another pets.com to me, neat idea but really just not that compelling.

10
stevenmays 9 hours ago 1 reply      
Private equity bubbles occur when there's not much money to be made in publicly traded companies. This is because there's a capital surplus because of quantitative easing, and a lower cost to access capital. Everything in economics is cause and effect.
11
saboot 8 hours ago 2 replies      
$20 Merkur Double Edge Safety Razor
$20 large sample pack of razors from Amazon
$10 bristle brush and lather bar soap

That was three years ago; I haven't bought anything since. I'm confused why, if people want an affordable solution, this route isn't more popular. Is it simply because most stores have stopped selling plain razors in favor of much more expensive handled disposables?

12
stevehawk 43 minutes ago 0 replies      
What amazes me is that their business strategy is so easy to replicate:

1 - buy another manufacturer's blades in bulk
2 - sell

yet they pull in that kind of valuation.

13
siscia 4 hours ago 0 replies      
I've been growing a beard for about a year now, but back when I shaved, I used a straight razor.

The kind from the old far-west movies, but more safe and secure.

It is a nice white piece of plastic (2), and I bought a little packet of 10 blades (1.2); if you shave every morning I guess a blade will last at least a week.

But you need to learn how to shave with a straight razor; the first time it's just little pieces of your face floating in blood, but the more you practise the less you cut yourself, and eventually I stopped getting the little scratches on my face... It is definitely a different and more enjoyable experience...

14
cft 9 hours ago 3 replies      
We are in what will be called "the private equity bubble".
15
TheSpiceIsLife 2 hours ago 0 replies      
I just want to be that person who brings up the oppression inherent in the clean-shaven-male culture.

The skin on my face hates shaving. I disagree with the resources spent on shaving, and the pollution. I've shaved my face fully once in the past 10 years, but I do shave around the cheeks and neck, so I have to deal with the shaving thing a bit anyway.

Also, I'm from Adelaide, home of 'The Beards', here they are with their track 'If Your Dad Doesn't Have a Beard, You've Got Two Mums' https://www.youtube.com/watch?v=RmFnarFSj_U

16
douche 1 hour ago 0 replies      
I found a Schick injector razor probably five years ago, and bought a supply of blades for it off of Amazon for about $15. I really only use it for cleaning up my neckbeard, but I'm not even halfway through the blades I bought.

It does take a little care until you know what you are doing, since you can really carve yourself up with the single blade if you go too hard. But it's certainly sharper and does a better job than any of the multi-blade disposable razors I used to use.

17
rayiner 8 hours ago 1 reply      
I don't see DSC as overvalued at all. They're selling real products that everyone needs (not wants), in an arrangement that makes total sense (razor blades are the sort of consumable that needs restocking at predictable intervals). They face high competition in the sense that I wouldn't bet on any retailer that isn't Amazon, but I'm sure that's priced into the valuation.
18
wyc 8 hours ago 2 replies      
Their premium model has six blades[0]. Made me chuckle remembering this article[1].

[0] https://www.dollarshaveclub.com/blades

[1] http://www.theonion.com/blogpost/fuck-everything-were-doing-...

19
bkeroack 8 hours ago 1 reply      
We're hiring software engineers at DSC: SDEI (software development engineer in infrastructure), SDET, iOS/Android engineers and platform engineers. Email engineering-careers@dollarshaveclub.com
20
ild 5 hours ago 2 replies      
I have a nice 1/2 inch beard, and my razor is scissors. $8/10-15 years.
21
adrr 8 hours ago 0 replies      
Flattered to be featured on Hacker News.

We're hiring Devops, Platform Engineers, Mobile developers and QA automation engineers.

For more info email: engineering-careers@dollarshaveclub.com

22
thucydides 9 hours ago 2 replies      
"Dollar Shave Club is burning through 'low single digit millions' of dollars each month, according to a person familiar with the figures."
23
pkaye 8 hours ago 0 replies      
I think they get their blades from Dorco. I can buy that much cheaper in bulk on Amazon.
24
kayge 8 hours ago 0 replies      
It will be interesting to see if Harry's will "catch up" with a similar valuation in the near future. Estimates at the end of 2014 put them in the $200-300 million range, although they seemed to be neck-and-neck otherwise.
25
intrasight 7 hours ago 0 replies      
If you clean and dry your razors - whatever kind - you'll get many more good shaves out of them. I clean mine and then blast with a can of pressurized air.
26
JanSolo 9 hours ago 3 replies      
The valuation seems high to me. I've been told that valuations should be around 3.5x revenue; at $615m, that should mean that DSC has sold $175m of razors last year. I would be astounded if that were the case.
27
zorbadgreek 7 hours ago 0 replies      
If you really like their blades, you can buy them direct or on Amazon for less. http://zorbadgreek.blogspot.com/2012/09/how-dollar-shave-clu...
28
baby 9 hours ago 1 reply      
I really like what they're doing, I keep seeing their ads in facebook and I wonder if that's because I look at razor stuff on google, reddit or amazon... But yeah they're marketing well, they're doing something nice!

Having said that, I wouldn't buy any, and I'd advise the clever people here not to either. Safety razors are super cheap and are amazing.

29
theklub 3 hours ago 0 replies      
One of the few subscription services that made it big, congrats to them.
30
avodonosov 3 hours ago 0 replies      
Even if they were selling perfect blades, is this company really worth $615M?
31
mrisse 8 hours ago 0 replies      
32
seekup 9 hours ago 0 replies      
Bring on the armchair private market economists.
33
nsxwolf 8 hours ago 1 reply      
I shave with a razor about once a week and I think I've been using the same blade for 2 or 3 years now.
34
yarrel 7 hours ago 0 replies      
Subscribed for a while.

There's not a single word of the name that's accurate.

Are You Concerned Your 4 Year Old Isnt in a Cool Band? priceonomics.com
28 points by ryan_j_naughton  9 hours ago   31 comments top 7
1
zyxley 4 hours ago 0 replies      
The most important part here is probably that they give access to a room full of instruments, then will use anything that can contribute to a song, even if it's just two notes alternating on a beat... and get the kids into that before they know anything at all about playing music "properly".

This seems like a great way to cultivate a life-long creative understanding of music - "if you can make interesting noise to a beat, that's music, and the rest is just details and practice".

2
xefer 27 minutes ago 2 replies      
My wife and I brought my son to a rock band camp in the Boston area. My son loved it, and it maybe helped encourage him to still be an amateur musician all these years later.

At the same time it felt very weird to be seeing rock-n'-roll packaged, codified and commodified the way it was. At pick-up time you see a bunch of youngsters mirroring the clothes and attitude of rock culture - while being loaded into their parents' minivans. I couldn't help feeling a bit uneasy about the whole thing to be honest.

Back In My Day rock was a much more organically rebellious activity.

3
onion2k 55 minutes ago 0 replies      
You can't just say no to an idea: it's your responsibility to adapt it.

There's a similar idea at Pixar - it's called 'plussing'. You accept someone else's point and use it as a starting point to build something amazing. There's a really worthwhile Randy Nelson talk about it on Youtube: https://www.youtube.com/watch?v=QhXJe8ANws8

4
golemotron 48 minutes ago 0 replies      
I read this and thought: there should be some forum like reddit's NotTheOnion called NotPortlandia for this sort of thing. I went to reddit and discovered that there was. Shock.
5
mootothemax 2 hours ago 1 reply      
"They put the most distortion on the pink guitars, to subtly encourage the younger boys to break gender norms."

I love it!

Seems like a really well thought-out program, albeit - in my eyes - let down somewhat by an advert that implies it'll have much less depth than it actually has.

6
facepalm 4 hours ago 0 replies      
Sounds great, actually, if I was living in SF I would sign up my kids immediately.
7
gbog 2 hours ago 4 replies      
Quite the opposite: I would be concerned if my 5 year old kid was pushed by someone to become a rock musician.

On Earth we only need so many rock stars, and there is already intense competition over the few places available. I know many musicians that didn't get enough success and have a hard time earning food money.

I suspect that a kid whose fate is to become a musician will find a way to become one. This does not need to be encouraged even a little. I find it more responsible to give kids the most "normal" (classic) education, while keeping doors open for later. It is easier to pivot from postman to writer (Bukowski) than from pretend rock star to bank teller.

Implementing a Virtual Machine in C felixangell.com
38 points by fapjacks  12 hours ago   8 comments top 5
1
r4pha 19 minutes ago 0 replies      
There's something really nice about these minimalistic C projects. You can quickly take a look at the source and immediately grasp what's happening. A simple and didactic introduction. Super cool.
2
sklogic 11 hours ago 1 reply      
And if you want better performance, you can replace this switch with computed gotos (supported at least by gcc and clang).
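For readers who haven't seen the technique, here is a minimal sketch of a computed-goto dispatch loop (a toy stack machine; it relies on the GCC/Clang labels-as-values extension, and the opcodes are made up for illustration):

  #include <stdio.h>

  enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

  int main(void)
  {
      /* program: push 2, push 3, add, print, halt */
      int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
      int stack[16], sp = -1, ip = 0;

      /* table of label addresses indexed by opcode (GCC/Clang extension) */
      static void *dispatch[] = { &&op_push, &&op_add, &&op_print, &&op_halt };

      /* every handler ends by jumping straight to the next opcode's label */
      #define NEXT goto *dispatch[code[ip++]]

      NEXT;
  op_push:  stack[++sp] = code[ip++]; NEXT;
  op_add:   sp--; stack[sp] += stack[sp + 1]; NEXT;
  op_print: printf("%d\n", stack[sp--]); NEXT;
  op_halt:  return 0;
  }

The appeal over a single switch is that each handler gets its own indirect jump, which tends to give the branch predictor more useful history than one central dispatch point.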
4
jheriko 2 hours ago 2 replies      
why not use an array of structs as the program, with a function pointer and the parameters stored in the struct? the instruction pointer is then an index into that array... the functions implement the opcodes, and the architecture becomes more instructive, cleaner, faster, smaller and easier to read imo. executing the program can become a loop that just calls whatever function pointer is in the struct at the instruction pointer.
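Presumably something like this, a minimal sketch of the struct-plus-function-pointer layout being described (all names here are invented for illustration):

  #include <stdio.h>

  typedef struct VM VM;
  typedef void (*OpFn)(VM *, int);

  struct VM {
      int stack[16];
      int sp, ip, running;
  };

  /* one struct per instruction: the handler plus its parameter */
  typedef struct {
      OpFn fn;
      int  arg;
  } Instr;

  static void op_push(VM *vm, int arg)  { vm->stack[++vm->sp] = arg; }
  static void op_add(VM *vm, int arg)   { (void)arg; vm->sp--; vm->stack[vm->sp] += vm->stack[vm->sp + 1]; }
  static void op_print(VM *vm, int arg) { (void)arg; printf("%d\n", vm->stack[vm->sp--]); }
  static void op_halt(VM *vm, int arg)  { (void)arg; vm->running = 0; }

  int main(void)
  {
      Instr program[] = {
          { op_push, 2 }, { op_push, 3 }, { op_add, 0 },
          { op_print, 0 }, { op_halt, 0 },
      };
      VM vm = { .sp = -1, .ip = 0, .running = 1 };

      /* the whole interpreter loop: call whatever handler sits at ip */
      while (vm.running) {
          Instr *i = &program[vm.ip++];
          i->fn(&vm, i->arg);
      }
      return 0;
  }

Whether this beats a switch depends on the compiler and CPU (indirect calls aren't free), but it does read cleanly.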
5
joe_the_user 10 hours ago 0 replies      
A nice simple demonstration.

Other things that can be added easily are labels, Loop0 (loop while a variable is greater than zero), and subX (pop stack and instruction pointer and go to location X).

Fast Reed-Solomon Coding in Go klauspost.com
49 points by pandemicsyn  11 hours ago   16 comments top 4
1
Smerity 6 hours ago 3 replies      
Whilst I quite enjoy Go, one annoyance I have is that it seems there is no middle ground between alright performance using pure Go and great performance using Go with assembler. Many other languages allow you, if you contort it, to get good performance without leaving the language. In this code, it was at least well sourced (and hopefully battle tested) SSE3 assembler for the Galois field arithmetic.

I hit this repeatedly when writing a pet project govarint[1] which performs integer encoding and decoding. I don't expect blazingly fast performance if I don't use assembler, but bitwise operations are painfully slow compared to the equivalent in C. C would also win handsomely even if the compilers were equivalent due to no boxing. It's also additional overhead that you now have two divergent code paths - the Go code (fallback in case you don't have assembler for the architecture) and the assembler code - that would both need to be kept up to date and checked for bugs / errors / inconsistencies.

Am I missing something? I only ever see assembler checked in to repositories - is there a typical sane build process I've just not seen? Is it just a matter of waiting for the Go compiler to "level up" optimizations to match GCC/Clang? Or should I just learn to live with the fact that to code Go I'll also need to reacquaint myself in assembler again?

[1]: https://github.com/Smerity/govarint

2
zaroth 6 hours ago 0 replies      
Very nice work, and thanks for releasing under MIT.

Interesting that Backblaze decided to go with a 17+3 configuration. As I read more on the topic, I realized I haven't quite wrapped my head around how data and parity are allocated across symbols, stripes, and disks, and how all that impacts degraded read performance, but at first glance I would assume 17+3 would have pretty terrible degraded read performance.

Another great read if you're interested in home brew RAID or using parity is Plank's paper "Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-like Systems". [1]

I found that gem reading Adam's blog on implementing triple parity RAID in ZFS. [2]

[1] - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.103...

[2] - https://blogs.oracle.com/ahl/entry/triple_parity_raid_z

3
signa11 6 hours ago 1 reply      
the article contains an excellent introduction to "Galois Field Arithmetic" available via the following: http://www.snia.org/sites/default/files2/SDC2013/presentatio....

there is also a linux-kernel article by hpa: https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
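For a taste of what those introductions cover, the core primitive is multiplication in GF(2^8) by shift-and-xor. A hedged sketch (real implementations, including the article's, replace this loop with log/exp tables or SSE shuffles; 0x11d is a polynomial commonly used for Reed-Solomon, though a given library's field may differ):

  #include <stdint.h>
  #include <stdio.h>

  /* Carry-less "Russian peasant" multiply in GF(2^8), reducing by
     x^8 + x^4 + x^3 + x^2 + 1 (0x11d). */
  static uint8_t gf_mul(uint8_t a, uint8_t b)
  {
      uint8_t p = 0;
      while (b) {
          if (b & 1)
              p ^= a;        /* "addition" in GF(2^8) is xor */
          b >>= 1;
          uint8_t carry = a & 0x80;
          a <<= 1;
          if (carry)
              a ^= 0x1d;     /* reduce: low byte of 0x11d */
      }
      return p;
  }

  int main(void)
  {
      printf("0x%02x\n", gf_mul(0x57, 0x83));
      return 0;
  }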

4
creshal 5 hours ago 3 replies      
I was under the assumption that R-S has been replaced with turbocodes in most use cases. Why not use those? Patent issues?
C Puzzles gowrikumar.com
77 points by brown-dragon  10 hours ago   17 comments top 5
1
FreeFull 41 minutes ago 1 reply      

  #include <stdio.h>

  int main()
  {
      float a = 12.5;
      printf("%d\n", a);
      printf("%d\n", *(int *)&a);
      return 0;
  }
This program invokes undefined behaviour when it casts float* to int*, so technically it could print anything, or do something else. I'm not sure how using memcpy instead would impact the puzzle's point though.
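For what it's worth, the memcpy version would look something like this (a sketch; it still assumes int and float have the same size on the target):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      float a = 12.5;
      int bits;
      /* well-defined way to reinterpret the bytes, no aliasing violation */
      memcpy(&bits, &a, sizeof bits);
      printf("%d\n", bits);
      return 0;
  }

It prints the same bit pattern the cast was after, so the puzzle's point survives; only the first printf (passing a float where %d expects an int) would remain undefined.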

2
JoshTriplett 8 hours ago 1 reply      
> Write a C program which prints Hello World! without using a semicolon

Here's one way, cheating a bit by abusing the definition of "a C program which prints":

  /tmp$ gcc hw.c
  hw.c:1:2: warning: #warning Hello world! [-Wcpp]
   #warning Hello world!
A more serious solution, though:

  /tmp$ cat hw.c
  #include <stdio.h>
  void main(void) { if (puts("Hello world!")) {} }
  /tmp$ gcc hw.c
  /tmp$ ./a.out
  Hello world!

3
nathell 9 hours ago 1 reply      
One of these reminded me of my woeful attempts at IOCCC-like code, a Hello World implementation:

http://students.mimuw.edu.pl/~dj189395/nhp/czynic/programy/t...

4
kazinator 8 hours ago 2 replies      
Anyone else spot the uninitialized variable in the banner program?

Also:

> What's the output of the following program. (No, it's not 10!!!)

What's the point of including a program that calls malloc without including a prototype for it, after earlier asking the reader to answer why such a program segfaults on IA-64 but not IA-32?

(The answer is: a constraint is violated, requiring a diagnostic, because an int * object pointer is assigned the return value of malloc whose implicit declaration marks it as returning int. The comment issue is a red herring.)
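To make the failure mode concrete, the pattern in question is roughly this (a sketch; exact diagnostics vary by compiler):

  /* note: no #include <stdlib.h>, so no prototype for malloc */
  #include <stdio.h>

  int main(void)
  {
      /* Under C89 rules, malloc is implicitly declared as returning int.
         On IA-32 that happens to "work", since int and pointers are both
         32 bits wide. On a 64-bit ABI the returned pointer is effectively
         truncated to 32 bits, and dereferencing it can segfault. Either
         way, assigning that int to an int * violates a constraint, so a
         diagnostic is required. */
      int *p = malloc(sizeof(int) * 10);
      p[0] = 10;
      printf("%d\n", p[0]);
      return 0;
  }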

5
rooneyyyy 3 hours ago 0 replies      
I couldn't find the book at the hint provided... Could someone help me get that book? Thanks in advance.
The American civil war then and now interactive theguardian.com
16 points by yitchelle  6 hours ago   4 comments top 3
1
kissickas 2 hours ago 0 replies      
Great content, but the auto-change on scroll was annoying. They should have at least implemented a small delay.

Anyone know why the trees at Antietam Dunker's Church appear to have shrunk? Or is that just a trick of the lens? I find the trees to be the most interesting part of these comparisons. The Brompton oak is beautiful.

2
rcarrigan87 1 hour ago 1 reply      
This is really well done. I only wish they had a little more historical explanation to go with each photo.

I'm not sure if it was the Guardian, but there was a WWII article where they did a similar photo overlay effect. I can't find it. Anyone remember this?

3
Lorento 1 hour ago 0 replies      
History really distorts things. Do we have to wait 200 years to see fights with ISIS in a similar "noble tragedy" kind of light, where there are no clear baddies and nobody's assigned blame for the killings?

Maybe some of those dead people were "baddies" fighting for something we don't believe in anymore? In that case, surely we should be celebrating the fact that they died. If not, then maybe we should be complaining about the violent people who killed them? It's not a natural disaster where nobody wants it to happen. Soldiers actively fought for some purpose or other that we presumably either accept or not today.

Supreme Court Strikes Down Warrantless Searches of Hotel Records eff.org
32 points by DiabloD3  9 hours ago   3 comments top 2
1
coldcode 31 minutes ago 0 replies      
Yet the FBI can issue an NSL and get the same information without the opportunity for judicial (from a real court, not the monkey one) review. We have a strange two faced system in the US.
2
lsiebert 5 hours ago 1 reply      
This headline isn't true. They still allow warrantless searches, and there will still be such searches.

This information can be obtained by an administrative subpoena (See an example of one at http://www.docstoc.com/docs/79005165/Administrative-Subpoena, notice they aren't required to list reasons such a subpoena would be invalid) or a police request, it's just that the hotel can choose to have a judge review the decision, and attempt to quash the subpoena.

What the supreme court said is that you can't arrest the hotel employee/owner for not handing over the information immediately, which was part of the LA law.

Good coverage here: http://www.scotusblog.com/2015/06/opinion-analysis-an-opport...

Geomagnetic Storm Hits Earth weather.com
20 points by Brendinooo  8 hours ago   4 comments top 3
1
joshuahedlund 27 minutes ago 0 replies      
If you're interested in keeping up with this kind of stuff on a regular basis (and without the in-your-face advertising of weather.com), check out http://spaceweather.com/
2
sevensor 42 minutes ago 0 replies      
This could have interesting implications for Field Day (http://www.arrl.org/field-day), pushing people into the higher frequency bands if the event isn't over by then.
3
andor 4 hours ago 1 reply      
How will space weather affect the new satellite internet networks?
X86 Machine Code Statistics strchr.com
53 points by adamnemecek  12 hours ago   35 comments top 11
1
dspillett 1 hour ago 1 reply      
Certainly interesting, but not really useful IMO.

Instructions that are present very rarely might be executed far more frequently if they are within a loop, particularly a tight number-crunching loop.

I suspect that MOV instructions would still dominate, but that you would see the "minorities" within the instruction set getting more representation in the stats.

2
IvyMike 10 hours ago 2 replies      
A) I have no idea what to do with the information, but it's fascinating nonetheless.

B) Obviously the instruction mix depends on the programs analyzed. The ones he analyzed are actually all heavily tuned but smaller programs. I wonder if that changes in other domains. If he ever re-does his analysis, it would be interesting to see something gigantic like OpenOffice, which I'm guessing might even be more MOV based. And maybe something like Octave, which might make use of some more esoteric math functions.

C) When he says "three popular open-source applications were disassembled and analysed with a Basic script", I thought he might mean "simple", but downloading the source, he does indeed mean BASIC the programming language. :)

3
nathell 10 hours ago 0 replies      
Funny thing, I had exactly the same idea the other day. Here is a breakdown of opcodes in a random binary of BusyBox 1.22.1 I had lying around (as disassembled by ndisasm):

https://scontent-fra3-1.xx.fbcdn.net/hphotos-xpt1/v/t1.0-9/1...

4
jbandela1 10 hours ago 3 replies      
The only issue I have with this is that you are using a compiler from 1998 (Visual C++ 6); at that time I think the Pentium III was just starting to get popular. There are college students now who were toddlers when this compiler came out. I would love to see a treatment of a more recent compiler generating instructions for more recent x86 CPUs.
5
userbinator 7 hours ago 1 reply      
It would be interesting to compare the statistics of compiler-generated code with that of human-written Asm, since I can easily tell the two apart; the latter tends to have a denser "texture" and less superfluous data movement.

Ideally, register-register MOV instructions on a 2-address architecture would be necessary only in the case of requiring two copies of some data, since the register/memory/immediate operands of other instructions can already be used to choose which registers should hold what data.

The fact that such a huge percentage of the instructions are simply moving data around suggests two possible explanations: these could be obligatory MOVs for making copies of data, meaning that a 2-address architecture is too limiting and a 3-address one would be more ideal, or they could be superfluous, meaning that 2-address is a sweet spot and compilers are generating lots more explicit data movement than they should. From my experience writing Asm and looking at typical data flow in applications, I'm tempted to lean toward the latter...
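To make the 2- vs 3-address distinction concrete, consider how a trivial function might come out (the assembly in the comment is plausible gcc -O2 output for the SysV AMD64 ABI, an assumption rather than something taken from the article):

  /* d = a + b is one instruction on a 3-address machine, but on a
     2-address machine "add" overwrites one of its inputs, so keeping
     both inputs alive forces an extra register-register mov. */
  int add(int a, int b)
  {
      return a + b;
  }

  /* 3-address flavor, via lea:   leal (%rdi,%rsi), %eax
                                  ret
     2-address flavor, via add:   movl %edi, %eax
                                  addl %esi, %eax
                                  ret                       */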

6
rasz_pl 3 hours ago 1 reply      
>applications were disassembled and analysed with a Basic script:

argh noooo. The whole analysis is flawed :( It doesn't matter that 50% of a binary is movs if said binary executes them only once, or not at all (some obscure function). It would be more valuable to test those, or any other program, in a modified QEMU TCG or MESS/PCem.

7
kenferry 10 hours ago 1 reply      
You're saying an archiver, an encoder, and an installer don't do a lot of floating point math? I'm shocked, I tell you, shocked. :-)

More to the point, it's unclear if this is a static analysis of the binary, or a runtime analysis of retired instructions.

8
jcranmer 7 hours ago 4 replies      
Well, I decided to do a very quick and dirty grepping of statistics with the following hideous Bash two-liner:

  /usr/bin $ for f in *; do objdump -d $f | sed -e 's/^ *[0-9a-f]*:[\t 0-9a-f]*[ \t]\([a-z][0-9a-z][0-9a-z][0-9a-z]*\)[ \t]\(.*\)$/\1/g' | grep '^[a-z0-9]*$' >> /tmp/instrs.txt; done
  /usr/bin $ cat /tmp/instrs.txt | awk '/./ { arrs[$1] += 1 } END { for (val in arrs) { print arrs[val], val; sum += arrs[val] } print sum, "Total" }' | sort -n -r | head -n 50
For those of you who can't read bash, this basically amounts to:

1. Find every ELF binary in /usr/bin

2. Use objdump to disassemble the text sections

3. Grep for what the commands probably are (not 100% accurate, since some constant data (e.g., switch tables) gets inlined into the text section, and some nop sequences get mangled)

4. Build a histogram of the instructions, and then print the results

I don't bother to try to handle the different sizes of types (particularly for nop, which can be written in several different ways to align the next instruction on the right boundary). The result is this list:

  117336056 Total
   43795946 mov
    9899354 callq
    7128602 lea
    5258131 test
    4928828 cmp
    4463038 jmpq
    3879175 pop
    3628965 jne
    3519259 add
    3227188 xor
    2824699 push
    2361085 nopl
    2016346 sub
    1678131 and
    1551014 movq
    1311683 jmp
    1311476 retq
    1255216 nopw
    1239479 movzbl
    1234815 movl
     946434 movb
     663852 shr
     614367 shl
     581608 cmpb
     523398 movslq
     427116 pushq
     384794 cmpq
     376695 jbe
     348158 movsd
     341248 testb
     340718 sar
     338542 xchg
     311171 data16
     302296 jle
     266539 movzwl
     252872 cmpl
     210762 jae
     169823 lock
     152724 addq
     151186 sete
     147340 cmove
     146501 imul
     146386 setne
     145233 movabs
     142801 repz
     123489 cmovne
     123439 addl
     105156 pxor
      81745 cmpw
Since this is Debian Linux, nearly every binary is compiled with gcc. The push and pop instructions are therefore relatively rare (since they tend not to be used to set up call instructions, just mov's to the right position on rbp/rsp). jmpq and pushq are way overrepresented thanks to the PLT relocations (2 jmpq, 1 pushq per PLT entry).

mov's are very common because, well, they mean several different things in x86-64: set a register to a given constant value, copy from one register to another, load from memory into a register, store from a register into memory, store a constant into memory... Note, too, that several x86 instructions require particular operands to be in particular registers (e.g., rdx/rax for div, not to mention function calling conventions).

If you're curious, the rarest instructions I found were the AES and CRC32 instructions.

9
gsg 8 hours ago 1 reply      
I wonder what percentage of those xors are for generating zero? I would guess a large majority of them.
10
herf 9 hours ago 0 replies      
Sampling profiler would be an improvement. You want to know what CPUs spend time doing, not what is in unused code.
11
DiabloD3 9 hours ago 1 reply      
I wonder what this looks like on modern compilers like today's MSVC, gcc, or clang.
SSD Prices in a Free Fall networkcomputing.com
534 points by nkurz  20 hours ago   286 comments top 26
1
discardorama 19 hours ago 18 replies      
Cheapest 1 TB SSD on Amazon: $350

Cheapest 1 TB HDD on Amazon: $40

Expecting a 10x drop in prices in 1 year is ludicrous. Even if the prices follow a pseudo Moore's law and fall by half every 18 months, you're looking at at least 5 years before they reach parity. And in the meantime HDDs would have gotten cheaper, so expect even more time for parity.

In other words: ain't happenin' in 2016.

2
awwducks 19 hours ago 1 reply      
Seems like a prime case for using a (bar/line/etc) graph to show the trend. I'm strangely disappointed in not seeing one.
3
fencepost 16 hours ago 1 reply      
I can't comprehend trusting data to a (single) 30TB SSD or even 10TB SSDs given the regularly reported failure modes of "It's not a drive anymore."

Perhaps it's because I'm not doing anything where massive storage is a requirement, but having data striped in RAID6 makes me happy. I'd be happy to use SSD for the underlying medium, but I want something that's less prone to single points of failure. Backups are all well and good, but what's the interface speed of a drive like that? How long to restore a 20+TB backup?

4
hbbio 19 hours ago 8 replies      
In a side note in the article, the author mentions that Amazon Glacier runs on tape. Although I'm not sure Amazon officially explained the technology, I heard (and read on Wikipedia) that they store data on conventional HDDs that are, however, kept "offline".

Has anyone here some more knowledge about that?

5
ksec 9 hours ago 0 replies      
They are talking as if 3D NAND is free. The maximum of 64 layers is not yet production ready AFAIK. Even 32 is too expensive; we are talking about 8 or 16 layers coming soon, which, when you include a two-node step back, means a 2x to 4x improvement. 3D NAND isn't free to manufacture either. When you add up the cost, 3D NAND will be no more than a continuation of the current trend of falling SSD prices in the coming years.

P.S - It just means it will prolong the life of NAND, and SSDs will continue to get faster, higher capacity and cheaper before hitting their limits.

6
peter303 18 hours ago 3 replies      
HP is working on a memristor-based SSD with the density of flash and the speed of transistors http://www.technologyreview.com/featuredstory/536786/machine....
7
wmf 19 hours ago 1 reply      
The article predicts that SSD prices will fall from 30c/GB to under 5c/GB in 18 months. This seems like a case of citation needed.
8
monksy 18 hours ago 3 replies      
Could someone explain the 3d storage innovation to me?

Wouldn't stacking flash cells like that be susceptible to interference? (Heat/magnetic or electrical noise?)

9
pacquiao882 5 hours ago 0 replies      
They are just getting rid of stock before NVMe M.2 SSD's go mainstream with 10x performance and lower production costs compared to AHCI SATA3 SSD's.

http://www.hardocp.com/article/2015/03/24/where_ssd_market_h...

10
ryanmarsh 18 hours ago 6 replies      
So does this mean Apple will finally stop shipping a 16GB iPhone as entry level?
11
bwy 18 hours ago 1 reply      
I find great irony in statements like these when the motivation of every party is considered. The idea is that prices are "in a free fall," i.e., they haven't fallen completely yet. But this will encourage people to buy SSDs and drive the very free fall of which the article speaks!

Experienced the same principle in my life a few weeks ago - I moved to a place described as "gentrifying." Realized a week into staying there that I was one of those people who was actually contributing to its gentrification! By no means was it gentrified, though.

12
sneak 13 hours ago 1 reply      
This article makes a previously unasserted claim that Glacier uses tape. Is this accurate? What is the source?
13
ghshephard 11 hours ago 0 replies      
I find it surprising that most people aren't recognizing that SSDs don't even need to come close to price parity to completely replace hard drives. Random access alone makes them so much more valuable, and a good SSD is now worth more than extra memory in terms of increasing the performance of a computer system.
14
markhahn 8 hours ago 0 replies      
Flash has had a drop in price because it gained mainstream utility, and therefore serious volume production. It's entirely unclear whether this will continue.

The physics of flash cells is not all that promising: there's certainly no Moore's gravy-train. 3D is a decent tweak, but it's not like 4D is coming next. As flash cells shrink, they become flakier (which might not hurt drive-writes-per-day, but is that the right metric?)

15
yuhong 18 hours ago 0 replies      
What I am really interested is when SSDs will be cheap enough to be in common OEM systems.
16
Turbo_hedgehog 19 hours ago 3 replies      
Is keeping a SSD powered on enough to prevent bit rot?
17
mirimir 8 hours ago 0 replies      
Total cost of ownership is another issue:

> Also, with much lower power use, there is a TCO saving to be added to the equation. Power savings work out at around $8/drive-year, so add another $40 to the 5-year TCO balance and the hard-drive doesn't look so good.

And I wonder if Mr. O'Reilly has included the cost of power for cooling.

18
rorykoehler 17 hours ago 2 replies      
Having had a failure rate above what I was used to with HDDs, I'd prefer to see an increase in quality rather than a fall in price. It gets annoying having to replace my OS SSD every 18 months or so. My most recent phone's memory also died after 30 months.
19
abandonliberty 17 hours ago 1 reply      
I wonder how reliability is doing; unlike HDDs, SSDs have a very unpredictable failure pattern, including sudden death rather than slow degradation.

If you want to ensure data is safe you need to run RAID 1 or an online backup service.

20
mirimir 12 hours ago 0 replies      
Consumer SSDs are now inexpensive enough for desktop RAID. On Amazon, decent consumer 240GB-1TB SSDs are available at $0.32-$0.40 per GB. And in my experience, SSDs are so fast that even RAID6 arrays rebuild very quickly after replacing a member. I use Linux software RAID.
21
bifrost 19 hours ago 5 replies      
SSD technology still has a reliability hurdle: you can nuke one in a month if you write to it constantly. Spinning media has much longer read/write durability. Until there is parity, it's still a bit of an arms race for capacity.

Hybrid drives are pretty awful so I don't think they'll stick around. I could see Hierarchical Storage Management making a comeback and an entry into the consumer space, but that's been esoteric at best.

22
kylorhall 16 hours ago 1 reply      
I bought my 180GB SSDs for < 50 cents per GB like 3 years ago and it's barely better than that right now. Storage in general has more to do with availability and demand than technology, there's plenty of times when they've gone up in price.
23
robocat 15 hours ago 0 replies      
Fab manufacturing capacity doesn't magically increase because of 3D chips.
24
bcheung 15 hours ago 0 replies      
I'm surprised they didn't include any graphs. I'm very curious to see the trends visually.

Prices per GB are still very far off. It's hard to believe they will achieve parity that soon. Here's hoping though.

25
MCRed 19 hours ago 0 replies      
I can't wait. Once I can reasonably replace everything with SSDs I will.

And then I'll never buy spinning rust again.

I dreamed of this eventuality in the 1990s. I didn't know what technology would be used. I never expected EEPROMs would evolve in this direction (now marketed as "FLASH") as they were so fickle and unreliable in those days.

This will allow lots of interesting things.

Imagine a laptop that's got its main RAM backed on a dedicated bit of high speed flash... so it instantly powers off completely and instantly powers back on completely.

Soon, we'll have the massive increase in storage capacity much like we've had a massive increase in compute capacity to the point where you no longer really think about it-- all computers are "fast".

26
forscha 19 hours ago 4 replies      
Is the HDD's demise indeed expected so quickly?
Docker, CoreOS, Google, Microsoft, Amazon to Develop Common Container Standard techcrunch.com
533 points by yurisagalov  19 hours ago   98 comments top 15
1
joslin01 17 hours ago 7 replies      
I am a big believer in containerization technology from a practical standpoint. It has allowed me to create repositories that act as services. Database, search, API, admin, etc.: they are all their own service. I do not have to configure any servers this way; instead, I declare what the system ought to be and Docker makes it happen. I don't even have to configure init scripts, because a proper Dockerfile will contain a start mechanism, invoked like any other executable: `docker run your/api --port 80 --host host.company.com`.

The only thing that matters then between services is their bindings, which gives you the ability to use any programming language for any service. Deployment with ECS has been going well so far for me. My flow:

1.) Push code to GitHub

2.) GitHub tells Docker, which builds a private image

3.) Docker tells my build server

4.) Build server tells ECS to update the given service

5.) ECS pulls from DockerHub, stops service, starts service

The only thing missing is that DockerHub doesn't tell my build server what tag it just built! It builds tags like dev / staging for the given branches, but doesn't relay that info over its webhook. There's a ticket open about this already and I'm sure they'll get to it soon.

Nevertheless, I'm able to administer any system -- things like Elasticsearch, db, api -- from code on a branch. This is powerful to me because I have to administer environments for everything. Rather than do all this work with Puppet, Chef, or even Ansible, I can just declare what the systems ought to be and cluster them within code branches.

With ECS coming into picture, you're encouraged to forget you even have physical boxes in the first place. If you think of the power at your finger tips that results from this development workflow, I believe it's a no brainer for everyone to jump on board and get this as good as it can be. It's going to be a huge boon to the software community and enable more services sharing.

2
mpdehaan2 18 hours ago 5 replies      
Hmm, interesting.

I'm unclear what value this adds in the end.

Yes container images would become portable between systems, but if you hide the underlying system enough under abstraction layers what makes me choose between CoreOS or Docker or the future thing? What's the value difference?

Containers are useful if you have the build systems in source control, but if you don't, you don't know how to rebuild them or what is in them - they become dangerous in that case. They become scary "golden images".

Dockerfiles already made it very easy to regenerate things -- and I think, interface-wise, they are one of the more compelling wins. If there were other systems, it's still likely they would have different provisioners.

It seems the (excuse me for the buzzword) value add then quickly becomes the people providing management software for Docker, rather than Docker itself, and Docker becomes more or less a subcommittee of a standards body.

I'm sure that's NOT true, but it's confusing to me why they wouldn't want to seek differentiation, and what this means for valuation purposes.

3
deathhand 18 hours ago 3 replies      
This is great news and a shift in the way business is traditionally done. If containers were a thing 20 years ago, there would be fierce vendor lock-in and patents/lawsuits flying everywhere. People would choose which cloud platform to deploy on based upon which tools they prefer.

Docker has fundamentally changed the way it thinks about how it fits into the tech ecosystem. Instead of selling a set of containers that only work with their tools, they've opened up the platform, strengthening their position as the go-to solution for management. Prudent move on their part. It limits their potential market cap but solidifies them as an entrenched member for the foreseeable future.

4
rbanffy 25 minutes ago 0 replies      
Would anyone care to explain how Microsoft fits on this picture?
5
gtirloni 18 hours ago 2 replies      
6
vezzy-fnord 17 hours ago 1 reply      
I was going to ask why IBM weren't in, but read on to see that it's a general Linux Foundation collaboration, and so naturally they're part of it.

So I guess we're going to have libcontainer support for AIX Workload Partitions and OS/400 LPARs? It's gonna be interesting to see just how big the Docker libs become.

7
bobsky 17 hours ago 3 replies      
The proof is in the pudding. Overall this is very positive for the ecosystem as a whole, and I'm glad to see them all come together. But I thought a big selling point of a standard is that it's written down; currently the spec returns a 404 on GitHub [1], so there seem to be a lot of unknowns about what's actually being proposed.

It's confusing why the App Container (appc) spec, which is written down [2] and has maintainers from RedHat, Twitter, Google, Apcera, and CoreOS [3], is not being promoted - what's the new OCP standard offering that isn't in the appc spec?

[1] https://github.com/opencontainers/specs
[2] https://github.com/appc/spec
[3] http://www.infoq.com/news/2015/05/appc-spec-gains-support

8
jakejake 10 hours ago 3 replies      
Serious question. We have a master DB and a slave, two memcache servers and 3 webservers behind a load balancer. We're not a public-facing company and so have no reason to be building for "web scale" or whatever, we're well within capacity.

Deploying new code (happens weekly) is just as simple as clicking one deploy button in our version control system (which does a "git pull" on the web servers). DB changes (which are very rare, once or twice a year) we run manually. The cache servers never change. All of the servers run automated security updates on the OS. Otherwise we upgrade non-essential packages every few months.

Is there a way that using Docker could make things better for us? I'm feeling the "you should be using Docker" push coming at me from every angle. Our deployment is certainly not very sexy, but it is simple and doesn't take a major amount of effort. Is there a use case for a company like mine?

9
TheMagicHorsey 14 hours ago 2 replies      
What does this mean for vendors like VMWare that want VMs to be the unit of deployment that developers interface with?

Seems to me that VMWare's VM management technology is still needed, but the clock is now running on how long it will be before their part of the stack is irrelevant, as all the smarts move into the container-management layer.

10
ape4 18 hours ago 4 replies      
Lots of cooks. I hope it's not a huge mess.
11
castell 2 hours ago 0 replies      
What's next? Develop a common executable standard? Develop a common UI standard?
12
olalonde 15 hours ago 4 replies      
Weird to see Microsoft in that list. On a related note, will this new container standard support non-Linux kernels? Would be nice to be able to run containers directly on OS X without having to go through the boot2docker VM.
13
abritishguy 18 hours ago 0 replies      
I hope from this we get a really well engineered standard and not some silly mess.
14
vacri 15 hours ago 1 reply      
If Docker's heavily screwed-up tag system becomes locked into a standard, I may as well slit my wrists now.
15
justignore 18 hours ago 3 replies      
The 50th Anniversary of Noam Chomsky's Aspects of the Theory of Syntax chronicle.com
31 points by tintinnabula  12 hours ago   5 comments top 2
1
snitzr 6 hours ago 1 reply      
Recommended: Is The Man Who is Tall Happy? on Netflix. A nice introduction to Chomsky and his linguistic ideas.
2
formerlinguist 1 hour ago 2 replies      
This was actually not Chomsky's first book on this topic. That was Syntactic Structures (1957).
Ioke is a folding language ioke.org
29 points by brudgers  10 hours ago   8 comments top 5
1
brudgers 7 minutes ago 0 replies      
A podcast interview with Ola Bini discussing Ioke:

http://www.se-radio.net/2010/01/episode-154-ola-bini-on-ioke...

2
mattnewport 4 hours ago 2 replies      
I don't know what "folding" is supposed to mean in this context and it's not defined elsewhere on the page. Is there a common usage of the term when describing a programming language that I'm not familiar with? I first thought of a functional fold but that doesn't appear to be the intended meaning here.
4
fhdhcdhedh 7 hours ago 1 reply      
I don't get it. The page mentions what goals ioke is trying to achieve, but then offers no meaningful demonstration whatsoever of the language's compelling features. I'm left with the impression that "it's good because I say so."

I'm really intrigued, but I don't even know what I'm looking at. The language guide didn't help either.

5
agumonkey 1 hour ago 0 replies      
Oh, work by Ola Bini, I remember him for his contributions to JRuby. Interesting experiment.
Organizing complexity is the most important skill in software development johndcook.com
681 points by speg  1 day ago   271 comments top 62
1
msandford 23 hours ago 10 replies      
This is incredibly true. I once turned 60kLoC of classic ASP in VBScript into about 20kLoC of python/django including templates. And added a bunch of features that would have been impossible on the old code-base.

It turned the job from hellish (features were impossible to add) to very nearly boring (there wasn't much to do anymore). So with this newfound freedom I built some machines to automate the data entry and once that got rolling the job got even more boring. Because it was a small company with a very long learning curve the owner didn't let people go, but instead kept them on so that he didn't have to hire and train new people as growth accelerated.

But with all the automation some slack found its way into the system and problems that had normally required me to stop working on my normal job and help put out fires now got handled by people who weren't stretched a little too thin.

Sadly there's (seemingly) no way to interview people for this ability so we're stuck with the standard "write some algorithm on a whiteboard" type problems that are in no way indicative of real world capabilities.

2
knodi123 21 hours ago 1 reply      
Mirror since the site is currently down:

The most important skill in software development

Posted on 18 June 2015 by John

Here's an insightful paragraph from James Hague's blog post "Organization skills beat algorithmic wizardry":

When it comes to writing code, the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity. I've worked on large telecommunications systems, console games, blogging software, a bunch of personal tools, and very rarely is there some tricky data structure or algorithm that casts a looming shadow over everything else. But there's always lots of state to keep track of, rearranging of values, handling special cases, and carefully working out how all the pieces of a system interact. To a great extent the act of coding is one of organization. Refactoring. Simplifying. Figuring out how to remove extraneous manipulations here and there.

Algorithmic wizardry is easier to teach and easier to blog about than organizational skill, so we teach and blog about it instead. A one-hour class, or a blog post, can showcase a clever algorithm. But how do you present a clever bit of organization? If you jump to the solution, it's unimpressive. "Here's something simple I came up with. It may not look like much, but trust me, it was really hard to realize this was all I needed to do." Or worse, "Here's a moderately complicated pile of code, but you should have seen how much more complicated it was before. At least now someone stands a shot of understanding it." Ho hum. I guess you had to be there.

You can't appreciate a feat of organization until you experience the disorganization. But it's hard to have the patience to wrap your head around a disorganized mess that you don't care about. Only if the disorganized mess is your responsibility, something that means more to you than a case study, can you wrap your head around it and appreciate improvements. This means that while you can learn algorithmic wizardry through homework assignments, you're unlikely to learn organization skills unless you work on a large project you care about, most likely because you're paid to care about it.

3
bbotond 22 hours ago 1 reply      
The site seems to be down.

Cached version: http://webcache.googleusercontent.com/search?q=cache:Mf9074z...

4
userbinator 15 hours ago 1 reply      
I'd say it's not really about "organizing complexity" - because that tends to just push it around somewhere else - but reducing complexity which is most important.

In my experience it has been that designs which look "locally simple" because they have such a high level of abstraction are actually the most complex overall. Such simplicity is deceptive. I think it's this deceptive simplicity which causes people to write a dozen classes with 2-3 methods of 2-3 lines each to do something that should only take a dozen lines of code (this is not that extreme - I've seen and rewritten such things before.)

Perhaps we should be focusing on teaching the techniques for reducing complexity more than hiding it, and that abstraction is more of a necessary evil than something to be applied liberally. From the beginning, programmers should be exposed to simple solutions so they can develop a good estimate of how much code it really takes to solve a problem, as seeing massively overcomplex solutions tends to distort their perspective on this; at the least, if more of them would be asking things like "why do I need to write all this code just to print 'Hello world', and the binary require over a million bytes of memory to run? Isn't that a bit too much?", that would be a good start.

5
dsr_ 23 hours ago 1 reply      
The most important skill in operations: systematic debugging. If the developers did well at organizing, this is much easier.

The most important skill in low-level technical support: diplomacy.

The most important skill in high-level technical support: figuring out what people are actually complaining about.

Note that many low-level technical support problems look like high-level problems, and vice versa.

6
edw519 22 hours ago 7 replies      
the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity

Agreed.

very rarely is there some tricky data structure or algorithm that casts a looming shadow over everything else

Agreed, BUT...

In order to organize, sooner or later, you will have to get clever (with tricky data structures or algorithms).

How I have always built something big and/or complex:

 1. Add lines of code.
 2. Add lines of code.
 3. Add lines of code.
 4. Holy shit! What have I done?
 5. Refactor.
 6. Combine similar sections.
 7. Genericize with parameter-driven modules.
 8. Still too many lines of code! Optimize Step #7 with something clever.
 9. Go to Step 1.
2 years later: What the hell was this clever parameter-driven multi-nested process for? Why didn't I just code it straight up?

For me, organizing complex code has always been a delicate balance between readibility and cleverness.

Don't remove enough lines of code and it's too much to navigate. Remove too many and it's too much to comprehend.

Organization + Cleverness + Balance = Long Term Maintainability

7
rsuelzer 11 hours ago 1 reply      
This times 1000! The really sad thing is that 9.9 out of 10 technical interviews are all about how many algos you know, even if your job will never require actually implementing a single one.

These interviewers do not seem to care at all about the actual process of writing and refactoring code, or what your finished products actually look like.

I know plenty of programmers who can write highly optimized code, but do so in a way where it is completely impossible to maintain.

It's especially heightened when you are given a problem such as "validate whether this uses valid brackets, and you have 10 minutes." When under time constraints to solve what is basically an algorithm puzzle, 99% of programmers are not going to write their best code or write code the way they normally would.

If you are using lots of algos in your programming interviews, I suggest you take a step back and determine if those skills are actually what you want to be testing for in this job. Odds are that it is NOT algos. Give your candidate some really sloppy code to refactor into something beautiful. Give them a few hours to work on it, and watch how they put the pieces together.

If your position isn't going to require someone to write advanced algorithms on daily basis, testing for them only cuts out a huge swath of potential talent. I also think it probably leads to less diversity in your work place, which is a bad thing.

A web developer will never need to solve the Towers of Hanoi problem, but they will need to write clean code that can be maintained by others.

</rant>

8
wskinner 15 hours ago 1 reply      
Here's something simple I came up with. It may not look like much, but trust me, it was really hard to realize this was all I needed to do.

This reminds me of something Scott Shenker, my computer networking professor at Berkeley, drilled into us every chance he got: Don't manage complexity. Extract simplicity.

Finding complex solutions to complex problems is comparatively easy. Finding simple solutions to complex problems is hard.

9
jondubois 23 hours ago 3 replies      
I agree that it is an important skill in engineering to recognize when the complexity has gotten too high - at that point you need to take a step back (or several steps back) and find a different path to the solution - sometimes that means throwing away large amounts of code. It's about acknowledging (and correcting) small mistakes to make sure that they don't pile up into a giant, disastrous one.

Another thing I learned is that dumb, explicit code is highly desirable - It's better to have 10 dumb components to handle 10 different use cases than 1 clever component that can handle all 10 cases.

I think the most important skill is being able to break down problems into their essential parts and then addressing each part individually but without losing track of the big picture.

10
ak39 22 hours ago 3 replies      
Organisational skill is actually an entrepreneurial skill. I'm learning the hard way that there are two diametrically opposing skills you need to master for both running a business and programming effectively:

1. Skills in Creating
2. Skills in Organising those creations

The thing is, the Creating part is always exciting but it's disruptive in nature. The Organising part is boring because it's about taming or tempering the creation in such a way that it can be referenced later on (just like filing your invoices, or timesheets - yuck - but necessary).

Unless you've got a systematic method to organise your creations, you will always be alone with your ideas, find it hard to resume creative chains of efforts and ultimately flounder without profit.

Both in business and in programming.

Damn right it's the most important software development skill.

11
mathattack 21 hours ago 2 replies      
This has been my experience too. I've been involved in cleaning up a half dozen or so projects that got out of control. In each case, technical complexity was blamed as the reason. After digging in, I found the following commonalities:

- Incomplete, conflicting and misunderstood requirements.

- Lots of "We never thought we would need to do X".

- Poor team communication.

- Mistrust, frequently well earned.

- Harmony valued over truth.

Once these were winnowed away, the problems rarely overwhelmed the technical teams. This isn't to diminish the importance of technical skills. Rather - when everything else is f*cked up, you can't blame the technology or expect a 10xer to pull you out of it.

12
jimbokun 21 hours ago 0 replies      
I disagree with the premise that organizing code is not a recognized or appreciated skill among developers.

At least not since this was published:

http://www.amazon.com/Refactoring-Improving-Design-Existing-...

Martin Fowler really struck a chord with all the developers trying to do the right thing by cleaning up badly structured code, by giving the practice a name and explaining why it's important. Refactoring is definitely a widely acknowledged and accepted practice today, although probably more so in some communities than others.

13
linkregister 20 hours ago 0 replies      
Google cache, for those experiencing connectivity issues.

http://webcache.googleusercontent.com/search?q=cache:Mf9074z...

14
varchar 9 hours ago 0 replies      
Interesting. Most comments seem focused on code complexity. In real-life situations the complexity is a blend of human interactions (attitude, consistency, team, leads, peers, family, emotions), business, market, competition, budget, time, attrition, unexpected events, and more.

Life is complex. Business and workplace dynamics can be complex. People are complex with their own strengths, quirks and situations. Having a broad outlook, developing patience and skills to deal with life and work is part of becoming mature.

15
matt_s 23 hours ago 1 reply      
There is just a nice magical moment, though, when you grok what an application does and how it is organized, especially when you didn't write it, which is most often the case.

I think a large issue is when an application has new people working on it or entirely new teams that maintain it. That is when the original author's methodology and organization of the application fall to the immediate need of the day.

The functionality of the software should speak for itself. Commenting your code with why you are doing something is important to help other maintainers later on, including yourself, understand what you were thinking when it was written.

16
humbleMouse 22 hours ago 1 reply      
I like this little article a lot. Personally, I try to write code in a way that reads like a book. Lots of comments, explicit function names, explicit variable names, object names, class names, etc. Talking about languages higher level than C/assembly here, obviously.

I am amazed at all the code I see that has terrible or too-generic names for functions, variables, and objects. Some people get so obsessed over short function names, one-character variable names, and complicated-looking one-liner evaluations.

17
mkramlich 13 hours ago 0 replies      
I agree that complexity is far up there. But also risk. Also long-term thinking. And net cost or net profit. The more years I have under my belt, the more I think not only about complexity, but also risk, cost, profit. Code and hardware is just a means to an end. Not the end itself.

But yes, seek the minimum amount of complexity to materialize the inherent, necessary complexity. But don't allow a drop of complexity more than that. Architecture astronauts, pattern fashionistas, I'm looking at you. KISS. Spend your complexity dollars where it gives you something you truly need or want. Don't do things Just Because.

18
joshburgess 16 hours ago 0 replies      
"The art of programming is the art of organizing complexity, of mastering multitude and avoiding its bastard chaos as effectively as possible." --Dijkstra

:)

19
agounaris 22 hours ago 0 replies      
I would add to the conversation the fact that most projects fail at setting the initial requirements. In my experience so far I have seen that constantly changing what you want the app to do creates a huge mess in the codebase even if you use the latest tools and methodologies.

Looks like there is great value in organising your app in a way that lets you throw away large chunks of code and start over in case there is a big design change.

20
kriro 7 hours ago 0 replies      
"""But theres always lots of state to keep track of, rearranging of values, handling special cases, and carefully working out how all the pieces of a system interact."""

I'm not a functional programming evangelist but that reads like a very good reason to go for FP. I think a similar point was made in "Functional JavaScript". I don't remember it exactly and it's on my shelf at home but there was some passage about the biggest downside of typical OOP codebases being the mental effort of keeping track of values and value changes.

21
Beltiras 20 hours ago 0 replies      
I disagree. Communication is the most important. It's the number one cause of failed software projects: miscommunicated features, capabilities, scope, and failure to speak up, to name a few. My favourite is the last one - not standing up and recognizing that an approach is not working, out of fear, has to stand out as a big one.
22
aspirin 23 hours ago 5 replies      
Organization is the hardest part for me personally in getting better as a developer. How to build a structure that is easy to change and extend. Any tips where to find good books or online sources?
23
petejansson 20 hours ago 0 replies      
Relevant: http://www.safetyresearch.net/blog/articles/toyota-unintende...

Summary: Toyota settled an unintended acceleration lawsuit after analysis of the source code for a 2005 Toyota Camry showed it was defective "spaghetti code."

There's a lot of poorly-organized code in the world, and a typical excuse for not cleaning it up is that "it works", so there would be no return on fixing it. In the Toyota case, the code may have contributed to unintended acceleration, and did result in legal exposure for which Toyota felt it was necessary to settle a lawsuit.

24
bcheung 12 hours ago 0 replies      
I can't load the page, but I would definitely agree with the title.

From my own experience programming here are some the most common and best ways to better organize complexity:

1) Create DSLs. The Sapir-Whorf hypothesis is the theory that an individual's thoughts and actions are determined by the language or languages that individual speaks. By creating DSLs we are able to reason about a problem domain with increased efficiency.

2) Reduce cognitive load by reducing state. By reducing the number of variables in a given section of code we can more easily reason about it. values.map (x) -> x * x is a lot more understandable than newArr = []; for (i = 0; i < values.length; i++) { newArr.push( values[i] * values[i] ); } (see the sketch after this list).

3) Build tools to build tools. The history of computing is one of building tools on top of tools, from assembly language to C to high-level languages. What is the next step? I suspect it is some kind of polyglot environment that is a hodgepodge of languages all working together, combined with automated code creation from AI.
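A side-by-side JavaScript sketch of point 2 (variable names are made up):

  // Stateful version: three moving parts to track (newArr, i, the push).
  var values = [1, 2, 3];
  var newArr = [];
  for (var i = 0; i < values.length; i++) {
    newArr.push(values[i] * values[i]);
  }

  // Stateless version: no index variable, no accumulator to manage.
  var squares = values.map(function (x) { return x * x; });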

25
sgt101 23 hours ago 0 replies      
Which is why jupyter/ipython is so great -> you can do some fantastic documentation of code with working examples and visualisation, and you can do it while you write it!
26
abc_lisper 17 hours ago 0 replies      
This is always a good idea. Here are some of the things I do

- Establish conventions early: conventions in managing projects, conventions in code. And stick to those conventions. Be predictable in the code. Use simple names.

- Protect your interfaces. By this I mean, use a central place like a wiki to document your interfaces, so all involved parties can agree on them. Write unit tests for interfaces. Use libraries like Mockito and Hamcrest that make it a breeze. (You lock your home every time you go out, don't you?)

- I mentioned this in the previous bullet, but write tests. Write lots of them; write tests that document any tricky, magical behavior. Cover all the cases (a boat with one hole is as bad as one with two holes). Cover all the invariants - anything that you notice but didn't mention in the source code. Write tests for the bugs you just fixed.

- If you are developing in Java, please use an IDE. I use IntelliJ, but Eclipse is good too. It makes refactoring code much easier: rename fields, pull classes up the hierarchy, create abstract classes, create getters and setters automatically with refactoring tools. I am not against emacs or vi, but it is hard to manage Java's sprawl with them.

One of the best programmers I know writes code like it has been generated by a program. It is boring, dull, and looks the same in every direction. Every field and method is documented: it says what its purpose is, and why it is needed. He is very fast (fast for a startup, not your average enterprise), accurate, and gets a lot of stuff done without magic.

27
anton_gogolev 23 hours ago 0 replies      
This can be reduced to:

 Do Not Program Yourself into a Corner

28
jaequery 23 hours ago 1 reply      
Nice post; organization definitely is one of the most overlooked aspects of programming. It takes a lot of experience and thinking to be able to organize properly. It's really what separates the beginner programmers from the experienced ones.
29
AdieuToLogic 9 hours ago 0 replies      
> Do the difficult things while they are easy and do the great things while they are small. A journey of a thousand miles must begin with a single step. (source: http://www.brainyquote.com/quotes/quotes/l/laotzu398196.html...)

This applies to systems as much as anything else it possibly could.

30
blazespin 23 hours ago 2 replies      
Design Patterns by the Gang of Four was great for this.
31
bikamonki 23 hours ago 1 reply      
No part of a system is the most important part. From a car to a huge organization, all parts are equally required to interact and hence make the system 'work'.

Having a clear functional organization at the start (and respecting it throughout development) is very important, but after that it is equally important to write clean and efficient code, to test, to debug, etc. Then, going up and down the solution stack is important to make the right decisions on hardware, OS, server, services, etc.

32
crimsonalucard 16 hours ago 0 replies      
Since so many people prioritize getting the task done over writing organized/beautiful code, more often than not we get code that isn't organized properly.

Thus, as a result: interpreting complexity is by far the most important skill in software development - more so than organizing complexity.

33
stinos 23 hours ago 1 reply      
Not only is it the most important skill (except maybe for small home projects), it's also the one which takes the longest to learn, and hence the one you really improve on over the years and which distinguishes the experienced from the less experienced. Coincidentally, it's also the skill for which you won't find a ready-made answer on Stack Overflow or any other site/book.

Thinking of it, I've also seen this as a typical difference between fresh CS graduates and those who have been programming for 10+ years. The latter would sometimes take way longer to come up with clever math-oriented algorithms than the former, because the graduate has been trained for it and still has it fresh in memory, but the experienced programmer would make up for that by being able to use the algorithms in all the proper 'best practice' ways one can think of. The graduate, meanwhile, would just slam it in somewhere and call it a day even though there are now x more dependencies and whatnot - you get the picture.

34
jackreichert 22 hours ago 0 replies      
For those not able to view it: [Cacheview](http://webcache.googleusercontent.com/search?q=cache:http://...)
35
zackangelo 20 hours ago 0 replies      
Aside from personal discipline and experience, I've found that using strongly typed and compiled languages combined with good tools is the best way to accomplish this.

Being able to search for and manipulate symbols at the AST level goes a long way towards eliminating any resistance to refactoring.

36
ademarre 11 hours ago 0 replies      
> Organizing complexity is the most important skill in software development

I agree with this profoundly. Unfortunately, complexity is in the eye of the beholder. When comparing solutions to a problem, different developers will not always agree on which is the least complex.

37
autotune 20 hours ago 0 replies      
I take it from his site not loading that the most important skill is learning how to scale your site and ensuring it remains accessible during high load times, or using CloudFlare to at least ensure it gets cached.
38
edpichler 11 hours ago 0 replies      
We humans are "comparing machines". For this reason we tend to value people who solve problems more than the ones who never create them. This is really bad.

Also, in business, if someone is a really good administrator, it seems he never does anything.

39
o_nate 21 hours ago 1 reply      
Or in other words: code should be as simple as possible, but no simpler. I guess you could call it organization or readability or just good design. It requires deeply understanding what you're trying to accomplish and structuring your code to reflect that. I don't think there's any rote, step-by-step procedure that will get you there. Often it is a flash of creative insight that breaks the log-jam and reveals the hidden inner structure of the problem. Once that is revealed the code writes itself. In other words, good code should always make the problem that was solved look easy.
40
jdimov9 22 hours ago 3 replies      
I agree that most complexity in software systems comes from managing state. So here is a simple solution - stop doing it. Stop managing state.

Use the right tools for the job. Most mainstream programming languages are ridiculously inadequate for building any production software of any significant complexity without introducing more problems than you are trying to solve.

Use a mature functional programming language that works with immutable data and is designed for building complex industrial systems.

Use a language that was designed from the beginning with the understanding that your software WILL be full of bugs and errors, yet systems must always continue to run.

Use Erlang.

41
agumonkey 16 hours ago 0 replies      
And meta-level complexity. I may have been able to craft beautiful code, but I sensed chaos in the way I operate and manage my own resources/tools. I've seen people being organized at many, if not all, layers: solving problems, and solving how to help solve problems. Witnessing that makes me feel calm and envious at the same time.

ps: it's also reminiscent of recursion, dogfooding etc.

42
jblow 22 hours ago 0 replies      
It is a reasonable point that organization is important but I have to disagree about "most important".

The most important skill in software development, by far, is managing your own psychology.

43
jtwebman 18 hours ago 0 replies      
The website is down but here is a link from the archive.org site. http://web.archive.org/web/20150622134205/http://www.johndco...
44
BerislavLopac 23 hours ago 0 replies      
This pretty much aligns with my take on that from four years ago: http://berislav.lopac.net/post/13061099545/the-most-importan...

As wise men said: all problems in software can be solved with more layers of abstraction, except for the problem of too many layers of abstraction.

45
DevPad 20 hours ago 0 replies      
Yeah, the best developers I've ever seen are about simplifying things, avoiding overcomplication for the sake of too much "flexibility".

The Zen of Python (https://www.python.org/dev/peps/pep-0020/) says:

 Simple is better than complex.

46
markbnj 22 hours ago 0 replies      
It's a very important skill for a programmer to have, especially in the modern environment where distributed systems built from many integrated components are the norm. That said, it's awfully difficult to disentangle the various skills needed for programming and assign an importance to each one, much less to determine which of them is actually most important of all.
47
thewarrior 18 hours ago 1 reply      
Yeah this is so true.

I worked on a system where we had to support imperial and metric units. It was done in a pretty bolted-on fashion with if statements all over the place. And sometimes it isn't even clear if it could be done in any other way.

Any HN'ers have suggestions on how to do it elegantly?

48
brazzlemobile 21 hours ago 0 replies      
Here's the Google cache if someone is looking for a mirror: http://webcache.googleusercontent.com/search?q=cache:www.joh...
50
tome 22 hours ago 0 replies      
This is why I love Haskell. Haskell seems to increase my ability to manage complexity by one level.

Disclaimer: Haskell is not a silver bullet, not a panacea and I'm only claiming a modest increase, not miracles, but it helps me deal with complexity better than any other language I know.

51
ljk 18 hours ago 0 replies      
I was thinking about that recently. The way I try to stay organized is to comment (with a timestamp) every time something is changed so I can refer to it later. Does anyone have more tips on how to stay more organized?
52
headShrinker 15 hours ago 0 replies      
google cache record: http://webcache.googleusercontent.com/search?q=cache:Mf9074z...

The server seems to be getting hammered.

53
grandalf 20 hours ago 1 reply      
In a recent job I ended up rewriting some of the codebase with this kind of stuff in mind... the results:

- 10% of the LOC of the previous version

- Code significantly more understandable

- Jr. developer on the team suddenly became a rockstar b/c he could understand what was going on.

54
Omnipresent 22 hours ago 3 replies      
On the topic of organization, and related to another post about good software development books: what are some books that teach code organization as discussed in this post? One I can think of is "Refactoring" by Martin Fowler.

What are some others?

55
signa11 22 hours ago 1 reply      
There is an excellent book by John Lakos called "Large-Scale C++ Software Design" which treats the organizational or physical design of large-scale C++ projects (> 1 giga LOC). Highly recommended.
56
justonepost 23 hours ago 3 replies      
Arguably, this is what higher level languages like Java and C++ provide. Tight organizational language metaphors that help implement design patterns in a thoughtful, consistently structured manner.
57
snarfy 21 hours ago 1 reply      
There is a term for it. It's called managing scope. I'm surprised the article didn't mention this.
58
Stratoscope 15 hours ago 2 replies      
Out of all the things I know how to do in programming, reducing complexity is probably the one I'm best at. So how do I get a job doing this? Or a series of lucrative consulting gigs? :-)

I'm pretty sure I'm not as smart as I used to be, and I'm definitely not as smart or productive as some of the younger programmers I've worked with. (Sorry for the ageist remark!)

This may be my secret advantage: I have to keep my code simple enough that even I can understand it.

Here's a fun example that I've seen more than a few times in various forms: four-way navigation, either involving up/down/left/right or north/south/east/west, or both.

In one (somewhat disguised) project it worked like this: the code had several different modules to provide a keyboard interface for geographic navigation, while keeping the geo code separated from the low level details of key codes and events and such.

There was a keyboard manager that mapped keycodes to readable names that were defined in an enum:

  switch( keyCode ) {
    case 37: return KEY_LEFT;
    case 38: return KEY_UP;
    case 39: return KEY_RIGHT;
    case 40: return KEY_DOWN;
  }
Then an event manager broadcast navigation messages based on the KEY_xxxx codes:

  switch( keyEnum ) {
    case KEY_LEFT:  BroadcastMessage( 'keyLeft' );  break;
    case KEY_RIGHT: BroadcastMessage( 'keyRight' ); break;
    case KEY_UP:    BroadcastMessage( 'keyUp' );    break;
    case KEY_DOWN:  BroadcastMessage( 'keyDown' );  break;
  }
A navigation manager received these messages and called individual navigation functions:

  // Don't forget to reverse the directions here
  events.on( 'keyLeft',  function() { moveRight(); });
  events.on( 'keyRight', function() { moveLeft();  });
  events.on( 'keyUp',    function() { moveDown();  });
  events.on( 'keyDown',  function() { moveUp();    });
These navigation functions panned a map in one compass direction or another:

  function moveUp()    { map.pan( maps.DIRECTION_NORTH ); }
  function moveDown()  { map.pan( maps.DIRECTION_SOUTH ); }
  function moveLeft()  { map.pan( maps.DIRECTION_WEST );  }
  function moveRight() { map.pan( maps.DIRECTION_EAST );  }
Of course most of you reading this can see the problem at a glance: Besides having so many layers of code, how many different names can we give to the same concept? We've got KEY_LEFT, keyLeft, moveLeft, and DIRECTION_WEST that all mean pretty much the same thing!

Imagine if math worked like this: You'd have to have two of every function, one for the positive numbers and another one for negative numbers. And probably four different functions if you are dealing with a complex number!

That of course suggests a solution: use numbers instead of names, +1 for up and -1 for down, ditto for right and left. And pass these numbers on through any of these layers of code so you only need half the functions. If you need to flip directions along the way (like the left arrow key navigating right), just multiply by -1 to reverse it instead of having to make special cases for each direction name.

You might even decide to combine the two axes, so instead of vertical and horizontal, you've got +1 and -1 there too (or 1 and 0, or something that lets you handle both axes with one piece of code). Now you could be down to a quarter of the original code.
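A rough sketch of what that could look like (all the names here are hypothetical, including the two-argument pan API, and the sign conventions are arbitrary):

  var AXIS_HORIZONTAL = 0, AXIS_VERTICAL = 1;

  // One table replaces four switch cases, four messages, and four
  // move functions. The key/direction reversal is just a sign flip.
  var keyNav = {
    37: { axis: AXIS_HORIZONTAL, dir: +1 },  // left arrow pans east
    39: { axis: AXIS_HORIZONTAL, dir: -1 },  // right arrow pans west
    38: { axis: AXIS_VERTICAL,   dir: -1 },  // up arrow pans south
    40: { axis: AXIS_VERTICAL,   dir: +1 }   // down arrow pans north
  };

  function onKeyDown( keyCode ) {
    var nav = keyNav[ keyCode ];
    if ( nav ) { map.panBy( nav.axis, nav.dir ); }  // hypothetical pan API
  }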

Unfortunately, I was brought in on this project near the end to help wrap up a few other tricky problems, and all this navigation code was already set in stone. (And to be fair, working and tested, and who wants to go back and rewrite proven code, even if it is four times the code you need?)

But this would make a pretty good "how would you clean this code up" interview question!

59
brobdingnagian 17 hours ago 1 reply      
Important according to what metric? Making the software developer feel good, or making the company money? The former is almost certainly true, the latter is almost certainly not.
60
evandrix 21 hours ago 1 reply      
page not found
61
brightball 23 hours ago 0 replies      
Great read. 100% agree.
62
yessql 22 hours ago 1 reply      
Oh, I thought the most important skill in software development would be SCRUM.
Running Empty outsideonline.com
32 points by nashequilibrium  12 hours ago   2 comments top
1
jac_no_k 2 hours ago 1 reply      
I cycle commute to work, about 38km each way, currently about a 90-minute effort. I ride at, for me, "threshold" (above 90% of max heart rate) for 45 minutes of that and the remainder at "tempo". Nowhere near "ultra" but I think I'm near my biological maximum.

I too have hit a period where I was feverish for several weeks. Foolishly kept cycling and became ill / fatigued to the point where it was a struggle to get out of bed. The physician I visited didn't find anything conclusive and I was on variants of ibuprofen to control the fever and for pain relief. When I was finally feeling better, I cycle commuted once and that knocked me back into bed for another week.

Likely I was going too hard for the conditions. This occurred when temps during the commute were hovering around 0°C. Low temps make it a challenge to maintain body temperature and may have stressed my body too much.

Strava provides a metric to guess how fatigued one is from cycling. I noticed that the point when I fell ill was the same time it was reporting record fatigue levels. I now keep an eye on it, but I'm feeling a touch of that "fatigue" these days. Likely due to $dayjob and cycling...

I wish there was a way to measure all my activities to guide when I should be backing off. I've found my breaking point, and crossing that point means weeks of recovery.

For what it's worth, I'm faster than I ever was... but something is going to break.

What's really going on in downtown Vegas? theguardian.com
53 points by flippyhead  13 hours ago   21 comments top 12
1
vogt 12 hours ago 0 replies      
Eh. I just moved to work at a Downtown startup. I absolutely love it. Folks love complaining about cities that don't have character - downtown has it like pretty much no other. Container Park, as a poster before me mentioned, is bustling. Grab tacos at Pinches if you're ever by there; they're amazing. Most, if not all, of the startups here seem to be doing very well. The author paints a picture that Downtown is being gentrified by the Tony show, displacing things like punk rock bars in exchange for organic farmer's markets. I find this to be pretty much completely false. There's as much punk rock in the area as there is farmer's market right now.

I don't want to call this article a hatchet job because I don't know Hsieh and have nothing to do with him or Zappos or anything, but that's how it came across to me. Downtown Vegas seems like it's going nowhere but up, even despite the glaring little pockets of yesterday that line some of the more run down streets.

Calling the Gold Spike a frat house is pretty on point though.

2
caseysoftware 12 hours ago 2 replies      
I was in Las Vegas for Future Insights Live just a couple weeks ago and a friend - Frank Gruber of TechCocktail - gave me a tour.

First of all, it is nothing like the Strip. While there are casinos and hotels, etc, they're a fraction of the size of the main chaos. As a result, it feels closer to somewhere you could actually live.

We went to Container Park* (and numerous other places) and they're bustling. I don't know how many are locals vs tourists but the sheer number of people (families, couples, etc) hanging out made me think locals.

I moved to Austin in 2010 and it feels like a more embryonic version of that. But in five years, Austin has had a handful of IPOs, a few major acquisitions, and most major SF companies are setting up shop. If Las Vegas got a similar cycle going - either by starting companies or importing people - it could get some great things going. Either way, it needs to be thought of as a 5, 10 or even 20 year plan. Not something to do in three years.

* By the way, the fire-breathing praying mantis alone almost makes it worth the trip. Even at 20-30 feet away, you can feel the heat when it shoots. It's amazing.

3
geophile 11 hours ago 2 replies      
I visited Container Park in April, after having read a few articles about the Downtown project. I really wanted to like this place, and architecturally I thought it was pretty interesting. But the place didn't make sense to me.

Container Park has a bunch of very narrowly focused businesses, in a small, 2-story space with insufficient traffic to support them. There was nothing resembling an "anchor" store to draw crowds and get this place going. As a result, it just seemed dead, lackluster.

Downtown LV is quite far from the strip, so tourists aren't going to stumble on it. And it's not even in the heart of downtown LV. To get there, you have to make a point of going there. And of course, LV makes it very tempting to stay in the big hotels and not venture all that far.

Some examples of the businesses: There was one store devoted entirely (I think) to socks. Another sold groceries, locally produced art, and art supplies. One store sold used designer bags and shoes. There were a few bars and restaurants, but nothing I'd go out of my way to patronize. In general: an odd collection of niche stores, with a few places to eat and drink.

4
deckar01 9 hours ago 0 replies      
I stopped in Las Vegas for a couple nights as I was making my exodus from San Francisco back to Oklahoma. I wish I would have known this existed. When I was looking for things to do all I could find was the overpriced shows and resorts on the strip. Seems to be a marketing failure, because I'm definitely their target audience. How do most people find these developments organically?

Although I was passing through with my car, I tried to venture off the strip on foot and take the bus around town. Walk any direction perpendicular to the strip and you start to feel like you are in an empty wasteland. Next time...

5
caseyf7 13 hours ago 1 reply      
The thing you have to remember is this is a real estate play, not a tech play. We may soon have more CEOs buying tracts of land with their personal money and then moving the corporate headquarters nearby if this works out.
6
jchin 12 hours ago 0 replies      
This article is from November 2014, quite some time ago.
7
tgb29 9 hours ago 0 replies      
I lived in Northwest Arizona for the last 2 years and only recently began going downtown after spending many of my Nevada nights on the Strip. I really like Fremont Street and would seriously consider moving there just to be a blackjack dealer or casino employee. The workers were very considerate and showed great respect to me when I sought more information.

The biggest downside, and the saddest part of downtown, is the "dumb" street lights that run perpendicular to Fremont. I'm talking about the "walk" and "don't walk" lights. Often I had to wait 1-2 minutes for the "walk" light even though there were no cars in sight. You would think an area that is trying to attract tech talent would install "smart" street lights and walk signs. This seriously ruins the user experience for me, and I'm pretty sure the technology to fix this has been around for 20 years. Simply, if there are no cars then there should be no "stop walking" lights. Solve this and downtown becomes much better immediately.

I see many opportunities for tech startups in Vegas, especially in fintech. Nevada has one of the best laws for medical marijuana in the US, and there needs to be a hotel that caters to this demographic. Tobacco smoke is nasty. There are 8,000 jobs available in the area, and a software solution to fill these voids would work. Accept it or not, the sex industry is enormous, the laws make it legal, and it can be better organized in order to protect young women.

8
8ig8 10 hours ago 0 replies      
Mods: How about a 2014 note in the post title?
9
nugget 12 hours ago 0 replies      
Tony invested in the wrong companies. Instead of trying for standalone valley style tech companies, he should have focused on small division style incubations that could be sold into larger parent companies who would use the acquisitions to establish and grow a meaningful presence in Vegas. That strategy could have brought a lot of growth to the area and begun seeding the ecosystem with stable employers, talent, and infrastructure. Vegas would make an awesome home base for a lot of QA, customer service and support, and similar corporate functions that are now being priced out of LA, SF and NYC. I give him a huge amount of credit for trying, regardless of the outcome.
10
wolfico 12 hours ago 1 reply      
I really would like to move to Vegas but a few people who have lived there have told me that Zappos is realistically (but not literally) the only game in town for tech. Does anyone with boots on the ground there know if this is the case or not? How does it compare to the LA/DFW/AUS scene?
11
dangero 10 hours ago 0 replies      
On first glance this article seemed negative, but by the time I finished I couldn't remember what they were complaining about and I really wanted to check it out. Almost seems like intentional reverse psychology.
12
mistermann 12 hours ago 1 reply      
This article makes it seem a lot more interesting and substantial than actually being there.
The Internet is running in debug mode (2014) java-is-the-new-c.blogspot.com
54 points by sciurus  14 hours ago   55 comments top 23
1
kentonv 10 hours ago 5 replies      
OK, we're going to see a lot of rehashing of the old arguments:

Pro-text: Human-readability saves immeasurable human developer time!

Pro-binary: Text parsing wastes a lot of very measurable machine time!

Problem is, half of this argument is based entirely on anecdotal evidence and gut feelings. We really have no idea how much developer time is saved by having messages be human-readable. You will find smart people who believe very strongly both ways. In a seemingly very high fraction of these cases, it seems that people are really just taking the practice they are more comfortable with (because they've used it more) and rationalizing an argument for it because it makes them feel good about their choices. When hard evidence is lacking, confirmation bias, unfortunately, takes over.

As the author of Cap'n Proto and former maintainer of Protobufs, I obviously come down pro-binary... but I won't bore you with my argument as I don't really have any hard facts either.

2
Udo 12 hours ago 1 reply      
The problem I see with binary protocols is the same one that plagues XML: as soon as you build something to be "not human-readable", any potential efficiencies gained will quickly be overshadowed by increasing bloat.

When you as a developer look at JSON messages, and you see endless walls of irrelevant text scroll by, you're disgusted. This disgust drives minimalism. With a machine-centric format, the excuse quickly becomes "oh, this is intended for machines anyway, so who cares". If you can access something through tools only, the bloat becomes hidden and is encouraged to grow.

At that point, you'll start seeing articles asserting that, sure, the new super-efficient binblob messages are 10x-20x as large as JSON used to be, but look at all the things we gained, like automatic protocol negotiation, contracts, actual serialized objects. Any of these sounds reasonable at first but in reality will only benefit tool vendors in a vicious feedback cycle where the format slowly evolves itself to death.

I take that 3-5x overhead of parsing JSON any time over the non-human-readable alternatives. That doesn't mean it's the right choice for all protocols. But it's a reasonable default for a lot of systems.

3
delinka 10 hours ago 0 replies      
The Internet is running in debug mode. Because it's large and complex and humans have to debug the problems. Humans need to be able to read the data as it whizzes by to spot the problem[1]. Often, this needs to happen in environments with minimal tooling; e.g. you're staring down a problem on a production server, and you're forbidden to install the newest analysis script and its requirement of the latest version of Python.

We've already agreed on a binary protocol: UTF-8 (previously it was ASCII). But we've also built redundancy into it for the humans to make sense of it with their high-level brains. Instead of a single byte representing an HTTP header, we use a string of bytes. Now the human involved can tap the wire and watch the request in real-time without processing anything.

Now, if you'd like to remove redundancy without the need for a compression library, we'll just need to agree on shortening those strings. And we'll need a new diagnostic/parsing tool for each [binary] protocol that's invented -- unless you can convince the grep/sed/awk developers to add every protocol to their tools. Or maybe we could all agree on a single binary encoding for every potential combination of strings; something like an index into a dictionary. It might be better (i.e. higher compression ratios) if we let the computer decide on the dictionary for each message.

Do you see where this is headed?
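(It's headed, more or less, to DEFLATE: the compressor builds the dictionary from each message itself. A rough Node.js sketch, using the built-in zlib module:)

  // Let the compressor discover the repeated strings ("Content-")
  // instead of hand-designing a shared dictionary.
  var zlib = require('zlib');

  var headers = 'Content-Type: text/html\r\n' +
                'Content-Length: 1024\r\n' +
                'Content-Encoding: gzip\r\n';

  var packed = zlib.gzipSync(headers);
  // For tiny inputs the gzip container overhead can dominate;
  // realistically sized messages compress far better.
  console.log(headers.length + ' -> ' + packed.length + ' bytes');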

1 - This, of course, is only the case until the machines can accurately gauge human intent and respond appropriately, preventing us from making mistakes to begin with.

4
sirsar 13 hours ago 2 replies      
I wouldn't be so quick to judge the energy lost by using JSON over some binary format as "waste". Standardization on JSON has saved countless developer hours by being so easy to read, write, implement, and debug.* This translates directly and indirectly into economic benefits for everyone. Similarly, I would have taken much longer to start learning HTTP if it was not inspectable with near-zero friction. People "waste" electricity for a reason.

* Is it perfect? No. Is any protocol? No. Would a binary format be better? Very likely, no.

5
andrewstuart2 13 hours ago 1 reply      
Let's not forget about TCP, IP, and all the other protocols involved in the internet that are binary by design. It's more than just HTML, JSON, and JavaScript over HTTP.

> Standardize on some simple encodings (pure binary, self describing binary ("binary json"), textual)

Maybe like gzip [1], hpack [2], bson, or others?

I realize the point he's making about doing unnecessary work, but there's also a reason we haven't expanded human language past written characters or spoken syllables. It's efficient for our brains, and for preserving and transmitting knowledge.

There's just no way to create a binary format (character encodings aside) that can encompass all the possible ideas that can be communicated. Instead, the common text protocols eventually get optimized into binary (HTTP/2) without compromising the ability to express the rest.

[1] https://www.ietf.org/rfc/rfc1952.txt

[2] https://tools.ietf.org/html/draft-ietf-httpbis-header-compre...

6
serve_yay 11 hours ago 2 replies      
I know I'm not making any big revelations here, but the funny thing is there's no "plain text" at all. It's just that our tools all know how to decode ASCII. ASCII bytes are just as non-human-readable as any other until they're decoded as ASCII and displayed on screen. Another encoding could be just as transparent if all our packet sniffers, editors, etc. spoke it.
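A tiny Node.js sketch of the point (the byte values are just an example) - the bytes only become "text" once something chooses to decode them that way:

  // The same four bytes, three ways of looking at them.
  var bytes = Buffer.from([0x48, 0x54, 0x54, 0x50]);
  console.log(bytes);                    // <Buffer 48 54 54 50>
  console.log(bytes.readUInt32BE(0));    // 1213486160
  console.log(bytes.toString('ascii'));  // 'HTTP'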
7
nchudleigh 11 hours ago 2 replies      
The bit about global warming - come on. Reminds me of HBO's Silicon Valley. "And we're making the world a better place, through standardized binary encoded web protocols"
8
rbanffy 11 hours ago 1 reply      
We have realized, long ago, that computers are cheap and programmers (the humans who most often read "human readable" stuff) are expensive.

We also tend to forget, or never learn, what was discovered before our careers began.

9
ChuckMcM 13 hours ago 2 replies      
From last year (Oct 2014) which describes textual protocols as "debug mode" since they consume cycles to parse.

It's an interesting claim, and it is certainly true that encoding numeric information into UTF-8 consumes CPU cycles. But what isn't quite so clear is: "What percentage of packet latency is dedicated to encoding and decoding packets?"

Back in the old days I was the ONC RPC architect for Sun and we spent a lot of time on "RPCL" (RPC Language) which was a way to describe a protocol textually and then compile that description into library calls into XDR (the eXtensible Data Representation). We did that because you burned a lot of CPU time trying to parse a C structure out of the network, and more importantly the way in which it was represented in memory was an artifact of the computer architecture (bit endianness, did structures get packed or were they word addressable, etc) XDR solved all of those problems by putting data on the network in a canonical format, and local libraries could always convert from the canonical format into the local format correctly.
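(The canonical-format idea lives on in today's APIs. A rough JavaScript sketch using DataView, whose final boolean argument selects the byte order:)

  // Write a 32-bit value in canonical (big-endian) order, whatever
  // the endianness of the machine doing the writing.
  var view = new DataView( new ArrayBuffer(4) );
  view.setUint32( 0, 1985, false );          // false = big-endian
  // A reader on any architecture decodes it identically:
  console.log( view.getUint32( 0, false ) ); // 1985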

That actually works quite well. It almost became the standard way to do things on the Internet, but politics got in the way. The big argument was that if you converted things into big-endian form on the network, then a little-endian processor had to convert to send and convert to receive, but a big-endian processor got a free pass without "painful" conversion steps.

Later, rather than converting big endian to little endian people just convert to text (which has the same effect of a canonical form) but it hides the religious argument behind the "hey its just text, we all know how to parse text right?" sort of abstraction. At least then it penalized everyone equally.

But the truth, which came out in the RPC wars, and is even more true today, is that you have to burn a few billion CPU instructions to have any impact at all on latency. That is because computers are so much faster, while the network, though faster, isn't a million times faster - it really is just barely, and only on a good day, a hundred times faster than it was back in 1985. What that means in principle is that whether it takes 1 uS or 30 uS to queue your packet for the internet, it doesn't even show up in the carve-out of the 5,000 uS it takes to send a small packet from here to there, or the 200,000 uS it more typically takes.

If you're a supercomputer sending data around to simulate fluid dynamics, that stuff adds up. If you're sending ajax calls from here to there, not so much.

10
nickpsecurity 12 hours ago 0 replies      
I used to use Sun XDR [1] for this reason. Pushed Juice [2] applets briefly while they existed with efficiency neither JS nor Java delivered. Other benefits authors didn't see, too. Plus, client-server or P2P architecture with native code gave me performance, portability, and security benefits Web can't deliver.

Whole Web is ridiculously inefficient. Even in 90's, there were better architectures [3] to choose from. It's unsurprising that "Web" companies such as Facebook have moved back to Internet apps where possible (esp mobile) and often avoid vanilla Internet/Web protocols within their datacenters. There's better stuff in every area. Here's to hoping that more of it gets mainstreamed.

[1] https://en.wikipedia.org/wiki/External_Data_Representation

[2] ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf

[3] http://www.cs.vu.nl/~philip/globe/

11
nitwit005 8 hours ago 0 replies      
Actually, the internet is dominated by binary. The wire formats are binary, and a huge hunk of all bandwidth is being eaten up by binary formats: video, bittorrent, voice, JPG images, etc.

Sure, HTML, CSS and JS are "text", but it's usually optimized, compressed text, and pretty soon we'll be shipping that around in HTTP2.

People optimize what matters. Look at the HTML source for this page. It's mostly user content. More efficient packaging for the HTML structure wouldn't make a measurable difference in load time.

12
bcheung 12 hours ago 1 reply      
Accessibility and ease of use contribute to the advancement of technology. The easier something is to work with the more people will use it.

I used to be obsessed with performance and refused to write in anything other than assembly. When I started working with teams of other people I realized there is efficiency for the computer, and there is efficiency for the developer and the company. The latter two trump efficiency of the computer pretty much every time.

I'm more than happy to sacrifice some efficiency if it means getting the job done faster, cranking out more features, greater compatibility, more leverage with existing tools / standards, and so on.

If you want to talk about efficiency though. Why don't we get rid of time zones and daylight savings time?

13
jpatokal 10 hours ago 0 replies      
Google has an elegant solution for this in the form of protocol buffers, which are human-readable when expanded but compress down to a very efficient binary format:

https://developers.google.com/protocol-buffers/

Unfortunately it was made public only after JSON had already become "the XML replacement".

14
deepuj 11 hours ago 0 replies      
The more readable and accessible data is, the more "future-proof" it will be. Unless you have a specific requirement for performance, it is never a good idea to sacrifice accessibility. JSON has won the developer community primarily because it is simple, readable and easy to implement.
15
wtbob 12 hours ago 1 reply      
One of the many nice things about canonical s-expressions[1] is that they combined many of the features of binary and textual protocols. Fast to parse, easy to write a parser for, easy to examine by hand, they were ahead of their time.
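For instance (if I recall the spec correctly), the canonical encoding of the list (foo bar) is (3:foo3:bar): every string is length-prefixed, so a parser never has to scan for delimiters or escapes, yet the bytes are still perfectly legible.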

[1] http://people.csail.mit.edu/rivest/Sexp.txt

16
kijin 12 hours ago 1 reply      
The fact that the Web (not the "internet") runs in debug mode is directly responsible for its popularity and accessibility.

Anyone can look at a web page and learn how it works. Anyone can copy bits and pieces of code from various places and put it together into a web page of their own. Most "web programmers" manage to make a living without ever having to learn a complicated protocol or trying to figure out what a long string of hexadecimal digits means. Easy to learn = more casual tinkerers = more people learning how to code, at least at a basic level.

You could design perfectly efficient protocols and encoding formats, but if people don't use it, what's the point?

HTTP/2 is a nice compromise. The web server and browser abstract away all of the binary layers, so most programmers only need to care about human-readable text.

17
bcheung 12 hours ago 1 reply      
To counter my earlier statement though I think there are scenarios where you can have your cake and eat it too.

JSON is easier to use, debug, and is more efficient than XML.

WebAssembly is going to reduce data transfer sizes and load times while increasing developer productivity as the ecosystem and tools surrounding it expand.

18
nemasu 11 hours ago 0 replies      
I just recently implemented something using binary websockets, so it's easily possible now.
19
choward 13 hours ago 0 replies      
So he wants HTTP2?
20
chillingeffect 9 hours ago 0 replies      
> 99% of webservice and webapp communication is done using textual protocols.

Yep. <img src="whatever.jpg"> is indeed all text, and zip would have turned those 25 bytes into 191. And then a 300KB highly-compressed binary data file would be transmitted.

aka "Penny-wise and pound-foolish."

21
contingencies 9 hours ago 0 replies      
"Human society is running in debug mode!"
22
fapjacks 11 hours ago 1 reply      
You can read the author's inexperience between every single line.
23
smegel 13 hours ago 0 replies      
> textual encodings like xml...have become very popular

Written by someone with "Java" in the URL. Why did I even click.

       cached 23 June 2015 13:02:02 GMT