hacker news with inline top comments    .. more ..    14 Jan 2014 News
Requirements for DRM in HTML are confidential w3.org
164 points by duncan_bayne  2 hours ago   89 comments top 10
Nursie 37 minutes ago 1 reply      
Great. DRM. The best example of shooting yourself in the foot ever.

Give customers encrypted content and the keys, try to prevent them from freely using the two together, undermine copyright fair use and first sale doctrines as you go along.

Intended effect - no piracy. Actual effect - paying customers get horribly limited products, pirates carry on regardless.

It's crazy. And the more they try to lock it down the worse their products become and the better piracy looks in comparison. Pirates don't only beat the legit industry on price, they beat them on quality and availability. How can the industry allow this to stand? Let alone continue down the same path with their fingers in their ears shouting LALALALALALA I CAN'T HEAR YOU!?!

rlx0x 1 hour ago 2 replies      
This is all so ridiculous. RTMP, for instance, is as secure a DRM as it's ever gonna get, and that never stopped me from downloading a stream. Even things like HDMI/HDCP are broken beyond repair. And all of this is supposed to justify damaging the W3C's reputation forever? What are they thinking?!

This whole concept of DRM is just idiotic: it's enough if one guy breaks the DRM and releases it. Why should I even bother booting a proprietary OS (Windows) and buying a stream every time I want to watch something if I can just download a release and watch it? It's not like they can do anything against that, either.

Why should I bother buying HDCP-capable new hardware and dealing with proprietary, NSA-compliant US software? I'd much rather buy the DVD, trash it, and just download it in an open and free format (I don't even bother with ripping (and breaking CSS) anymore).

belluchan 1 hour ago 5 replies      
Can't we just fork the W3C? Start using Firefox and forget about these people. Oh, I'm sorry your browser is a little slower, but at least it's not Google-made.
duncan_bayne 2 hours ago 1 reply      
It's worth mentioning that the CEO of the W3C, Jeff Jaffe, is trying to rectify that:


josteink 25 minutes ago 1 reply      
Email the W3C. Tell them what you think of this bullshit (in a reasonably polite manner).

I've done it. I've gotten a non-canned response.

But clearly they need more people at the gates bitching. This needs to be stopped.

girvo 2 hours ago 6 replies      
Sigh. Look, I'm okay with DRM, as long as it works on all my devices. EME won't work under Linux - I guarantee the DRM vendors won't bother releasing Linux binaries. That annoys me.
Zigurd 1 hour ago 2 replies      
Why should DRM be part of a standard? Aren't plug-ins sufficient?
alexnking 1 hour ago 1 reply      
Maybe instead of getting everyone to adopt Silverlight, we could just make the web more like Silverlight. Like more closed and stuff, because movies!
kevin_bauer 1 hour ago 0 replies      
I guess the "another backdoor" proposal will go over very well in Europe, where most citizens are just ecstatic about America's view on privacy and respect for constitutional rights. Way to go - maybe the W3C will finally get Europe and the rest of the "free" world to create their own web!
dreamdu5t 49 minutes ago 1 reply      
What's the problem? Don't support companies that distribute any DRM content. Standardizing DRM and propagating DRM aren't the same thing.
Why 'Her' will dominate UI design even more than 'Minority Report' wired.com
41 points by anigbrowl  1 hour ago   25 comments top 11
aegiso 3 minutes ago 1 reply      
Here's the thing that bugged me throughout the movie: once AI has progressed to the point where it can rival a human, all bets are off. Nobody needs to work again, ever -- not even to maintain or develop the AIs, since they can, by definition, do that themselves, with infinite parallelizability to boot.

What does "design" even mean in a world where everyone on earth can have an arbitrarily large army of AIs in the background designing everything in their lives, custom-tailored for them?

For this reason I don't see how the world in the movie could possibly exist. Not because the technology will never get there, but because once it does virtually all aspects of society that we take for granted go out the window. So imitating any of this design is a silly pursuit, because once you can make it there's no reason to.

I should go re-read some Kurzweil.

hooande 18 minutes ago 0 replies      
The people-focused, in-the-background style of technology design that this article describes is difficult because people are so different. We can solve this problem technologically right now, but we're very far away socially.

Writing software is difficult. Writing software that can be all things to all people is almost impossible. It's not science fiction. It just requires networking, data collection and processing on a scale that we have never seen. The only way my operating system can adapt to my unstated preferences is if it can compare my behaviors to the behaviors of billions of other users. Your OS, your lights, your tv and blender can all be personalized and almost invisible. But they have to be able to talk to one another and to every other device in your life. Not just talk but calculate, compare and adapt, in a way that makes "Big Data" seem like a big joke. This kind of design and implementation is going to be the next major technology evolution, as soon as we're ready for it to happen.

The technology in "Her" will require our society to completely rethink the concept of privacy. Either we need to abolish the NSA or just get over it. You can't tell your computer your innermost thoughts if you're worried that the government is going to use them to assassinate your character. Corporate privacy is also an issue. I need my bed to talk to my shoes which talk to my refrigerator and my web browser. Then all of that data needs to be blended together and collaboratively filtered with the data of every human on the planet. Good luck getting any two of those companies to play nice with each other, much less all of them in synchronicity.

If we want to make things smarter and less obtrusive then we need to make them more connected and collaborative. The future lies in machines that have seen everything a million times before. Then my OS can treat me like an individual and adapt to my mood. Then my phone can stay in my pocket because it knows what to alert me to and what can wait. Computers can't make these decisions until they have a mind-boggling amount of data, and we can't get that data until we get our collective act together.

mrmaddog 59 minutes ago 2 replies      
I have not yet seen "Her", but this strongly reminded me of Ender's communication with Jane from the "Ender's Game" sequels. One of the most interesting facets to their conversations is that Ender could make sub-vocal noises in order to convey his points - short clicks of his teeth and movements of his tongue - that Jane could pick up on but humans around him could not. It is the "keyboard shortcuts" of oral communication.

If "Her" is really the future to HCI, then sub-vocal communication is a definite installment as well.

kemayo 37 minutes ago 0 replies      
>>> Theo's phone in the film is just that - a handsome hinged device that looks more like an art deco cigarette case than an iPhone. He uses it far less frequently than we use our smartphones today; it's functional, but it's not ubiquitous. As an object, it's more like a nice wallet or watch. In terms of industrial design, it's an artifact from a future where gadgets don't need to scream their sophistication - a future where technology has progressed to the point that it doesn't need to look like technology.

This article really makes me think of the neo-Victorians from Neal Stephenson's Diamond Age.

...which is kind of funny, because in many ways Snow Crash exemplifies the other ("Minority Report") style of design the article talks about.

jasonwatkinspdx 46 minutes ago 1 reply      
I once read a quip in an interview with a sci-fi author. He said something like: "No one writing about the present day would spend paragraphs explaining how a light switch works." It's easy for sci-fi to fall into the trap of obsessively detailing fictional technologies, to the detriment of making a vivid setting and story.

Edit: I'm not saying that sci-fi shouldn't communicate some understanding of the future technology or shouldn't enjoy engaging in some futurology. Just that it's difficult to do in an artful way.

sourc3 5 minutes ago 0 replies      
Saw the movie this past weekend and thought it was really good. I didn't like it just because it has awesome voice-driven OSes or endless-battery-life devices, but because it portrays a current trend we are experiencing: hyper-connected loneliness.

The more people are "digitized" and tethered to their devices, the more they seek some human connection.

Don't want to ruin the movie for those who haven't seen it so I won't comment on the ending. However, I urge the HN crowd to check it out. It's one of the best movies I've seen in a while.

snowwrestler 36 minutes ago 1 reply      
Does Minority Report dominate UI design? I think it has dominated the movies' portrayal of future UI, but that is not the same thing.

I think if you look at the actual UIs being designed and sold today, their clearest entertainment ancestor is Star Trek: The Next Generation.

w-ll 17 minutes ago 1 reply      
OT: But if you get a chance, watch Black Mirror [1]. There are two seasons of three episodes each. Skip the first episode, maybe? But I liked it because that* could happen tomorrow, whereas the other shorts are set in a somewhat foreseeable future.

I feel like Spike Jonze was inspired by a few of the episodes. Her was still an amazing movie.

1. http://www.imdb.com/title/tt2085059/

jotm 13 minutes ago 1 reply      
I haven't seen the movie, so I gotta ask - do those glasses have built-in displays? 'Cause that seems like the near future, and a better one than just vocal communication...
jkw 45 minutes ago 3 replies      
Can someone explain how Minority Report dominated UI design? (serious question)
danso 40 minutes ago 1 reply      
Does anyone still re-watch TNG episodes and find the queries they do to be profoundly limited in power, aside from the feature of having the universe's knowledge to query across?

If UIs are taking cues from entertainment, they might act as a nice bridge, but they are just as likely to be stifling.

PSA: Back Up Your Shit jwz.org
146 points by mfincham  4 hours ago   70 comments top 18
steven2012 3 hours ago 6 replies      
I think that the beauty of Snapchat is that it frees you from this ridiculous notion that a text, IM, Facebook message, etc, has any value.

In my opinion, it doesn't. I also believe that feeling the need to save every single conversation you have fuels an over-inflated sense of self-worth - a sense that everything you say has value and needs to be saved.

I never, ever peruse my messages to reminisce over an old conversation. It's too much navel-gazing to suit my sense of pride. What actually matters is the actual relationship you have with a person, which is built on the BODY of IMs, messages, conversations, visits, dinners, parties, etc., that you shared with that person. Sometimes it's best to leave good conversations in the blurry past, and just remember that a certain person is funny, a great conversationalist, etc.

I'm doing the same sort of thing with Google now. I don't allow anyone I'm in a conversation with to google facts on their phone. When we talk, it's about whatever resides in our own brains, be it good, bad or ugly. The entertaining part of any conversation is the actual conversation: the passion, the humor, etc. If all we wanted to do was pass around facts, then we could forward each other URLs and be done with it. When I'm talking with someone over dinner, we're not hammering out a contract that requires precision; we're having a conversation over ideas, and as funny as it sounds, facts aren't as important as the spirit of the conversation. Unless of course you're in an argument with someone, and then that isn't very much fun, so why even bother starting the conversation in the first place?

famousactress 4 hours ago 3 replies      
Thanks for this. The SMS export from iPhone is something I've been looking for. One of the most important relationships and experiences of my entire life has been documented (trapped) in my phone and its backups ever since.

I'm looking forward to seeing how well it works, specifically whether it can pull photos/videos as well. If it doesn't yet but it wouldn't be too much trouble to add, I'd be willing to literally pay you to add that.

[Edit: Since a lot of the other comments are questioning the value of saving this stuff I figured I'd share my use cases. It turned out when I thought about it I have at least three:

1. I effectively met my wife on myspace (believe it or not a pretty nasty software bug led to our relationship) and an enormous amount of our initial friendship and courtship ended up documented there. Years ago I painstakingly clicked through for hours and copy-pasted the conversation to a text document.

2. I had a close friend die very suddenly and at a young age. My memory generally kind of stinks and I hated that there were conversations with him that I half-remembered. I went back through social media conversations with him (again, mostly on myspace) a lot in the years that followed. It helped me piece together memories that are very important to me now.

3. This past year my wife and I adopted our daughter. Our relationship with her birthmother has primarily been via SMS and the months that followed were a really exhausting and beautiful blur. It's really important to us that we're able to share that thread with our daughter someday.

In none of these cases did I see it coming that these services would end up having such valuable content in them for me. I didn't know I'd meet my wife. I'll never know when the last time I talk to someone is, and I would have never guessed that one of the most important things I'll have to give my daughter about her birth story is an SMS conversation.

So yeah, having access to this stuff is important to me. Thanks to jwz for pulling these resources together.]

borski 4 hours ago 15 replies      
"You don't just throw your letters in the trash. You might want them some day."

Maybe it's just me, but I actually /do/ throw my letters in the trash. I /do/ treat Twitter, etc. as ephemeral and passing. I don't care about saving those messages. Am I the only one?

randomdrake 3 hours ago 1 reply      
Accessing your own data and storing it is great, but there's still the matter of backing it up. jwz wrote a good guide for that as well. It's linked in the article, but not in a way that makes it obvious. Thought it would be good to mention it here:


enigmabomb 4 hours ago 0 replies      
PSA: This guy's nightclub makes a really mean meatball sandwich.

Make sure that recipe is backed up.

dkokelley 3 hours ago 0 replies      
Honestly, my Twitter feed, Facebook, and SMS records could all disappear tomorrow, and I would be OK with it. Maybe there's value in my accumulated Facebook connections and history, but most of the value today comes from current content.

Now email, that I value for archival.

anigbrowl 3 hours ago 0 replies      
These conversations aren't ephemeral and disposable, they are your life, and you want to save them forever.

Yes they are, and no I don't. I highly doubt JWZ carries a portable recorder to immortalize all his in-person conversations; I certainly don't, even though recording people (for movies) is what I do for a living. Funnily enough, far more of my important memories involve real-life conversations than exchanges on IRC/Facebook/HN.

Yeah, it's good to have a method of backing this stuff up if you do need it, e.g. for business communications or any number of other use cases. But most digital chatter is eminently disposable. I wish there were a way to have emails expire and self-destruct automatically, so that things like time-sensitive sales offers would quietly vanish once the actionable date had passed, unless I made some special effort to retain them.

sturmeh 2 hours ago 0 replies      
Chat history serves one purpose for me: the file size quantifies how much time I spend talking to a particular person, and I use that to sort people on my contact list.
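A minimal sketch of that sorting trick, assuming logs live under one directory with either a single file or a per-contact subdirectory per chat partner (the actual layout varies by client):

```python
import os

def contacts_by_log_size(log_dir):
    """Rank chat partners by total log size, largest first.

    Assumes each top-level entry under log_dir (file or subdirectory)
    corresponds to one contact -- a stand-in for however your client
    actually lays out its logs.
    """
    totals = {}
    for entry in os.scandir(log_dir):
        if entry.is_file():
            totals[entry.name] = entry.stat().st_size
        elif entry.is_dir():
            # Sum all log files for this contact.
            totals[entry.name] = sum(
                f.stat().st_size for f in os.scandir(entry.path) if f.is_file()
            )
    return sorted(totals, key=totals.get, reverse=True)
```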
dhughes 2 hours ago 0 replies      
Pictures are the worst for backing up - actually, no, backing up someone else's pictures is worse.

Parents, for example: my mom takes a lot of pictures she wants to keep; I take lots of pictures I don't care about.

Semi-wheneverly, when I manage to get the card from the parents' camera or cellphone to back up, it's usually a mess of "I backed up 63% of these, so which ones are new?" Is IMG0003.JPG the same as the IMG0003.JPG I saved already? Wait, no - one is 2MB and the other is 3.25MB.

Meld helps, but it's the same question: what do I have, what is new, and what has the same name but different contents - which I only happen to notice due to the file size.

So I end up dumping it all onto something or multiple somethings and swear I'll figure it out next time. Goto step 1.
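That "is IMG0003.JPG the same file I saved already?" question can be answered by hashing file contents instead of trusting names and sizes. A rough sketch, assuming one archive directory and one freshly mounted card directory (the directory layout is hypothetical):

```python
import hashlib
import os
from typing import List, Tuple

def content_hash(path, chunk_size=1 << 20):
    """SHA-256 of a file's bytes, read in chunks so large photos don't
    have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def split_new_and_seen(card_dir, archive_dir) -> Tuple[List[str], List[str]]:
    """Return (new, seen): files on the camera card whose *content* is not /
    is already somewhere in the archive, regardless of filename."""
    seen_hashes = set()
    for root, _dirs, files in os.walk(archive_dir):
        for name in files:
            seen_hashes.add(content_hash(os.path.join(root, name)))
    new, seen = [], []
    for root, _dirs, files in os.walk(card_dir):
        for name in files:
            path = os.path.join(root, name)
            (seen if content_hash(path) in seen_hashes else new).append(path)
    return new, seen
```

Two same-named files with different hashes are genuinely different shots; same hash under different names is a duplicate you can skip.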

mmanfrin 2 hours ago 2 replies      
PSA: It is not 1999, please don't use neon green text on a black background as your color scheme for text.
codva 4 hours ago 2 replies      
I delete all email after 90 days, unless I explicitly moved it to an archive folder.

I've never even thought about saving IMs, texts, Twitter, etc. Civilization has survived a very long time without a written record of every conversation ever. It will continue to do so.
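For what it's worth, that 90-day rule is scriptable over IMAP. A hedged sketch - host, credentials, and mailbox name are placeholders, and the archive folder is protected simply by never passing it in:

```python
import datetime
import imaplib

def imap_cutoff(days=90, today=None):
    """IMAP-format date string (e.g. '16-Oct-2013') for 'older than N days'."""
    today = today or datetime.date.today()
    return (today - datetime.timedelta(days=days)).strftime("%d-%b-%Y")

def expire_old_mail(host, user, password, mailbox="INBOX", days=90):
    """Flag everything in `mailbox` older than the cutoff as deleted.

    Connection details are illustrative; anything you moved to an archive
    folder is untouched because this only ever selects `mailbox`.
    """
    m = imaplib.IMAP4_SSL(host)
    m.login(user, password)
    m.select(mailbox)
    _typ, data = m.search(None, "BEFORE", imap_cutoff(days))
    for num in data[0].split():
        m.store(num, "+FLAGS", "\\Deleted")
    m.expunge()
    m.logout()
```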

gwu78 1 hour ago 0 replies      
"Remember: if it's not on a drive that is in your physical possession, it's not really yours."

So, if we store our data in "the cloud", it's not really ours?

mathrawka 3 hours ago 1 reply      
Totally off topic, but: I can read the site fine, yet if I switch back to a white site (like HN, or just staring at a wall), my eyes still see the lines of the site for a while.

It physically affects my vision for a few minutes, albeit just a little bit. Is this normal?

ballard 2 hours ago 0 replies      
Back up personal stuff and code to Tarsnap. Videos would be too expensive, but downsampled home videos might be worth saving too.
flipstewart 4 hours ago 0 replies      
I do throw away letters. I'd rather not live in the past or cling to ephemera for emotional reasons, thank you.
Aloha 4 hours ago 0 replies      
It drives me nuts that Pidgin and Adium use different logging formats - it's made switching from Windows to OS X more painful, as I still use Finch on Linux and use file syncing to sync logs and config files across platforms.
X-Istence 3 hours ago 0 replies      
The SMS backup tool for iPhone doesn't seem to work for me. I have encrypted backups turned on for my iPhone, will this not work because of that?
qwerta 3 hours ago 0 replies      
On Android you just mount the phone partition and query the SQLite tables.
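As a sketch of what that looks like against a copied-off mmssms.db: the `sms` table and its address/date/body/type columns match the stock telephony provider on older Android, but the path and schema vary by version and vendor, so treat this as an assumption to verify on your own device:

```python
import sqlite3

def dump_sms(db_path):
    """Read messages out of a local copy of Android's mmssms.db.

    Assumes the stock telephony schema: an `sms` table with address,
    date, body, and type columns, where type 2 marks sent messages.
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT address, date, body, type FROM sms ORDER BY date"
    ).fetchall()
    conn.close()
    return [
        {"address": addr, "date": date, "body": body,
         "direction": "sent" if typ == 2 else "received"}
        for addr, date, body, typ in rows
    ]
```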
Google to Acquire Nest investor.google.com
534 points by coloneltcb  7 hours ago   509 comments top 96
MrZongle2 7 hours ago 14 replies      
Am I the only one who thought "well, good for the Nest guys" followed by "too bad, it looked like a good product"?
nostromo 7 hours ago 4 replies      
I'm happy for Nest but sad for myself, given the smoke detector was my Christmas gift last year.

Previously I would look to YouTube as an acquisition done right. But after the kerfuffle around G+ integration, I no longer feel that way.

If I have to sign in with my G+ account to manage my smoke alarm, it's going directly into the rubbish bin.

If Apple had bought Nest, on the other hand, I'd feel very differently.

jmduke 7 hours ago 7 replies      
The cynic in me shudders at the possibility of Google adding even more real-world data like temperature preference and lifestyle factors to their massive data portfolio.

The idealist in me is incredibly excited at the possibility of Nest's fundamental strengths being bolstered by Google's coffers.

tghw 7 hours ago 3 replies      
Given Google's history with acquisitions, as a Nest owner, I'm not thrilled to hear this. Hopefully they let it run as independently as possible.
minimax 7 hours ago 5 replies      
"Google Inc. (NASDAQ: GOOG) announced today that it has entered into an agreement to buy Nest Labs, Inc. for $3.2 billion in cash."

Can that be right? $3.2 billion? That's a crazy huge sum.

toddmorey 7 hours ago 1 reply      
Please, please Google... don't spam me about starting a Google+ account when I'm just trying to turn the temp down a few degrees. I beg you.
bradly 7 hours ago 12 replies      
Seriously not trying to troll, but could someone help explain where Google might see 3.2 billion dollars of value in a company that pretty much just sells a successful thermostat?
badclient 7 hours ago 2 replies      
Someone at google is thinking of ways to use Google Plus to fuck up Nest.
sidcool 4 minutes ago 0 replies      
Isn't Google trying to spread too thin? I feel so.
k-mcgrady 7 hours ago 4 replies      
Very surprised Apple didn't purchase this. Headed by Tony Fadell, brilliant design - I thought it would fit and be a nice diversification for them. However, it will be very interesting to see what Google does with this. If I put away the cynic in me for a moment: the more data Google has on me, the better their services are for me. I could imagine the Nest thermostat integrating with Google Now and the location data Google has on me, for example, to better know my comings and goings.
molecule 7 hours ago 1 reply      
Coming Soon: Google+ Integration w/ Your Nest Thermostat
uptown 7 hours ago 1 reply      
Smart buy in my opinion. The connected home has enormous untapped growth potential, and they just acquired a company that managed to take something as mundane as a thermostat and make it cool.
suprgeek 7 hours ago 1 reply      
$3.2 BILLION All-Cash offer.

Is that Color-level valuation for a Thermostat company?

Or more like Instagram-level... Holy Moly!

ChuckMcM 7 hours ago 2 replies      
It's awesome for the folks at Nest, but the first thought through my head was this : I am so starting a smart lawn sprinkler controller company!
callmeed 7 hours ago 0 replies      
FYI Google Ventures participated in 2 of Nest's funding rounds, including leading their series C last year.

I can't say exactly if/how this affects the acquisition but it does feel like some of the money is just traveling in a circle.

outside1234 6 hours ago 0 replies      
I'm really surprised. I thought Nest was Tony Fadell's vehicle to make it back into Apple.

I am going to sit here with a bemused expression on my face thinking of Tony Fadell at data driven Google. I put it 6 months before he is out of there.

ck2 7 hours ago 1 reply      
Do they own patents or something?

Because with a $10 android chip, any Chinese plant could churn out Nest clones in a few months.

sixothree 7 hours ago 0 replies      
I don't think you can improve a google product without sharing more data.
jcampbell1 6 hours ago 1 reply      
The only possible way this acquisition makes sense is that Nest has some truly amazing product in the pipeline. It is impossible to justify $3.2 billion for a team or for thermostats. My guess is they wanted to get acquired because some future product makes sense with Google's brand and go-to-market ability behind it.
hoopism 7 hours ago 0 replies      
The Nest UI (web and mobile) is laughably horrible. It only has about two weeks of historical data. It doesn't record by what means a temp was adjusted (web? phone? Nest?). It doesn't do any long-term trending or historical analysis. Charts overlay the clicking points so you can only see some data. Doing my own analysis, it ended up using more energy than my manual changes, so I don't even use auto-schedule.
jffry 7 hours ago 2 replies      
Coming soon: audio ads, broadcast throughout your home via your Nest Protect. Since it knows if you're moving, it can wait and play the ad when it knows you're there, allowing Google to charge a higher cost per impression.
granttimmerman 16 minutes ago 0 replies      
Well, I guess that's one reason/excuse why Nest turned me down a summer internship when I applied last November.
ThomPete 7 hours ago 2 replies      
For once I can understand the pricing and think its in fact cheap.

1) Nest has a realistic potential to be in "every" home if not the actual hardware then the underlying intelligence.

2) Nest has a business model that works. People are paying for it and they love it.

3) Combining Google data with Nest access points, I can only imagine the awesome things they can come up with and improve. Perhaps the first glimpse at an intelligent grid system.

4) $3.2 billion is a steal, and given that it's cash, it's a good sale.

nodata 7 hours ago 4 replies      
Oh, come on, this has to stop. How much would it have cost Google to build Nest from scratch? Not 3.2 billion dollars.
mesozoic 7 hours ago 0 replies      
The real winners are DST Global and the others who bought round D and got what looks like a 50% return on $150 million in about 2 weeks.
pinaceae 6 hours ago 1 reply      
well, stick a fork in Nest then.

wonder how bad their financials were if they're already selling.

fucking SnapChat refused a FB offer, but Nest just rolls over and dies.

jusben1369 7 hours ago 0 replies      
I think folks are too focused here on Google caring about the temperature in your house. I suspect Nest's hw/sw/design chops will come in very handy when putting together say the experience of a self driving car?
bluthru 7 hours ago 0 replies      
Ugh, our industry needs more owners, not less.
agperson 7 hours ago 1 reply      
Argh, I was just about to buy some Nest Protects, but I don't want Google sensors built into my house.
mikegreen 7 hours ago 1 reply      
$3.2b makes me think there has to be some awesome technology that they own or have in the pipeline, as I can't get excited about the thermostat. It is a great idea, and I've played with one a bit, but it isn't solving any large HVAC issues - the thermostat is simply an on/off switch for the real working hardware in your basement/outside.

So, what else do they do that makes Google want to pay $3+ billion (pinkie to lip) for Nest?

wil421 6 hours ago 0 replies      
So will they shut it down like they did when they bought Sparrow or Bump?

I thought Sparrow was great for iOS and OS X. Google's Gmail app is not as stable.

Lately I've been concerned about Google and privacy; this may make me change my mind and just buy a competitor's version of a "Nest". The price is what has been keeping me away.

throwaway420 6 hours ago 0 replies      
I wish we lived in a world where Google can be trusted with all of this personal data because they could do lots of interesting things to make lives easier and create more wealth for everybody.

But I have every expectation that this data, if not right away, could someday be sent to the criminal gangs that wear suits and ties that are looking for new targets to loot and plunder.

Just knowing the temperature in your house, cross-referenced with other data about you as well as electricity bills, could be enough for a future police agency to argue probable cause that you're running a drug growing operation and have them do all kinds of raids and intrusive searches and harassment.

This shouldn't be an issue, but Google has earned absolutely zero trust with its obvious non-denials of NSA activity and other negative conduct.

badman_ting 7 hours ago 1 reply      
Google offers you 3 billi, you take it. That's what I always say.
WWKong 33 minutes ago 0 replies      
This proves that the market for "better mouse trap" is huge. Look around the house. Pick something. Anything. Create a better version.
cmos 7 hours ago 0 replies      
Nest has built an expensive thermostat that is engaging. It certainly doesn't take $80 Million to do that. So perhaps they also spent their money developing a cheaper version after proving that people like it and more importantly proving it can save money and eventually pay for itself.

I would spend a ton of money putting everything on one chip, getting the cost down to $5 or $10 for the same user experience. Maybe put in a cheaper display if needed. Now this wireless device in the house can pay itself off in very short amount of time, making it a super easy decision for consumers. The current offerings of thermostats are clunky and have horrible user interfaces that require modes and buttons and such to program.. forget about daylight savings time!

If people can buy Nest functionality for $50 then Google will have a solid foothold in people's homes. From that they can sell cameras that show up through your Google TV device and get into the security market.

The dial on Nest is a universal interface, as the original iPod has shown us - perfect for scrolling through menus and long lists and for changing volume or choosing music. Might as well put a microphone in so people can ask the internet things.

Still not quite sure it's worth $3.2Billion.. perhaps there was a bidding war?

achy 7 hours ago 0 replies      
This might be a minority opinion, but I think that more than home data collection (which they probably have a lot of already from phone/computer usage), this is about acquiring a company that has cachet and a 'cool' factor within both the design world and, more importantly, the market sector of young, tech-inclined home owners - a hugely lucrative market.
sdoowpilihp 7 hours ago 0 replies      
This will give Google access to a very idiosyncratic data set no other major tech company has. It will be interesting to watch how they influence the product line over the coming years, as well as what they do in terms of integration with other google services.
noonespecial 6 hours ago 0 replies      
In a not-so-strange way this fits well with their recent robotics acquisitions. It really feels like lightning could strike a second time for google so long as they don't middle manage themselves out of it.
smackfu 7 hours ago 2 replies      
Well, that should resolve their patent issues.
outside1234 6 hours ago 0 replies      
When do we start getting ads on the Thermostat? Or ads on our browser asking us why we are spending so much time at home?
pessimizer 7 hours ago 2 replies      
I don't know a lot about Nest, but I understand that requests to the thermostat are made from the phone app indirectly through a central server?

What possible consumer benefit could that indirection have?

edit: The reason I ask is that I'm trying to figure out why an exact copy of Nest - except one that didn't leak data and didn't require other people's servers to work - wouldn't eat Nest's lunch.

bitwize 6 hours ago 0 replies      
Oh great.

You have set a temperature of 65F. Share your climate preferences with your Google+ circles?

belgianguy 7 hours ago 1 reply      
smart phone > smart car > smart house > ... ? ... > smart human

I for one see great areas of improvement and possibilities inside the common house. Not that I'd wire up every appliance with a Wi-Fi sensor, but some data can be useful, if only for safety (falling asleep with the oven on: here a sensor could cut the power under a certain condition, or trigger the fire alarm, or ring up the firefighters).

Or you left the house and realise you left some of the lights on, just tell the in-car console to open the domotics app, and tell it to shut the lights off.

And the smart fridge, if RFIDs or its successor ever become cheap enough to be able to be printed on a milk carton, you could have your fridge's contents with you wherever you go, just open the app, look what's about to go off, or have it remind you to get extra supplies when you're near a point of interest.

If they'll let me I'll hook up a Nagios server to my coffee machine.

HorizonXP 4 hours ago 0 replies      
All I can say is that KPo is a beast, and he's totally why Google paid such a large sum of money for Nest.
ericcumbee 5 hours ago 0 replies      
On one hand, Google having more information is unsettling, but at the same time it could lead to some interesting possibilities. In the summer my Nest kicks the AC on at about 4:45, assuming that I will get home sometime around 5pm. But say I go after work to have a beer or drink coffee and stay out until 8pm. Nest knowing where my phone is means it could be smart enough to not turn my AC on until I am headed home.
callahad 7 hours ago 0 replies      
Time to add thermostats and smoke detectors to all those blog posts about "de-Googling" your life...
loganu 6 hours ago 0 replies      
It's not a talent acquisition. It's not a move towards more government monitoring. It's not meant to increase G+ sign-ups or sell ads for more efficient furnaces.

It's a play into the growing "connected product" / smart home segment. When you combine google's resources, their current software and hardware products, and the type of products Nest is likely to move on to, you get some strong synergies that Google would be dumb to ignore. A little more discussion on where things could go from here (that doesn't touch on the NSA or government subpoenas) would be really refreshing.

cocoflunchy 7 hours ago 1 reply      
$3.2 billion?
Fogest 5 hours ago 0 replies      
Nest has a post on their blog about this as well if you're interested: https://nest.com/blog/2014/01/13/nest-google-and-you/

I am not sure why people always say Google makes companies suck. This may be the case sometimes, but it is that way for most companies who buy things if they try to make changes. Just look at YouTube. It has become quite successful, despite some poor decisions.

matmann2001 7 hours ago 0 replies      
Great. Now I'll have to listen to an ad before my fire alarm goes off.
figital 7 hours ago 0 replies      
There's been a product on the market for several years called "TED" ... something seemed fishy when Google killed the data monitoring service ... http://www.google.com/powermeter/. Still ... you have the power (so to speak) to meter your own appliances. Yes you.

Mine is still in the box :(.

keithg 2 hours ago 0 replies      
Finally, a tech company with enough good sense to realize that if you get a buyout offer that starts with "b" and ends with "illions", you take the money!
lowlevel 7 hours ago 0 replies      
I have a feeling I'll be ripping the Nest off of my wall in no time.
FiddlerClamp 6 hours ago 0 replies      
I'm relieved that Apple didn't buy them. In that case, they'd have certainly either a) discontinued the Android version, or b) created a cross-platform app like iTunes.
dengar007 7 hours ago 1 reply      
Now you can +1 your temperature!
dorfsmay 4 hours ago 0 replies      
So that means you'll have to have a Google+ account, be older than 13, and use an interface that uses a protocol that replaces SNMP but is proprietary to Google, just to change the temperature?
protomyth 7 hours ago 0 replies      
I was hoping Honeywell would buy them and experience a bit (at the division level) of a NeXT-Apple replay. I really am not sure about buying one, given the implications of the data from this device being given to advertisers or used in advertising analytics.
ehfeng 7 hours ago 1 reply      
Google should have negotiated for a $3.141 billion price.
andrewhillman 7 hours ago 0 replies      
Wow. This weekend I was talking about Nest with my dad and I said Google will end up buying them. Interesting to see that it's an all cash transaction. I guess all cash makes the deal cheaper for Google.
fudgy73 3 hours ago 1 reply      
$3.2 BILLION?! Was Nest Labs making a profit? Seemed to me like they made cool devices that no one bought.
forgotAgain 6 hours ago 0 replies      
Google with (new) data sources in my house: hmm.
mikeg8 7 hours ago 0 replies      
"To change the temperature of your master bedroom, please create a google+ account."
elwell 1 hour ago 0 replies      
"OK Google, flush the toilet."
rhythmvs 5 hours ago 0 replies      
Well, then we should have a better look at http://www.loxone.com/ and maybe keep our actual real-life data out of black-box data centers.
dpeck 7 hours ago 0 replies      
Woohoo, I look forward to it disappearing into Mountain View and only being used to control the HVAC at HQ and be shown off to board members on their quarterly tours.
lauradhamilton 7 hours ago 0 replies      
As a Nest owner this creeps me out.

Google should NOT have always-on sensors of any type.

benwerd 6 hours ago 0 replies      
Do other large tech companies not realize what Google's up to, are they unable to try to compete, or do they just not care?
deeviant 7 hours ago 0 replies      
So perhaps being able to dissect nearly every aspect of a user's online life is not enough for Google; now they want to offer a range of devices that can start recording the minutiae of everyday home life.
ivv 5 hours ago 0 replies      
Theoretically, Google could now be remotely adjusting your room temperature to maximize (or minimize) the effectiveness of advertising. [1]

[1] http://anzmac.info/conference/2013/bestpapers/anzmac2013-364...

angersock 7 hours ago 1 reply      
"We've noticed that you're currently freezing your toes off...would you like to search instead for 'HVAC'?"

Fucking creepy.

vikas5678 7 hours ago 0 replies      
With all the data google already has on me, as a new homeowner, I think I'm terrified of buying this product. What kind of profile would this allow Google to build on individuals?
Cub3 5 hours ago 0 replies      
Maybe we'll finally see Android @ Home become a reality


BIair 6 hours ago 0 replies      
On the face, appears to be a huge valuation for an overpriced thermostat and smoke alarm. But combined with their recent robotic acquisitions I wonder if this signals a move beyond the smart phone, and smart car, to the smart home? If Google engineers want to build the Star Trek computer, surely they want to build the Jetsons home.
tsenkov 7 hours ago 0 replies      
Was there anything similar (as an amount of cash in the deal) in the last 10 years? Probably Microsoft's acquisition of Nokia's phone division? Is there a list somewhere? That would be interesting to see.
jerdavis 6 hours ago 0 replies      
This doesn't make a whole lot of sense to me. Besides the fact that Nest is a great product (but so are Triscuits), why would Google buy them? The Nest _does_ know when you are home. If Google starts to use Nest data to target ads at me, I'm going to rip it out, set it on fire, and mail it to Larry.
oscargrouch 6 hours ago 0 replies      
a sign that google is horny for the "internet of things" :)
jmuguy 7 hours ago 0 replies      
Hmm, I wonder if this will make a security system (like canary http://www.indiegogo.com/projects/canary-the-first-smart-hom...) more or less likely. I assumed that would be the next product from Nest.
pirateking 7 hours ago 0 replies      
I look forward to well designed Nest competitors that support connections and logging to user specified endpoints.
14th 7 hours ago 0 replies      
... I'm glad I haven't bought one yet. Now I need to figure out how to build an open source one.
jpswade 6 hours ago 0 replies      

Now there's another Google product that will never hit the UK.

jd007 7 hours ago 0 replies      
what a great way to force g+ on users even more. "too cold and want to increase your temperature? add 3 friends to your circles!"
jyz 6 hours ago 0 replies      
3.2 billion is an insanely large number. But then when you step back and see Snapchat being offered a similar amount, you kind of feel sorry for Nest for selling.
regnum 4 hours ago 0 replies      
Good thing the Google founders have grown up and don't need adult supervision anymore.
michalu 5 hours ago 0 replies      
Yes, now we can have even more relevant ads!!
pgcosta 6 hours ago 0 replies      
I understand why this is a good acquisition, but still, 3.2B... The valuation was 'only' 2B. That escalated a lot!
skizm 4 hours ago 0 replies      
You must now login with your G+ account to turn on your thermostat or smoke detector.
baweaver 7 hours ago 0 replies      
So YouTube didn't work as a G+ promoter, so now we'll have to log into our houses with it. Well played Google, well played.
baweaver 7 hours ago 0 replies      
So will I need to sign into my house with a Google+ account now?
andyman1080 7 hours ago 0 replies      
Is it known how much equity Fadell had? Is he a billionaire now?
spiderPig 7 hours ago 0 replies      
Nice acquisition, but $3.2 bn?? Really? Just unbelievable.
jgalt212 6 hours ago 0 replies      
oh, no. there's definitely no bubble.
aaronpeck 7 hours ago 1 reply      
Remember Sparrow?
mmuro 7 hours ago 0 replies      
Luke-Jr 5 hours ago 0 replies      
Maybe Google will push Nest to comply with the GPL terms? They're currently missing the (required by GPLv2) build/install stuff from their source code releases... I've been unable to get root on mine so far :(
subndes 6 hours ago 0 replies      
Google now able to collect more data points about your home too!
ffrryuu 6 hours ago 0 replies      
Bailing out the VC already?
A command-line murder mystery github.com
68 points by hodgesmr  4 hours ago   22 comments top 8
NAFV_P 1 hour ago 0 replies      
A lot of the code that I write looks like a crime-scene.
lost-theory 35 minutes ago 0 replies      
Very fun and unique. I enjoyed it a lot! My solution (spoiler alert): https://gist.github.com/lost-theory/8412918
GuiA 3 hours ago 1 reply      
Hah, I had a similar idea a few months back [0]. Of course, being the lazy person that I am, I tweeted about it, wrote about it in my ideas notebook, and never implemented anything.

Checking this out right now!

[0] https://twitter.com/gardaud/status/402608968029057027

schoen 3 hours ago 2 replies      
I found it frustrating that gur vagraqrq fbyhgvba cngu qbrfa'g gryy lbh qverpgyl jura lbh'er qbar naq lbh unir gb vasre gung sbe lbhefrys.

It's a really neat idea and a cool way to give people some command-line practice.

chrismorgan 3 hours ago 1 reply      
I take it using pagers would be considered cheating, under the categorisation of "text editors"?
quantumpotato_ 3 hours ago 2 replies      
Alicia Fuentes F 48 Walton Street, line 433

There's no Walton Street under streets. Same for a few others I checked. What gives?

isxek 3 hours ago 1 reply      
Are the executables listed in https://github.com/bmatzelle/gow/wiki/executables_list sufficient for use with this?
jameshsi 2 hours ago 0 replies      
thanks for sharing. this was entertaining and reminded me of http://vim-adventures.com
Taking PHP Seriously infoq.com
88 points by kmavm  5 hours ago   45 comments top 11
flebron 3 hours ago 2 replies      
I really liked this talk.

PHP was my first language when I was starting out professionally, and before I went into university and learned "proper programming". Back in 2007-2008, I remember the mess it was, with the internals list being a permanent struggle to implement anything by consensus (no lambdas or JS array notation because it wasn't easy to google, for example), and it really did look like a dead end. The internal source code of PHP was often a mess of macros everywhere, and the whole PHP6 unicode fiasco really did paint a grim picture of its future.

Facebook seems to have given it a bit of fresh air, implementing some pretty interesting stuff (Hindley-Milner with subclasses, for instance), and it really seems like the "feel" of programming in PHP has changed. I'm not going to say "Screw C++ and Haskell, _this_ is a serious language!", but on the other hand I feel I can say with a straight face to someone who is starting programming, "You could check this language out", without a guilty conscience that I'll be ruining their mind.

I'm unsure of PHP's future - if it'll be tied to Facebook (and thus Facebook's future, which I am equally unsure of), for example - but as of now, it seems to be a reasonable, if idiosyncratic, language.

So yeah, good talk :)

nikcub 4 hours ago 1 reply      
slideshare link, for those who want to avoid the InfoQ marketing signup maze:


SCdF 3 hours ago 1 reply      
At my job I maintain some PHP that we are slowly removing (and replacing with Scala for the backend and Angular for the frontend). My total experience with PHP prior was writing a tiny wordpress plugin a decade ago that added some anti-spam stuff to comments.

And you know, PHP, it's not so bad. It gets the job done, it deploys instantly (the rest of our code is mostly Scala. Oh. My. God. Just. Build. Already). It can be pretty haphazard, you can do some funky stuff with it (the ability to pass by reference on integers and other 'primitives' caught me by surprise) and I wouldn't pick it for new code, but it's certainly not the steaming heap of evil that people make it out to be.

Maybe our code is just awesome, idk.

camus2 3 hours ago 1 reply      
Interesting talk , curious about Hack.

PHP is a horrible language with great libraries and a few good features. A lot of successful projects are written in PHP. The execution model is good enough for some types of apps (blogs, e-shops, ...).

As a PHP developer, I'm betting on Python and NodeJS for the future; I don't believe PHP has any real future outside a bunch of popular CMSes. While PHP has excellent libraries (Symfony, Doctrine, ...), Python has very good ones too, and it IS truly multipurpose.

When I first came to Python I did not like its OOP model (no interfaces, ...), but Python metaprogramming features are unique and very interesting to learn.

The biggest problem with PHP is its core developers: adding some "feature" is not fixing the language. Removing the bad parts should be the focus of the next PHP versions. It is not.

anuraj 1 hour ago 1 reply      
I came to PHP after a long stint with C, C++, Objective C, Java and Javascript - and I was productive from day 1. I have programmed extensively, and have come across few quirks, but never got stuck, nor is my code spaghetti. I do not use frameworks where they can be avoided and write extremely simple code and avoid OOP for scripting. I would not do OOP with a language without strong typing. My take is that for short-lived HTTP requests which need to spew out some JSON, preferably (favor headless coding for the frontend), PHP is the apt tool. For heavy lifting, use a strongly typed language which can easily handle complexity.
girvo 3 hours ago 1 reply      

I just spent the last week working on a new language that... turns out, is basically Facebook's new language: Hack. Although, it does seem they've not released it to the public, so I suppose I'll keep working on it, heh.

My new language steals PHP's "shared-nothing", "bootstrap from nothing at request" but steals Typescript (and Hack's, apparently) gradual typing system, along with a saner, less ridiculous StdLib.

martin_ 3 hours ago 1 reply      
Hack sounds pretty awesome; it's strange I haven't seen mention of it alongside HipHop before. Also, he mentioned "hacklang.org" but that seems completely different.
jimmytidey 4 hours ago 2 replies      
I love that he says "if you are a person like me, who is a c++ compiler, I mean c++ person" (16.20)
mitchtbaum 3 hours ago 0 replies      
Whoah! Thank you for linking to this video series. Another video from this Strange Loop 2013[0] conference is on Symmetry in Programming Language Design[1]. I am finding it to be way more up my alley.

[0]: http://www.infoq.com/strange-loop-2013/

[1]: http://www.infoq.com/presentations/noether

Edit: Posted to https://news.ycombinator.com/item?id=7054815

chronomex 4 hours ago 4 replies      
Is such a thing even possible?
jbeja 4 hours ago 0 replies      
Yes, more PHP please!.
How I Built a Raspberry Pi Tablet makezine.com
8 points by nkvl  32 minutes ago   1 comment top
rhgraysonii 0 minutes ago 0 replies      
For those interested in the actual build process and materials rather than just a writeup, here it is from his personal site: http://mkcastor.com/2014/01/02/pipad-build/
The NASA Studies on Napping priceonomics.com
54 points by priyadarshy  4 hours ago   13 comments top 5
vacri 1 hour ago 1 reply      
But what data is this conclusion based on? One important study by NASA for the most part.

This is bollocks, for the most part. There have been plenty of studies done on naps, powernaps, short term sleeps. When I was going through uni in the 90s, the motor vehicle registry agency in my state was doing a lot of research in the area of powernaps. Later on I spent some time in sleep medicine myself, and studies here and there would filter through. This is not a notion reliant on one study with a small n.

I would have thought that a startup founder who bases his company on the premise of napping would have read more on the topic.

jsaxton86 2 hours ago 2 replies      
This is interesting, but I wouldn't consider it a conclusive study. According to the original paper, the sample size is only 21 (12 napping pilots, 9 no-rest pilots).
bryanlarsen 3 hours ago 2 replies      
The easy way to avoid nap grogginess is to limit them to under twenty minutes.
feniv 3 hours ago 1 reply      
The unit of performance in these studies is reaction time, which isn't a useful measure in a field like programming or design.
bane 1 hour ago 0 replies      
I've found that often when I feel groggy during the day, taking time out to do 5 to 10 minutes of deep twisting back stretches will revive me almost as well as a power nap.
Great Companies Dont Have an Exit Strategy recode.net
25 points by bradleyland  3 hours ago   8 comments top 6
gjmulhol 2 hours ago 2 replies      
This is a fundamental lack of understanding about what an exit strategy is. It is not about the CEO exiting. Private companies are a very illiquid investment. An exit strategy is something that exists to help inform investors about how they will eventually realize their gains. If a company intends to stay private forever and simply pay a dividend to investors, it may not be interesting for certain types of investors.

By definition, therefore, an IPO is an exit. It is an opportunity for an investor to get his or her money out of your particular company. Most great companies, depending on how you define them, actually have gone through some sort of exit event as defined this way, and all savvy investors will want to know how they can realize their gains if you are successful.

cfontes 39 minutes ago 0 replies      
Don't get me wrong, but I disagree badly with the way this was written, mainly the part about selling a company.

Sometimes there is no option other than selling your company.

You can become outnumbered on the board and become a one-man army due to different views of the future or other complications (try that... it's fun like hell). You can work for your company for 40 years and realise YOU are not needed anymore and should sell it and enjoy your last years.

And so many other variations on that "Why?" question. I found that statement aimed at making people who sold their company look like they gave up, and more often than not, that is not the case.

Thinking like that is just too simple.

aaronbrethorst 1 hour ago 0 replies      
"An IPO is a means for a company to bring in cash, not the end of the game."

Nice sentiment, but I'm guessing your early investors, as they look to close out the funds that made up the bulk of your Series A or B, see things a bit differently.

001sky 1 hour ago 0 replies      
Great Companies Dont Have an Exit Strategy

Nope. But they have a liquidity strategy. Liquidity is the "exit" (path) for investors and is critical to retention of staff. Frankly the title seems more to be linkbait than insight.

gesman 2 hours ago 0 replies      
Great exits are usually made by the companies who are focused on the great product or service, not on an "exit" strategy.
avighnay 48 minutes ago 0 replies      
Very good article. It is not possible to build a product if cashing out is the first thing in the CEO's mind.

We always hear and talk about only two exits: IPO (getting rare now) and Acquisition (the $B glamour). However, there is also a third valid way of investor exit, which is a 'Management Buyout' (never heard in tech circles).


A 30 minute introduction to Rust steveklabnik.com
242 points by steveklabnik  11 hours ago   143 comments top 18
ChuckMcM 8 hours ago 4 replies      
This is an excellent summary, Steve; it also points out one of the challenges of 'system' languages, which is the requirement for 'unsafe.'

One of the first things I did when I started working on Oak (which became Java) was to see about writing an OS in it. The idea being that if you could write an OS in a 'safe' language then you could have a more reliable OS. But we were unable to write it completely in Oak/Java, and that led to some interesting discussions about what might be the minimum set of 'unsafe' actions required by a systems language.

Sadly we did not get to explore that very much, although I did pass it on as a possible thesis topic to some interns who came through Sun at the time. I'd be interested in your experience with what actions require 'unsafe' and if you have seen a canonical set that might point toward a process to get to a 'safe' OS.

tptacek 11 hours ago 6 replies      
This isn't so much an introduction to Rust as it is an introduction to Rust's concurrency model.

The example of returning a reference to an automatic variable isn't super compelling, since every competent C/C++ programmer knows not to do it. That bug does pop up every once in awhile, but almost always in the context of a function that returns a reference to one of many different possible variables depending on some condition in the function.

Does Rust really call its threads "green threads"? Green threads have a weird reputation.

Copy like "this allows you to, well, read and write the data" could be tightened up; it's an attempt at conversational style that doesn't add much. "That doesn't seem too hard, right?" is another example of the same thing.

How much of Rust concurrency is this covering? How much of its memory model? Does the whole concept of Rust concurrency and memory protection boil down to "the language provides an 'unsafe', and then people write libraries to do things with it"?

minimax 10 hours ago 4 replies      
I think we can be a little bit more charitable towards C++. Modern compilers will let you know if you try to do something as obviously incorrect as returning a pointer to a stack variable.

    $ cat > foo.cpp <<EOF    > int *dangling(void)    > {    >     int i = 1234;    >     return &i;    > }    > EOF        $ clang++ -Werror -c foo.cpp    foo.cpp:4:13: error: address of stack memory associated with local variable 'i'          returned [-Werror,-Wreturn-stack-address]        return &i;                ^    1 error generated.

pcwalton 11 hours ago 1 reply      
I like this tutorial because it dives straight into the most unique/unfamiliar parts of Rust (ownership/references) and gets them out of the way. It's a "learn the hard way"-style tutorial, and I think that's the best approach. Once you learn how ownership and borrowing work, along with ARCs and concurrency, everything else is really simple and just naturally falls out.
noelwelsh 7 hours ago 2 replies      
The focus on C++ as point of comparison is understandable given Mozilla's background, but in Internet land most systems software runs on the JVM, and is written in Java, or increasingly, Scala (see LinkedIn and Twitter, for example).

The issues of memory layout and the like come up here, and unlike Rust the JVM doesn't give much control of this aspect. See Martin Thompson's blog for an example of someone very concerned with issues of performance on the JVM (http://mechanical-sympathy.blogspot.co.uk/) I believe Rust could see a lot of adoption within this community as a "better" Scala -- a modern high-level language that allows dropping down to bit-twiddling when performance is an issue. It needs higher kinded types before it will work for me, but I hear that is on the road-map.

BTW, I've read a few Rust tutorials and they all fail for me in the same way: too much waffle and not enough getting down to the details. I understand the difference between stack allocation, reference counting, and GC, I get why shared mutable state is a bad idea, etc. What I want is a short document laying out the knobs Rust provides (mutable vs immutable, ownership, allocation) and how I can twiddle said knobs.

samth 10 hours ago 2 replies      
I think the emphasis on "unsafe" isn't helpful. As far as I can tell, the only thing that "unsafe" is enabling is that Arc and RWArc are written in Rust rather than in C in the runtime (the way they'd be in Go, or Erlang, or Haskell). The things that make Rust able to do what it does are ownership and tasks and lifetimes and affine types -- all the things the post covers before talking about "unsafe".

Also, it gives the impression that there's something fundamentally unsafe about all of this, whereas the whole point is that these abstractions are _safe_ to use.

brson 10 hours ago 0 replies      
I like this a lot, and think it's the best intro to Rust yet. The thing that concerns me a bit is that it presents the special cases in concurrency without impressing some of the most important points. Primarily, the channel example presents the send as copying, which in this case it is, but one of the main advantages of Rust's channels and owned types is that message passing of heap-allocated types does not need to copy. It probably doesn't stress hard enough that Rust tasks do not share memory before saying, 'oh, but really you can share memory if you need to', though I see that the Arc and RWArc examples are good ways to introduce the concept of using unsafe code to provide safe abstractions.
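The no-copy point deserves a concrete sketch. This uses today's `std::sync::mpsc` and `std::thread` (not the 2014-era channel API the article would have used), so treat it as a modern restatement: sending an owned, heap-allocated `Vec` moves ownership to the receiver; the heap buffer itself is never duplicated.

```rust
use std::sync::mpsc;
use std::thread;

// Sends an owned Vec through a channel to another thread and returns it
// from the receiving end. `data` is moved on `send`, not deep-copied.
fn send_through_channel(data: Vec<i32>) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send(data).unwrap(); // ownership of the heap buffer moves here
    });
    rx.recv().unwrap() // and moves again, to the caller
}

fn main() {
    let v = send_through_channel(vec![1, 2, 3]);
    println!("{:?}", v); // prints [1, 2, 3]
}
```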
acqq 6 hours ago 2 replies      
"Rust does not have the concept of null."

How can I have the pointer to something that is maybe allocated or maybe present? Do I have to have additional booleans for such uses? Isn't that a waste?

How can I effectively build complex data structures like graphs, tries etc then?

I'd like to see that covered too.
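For the record, Rust's answer to this is the `Option<T>` type: "maybe present" is encoded in the type system, and the compiler forces the `None` case to be handled. A sketch in modern Rust syntax (today's std, not the 2014 dialect quoted elsewhere in this thread):

```rust
// Returns a reference to the first even number, if any. No null pointers:
// absence is an explicit Option::None that the caller must handle.
fn find_even(nums: &[i32]) -> Option<&i32> {
    nums.iter().find(|&&n| n % 2 == 0)
}

fn main() {
    match find_even(&[1, 3, 4, 7]) {
        Some(n) => println!("found {}", n), // prints "found 4"
        None => println!("no even number"),
    }
}
```

For linked structures like graphs and tries, the same idea applies: an optional child is an `Option<Box<Node>>` rather than a nullable pointer.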

patrickaljord 11 hours ago 6 replies      
Thanks for the tutorial! Rust seems a bit too complex to me. Like a C++ on steroids that wants to do and be everything. Nothing wrong with that, but not my cup of tea. I'd rather stick to C if I need tight memory management; it is way simpler and more straightforward. And if I need concurrency, I'll stick to Golang (or Erlang). Really, it's such a pleasure to read some Golang after reading this 30 minutes of Rust. Anyway, just my opinion.
maxerickson 10 hours ago 1 reply      
I personally dislike the style of tutorial that has lots of 'we' and 'lets' in it.

I suppose part of that comes from the tendency for such tutorials to provide revelations instead of motivators. For example, in this tutorial there is 'look at this C++ code because I said to' and then two sentences later it explains that the C++ code ends up in a garbage value.

But this is probably very much a point of style and I'm sure lots of people think my view is stupid.

Jemaclus 10 hours ago 1 reply      
The last time I touched C code was my sophomore year in college, so maybe 12 years ago? As a result, the last time I had to deal with pointers and such was back then, as well.

I'm primarily a web-dev. Ruby, PHP, and Javascript are the languages I'm most familiar with at the moment.

Are there any Rust for Dummies-style tutorials floating around? As simple as this introduction is, it was still over my head...

rcthompson 9 hours ago 1 reply      
Thanks for this straightforward and accessible intro to some of Rust's unique features!
bsaul 7 hours ago 1 reply      
Is Rust borrowing any ideas from Objective-C's ARC technology for detecting the lifetime of a variable and automatically freeing the resource? Is it a commonly known algorithm?
niix 9 hours ago 1 reply      
Thanks for this. I've been thinking about getting into Rust recently and this motivates me to do so now.
steveklabnik 11 hours ago 4 replies      
I like the overall structure, but I'm not sure about throwing so much syntax without explaining it in detail.
danso 11 hours ago 2 replies      
A little OT...but what's with Svbtle's apparent default styling of links? There's no indication that any particular word or sentence contains a link, which basically makes those links invisible to readers. Or do lots of people read web articles by randomly hovering the mouse around the text?

But relevant to the OP...I generally try to save useful tutorials like this on my pinboard, which often doesn't pick up the meta-description text. So I double-click to copy the first paragraph and paste it into pinboard...except in the OP, I kept on clicking on text that was hiding links underneath.

It's a strange UI decision, and one that seems to discourage the use of outbound links...if you can't see the links, then what is the purpose of them? For spiders?

0x001E84EE 9 hours ago 2 replies      
I'm a big fan of that kudos button! Very nice website and an interesting introduction to Rust.
45g 10 hours ago 1 reply      
> You can see how this makes it impossible to mutate the state without remembering to acquire the lock.

Not quite true. Looking at the type signature of e.g. RWArc::write I see this:

    fn write<U>(&self, blk: |x: &mut T| -> U) -> U
which means I could probably do:

    let mut n = local_arc.write(|nums| {
        nums[num] += 1;
        return ~(*nums);
    });
    n[2] = 42;

Super Mario World "Executes Arbitrary Code" [video] youtube.com
153 points by ingenter  9 hours ago   14 comments top 6
tptacek 5 hours ago 0 replies      
Here's the basic technique:


I can't read this without thinking that I have wasted a life that could have been better spent synthesizing shell code out of the precise contents of Yoshi's mouth.

joshschreuder 6 hours ago 0 replies      
I love stuff like this. It's been posted a few times here, but the Pokemon Yellow code execution is amazing to watch also:


zetx 6 hours ago 0 replies      
This appears to be the same as what was shown at AGDQ 2014 (Awesome Games Done Quick): http://gamesdonequick.com/

Here's their live run with them explaining what is happening: http://www.twitch.tv/speeddemosarchivesda/b/492923053?t=10h2...

noselasd 4 hours ago 1 reply      
For the uninitiated, can anyone explain what's going on? What does this video show me?
richforrester 5 hours ago 0 replies      
Funny. I remember calling the Dutch Nintendo help-line (from a land-line no less) to find out how to get to the final castle's backdoor. This is back when I was about 10 years old.

Now, there's people coding games in that game by playing it.

I thought myself a gamer.

batmansbelt 6 hours ago 2 replies      
What are we looking at here? Would this hypothetically work with a cartridge, or is this exploiting a bug in the emulator?
Patrick McKenzie AMA on BetaList betalist.com
48 points by keesj  5 hours ago   19 comments top 5
patio11 2 hours ago 1 reply      
That was fun. I apparently wrote approximately 4,500 words in 90 minutes. Also: my brain is now absolutely mush, so if you have anything you'd want me to add, please, find my inbox so I can get to it when my brain is not mush.

I think that's the most exhausting writing task I've ever done -- it's harder than being in a CS email firehose, which is already exhausting. At least when doing CS you have macros to help, and 80% of customer service can be done without engaging more than 2% of your mental faculties.

venus 2 hours ago 2 replies      
Seems like Disqus was a pretty poor choice for comments on this. I don't find them easy to read, they take ages to load, and worst of all they apparently dropped several of the guest's comments!

If this is going to become a regular thing, Betalist, I would recommend a proper comments system. Disqus is OK for blogs but if you intend to have a high value conversation then it's a bad choice, IMO.

keesj 3 hours ago 1 reply      
Just a heads up: the AMA has now ended. Patrick was gracious enough to extend his time from 30 minutes to 1.5 hours. There are some real gems in those Q&As. Definitely recommend checking it out.

(Disclosure: I'm the founder of BetaList)

liquidcool 2 hours ago 0 replies      
First, thanks for this. However, I'll have to wait until I get to my desktop because this is broken on mobile (Galaxy Note 2 Chrome). In landscape the text extends beyond the screen and portrait shows a few words per line. Zoom appears to be disabled. If web analytics show low engagement on mobile, that would be why.
cordie 2 hours ago 0 replies      
Awesome. Can't wait to read through all the comments! Thank you so much for doing this!
Quick tip for developers who use OS X
901 points by gargarplex  13 hours ago   337 comments top 90
juanre 12 hours ago 11 replies      
Bash, running in your terminal, understands both Emacs and Vi keybindings. The default is Emacs mode, so you can C-a (control-a) to go to the beginning of the line, C-p to go back in command-line history, or C-r to search it.

I prefer the Vi mode, though. Add to your .bashrc

set -o vi

Then you can press escape to go from input mode to normal mode; there k will take you to the previous line in command-line history, j to the next line, ^ and $ to the beginning and end of the line, and /something will search backwards for something.

Editing is really fast; move by words with w (forward) and b (backward), do cw to replace a word, r to replace a letter, i to go back to input mode. Just like Vi, it will remember the last editing command and repeat it when you press . in normal mode.

guelo 12 hours ago 12 replies      
After making the switch to OS X over the last couple of years, after living in Linux and Windows before that, I think it's fair to say that keyboard shortcuts in OS X are much worse in both ease of use and consistency across applications.
adventureloop 12 hours ago 3 replies      
Another really useful command is open:

    $ open filename.png
    $ open .
    $ open -e filename.txt
    $ open -a Pixen filename.png
The first command will open the file with the default application.

Open . will open the current directory in Finder, which I find very helpful.

The -e flag will open the file in TextEdit.

The -a will open the file in the given application name.

pilif 11 hours ago 1 reply      
This works if whatever is currently running in your terminal has support for the XTerm mouse escape sequences.

Option-click is the shortcut to tell the terminal to forward the click to the application running in the terminal. That's usually your shell or some editor.

I seem to faintly remember that at some point this was actually configurable and you could configure the terminal to forward non-option-clicks and only enable selection mode on option clicks. I didn't find this option in current iTerm or Terminal.app versions though - I might just be imagining this.

natch 12 hours ago 6 replies      
Useful when copy/pasting things to share with other people in email and such:

    $ pbpaste | pbcopy
takes the current contents of the copy/paste buffer, strips rich formatting (color, font, background color), and puts plain text back into the paste buffer.

And of course pbcopy and pbpaste are also very useful on their own.
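
For instance, typical standalone uses look like this (a small sketch; pbcopy and pbpaste are the stock OS X clipboard commands, so this only runs on a Mac):

```shell
# Send command output straight to the clipboard...
ls -la | pbcopy

# ...dump the clipboard into a file or a pipeline...
pbpaste > clipboard.txt

# ...or strip rich formatting in place, as described above.
pbpaste | pbcopy
```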

petercooper 12 hours ago 5 replies      
A new one for me that I accidentally found the other day:

If you use Spotlight to find something and then want to see the file in Finder, Cmd+click the item in the Spotlight dropdown.

Edited from Cmd+shift+click due to note in child comment :-)

ancarda 12 hours ago 5 replies      
Nobody seems to have posted this yet:

    python -m SimpleHTTPServer
I'd recommend an alias like "serve". It basically puts the current directory online (binds to
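
A sketch of that alias (hedged: "serve" is just the name suggested above, and SimpleHTTPServer is the Python 2 module name; Python 3 renamed it http.server):

```shell
# Hypothetical "serve" alias for ~/.bash_profile: serve the current
# directory over HTTP on port 8000 (SimpleHTTPServer's default port).
alias serve='python -m SimpleHTTPServer 8000'

# On Python 3 the equivalent would be:
# alias serve='python3 -m http.server 8000'
```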

scelerat 12 hours ago 3 replies      
The first thing I do when setting up a new Mac is make the Caps Lock key into another Control key.

System Preferences > Keyboard > Modifier Keys

Saves lots of awkward pinky-bending.

drcongo 12 hours ago 2 replies      
Lovely, thanks. Also, is it common knowledge that CMD+click on a URL opens that URL in your default browser? Useful for the "Running on" messages.
andr 12 hours ago 4 replies      
mdfind to get Spotlight results in shell.

Also, iTerm is a great Terminal replacement, if you haven't tried it.

purephase 12 hours ago 3 replies      
I have tried to use Terminal, but I just can't. The ability to highlight and automatically have the text in clipboard is such a crucial feature in my workflow that not having it even as an option is a deal breaker.

Once I started with iTerm and built a config/flow around it, I can't go back.

Thanks for the tip though! I didn't realize this worked in Terminal too. I'll keep it in mind the next time I'm forced to use it.

ghotli 12 hours ago 3 replies      
The OS X Terminal uses almost all of the Emacs movement keys by default. Ctrl-a and Ctrl-e are useful, but I find the meta movement keys to be the most helpful (meta is usually Alt or ESC).

Meta-f jump forward a word

Meta-b jump backwards a word

raju 10 hours ago 1 reply      
Two more (for Bash) -

Ctrl-x, Ctrl-e will pop open your EDITOR so you can edit the command. Saving and closing the editor brings the command back in your terminal and automatically executes it.

fc will bring up your EDITOR with the last typed command. (You can use fc -l to see a list of commands)

chavesn 12 hours ago 0 replies      
You can also CMD+click and drag (or double click as long as it's not a URL, which will open) to add to the text selection (like multiple cursors in Sublime Text 2).

Also, Option+click and drag gives you column selection mode.

roberthahn 10 hours ago 1 reply      
oo! As long as we're sharing our favourite terminal tips, here's mine: In your .bash_profile, add this line:

alias imgsz='sips -g pixelWidth -g pixelHeight'

sips stands for "scriptable image processing system"[1], and provides terminal users with a toolset for inspecting and manipulating images. The alias above is really useful for web developers who need a quick look at how big a given image is:

    $ imgsz logo.png
    /path/to/logo.png
      pixelWidth: 500
      pixelHeight: 120
[1] https://developer.apple.com/library/mac/documentation/Darwin...

josho 12 hours ago 0 replies      
Also useful is Apple's support doc listing of keyboard shortcuts. http://support.apple.com/kb/HT1343
albemuth 11 hours ago 0 replies      
If you'd rather not use the mouse, the fc command will open the last-executed command in vim (this is configurable, I suppose); saving and exiting the file executes the command!
rynop 13 hours ago 0 replies      
Another related tip: command + click on a file in iTerm opens it up. Test it out by doing ls -al, then cmd+click on a file name.
jaredsohn 5 hours ago 1 reply      
OSX also features a lot of neat trackpad multitouch commands that some might not be aware of (my Windows trackpad doesn't understand multitouch.)

Two fingers: up/down: scroll up/down, left/right: forward/back in some browsers (such as Chrome)

Three fingers: up/down: show/hide a list of applications/desktops; left/right: switch desktops one at a time.

Five fingers together/apart: show/hide desktop

Before I discovered these, I was confused why I would occasionally see the forward/back arrow in the browser for a short period of time, not realizing that it showed because I happened to use a couple of fingers to move the mouse cursor.

kozikow 2 hours ago 0 replies      
I compiled my list of Mac keyboard shortcuts and keyboard tricks: http://kozikow.wordpress.com/2013/10/31/going-mouseless-on-m... . The Mac is the only non-text-based OS that you can control 100% with the keyboard.
sandGorgon 8 hours ago 0 replies      
So I see a lot of comments in this thread about how OS X has a much cleaner interpretation of Command-C vs Ctrl-C than Linux. Actually, the problem is that you should NOT be using Ctrl-C to copy in Linux. The global copy command in Linux obeys the CUA [1] standard: copy is defined to be "Ctrl-Insert" and paste is defined as "Shift-Insert". This will work in all kinds of user interfaces on Linux - including the command line, the browser, etc.

However, there's an issue - I don't know if it's a bug or something else (I'm on Ubuntu 12.04): if I open "System Settings" in Ubuntu and copy text from there, I'm not able to paste it into the browser, but I am able to paste it everywhere else using the key sequence described above.

[1] http://en.wikipedia.org/wiki/IBM_Common_User_Access

mikeroher 13 hours ago 6 replies      
Two more tips:

* Ctrl-a to go to beginning of line

* Ctrl-e to go to end of line

_djo_ 13 hours ago 2 replies      
Works in iTerm too.
niyazpk 13 hours ago 1 reply      
Works even when you are ssh-ed into a remote machine! Nice.
SmileyKeith 12 hours ago 0 replies      
I would lose years if I used the mouse in the terminal..
michaelhoffman 12 hours ago 1 reply      
Is there a way to do something similar in Linux? I use gnome-terminal but would think about switching to something with a feature this useful.
k-mcgrady 13 hours ago 0 replies      
Most useful thing I've seen all day - thanks!
IgorPartola 9 hours ago 2 replies      
Doesn't work on iTerm2, which y'all should be using anyways. Besides being infinitely more configurable, and getting tabs right, it's also much faster when you are printing a bunch of text. No idea why OS X Terminal is so slow when it comes to this, but it's actually painful to `cat` a file of more than a thousand lines.
lukabratos 12 hours ago 0 replies      
Increase or decrease volume by small increments: alt + shift + volume up / volume down, or brightness up / brightness down. Also check out the Alfred and Shortcat apps.
natch 12 hours ago 1 reply      
I wish there was a way to make the mouse cursor stand out more when (and only when) it's over the terminal. Maybe it's because I use a dark terminal background, but I can hardly ever see the thing.

There's a system wide setting to make the cursor huge, but I don't want a huge cursor everywhere. And there's a terminal preference to make the insertion cursor different, but that's a different cursor, not the mouse cursor.

When do I want to see the mouse cursor? It's useful for things like highlighting a git hash or a few lines of text for copy/paste.

mariusbutuc 19 minutes ago 0 replies      
oops, not working in iTerm2...
yogsototh 12 hours ago 2 replies      
OK, so for vim users with zsh:

bindkey -v

Now be happy: hit <ESC> and use ^ and $, but also f<letter> and t<letter> (and ; and , to repeat them).

You can copy/paste with dd, Y, etc...

pfortuny 13 hours ago 0 replies      
Well, if you do not want to use the mouse, you can always:

Ctrl-e: go to the end of the line

Ctrl-r: search backwards (input the text)

and you go to the point where the search begins.

donskif 4 hours ago 0 replies      
Great tip, thanks.

Another useful fyi that I haven't seen anybody else mention is the Bash framework bash-it [https://github.com/revans/bash-it/]. It has tons of fantastic plugins, aliases and themes out of the box.

I've only tried it on iTerm2, but as it only replaces your .bash_profile file it should work fine on Terminal.

jbarham 6 hours ago 0 replies      
20 years ago with Plan 9 you could have just clicked to move your cursor. Plus ça change...
benwerd 10 hours ago 0 replies      
That was the least obvious thing ever, and you've just measurably improved my development experience. Thank you.
coherentpony 12 hours ago 1 reply      
Wait, people use the mouse when they're in the terminal?
apinstein 12 hours ago 0 replies      
And if you use vim in the terminal, you can even use the mouse to select text... https://github.com/apinstein/dotfiles/blob/master/vimrc#L150...
tunnuz 13 hours ago 0 replies      
Type "set -o vi" on your bash, and you can edit your commands with the vi shortcuts.
lloeki 9 hours ago 0 replies      
Go to Keyboard Shortcuts in System Preferences and remap "Send reset" to cmd+ctrl+R or something, because it saves sanity when going back and forth between vi in a term and a browser.
emehrkay 12 hours ago 1 reply      
Drag + drop a file onto the terminal to get its location.

I often do "cd " + dragon drop folder to get to that folder in terminal

e28eta 12 hours ago 0 replies      
I like checking the box in preferences to use Option as Meta, and then option + left/right will jump a word at a time, and option + backspace will delete the previous word, just like in most Cocoa apps.
brbcoding 12 hours ago 0 replies      
Whoa, it would seem that you can option click above the line and go back in history!
phronmophobic 4 hours ago 0 replies      
I've found that maxing out the key repeat rate makes life better. System Preferences > Keyboard. Select the keyboard tab then move the Key Repeat slider all the way to the right.
xmodem 11 hours ago 0 replies      
This. changes. everything.
caiob 13 hours ago 1 reply      
The problem here is the "click" part of it.
lobster_johnson 12 hours ago 0 replies      
Or you could set up your .inputrc to make arrow keys (eg., ctrl+arrow to jump between words) work properly. It also helps to learn the other Readline shortcuts (ctrl-A for home, etc.).
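
For example, a minimal ~/.inputrc along those lines might look like this (a sketch; the escape sequences below are the common xterm-style ones and can differ between terminals):

```
# Word-wise movement with ctrl+arrow (xterm-style sequences).
"\e[1;5C": forward-word
"\e[1;5D": backward-word

# Home/End to start/end of line.
"\e[H": beginning-of-line
"\e[F": end-of-line
```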
zodvik 12 hours ago 1 reply      
To cut & paste files in Finder - use Cmd + C, then Cmd + Option + V.
jheriko 12 hours ago 0 replies      
another one, you can drag files and folders onto the terminal and it will paste the path in at the cursor position... very useful to cd to some path you have open in Finder
khalidmbajwa 7 hours ago 0 replies      
Nothing will ever be the same again #HolyWow
FlyingAvatar 12 hours ago 0 replies      
I've always preferred, option + left/right for fast moving with left/right for fine adjustment so I don't have to leave the keyboard.
dsego 10 hours ago 0 replies      
You can type say "hello" and the terminal will greet you.
fiveisprime 12 hours ago 0 replies      
Another useful tip is, in iTerm, map option+left and option+right to send escape sequence ^[b and ^[f respectively. :)
davidedicillo 11 hours ago 0 replies      
If we'll ever meet, beer is on me. Thanks!
nailer 11 hours ago 0 replies      
FYI this is 'Alt' on modern Mac keyboards.
scottyallen 7 hours ago 0 replies      
This appears to work in iTerm2 as well. Neat!
thucydides 9 hours ago 0 replies      
Move cursor left or right word-by-word: alt + left arrow, alt + right arrow
thearn4 10 hours ago 0 replies      
Simple, to-the-point, and very useful. Thanks!
lukasm 11 hours ago 0 replies      
Another tip: using mdfind you can do what find does, but using the Spotlight index.
thebouv 12 hours ago 0 replies      
The OP tip is nice to have built-in now. If I recall correctly, this was an option that had to be turned on in Terminal Preferences in earlier releases.

Another fave of mine: Ctrl-w for deleting whole "words" at a time. Better than holding that delete key down.

The default shell for OS X appears to be bash (my favorite shell as a Linux user as well).

Just google for Bash shortcuts. Here's a place to start: http://teohm.github.io/blog/2012/01/04/shortcuts-to-move-fas...

adriaanm 11 hours ago 0 replies      
The search clipboard: cmd-e is to cmd-g as cmd-c is to cmd-v.
SDMattG 12 hours ago 1 reply      
This is amazing. You have saved me countless seconds of frustration :-)
niuzeta 12 hours ago 0 replies      
nocivus 11 hours ago 2 replies      
No one uses Terminal. iTerm ftw ;)
sazeod 12 hours ago 0 replies      
Or you can use vim editing mode directly in bash using:

set -o vi

this makes most vim motions available at the command line after hitting "esc"

seanhandley 7 hours ago 0 replies      
This just made my day <3
macinjosh 12 hours ago 2 replies      
Tip: Real hackers don't use a mouse.


javajosh 8 hours ago 0 replies      
Off-topic, but related: does anyone know a way to bind a key, an F-key for example, to a particular application in OS X? The behavior I'm looking for is almost achievable with Spaces - but it's basically: when you hit the combo from anywhere, the target application comes to the foreground, or if it's not running, it launches.

Command-Tab is fine and dandy, but I want more stability in my application switching. And if it doesn't exist, I may have to dust off XCode like I've been threatening to do for a while now.
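
One partial answer, as a hedged sketch: the stock osascript tool can do the "activate or launch" half from the command line ("Safari" below is just a placeholder app name); you'd still need a hotkey utility or an Automator service to bind the script to an F-key.

```shell
# Bring the named app to the foreground, launching it if necessary.
# "Safari" is a placeholder - substitute any application name.
osascript -e 'tell application "Safari" to activate'
```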

weddpros 12 hours ago 0 replies      
sips -Z 1280 file.jpg to resize a photo easily...
beshrkayali 12 hours ago 5 replies      
Sucks that it doesn't work in iTerm2.
desireco42 8 hours ago 0 replies      
Thank you :)
redeemedfadi 12 hours ago 0 replies      
<Ctrl> X, <Ctrl> E to edit the current line in your default editor.
padmanabhan01 9 hours ago 0 replies      
years lost indeed..
whyme 8 hours ago 0 replies      
thank you; you just made my year!
dnyce 6 hours ago 0 replies      
Wow. Talk about #yearslost!
Fishrock123 13 hours ago 0 replies      
... Why didn't I know this before?

Thanks for sharing!

dustinbrownman 8 hours ago 0 replies      
Holy guacamole! #lifechanged
grillermo 13 hours ago 0 replies      
Works on iTerm too!
giuliano108 6 hours ago 0 replies      
cmd option mouse-drag: blockwise selection
adsrikanth 11 hours ago 0 replies      
What do you mean by hold option?
LeicaLatte 11 hours ago 0 replies      
you mean with a mouse?


jlink 4 hours ago 0 replies      
Thanks !
dustinbrownman 8 hours ago 0 replies      
holy guacamole! #lifechanged
jhwhite 13 hours ago 0 replies      
I did not know this. Thanks! Very helpful tip.
simonhamp 13 hours ago 0 replies      
You, sir, are a genius!
pketh 11 hours ago 0 replies      
Mind. Blown.
jackmaney 13 hours ago 0 replies      
China cloning on an 'industrial scale' bbc.co.uk
14 points by bane  2 hours ago   5 comments top 4
awakeasleep 11 minutes ago 0 replies      
The part about halting pigs' aging at one year stopped me in my tracks.

The whole article was incredible, but that has been my lifelong dream (for pets). Imagine a dog that stayed one year old forever. Or a giraffe.

car 31 minutes ago 0 replies      
"BGI offers a glimpse of what industrial scale could bring to the future of biology."

"If it tastes good you should sequence it," he tells me. "You should know what's in the genes of that species."

"A third category is if it looks cute - anything that looks cute: panda, polar bear, penguin, you should really sequence it - it's like digitalising all the wonderful species," he explains.

ChuckMcM 25 minutes ago 0 replies      
Kind of surprised he didn't take the name Moreau :-) It will be interesting and potentially shocking to see what comes out of these efforts. Both good and bad scenarios come to mind.
anigbrowl 1 hour ago 1 reply      
Execrable writing, but interesting to see that they're industrializing this already.
Adrian and Jacob retiring as Django BDFLs holovaty.com
235 points by adrianh  13 hours ago   68 comments top 11
thatthatis 12 hours ago 8 replies      
There's an interesting political science-ish question posed by this:

Is it best to have a king first then later transition to democracy?

It would seem that kings followed later by parliament has been a very successful model for django.

I don't know how far this generalizes.

randlet 12 hours ago 1 reply      
Adrian & Jacob built a great framework, but more importantly they helped oversee the building of an awesome community and ecosystem over the last 8-9 years. Thank you both!
craigkerstiens 12 hours ago 0 replies      
Here are Jacob's comments as well - http://jacobian.org/writing/retiring-as-bdfls/
sneak 12 hours ago 1 reply      
erichurkman 13 hours ago 0 replies      
Thanks for all of your hard work, Adrian & Jacob.
biscotti 12 hours ago 1 reply      
Thank you for your hand in creating something I use every day.

Jacob's post: http://jacobian.org/writing/retiring-as-bdfls/

bkeating 12 hours ago 4 replies      
Thank you for Django. We still of course, hope to see you at future DjangoCons' :D

> (But please, no more Django Pony. It's stupid.)

Hear, hear!

nwp90 7 hours ago 0 replies      
I think a project that is to continue to be relevant for the long-term needs a vision, and that that comes from a leader. Once the leader is gone, the vision doesn't develop. At some point the original vision will no longer fit the changing circumstances around the project, and either a new leader and a new vision will emerge or the project will fade away (or both, if the new vision doesn't work).

On the other hand, if at some point the original vision is accepted by the mainstream, the project will be successful and popular for a while, the more so if it ceases to move on and develop its vision. Then at some point the project will fade away as the Next Big Thing arrives.

It's not always possible for a project born of one vision to adapt to a new reality - even if the leaders can foresee it perfectly - without starting afresh.

Given that, and that the skills and personality needed to "maintain" a mainstream project are different to those needed to develop and make concrete a vision, it's good that creators move on.

To me, Adrian and Jacob moving on is a sign of Django's settling into the mainstream. It'll stick around for a while, and then it will fade away. The Next Big Thing is on its way.

To Adrian and Jacob - I hope you enjoy your new ventures; and thanks.

pekk 11 hours ago 1 reply      
Since no one else asked, does this pose any threat for the future of Django?
indiefan 10 hours ago 2 replies      
You keep on using that acronym. I do not think it means what you think it means.
kabisote 3 hours ago 0 replies      
How can a BDFL retire? I thought it's For Life?
The PC's Death Might Also Mean the Web's Demise wired.com
22 points by grannyg00se  2 hours ago   27 comments top 18
quaunaut 1 hour ago 0 replies      
I couldn't possibly disagree more. I don't think apps are going anywhere either, but I'd limit their days before I limit the web's.

* As the web gets faster and gains more capabilities, it will encompass everything apps currently consider their domain.

* By default it contains no gatekeepers, whereas apps on mobile devices primarily go through one central source. This carries with it safety, but also a higher barrier to entry: And generally, higher barrier to entry is something people are only willing to endure if given good reason to. Having the web replaced by apps doesn't leave any room for that.

* The web is open and universal. This gives it infinitely more resilience than apps- if Apple were to fall into the sea, an entire ecosystem would be lost. As is, the web practically guarantees nothing can ever die permanently.

About the only weak point that the web has, is that it's more difficult to effectively monetize one-time payments, but that's why everyone and their sister is going to subscription models, or are working within an established marketplace.

bhauer 52 minutes ago 2 replies      
As a consumer, I hope he is wrong. As a producer, I believe he is wrong.

As a consumer with a high-spec PC with multiple large monitors, a good phone, and two tablets, I strongly prefer consuming all content on my PC. It's faster, by a massive margin, easier to navigate, easier to read, easier to simultaneously consume two or more content items (video + text content, for example), easier to pair consumption with production. It's better in every single content consumption metric that matters to me. It has a lot of room for improvement (see my previous rants about monitors), and I feel the lack of innovation in desktop computing is precisely why it's flagging. But that has more to do with the lack of innovation in desktop computing and less to do with mobile versus desktop.

As a producer, there is no comparison. In a pinch, I can produce work product on my Surface Pro or, in an even tighter pinch, on my Venue 8 Pro. But every moment I do so, I will be longing to be back home in front of my desktop computer. Unless of course I am on vacation in some beautiful environment.

Speaking of, I often feel there is a myopic view of computing that says mobile is workable for work production, because the people making the decisions are those who can be mobile: they travel extensively and don't produce a whole lot. They may be creative, but they are not the creators. For the rest of us, we spend a lot of time at home or at an office, two locations where we can easily install high-performance desktop computing in one form or another.

Like others here, I don't care a whole lot precisely what is behind the screens, keyboard, mouse; behind the projectors, hand gesture inputs, and so on. I don't care if it's a PC in a big ATX case, a NUC or Brix, or a mobile device that I dock on a charging plate with wireless HDMI. What matters is that I can break free of its mobileness, making it a device with a large screen, a full-size keyboard, and a high-precision pointing device such as a mouse. That is desktop computing, and it will evolve.

Yeah, for me, I hope he is wrong because his model of computing is one that doesn't align with my preferences. Furthermore, in the computing model I long for, all my mobile devices become subservient to a singular "computer" that runs my applications. One of those applications will be a web browser.

clarky07 21 minutes ago 0 replies      
The PC and the web aren't dying anytime soon. PC sales aren't going to 0. They are simply at a point where they are good enough to last a bit longer than they used to. With the replacement cycle getting lengthened, sales go down. They will plateau though. Nobody is stopping using pc's altogether. At worst people will be buying convertible laptop/tablets. iPad is awesome, but it doesn't solve all problems.

The web, like PCs, is also not going to die anytime soon. Linking between apps is far too complicated, and finding things without Google would suck. If you know exactly what you are looking for, perhaps apps work for that; it is the equivalent of type-in direct traffic, though. Otherwise, you are going to Google to find something specific, or Twitter/Facebook to find something random. Each of those requires the linking of the internet. You may or may not use the built-in browser much, but you will use the browser view in those apps to consume the content that is out there.

rwhitman 57 minutes ago 0 replies      
So the most powerful tool for the open sharing of information since the advent of the printing press, that has transformed the way we live forever, will now be discarded because people are using computers in the form of handheld devices instead of Windows PCs? Is that what they are trying to say here?

I disagree.

banachtarski 17 minutes ago 0 replies      
This is a silly deduction from flawed assumptions. PC sales don't necessarily correlate to PC usage! I don't know a single person that doesn't use a PC from day to day, including my nontechnical friends.
belluchan 32 minutes ago 0 replies      
Question for anyone really: how many different websites did you visit today, and how many different apps did you use today? Is that number even remotely equal? Do you think you'd ever want as many as the former as apps installed on your phone?
Zigurd 21 minutes ago 0 replies      
PCs and non-general-purpose computing are headed for a divorce, and rightly so.

PCs started as general-purpose computing for people who needed general-purpose computing. Now, 90%+ of PC users, if asked if they need general-purpose computing will go "Uh. Sure. Whatever."

The population of the open Web, with open standards, will shrink alongside the population of people who actually need a Personal Computer under their total and complete control.

90% of people want a game, a pop song, and a movie. And the publishers of those products don't want them to steal their products. Some of that 90% will break in the direction of open culture, on an open Web. But not most of them.

ams6110 6 minutes ago 0 replies      
One reason that at least partly explains the decline in new PC shipments is that nowadays, a three or four year old (or even older) PC is still plenty good enough for what most people do with it. My newest computer at home is a mid-2007 iMac and it's absolutely fine for everything I do.
wmeredith 42 minutes ago 1 reply      
This seems like a non-argument. Most apps are used to access the web in some manner. It's like saying cars will kill roads.
johnwalker 5 minutes ago 0 replies      
I actually agree with this article. It's a lot more trouble to wait for the web to standardize than it is to support a couple different versions of an app, which usually has potential to offer a better user experience anyway. (More developer jobs, too!) This really shouldn't be troubling to anyone.
pan69 23 minutes ago 0 replies      
Name me one app that doesn't use HTTP.

The web is more than websites. Personally I believe apps will be long gone before the web is. It's already starting with the gazillion apps you need to install these days, most of which should just have been websites.

nly 53 minutes ago 1 reply      
The Web frankly just isn't evolving as quickly as mobile app platforms. HTML5 feels like ancient history, despite not even becoming a final recommendation yet and, imho, the rise of the gigantic javascript frameworks just shows the existing platform is sorely lacking... all it really demonstrates is the power of having a client you can push code to easily.

Where's my standard, secure browser UI and API for secure payments using my credit card? We've been using the web to shop for 20 years and it still sucks.

Where's decent standard authentication worthy of this millennium, let alone this decade?

Why don't we yet have date picker and other form widgets that actually work across browsers?

Why are bespoke markup languages like MathML and SVG actually failing or seeing less and less adoption?

aufreak3 44 minutes ago 0 replies      
This seems to be a really important question out there and I'm somewhat scared by the silence on HN about this. The silence makes me think that folks are going "oh shit!" with frozen fingers.

It does look like the incentives are aligned in the direction laid out in the article. Companies can better control their user experience on mobile through uncrawlable apps. You hear "mobile first" a lot these days. These things are with users for longer than desktops/laptops. Anyone can carve out a section of the web where content gets created, but cannot be linked to.

The two gaps I see are -

a) mobile devices are good for _consumption_, but are not yet on par with *tops for _creation_, and

b) reputation systems on the internet (currently) require linking, and there's nothing to replace that in the mobile world.

andrewhillman 41 minutes ago 0 replies      
I believe with the rapid advancements in mobile frameworks and responsive design, the desktop and mobile web will converge. Not every startup needs a native mobile app.

If you believe the desktop web is dead... Throw out all your desktops/ laptops and give everyone in your company iPads and see how productive your company is. It won't be pretty.

adventured 1 hour ago 1 reply      
This will end up not being even remotely accurate. In fact, the web will continue to expand, but at a slower rate. Homes and offices will continue to have the big screen web experience, and this will actually become cheaper and more awesome (simultaneously the traditional PC will decline in sales volume while not decimating the home web browsing experience; the author doesn't grasp the obvious).

Why? Simple: there's no way to properly distribute the truly vast array of unrelated information the web contains, via mobile apps. Nobody is going to want to download the thousands of mobile apps it would require to get comprehensive access to all that information at their finger tips. I'm not a huge fan of the mobile browsing experience, but I use Chrome constantly for stray information tasks on my S4.

How? Smart phones will be powerful enough to begin treating them like true home computers in the next five years. Some would argue they're already there. We're obviously going to replace the home PC + big screen with a smart phone + big screen, or equivalent. Android sticks and or a future more powerful Chromecast equivalent, five years from now, will be like plugging a desktop into your big screen. The same will be true for work monitors, eg. in a personal office. A $50 stick for your big TV in the living room, and maybe one for your work monitor, and optionally just use your smart phone. In this formulation, nothing changes in the home with regards to the web except the death of the PC as we know it today (powered by Windows), replaced by a better solution.

aplummer 1 hour ago 0 replies      
Apps will kill the web exactly like TV killed radio and the cinema.
increment_i 1 hour ago 2 replies      
Keith Rabois certainly seems to think so.

The mainstream, browser-based web has been around for some 20 years now. I like the web, I use it everyday so I want to think it will last forever. But I wonder, were people saying the same things about BBS'es and AOL-like portals, or whatever came before them? I don't really know, as I was so young when these things were happening. All things must pass, right?

__pThrow 52 minutes ago 0 replies      
I think the PC's death in the marketplace is due to the PC being given over to more and more web apps, and to the power of current PCs being nowhere near the bottleneck it once was.

When there are compelling apps and net bandwidth that requires higher performing PCs, we'll see PC sales trend upwards again.

Oracle to issue huge security patch addressing 36 Java vulnerabilities theinquirer.net
20 points by dsl  3 hours ago   17 comments top 5
JohnTHaller 53 minutes ago 0 replies      
For context, Java will not run by default in any of the 3 major web browsers. Firefox uses click-to-play, requiring you to click on the plugin to load it. Chrome and IE present a permissions box asking whether to run the applet, which you can deny.
pudquick 54 minutes ago 0 replies      
Does anyone know if this is the infamous 7u51 which will disable unsigned applets from running AT ALL in a browser? (Yes, I know about the whitelist jar)
waps 3 hours ago 3 replies      
Please note that the only way to exploit these vulnerabilities is if you've already got your code executing on the machine you intend to break into. The only thing they allow you to do is break out of the Java sandbox, and keep in mind that most languages don't even pretend their sandbox isolates code (e.g. Python/Perl/Go/... all openly say you can easily break out of the sandbox). So they're trying to solve a really hard problem.
jgalt212 1 hour ago 0 replies      
Even if this update does everything advertised and more, the perception remains that Java is just not usable in the browser; it's just too dangerous.

Testing purposes aside, the only reason we use IE in the office is so that we can use Webex without Java.

beedogs 2 hours ago 1 reply      
What an utter mess Java is. You'd think after 20 years they'd have this sorted out by now.
Everpix VC Feedback github.com
83 points by jhull  8 hours ago   34 comments top 8
tlb 8 hours ago 2 replies      
This seems to show the system working. Everpix was a reasonable idea with good execution, but nobody could see how it would be eventually profitable. That it didn't get funded is evidence that we're not in a bubble.
ivanplenty 8 hours ago 6 replies      
This morning I did a public write-up of the Everpix business model to see why it failed:


(Submitted to HN a few hours ago as https://news.ycombinator.com/item?id=7052593)

tl;dr: Everpix sold its product at a marginal loss and closed its doors after the financing ran out. Since marginal costs always exceeded marginal revenue, we now know that Everpix should have shut its doors immediately, as it could never be a viable business in either the short or the long run. There doesn't appear to be any realistic change to its cost structure that could have kept it in business. Shutting down was the right decision for the business, and this evidence suggests it should have shut down a long time ago.

andyl 6 hours ago 0 replies      
"You guys are awesome and we wish you all the best."

Fundraising is such a giant waste of time. (I've been there)

esharef 3 hours ago 0 replies      
Thanks so much for making this public. You're putting yourself out there for our benefit and so that we can all learn (and not feel quite as shitty when we get similar emails). Thanks.
WhitneyLand 6 hours ago 0 replies      
Why not build such a service on top of a cloud storage provider platform?

Their own servers would run their code, do the searching/indexing, and store thumbnails up to 1080p, which are very small.

A user's cloud account would only be used for the originals and to generate search indexes/thumbnails as needed.

danabramov 6 hours ago 0 replies      
Thank you for releasing this and making such a valuable contribution to the community. If more companies did that, hopefully we wouldn't keep making the very same mistakes over and over again.

By the way, what other startups have published such detailed postmortems, if any?

bayesianhorse 3 hours ago 0 replies      
The founders ran out of Ketracel-White to give to their troops...
wellboy 6 hours ago 0 replies      
Idea: Make a post-mortem startup website, where recently deceased startups can open-source their documents to the whole startup community.

Pivot for everpix maybe?

This somehow has the same vibe as the becoming of Mattermark to me.

Why is reading lines from stdin much slower in C++ than Python? stackoverflow.com
59 points by luu  7 hours ago   24 comments top 7
Jach 2 hours ago 0 replies      
Fun fact: the solution of using `cin.sync_with_stdio(false);` introduces a fairly unimportant memory leak that you'll see when you use Valgrind. The behavior was reported as a bug (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27931) but it's actually part of the C++ standard not to clean up standard streams. (Edit: Here's the page of the standard in question, see the end of the second paragraph of 27.3: http://imgur.com/P7wYcHn)
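To make the thread's idiom concrete, here's a hedged sketch (not from the linked answer; the function names are mine): calling `sync_with_stdio(false)` once, before any I/O, unhooks the C++ streams from C stdio, which is the usual fix for the slow-`cin` problem, and the small one-time allocation it triggers is what Valgrind then reports as still reachable.

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Call once, before any I/O: stop synchronizing C++ streams with C
// stdio, and skip flushing cout before each cin read. The buffer this
// swap allocates is never freed at exit, which is the "leak" Valgrind
// flags (and which the standard permits, per the comment above).
void use_fast_cin() {
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
}

// Count lines on any input stream; pass std::cin for stdin.
long count_lines(std::istream& in) {
    std::string line;
    long n = 0;
    while (std::getline(in, line)) ++n;
    return n;
}
```

Note the trade: after `use_fast_cin()`, output written through `printf` and through `std::cout` is no longer safely interleaved, so the speed-up isn't free.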
nly 7 hours ago 1 reply      
tl;dr: C++'s standard streams (cin, cout, cerr, clog) may be used alongside code using the underlying libc I/O streams API and therefore, by default, synchronize with libc's own buffers. At the very least, this means you don't end up with characters from the two sources interleaved between individual stream operations.

It's worth noting that even C's I/O APIs are typically synchronized across threads, so you can output lines to stdout from vanilla threaded C code without experiencing the same issue.

dded 7 hours ago 1 reply      
About 15 years ago, I wrote all my utility programs in C. These were typically small programs, but I often had a lot of (text) input data in the form of netlists. I started reading about C++, getline(), and some of the containers (that I'd had to build from scratch in C), so I decided that C++ was for me. There were a number of disappointments, but a big one was that C++'s getline() was more than an order of magnitude slower than fgets() (on my system, etc., etc.).

With some experimenting, I discovered that even Perl was much faster than C++ with getline(). (Note that this was input from a file, not stdin as in this article.)

I've not used getline() since.
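For comparison, this is roughly what falling back to fgets() looks like (my sketch, not dded's code): the 4096-byte buffer is an arbitrary choice, and lines longer than it would come back split.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Read every line of a file via C stdio's fgets, stripping trailing
// newlines. Lines longer than the buffer are returned in pieces.
std::vector<std::string> read_lines_fgets(const char* path) {
    std::vector<std::string> lines;
    FILE* f = std::fopen(path, "r");
    if (!f) return lines;
    char buf[4096];
    while (std::fgets(buf, sizeof buf, f)) {
        std::string s(buf);
        if (!s.empty() && s.back() == '\n') s.pop_back();
        lines.push_back(std::move(s));
    }
    std::fclose(f);
    return lines;
}
```

Whether this actually beats std::getline depends on the implementation and the synchronization settings discussed elsewhere in the thread; the point here is just what the C-stdio alternative looks like.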

dkhenry 7 hours ago 1 reply      
This is an old discussion, but I actually have this thread bookmarked so I can show young programmers the dangers of assuming that if you write in C++ it will be faster than everything else out there.
memracom 4 hours ago 0 replies      
tl;dr: when performance is important, don't just use defaults; optimize. And learn how libraries and your OS work deep down at low levels. Even Python's default I/O performance can be improved in many cases, by changing buffer sizes or even bypassing the file I/O subsystem and using memory-mapped files. But no solution is right for all use cases.

Like they tell you in school, premature optimization is the root of all evil. So don't worry about this until you need it.
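As one concrete (and entirely optional) version of the "bypass the file I/O subsystem" suggestion above, here's a POSIX-only sketch that memory-maps a file and counts newlines with no stdio buffering at all; error handling is minimal and the function name is my own.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#include <algorithm>

// Count '\n' bytes by mapping the whole file into memory, skipping
// stdio's read/copy/buffer cycle entirely. Returns -1 on error.
long count_newlines_mmap(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return -1; }
    if (st.st_size == 0) { close(fd); return 0; }  // mmap of length 0 fails
    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping remains valid after the fd is closed
    if (p == MAP_FAILED) return -1;
    const char* data = static_cast<const char*>(p);
    long n = std::count(data, data + st.st_size, '\n');
    munmap(p, st.st_size);
    return n;
}
```

Whether this beats a well-tuned buffered read depends on file size, page-cache state, and platform, which is exactly the "no solution is right for all use cases" caveat above.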

NAFV_P 4 hours ago 2 replies      
Thought I'd throw a point(er) alongside these comments...

Isn't the Python interpreter written in C?

gdy 6 hours ago 0 replies      
tl;dr It is not.
How To: Hosting with Amazon S3, CloudFront and Route 53 paulstamatiou.com
116 points by PStamatiou  11 hours ago   58 comments top 16
sehrope 10 hours ago 5 replies      
Nice write-up. We have a very similar setup (Jekyll-generated static site + S3) for our website[1], and reading through this brings back memories of getting it set up (and is a friendly reminder to go back and gzip some of our CSS files).

The biggest plus of this setup is that once it's deployed you don't think about it. It just works and you never think about scaling. Oh, and it's cheap (seriously, it's peanuts a month, as all you pay for is bandwidth at $0.10/GB).

The biggest negative is getting SSL. CloudFront supports it but it's expensive ($600/mo, see [2]). Compare that to the pennies it costs to host the non-HTTPS site on S3. In our case our cloud app is on a completely separate domain (SSL-only) and our public site is informational only, so the trade-off works. The only SSL-enabled link on our public site is for our contact GPG key, and it's linked directly to the HTTPS S3 URL.

[1]: http://www.jackdb.com/

[2]: http://aws.amazon.com/cloudfront/pricing/

mattdeboard 3 hours ago 0 replies      
One big warning here.

If you want to serve static content for multiple domains (e.g. somefont.ttf for foo.example.com, bar.example.com and baz.example.com from a single CloudFront distribution), CloudFront is not your solution, because CloudFront does not vary its cache on the Origin header. So if your first visitor loads foo.example.com/static/fonts/somefont.ttf, the Access-Control-Allow-Origin header for somefont.ttf will be cached as "foo.example.com". Subsequent requests for that file from (bar|baz).example.com will fail with a CORS error.

It was a pretty shocking thing to find out. We've concluded AWS/CloudFront isn't a viable CDN until this is fixed. Based on the following thread, it isn't clear when or if it will be fixed: https://forums.aws.amazon.com/thread.jspa?threadID=114646#

neals 9 hours ago 2 replies      
Hi! I hope somebody can answer me this.

Why do you need this DNS routing? I Googled and saw a large number of "hosted DNS" services on offer, but I don't understand something:

I have a small site. It runs over at Digital Ocean. I point the DNS records of the domain name to the Digital Ocean server by putting them into the text boxes at my domain-name reseller.

Where in all this would I require a more advanced solution?

HeyImAlex 9 hours ago 0 replies      
If you need an S3 deployment library for stuff like this, I'm planning a major merge on mine (s3tup) later tonight or tomorrow. It uses YAML files to declaratively control configuration of buckets and keys, and makes it nicer to do more complex things like setting appropriate headers based on pattern rules. Check it out here.


ctcliff 10 hours ago 1 reply      
I wrote an npm module to automate this workflow. You can read about it at http://caisson.co/.

Simplifies the process to a couple commands:

  $ caisson init yoursite.com
  $ caisson push

bobfunk 7 hours ago 0 replies      
I built BitBalloon (https://www.bitballoon.com) to simplify all of this, while bringing benefits such as atomic deploys, built-in form processing, automatic gzipping, bundling and minification of your assets and perfect cache headers.

We have a comparison with S3 here: https://www.bitballoon.com/blog/2013/12/03/bitballoon-amazon...

davidcollantes 10 hours ago 2 replies      
I use Namecheap DNS (free, as they are my registrar). I can control everything, including the APEX. It has never failed me.

And for hosting, Github pages (Jekyll rocks!) do a great job. I think you are still paying too much, Paul.

dirktheman 8 hours ago 3 replies      
Stammy! Nice writeup! There's a really simple way of redirecting your naked domain to the www-bucket at S3: just point the naked domain to ip and it will redirect automatically. Just be warned it's a free service.
nthitz 10 hours ago 1 reply      
Great writeup. At the end he links to the AWS docs for this whole process which I found equally if not more helpful. http://docs.aws.amazon.com/gettingstarted/latest/swh/website... but OP's tutorial definitely has some extra informative tips
SkyMarshal 4 hours ago 0 replies      
Paul's general workflow also works with just Grunt and one of its many S3 plugins. For example, you can clone the Bootstrap github repo (which comes with a nice Grunt build config), npm install an S3 plugin, add S3 deployment tasks to Gruntfile.js, and boom - static site generator and deployer.
subpixel 10 hours ago 1 reply      
I built my first mobile-first layout using his writeup(s), and will likely move from Heroku to S3 using this one. High five.
applecore 10 hours ago 4 replies      
I feel like SSL/TLS is a requirement for websites in 2014.

Do Amazon S3 and CloudFront support HTTPS?

Wouter33 10 hours ago 1 reply      
I'm already hosting a website with this setup. Works perfectly and is blazing fast. I recommend it for everyone.

The website is a static marketing front for a web app that is served from an SSL subdomain on another cluster. The only thing I'm unsure about is that I want to offer a single-field e-mail signup on the front page, which, of course, will be without SSL in this setup. What would you do? Skip the quick signup and put the whole signup on the subdomain, or keep the signup form with a POST to the SSL page (less secure)?

elliottkember 9 hours ago 0 replies      
Nice! We actually built a service to do this: https://getforge.com/ including a few other nice static hosting tweaks. Takes the hassle out of dealing with Amazon.
LogicX 6 hours ago 0 replies      
FWIW, after evaluating many solutions, I'm switching my DNS from zerigo to DNS.he.net - free, featureful, and backed by a company I believe will be around.
justinhj 8 hours ago 0 replies      
A cost-effective alternative to this (I'm open to being corrected) is to use a cheap server (say, a $5/month box from Digital Ocean) and Cloudflare (which is free).
The World's Best Bounty Hunter Is 4-Foot-11. Here's How She Hunts wired.com
148 points by danso  11 hours ago   71 comments top 18
latj 10 hours ago 9 replies      
It's strange how casually they mention her friends in law enforcement who do illegal searches for her.
sequoia 8 hours ago 0 replies      
my biggest takeaway from the article:

"The most troubling lesson she learned from Mullen, Gomez says, is how readily misleading information can migrate from a posting on an Internet forum to official status. 'In a second, what's false becomes true,' she observes. 'All it takes is for one person to put it on the record.' That seems to be what happened with Mullen's Most Wanted status. A spokeswoman from the US Marshals Service told WIRED that Deputy Sheasby knew nothing about a $2 million cybertheft by Mullen until he was told by an investigator, and that he'd passed on the story only because he felt obliged to make other investigators aware of everything he had heard."

Looks like the barrier between social network/forums etc. & official record are pretty porous.

bostonpete 10 hours ago 2 replies      
> To track down the fleet of Caterpillar wheel loaders taken by the Peruvians, Gomez reached out to the estranged wife of the familys patriarch

In other news, the estranged wife of a Peruvian crime boss turned up dead at 8:30 this morning...

ricardobeat 8 hours ago 5 replies      
$10k for what looks like a few months' work. Looks like a successful case of outsourcing, but the thought of private law enforcement, of mercenaries intruding into people's lives and chasing people with guns, doesn't sit very well with me. Who is responsible if everything goes wrong?
larrydag 11 hours ago 1 reply      
Skip tracing is not bounty hunting but it makes for a great article title. Skip tracing is finding a debtor for a lender. The bounty hunter gets the collection or asset from the debtor.
ck2 9 hours ago 1 reply      
Glad it is just thieves and not terrorists who can so easily fool law enforcement, such that private contractors have to be used.
polskibus 10 hours ago 1 reply      
Now that she revealed her face, identity and some of her methods, she'll be much easier to avoid.
jfmercer 10 hours ago 4 replies      
How is it even possible to rank bounty hunters? This article's title is just sensationalist link bait.
js2 3 hours ago 0 replies      
I am reminded of recovery specialist http://en.m.wikipedia.org/wiki/Max_Hardberger
anonymouscowar1 10 hours ago 2 replies      
So apparently the only thing this guy had going for him was that he could print checks with magnetic ink and fake a caller id. Both are trivial. Wow.
ableal 7 hours ago 0 replies      
Good story. The insightful bit, perhaps not unlike http://en.wikipedia.org/wiki/Chaff_%28countermeasure%29 :

The most troubling lesson she learned from Mullen, Gomez says, is how readily misleading information can migrate from a posting on an Internet forum to official status. "In a second, what's false becomes true," she observes. "All it takes is for one person to put it on the record."

P.S. Wired's comments are worth a look.

thret 8 hours ago 1 reply      
Who does the photos for these? They are clearly just using unrelated images and fitting in a certain number of pictures per page.

Good stories do not need to be embellished with images of apartment windows or empty parking lots.

JimA 4 hours ago 1 reply      
Anyone know what makes the "Mastercheck Keypad and Printer" so special? I can buy cheap magnetic toner and put it in my laser printer to get magnetic encoded checks, but my understanding is most banks don't rely on that much any more in favor of optical recognition. That was one bit that seemed a bit hyperbolic.
spiderPig 9 hours ago 0 replies      
This'll make a good "Catch Me If you Can 2"
vaadu 7 hours ago 0 replies      
What was disappointing about the article was that there were no pictures of Mullen, his cars, his yacht, or his properties.
PavlovsCat 4 hours ago 0 replies      
I'd love to read an article about that computer she was "forced" to build at age 10. Now that sounds fascinating.
squirejons 3 hours ago 0 replies      
ah, the neoliberal media once again glorifying their weapons of economic doom, the debt collectors. These demons should be demonized instead.
patrickmay 10 hours ago 0 replies      
Now I want to read another Stephanie Plum novel.

Just kidding. I wouldn't really read that fluff. Honest.

Hindley-Milner in Clojure lispcast.com
117 points by ericn  12 hours ago   56 comments top 10
exDM69 11 hours ago 2 replies      
Here's the Hindley-Milner implementation (in Haskell) from a toy compiler project of mine. It was really enlightening to write it and surprisingly simple.


This was also the first time I used monad transformers and almost the first non-IO monad application (I've used ST and Parsec before) I have dealt with. If you compare my code with the book source (Peter Hancock's type checker in Peyton-Jones' "Implementation of Functional Programming Languages", link in source code comment), my version using monads is a lot simpler to follow than the original, written in a pre-Haskell functional programming language called Miranda with no monads.

The type checker is a "pure function": it has inputs and outputs but no side effects. In the code, though, you need to 1) generate unique "names" and 2) bail out early on type errors. I solved this using the Error and State monads. The Miranda code used an infinite list of numbers for unique names and cumbersome tricks to handle type errors.

__--__ 2 hours ago 1 reply      
All this talk about formal type theory, but where are the references to the relevant studies? Where's the data? The few studies[1][2][3][4] I've found are inconclusive one way or the other, and none of them focus on error rates. I found another conversation about how to go about studying error rates in dynamically vs statically typed languages, but all I really found was this article studying the effect of hair style on language design[5].

[1] http://pleiad.dcc.uchile.cl/papers/2012/kleinschmagerAl-icpc... - maintainability

[2] http://dl.acm.org/citation.cfm?id=2047861&CFID=399382397&CFT... - development time

[3] https://courses.cs.washington.edu/courses/cse590n/10au/hanen... - development time, take 2

[4] http://pleiad.dcc.uchile.cl/papers/2012/mayerAl-oopsla2012.p... - usability

[5] http://z.caudate.me/language-hair-and-popularity/

willismichael 11 hours ago 6 replies      
I find it curious that the venn diagram seems to indicate that a sizable subset of people who are familiar with type theory don't advocate either static typing or dynamic typing.
michaelochurch 9 hours ago 2 replies      
I'm familiar with type theory and (often) a proponent of dynamic typing.

It depends on what you're doing. If you're building cathedrals-- high-quality, performance-critical software that can never fail-- then static typing is a great tool, because it can do things that are very hard to do with unit testing, and you only pay the costs once in compilation. There are plenty of use cases in which I'd want to be using a statically typed language like OCaml (or, possibly, Rust).

If you're out in the bazaar-- say, building a web app that will have to contend with constant API changes and shifting needs, or building distributed systems designed to last decades without total failure (that may, like the Ship of Theseus, have all parts replaced) despite constant environmental change-- then dynamic typing often wins.

What I like about Clojure is that, being such a powerful language, you can get contracts and types and schemas but aren't bound to them. I like static typing in many ways, but Scala left me asking the question, any time someone insists that static typing is necessary: which static type system?

nabla9 10 hours ago 2 replies      
Why not both?

The GHC Haskell compiler has a -fdefer-type-errors flag. The SBCL Common Lisp implementation has an option to turn type warnings into errors. Extending -fdefer-type-errors and creating better type checkers for dynamic languages could achieve the best of both worlds.

chongli 10 hours ago 3 replies      
Dynamic typing is just a special case of static typing where there is only one type!
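One common reading of that quip (I believe it traces to Bob Harper's "unityped" framing, though treat the attribution as my recollection): the single static type is a sum of every runtime shape, and "dynamic" operations dispatch on its tag. A toy C++17 sketch, with names of my own invention:

```cpp
#include <stdexcept>
#include <string>
#include <variant>

// The "one type" of a unityped language: every value inhabits it.
using Dyn = std::variant<long, double, std::string>;

// A dynamically-checked plus: inspect runtime tags, as a dynamic
// language's runtime would, all inside one static type.
Dyn plus(const Dyn& a, const Dyn& b) {
    if (std::holds_alternative<std::string>(a) || std::holds_alternative<std::string>(b))
        throw std::runtime_error("runtime type error: cannot add strings");
    // Promote to double if either operand is a double.
    if (std::holds_alternative<double>(a) || std::holds_alternative<double>(b)) {
        auto num = [](const Dyn& d) {
            return std::holds_alternative<double>(d)
                       ? std::get<double>(d)
                       : static_cast<double>(std::get<long>(d));
        };
        return Dyn{num(a) + num(b)};
    }
    return Dyn{std::get<long>(a) + std::get<long>(b)};
}
```

The point of the joke: all the "type errors" are still there, they just surface at runtime, inside a statically well-typed program.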
hardboiled 7 hours ago 3 replies      
Disagree about the idea that those who are unfamiliar with type theory prefer dynamic typing.

Typing preferences are usually due to trends in language usage having little to do with knowledge.

Plenty of java programmers use static typing without ever having to understand type theory.

But look at the history of language designers/implementers:

Dan Friedman

Gilad Bracha http://www.infoq.com/presentations/functional-pros-cons

Guy Steele

Rich Hickey

All of these guys have worked on static languages, have a keener understanding of type theory than most, and yet they seem to promote dynamic languages, at least when it comes to their pet languages.

kd0amg 8 hours ago 0 replies      
I think you should implement Hindley-Milner in the language of your choice for a small toy λ-calculus.

Did this a little while ago (as a stepping stone to building an inference system for a more complicated calculus).


moomin 10 hours ago 1 reply      
As an aside, if you just want curried functions in Clojure, try poppea.


elwell 9 hours ago 1 reply      
Douglas Crockford is a proponent of dynamic typing. (At least from what I read in the beginning of "JavaScript: The Good Parts".)
The architecture of Stack Overflow [video] dev-metal.com
70 points by schmylan  9 hours ago   41 comments top 9
merak136 7 hours ago 4 replies      
Some points that I find interesting:

[1] StackOverflow has VERY FEW tests. He says that StackOverflow doesn't use many unit tests because of their active community and heavy usage of static code.

[2] Most StackOverflow employees work remotely. This is very different than a lot of companies that are now trying to force employees back into an office.

[3] Heavy usage of Static classes and methods. His main argument is that this gives them better performance than a more standard OO approach.

[4] Caching even simple pages in order to avoid performance issues caused by garbage collection.

[5] They don't worry about making a "Square Wheel". If their developers can write something more lightweight than an already-developed alternative, they do! This is very different from the normal mindset of "don't reinvent the wheel".

[6] Always using multiple monitors. I love this. I feel like my productivity is nearly halved when I am working on one tiny screen.

Overall, I was surprised at how few of the "norms" that they follow. Either way, seems like it could be a pretty cool place to work.

carsongross 8 hours ago 3 replies      
The most important thing, technically, is having great developers who ship.

For pith's sake, I want to say "Everything else is noise," but that isn't true. Everything else can help or hurt, depending on the application, how doctrinaire the application of a given approach/methodology is, the organizational knock-on effects (e.g. "Mr. Tough Guy Testalot" holds up the release train or nukes your architecture to make it 'testable'), etc. But, seriously, "great developers who ship" is really what moves the needle.

esw 8 hours ago 0 replies      
Here are the slides for anyone who's interested: https://speakerdeck.com/sklivvz/the-architecture-of-stackove...
alexgartrell 4 hours ago 1 reply      
Dear any Stack Overflow Developers,

Can you describe the network infrastructure in finer detail? Specifically what type of load balancer are you running?

And what's peak RPS? Where are your network peaks? (I'm guessing major peak US Pacific and minor US Atlantic?)

skittles 5 hours ago 1 reply      
He mentioned that they use the ServiceStack.Text library. I've looked into ServiceStack recently (using the NuGet packages), but found the library to be pay-to-play. There's an older version (v3) that is BSD-licensed and still being maintained. Do any of you have experience with it? I have grown tired of Microsoft pushing new solutions to the same problem (REST services with WCF and then ASP.NET Web API).
y0ghur7_xxx 7 hours ago 1 reply      
I would love to know more about the Databases:

- Are they used for different things on the sites?

- Is data partitioned across tables?

- Are they all SQL Server instances?

dlazerka 4 hours ago 0 replies      
I wouldn't trust Joel Spolsky's code expertise -- just look at Excel internals! Nevertheless, Stack Overflow is super cool. But that says nothing about its architectural quality.
schmylan 8 hours ago 1 reply      
Before the title was moderated there was an important tidbit. StackOverflow doesn't unit-test. Fascinating.
notastartup 5 hours ago 1 reply      
is there an open source, self-hosted version of stack overflow that you can deploy on your own domain?
Jumpers and the Software-Defined Localhost coreos.com
35 points by vishvananda  6 hours ago   20 comments top 6
WestCoastJustin 5 hours ago 1 reply      
This is an interesting idea. It might be useful to have a high level architecture diagram to illustrate what is happening between the containers.

A couple issues:

1. Using localhost seems like a bad idea (I have the expectation that traffic is local to the instance if using localhost).

2. Managing more than a couple instances of this will be unmaintainable if done manually. There will need to be some controller logic happening.

Maybe I am missing something, but why not add a second dedicated virtual ethernet adapter where all your containers can talk (works across containers and across servers)? This is traditionally how you would handle something like this. We have dedicated nonroutable reserved address space configured to handle all this internal datacenter traffic.

dsl 5 hours ago 3 replies      
It is an interesting proposal.

It starts to fall over when you have, for example a MySQL slave that needs to connect to a MySQL master. But now the listening slave on 3306 is trying to connect to localhost:3306.

I would strongly recommend that the CoreOS team reach out to ARIN (the IANA operator) to get a special use /24 assigned for these "magic addresses" and not hijack

skybrian 5 hours ago 1 reply      
"localhost" seems like a really bad name for this when in reality, the socket is anything but local. Instead of "localhost:3306" how about something like "docker:3306"?
jared314 4 hours ago 2 replies      
This is a great step forward.

But, I am still hoping for a local OpenFlow compatible software switch, that can exploit network namespaces as a local optimization. That way you can leave your cluster's networking setup, from network edge to docker container, to a centralized controller.

nl 3 hours ago 1 reply      
How different is this to Docker link containers[1] for service discovery?

It seems to me like this is a network layer proposal, while link containers are more about name based discovery at the Docker layer.

[1] http://docs.docker.io/en/latest/use/working_with_links_names...

vishvananda 6 hours ago 0 replies      
This is some pretty interesting stuff. I've been working on something similar in my spare time. The cost of running everything through a proxy can be mitigated by having the proxy do other smart things like load balancing and/or autoscaling.
Shell programming with bash: by example, by counter-example might.net
110 points by ColinWright  12 hours ago   22 comments top 7
fhd2 11 hours ago 3 replies      
Not bad, the first time I've seen most that matters about bash scripting in one place. (I miss [[ ]] though.)

That said, I try not to do bash scripting. It's always been a boiling-the-frog thing for me: you start with a cute script of a couple of lines, then you make it more robust and add some more stuff, and before you know it, you've written a non-trivial program in an awfully quirky programming language.

But I'm still happy I've been there and know my way around bash fairly well. Makes for some pretty powerful quick one liners.

minimax 8 hours ago 1 reply      
Modern Bash (a.k.a. the bash you get with Linux but not with OS X) even has associative arrays similar to what you would call a dict in Python or a hash in Perl.

    $ declare -A capitals
    $ capitals["Illinois"]="Springfield"
    $ capitals["California"]="Sacramento"
    $ echo ${capitals["California"]}
    Sacramento
    $ echo ${capitals["Illinois"]}
    Springfield
    $ for k in ${!capitals[@]} ; do
    >     echo "The capital of $k is ${capitals[$k]}"
    > done
    The capital of Illinois is Springfield
    The capital of California is Sacramento

Kurtz79 11 hours ago 1 reply      
I like how this article presents you with just the basics, yet everything that's really needed to get up to speed quickly and with little fuss.

Most bash tutorials I have seen take forever to get to the parts I usually need a quick refresher on.

A very good quick start/reminder/advanced cheat sheet, good work.

drivers99 5 hours ago 0 replies      
The linked relational shell programming article was really fun, informative, and inspiring.
VLM 12 hours ago 1 reply      
A nice article. Needs some example apps, though. My suggestion: in the mid-to-late '90s I was making dough by writing CGI apps in BASH. Yes, BASH. Lots of:


    echo "Content-type: text/html"
    echo ""
    echo "<pre>"
    # run some CLI status-type app here, or du -m a file, or whatever
    echo "</pre>"
    exit 0

Doing "web stuff" could be exciting for the learner.

(edited to add, doing "modern" web development in BASH might count as one of the counterexamples, no matter how fun and easy it is)

GlitchMr 8 hours ago 0 replies      
This is why I prefer http://fishshell.com/ to bash. Bash just has too much noise in my opinion (like the "then" keyword).
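For what it's worth, the "noise" being complained about is the if/then/fi ceremony; even staying inside bash you can often shed some of it (a toy example, not fish syntax):

```shell
#!/bin/bash
x=5

# the full ceremony: if ... ; then ... fi
if [ "$x" -gt 3 ]; then
  verbose="big"
fi

# the same test as a short-circuit list, with no then/fi at all
[ "$x" -gt 3 ] && terse="big"

echo "$verbose $terse"
```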
sebnukem2 6 hours ago 0 replies      
Indispensible? What does it mean?

ed. "indispensable" got it.

Report: NSA bulk metadata collection has no discernible impact arstechnica.com
88 points by fortepianissimo  11 hours ago   24 comments top 6
suprgeek 2 hours ago 0 replies      
"No discernible Impact" on preventing acts of terrorism.

But that was never the end goal of bulk metadata collection. Terrorism was merely the excuse to invoke those powers. The end goal is, and always has been, to give the U.S. Govt unfettered access to all communications of its citizens, for whatever purposes it desires, without having to go through the legal process (not even the ridiculously permissive FISA court).

Prediction - They will use every excuse in the book to hold onto this capability.

jessaustin 9 hours ago 1 reply      
Wow. OK now we see why we had to spend billions of dollars and shred the Bill of Rights. Because a cab driver sent $8500 home to Somalia. It's all clear now. Carry on, patriots.

What may annoy me even more is that, even in this article, Ars needed "balance" so much that they let some executive branch lizard blow his pompous bullshit at us without putting his name on it. That has got to stop. Anonymous speech is for citizens, not tyrants.

fit2rule 9 hours ago 3 replies      
I believe we are watching agitprop being masterfully and skillfully applied to the positioning on this whole metadata issue, even here among the cognoscenti, by the powers in charge of these meetings.

This is a setup for us to all start thinking "oh, metadata doesn't really mean anything". 'It doesn't mean anything, because it's obviously not useful.'

So .. here's my delegitprop thought: If today, the line is "Feds are not achieving anything with metadata", then tomorrow .. or maybe a little later .. the line will become "metadata doesn't achieve anything".

So we'll see.

Who is asking the politicians whether they actually know what the word 'meta-' means? Because .. those who are collecting "meta-"data can facetiously use that word from multiple angles .. and these totalitarians (call them what they are, folks) are fastidious about never using words in this realm that they have not, internally at least, clearly defined.

The NSA has its own dictionary of the English language, for internal use. It is used to promote internal doctrine, and it is used externally to position. Information positioning is a refined, pure, utter science in this realm. So this NSA dictionary should be on the table in front of any Senator, doing their job, in these meetings. I highly doubt it is relevant what I think, but the only way out of this mess is a total dox'ing of the agency, and a replacement with something else ..

Vivtek 8 hours ago 0 replies      
"This capability was put in place after 9/11 for a good reason," said a senior administration official who asked not to be identified discussing sensitive deliberations.

"We can't tell you exactly what those reasons are, but trust us, they're really important."

MaysonL 51 minutes ago 0 replies      
Surveillance of Citizens Is ALWAYS Aimed at Crushing Dissent[0]


fortepianissimo 11 hours ago 1 reply      

"A new paper published Monday by the New America Foundation... closely examines the 225 cases... the controversial bulk collection of American telephone metadata... appears to have played an identifiable role in initiating, at most, 1.8 percent of these cases."

(That's 4 cases)

A member of our community is missing, help find him izs.me
553 points by mcgwiz  1 day ago   61 comments top 18
8ig8 1 day ago 2 replies      
Please read this. It seems to be the source of the original post and provides additional details (news articles). It is also easier to read.


Edited to better describe the link.

bazzargh 1 day ago 1 reply      
Hope Luke turns up ok. Here's a story of someone local to me who disappeared but came back: http://thepopcop.co.uk/2013/12/the-boy-who-went-missing-from...

The point being (as Tom says in his story), if you feel alone - talk to someone. It's ok not to feel ok.

mcantelon 1 day ago 0 replies      
According to another person who worked with him when he lived in New York, he disappeared for 5 days once there. So hopefully he'll turn up.
dhimes 15 hours ago 0 replies      
Beyond giving my heartfelt wishes that he turns up ok, I have to give kudos to Yahoo for hiring a private investigator to try to find him.
MojoJolo 1 day ago 2 replies      
There are some inconsistencies with regard to his tattoos between the blog and the website.

The blog mentioned that the rm -rf / was on his left chest, while the website said it was on his right chest. This is also the case for his sacred heart tattoo. Based on the picture, the sacred heart tattoo is on his right chest.

I hope Luke is fine and okay.

elwell 1 day ago 2 replies      
His last tweet just says "Ok." [0]

[0] - https://twitter.com/luk

duffdevice 9 hours ago 3 replies      
I don't mean to be callous, but what exactly are you asking for help with? He's not a 4 year old child, he's not an elderly person with dementia. He's a grown man. If he can't be found, apparently he doesn't want to be found. Are we concerned that he is somehow wandering around town without access to any means of communication? I don't get this.
VMG 20 hours ago 1 reply      
Reminds me of this guy: https://twitter.com/mauricemach

Could never figure out what happened to him

jacquesm 1 day ago 1 reply      
Ominous title on that last twitter picture.



jaseemabid 23 hours ago 0 replies      
Hey Luke,

I hope you are ok and are reading HN right now. Come back, the world is missing you.

SG- 15 hours ago 0 replies      
what if he doesn't want to be found?
industriousthou 23 hours ago 0 replies      
Hope the guy's okay and he turns up. It's sort of touching to see his coworkers come up with this. I wonder though, if he had issues with anxiety or depression, if the attention could push him away.
gotrecruit 23 hours ago 2 replies      
sorry i can't be of more help, but i'm curious as to why the fact that he "has travelled to Thailand" is relevant...
pjbrunet 1 day ago 1 reply      
"rm -rf /" is pretty cool.

Hope he's not in trouble.

tensafefrogs 1 day ago 1 reply      
What's the deal with the creepy hookah/booze bottle picture at the bottom of that page?
jheriko 12 hours ago 0 replies      
it would be nice if this could somehow be a banner across the top of HN...

hope he is found soon.

bhartzer 1 day ago 1 reply      
Seems as though he was in the SF area. For those of us who are not in California, is there anything we can do? I'm in Texas...
failho 19 hours ago 0 replies      
I'll share this - not that I'll be any help (nowhere near you). Hope he turns up ok!

My friend disappeared before and it's amazing to see how many people we reached with just facebook and twitter. In my case it sadly didn't have a happy ending - but the support you receive from complete strangers is just mind blowing!

Byte Magazine Smalltalk-80 Issue archive.org
65 points by t1m  10 hours ago   26 comments top 12
Mithrandir 5 hours ago 2 replies      
I found this amusing "complaint" to Byte in this same issue (https://archive.org/stream/byte-magazine-1981-08/1981_08_BYT...):

  April's Foolers

  The hasty printing of data concerning our Black-Hole Diode is not only an invasion of our corporate security, but is not in the national interest. (See the April 1981 BYTE, page 363.)

  Our device, which is covered by US patents and is classified by the National Security Council as "Top Secret," should not be pandered about in a general-circulation magazine for all to see, especially when those not friendly to our nation may learn details of this device. Furthermore, how BYTE learned of the existence of our device is unknown to us, but be advised that stricter security has been imposed to forestall any further lapses. Be also advised that the company BYTE lists as being responsible for creating the Black-Hole Device, Spatial Regression Ltd, will shortly receive summons from our legal department. Any repetition or further disregard for national security regarding this device or its uses in particle-beam research will bring about swift and final action.

  J.W. Kelty
  Chief Executive Officer
  Code-7 Electronics
  POB 1505
  Modesto CA 95353

And Byte's response:

  Each year, the BYTE staff enjoys slipping a few joke items into the April issue for our readers to find; some are subtle, some are outrageous. Response to this year's foolishness was greater than in any year past. In case you missed it, look for:

  "Lost Dutchman's Bug" (photo), page 302
  "Black-Hole Diode" (new product), page 363
  "Noise-Emitting Diode" (new product), page 364
  "Slightly Used Cray-1" (unclassified ad), page 414

  So you see, there's no need for "swift and final action" (gulp!) -- we were just kidding! By the way, where should we return the sample device that was included with your letter? . . . CPF

ghc 1 hour ago 0 replies      
Page 196:

An ad for "The Last One", a system that claims to write your programs for you in a bug-free fashion by asking you questions in plain English. "Coming soon..."

I wonder at the context behind this. The idea that computers could even remotely be capable of this seems crazy now, let alone in the '80s. But maybe it wasn't as high level as I'm thinking....

analog31 53 minutes ago 0 replies      
For me, that's not the Smalltalk-80 issue, but the Z8-BASIC issue. My mom was studying computer science, and subscribed to Byte. I was interested in math, electronics, and programming at the time. I was in high school and had taken a course in BASIC. Ciarcia's Circuit Cellar was transformative for me.
Erwin 6 hours ago 0 replies      
Page 53 has an ad from Paul Lutus (lutusp @ HN occasionally) for TransForth: https://archive.org/stream/byte-magazine-1981-08/1981_08_BYT...
salgernon 9 hours ago 4 replies      
I'm so torn. I've got BYTEs from 1976 through the early 80s and I love flipping through them - there's just nothing like having a paper copy to find things you didn't even know you were looking for. But the space they take up is getting too valuable... And now archive.org has all those magazines digitized... It's hard to prioritize the classic books and magazines vs other non-essential life detritus.
gonzo 1 hour ago 0 replies      
I read this, on paper, in 1981.

I was lost, but at least understood that I was trying to read something important. Something well beyond Fortran, Basic and Pascal.

jorgem 8 hours ago 1 reply      
I remember this issue vividly. I was 14. :)

I couldn't grok what smalltalk was. :(

agumonkey 9 hours ago 0 replies      
Nice complement to the Self release thread here https://news.ycombinator.com/item?id=7047953
dded 2 hours ago 1 reply      
Is the "Microsoft" in the ad on p 111 the Microsoft? I don't recognize that logo. The address on the ad is Bellevue, WA.
rayiner 8 hours ago 1 reply      
Byte Magazine was awesome. So much more technical and in-depth than what computer magazines offer today. One of the most interesting issues I ever read was an early 1990's issue that I found several years later (but before we had internet outside of AOL's walled garden). It had two major stories: new operating systems and new processors. It covered OS/2, how the new NT micro-kernel could host different personalities like Windows and POSIX, OSF/Mach and this object-oriented OS called Taligent that never took off. It also covered Alpha and PowerPC and talked about how RISC would take over from x86.

Ironically, it all came to pass in a slightly different way than they imagined. Now, everyone is running a Mach-based UNIX on RISC hardware, except instead of a workstation it's your iPhone.

pan69 9 hours ago 2 replies      
Love to read it but I can't because it's stuck in some gimmick. Why is Archive.org doing this?
jrobbins 7 hours ago 0 replies      
I have that issue, it's my favorite!
Why the Climate Corporation sold itself to Monsanto newyorker.com
34 points by yapcguy  5 hours ago   25 comments top 7
Daishiman 2 hours ago 1 reply      
It seems odd to me that such an appeal to reason would have a prelude of several paragraphs of emotional rhetoric, with no references to the science that backs up his position.

It also strikes me as amazingly, incredibly naive to think that a small, newly-acquired business unit would have any say in how the rest of the corporation operates.

For so much rationality, the clear strawman of attacking anti-GMO positions as purely anti-science is also striking; there are very well-reasoned arguments that go beyond the genetic modifications, which instead talk about genetic IP, food sovereignty, the atrociously excessive use of pesticides that GMO seeds promote, and the destruction of traditional methods for preserving soil quality in place of monocultures that devastate topsoil to the point of complete dependence on Monsanto's products to keep the land productive.

civilian 3 hours ago 0 replies      
When commenting, be careful not to fall into Argumentum Ad Monsantum:


zarriak 3 hours ago 0 replies      
I really dislike his attack on Google. Yes, they are occasionally evil, but they don't own the FCC like Monsanto owns the FDA. It doesn't matter how many people you have in office as long as the head of the government body established to regulate you is run by one of your former employees. Also, why would Monsanto want to sue their consumers? Their policies allow for such small margins that the farmers are worth almost nothing.
throwaway5752 3 hours ago 6 replies      
Funny how the anti-Monsanto position is always caricatured as a liberal anti-GMO person.

I think it is just as likely to be a libertarian person who finds their legal pursuit of farmers who have (frequently accidentally) had their crops cross pollinate with Monsanto patented breeds to be ethically distasteful.

NickSharp 4 hours ago 0 replies      
Because Monsanto gave them a Billion dollars. Next question.
state 3 hours ago 0 replies      
Interesting to stand this up next to the Nest news from today. Is it the market size that makes Nest worth 3x more than the Climate Corporation?
yapcguy 4 hours ago 5 replies      
Given the bizarre letter, the CEO must have known that many of his employees considered Monsanto the enemy. So it looks like the CEO was a sell-out and took the easy money.

Data scientists, programmers and agricultural experts were hired to help build systems and crunch numbers to help farmers make the best from their soil and weather.

By contrast, Monsanto sell magic seeds, resistant to everything, guaranteed to improve yield and profits for farmers... who needs weather analysis when you have Monsanto?

Does anybody know how the employees feel about it? Is the company struggling to retain employees? It seems they are aggressively hiring, but hard to know if this is actual expansion or replacement hires.

A Lisp interpeter in a thousand lines of Bash github.com
81 points by Morgawr  11 hours ago   25 comments top 7
gaius 6 hours ago 0 replies      
I started a project like this about 10 years ago, but then I discovered that you could just compile Lisp on your own workstation and upload it to prod with a .sh extension and no-one would actually check; they would just blindly run it. Not even the size was suspicious. Used the same trick a bit later with OCaml and Haskell: you just compile them as whatever.py and no-one's any the wiser.
mikeash 10 hours ago 3 replies      
This is fascinating. I assumed it was yet another ridiculous attempt to build something in an environment completely unsuited for it, but it seems that they are serious. But they're also sufficiently aware of the craziness of the project that the first thing they do is explain just why the heck they're doing it:


The short version is that bash is the closest thing to being universally available on every UNIXoid system no matter what, and so by writing stuff in bash, you make it so that it can run everywhere. But because bash sucks to program in, this is a minimalist interpreter for a sane language. You can then write programs in that language, and they will only depend on bash and on this interpreter, and the interpreter is simple enough not to need any sort of complex installation.

I can't quite think of a use case for this where it's not worth e.g. installing Python first, but it's an interesting project all the same.

VLM 11 hours ago 3 replies      
As a nostalgic trip, a little over 30 years ago I was playing with Randall Beer's LISP interpreter which ran in MS Basic on a TRS-80 model III, very slowly, as seen on page 176 at this link (this is the first in a multiple article series)


I distinctly remember as a kid it was very slow indeed, but interesting.

I got lost reading the ads. In retrospect, computing used to be a much more expensive hobby than it is today. Not just in relative terms, but in absolute terms. Then again, people are much poorer now, so it's required.

Anyway, since 1983, he became a neuroscience prof and mentions his BASIC LISP on his homepage


The line numbers are not consecutive, but I think he's well under a thousand lines of BASIC, there just aren't enough pages of code in the listing to exceed that.

And yes, this was considered reasonable coding style back then. That is why this generation never shrank in terror at the sight of bad Perl code. Why yes, this is a bit hard to read, but I've certainly seen worse...

mzs 8 hours ago 0 replies      
There's a UUOC in strmap_file; in fact, all those uses of head, tr, and tail could likely be handled by sed alone.
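To illustrate the point (with a made-up pipeline, not the repo's actual strmap_file code), the classic cat-into-head-into-tail-into-tr chain collapses into a single sed invocation:

```shell
#!/bin/bash
printf 'alpha\nbeta\ngamma\n' > /tmp/uuoc_demo.txt

# UUOC version: four processes just to grab line 2 and uppercase it
v1=$(cat /tmp/uuoc_demo.txt | head -n 2 | tail -n 1 | tr 'a-z' 'A-Z')

# one sed: select line 2, transliterate, print
v2=$(sed -n '2{y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/;p;}' /tmp/uuoc_demo.txt)

echo "$v1 $v2"   # BETA BETA
```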
finin 10 hours ago 0 replies      
interpeter => interpreter
cbsw 3 hours ago 1 reply      
Even + - * / don't support multiple arguments: (+ 1 2 3) would be 3. Stupid.
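For reference, a variadic + is only a few lines even in plain bash (a sketch of the behavior the comment wants, not the interpreter's actual code):

```shell
#!/bin/bash
# fold any number of integer arguments with +, so (+ 1 2 3) style calls work
plus() {
  local acc=0 n
  for n in "$@"; do
    acc=$(( acc + n ))
  done
  echo "$acc"
}

sum=$(plus 1 2 3)   # 6, not 3
```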
       cached 14 January 2014 05:02:01 GMT