hacker news with inline top comments    15 Jul 2012 News
The long and sordid history of the user-agent HTTP header field webaim.org
36 points by damian2000  2 hours ago   8 comments top 5
decklin 45 minutes ago 3 replies      
I wonder why browsers with a modern automatic-update process don't set their user-agent to something that discards all this madness ("Chrome/23.4.5678 (Windows)", or similar) for the cutting-edge/nightly builds only (or even betas, if they wanted to discourage casual users from switching to them, but I don't think that's the case at this point). Surely their users have signed up for a little breakage in exchange for the latest features? And if they actually got website operators to stop, or at least fix, their sniffing, the whole prisoner's-dilemma situation would disappear.

(I guess this assumes that the huge user-agent that my Chrome is currently sending is necessarily bad, and in the real world maybe no one really cares...)
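
If a browser did ship the stripped-down user-agent decklin proposes, parsing it would be trivial. A minimal sketch, assuming a hypothetical "Name/version (platform)" format (not any real browser's UA):

```javascript
// Hedged sketch: parse the hypothetical simplified UA format "Name/version (platform)".
// Real UA strings are far messier; this only handles the proposed clean form.
function parseSimpleUA(ua) {
  const m = /^(\w+)\/([\d.]+) \(([^)]+)\)$/.exec(ua);
  return m ? { name: m[1], version: m[2], platform: m[3] } : null;
}

// parseSimpleUA('Chrome/23.4.5678 (Windows)')
//   → { name: 'Chrome', version: '23.4.5678', platform: 'Windows' }
```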

y0ghur7_xxx 53 minutes ago 0 replies      
...and everyone pretended to be everyone else, and confusion abounded.

And then JavaScript-driven feature detection came to be, and everyone thought it was a good idea. And the people wrung their hands and wept.
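
For contrast, a minimal sketch of what feature detection looks like, using simulated environment objects in place of real browser windows (the `attachEvent`/`addEventListener` split is the classic example):

```javascript
// Feature detection asks "can this environment do X?" rather than "which browser is this?".
// The objects below are simulated stand-ins for browser window objects.
function pickEventApi(win) {
  if (typeof win.addEventListener === 'function') return 'addEventListener';
  if (typeof win.attachEvent === 'function') return 'attachEvent'; // legacy IE
  return null;
}

const modern = { addEventListener: function () {} };
const legacy = { attachEvent: function () {} };
```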

mnutt 16 minutes ago 0 replies      
Not only the user agent, either. Try `navigator.appName` in any browser's JavaScript console, and you'll get "Netscape". `navigator.appCodeName` in most browsers returns "Mozilla".

Mike Taylor gave a talk about this and more at yesterday's GothamJS conference:


JBiserkov 1 hour ago 0 replies      
Educational and fun!

All this could have been avoided if Webmasters used <noframes>, but I'm not sure when it was added to HTML.

w0utert 1 hour ago 0 replies      
That was interesting and fun to read, never thought about the User-Agent string and why it's so messed up.
TapChat is mobile IRC done right codebutler.com
15 points by davepeck  1 hour ago   6 comments top 5
spindritf 15 minutes ago 0 replies      
I use irssi on the server with ConnectBot https://play.google.com/store/apps/details?id=org.woltage.ir... and Hacker's Keyboard https://play.google.com/store/apps/details?id=org.pocketwork... . Why is SSHing from a phone no fun? Honest question, what am I missing out on?
mandarkk 11 minutes ago 0 replies      
No it isn't.

- The only way to stop receiving notifications is to uninstall the app, there is no disconnect/detach option.

- It doesn't have support for identd (necessary for old ZNC bouncers, and for some bitlbee builds).

- The UI is very young, not comfortable to use, no easy channel switchers, etc.

- No buddy/friends list.

- Install fails on "supported" Node.js versions; it really only works on v0.8.0.

- Complete lack of settings, including what I said about the most basic thing, connecting/disconnecting.

I had a lot of hopes for this, I use IRC with a few servers and a personal IM gateway (bitlbee), and for now I'm using AndChat which is a very good Android client but the permanent connection makes the battery drain very fast.

After having trouble getting the server up (Node.js version and such), I had trouble setting up the servers to connect to: the settings for creating a new server were different from those for editing it, and the password wasn't even remembered when creating one, so I had to enter it again in edit mode.

On top of that, the lack of identd support forced me to update ZNC and bitlbee to accept only a server password in "user:pass" form.

After I finally got it up, for the first few minutes it was a dream: notifications came very fast and the app was good-looking. Then I tried detaching from the server: there's no way to do that. After I killed the process, notifications still came up, and even after "logging off" from my server I would still get them! The only way to stop them was uninstalling the app.

Which I did. Hopefully I'll reinstall it when these problems get fixed.

a3_nm 42 minutes ago 0 replies      
I'm curious about why you need a specific bouncer for this. I can't see any doc about what it does that usual bouncers don't do (except for "[existing bouncers] have many limitations such as not being able to sync private conversations across all of your devices", which I don't get: the bip bouncer which I use syncs private conversations in a very reasonable way).
tobylane 17 minutes ago 0 replies      
Is this (irc bouncer) the sort of thing that should work with Jabber? Keep the parts interchangeable?
tallowen 55 minutes ago 1 reply      
I would pay for this if I could use my own ZNC bouncer.
Damages like the $147M verdict against RIM could make smartphones unaffordable fosspatents.com
23 points by FlorianMueller  2 hours ago   5 comments top 2
JonnieCache 47 minutes ago 2 replies      
Bear in mind that this guy is the famous Oracle consultant.
Zenst 1 hour ago 1 reply      
Rather silly if you remove the word "wireless" from the patent and read it through.

All phones have wires in them; that's how their low-level transport layers work. Is a wire perfectly dense, or are there gaps at the subatomic level through which the electrons flow?

If you think of an old sailing battleship as a device and semaphore as the wireless transport layer, then you could equally apply this to Nelson's times.

I also thought configuration of a phone was done via the GSM and even the CDMA standards, in that the registration process with the network and control SMS handle that. What came first, hmmm?

1) The patent should never have been approved given the common sense of the time, and merely adds nothing.
2) Charging per use is just plain old silly.
3) Was the Chewbacca defence used on the jury at all?

Makes you wonder: take all patents, add the word "wireless" as an adjective, and patent each as something new. Wait a bit, then profit. Crazy.

Goldman Sachs and the $580 Million Black Hole nytimes.com
177 points by jbae29  12 hours ago   62 comments top 16
chernevik 3 hours ago 1 reply      
I don't think this happens if someone like Kleiner Perkins had a 10% or 20% stake in Dragon.

A serious VC partner would have been more likely to realize that Goldmans sent out the junior varsity here, and would have the standing and confidence to insist they do better or be fired. They would have asked better questions about the due diligence and been more attentive to valuing the buyer -- valuation is what VC firms do -- and might have spotted the problems themselves. In the worst case, where this disaster still strikes, Goldmans would be far more likely to be reasonable about its responsibilities, lest it spoil its reputation with a well-connected VC.

But maybe I exaggerate the business and financial knowledge of tech VC types?

I don't much like the idea of allowing big VC firms to collect rents based on their reputation. But you cannot expect fee-based labor to be as careful and paranoid as you must be on this sort of thing. Only partnership brings that level of attention. When the stakes get this high, you need a partner capable of taking these responsibilities. If your financial partner or CFO isn't up to the task -- and Dragon's CFO was not -- then you have to fill that gap in the _partnership_.

Another solution would be bringing in a CFO with an equity stake, but this raises the same problem of financial expertise evaluation that sank the founders here.

None of which excuses Goldmans.

confluence 9 hours ago 4 replies      
Reminds me of 4 lessons I read about start-up exits:

Don't swap your stock for another.

Don't deal with anyone with a question mark over their head.

Take the lower valuation with the company you trust more.

Don't celebrate until the cash is in the bank.

All 4 rules were broken here. Seller beware.


When someone swaps a stock, they implicitly value it less than what they get. Hence if you swap your stock with someone else, the buyer implicitly states that your stock is worth more than theirs. Losing deal.

Would you marry someone you didn't trust? No. Then why would you swap your baby for theirs?

Certain return with someone you trust is 10x better than a "certain" return from a flaky agent.

Nullify any agreements that don't put cash in the bank and give you more risk than reward. Take the breaker clause. Or lose everything.

ryanwaggoner 8 hours ago 3 replies      
To me, the lesson here is clear: trust your instincts!

There are too many stories of founders who built a successful company and then let the "experts" run it into the ground. In almost every case, the founder(s) felt like something wasn't right, but they swept their concerns aside, because they had hired "experts" and felt compelled to listen to them.

The reality is that no one has more expertise in your company than you do, and more importantly (particularly in this case), no one cares as much as you do. So yes, surround yourself with experts and seek as much wisdom as you can from them, but (almost) never go against your gut to follow their advice. Your instincts are usually what got you to that point in the first place.

ams6110 17 minutes ago 0 replies      
To the Dragon deal, Goldman assigned four bankers, two in their 20s and one in his early 30s. That wasn't unusual. Although Dragon Systems was worth everything to the Bakers, the company -- with $70 million in revenue and 400 employees -- was small beer on Wall Street.

Illustrating why you don't want to be a small fish in a big ocean. Should have gone with a smaller, hungrier firm.

andreyf 4 hours ago 1 reply      
But on Feb. 29, Dragon received an odd memo from Goldman. It wasn't addressed to anyone in particular at Dragon, and it wasn't signed by anyone at Goldman. The Goldman Four testified later that they had no idea who had sent it. But the memo referred to many of the same due diligence issues that Ms. Chamberlain raised. The memo asserted, however, that Dragon's accounting firm, Arthur Andersen, should do the work, not Goldman. [...] To support the argument that Goldman was not obligated to perform due diligence, the firm points to that mystery memo of Feb. 29, 2000 -- the memo that no one at Goldman has acknowledged sending -- as establishing that Dragon Systems needed to push its accounting firm to explain any red flags or resolve outstanding worries.

Given that GS is now using this memo to cover their asses, it makes me wonder whether someone there knew what was going on...

veyron 11 hours ago 1 reply      
There are a multitude of cases involving technology companies and shenanigans with Goldman Sachs, including Marvell Technologies: http://dealbook.nytimes.com/2011/04/11/marvell-co-founders-s...
anigbrowl 11 hours ago 0 replies      
I made part of my living in the 90s from selling and installing Dragon's software, and was perplexed to see L&H buy the company and then implode. What an appalling story, though I wonder why it has taken so long to end up in litigation.
snorkel 1 hour ago 0 replies      
I don't know if I entirely blame Goldman for this; Dragon bears some of the responsibility for accepting a bad deal from a shady company. After all, Dragon's board of directors all voted to approve the deal without a clear signal from Goldman as to whether it was safe. And sure, Goldman was clearly being unresponsive, but in that case Dragon should have fired Goldman for being unresponsive and sought out another bank to perform the due diligence.
maxidog 9 hours ago 3 replies      
When I sold my company, our own advisers (not Goldman but a famous name) did something which in my opinion is much worse -- I'm 99% sure they told the winning bidder that we'd been prepared to accept a 20% lower bid from their competitor. The winning bidder, of course, then suddenly reduced their bid by 20% on the planned day of completion. The reason our advisers did this is that it was much more important to them to get future business from the buyer, a large multinational, than future business from me and my colleagues.

I feel sorry for the vendors in this case, but you don't do an all-share deal without being extremely cautious about the shares you're taking as payment. Even a pair of PhDs should have known that.

What interests me here is that we have a lot of news stories floating around at the moment lambasting financial companies for relatively minor misdeeds, because that's all journalists can pin on them without getting sued. If the real truth about what goes on begins to leak out the public reaction could be very interesting.

ChuckMcM 9 hours ago 0 replies      
This sort of story makes me feel as horrified and disgusted as I do when I hear a story about how someone's child was molested by someone they trusted.

I can only hope that if, in the course of the trial, these are established as the facts, Goldman pays dearly for it.

saumil07 9 hours ago 4 replies      
This makes me sick - I'm no M&A expert so it's unclear what Goldman's exact due diligence responsibilities should have been but clearly they screwed up if they helped execute a transaction against a company that basically didn't exist.

The article lets the founders completely off the hook, however, which I believe is also unfair. A $580M all-stock deal at the height of the bubble? Signing away your life's work without calling your acquirer's customers? Come on.

I hope the founders get paid (on Goldman's dime) but they have to carry some of the blame here.

einhverfr 4 hours ago 0 replies      
Horrifying story. I hope Goldman gets held liable for that billion in damages.

At the same time I think that there are some important lessons here. The big one that comes to my mind is always have an exit strategy. For example, if I am able to make my business take off great. If it gets acquired and I end up not liking the new bosses, great, I can quit. But what can I take with me? What do I do after that?

I am fortunate in this area to have a lot of people who, while not aware of the whole situation can still nonetheless provide some help with that question. And I am grateful to those who have pushed a greater open source angle here.

And of course we can find how many missed opportunities there were to notice that this deal was bad on everyone's side. But the question for the rest of us not involved in litigation is what we take away from it.

I take away from it:

1) Be very careful about M&A. If something doesn't look right, it probably isn't.

2) Always have an exit strategy.

gojomo 8 hours ago 1 reply      
I feel for the Dragon founders, but with $580 million of L&H stock at one point, they also could have and should have done some prompt and serious hedging/collaring. Perhaps they did, but not nearly enough?
mrose 11 hours ago 2 replies      
Very interesting article with ties to Wall St as well as Siri. It should serve as a reminder that in business dealings, only -you- have your best interests in mind.
bickfordb 9 hours ago 1 reply      
There were a few things that were hard for me to follow in this article:

1. If the company was worth $1B as X before selling it to Y, wouldn't Y+X be worth at least $1B?

2. If L&H made fraudulent claims, why not make a claim against L&H to recover the software, brand, intellectual property? According to the Wikipedia page (http://en.wikipedia.org/wiki/Lernout_%26_Hauspie) their software ended up being bought by Nuance (Siri).

Micro-apartments next for S.F.? sfgate.com
31 points by iProject  4 hours ago   28 comments top 14
MrFoof 16 minutes ago 0 replies      
I've downsized a few times over the years. 1080 > 920 > 720 > 660. How much have I missed having a bigger space? None. I could probably drop down to about 550 tomorrow and still have tons of room.

The first thing is "usable square footage". In the larger units (> 720) I had things like hallways. Closets for a water heater or furnace. Etc. When I dropped down to 720, I actually had more usable square footage than when I had 1080 square feet. This is because the smaller spaces were better designed for their intended activities, and other things were inlined or made more efficient (in-line electric water heaters, in-wall thermal pump) as a result of the space constraints.

Additionally, space isn't for storing stuff; it's to support the activities within the space. When I had 1080 square feet, the kitchen was larger than the bedroom I grew up in. It meant a lot of unnecessary walking around to get anything done. When the work triangle shrunk to about 20 square feet, everything was in reach, and I still had tons of room for all the prep work, and an excess of space to store everything. A 160-square-foot kitchen was excessively large when 50 works just as well. Just like a 50-square-foot "laundry room" is worthless when there isn't any room to put an ironing board -- now I have a laundry closet (9 square feet?) with some stacked Bosch units that allow me to get just as much done (with a fold-out ironing board on the door). The bedroom went from having a 90-square-foot walk-in closet to 14 square feet of reach-ins, of which I use one. I guess if I had a live-in girlfriend she'd use the other. Bathroom? Also shrunk. However the bedroom got larger, as did the main living area. Big wins.

As for stuff… I've gotten rid of tons of it. Every year, clean things out. Every year, be baffled at how much gets tossed out. I've zero clutter now, yet I still have everything I care about. I still have a home office built into a 15-square-foot reach-in closet with two 27" displays and a laser printer that's very comfortable to work in. I still have some collectibles stashed in a storage bench at the bottom of the bedroom closet. However, if something doesn't have sentimental, monetary, or immediate value, it's gotten tossed at some point. It forces you to think hard about what you value, and stick to it if you don't want to live in clutter or with a giant stack of boxes somewhere.

Coworkers always seem to "feel sorry for me". "How do you live without being able to stock up on toilet paper at Costco?" I guess I don't need nearly as much TP in the bathroom as you do. "How do you live with such a small car?" Yeah, a 2-seat roadster is really roughing it, but y'know, I soldier on. I have everything I want, nothing that wastes my time or attention, less to clean, less space to heat/cool, and a car I drive very sideways. 220 square feet I probably couldn't immediately shift to, but I'm pretty certain I could go to 350 very hastily if I had to. 220 would just require a lot of thought, and giving up activities such as being able to host Thanksgiving, etc.

othello 2 hours ago 1 reply      
To put this in a European perspective: the legal minimum floor area to rent out an apartment in France is 9 square meters, or slightly less than 97 sq feet.

Even better (or worse, depending on your perspective): the absolute minimum is a volume of 20 cubic meters [1].

Therefore in cities like Paris, where flats with high ceilings abound, you find owners of big, old apartments with 3m-high ceilings breaking them up into 7-square-meter (75 sq ft) "studios"...
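
The arithmetic behind that loophole, as a small sketch (the 9 m² and 20 m³ figures are the ones the comment cites):

```javascript
// Minimum floor area implied by the 20 m³ volume rule at a given ceiling height.
const minFloorArea = (volumeM3, ceilingHeightM) => volumeM3 / ceilingHeightM;

// With a 3 m Parisian ceiling: 20 / 3 ≈ 6.67 m², comfortably below the usual
// 9 m² floor-area minimum -- hence the ~7 m² "studios".
```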

250 sq feet is plenty.

[1] http://www.adil75.org/pdf/av12.pdf (in French)

SteveJS 34 minutes ago 0 replies      
The idea of sharing a kitchen counter and computer desk just makes me think crumbs in the keyboard. However, minimizing the kitchen and having really high end shared kitchen facilities would be great. Between the ages of 24 and 28 I didn't eat at home even once.

One of the underlying sources for 'smaller is better' is Christopher Alexander's "A Pattern Language". It is also the source that inspired the pattern movement in software. "A Pattern Language" is the second in the series, a 1000+ page book with strong opinions on everything from city planning down to interior decorating. It is very much worth the read.

po 1 hour ago 0 replies      
I was just talking with a friend yesterday about living in Tokyo. A big part of what makes it reasonable is that everything is smaller. The containers of food I buy at the store are smaller and fit in my smaller refrigerator. There's a laundry place right next door and they give you hangers that are half-width (folding the shirts back on themselves like when you buy them) so they can fit in a shallow closet. As an American, it's hard to get used to not buying in bulk, but it really helps you make the most of your space.
fiatmoney 34 minutes ago 0 replies      
The problem isn't that SF is running out of square footage for luxuriously spacious apartments. The problem is that SF doesn't allow the construction of high-density housing, or much new housing of any kind. They gained all of 50-something market-rate housing units in 2011.


firefoxman1 9 minutes ago 0 replies      
This is a great TED talk about how this guy maximized the space in a tiny apartment and found he was happier than when he had a large apartment with lots of "stuff": http://www.ted.com/talks/graham_hill_less_stuff_more_happine...
ktizo 2 hours ago 3 replies      
The new proposed minimum is about the same size as my flat here in the UK, and I know plenty of people living in smaller places than mine.
ibagrak 1 hour ago 0 replies      
My wife and I live in a ~300-square-foot apartment in NYC; although we don't feel like we've got plenty of room, we never feel cramped either. Moreover, we often have people sleep over in our "living room". A lot of it is about light and how high your ceilings are.

I guess what I am saying is that the absolute minimum square footage you think you need is a malleable concept.

mdanger 2 hours ago 4 replies      
"That demographic cohort wants to continue their collegiate experience for an indefinite amount of time," Kennedy said. "I envision this as a launching space as they get established."

The "collegiate experience" Kennedy seems to be going for with this is dormitory-style living (the article even touts the common areas that will be available to tenants, just like some of the pitches when I was shopping around for colleges!), but the first goal for many students at my university and others I've visited has always been "get out of the dorms and into a real apartment".

CPops 33 minutes ago 2 replies      
It's sort of silly that a minimum apartment size is even legislated in the first place. Somebody who chooses to live in a small apartment does so because it's their best available option.
slaundy 35 minutes ago 0 replies      
Interesting. This might change the nightlife culture of SF to be more similar to bigger cities. I did a lot of hanging out in friends' houses, cooking, playing games, or having house parties in SF. In NYC, where the apartments are much, much smaller, there's a lot less of that -- people go out on the town all night and buy $15 drinks instead, and that supports the bars and restaurants that stay open late.

I'm also curious what safety measure they have on those vertical storage units. In an earthquake, all your stuff falling down in front of the door could trap you in your apartment.

vipervpn 1 hour ago 2 replies      
I'm concerned about house pets (like dogs and cats) in such a small apartment. Are these micro apartments fit for dwelling in for long periods of time? I don't think so. Cats and dogs need space too, and they don't have the freedom to come and go whenever they please.
wslh 1 hour ago 1 reply      
Does this sound interesting for investing? It could start a new real-estate bubble based on micro apartments.
olalonde 2 hours ago 0 replies      
There is a legal apartment size? Seriously?
When Agile jumped the shark deathrayresearch.tumblr.com
11 points by ljw1001  2 hours ago   7 comments top 4
DanielBMarkham 27 minutes ago 2 replies      
This was a bit meandering, but I feel his pain.

I was very skeptical at first, but I've become a big fan of story points: they decouple estimation from scheduling, and that's a good thing.

Note that they are not "... a new, fuzzy unit of measure..." likewise they are also not a "...metrics sleight-of-hand..."

This is a very simple, yet powerful exercise. Relatively size the things you have to do. Now, without caring about what the points are, select what you can do in the next time-frame. Measure your ability to deliver against this.

Over time, your relative estimates get better and your ability to commit gets better. The kicker is that none of this has a damn thing to do with scheduling. Once you can reliably tell me you're going to deliver 10% of the remaining total work in the next two weeks, I know for a fact you have 18 more weeks left in the project. Then I can release-plan and work dependencies based on real-world data and without breathing down a developer's neck. Meeting schedules doesn't have to be (and shouldn't be) the high-stress thing many of us make it out to be.
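
The forecasting arithmetic in that paragraph can be sketched in a few lines (the point totals below are made up to match the 10%-per-sprint, 18-weeks example):

```javascript
// Forecast remaining calendar time from measured velocity, with no per-task
// time estimates involved. Assumes velocity stays roughly constant.
function weeksRemaining(remainingPoints, pointsPerSprint, weeksPerSprint) {
  return Math.ceil(remainingPoints / pointsPerSprint) * weeksPerSprint;
}

// A team delivering 10% of the total (10 of 100 points) per two-week sprint
// has 90 points left after this sprint: 9 sprints, i.e. 18 weeks.
```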

The author seems to not be able to get scheduling out of his head. If you're constantly wondering how many days a story point is, don't use story points. You don't understand them.

Note that for such a simple idea, there are several gotchas here. Most teams screw up story points and velocity. Anybody have the PM that divides the points by the sprints remaining and then announces what the velocity will be? Or how about the ScrumMaster that empirically determines current velocity and then tells the team how many stories they can do for the upcoming sprint? Ouch. Lots of bad practices out there. That doesn't make story points bad, though. Just makes most people suck at them. (Shameless plug: I have a book on being a ScrumMaster and an upcoming one on backlogs. http://tiny-giant-books.com/scrummaster.htm )

T-hawk 17 minutes ago 0 replies      
Story points vs time estimates is not an either-or question. You can use both. My team at my job does. We do time estimating for each two-week sprint (Scrum) so we know what will fit, and have fuzzier point estimates in the backlog for longer-term planning by the product owners. Both levels of abstraction are appropriate in the right context.

To DanielBMarkham in a sibling comment:

> Or how about the ScrumMaster that empirically determines current velocity and then tells the team how many stories they can do for the upcoming sprint?

That's what we have, but is that bad? Isn't that how the points should be used to gauge predicted velocity? Of course it shouldn't be treated as a concrete inviolate set-in-stone prediction, but that methodology comes in pretty accurate for estimating.

ljw1001 13 minutes ago 0 replies      
Not totally different, but simpler, which counts. Earned value presupposes things like someone putting a value on individual deliverables, and a budget that matters. Neither happens much in commercial software, in my experience.

Since most software cost is headcount * time * cost-per-person, tracking time is a pretty good proxy for that.

Of course no metric matters if you produce bad code and call it progress.

jacques_chester 1 hour ago 0 replies      
It used to be that you had Earned-Value Management and the concomitant EV Charts. One could take the first derivative of the current point on the chart (or perhaps do something fancier involving moving weighted averages) and use that to predict how the EV chart would play out in future.

But EV charts and first derivatives are old fashioned and hokey. Practically waterfall!

Instead we use the latest in agile management: burndown charts and velocity. Totally different.
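
The two views do coincide mechanically: velocity is just the discrete first derivative of a cumulative earned-value (or burnup) series. A minimal sketch, with made-up numbers:

```javascript
// Per-period "velocity" as the discrete first derivative of a cumulative series.
function firstDerivative(series) {
  return series.slice(1).map((value, i) => value - series[i]);
}

// firstDerivative([0, 8, 15, 25]) → [8, 7, 10]
```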

Three Months with Sublime Text 2 steverandytantra.com
3 points by steverandy  41 minutes ago   1 comment top
cageface 0 minutes ago 0 replies      
I read feature lists of editors like this and wonder why people just don't use a good IDE like IntelliJ. It does everything in this list and a lot more with minimal configuration.
Facebook announces SPDY support w3.org
66 points by igrigorik  9 hours ago   27 comments top 7
jebblue 2 minutes ago 0 replies      
I'm actually a fan of the Trac software linked to in the article. It's hard to beat a combined Wiki, source browser and ticket management.
eliben 3 hours ago 0 replies      
It is not what the linked post announces. Quoting:

We currently are implementing SPDY/v2, due to the availability of browser support and the immediate gains we expect to reap. Although we have not run SPDY in production yet, our implementation is almost complete and we feel qualified to comment on SPDY from the implementor's perspective. We are planning to deploy SPDY widely at large scale and will share our deployment experiences as we gain them.

metabrew 5 hours ago 0 replies      
And here's Twitter's response to the same Expression of Interest too: http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/...
mcpherrinm 8 hours ago 2 replies      
It's great to see how such a fundamental change in how browsers and servers communicate can get rolled out so quickly! I would have guessed that this sort of thing would require many more years of effort than it did.

The author of the post, however, does seem to misunderstand SPDY's server push feature. He states that Facebook requires a substitute for long-polling for low-latency message delivery, and seems to think SPDY provides this. Unless I'm mistaken and there's some JavaScript API available, server push is merely for cache-priming and reducing latency of requested objects alongside a regular pageload (e.g., push the CSS and images along with an HTML page).

kristofferR 7 hours ago 5 replies      
Forced SSL is not a problem for big sites like FB or for medium-size sites, but it is incredibly problematic for small sites with less than a couple of thousand visitors per month. In effect it means that SPDY (and eventually HTTP 2.0, unfortunately) will remain a bonus for elite web sites while the majority of smaller web sites stay on HTTP 1.1 "forever".

Yeah, I'm aware that some providers like StartSSL hand out free SSL certificates, but I don't think it's a good sign of things to come that you need to hand over sensitive personal information in order to use the latest generation of a fundamental web technology. You'll also need a dedicated IP, which costs money and is becoming increasingly scarce and expensive.

I actually run a small web host for my clients and all of them have denied my offer to install a free SSL from StartSSL in order to get SPDY because of the privacy concerns and the extra cost of the dedicated IP they're required to get.

It's a shame that a large majority of the web sites on the net will become stuck on an old technology just because of an arbitrary requirement for encryption even though they have nothing to secure anyway.

sathappan 6 hours ago 2 replies      
That's great news. But why is it that FB's SPDY session never gets captured in Chrome's net-internals?
tysons 2 hours ago 0 replies      
A massive gain for FB would be pushing data to the browser rather than pinging for new data, no? SPDY allows this.
Joe Armstrong: Why OO Sucks cat-v.org
172 points by it  15 hours ago   149 comments top 33
nessus42 12 hours ago  replies      
I've been watching some talks online recently by Rich Hickey of Clojure fame, and he's a very interesting and convincing speaker. He basically makes the same argument that Armstrong makes here.

I'm not clear, however, how the pro-FP, anti-OO crowd address the Law of Demeter, which is often summarized as "One dot: good. Two dots: bad." The canonical example where the Law of Demeter serves us well comes from some of the original Demeter papers, which I actually read a long time ago when they were current. This canonical example is that of an object to represent a book.

One of the initial selling points of OO was that if you encapsulate the representation of an object from its interface, this ends up giving you a lot more flexibility. For the case of representing a book, pre-Demeter, a typical OO organization would have been to provide a method to give you chapters of the book as Chapter objects, and from there you could get Section objects, from which you could get Paragraph objects, from which you could get Sentence objects, from which you could extract the words as strings.

The Demeter proponents correctly argued that this OO organization of the Book rather defeats the goal of encapsulation, since with this organization you cannot restructure the internals of the Book object without breaking the API. E.g., if you decide to insert Subsections between Sections and Paragraphs, your API for extracting all the sentences of a book will change, and consequently, much of the client code will have to change.

The Demeter folks argued that instead of having to explicitly navigate to sentences, you should just be able to call a method on the Book object directly to get all the sentences. Without special tools, however, this is hard for the implementers of Book, since now they have to write tons of little delegation methods. I take it that people who are serious about following the Law of Demeter do do this, however. In the original Demeter system, Demeter would do this automatically for you. The problem with the original Demeter system is that few people actually ever used it, and it was rather complicated for Demeter to provide this automatic navigation.

So, back to FP: Rich Hickey argues to forgo damned objects and to just let the data be data. So if I follow Hickey's advice, how am I supposed to represent a book? As a vector of vectors of vectors of vectors of strings? If so, then how do I prevent a change in the representation of the Book from breaking client code? If I had followed the Law of Demeter with OO, then everything would be golden.

Sure, with this naive FP approach, I could also provide a zillion functions to fetch different sub-structures out of the book. E.g., I could have a function to return all the sections in a specified chapter, and another to return all of the sentences in the book. This, however, would end up being little different from the OO approach following the Law of Demeter, with the further downside that if you change the representation of the book, you don't know that you haven't broken the client code, because you have no guarantee that the client code isn't accessing the representation directly.

Please advise.
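A minimal sketch of the question, assuming the book is plain nested data and a single accessor insulates clients from the nesting (Python; the structure and names are hypothetical):

```python
# A book as plain nested data: chapters -> sections -> paragraphs -> sentences.
book = {
    "chapters": [
        {"sections": [
            {"paragraphs": [
                {"sentences": ["It was a dark night.", "Rain fell."]},
            ]},
        ]},
    ],
}

def sentences(book):
    # Hides the nesting depth from callers; if subsections are inserted
    # later, only this function has to change.
    return [
        sentence
        for chapter in book["chapters"]
        for section in chapter["sections"]
        for paragraph in section["paragraphs"]
        for sentence in paragraph["sentences"]
    ]
```

The open question still stands, of course: nothing stops a client from reaching into book["chapters"] directly, which is exactly the guarantee the Demeter-style OO version provides.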

dgreensp 13 hours ago 5 replies      
OO vs. FP is just a matter of whether you focus on the nouns or the verbs. The counterargument to the OP is that surely a function that manipulates "data" is less powerful and abstract than one that manipulates objects.

For example, take an "interface" or abstract data type like Array, consisting of a length() and a get(i) method. (This is really called List in Java and Seq in Scala.) There may even be an associated type, A, such that all items are of type A. This is very powerful because functions written against the Array interface don't depend on the implementation; we can store the data different ways, calculate it on demand, etc.

The "binding together" Joe is complaining about is binding the implementation of length() and get(i) to the implementation of the data structure, which is surely understandable. The alternative, seen in Lisps and other "verb-oriented" languages, is that there is a global function called "length" which takes an object... er, a value... and desperately tries to figure out how to measure its length properly, perhaps with a giant conditional.
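The two styles contrast roughly like this (a sketch in Python; Array, Repeated and the rest are hypothetical names):

```python
from abc import ABC, abstractmethod

# "Verb-oriented": one global function that inspects the value's type.
def length(value):
    if isinstance(value, (list, tuple, str)):
        return len(value)
    raise TypeError(f"don't know how to measure {type(value).__name__}")

# "Noun-oriented": an abstract interface each implementation satisfies.
class Array(ABC):
    @abstractmethod
    def length(self): ...

    @abstractmethod
    def get(self, i): ...

class Repeated(Array):
    # Computes items on demand instead of storing them.
    def __init__(self, item, n):
        self.item, self.n = item, n

    def length(self):
        return self.n

    def get(self, i):
        return self.item

def total_length(arrays):
    # Works for any Array implementation; no conditional needed.
    return sum(a.length() for a in arrays)
```

Languages with multimethods or protocols (CLOS, Clojure) blur this line: the "global function" dispatches on type without a hand-written conditional.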

The original OO (Smalltalk) was about message passing rather than abstract data types; just the idea that an object was responsible for responding to certain messages, and that these communication patterns completely characterized the object. This is how we think about modern cloud services, too; it's kind of inevitable. Who would complain that S3's "functions" and "data" are too coupled? Who would ask for a description of S3 in terms of what sequence of calls to car and cdr it makes internally? OO concepts allow a functional description of a system that starts at the top and can stop at any point.

The "everything is an object" philosophy gets a bad rap. It's a big pain in Java, especially, because of how the type system works. Ideally I'd be able to define a type of ints between 1 and 3, an obvious subclass of ints in general, whereas in Java I find myself declaring "class [or enum] IntBetweenOneAndThree" or some nonsense.

gabordemooij 2 minutes ago 0 replies      
To me object oriented programming makes a program 'come to life'.
In our daily lives we are surrounded by objects: trees, houses, books... to name just a few...

I love the fact that I can reason about these 'natural' concepts in my code. Thinking 'in objects' sparks my creativity and boosts my imagination. It helps me to visualize otherwise very abstract notions.

I love to talk about a 'Book' instead of an Array. With good Object Oriented code, technical concepts and natural ideas seem to come together. To me, the benefits of writing object oriented code have more to do with human-computer symbiosis ( http://en.wikipedia.org/wiki/Smalltalk ) than with pure technical correctness, it just fits my mind.

If you want to appreciate the real beauty of objects I recommend to skip Java and C++ for a minute and look at Smalltalk. I just read the Blue Book (Smalltalk-80) and I had tears in my eyes.
The elegance and beauty of this language is just stunning.

jroseattle 49 minutes ago 0 replies      
Conversations around programmatic semantics and languages and the like come and go over time, and feelings about them ebb and flow. Over umpteen years as a developer, I can say that I've found there's something about every language that will cause one to say "why do I have to think about this like that?" Nothing is perfect, but certainly a lot of languages do a few things quite well.

As such, I would be grateful to hear an argument of why object-oriented programming structures are incorrect. I disagree with the reasons provided by the OP because of the slant toward personal preference. The arguments posted here are specious; I can find holes in each of the points made.

1. Data structure and functions should not be bound together - very true, they should be independent. However, this statement: "Objects bind functions and data structures together in indivisible units." This implies how something is implemented (or rather ALWAYS implemented), and while a tight binding is possible in most OO-supporting languages, it's not requisite. Just because the ability to violate this exists doesn't make it awful; it just makes it incumbent on the programmer to use the right approach in a given situation.

2. Everything has to be an object - in some languages, this is true. However, this causes what problems? For the OP, this is nothing more than semantical ickiness. I won't defend any implementation of things like time and date and other primitives, but the chief complaint here seems to be how that information is accessed and the form of which it takes. I simply find the "this-is-an-object-so-it-feels-wrong" argument quite lacking.

3. In an OOPL data type definitions are spread out all over the place - this is organizational, but I'm not sure what "find" means in context. I guess it depends on the language being used, but I question why this is an issue for the OP. "In Erlang or C I can define all my data types in a single include file or data dictionary." I can do the same thing in Java or C# or other languages, if I want. For most developers, "finding" data type definitions has more to do with documentation than the actual language.

4. Objects have private state - of course they do, it's the nature of OOP. This statement: "State is the root of all evil. In particular functions with side effects should be avoided." This is unfounded (not the side effects part, which has nothing to do with state.) State, as the OP points out, exists in reality but should be eliminated from programming. Just as the bank example points out, one will want state to be accounted for in cases of deposits and withdrawals from an account. Thinking that state can only be handled in a certain way (which is what this argument suggests) is limiting in evaluation and unimaginative in assessment.

Most of the arguments show personal preference to application development, and with that I totally understand. But these arguments are intended to show why the languages which support OO are conceptually wrong, as if the concepts of the alternative are an accepted truism.

zacharyvoase 14 hours ago 2 replies      
"Data structure and functions should not be bound together" - I can't agree with you more.

However, in Smalltalk (and even Ruby, to some degree) objects are not data structures, they are collections of functions invokable on a 'thing' with an unknown structure. They have an internal structure - potentially immutable - but you never see this, because you only interact with methods on the object.

And in many cases, there is syntactic sugar to make invocation of these methods look like slot access: think of Objective-C's `@property`, or Python's descriptors, or Ruby's `def method=(value)`.

When people talk about 'object-oriented languages' in such general terms I get frustrated, because there's a lot more nuance to this than simply 'bundles of functions and data structures'. That's a very implementation-led way of looking at it. The reality is that these objects are supposed to represent real-life situations where knowledge of what something is, or how it behaves, or how it fulfills its contracts is unknown. If NASA's Remote Agent[1] had been implemented in Haskell, OCaml or ML, do you think debugging DS1 would have been as simple as connecting to a REPL and changing some definitions in a living core? I don't think the image-based persistence of Smalltalk and many Lisps would be possible in a purely functional or traditional procedural language.

And what is a data type anyway? It's supposed to represent a mathematical set of possible values. Sure, you can use a simple array to build a b-tree, but don't you want to explicitly state that variable x is a b-tree if that's the case? I was always taught that explicit is better than implicit.

I should probably stop ranting now, it's just that if you're going to start hating on programming paradigms, at least sound like you've thought your argument through a bit more.

[1]: http://www.flownet.com/gat/jpl-lisp.html

bithive123 14 hours ago 2 replies      
A series of assertions and "I just don't see it"-s presented as self-evident when they are anything but. No examples of real cases where OO does in fact "suck", ending with the claim that in order to understand the popularity of OO one should "follow the money".

What? I mean, I don't even...

notJim 13 hours ago 2 replies      
99% of the time I read these articles that say $commonly_used_thing [1] sucks, the arguments are always "it is fundamentally incorrect" or some variant thereof [2], and strawmen [3] abound.

Where these arguments fall short is in addressing the simple fact that highly-skilled people produce very neat, well-designed systems that they are pretty happy with from a technical standpoint, and that make money every single day using $commonly_used_thing. If you can't acknowledge that $commonly_used_thing has some good attributes, and that it actually works well for many cases, I don't understand why I should take you seriously.

[1]: Examples of commonly_used_thing: ORMs, OOP, SQL databases, NoSQL databases, operating systems, platforms.

[2]: There are a handful of variants. I think my favorite is the magical phrase "impedance mismatch", which I think in non-buzzword-speak translates to "fuck you, I'm right"

[3]: Most-frequent strawman: the most essentialist, rigidly-formal version of $commonly_used_thing, when in reality, nearly every version of $commonly_used_thing compromises to cope with reality.

MarkMc 11 hours ago 3 replies      
Wow, I am genuinely shocked by the comments in this thread. I didn't realise that so many people held the polar opposite view to me. It's a bit like suddenly finding out that all your friends are racist.

I love object oriented programming. For me it aligns perfectly with the way I think - it allows me to produce a system of interrelated 'things' where each thing (or group of things) has a well-defined role and can hide its internal state and behaviour from other things.

When I see how some code tackles a problem I get an emotional response from how 'clean' it is. Does it smell bad or is it a work of beauty and elegance? If the code feels wrong I get an urge to make it better and for me that process of improvement relies heavily on object-oriented concepts. I get a real buzz from creating a clean, elegant solution to a problem: Trying to do that without object-oriented features would be like trying to write a letter by holding the pen with my teeth. Ugh.

phleet 51 minutes ago 0 replies      
So I have very limited experience with FP, and a reasonable amount with OO (mostly dynamically typed).

I can really see the benefits of FP, but there are some problems I have trouble modelling with FP.

For instance, if I have a simple 2D rendering engine, I just want to say "add this object to the screen". The object might be a geometric primitive (square, circle, etc.), it might be generated particles, or it might be an image or video or something. The way I deal with this at the moment is to have each of these things implement a Drawable interface with a "draw" method, and add them to the screen with something like:

    screen.addToScene(new Circle(...))
    screen.addToScene(new Square(...))
    screen.addToScene(new ParticleGenerator(...))
    screen.addToScene(new ImageSprite(...))

Then the game would loop over each of the Drawables and call .draw() on them, which is implemented differently by everything that implements Drawable.

How would I model this in FP?

The only solution I can think of at the moment is to have a draw function that does pattern matching on the type of thing and do it that way. How do people do stuff like this in scheme or other languages with limited support for pattern matching?

The problem I have with this is that it means every time I want to add a new kind of thing, if it implements many methods in an interface, I have to go to many different files to implement how this new kind of thing works.

Among other things, that's a huge pain for revision control, since if I have 3 coworkers adding new kinds of things that can be drawn, we're all going to have to modify the draw function. In OO, we'd each just be creating a new subclass in its own new isolated file.

As a second question - what should I read to get a good idea of how to sanely model things in OO and FP? I've read a lot of debate about the right way of doing things, but I don't really know where to learn this stuff. The OO class in university was completely useless, since the examples were outrageously contrived and too small to see any real benefits. I'd ideally be looking for 1 book that explains how to model real problems in OO very clearly, and one book for how to model real problems in FP.
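For the scene question above, one common FP-flavoured answer is to represent each drawable as a closure over its data, so the renderer only ever sees functions (a Python sketch; all names are made up):

```python
# Each "drawable" is a function from canvas -> None, closed over its data.
def circle(x, y, r):
    def draw(canvas):
        canvas.append(f"circle at ({x},{y}) radius {r}")
    return draw

def square(x, y, side):
    def draw(canvas):
        canvas.append(f"square at ({x},{y}) side {side}")
    return draw

def render(scene, canvas):
    # The render loop knows nothing about the kinds of drawables.
    for draw in scene:
        draw(canvas)

canvas = []
render([circle(0, 0, 5), square(1, 1, 2)], canvas)
```

A new kind of drawable is then just a new top-level function in its own file, which sidesteps the revision-control concern; the trade-off reappears if the interface grows beyond a single draw operation.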

programminggeek 14 hours ago 0 replies      
I think OOP took off because it seems like a great way to model things. The idea that you can simulate a car by saying you have a base kind of car with properties and actions, and then a Ferrari is a kind of car, so you can just take that car object and give it more horsepower and a different body type and you have a Ferrari, is very exciting.

Businesses like to model things and simulate things. So, in that respect, OOP was probably an easy sell because it's selling an idea of what businesses want, even if it hasn't worked out exactly as they hoped in all cases.

LnxPrgr3 11 hours ago 1 reply      
I'm all for proper rants against popular tools to keep people on their toes.

This isn't one of those.

"Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds."

Sure - a class defines a type and operations on that type. What's fundamentally wrong about date.addDays(1) vs. date_add_days(date, 1)? (Let's skip the mutable state argument and assume both versions return a new date.)
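Spelled out (an illustrative Python sketch, not anyone's real API), the two forms are trivially interchangeable when both return a new value:

```python
from datetime import date, timedelta

# OO spelling: a method on a date-like wrapper, returning a new value.
class Date:
    def __init__(self, d):
        self.d = d

    def add_days(self, n):
        return Date(self.d + timedelta(days=n))

# Procedural spelling: a free function over the same data.
def date_add_days(d, n):
    return d + timedelta(days=n)
```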

There is the problem that sufficiently opaque classes are hard or impossible to extend. That's the class author's fault: this is an avoidable problem in every object-oriented language I've used.

"Functions are understood as black boxes that transform inputs to outputs. If I understand the input and the output then I have understood the function. … Functions are usually 'understood' by observing that they are the things in a computational system whose job is to transfer data structures of type T1 into data structure of type T2."

A constructor is a black box that converts a data structure of type T1 into a data structure of type T2. Objects just also have other black box functions defined on them.

Sure, some objects are stateful, but they don't have to be.

"In an OOPL I have to choose some base object in which I will define the ubiquitous data structure, all other objects that want to use this data structure must inherit this object."

Um, no. This is a job for composition, not inheritance.

"Instead of revealing the state and trying to find ways to minimise (sic) the nuisance of state, they hide it away."

They hide state's implementation, for mutable objects.

  std::vector<std::string> some_list;
  std::cout << "Items: " << some_list.size() << std::endl;
  some_list.push_back("Hello, world!");
  std::cout << "Items: " << some_list.size() << std::endl;
  // Oh no! State, EXPOSED!

Sure, an allocated piece of memory might have grown, or even moved. Why should I care? I still see the state I care about, presented through a hopefully useful abstraction.

This rant seems to somehow miss the points of both object-oriented and functional programming, instead harping on mostly meaningless (or outright wrong) details. Or am I missing something here?

timruffles 52 minutes ago 0 replies      
I think a project along the lines of Todo MVC for general programming languages - https://github.com/addyosmani/todomvc/ - would work really well for illustrating these kinds of debates.

Ideally it'd be a reasonably involved problem domain (rather than a todo list) with persistence, networking and something which requires parallelism/concurrency (I'm sure there are other categories too). This'd expose each language to the types of complexity that bring out the really interesting differences - does language X allow a clean API even when we require immutability for parallelism, does language Y impose boilerplate on simple problems, does language Z require unreadable line noise?

I find these debates nearly useless without evidence and code to read.

nirvana 11 hours ago 1 reply      
I can express my OO ideas in erlang with no problem (objects become processes).

I cannot express my concurrency ideas-- that erlang makes super easy-- in the OO languages.

Maybe Go changes this but I haven't used Go yet.

NinetyNine 14 hours ago 3 replies      
There's a certain allure to saying code should be a certain way because of natural properties of computing, or our own feelings about which things are similar to or different from which other things.

The reason OO shines is that it allows you to make that distinction at the domain level rather than the code level. You organize your software into business objects, or components, and have these interact with each other. They allow you to separate it out in a way that a new developer can come to the project, understand what the code should be doing, and look for the classes which seem like the objects involved, including the types of data they hold and the things they can do.

There are all sorts of nasty things we've invented in OO over the past few years (mixing up inheritance and composition, using way too much state), but it gives us a lot of advantages from an engineering point of view.

6ren 11 hours ago 0 replies      
- though he has no qualms about misleading and deceptive answers.

- "If I understand the input and the output then I have understood the function." A good point, I tend to think of "information hiding" http://en.wikipedia.org/wiki/Information_hiding as applying to state, to enable SOTSOG, but it also applies to pure functions.

- "define all my data types in a single include file" That quote sounds silly, but in practice, I find it much clearer if all the parts of a data type are next to each other, uncluttered by methods. It also supports Brooks' observation: "Show me your tables, and I won't usually need your flowcharts; they'll be obvious" ("tables" being data structures). In Java, I tried this by defining fields in superclasses, methods in subclasses. But having two classes per class was awkward. (I ended up keeping code entirely separate except for very core methods - still not happy with it). But I don't think this is entirely a language problem, it's partly just that complexity management is hard.

- this article makes me feel antagonistic, but in fact I never liked OO when taught it; it seemed dogmatic, not actually useful in practice. But I did like the idea of an ADT, where you can package something up (esp. a list, hashtable etc), and work at a higher level of abstraction. Subdividing tasks and SOTSOG

carsongross 13 hours ago 0 replies      
I find grouping functional code along with the data the code is supposed to work on reasonably intuitive. The OO religionists definitely sold the world a bill of goods on the reuse arguments, and the religious fervor was silly (just like it is with today's functional zealots) but still.

Having an 'x' and hitting '.' and seeing what 'x' knows to do with itself doesn't suck.

einhverfr 13 hours ago 0 replies      
I see a bunch of caveats here.

The basic issue is that OOP is easily hyped and far too easily taken too far. My favorite OOP environments are decidedly un-OOP in specific ways (Moose, for example, has very transparent data structures, which is really useful).

The first big criticism I have is with the idea that "state is the root of all evil." I think the truth is more nuanced than that. State is, in many cases, extremely necessary to track. The problem is that state errors create bugs that are very difficult to track down (you can eventually figure the state out, but how did it get corrupted?). A better approach I think is for state to be approached declaratively and with constraints. This is why things like foreign keys, check constraints, etc. in the RDBMS world are so nice. In fact good db design usually has a lot to do with eliminating possible state errors. Wouldn't it be great if OOP environments gave that possibility! Well, Moose does to a large extent (another thing I really like about it).
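A rough sketch of "state approached declaratively, with constraints", in the spirit of an RDBMS check constraint (Python; the account example is hypothetical):

```python
class Account:
    # The balance is mutable state, but a declared invariant rejects
    # corrupting writes - much like CHECK (balance >= 0) in SQL.
    def __init__(self, balance=0):
        self._balance = balance

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        if value < 0:
            raise ValueError("balance must be non-negative")
        self._balance = value
```

Invalid states are rejected at the write, so a corrupted balance points directly at the offending assignment instead of surfacing much later.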

A lot of it comes down to the really hard question of "what should be abstracted?" The correct answer is a question, "what makes your API most usable?"

This is why I think it's important to be able to move in and out of the OOP worlds, and why OOP taken too far runs into the problems the author mentions, but that it also doesn't have to.

msluyter 13 hours ago 0 replies      
I expect this to be a busy thread.

I think the point about state has some traction. For example, Chapter 15 of Effective Java (awesome book on Java, btw) is entitled "Minimize Mutability," so I think this idea is one that has caught on even in fairly traditional OO languages.

As for the other points... I do think sometimes that using OO to model real world objects may not always be wise, esp. if the result is a deep hierarchy, as in the OO 101 example of, say, Boston Terrier < Dog < Mammal < Animal < Thing... And then someone changes Dog and gives your Boston a tail... I dunno. I have only vague intuitions here, but perhaps the "objects as models of reality" approach might be perfectly suited for reality simulators of some sort that require stateful elements a la Sim City, but not generally. Or, perhaps a better example: You could model a chess game as classes of Pieces on a Board, with methods like King.isInCheck(), or Queen.canMoveTo(Square), but this to me seems clumsier than simply having an 8x8 array of enums with the logic living in functions and not inside individual pieces.
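The "logic in functions, not in pieces" version might look like this (a minimal Python sketch with only two piece kinds; names hypothetical, blocking-piece checks omitted):

```python
from enum import Enum

class Piece(Enum):
    EMPTY = "."
    W_KING = "K"
    B_QUEEN = "q"

# The board is just an 8x8 array of enums; no per-piece classes.
board = [[Piece.EMPTY] * 8 for _ in range(8)]
board[0][4] = Piece.W_KING
board[4][4] = Piece.B_QUEEN

def find(board, piece):
    return [(r, c) for r in range(8) for c in range(8) if board[r][c] is piece]

def queen_attacks(frm, to):
    # A queen attacks along a shared rank, file, or diagonal.
    (r1, c1), (r2, c2) = frm, to
    if (r1, c1) == (r2, c2):
        return False
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def in_check(board, king):
    [king_pos] = find(board, king)
    return any(queen_attacks(q, king_pos) for q in find(board, Piece.B_QUEEN))
```

Adding a new piece kind means touching these functions rather than adding a class in its own file - the mirror image of the trade-off debated elsewhere in this thread.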

damian2000 8 hours ago 0 replies      
In the 1990s I saw OO as just another tool which was infinitely better than what I had at the time. I hazard a guess that most devs at the time were still working with procedural languages like C, Cobol, Fortran, Pascal or Basic. OO gave you abstraction and encapsulation, making it a little easier to write better code, that's all.

For me it was never OO v.s. FP, it was just OO v.s. the status quo. If OO was hyped up so some guys could make money from it (as the article suggests), then who was behind it? -- Bjarne Stroustrup, Anders Hejlsberg or James Gosling? I think not.

Tloewald 2 hours ago 0 replies      
The whole basis of this argument is that functions and data structures should not be "locked in a cage together". Replace object with "file" in the entire article and you'd have an equally but more obviously ridiculous argument.

Does OOP have problems? Sure. Is Erlang great in some ways? Sure. But this argument is silly.

pacala 11 hours ago 0 replies      
The historical win of OO was polymorphism. The competition to OO was procedural code that consist(s|ed) of hardwired procedure calls. Enter polymorphism, which provides a way to abstract over functions, not only over values. Of course, this is nothing new to functional programming where functions are first class citizens, but it's new for procedural programming. Modern OO is about stateless objects, dependency injection and unit testing, aka functional programming.
erlkonig 39 minutes ago 0 replies      
Heh. The "-deftype second() = 1..60." is a problem, since some minutes have 61 seconds in them.
yason 5 hours ago 0 replies      
My sentiments couldn't have been put better than the author did in the opening sentence: When I was first introduced to the idea of OOP I was skeptical but didn't know why - it just felt "wrong".
ColinWright 6 hours ago 0 replies      
There was a substantial discussion when this was submitted 3 1/2 years ago:

I do wonder if this discussion repeats all the same points, or if it raises new ones.

alttab 13 hours ago 2 replies      
What about the benefits of abstraction? I'm not going to introduce hyperbole on how I think this article is overstated; instead I'm going to ask HN members who have more experience with functional programming how they leverage concepts similar to abstraction with Haskell, Clojure, Erlang, etc.?
lukifer 13 hours ago 0 replies      
OO is just one design pattern of many; it should be used when the mental model is a good fit for the problem domain. It does annoy me, though, when languages or frameworks force the use of OO when it's unneeded.

Use a function when you need a function, and a class when you need a class.

borplk 13 hours ago 1 reply      
From a purely scientific view, OO is a terrible idea because it moves the program further away from the mathematical form and makes it harder (if not impossible) to, say, logically prove the correctness of the program. But from a practical perspective OO is a great idea because it makes many things so much easier.
nivertech 10 hours ago 1 reply      
While I dislike OOP, I think that CLU-style ADTs have some merits, especially when the ADT is implemented in a pure functional way.

Just because you don't have explicit schema, doesn't mean that you have no implicit schema.

Likewise just because you don't have explicit objects and classes, doesn't mean that you have no implicit objects and classes.

I code in Erlang, and I treat every gen_server either as a singleton object or as a class (in case I spawn many instances of it).

lightblade 7 hours ago 0 replies      
What I find funny is that all these OO design patterns and best practices are aimed at solving problems that don't exist in FP. Of course I may be over generalizing, but you get my point.
hurshp 11 hours ago 0 replies      
What I find so obtrusive about OOP, and what I feel is a massive issue (maybe it has to do with the last sentence in Joe's post), is that OOP is pushed into places it does not belong and causes a lot of impedance issues.

If something doesn't talk OOP, OOP developers want to make it talk OOP - ORMs and SQL databases, for example.

SQL is tables and sets - most of computing uses sets and tuples - yet for OOP it all needs to be serialized and abstracted away, pushed in until OOP almost becomes a data type in itself.

And I think there are other issues and previous failures like this.

ww520 14 hours ago 4 replies      
I don't get it. If people don't like OO, why don't they just not use it? Just use your favorite methodology to get the job done. Why do they have to bad mouth it?
CurtMonash 8 hours ago 0 replies      
His history is wrong. OO won in large part for a good reason -- it was a way of implementing, if not enforcing, modularity. One that people accepted, unlike LISP.
ioquatix 13 hours ago 1 reply      
Languages that are "OO" and languages that are not (in this case functions and data structures) are semantically equivalent.


Traction mistakes gabrielweinberg.com
155 points by dwynings  18 hours ago   33 comments top 9
dmbaggett 4 minutes ago 0 replies      
I thought this post was great, and I cringe when I read the comments in this thread that are so negative about the product. Kudos to GW for tackling a massive, difficult problem in the most competitive space in software. Obviously there's still work to be done, but DDG has invested, what, 5 orders of magnitude less into search than Google? This is about as hard as it gets in startup land.

I imagine there's a GW post yet to be written [1] on the related point that some problems are harder than others (e.g., search >>> picture-sharing-app), and the complete lack of correlation between effort and value (DDG <<< Instagram, at least so far) [2].

But one line that I didn't fully understand was:

I believe distribution is equally important as product. That means quite literally you should be spending 50% of your time on it. For tech people, you should probably bias it to 75% so you end up getting to equal in the end.

What exactly does "distribution" mean in this context?

[1] or maybe there already is such a post..
[2] yes, I know there are many reasons for this, search is valuable, etc.

Gring 16 hours ago 3 replies      
"Most startups don't fail at building a product. They fail at acquiring customers." I disagree, at least when it comes to me and duckduckgo (which Gabriel founded). Here's why:

Three months ago, I started using ddg instead of Google. I'm quite disappointed:

- search speed is slow. Instead of <1s, it's often more like 2-3 seconds.

- search quality is adequate to quite bad. Example: "amazing spiderman rotten" (I was looking for the Rotten Tomatoes page for that movie that just came out, and entered a typo) gave the right page on Google, while the right page is not amongst the top 20 results on ddg.

- while ddg says that they don't track me, they still insist on not using direct links in their search results, but indirect ones (via duckduckgo.com/l/u?=...). Not only is this insincere, it also messes up my browser history: when I visit a page through ddg, Safari lists that strange ddg url in the browser history instead of the target page.

Now, Gabriel was successful at "acquiring" me. I tried it out. For a long time. And I'm on the verge of leaving. Why? Because he failed at building a good product.

But maybe that is exactly why he is failing. He is focusing very much on these other "most likely cases of failing", while ignoring the very reason he is failing in this instance.

--edit: examples, conclusion.

benjaminwootton 10 hours ago 0 replies      
I knew before I opened the thread there would be a bunch of detractors. Not sure why Gabriel seems to attract the naysayers on here?

To me this and the traction verticals stuff on this blog are absolute gold.

I don't think I would be far off the mark if I suggested that 90% of the people on HN would be capable of building a product, but 90% would also fail to get something off the ground in terms of users. (Partly through lack of skills, partly just because it's a hard problem in a competitive world.) It's only where the skills intersect that they're even in the game.

Anything that breaks 'getting traction' down into an analytical approach is great, and much more actionable than most of the fluff that passes through here. Fantastic article.

einhverfr 14 hours ago 1 reply      
Actually I think the info in the article is on the mark. The most important aspect is a systematic approach to obtaining customers.

The real tricky part though is figuring out how to use each traction approach (the article calls them verticals) properly. For example, in my first three years of business, I learned a bunch of hard lessons here. These include:

1) Most advertising doesn't work unless people already know of you. Push your advertising later. Advertising that does work at that stage is that which has a personal feel to it (like infomercials).

2) PR is golden. Go for it at every opportunity.

3) Your biggest friend is your competition. If you can reach out and build good relationships with folks who are already in the field, that is support that can't be underrated. I thought at first that this was specific to smaller businesses but it turns out that the more I look into it, the more many successful businesses of all size do it, and do well because of it.

4) Public service announcements are very good as well. If you are just doing tech support, and the local radio station has an open hour or so, call up every time there is a major virus outbreak and let folks know. Or buy advertisement space in these cases, or the like.

marcamillion 13 hours ago 1 reply      
I would love to see a framework developed - like Lean/Agile Startups - that specifically deals with a scientific process for startups across all/most verticals.

Just like I can use the principles of Agile Software Development to build my product, I would love something similar for marketing.

I know there are many theories - AARRR from Dave McClure is one that jumps to mind, along with the long-tail of SEO landing pages like BCC & Patio11....but there isn't a coherent framework, or rather I don't know of one, that pulls it all together.

Anyone care to take a stab at this?

I am sure many founders would find this immensely useful.

tlogan 10 hours ago 0 replies      
We are making all of these mistakes right now.

Currently, we have the problem that somehow the majority of people signing up for our service are not potential customers at all.

The main problem for us is that the majority of the blogs and advice we got are actually about how to acquire customers in the consumer segment.

There are very few blogs on how to acquire customers that are small businesses via the internet.

sunwooz 14 hours ago 1 reply      
I commented on your blog but I thought I'd repeat it here to get more answers :)

How do you determine if your product is something people don't really want? I'm looking for my hypothetical early adopters (new and young restaurant owners) and a lot of the older restaurant owners don't care too much about the product. How do you determine whether your product is not ready for the mainstream or whether people just don't care about the problem and/or solution?

kiba 15 hours ago 3 replies      
Most startups don't fail at building a product. They fail at acquiring customers.

Citation needed.

rmah 16 hours ago 0 replies      
He's talking about the promotions aspect of marketing (getting the word out) and sales.
Intel Core 2 Duo Remote Exec Exploit in JavaScript 1337day.com
184 points by mef  12 hours ago   95 comments top 28
duskwuff 9 hours ago 3 replies      
By my reading, this is 100% fake. My best guess is that the author has an unstable computer, and they believe that occasional crashes while running parts of this JS mean that their exploit is working. Another possibility is that they tried to reimplement a C exploit in Javascript without understanding the difference between the languages. Either way, it doesn't work.

There's all kinds of bizarre silliness in the "exploit". They're passing URL-encoded x86 assembly to unescape() in a void context, as if that'll somehow execute the code in the result. (This technique is sometimes useful in heap sprays, but they aren't using it in a way that would work for that -- in particular, they aren't creating NOP slides or saving the result anywhere, so the resulting code would be almost impossible to hit.) They're claiming to have a "microcode VM" with a "scrambler + dynamic encoder + multi-pass obfuscator", but no such thing is in evidence. There's sillier things still, but I'll leave it for now.

I've run the PoC code, because I don't see anything to fear, and, as expected, it does nothing. The "Check vuln" button always returns "your CPU isn't buggy", because it's simply checking that the "test()" function returns 1 (which it does), and the "PoC Run!" button throws an exception, because it ends up assigning "[object Object]NaN" to the global "unescape" and attempting to call it. There is no way in hell that this code could ever have anything resembling the intended effect, on any Javascript interpreter, platform, or architecture.
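As a small illustration of the unescape() point above, here is a sketch (with a hypothetical payload string; `unescape` is the legacy Annex-B global available in browsers and Node):

```javascript
// unescape() just decodes a percent/%uXXXX-encoded string and returns it.
// Calling it in a void context -- as the "exploit" does -- produces a
// string that is immediately discarded; nothing is executed.
const payload = "%u9090%u9090%u4141"; // hypothetical "shellcode" encoding
const decoded = unescape(payload);

console.log(typeof decoded); // "string" -- it's only data
unescape(payload); // void context: the result is thrown away, no side effects
```

The decoded bytes are only ever string data in the JS heap; without a separate memory-corruption bug that redirects execution into them, decoding shellcode achieves nothing.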

jimrandomh 11 hours ago 5 replies      
Here's a quick summary of what this is. It's a partially-obfuscated piece of malware, which claims to demonstrate a zero-day (that is, unpatched and previously unknown) security vulnerability affecting Intel Core 2 Duo and Intel Atom processors, allowing privilege escalation from inside a Javascript interpreter up to kernel memory. I don't know whether it actually works, since I'm not brave enough to experiment with it, but it's likely that it does.

If this works as advertised, then if you have an affected CPU, it is a zero-day exploit affecting every web browser on every operating system, both desktop and mobile, as long as you have Javascript enabled. Until a workaround has been found, any site which serves you Javascript or any of its advertising networks could use it to give you malware.

If you are using Noscript and also blocking Flash, then you are probably safe. To protect yourself, you should, first of all, use ad-blocking software, because ad networks are more likely to distribute malware than the sites they advertise on. Second, you should use only the most security hardened browser, which is Google Chrome; it's not clear whether Chrome's hardening will actually help, but it's likely that it will, and also that it will be the first to have a workaround. And third, you should be immediately suspicious if your browser crashes unexpectedly.

olalonde 10 hours ago 1 reply      
112 points right now and still no one has confirmed the exploit is working... What happened to the good old "extraordinary claims require extraordinary evidence"?
Timothee 12 hours ago 1 reply      
So what's going on here?

I have been trying to follow along but I'm confused why anything happens.

There's a test button that calls ThreadProc_dbg(bug), which then calls test(result), which in turn has some assembler code commented out and finishes with:

return 0;

The variable 'result' is (visibly) untouched by the function, but ThreadProc_dbg tests its value to see if the processor is vulnerable or not. So just the test() function has the good stuff (assuming it works). So either the assembler code does something even though it's commented out, or the unescape is not happy, but I'm not sure why…

I haven't looked too much at the code that actually crashes the computer (or whatever it does), since just the test puzzles me.
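The puzzlement above can be sketched like this (a hypothetical reconstruction: the function names come from the PoC, but the bodies are illustrative). The key point is that JavaScript passes numbers by value, so test() can never write into the caller's variable:

```javascript
// Hypothetical reconstruction of the PoC's "check" path, as described above.
function test(result) {
  // In the PoC the interesting assembly is commented out, so `result`
  // (a local copy -- numbers are passed by value) is never used.
  return 0;
}

function ThreadProc_dbg() {
  var result = 0;
  test(result); // cannot modify `result` in this scope
  return result !== 0
    ? "[+] your CPU is buggy!"
    : "[-] your CPU isn't buggy!";
}

console.log(ThreadProc_dbg()); // always the "isn't buggy" branch
```

Under these assumptions the check's outcome is fixed before it runs, which matches the reports below that the button always says the CPU isn't buggy.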

js4all 9 hours ago 0 replies      
An explanation attempt: this demo consists of two programs: a test loop, which gets exploited, and the malicious code. The test loop needs to run until patched. It runs completely from the cache. When the exploit runs, it modifies the first 4 bytes of the cached loop into 4 NOPs via the cache exploit. When the change happens, the exploit is successful.

This test code is safe for c2d users to try. It just checks if the cache modification is possible.

A real exploit would combine this with other exploitation code and would change the machine code of the test loop into a jump or a call.

The really scary part of this is that it is possible to patch code despite access rights. If the loop is really changed, I have no doubt that this can be made into an effective exploit.

brohee 1 hour ago 1 reply      
Whoever upvoted that should seriously consider a career away from computers. Away from anything requiring critical thinking actually.

This is on par with some 90s hoaxes claiming some emails could burn your CPU.

The code isn't even hard to follow AT ALL! There is no way it will ever display anything but "<h1>[-] your CPU isn't buggy!<h1>". There is like 5 lines of very simple code to read to achieve this conclusion.

Very disappointed in Hacker News.

fijal 3 hours ago 0 replies      
This sort of article makes me lose faith in Hacker News. It's at 166 points by now and getting more, yet when you look at the code it's obviously fake. It does not do anything. Why do people who cannot confirm or deny such claims upvote this? You don't even have to read the code - it's blatantly obvious you cannot have an exploit using a bug in CPU cache strategy that affects all the JS vendors!!!
trentmb 12 hours ago 1 reply      
I'm not smart, can someone give me a rundown of what's going on here?
espes 11 hours ago 1 reply      
Relevant: http://www.cs.dartmouth.edu/~sergey/cs258/2010/D2T1%20-%20Kr...

(The PoC as it is doesn't actually do anything...)

verroq 11 hours ago 0 replies      
Has anybody confirmed this is working yet? If not it seems like an elaborate joke.
sown 11 hours ago 0 replies      

I'm dumbfounded at how much more clever and sophisticated attacks get. It will never end! I fear that I cannot be of much use anymore.

I remember back when buffer overflows was the exploit and I viewed it as some kind of sorcery, even though I understood it.

I guess so long as software keeps getting written, exploits can be found, and if you plumb the depths of specification, you can find holes, but they're so much harder to find now. :(

Maybe I'm feeling my age? Security is a game for the young? Or at least more energetic.

sedev 10 hours ago 1 reply      
Allowing JavaScript is only going to get closer to being equivalent to allowing untrusted, unsigned code on your machine. Atwood's Law applies to malware too.

Before I edited this comment, I had a laugh at the expense of people who think I'm in some way misguided for using NoScript and complaining when sites break with JavaScript off. That was wrong. I think that those critics are also wrong, though, and this sort of thing is why. Even if this particular code is a non-starter, the plausibility of this kind of threat, this kind of nightmare scenario, is a huge problem. JavaScript is a general-purpose programming language that's present on nearly every user-facing computer in the world, with all the security issues that come with that. It is in some ways the world's biggest and most-rewarding malware attack surface. A working 0-day attack in JavaScript itself could be worth millions or billions in the right hands.

kgc 11 hours ago 1 reply      
This placed a malicious file in my cache. I'd recommend not visiting for now.
munin 9 hours ago 0 replies      
so the horror story here appears to be that from a javascript application i could get a blind write to any location on the system.

this is powerful but undirected. locations of important code have been randomized in your operating system for quite some time. if this technique even works, to turn it into an 'exploit' you would need to know the location of the code that you want to patch, and knowing this requires yet another exploit...

kristopher 12 hours ago 2 replies      
With V8[0] and Nitro[1] having gone mainstream, it has never been easier for these kinds of exploits to exist on the Web.

[0] https://developers.google.com/v8/design#mach_code

[1] http://www.webkit.org/blog/214/introducing-squirrelfish-extr...

chubot 11 hours ago 1 reply      
So how does a bug in the CPU cache controller cause a remote execution exploit? You can write an exploit into memory, have it cached somehow, trigger the bug, and then the CPU will execute the wrong data in the CPU cache?
zht 12 hours ago 0 replies      
I don't have a C2D processor to test this with, but does this work on all browsers or some subset of the popular ones?
Bockit 12 hours ago 0 replies      
If I have a Core 2 Duo, should I be concerned?
EricDeb 11 hours ago 0 replies      
Why would they obfuscate this if they intended to publish it? Maybe this means it was not meant to be published...
bobobjorn 8 hours ago 0 replies      
Tested it on my c2d (P8700). In chrome it claimed my cpu is not buggy, and in firefox the script didnt even work.
level09 8 hours ago 0 replies      
Seriously, how do people go about finding bugs like these?
It looks like they start from assembly code, then try to trigger that code in javascript.

Anyone tested this?

waitwhatwhoa 12 hours ago 1 reply      
PoC did not work on a Core2Duo L7100
KeyBoardG 11 hours ago 1 reply      
Just opening this link threw security warnings from my antivirus. As such, I'd pull this from HackerNews. AMD processor.
dontbestupid 10 hours ago 0 replies      
this does nothing, nothing at all, it's fake.
ObnoxiousJul 9 hours ago 0 replies      
Okay, I read the code.
It mainly writes "you have been hacked" on a web page.
End of story.
robertelder 10 hours ago 1 reply      
No wonder it doesn't work. The </html> tag is in the wrong place :p
zmonkeyz 11 hours ago 0 replies      
More importantly will Apple give me a replacement for my 2008 IMac? :P
robertelder 12 hours ago 0 replies      
Everyone Should Set Their Own Salary lincolnloop.com
4 points by gglanzani  1 hour ago   1 comment top
kiba 2 minutes ago 0 replies      
Standard deviation is only 5 dollars? Sounds like everyone makes the same amount of money.
Binary data processing in JavaScript varunkumar.me
4 points by varunkumar  1 hour ago   discuss
Why Lisp macros are cool, a Perl perspective (2005) warhead.org.uk
53 points by lobo_tuerto  11 hours ago   7 comments top 3
btilly 35 minutes ago 0 replies      
How did this wind up here?

For the original, go to http://plover.com:8080/~alias/list.cgi?2:mss:234:200507:hokn... and see the discussion we had about it then.

It is also worth noting that MJD has the distinction of being the highest rated speaker at Oscon ever. If there is any possibility of interest, he is worth watching because he is incredibly informative.

See http://perl.plover.com/ for some of his older Perl writings.

berntb 4 hours ago 0 replies      
Lisp people might enjoy this short specification for Perl 6: http://perlcabal.org/syn/S06.html#Macros

"Real" macros in a non-lisp language [edit: I mean, a language without the parse tree explicitly in the syntax]? Well, it'd be wonderful if it works.

Edit: re Dominus, his Higher Order Perl is on the net. http://hop.perl.plover.com/

mmphosis 7 hours ago 3 replies      
"Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in." - Larry Wall

C's macro system, for example, is so unreliable that you
can't even define a simple macro like

        #define square(x)      x*x

How do you set the value associated with the "foo" property? Oh, you use "setf", which rewrites

        (setf (get x foo) 1)

to

        (LET* ((#:G847 X) (#:G848 FOO))
          #:G848 #:G850)))

but you don't have to know that. It just works.

I agree about C macros. But, show me the source for:

    (defmacro setf ...

and you may be quoting Larry Wall.

Vim esckeys ksjoberg.com
28 points by KevinSjoberg  8 hours ago   5 comments top 4
kator 2 hours ago 0 replies      
To the OP, Thanks, I knew why it was delaying but didn't realize I could disable the behavior!!

This leads back to the very old days of serial terminals and the horrible ^[ prefix in many terminals' function and arrow keys. I remember fondly working with Wyse60's and some other terminals that used ^A instead.

Imagine trying to detect ^[{randomDelay}O{randomDelay}C -vs- the user typing ^[{randomDelay}O

The Wyse60 used left=^H, right=^L, up=^K, down=^J so there was no need for the delay to detect the sequence.

Back in the old days Vi would detect this lack of multi-character function keys and not use a delay.

On a 1200 baud modem it was very nice; however, later, as DEC and ANSI multi-byte sequences became more popular, this became much more painful.

See: http://en.wikipedia.org/wiki/Termcap for more terminal insanity fun.

On a fun note, I have a customer with a 1987-era copy of Microsoft Excel for Xenix (yes, it really existed).

Each time we upgraded their system to a more powerful cpu I had to patch the binary, as it literally used a spin loop to delay long enough to detect the keys (never mind that tty settings can help with that at the kernel level). The last time I had to make it work on a recent core I literally set the "spin loop counter" in the millions to get it to work via telnet on a local network where the keys come fast and with low latency!

It took me a couple hours the first time with a debugger to find this little gem, so I have a README in the binary's lib dir that explains how to update it. Every couple of years I have to dig in and figure out the right value! Fun!

EDIT: It's possible the magic was actually happening in termcap with the original vi. I didn't dig into the source to verify. :)

imurray 3 hours ago 1 reply      
I'd wondered why hitting 'O' after hitting escape led to such a nasty delay when running vim in a terminal. Now I know.

For those that still want arrow keys in insert mode (sacrilege! ... but I do), this is probably the .vimrc option you want:

    set timeout timeoutlen=1000 ttimeoutlen=100

Symmetry 1 hour ago 0 replies      
Having remapped my caps lock key to be escape, I find myself using it fairly often, not just with vim but in all sorts of ways. Considering I never used it as caps lock before except accidentally, I found the remapping a good investment.
aidos 3 hours ago 0 replies      
I only just figured this one out myself a few days ago. Another alternative is to map jj or jk (or whatever) to esc in insert mode. Then you just need to get out of the habit of hitting esc. Of course, then there's a delay on hitting j but I've found it less of an annoyance.
Lessons I learned the hard way with Startups.com (after multiple pivots) thenextweb.com
3 points by benjlang  1 hour ago   1 comment top
garzuaga 0 minutes ago 0 replies      
It sucks when it happens, but it's good to remind ourselves that it could happen. Mostly because we'll keep starting new companies all our lives (or so it seems).
XBMC For Android xbmc.org
147 points by mmahemoff  1 day ago   41 comments top 13
w1ntermute 23 hours ago 2 replies      
> Currently, for most devices only software decode of audio and video is hooked up.

This is a good start, but ultimately, you need hardware decoding. Even if a modern phone or tablet could decode 720p x264 in real time (many cannot), the battery usage would make it entirely untenable.

drivebyacct2 18 hours ago 1 reply      
Remember all of the cheap, hardware decode capable ARM devices we've seen lately?

This is awesome, awesome, awesome. XBMC is an impressive piece of software. uPNP AV/DLNA, AirPlay, can play from every network sharing technology I've ever heard of.

jsz0 20 hours ago 3 replies      
What about Plex? IIRC it's a fork of XBMC and has been available for both iOS and Android for the last 6 months or so. Nothing against XBMC but Plex is probably a better choice right now if you want something stable.
CrazedGeek 23 hours ago 2 replies      
For those more knowledgeable than I: would this run on Google TVs? It looks like it's an NDK app, and I was under the impression that GTV didn't support the NDK.
kgutteridge 22 hours ago 1 reply      
Syncing up with the repo, it will certainly be interesting to see how this runs on the NexusQ that I received from I/O
donniezazen 22 hours ago 2 replies      
Ouya makes Android a major gaming platform and XBMC will make it a dedicated media center. Android is truly realizing its potential.
te_chris 16 hours ago 0 replies      
Awesome, I've been thinking about building an HTPC, but beginning to balk at the power cost compared to some sort of streaming device and low power storage unit connected over N wifi. If there's a box that has S/PDIF out I'll buy it straight away (for high-quality D/A audio conversion).
beefsack 4 hours ago 0 replies      
There's a built APK up on Miniand: https://www.miniand.com/forums/forums/1/topics/136
mmanfrin 22 hours ago 0 replies      
This is wonderful news for OUYA preorderers.
stcredzero 23 hours ago 1 reply      
This should result in another surge of OUYA contributors.
hu_me 22 hours ago 0 replies      
i think this plus the fact that nexus q has been hacked to run apps might be a boon for nexus q.


watty 23 hours ago 1 reply      
Will this work with the Nexus Q? Doesn't seem to work on my HP TouchPad.
badboy 23 hours ago 0 replies      
Yeah, "leaked" is quite a statement here. The full source code is on github. Should be clear that someone will post a debug build of that, right?
Elon Musk Fireside Chat Video spaceindustrynews.com
108 points by littlesparkvt  22 hours ago   14 comments top 8
suprgeek 7 hours ago 0 replies      
Even more exciting - He is going to go into details about the Transportation Idea (Hyperloop) that he alludes to in the video in a few weeks https://twitter.com/elonmusk/status/224406502188916739.

Can't wait...

jc4p 8 hours ago 0 replies      
While I absolutely loved this video, I absolutely despised the former Facebook employee's long winded question that wasn't relevant to anything Elon Musk could've talked about but was instead an advertisement.
beefman 19 hours ago 1 reply      
Which company he was asked (but declined) to be cofounder of? Ah, must be Solar City.
"SolarCity was founded in July 2006 by brothers Peter and Lyndon Rive, based on a suggestion for a solar company concept from Elon Musk."

And the douchebag remark was directed at Eberhard? No, it was Randall Stross.

Thiel and he riding in an F1? I thought it only had one seat. No, three seats:

MikeCapone 9 hours ago 1 reply      
Great one. Elon is so fascinating. We definitely need more people like him. I wish I could see what our civilization would look like if more of our smart and driven people tackled important problems rather than go into finance or Web 2.0 startups..
guynamedloren 20 hours ago 0 replies      
Recent HN discussion about this (and all things Elon Musk):


dojomouse 3 hours ago 0 replies      
I'd bet a large chunk of change that the HyperLoop is some variant of vacuum tube maglev. Not an idea Musk came up with by any stretch, but he's someone who might actually be able to make it happen, which would be very awesome. His patent is going to hit a fair bit of prior art though.

Also think his dismissal of fuel cells is a bit disingenuous - Nissan apparently have them down to $50/kW (volume production estimate) which, given the low cost of energy storage ($/kWh), means they could be entirely viable at least as a range extension option.

But still, overall, admire the hell out of the guy. Legend.

ajays 15 hours ago 3 replies      
You know what I'd love Elon Musk to tackle next? The education system in the US. We are producing a generation of illiterates here. Today's economy demands a more educated workforce (face it, the manufacturing jobs are disappearing), but we're getting the opposite.

Yesterday, at the doctor's office, there was a young kid there (college age). He was writing out a receipt for my copay. He had a hard time spelling out "fifteen", and wrote "fifthteen" after several attempts. Yes, English was his first language. And the fact that he's working in the doc's office means he must be at least average.

sgt 21 hours ago 1 reply      
Already submitted to HN, but I'm sure a few missed out so thanks for posting again.
Minecraft Accounts Exploit github.com
26 points by wedtm  9 hours ago   10 comments top 3
pilif 8 hours ago 5 replies      
I see no mention of notifying Mojang. And even if they did and Mojang is late with patching, I don't think it's very nice to post a public report on a weekend. Mojang is still a comparably small company and I'm sure nobody there is thrilled about fixing security flaws over the weekend.

This is, IMHO, not totally what I would call responsible disclosure.

alt_ 3 hours ago 0 replies      
"UPDATE: Woohoo! Things are back up and running perfectly! Thank you all for being patient while things were fixed. Also major props to Grum, Dinnerbone, and Leo who were out of bed and in to action in the blink of an eye!"[0]

[0] http://www.mojang.com/2012/07/houston-we-have-a-problem/

buttscicles 37 minutes ago 0 replies      
I'd have thought ensuring a session ID was only valid for a single account would have been the first thing to test when developing an authentication system. Perhaps not in Sweden.
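The missing check described above is tiny; here is a hedged sketch (all names hypothetical, token generation demo-only) of binding a session token to the account it was issued for:

```javascript
// Hypothetical sketch: a session token must be bound to one account.
const sessions = new Map(); // token -> username (assumed in-memory store)

function issueSession(username) {
  // Demo-only token; a real system would use a cryptographic RNG.
  const token = Math.random().toString(36).slice(2);
  sessions.set(token, username);
  return token;
}

function authorize(token, claimedUsername) {
  // The reported flaw: accepting any valid token for any account.
  // The fix: the token must map back to the claimed account.
  return sessions.get(token) === claimedUsername;
}

const t = issueSession("alice");
console.log(authorize(t, "alice")); // true
console.log(authorize(t, "bob"));   // false -- token is account-bound
```

A unit test asserting exactly the second case (a valid token presented for a different account) is the "first thing to test" the comment refers to.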
Marc Andreessen “tried really hard not to invent anything new” horsesaysinternet.com
67 points by oscar-the-horse  17 hours ago   30 comments top 10
confluence 11 hours ago 4 replies      
Let me just ask - When has anyone ever actually invented anything totally new?

The telephone? It was referred to as the "speaking telegraph" (telegraph + speakers).

The car? Horseless carriage (engine + wheels + steering + brakes)

The plane? Glider + engine. The Wright brothers invented powered flight - not flight. The difficulty was getting the lift-to-weight ratio high enough with a primitive heavy engine on board. Gliders already existed - you just couldn't go anywhere with them! You can't just jump off a cliff and glide to China.

Google? A more advanced HITS algorithm + inktomi's commodity cluster map-reduce architecture + inktomi's PPC. Indeed, had inktomi doubled down on search instead of their CDN, we might well be talking about inktomi and not Google.

General and Special Relativity? Nope - http://www.quora.com/If-Albert-Einstein-had-never-existed-at....

There are no really new ideas out there - merely combinations of old ones that "hang in the air". There are no new ideas - merely old ones combined in unique ways.

Everything is a remix (https://vimeo.com/14912890).

paulsutter 11 hours ago 1 reply      
It's really hard to build a company. It's really admirable to build a successful tech company, no matter how lofty or straightforward the project. The Instagram guys are also heroes to me.

Zip2, Elon's first company, didn't invent anything new or solve any hard problems either (it started out as an online yellow pages). But it was really important that Elon learned from that process. And I know that he enjoyed it; I remember meeting with him when they were 10 guys in one room. And Zip2 was worth $300M to Altavista, which is how Elon paid for the McLaren mentioned in the recent video.

I'm also glad that Peter Thiel and Max Levchin and others have started to create social pressure for us to think about how to do big things that really benefit society. And maybe we only need to feel that pressure after we've done one or two successful things. As individuals it's also ok to ignore it and say hey I really enjoy building companies and don't need to change the world in a huge way.

gleb 15 hours ago 0 replies      
Note that he is talking about NCSA Mosaic, not Netscape
ippisl 15 hours ago 0 replies      
"we borrowed protocols, formats and even code from the world wide web project ... our goal:easy to use, fun graphical front end"

It's a classic university commercialization effort: take IP from a university for some important problem, integrate it and give it commercial appeal, and sell.

But it's not as easy as writing some CRUD site.

stcredzero 14 hours ago 3 replies      
Be real with yourself: Cleverness is a finite resource, even yours. Leverage it in the most efficient way possible.

(So alpha-geek pissing matches are clearly a waste. Everything should be focused towards furthering your company's goals while avoiding bugs and making refactoring easier.)

oscar-the-horse 17 hours ago 0 replies      
Here's one of the quotes from the video (Marc):
"We tried really hard not to invent anything new. And we also tried not to solve any hard problems. Which makes it a lot easier to actually get something done"
mdonahoe 10 hours ago 0 replies      
My favorite part of the video is when he advanced the slide, and you realize you are looking at an overhead projector with a printed transparency instead of PowerPoint.

1994, simpler times.

_delirium 13 hours ago 0 replies      
Seems pretty standard for a commercializer. The goal is to take some existing research funded by someone else (out of a university usually, though sometimes a corporate lab or government lab) and turn it into a product with the minimum additional modifications.
DivisibleByZero 12 hours ago 1 reply      
There's an ongoing debate about whether startups should focus on harder problems.

Is there really a debate or am I missing something? This is honestly the first thing I have seen supporting not solving a problem.

Really I think there is a place for both. He mentions avoiding the problem of search, which happened to be the foundation of a particularly successful company.

pmboyd 11 hours ago 1 reply      
Netscape no longer exists as a company (or meaningful subdivision of AOL) so that might not have been the best strategy. Copying got them ahead quickly but didn't keep them there.
The Remembrance Agent remem.org
20 points by eloisius  9 hours ago   3 comments top 2
mcculley 36 minutes ago 0 replies      
I used this for a while approximately 15 years ago. It was excellent and while I was composing an email it would find things in my history that were useful to the conversation. Alas, I can no longer do all email and web browsing in Emacs. I really wish such a thing existed for Mail.app in OS X.
ojilles 3 hours ago 1 reply      
So this is GNU Clippy? (As description of what this does
Show HN: Easily block Glassdoor app notifications mypermissions.org
9 points by benjlang  2 hours ago   2 comments top 2
almost 27 minutes ago 0 replies      
Is this a "thing" I'm not getting?

Do HN readers really need basic Facebook usage help?

erans 1 hour ago 0 replies      
I really hate those notifications. This also works for other apps as well!
Read the masters federicopereiro.com
162 points by fpereiro  1 day ago   49 comments top 17
DanielBMarkham 23 hours ago 3 replies      
I can't agree enough with this, except the part about staying with it. I think sometimes we have to come at the same material from several different directions before it actually makes sense to us. Perhaps it's our preferred mode of learning, or maybe just how old we are and our personality types. Don't know.

So for a long time I avoided a lot of the literary masters. As a programmer, I thought they were way too artsy and "fluffy" for my tastes. I wanted something with hard science and boolean logic in it, dammit.

But around 40 I listened to the Learning Company's "Great Authors of the Western Literary Tradition." It was like a guided tour of a huge number of masterpieces. From this overview I could pick and choose what to consume. As I read each work, I had already been "prepped" by listening to a lecturer describe what made the work so outstanding.

So I picked up "Anna Karenina". Wow! Tolstoy could sketch a character like nobody else. I read some Dickinson. What a great, simple, yet complex way she had of describing inner emotional states!

Still couldn't get all of it. Joyce is on my list, but I procrastinate. I had another go at Melville and loved it, but I couldn't generate enough momentum to make it through Moby Dick. Both writing and reading styles have changed. I'd love to learn Greek and have a go at the true classical works, but I will never have the time, sadly.

I'm hoping to get another overview or introduction and then make a go at some of the rest of the material. I've loved reading the literary masters.

What I find is that you need a preparation or background to really absorb and appreciate the masters. This is the same as having to have a background in baseball to understand a baseball game. Otherwise, without context, it's very difficult to understand what parts work, what parts don't, and where the beauty is. (This is called a liberal education, by the way). The more broad and deep background you have, the more you can appreciate the masters in many fields.

Also I'd separate cargo cult liberal arts from actual understanding. To me there are tons of venues that exist to convince you that you're smarter than some other slob. They pitch quite a bit of snob appeal. I'd avoid that. You end up thinking you have class when all you're really doing is running around in a mob consuming whatever was on NPR last week. To me, developing a sense of what the crowd thinks is beautiful versus truly coming to a personal grip with the masters is completely missing the point. I'm sure there's a social aspect to art consumption, but to me a true master stands the test of time. While it's possible that something can be popular today and 100 years from now, for me using social proof as some form of merit for masterworks is almost diametrically opposed to the entire concept of what makes art truly great in the first place.

Fair warning, however: once you start consuming works from the masters it makes mediocre works hard to stomach. Oddly enough, bad material is fine. I still love me some pulp fiction and trashy pop music. It's the stuff that tries to be highbrow but you know is going to be gone with the wind in ten years that's impossible to take.

AndrewO 1 day ago 0 replies      
Or, if the masters are too hard at first, see if their pupils have written annotations. I tried Turing, couldn't even understand why what he was talking about was important (or why his "computers" seemed so much... lamer than the ones I was used to), and then found Charles Petzold's "Annotated Turing".


He takes Turing's "On Computable Numbers..." and mixes in chapters giving the necessary background on the history of mathematics, number theory, logic, etc. in-line (albeit, it takes about 100 pages to get to the first sentence of Turing's paper). I whole-heartedly recommend it.

SiVal 12 hours ago 2 replies      
More silly, macho, "real programmer" nonsense. It's like saying that the best way to learn calculus is to pick up a calculus book in first grade and read it over and over until it all becomes clear. Yeah, that's the express lane to mastery.

If you want to master something, sequentially master its prerequisites and methodically work your way up. If you do that, then a lot of the ideas of the masters will have begun occurring to you before you even read their works, and you will be the sort of person they were writing for.

cruise02 21 hours ago 3 replies      
> Another thing that resonated deeply with me was how Newton studied a book by a French mathematician (I don't remember which one): he started reading the book -- when he found it too hard, he started over. That's what he did until he understood the whole thing. So, he didn't go and read something more basic, or tried to found someone who could explain it. He just stayed with the source until he groked it.

Approach this advice with caution. Yes, it might work great if you're working with a great book that has good coverage of all the fundamentals of the topic and presents it in a logical order. Many books don't do this. Starting over when you get stuck can just throw you into an endless loop if the required information just isn't in the book you picked to learn from. Know when to branch out to other sources of information instead of looping back. Don't be afraid to make a quick interlude to Wikipedia if you need to.

(Personally, I prefer to just backtrack to the information I missed instead of starting over from scratch. Starting over seems to waste a lot of time, but if you missed one concept, there's a chance you missed more.)

shizzy0 1 hour ago 0 replies      
When he writes that Newton, too, just read and re-read the masters, he's wrong.

"How Newton was introduced to the most advanced mathematical texts of his day is slightly less clear. According to de Moivre, Newton's interest in mathematics began in the autumn of 1663 when he bought an astrology book at a fair in Cambridge and found that he could not understand the mathematics in it. Attempting to read a trigonometry book, he found that he lacked knowledge of geometry and so decided to read Barrow's edition of Euclid's Elements."


When Newton failed to grasp something, he backtracked. When he failed to understand that, he backtracked again. I think many people, when confronted with a failure of understanding, throw their hands up and say, this isn't for me. Newton, instead, continued to work his way back up the chain until he found material that helped him understand.

I think this article does a disservice to the lessons we might learn from Newton by suggesting that he just smashed his head against the same book until he understood it.

rokhayakebe 1 day ago 2 replies      
Or the "original communicators." When you start to read the masters of every field, you realize they are in some sort of a conversation that spans centuries and you get to participate. One problem with this approach is these are dead teachers so no question & answer session for you, and usually the material is hard to comprehend at first. That is exactly what you need it, a material that elevates you, something that pushes you from understanding less to understanding more. Lastly, during this exercise you will find amazingly that there are really only a few original teachers and that most of what we read today are simply digest of what was originally written and discovered by a handful of experts. So it is probably in your best interest to take it straight from the horse's mouth.

Yes, I am repeating Adler's How To Read.

EricDeb 17 hours ago 1 reply      
I actually believe the mindset this author has (along with many university professors) makes learning far more difficult for students and discourages them from staying in STEM (math and science) degrees.

I have consistently found that for me to tackle difficult concepts I need multiple points of view, specifically views that simplify the topic tremendously. My professors never encouraged me to seek multiple sources and usually pushed overly complex, decades-old textbooks on my peers and me. The "masters" typically write for a niche, university-based audience and do not tailor their original works for the masses.

I certainly agree that if one wants a complete understanding of a field or subject they should eventually study original works, however to encourage them as a starting or leverage point for understanding difficult subjects is poor advice in my opinion.

mjn 1 day ago 0 replies      
Sometimes this leads to a considerably different flavor as well, which is interesting. Often papers are remembered for their "results", but sometimes the results are only a small part of the paper, and positioned much differently by the original author.

This is the case with Turing, for example. If you go by Turing in the secondary literature, he comes across as very mathematical, formal, rigorous. Which he was, but he was also very aesthetically oriented, playful, and philosophical. Many of his original papers are really quite "weird" in a way, at times even allusive/metaphorical. So you get a bit different view of him if you read the originals. (Due credit: I was reminded of that in Turing's case by this article, which aims to convince humanities scholars that they should read Turing: http://www.furtherfield.org/features/articles/why-arent-we-r...)

ekm2 19 hours ago 2 replies      
A few days ago, I stumbled on Newton's Principia in my college library, and I could not help noticing how accessible he was relative to his pupils. On the very first page, he gives only one rule for finding derivatives that applies in ALL cases, plus worked-out examples of how it works. I felt like I had wasted too much of my time in college reading too many useless tomes.
StacyC 23 hours ago 0 replies      
I'm not a programmer but I loved this line from the post:

But if you don't mind too much feeling like a baby, and if you can create some space in your life where you aren't forced to be an adult, then give the masters a try.

I think this is very good advice for many things.

eshvk 18 hours ago 0 replies      
This is so true: I remember in high school studying calculus from a random book which made me think that calculus was just a bunch of tricks for doing integration and differentiation. A few years later, when I first started working through Spivak's excellent book on calculus, it was one of the hardest things I had ever done, but the sheer magnificent beauty of the structure on which most of modern calculus is built, and of how it gradually evolved, came to light. It is really both a journey in history and time and an exercise in building mental models that recur through so many branches of mathematics.
Irishsteve 1 day ago 1 reply      
Nice idea; however, if you lack fundamental knowledge in an area, prepare for this.


(P.S. Sorry about the title. I guess that's how they get more eyeballs on it.)

olalonde 5 hours ago 0 replies      
Be cautious when reading the "classics" of fields such as physics or philosophy (unless you are interested in the history of the field). Socrates was wrong about a few things and so was Einstein.
skardan 12 hours ago 0 replies      
There is an interesting question: does the advice "read the masters" also apply to teaching?

My teacher at university pointed me to the Moore method, named after the great American mathematician Robert Lee Moore. Moore selected students without previous knowledge of the subject and let them "invent" the subject: definitions, theorems, proofs.


So great teachers do not "teach masters" but rather teach to "think like masters". If you know how to think, "reading the master" feels more like a dialogue.

onemach 12 hours ago 0 replies      
The list in the article reminds me of this post: http://cstheory.stackexchange.com/questions/1168/what-papers...
pbsd 22 hours ago 1 reply      
This is just a nitpick, but why is SRP (Wu) in your list? Looks a bit out of place there.
Why Do We Wear Pants? Horses. theatlantic.com
107 points by rosser  20 hours ago   37 comments top 17
tokenadult 16 hours ago 2 replies      
An oversimplified view of human history from a commentator who has too little acquaintance with non-Western cultures. (Well, all right, I've long accepted the point, which I read somewhere else long before the Internet existed, that European dress shifted from robes to trousers in large part to accommodate horse riding.) But with a cross-cultural perspective, we would consider that the trousers (often in the form of "pajamas") worn by women in China and Southeast Asia developed in cultures where peasant women certainly did not have the opportunity to ride horses. Rather, in those places agricultural work in paddy fields made clothing that allowed a wide range of movement with modesty very helpful. There isn't one single human story about how trousers developed as a form of dress.

For those of you who like to learn about how study of human cultural behavior goes awry from too little exposure to non-Western cultures, see

Henrich, Heine, Norenzayan (2010). "The weirdest people in the world?" Behavioral and Brain Sciences 33, 61-135.



jballanc 17 hours ago 0 replies      
Relevant Dinosaur Comics: http://www.qwantz.com/index.php?comic=1908

It seems like this "treating history as a science" movement that started with Jared Diamond has really caught on...and I love it! Evolution is such a hugely important field for the future, and looking to the past is turning out to be a really fruitful way of learning more about it.

Another example I like: how do you determine when humans first started wearing clothing? Trace the timing of the genetic divergence of human hair lice and human body lice (lice require a hairy/furry material, so body lice can't thrive until humans wear clothes). More details: http://news.ufl.edu/2011/01/06/clothing-lice/

arohner 18 hours ago 1 reply      
Supposedly, high heeled shoes are also related to horses.

Men's boots were heeled to make stirrups more effective. Heels then became a status symbol, as in "I wear heeled shoes because I own a horse, and therefore I am rich." Then the fashion jumped over to women's clothing.


afterburner 10 hours ago 0 replies      
From wikipedia:

"Trousers first enter recorded history in the 6th century BCE, with the appearance of horse-riding Iranian peoples in Greek ethnography. At this time, not only the Persians, but also allied Eastern and Central Asian peoples such as the Bactrians, Armenians, Tigraxauda Scythians and Xiongnu Hunnu, are known to have worn them. Trousers are believed to have been worn by both sexes among these early users.

Republican Rome viewed the draped clothing of Greek and Minoan (Cretan) culture as an emblem of civilization and disdained trousers as the mark of barbarians."

So, while the Romans did eventually copy the trousers once the barbarians were overrunning and ruling Roman lands in the western half of the Empire, it's disappointing that the author didn't go further back. Back to the barbarians, which likely goes back to the central Asian peoples.

driverdan 12 hours ago 0 replies      
How about linking to the original work instead of this ripoff blogspam?


JonnieCache 7 hours ago 0 replies      
The story I got told in school was that as the romans advanced through Gaul and into ancient Britain, they were appalled at the leather britches worn by the barbarian tribes. When they finally took (part of) Britain however, they realised that the extreme cold (compared to rome) meant that the trousers were basically essential.

All those barbarian tribes would have been horsemen, so the article fits with this idea I suppose.

jasonkolb 15 hours ago 0 replies      
Looking at it in perspective, it makes sense. Every cool guy is pretending to be a cowboy.

Now, who is going to be the celebrity to go all Socrates on us and re-introduce the tunic?

Interestingly, Julius Caesar was supposed to be some kind of a rebel for wearing his tunic "loosely belted" with fringes. It was the equivalent of long hair in the '60s and '70s for our culture.

Wouldn't it be wild if things went around again--that far?

nsns 7 hours ago 0 replies      
While cultural artifacts get input and evolve from their functional use, such uses can't explain anything, certainly not mass adoption. The main reason for clothes is cultural, and their main use is as a social code (e.g. http://books.google.com/books/about/The_Fashion_System.html?...).
etfb 13 hours ago 0 replies      
I'm given to understand [1] that cars, or at least buggies and coaches and other things like them, are the reason cloaks got replaced with coats and jackets, to the eternal detriment of cool. If you've ever tried driving anywhere with a full circle cloak on, this requires no explanation.

[1] Handwave, handwave... feel free to add [citation needed] there if you want to

doktrin 18 hours ago 2 replies      
Interesting article. I'm perpetually fascinated by the causal chains that have influenced aspects of our lives that we otherwise entirely take for granted.
keiferski 17 hours ago 0 replies      
Another interesting tidbit is that Brazilian clothing and swimwear tend to be more revealing than Euro-American clothing, because the medieval Brazilian man was a sparsely dressed native, whereas the medieval European man was outfitted in heavy armor.

At least, according to this:
(nsfw? Girls in bikinis)

clvv 15 hours ago 1 reply      
What about Native American cultures? They had not seen horses until the Europeans arrived, but they certainly had pants.
mcguire 11 hours ago 0 replies      
To be read while listening to a Tribute to Pants (http://mst3k.wikia.com/wiki/Tribute_to_Pants)!
10dpd 14 hours ago 1 reply      
Pants in the UK = underwear. So why do we wear underwear?
hobb0001 10 hours ago 1 reply      
Why aren't capes still around, then?

(They were made for horseback riding, too)

grumblepeet 16 hours ago 0 replies      
None of this goes to explain Capri pants. Horses be damned...sometimes there just isn't an excuse...
msutherl 18 hours ago 1 reply      
Why Do We Wear Pants? Bicycles.
A World Without Coral Reefs nytimes.com
100 points by raymondh  22 hours ago   37 comments top 7
nkoren 6 hours ago 0 replies      
It's worth noting that a world without coral reefs is hardly unprecedented. Approximately 14,000 years ago, every coral reef in the world would have been destroyed by Meltwater Pulse 1A[1], an event which caused sea levels to rise by 20 metres in just a few hundred years. Although this must have caused the demise of the complex reef ecosystems, coral itself survived, presumably as isolated individuals and very small clusters. Afterwards, sea levels continued to rise rapidly, and the large reefs that we have today could not become established until about 7,000 years ago, when the sea levels finally stabilised. The corresponding ecosystems have evolved only since then.

For this reason, I am mostly unconvinced that the (admittedly horrible) disruption which we are causing today will irrevocably knock things back to a precambrian state. When you look at history of our planet over evolutionary timescales, you see all kinds of disruptive events -- rapidly rising or falling seas; bolide impacts as powerful as tens of millions of nuclear bombs -- which the planet has managed to shrug off with apparent ease. With one exception, most of the harm we are causing seems to be along these lines. So even if the great reefs die off, then my expectation would be that coral itself would continue to survive in niches. 10,000 years from now -- long after we've wised up or died off -- there could very well be great reefs again, and our disruption would be relegated to a small blip in the overall evolutionary record. The planet's losses will ultimately be minimal; the real losers will be our children, who will inherit a less beautiful and wondrous world than the one we know. I've swum through coral reefs, and it's painful to think that the next few dozen generations won't be able to experience that kind of beauty. But from the planet's perspective -- looking at evolutionary timescales -- this may not be a particularly big deal.

The one real and substantial caveat to my nonchalance is ocean acidification. It is truly global compared to other forms of ocean pollution, acts as a disruptor to individual cells rather than to the (more resilient than we give them credit for) ecosystem relationships, and is making an excursion that may be unprecedented in the past 300 million years. In contrast to things like changes in sea levels and temperatures -- which actually happen all the time when you take the long view -- evolutionary timescales give no assurance that ocean acidification will be a survivable event. It is something that merits a much higher level of attention than it's getting.

[1] http://en.wikipedia.org/wiki/Meltwater_pulse_1A

yaakov34 19 hours ago 1 reply      
I really recommend reading the thread at http://coris.noaa.gov/exchanges/coralfuture/coral_future.pdf - this is a discussion by working scientists, and although it's from 2001, the predictions of the demise of coral reefs were already very current then. It's a good introduction to the huge complexity of reasoning about these systems, which involve more feedback loops than a non-expert can even imagine. What I take away from that discussion is that there is no serious researcher in the field who doesn't see the coral reefs disappearing at a huge rate, but there is no consensus about the dominant mechanism; I think (although I am not an expert) that the arguments of those who see pollution and overfishing as the main cause are more persuasive.

For example, a very large fraction of coral reefs around Sri Lanka disappeared in the 1990s, and this apparently had more to do with the fact that people blasted them and hauled them away to be used as limestone in the construction industry, than with any subtle changes in the pH of seawater. In the (very plausible) opinion of some of the scientists from that discussion, most coral reefs will die off long before the pH changes really become significant. It would be great if someone who is a researcher in this field could give us some more recent results and data.

ScottBurson 20 hours ago 4 replies      
Is it completely ridiculous to imagine that ocean acidification could be countered? I'm imagining something like having cargo ships sprinkle sodium hydroxide in their wakes. Could any such plan be workable?
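For scale, here is a back-of-the-envelope sketch of what that idea would take. Every constant below is a rough, order-of-magnitude assumption on my part, not a sourced figure:

```python
# Back-of-the-envelope only: all constants are rough, order-of-magnitude
# assumptions, not sourced measurements.
OCEAN_CO2_UPTAKE_GT = 9.0      # oceans absorb roughly 2.5 Gt carbon/yr, ~9 Gt as CO2
MOLAR_MASS_CO2 = 44.0          # g/mol
MOLAR_MASS_NAOH = 40.0         # g/mol
WORLD_NAOH_OUTPUT_GT = 0.08    # roughly 80 Mt/yr of sodium hydroxide produced worldwide

# CO2 + NaOH -> NaHCO3 consumes one mole of NaOH per mole of CO2 absorbed
moles_co2 = OCEAN_CO2_UPTAKE_GT * 1e15 / MOLAR_MASS_CO2   # Gt -> grams -> moles
naoh_needed_gt = moles_co2 * MOLAR_MASS_NAOH / 1e15       # moles -> grams -> Gt

print("NaOH needed: ~%.1f Gt/yr" % naoh_needed_gt)
print("vs. world output: ~%.0fx" % (naoh_needed_gt / WORLD_NAOH_OUTPUT_GT))
```

Under these assumptions, neutralizing a single year of oceanic CO2 uptake would take on the order of a hundred times the world's entire sodium hydroxide output, before even asking about ecological side effects or the energy needed to produce it.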
mkl 13 hours ago 0 replies      
This TED talk is relevant and very interesting/disheartening: "Jeremy Jackson: How we wrecked the ocean", http://www.ted.com/talks/jeremy_jackson.html

It was enough to make me stop eating fish.

cageface 9 hours ago 0 replies      
There must be some way hackers can help do something about things like this. Maybe a Kiva for environmental issues or something?

Surely some of the brainpower we're burning on social media and cute mobile apps could be redirected this way.

mmphosis 18 hours ago 0 replies      
A World Without...
javajosh 18 hours ago 3 replies      
This is a marketing piece masquerading as advocacy. What's the difference? An advocate won't personally profit from the course of action they are recommending. A marketer will. I say this because this man's conclusion is, basically, "The coral reefs are disappearing. Fund my research more."

This does, of course, put research scientists into a bit of a bind. Presumably, since they are the ones looking at these systems in detail, they are the "early warning system" for any catastrophic change. When they see catastrophe, what should a research scientist do? Write an op-ed piece in the Times asking for more research funding? I don't think so.

I think the correct move is to complete the fucking research. "Completion" means to come up with some really solid conclusions, and if the system is on course to do something nasty, to have a list of actionable steps to change it. Then, you advocate that list of actionable steps, citing your research as the basis for it. Presumably, you will NOT personally profit from those steps (except, perhaps, as a consultant).

Now, I can hear the objectivist/egoist hacker contingent's hackles rise - why shouldn't a research scientist profit from their research? Was not that their blood and tears and insight? Actually, I think that should be open for them - but they should call it marketing, not advocacy, and it should be clearly a matter of personal enrichment. The thing that an objectivist should focus on is the hypocrisy of a "call to action" masquerading as altruistic concern over the environment, when it is really a quite selfish concern for securing one's own funding source. It's not that I'm against securing one's own funding source, but to do it under the guise of advocacy is flat-out wrong.

10 Years of Atari/Atari Games VaxMail textfiles.com
59 points by quadfour  18 hours ago   20 comments top 8
jf 17 hours ago 5 replies      
This archive has been a personal reminder to watch what I say in all digital communication. Even if I think that my post is "internal only" or "private", it can still end up online.

I say this because one of these files has an email that my dad wrote nearly 30 years ago. I'm willing to bet that my dad had no idea that the email he was writing would be read by his son, who would be slightly embarrassed at the fact he was writing in ALL CAPS, and wishing that his dad had used a little more tact in what he was saying.

(The email I'm referring to is in "vax84.txt"; search for "THE FUTURE AT ATARI".)

T-hawk 9 hours ago 0 replies      
Yes, there are some gems in here, nestled among the chaff of new VAX commands and building operational schedules. I found this bit amazing, from vax84:

  This brings up one of many problems with games of skill that
include monetary payoffs ... As an example, consider a
multi-player space war type game where you win money by eliminating other
players and receiving what they have won so far. The house percentage
could be falling into the sun. What do you suppose would happen out in
the parking lot if you overheard the guy in the next console scream "I just
got a ship worth $10,000!" and you had just been about to return that
much to your home base before some turkey blew you out of the sky...

Rusty foresaw EVE Online twenty years ahead of time.

artursapek 1 hour ago 0 replies      
I love their early version of crontab.

      Find out what is todays date, ala 830717 (1983, 7'th month, 17'th day)
as well as what day of the week it is (Sunday) and the standard three letter
abbreviation (coincidently the first three letters of the long name) (Sun in this case). The proper spelling for Wed is WEDNESDAY, by the way.

Look for each of the following and do the appropriate thing (execute
the command file or type the text file):

'weekday'.com ! as in "SUNDAY"
'weekday'.day ! SUNDAY.DAY will be typed
'dow'.com ! SUN.COM will be executed
'dow'.day ! SUN.DAY will be typed
'date'.com ! 830717.com, remember?
'date'.day ! this gets typed
daily.com ! every day (7 days a week, not 5)
daily.day ! this one too
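The lookup logic in that quoted command file translates almost directly into a modern script. Here is a sketch in Python; the file names and ordering come from the quote above, while the function name and rendering are my own:

```python
import datetime

def files_to_run(today=None):
    """List candidate files in the lookup order described above:
    full weekday name, three-letter day-of-week, date code, then DAILY,
    with .COM (execute) checked before .DAY (type)."""
    today = today or datetime.date.today()
    date_code = today.strftime("%y%m%d")    # e.g. "830717" for 1983-07-17
    weekday = today.strftime("%A").upper()  # e.g. "SUNDAY"
    dow = weekday[:3]                       # e.g. "SUN" (first three letters)

    candidates = []
    for stem in (weekday, dow, date_code, "DAILY"):
        candidates.append(stem + ".COM")  # command file: would be executed
        candidates.append(stem + ".DAY")  # text file: would be typed
    return candidates

print(files_to_run(datetime.date(1983, 7, 17)))
# -> ['SUNDAY.COM', 'SUNDAY.DAY', 'SUN.COM', 'SUN.DAY',
#     '830717.COM', '830717.DAY', 'DAILY.COM', 'DAILY.DAY']
```

Same idea as modern cron, except the schedule lives in the filesystem (the existence of a file named for the day triggers it) rather than in a crontab table.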

keyle 16 hours ago 0 replies      
It completely amazes me how nothing has changed. The only difference from today's email is that those VAX mails were usually longer, more polite, and full of nostalgia.

Take baby announcement emails, for example. They're identical, word for word, to today's. I always thought people had only recently gotten better at writing those. But no, the pattern was established back then and, shockingly, we haven't changed the way we're making babies, or changed standards in describing a healthy birth.

Back then, and still today, you could guess the person's personality by his/her emails.

Also, back then, they were trying to improve productivity as a constant struggle, just as we do today. Interestingly, nothing has improved much in that field. It's still a rat race. Everybody recognizes the loss of productivity in large businesses, and there seems to be no real fix.

kabdib 17 hours ago 0 replies      
Wow, lots of memories there (1984-ish, anyway).

You'll notice a bunch of people leaving after July 84, which is when the Tramiels took over. Coin-op remained with Warner. About a week into the split, there was an email sent out with the subject "Look! Two companies joined by a single computer network!" which caused the Tramiel Atari to be dropped from the net within a couple of hours.

kevinburke 17 hours ago 1 reply      
Didn't click on any of the links. It's creepy to read other people's emails
activepeanut 15 hours ago 2 replies      
Was I wrong to expect Steve Jobs's and Steve Wozniak's names to be in there somewhere?
drudru11 16 hours ago 0 replies      
so fricking awesome
The Most Important Social Network: GitHub 7fff.com
146 points by tuke  1 day ago   68 comments top 16
cletus 21 hours ago 4 replies      
To paraphrase Leonard Nimoy from the Simpsons [1]:

The following article is true and by "true" I mean "false". It's all lies but they're entertaining lies and in the end isn't that the real truth? The answer is "no".

We geeks seem to often be susceptible to hype and hyperbole. Someone is really in love with Github and thinks it's the greatest thing ever and it's going to change the world. It's easy to get caught up in your own excitement. I get it. That's fine.

But I have to admit to having some Github fatigue. We've gone through a spate in the last year of "Github is the new resume", "Github will change engineer recruiting" and now "Github is the most important social network ever".

In many cases I don't believe the author is being deliberately "linkbaity" but that's ultimately what it is.

Part of the problem too is that you get a certain amount of "bubble thinking" in tech circles. You see this when VCs get excited about Quora, thinking it's going to be the Next Big Thing [tm] because "everyone" is using it (meaning "lots of other people in the Valley"). That's what I mean by "bubble".

I play boardgames a lot and it's much like "groupthink" there (an isolated group of players will evolve a play style and view on strategy very different from other such groups).

In all of the above cases the cure is just to get out of the bubble and expose yourself to different influences and views because the end of the road for this kind of thinking is simply stagnation and becoming out of touch.

Github is great. Their engineers are great. Source control is important. Some will be able to use it to demonstrate their work ([2] really resonates with me). All of this is true but let's not go overboard.

[1]: http://www.imdb.com/title/tt0701263/quotes?qt=qt0332688

[2]: http://news.ycombinator.com/item?id=4244420

aggronn 1 day ago 6 replies      
What's the point of claiming that GitHub is 'The Most Important Social Network'? How can anyone make that claim? I feel silly thinking in the back of my head: "But Facebook is approaching 1 billion users; how is GitHub even comparable?"

Twitter gets credit for facilitating the arab spring. Facebook has 1000X as many users. Even if we want to be 'work' specific, Yammer is used by over 200k companies for what I assume must be business purposes.

This must be hyperbole. That would make sense.

arihant 23 hours ago 2 replies      
If you were a designer, you would claim Dribbble instead of Github.

Either way, you would be delusional. Social is just a useful paradigm on the web. More and more products will use it by default, just like every site now has a search box. GitHub also has a search box. It is not the most important search engine.

SethMurphy 1 hour ago 0 replies      
I think the term "social" is being overused here in the context of a startup, and was probably stuck in the tagline, like many buzzwords, with an eye on marketing. The whole internet is social to a point. I think the term "collaborative" is more apt. The words that tended to jump out at me in the article were "collaborative" and "commentary". He says "GitHub puts the social exchange at the very center", and I disagree. At the center, in my view, I see a strong tool to manage code that enables great collaboration. When you see a getting-started tutorial for GitHub, the communication aspects are rarely mentioned. In the few projects I have seen get popular, there are usually external forces at work (i.e. Hacker News, blogs, large corporate support). It would be nice if GitHub were better at marketing your project. I suspect they may have to be better at it now, and the product will undergo many changes in the coming year. A substantial portion of that $100M will likely go to marketing efforts, not more engineers, which I think is a good thing.
BasDirks 22 hours ago 0 replies      
"Facebook! Twitter! LinkedIn! VK! Renren! These are among the most famous and largest social networking platforms in the world. But are they important? Of course they are. They've changed the way humans interact. But let me challenge you and ask: Have they changed the way we work and think? I do think they have, to some extent."

"To some extent" is comically naive. These platforms have played a considerable part in the youth of internet/earth.

Perhaps the author suggests that github is part of the next phase, which is more credible.

incongruity 22 hours ago 1 reply      
I would submit that the term "social network" has lost almost all distinctive value.

Github isn't a social network in the way that traditional online social networks have been viewed - it isn't just about "connecting" and communicating the way that friendster, facebook and twitter all are.

Instead, it's an online code repository that has social features - in other words, it's the next step in making our online selves a more effective extension of our offline selves, doing work, building things, but doing it in the context of a social group - just like we do in the offline world, more often than not.

This is an example of a niche concept becoming widespread enough that it almost becomes table stakes rather than a notable feature.

That's still pretty cool - but again, I think it means that the term "social network" is losing its distinctiveness.

unreal37 20 hours ago 1 reply      
I would argue that 10 million+ people have gotten jobs through LinkedIn [1]. Getting a job equates to at least $40,000 a year in income and a direct improvement to people's lives, so the existence of LinkedIn has contributed on the order of $400 billion a year to the world's economy. I think GitHub's real-world effect on people's lives is a tiny, tiny fraction of that.

[1] LinkedIn has 161 million user accounts; I'm assuming 5% of them have gotten jobs through the service, which could be low. Around 4 billion candidate searches a month are done there.
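The estimate above is easy to check back-of-envelope. A minimal sketch, using only the figures the comment itself assumes (none independently verified):

```python
# Back-of-envelope check of the LinkedIn estimate above.
# All inputs are the comment's own assumptions, not verified figures.
users = 161_000_000   # LinkedIn accounts cited in the comment
hire_rate = 0.05      # assumed share of users who got a job via LinkedIn
salary = 40_000       # assumed annual income per job, in dollars

jobs = users * hire_rate        # roughly 8 million jobs
annual_impact = jobs * salary   # roughly $322 billion per year

print(f"jobs: {jobs:,.0f}")
print(f"annual impact: ${annual_impact / 1e9:.0f}B")
```

Note that the 5% assumption yields about 8 million jobs, a bit under the "10 million+" claim; the ~$400 billion figure corresponds to the round numbers of 10 million jobs at $40,000 each.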

DivisibleByZero 23 hours ago 1 reply      
Reading this post I get a nice feeling of joy for belonging to the software community.

Being so entrenched in the software world it becomes easy to gloss over these details. The outside perspective of this article really shines a light on how well the software community collaborates and shares.

I can't think of a single community that even comes close to the level of collaboration we have in software.

rokhayakebe 1 day ago 0 replies      
I believe there are opportunities for many "GitHub for {insert_industry}" products.
munchor 23 hours ago 2 replies      
I don't really agree with the claim, but I'd love to see GitHub enter other areas of science like physics, maths, chemistry, etc., not just programming and computer science. That would be great; GitHub could work as a center of science all around the world.
elssar 22 hours ago 1 reply      
Well, it all comes down to what metrics you use to decide the importance of social networks, doesn't it?

If you're judging by the ability to interact with your real-life contacts online, then Facebook is probably the best social network (gah!).

If you think that meeting like-minded people and having awesome interactions with them is more important, then maybe Google+ is the most important social network.

If you agree with Jane McGonigal that games can help make the world a better place, then WoW is probably the most important social network.

But if you think that collaboration on software is more important (and Facebook and WoW are software), then GitHub is probably the most important social network.
It all depends on what you rate higher.

And putting myself in the shoes of the writer, I think the end product that comes out of a social network is the metric he's pointing to. While other social networks directly affect the lives of far more people than GitHub can ever hope to, the products coming out of GitHub are, or soon will be, affecting more people than any single social network could hope to do.

Someone out there is building the next facebook, the next WoW, the next Linux, or maybe the next Google and it's likely that GitHub will play a part in it.

Keeping that in mind, I'd say that yes, GitHub is maybe the most important social network on the internet.

azinman2 17 hours ago 0 replies      
The article focuses on blending communication and code. While it's great to build tools that blend them, we can't pretend that other channels and mechanisms won't be used; hopefully GitHub will see the importance of accommodating those rather than forcing everyone to live inside a GitHub world. Back in 2004 I did a class project called Open Sources [1] at MIT that looked at blending public mailing lists with public CVS archives (there was no GitHub at the time). Years later I see some of these ideas becoming more popular in coding tools. My take is that visualization of group history and individual contribution will become critical as a catalyst for open works: these top-down summaries provide a map and narration rather than flat, reverse-chronological, fine-grained lists, and give a broader overview of who an individual is and what they have achieved.

[1]: http://smg.media.mit.edu/projects/OpenSources

mcbaby 20 hours ago 2 replies      
While I'm sure Andreessen Horowitz feels some tech-world pride in this investment, they are still a venture capital firm. This article is a bit too sensational and self-congratulatory to really convey the importance of the investment.

I think TechCrunch nailed it in their article today, where they wrote, "Think of it as a filing system for every draft of a document." GitHub right now is limited to code sharing, but its potential is incredibly huge. Like the article stated, imagine applying it to PSDs, Word docs, Excel sheets, any document imaginable. IMO that would be THE killer enterprise product.

nerdfiles 22 hours ago 0 replies      
I believe the thrust here is that Source Code Management has given us a template for Source Content Management.
samstave 22 hours ago 1 reply      
The most important social network: GitHub ... (as long as you're a developer)

Clearly my mom has no reason to know github even exists.

cjdrake 14 hours ago 0 replies      
If Github can figure out how to serve customers who want to do version control on cat pictures, then I think the link title would have some merit.