hacker news with inline top comments · 31 Aug 2011
How Steve Jobs handles trolls (WWDC 1997) garry.posterous.com
103 points by ryannielsen  3 hours ago   32 comments top 11
stiff 2 hours ago 1 reply      
Context: http://en.wikipedia.org/wiki/OpenDoc#Cancellation

This guy is not necessarily a troll. Of course this is just speculation, but if a project he had been working on for a few years got cancelled, I could pretty well understand his frustration, even if the decision to cancel turned out to be valid from a business point of view in the end. I don't think it is fair to stick labels on people (on the "troll" or on Steve Jobs) without knowing the whole story.

tomstuart 2 hours ago 1 reply      
The "inaudible" part of the question is: "I would like, for example, for you to express in clear terms how, say, Java, in any of its incarnations, addresses the ideas embodied in OpenDoc."
redthrowaway 2 hours ago 1 reply      
I'd hardly call the guy a troll. He was a developer who had sunk time and money into developing with a technology (OpenDoc) that Apple had just killed. He wasn't polite, but he was justifiably upset and dismissing him as a troll is both inaccurate and unfair.
jmtame 37 minutes ago 0 replies      
Kind of reminds me of this story: http://techcrunch.com/2010/09/08/kno-raises-46-million-more-...

I know one of the early engineers who wrote the low-level software for that device. He was one of the more arrogant engineers I've known and basically dismissed the iPad because it didn't have enough "power." When he showed me the Kno tablet, I said "I couldn't even fit that thing in my backpack, let alone on any desk. You're never going to sell this thing to people." He insisted that power was more important.

And it turns out he was wrong because he was thinking like an engineer. Kno scrapped that idea and decided to build exclusively for the iPad. http://techcrunch.com/2011/04/08/kno-bails-hardware-30-milli.... Good on them.

spiralganglion 2 hours ago 2 replies      
There's another part of the talk (not included in the linked clip) where someone asks Steve what things they'll do differently than the rest of the industry. Steve responds that being different isn't important; what's important is being better. The two have a back and forth on this issue; it's hilarious in hindsight, given the perfectionist nature that Apple has come to embody.

And for what it's worth, the market seems to have proven Steve right. Nowadays, we can see some of Apple's competitors resorting to "different" in an attempt to gain traction. Doesn't seem to be working for them, either.

kevin_morrill 2 hours ago 3 replies      
It's helpful to watch the preface video at: http://www.youtube.com/watch?feature=player_embedded&v=u...!

This is a lesson Microsoft needs and has never really learned, neither under Gates nor Ballmer. The bizarre approach in Windows 8 that has all kinds of UI doing the same thing with no clarity around development platform sounds exactly like what Jobs talks about with people going in 18 different directions.

__david__ 1 hour ago 0 replies      
"Mistakes will be made, but that's good because it means decisions are being made."

What a great insight, it's really striking a chord with me right now.

chuinard 3 hours ago 2 replies      
This was really interesting, because lately I've been asking myself whether to improve my design ability by starting with the technology or by starting with the customer.

I will read through the App Engine docs every day or so to figure out what cool thing I can make out of the APIs provided. Maybe I should forget that and just think to myself 'what would I want to use?'.

d_r 2 hours ago 0 replies      
This is a great video. For one, it showcases SJ's confidence in deprecating technologies for the benefit of newer and better things.

It also underscores the importance of being able to translate tech "pieces" into compelling products. I'm an app developer. When reading documentation for the latest release of iOS or Lion SDKs, and seeing all of the new APIs, I feel like a kid with a brand-new box of Legos. The challenge (and art) is in combining these technologies to build something actually catchy.

tmsh 2 hours ago 0 replies      
His response is almost as great as jean patches. Man, that probably makes me sound like a troll. Oh well.

Here's to one of the greatest capitalist visionaries of our lifetime though. In jean patches no less.

Mithrandir 2 hours ago 0 replies      
Yi haskell.org
56 points by nyellin  2 hours ago   7 comments top 3
exDM69 19 minutes ago 0 replies      
It's wonderful to see the Yi editor back in business; there seem to be some new commits in the GitHub repo.

Yi has taken a wonderfully pragmatic approach to implementing a text editor. They were working on an incremental parser framework to power the editor. The framework was inspired by Parsec and aimed at parsing incomplete code while typing. Emacs has similar parsers but in Emacs they're implemented with a messy pile of emacs lisp, while the Haskell parser in Yi tries to do it with an easy to read domain specific language.

I also like how Yi has a very flexible frontend. They ship with Vim- and Emacs-like configurations to get started.

gaius 16 minutes ago 1 reply      
I really like the way Yi maps \ and -> to the appropriate extended characters for display but leaves them as ASCII in the underlying file. That said, I didn't find it stable enough for primetime yet.
anonymoushn 1 hour ago 1 reply      
That's not a very informative title. I would prefer "Yi is a text editor extensible in Haskell."
Sublime Text 2 (Build 2111) gets vi key bindings, indent guides sublimetext.com
26 points by nikuda  2 hours ago   16 comments top 8
frou_dh 35 minutes ago 0 replies      
What I love about ST2 is that although it's very data-driven in configuration (several hierarchies of plain text configuration files + a simple python API), it somehow doesn't seem overwhelming, or that I'm missing all the cool tricks. I hope the author continues to comment all the configuration files and will document 100% of the API.

The editor itself has a great fluid feel. Looks-wise, the default theme is nice, and "Soda Dark", which seems to be a community favourite, is gorgeous.

GeneralMaximus 1 hour ago 0 replies      
ST2 is brilliant. If you're used to TextMate, I encourage you to give it a whirl. I think the trial version is an "unlimited" trial. It only nags you with a dialog box once in a while.
BasDirks 32 minutes ago 0 replies      
Currently stuck on an ancient WinXP box due to my MBP frying itself, so I decided to see how ST2 runs on it. Absolutely wonderful experience at 1.3GHz/256MB RAM, and the vintage mode finally makes it a viable option for all my programming.
bguthrie 1 hour ago 0 replies      
Sublime Text is a great editor. As a sometime vi guy, this makes me super happy.
endtime 51 minutes ago 1 reply      
Thanks for adding indent guides; that's one of the must-have features of Komodo Edit for me.
jamesmoss 1 hour ago 3 replies      
Have they sorted out the rubbish file manager yet? When I open a directory I don't need an animation of the contents sliding in from the right. This slows me down. I want to switch to ST2 but there are just too many little things holding me back.
jasoncodes 1 hour ago 1 reply      
If you're a Vim user and like the idea of indent guides, check out https://github.com/nathanaelkane/vim-indent-guides
Raphael 1 hour ago 1 reply      
Needs C-[ to double for Esc.
iPhone Apps Design Mistakes: Over-Blown Visuals (2009) smashingmagazine.com
11 points by acqq  55 minutes ago   1 comment top
jamesbkel 30 minutes ago 0 replies      
Decent points, but it's from 2009.
Also, using 3D pie charts in an article about proper design... maybe not the most effective technique.
Rails 3.1 Gem Available rubygems.org
146 points by aaronbrethorst  7 hours ago   30 comments top 15
tenderlove 6 hours ago 4 replies      
Yes, I am very excited. I should have released during business hours with announcements prepared and whatnot, but I really wanted this code in people's hands. I hope that everyone enjoys this release!
nfm 6 hours ago 1 reply      
Thanks to Rails core and all the contributors for yet another killer release :)

If you're new to 3.1, the following resources will help you to get started:

Release notes:


Asset pipeline:



dmix 6 hours ago 2 replies      
I'm hesitant to give up on Jammit, but asset pipeline looks great.
eddanger 6 hours ago 0 replies      
The asset pipeline is a great evolution of this amazing framework. I'm looking forward to playing with this.
sebilasse 1 hour ago 0 replies      
Thanks for the great work. After every release I wonder what's next. Is there some sort of roadmap?
DanielKehoe 5 hours ago 0 replies      
Here's my walk-through "Read This Before Installing Rails 3.1" which helps in dodging pitfalls and potholes:


dasil003 5 hours ago 0 replies      
This time it's no red herring like the nefarious 3.0.10 release!
dkrich 4 hours ago 0 replies      
Thanks! Started learning 3.1 a few months ago and loved the asset pipeline, but had a few problems getting a particular piece to work. Will have to get back to using it soon.
CoachRufus87 5 hours ago 0 replies      
You guys rock! Thanks for all the work y'all put in.
tmeasday 2 hours ago 1 reply      
Does anyone know if/when heroku will support 3.1?
nkeating 4 hours ago 0 replies      
Great update. Asset Pipeline & CoffeeScript have quickly become indispensable.
thedjpetersen 6 hours ago 0 replies      
Alright! Good work Rails core team.
jdelsman 6 hours ago 0 replies      
So happy! Thanks guys! Upgrading now ;)
hankberg 3 hours ago 1 reply      
How come the identity map is disabled by default?
diegogomes 5 hours ago 0 replies      
Hands on. Upgrading!
Physicist cuts plane boarding time in half cnet.com
162 points by timf  9 hours ago   91 comments top 24
zeteo 8 hours ago 3 replies      
Looking at the paper, it seems the passengers were the same and boarded five times in a row, with the last two methods proving the fastest. The major experimental flaw seems to be that the passengers themselves might learn to become more efficient after concentrating on a normally rare task, and repeating it several times in a row. Also, since some of them were paid extras, they might just be getting impatient by the end, and rushing through at what would normally be an uncomfortable pace. The population sampling is also probably not representative.

I would have found the experiment more convincing if it had been used to validate the basic assumptions of the theoretical model instead (e.g. the statistical distribution of the baggage loading and seating times).

ColinWright 19 minutes ago 0 replies      
This has been going on for over 2 1/2 years now. At that time we had this item submitted:


It looks like it's the same physicist, and the same algorithm. Furthermore, HN had pretty much exactly the same discussion.

Plus ça change, plus c'est la même chose.

There's another submission from over a month ago here:


In it, it's described how ...

    American Airlines undertook a two-year study to try and
    speed up boarding. The result: The airline has recently
    rolled out a new strategy: randomized boarding.

I haven't seen any news of how that panned out.

This submission from 1300 days ago - http://news.ycombinator.com/item?id=111416 - is a paper from Arxiv, suggesting that boarding times can be cut by a factor of 4. Guess who it's by - yup, our favorite physicist again. So he's been at this for 3.5 years. There are just 5 comments on that submission.

This latest paper is here: http://arxiv.org/abs/1108.5211

That was linked to from this submission: http://news.ycombinator.com/item?id=2943615

It was also referenced in the article pointed to in this submission: http://news.ycombinator.com/item?id=2943003

All in all, a popular topic that's been going for 3.5 years from this one physicist at least.

Despite his perseverance, it hasn't been adopted on any of the flights I've been on.


So here's a list of some of the previous HN items on this topic:







cliff 8 hours ago 2 replies      
I remember seeing this before.

I think it's really cool, though in the article I read before one major block to implementing this is that you'd be splitting up group boarding (of even 2 people travelling together).

I feel like that might be a tough message to try to explain to everyone at the airport, since in general people are worried about everyone in their party making it on the plane safely and with all their stuff. Gate agents have enough worried customers as it is.

martingordon 8 hours ago 0 replies      
He should totally patent his boarding method. Who cares that it could save the airlines billions? He could rake in so much dough by licensing the method or suing airlines who use his method without a license! And if the patent is vague enough, he could probably collect on all the other inferior boarding methods too!

In all seriousness, the boarding problem only got worse once airlines started charging for bags, as people began carrying on more and more. I read somewhere that Southwest actually saves more money by offering free checked bags and speeding up boarding than it would make by charging for them.

kemayo 8 hours ago 3 replies      
Boarding by blocks starting at the front is ridiculous.

In fact, I'd be hard pressed to think of a worse way to board a plane. And yet somehow every time I fly that's how it happens. Maybe it's just that my company chooses horrible airlines.

The article mentions that assorted methods of boarding were tried, though it only goes into detail about "the Steffen method". I wonder what the difference between blocks-from-the-front and the obvious improvement of blocks-from-the-back is.
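kemayo's question can be poked at with a toy model (a sketch, not Steffen's actual simulation): a single-file aisle, one passenger per row, and a fixed stowing delay. Under these crude assumptions, strict front-to-back boarding serializes almost completely, while strict back-to-front pipelines; real block boarding sits in between because passengers within a block arrive in random order.

```python
def board_time(order, rows, stow_ticks=5):
    """Toy single-aisle model: one passenger per row, one aisle cell per row.

    Each tick, a passenger steps forward if the next aisle cell is free;
    on reaching their row they block the aisle for `stow_ticks` while
    stowing luggage, then sit down. Returns total ticks to seat everyone.
    """
    aisle = [None] * rows    # target row of the passenger in each aisle cell
    stow_left = {}           # remaining stow ticks, keyed by target row
    queue = list(order)      # boarding sequence: a permutation of row numbers
    t, seated = 0, 0
    while seated < rows:
        t += 1
        for cell in range(rows - 1, -1, -1):   # back of plane first
            target = aisle[cell]
            if target is None:
                continue
            if target == cell:                 # at their row: stow, then sit
                stow_left.setdefault(target, stow_ticks)
                stow_left[target] -= 1
                if stow_left[target] == 0:
                    aisle[cell] = None
                    seated += 1
            elif cell + 1 < rows and aisle[cell + 1] is None:
                aisle[cell + 1] = target       # step forward
                aisle[cell] = None
        if queue and aisle[0] is None:         # next passenger enters
            aisle[0] = queue.pop(0)
    return t

rows = 30
front_to_back = board_time(list(range(rows)), rows)
back_to_front = board_time(list(range(rows - 1, 0 - 1, -1)), rows)
print(front_to_back, back_to_front)
```

Even this minimal model shows the qualitative gap: with front rows first, each passenger's stowing blocks everyone behind them at the door.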

matthew-wegner 6 hours ago 1 reply      
My solution: Offer free drinks if everybody can board in <X minutes. Social pressures will do the rest (I occasionally see people help load heavy bags overhead, but I imagine this would pick up).

Not entirely joking!

rmc 1 hour ago 0 replies      
I don't think this'll work. Airlines who have the most to gain from a short boarding time (budget airlines) have another approach, where they do not assign seats. Individual passengers are motivated to board quickly because they don't want to be sat next to a big fat person, or they want to stay with their group.

Ryanair has an average turn around time (time between when the plane lands to when it takes off again) of 25 minutes. I've been passed through the gate and waiting at the door before the plane I've to travel on has landed.

frossie 7 hours ago 3 replies      
Savings from a sophisticated method to board planes faster: 110 million dollars

Getting passengers to board when (and only when) it's actually their turn: priceless

(and, I suspect, far trickier)

jgfoot 7 hours ago 1 reply      
Airlines have been doing their own research into this, apparently. American Airlines is switching to randomized boarding (one of Steffen's proposed solutions) and United is partially switching to window-middle-aisle. http://online.wsj.com/article/SB1000142405311190423340457645...
Nate75Sanders 8 hours ago 1 reply      
While we're on the subject of statistics, airplanes, and mathematically-but-not-socially-correct ways to do things, I really wish they'd have seats on the plane spaced according to a height distribution of passengers.
WalterBright 8 hours ago 2 replies      
I'd try loading people without carryons first, as the carryon stowage is what keeps blocking the aisle.
mef 8 hours ago 0 replies      
Boarding elite-status passengers first aside, it seems like this could easily be communicated with announcements of "boarding even rows", "boarding odd rows".

I skimmed the paper and didn't see any mention of how to get passengers to obey gate agent instructions, though, which would be a prerequisite of implementing an effective boarding method. Perhaps this should be re-tested by airlines randomly selecting sold out flights with identical planes to try these methods.

mgkimsal 7 hours ago 2 replies      
I flew a few weeks ago and dialogued (argued? - tried to be good natured about it!) with the boarding staff at 3 different gates about boarding processes. I suggested they try windows first, then middle seats, then aisle seats. 2 out of the 3 argued back that the way they were doing it had been 'proven' by some study some years back by... either a Finnish airline, or some studies in Arizona - I honestly can't remember which they said (but they'd both said the same place). Very odd, because it's demonstrably pretty damn slow, and often the slowness is quite visible - people with window seats having to stop and climb over someone in the aisle and middle seats, causing a backup. Agreed, it's not the only cause of backups, but in my recent 6 flights, 5 of the boarding processes were rather significantly slowed by multiple window/aisle snafus.

As much as someone wants to say "we've studied this already, and this is the best way to do it!", you'd have a hard time convincing me that any major airline knows how to make good decisions about anything.

LiveTheDream 8 hours ago 0 replies      
psychotik 7 hours ago 0 replies      
This looks good in theory, but if you have a party of 2 or more people, who are presumably seated together, then it is practically impossible to get them to board at different times as the algorithm would want to. I don't know for sure, but based on anecdotal observation I would guess that at least 50% of persons onboard travel with a co-passenger. I think that will throw the algorithm off quite significantly.
stretchwithme 8 hours ago 1 reply      
Imagine how much boarding time could be slashed if robots handled the baggage. The line stops moving when people are trying to store their bags, even if stepping aside for a moment would significantly reduce delays for others.
Bud 7 hours ago 0 replies      
What a shock to read that despite this knowledge, the airline industry continues to do things the exact same way, wasting huge amounts of time and money and fuel in the process.

This really isn't a smart industry, in many respects.

pyoung 8 hours ago 1 reply      
I am interested to know where they got the savings estimate. $110M seems like a bit much for cutting the boarding time by 10 to 20 minutes. I was under the impression that most airline delays were caused by weather and traffic, not boarding times.
teyc 6 hours ago 0 replies      
The delays I experienced were usually due to one or two passengers trying to get in their last cigarettes before boarding.

Furthermore, there is no point getting too geeky about complicated boarding sequence if passengers are going to get unhappy over it.

Better to do a Steve Jobs and keep things simple.

SonicSoul 7 hours ago 0 replies      
This method seems to assume that people will line up in perfect order so that no one blocks anyone else, but that simply isn't the case. The alternating rows that were called would stagger in random order and blocking would occur as normal.
Probably the biggest speedup would come from enforcing less carry-on, but they'd have to make checked-baggage pickup quick and easy to give people more incentive to check in... maybe even offer a service that picks up rollies right at the gate and makes them available after getting off the plane.
synacksynack 6 hours ago 0 replies      
Menkes van den Briel did some work on airplane boarding. He has really nice explanations and videos here: http://leeds-faculty.colorado.edu/vandenbr/projects/boarding...
wooswiff 8 hours ago 0 replies      
If you want to maintain preference towards first class and board from front to back, why not just use the door at the back of the plane?
herdrick 7 hours ago 0 replies      
I don't think you'd call this a Monte Carlo simulation. It's just an experiment.
arturadib 8 hours ago 2 replies      
Classic example of "cracking a nut with a sledgehammer". You don't need Monte Carlo to understand that boarding the front rows first is less efficient, or to come up with a much more efficient procedure. (And this is coming from a Monte Carlo lover.)

What you do need is business savvy to understand that rows are ordered by passenger value (first and business class first, followed by premium, platinum, frequent flyers, etc). These passengers pay a high premium to board and deplane first. Airlines are not going to lose these valuable passengers for a gain whose magnitude is uncertain at best.

Rails 3.1 Released rubyonrails.org
86 points by jonpaul  6 hours ago   1 comment top
jc123 3 hours ago 0 replies      
Open source is illegal? opensource.com
11 points by pwg  1 hour ago   3 comments top 2
Aqwis 5 minutes ago 0 replies      
It's not called the "Slovak Republic", but rather just Slovakia. The proper name of the neighbouring Czech Republic really is "Czech Republic" rather than something like "Czechia", but the same is not true for Slovakia.
userulluipeste 13 minutes ago 1 reply      
I'm Romanian and I knew nothing about it. It seems to be illegal only for governmental use, because of the lack of accountability. If you think about it, it makes sense. "WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE" doesn't sound promising or fit for critical governmental systems.
RIP Higgs Boson (with 95% confidence) scientificamerican.com
110 points by akkartik  7 hours ago   105 comments top 10
guygurari 3 hours ago 2 replies      
I'm a particle physics grad student. My god, what utter nonsense. The only reasonably accurate paragraph is this:

    "And, more importantly, the lower energy range from 114 to just under 145 billion electron volts, a region of energy that Fermilab has determined, through earlier experiments, may harbor the Higgs, has not been ruled out. But the Higgs is quickly running out of places to hide."

The region of higher Higgs mass is indeed ruled out (at 95% confidence), and currently the bound is even stronger than stated -- the Higgs mass is expected to be between 114-130 GeV if it exists.

The article's main flaw is its assumption that, because the remaining mass window is "small", it decreases our chances of finding the Higgs. This is not the case for several reasons. Most importantly, it has been known since the planning stages of the accelerator that a Higgs with such a low mass is more difficult to find, in the sense that it requires running the experiment for longer, collecting more statistics, before we can decide whether or not it exists. So it comes as no surprise that we first have conclusive results about the higher mass range. It just happens that the Higgs, if it exists, doesn't have a high mass, so we keep looking.

It is expected that in 1-2 years we will have enough statistics to either discover or rule out the Higgs in the remaining mass window.

The second mistake the article makes is in claiming that not finding the Higgs is somehow a bad thing. That it means the LHC was a waste of taxpayer money. I would say quite the opposite. If the LHC finds the Higgs and nothing else, then it will only confirm our existing model and we will learn nothing new about the world (except for the value of the Higgs mass). This is the worst possible outcome. On the other hand, not finding the Higgs would be an extremely exciting result, since it would open the way to less well-explored ideas about the origin of mass. The goal of the LHC is to teach us about the world, not stroke physicists' egos and tell us how clever our existing theories are.

indrax 7 hours ago  replies      
>to spend billions of taxpayer dollars in search of a particle that likely does not exist would have been wasteful

This is infuriating. A negative result is a successful experiment. Don't hamper efforts to fund science with the argument that science might figure out it was wrong.

Europe explored the universe where America did not.

idlewords 6 hours ago 0 replies      
That's an utterly misleading headline. The Higgs boson is not excluded in the mass range 115-145 GeV, and there won't be enough evidence for a null result (or discovery) until at least November.

More context here:


melling 7 hours ago 1 reply      
"Congress may feel that even though its 1993 decision to cancel the American alternative to CERN, the Superconducting Super Collider, was generally met with chagrin by the American physics community, it may have been the right move after all: to spend billions of taxpayer dollars in search of a particle that likely does not exist would have been wasteful."

I'm a little surprised by that comment. The SSC would have been almost 3x more powerful than the LHC. I still feel like particle physics has been set back decades.


Eliezer 2 hours ago 1 reply      
It's too early to call this one for sure, but still, I'm proudly on the record as betting against the Higgs.


quandrum 7 hours ago 2 replies      
Incorrect, exaggerated title.

They have 95% confidence that it's not in the 145-466 GeV range.

They haven't searched the 114-145 GeV range. There's still plenty of work to be done, and sensationalist headlines only serve to mis-inform.

andrewflnr 5 hours ago 1 reply      
Maybe it's irrational, but I kind of hope they don't find it. It's almost gratifying, definitely exhilarating, to know that particle physics is still a fresh frontier with plenty of territory to be discovered. Supersymmetric particles seem to be unlikely too. It seems like we're almost back at the drawing board.

So if it turns out that it doesn't exist, where do we go from here? What are the alternative theories? It's been a while since I went into particle physics at all deeply, so I don't know all the leading theories and their pros and cons.

hypersoar 4 hours ago 1 reply      
This is rather beside the point, but why must they call it the "God Particle"? According to Wikipedia, it was coined in a popsci book when the writer's editor wouldn't let him call them "goddamn particles". So now we're stuck with this hollow, cliché, uninformative phrase for eternity.
mathattack 6 hours ago 0 replies      
The should/shouldn't argument is really about free-riding on basic research. For most companies this problem means deferring basic research to government. Both directly (funding research) and indirectly (IP protection and care policy), governments free-ride on each other's research. The US isn't perfect, but generally we are the horse and not the rider.
nazgulnarsil 7 hours ago 1 reply      
I should place a long bet regarding dark matter...
Bit-squatting: DNS hijacking by cosmic rays/memory errors sophos.com
38 points by joshwa  4 hours ago   9 comments top 3
xd 54 minutes ago 0 replies      
Mithrandir 3 hours ago 2 replies      
There's an example in this article:


Also, could ECC memory for clients solve this problem?

strictfp 3 hours ago 0 replies      
Some of these problems could be mitigated by checksumming DNS entries when created.
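For context on the thread's title, here is a rough sketch of how bit-squatting candidates can be enumerated ("example.com" is just a placeholder, and DNS case-insensitivity is ignored for simplicity): flip each bit of each character and keep the results that are still valid hostname characters.

```python
import string

# Characters allowed in a (lowercase) hostname label
VALID = set(string.ascii_lowercase + string.digits + "-")

def bitsquat_variants(domain):
    """All domains one bit-flip away that are still plausible hostnames."""
    variants = set()
    for i, ch in enumerate(domain):
        if ch == ".":
            continue                      # leave the label separators alone
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in VALID:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants

print(sorted(bitsquat_variants("example.com"))[:5])
```

A bit-squatter registers these variants and waits for a memory error on some client to send traffic their way, which is why per-record checksums (as suggested above) or ECC memory would help.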
87% of the U.S. Population are uniquely identified by {DOB, gender, zip} latanyasweeney.org
249 points by jessekeys  14 hours ago   82 comments top 22
jdp23 13 hours ago 3 replies      
Here's some backstory, from the 1990s when the Massachusetts' Group Insurance Commission released "anonymized" health data:

''At the time GIC released the data, William Weld, then Governor of Massachusetts, assured the public that GIC had protected patient privacy by deleting identifiers. In response, then-graduate student Sweeney started hunting for the Governor's hospital records in the GIC data. She knew that Governor Weld resided in Cambridge, Massachusetts, a city of 54,000 residents and seven ZIP codes. For twenty dollars, she purchased the complete voter rolls from the city of Cambridge, a database containing, among other things, the name, address, ZIP code, birth date, and sex of every voter. By combining this data with the GIC records, Sweeney found Governor Weld with ease. Only six people in Cambridge shared his birth date, only three of them men, and of them, only he lived in his ZIP code. In a theatrical flourish, Dr. Sweeney sent the Governor's health records (which included diagnoses and prescriptions) to his office.''

Source: http://arstechnica.com/tech-policy/news/2009/09/your-secrets...

_delirium 10 hours ago 0 replies      
This is sort of the reverse of the "curse of dimensionality", for those interested in machine learning (http://en.wikipedia.org/wiki/Curse_of_dimensionality). From an ML perspective, as you add dimensions to a dataset, the amount of data you'd need to accurately model the data without overfitting (i.e. without memorizing specific details of the sample rather than underlying trends) grows very fast, because due to the combinatorics you end up with extremely sparse coverage of the overall possibility space even with huge data sets.

The reverse of that phenomenon is that, given a data set in a high-dimensional space (even 3 dimensions, if each dimension has more than a few bits of entropy), it will cover the dimensions very sparsely (even if it's large!), and therefore it's relatively easy to recover specific details of the sample from the aggregate statistics.

edit: Well, I was hoping this might be a new insight, but in fact there's a good 2005 paper exploring that connection in much more detail: http://www.vldb2005.org/program/paper/fri/p901-aggarwal.pdf
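The sparsity effect described above is easy to see in a toy simulation (a sketch; the 10-value attributes and sample size are arbitrary): hold the sample size fixed and add dimensions, and the fraction of records that are unique in the sample climbs toward 1.

```python
import random
from collections import Counter

def unique_fraction(n_points, buckets_per_dim, dims, seed=0):
    """Fraction of sampled records whose attribute tuple is unique."""
    rng = random.Random(seed)
    records = [tuple(rng.randrange(buckets_per_dim) for _ in range(dims))
               for _ in range(n_points)]
    counts = Counter(records)
    return sum(1 for r in records if counts[r] == 1) / n_points

# Same 10,000 records, 1 through 5 ten-valued attributes
fractions = [unique_fraction(10_000, 10, d) for d in (1, 2, 3, 4, 5)]
print([round(f, 3) for f in fractions])
```

With one or two attributes almost nobody is unique; by five attributes (10^5 possible combinations for 10^4 records) most are, which is exactly why {DOB, gender, zip} is so identifying.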

javert 10 hours ago 0 replies      
This is an excellent example of "how to write a title for an HN submission."

For the busy HNer, it's not even necessary to click on the link to get the key idea from the article.

yuvadam 11 hours ago 0 replies      
I should run that on the Israeli population census - which has been illegally leaked several times [1].

The ZIP codes in Israel are per-street, not per-city. Given Israel's population of just under 8M, I believe a very high percentage (over 95%) of people could be uniquely identified.

[1] - http://blog.y3xz.com/post/7846661044/data-mining-the-israeli...

giberson 3 hours ago 1 reply      
This is an interesting statistic for another reason besides de-anonymization: directory-listing applications. A lot of web sites have URLs that access a user's profile page by username. For popular sites, it's always a rush to register "yourname" as a username so that you can get website.com/<yourname>. Inevitably, your name gets registered by someone else and you're stuck picking a nickname or appending a randomized letter set to it.

I've been pondering a useful way to have /<yourname> in a URL so that everyone with that name can use the URL containing it without collisions. Of course, I always end up with something like website.com/a3fx/<yourname>, which is arbitrary and ugly. However, with this stat it seems we have something close to a non-colliding, pretty, meaningful addressing scheme, i.e. website.com/<dob>/<gender>/<zip>/<yourname>; sure, it's a bit long, but it provides assurance you're getting who you think you're getting.

kingkilr 13 hours ago 3 replies      
There were no other males born on October 8th, 1990, in Lakeview, Chicago. I'm a little flabbergasted, to be frank.
EwanG 8 hours ago 0 replies      
Then there are those of us who have unique enough names that you can pretty much look us up and find there's only two in the whole US (and we're related). Having been a DBA at one point, it's always fun to tell folks that "really, if you just search for my name I guarantee you'll find me faster". Heck in most cases a search in a system that separates first and last names will let me be found with just my first name. If it weren't for that lousy Welsh actor, that would work on Google also (although full name turns me up as most of the front page).

My point is that this shouldn't be THAT surprising. I suspect full name and gender would uniquely identify a fair portion of the US as well. We're not as homogenous as some societies and I think this proves it...

Zakuzaa 13 hours ago 2 replies      
I don't find that number surprising. DOB along with zip is pretty good data.

What I'd find amusing would be how much of the population is uniquely identifiable by browser+plugins+os+resolution.
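The browser+plugins+os+resolution question is essentially what the EFF's Panopticlick experiment measured. A minimal sketch of how such attributes might be collapsed into one fingerprint key — the attribute names and separator here are arbitrary choices, not any real tracker's scheme:

```python
import hashlib

def browser_fingerprint(user_agent, plugins, os_name, resolution):
    """Collapse a few browser attributes into one short identifier.

    Each attribute alone is common; the combination is often close
    to unique across a large population.
    """
    # Sort plugins so the fingerprint doesn't depend on reporting order.
    raw = "|".join([user_agent, ",".join(sorted(plugins)), os_name, resolution])
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()[:16]

fp = browser_fingerprint(
    "Mozilla/5.0 (Windows NT 6.1; rv:6.0)",
    ["Flash 10.3", "Java 1.6", "QuickTime 7.7"],
    "Windows 7",
    "1920x1080",
)
```

Changing any single attribute (say, the screen resolution) yields a completely different hash, which is what makes such fingerprints both useful for tracking and fragile across upgrades.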

mikey_p 13 hours ago 4 replies      
While I find this fascinating, I'm sure this is old news to people in the business, like the people that run the grocery/chain store savings programs.

What I'd be really interested to see is why this works, and what it tells us about the distribution of population across zip codes. I'd imagine the places where this doesn't work as well are the most densely populated zip codes, where the likelihood of duplicates on the given key increases, but I would never have guessed that the accuracy would be anywhere near 87%. (Maybe there are a lot more zip codes than I thought? Maybe they used ZIP+4?)

est 4 hours ago 0 replies      
China's Citizen ID:

Area_ID + DOB + Order Number + Checksum

For the order number: men are assigned odd numbers, women even numbers.
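For the curious, the 18-digit version of this ID scheme ends in an ISO 7064 MOD 11-2 check character computed over the first 17 digits. A sketch:

```python
def citizen_id_check(first17):
    """Check character for an 18-digit Chinese citizen ID (ISO 7064
    MOD 11-2). first17 is the area ID + DOB + order number as a
    17-digit string; returns '0'-'9' or 'X'.
    """
    # Weight for position i is 2^(17-i) mod 11: 7, 9, 10, 5, 8, 4, 2, ...
    weights = [(1 << (17 - i)) % 11 for i in range(17)]
    s = sum(int(d) * w for d, w in zip(first17, weights))
    # Map the weighted sum mod 11 to the check character.
    return "10X98765432"[s % 11]

# The widely cited sample number 110105 19491231 002 gets check 'X':
print(citizen_id_check("11010519491231002"))  # -> X
```

The check character is chosen so the full 18-character weighted sum is congruent to 1 mod 11, with 'X' standing in for the value 10.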


pyre 12 hours ago 2 replies      
If we want more anonymized datasets, maybe we should be asking for approximate age or year of birth instead. I doubt that most things that ask for DOB need anything more granular than that.
Zimahl 13 hours ago 3 replies      
Putting it to a non-scientific test: where I grew up falls within the zip code 98632, which the 2000 census puts at 47,202 people. This is probably a good average city for America: not too big, not too small. Given that there are 366 possible (year-excluded) birthdays (I'm including Feb. 29th) and only two genders, just about the best I can do for uniquely identifying people there is 1 in 64. Add in the year, and assuming an even population spread between the ages of 1 and 50, that gets you closer to 1:1. Not bad.

Zip codes with large populations are probably closer to 50% than 87%, and obviously the reverse is true as well. I wonder what population size a zip code would have to have to be really close to 100%. Just throwing some numbers in a calculator, I'd guess 15-20k people would be damn close. So 10k is probably just about a unique identifier.
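The arithmetic above is easy to check with a uniform-occupancy model. Uniform birthdays and ~50 evenly spread birth years are simplifying assumptions; note that an average occupancy near 1 per bucket still leaves many collisions, so a zip this large lands well under the 87% figure (most zip codes are much smaller, which is what pulls the overall number up):

```python
def expected_unique_fraction(population, buckets):
    """Chance a given person shares no bucket with anyone else,
    assuming people fall into buckets uniformly and independently."""
    return (1.0 - 1.0 / buckets) ** (population - 1)

pop = 47202              # zip 98632, 2000 census (from the comment above)
buckets = 366 * 2        # (month, day) x gender
print(pop / buckets)     # ~64 people per bucket: the "1 in 64" above

# Add ~50 evenly spread birth years: ~1.3 people per bucket on average,
# yet only about 28% of people end up alone in their bucket.
print(expected_unique_fraction(pop, buckets * 50))
```

The gap between "1.3 per bucket on average" and "28% unique" is the birthday-paradox effect working in reverse: near-full occupancy makes collisions the norm even when the average is close to one.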

dotcoma 2 hours ago 0 replies      
But most define themselves by asl ;-)
ditojim 8 hours ago 0 replies      
Now I'm glad I randomize my birth date on just about every form on the internet.
thisuser 13 hours ago 1 reply      
Medical and voting records generally ask for sex, not gender. It would be nice if the two concepts would stop being confused.
jeffreymcmanus 12 hours ago 2 replies      
True unique identifiers are immutable. At least two of those three things can be changed.
merraksh 7 hours ago 0 replies      
Given the density of large cities (even though they're split into many zip codes), I wonder if the non-unique 13% live in urban areas and the remaining 87% outside.
zeratul 12 hours ago 1 reply      
De-identified patient data does NOT have DOB or ZIP codes:


Sex, gender or age is allowed.

pavel_lishin 13 hours ago 1 reply      
I'd love to see someone with a large dataset containing those variables to see what % of their set overlaps on those.
thelovelyfish 8 hours ago 0 replies      
The Mark of The Beast is at hand.
marshray 13 hours ago 0 replies      
That's what makes it so paradoxical!
samstave 7 hours ago 0 replies      
JOKES ON YOU! I can change 2 of the 3!!
Lessons from Valve - How to build a designer's paradise garrettamini.com
157 points by kadjar  12 hours ago   17 comments top 5
staunch 3 hours ago 1 reply      
I wonder if they fire bad hires? Even the strictest hiring regime will result in some errors. Do they fix those?
jhermsmeyer 10 hours ago 1 reply      
Wait, if no one ever leaves Valve then why is Garrett for hire? Would be interesting to see him address that...
BenSS 9 hours ago 1 reply      
The rolling desk idea is brilliant, I do wonder how often the configurations are shifted around though.
nathansobo 11 hours ago 1 reply      
What are some other software companies that share a similar philosophy?
jshou 9 hours ago 0 replies      
Great post, and an awesome perspective on working for Valve
Show HN: PHP does Meta Programming too github.com
4 points by yuri41  33 minutes ago   discuss
The Embarrassing Naked Photos On Your Stolen Laptop May Not Belong To The Thief forbes.com
43 points by pwg  8 hours ago   38 comments top 8
tlrobinson 3 hours ago 2 replies      
Everything about this pisses me off: the woman who likely knew she was buying a stolen laptop, the cops judging the woman calling the photos "disgusting", and the "expert" witness who says "there is no reasonable expectation of privacy in communications via the Internet."


nirvana 7 hours ago 2 replies      
There's a libertarian principle called minimal force. That is to say, libertarians believe that the initiation of force is immoral, and thus use of force against the initiators is just, but only sufficient to mitigate the crime.

When someone steals your laptop, they are initiating force. In doing so, they give you the right to use methods, such as this software, to recover the laptop. But that right only extends to recovering the laptop, not to unnecessarily violate the privacy of the thief. While the thief does owe the laptop owner compensation for the crime, this is something that is determined via court, not by the victim of the crime.

So, yes, you have the right to hack into your laptop, turn on the webcam, collect evidence necessary to locate and recover the laptop, but you don't have the moral right to exploit the thief beyond that.

I remember the Defcon presentation had some privacy-obscuring bits for the thief when he took nude shots of himself in the shower. That's appropriate. Shaming the thief by showing their face is reasonable, but only if you know they aren't an accidental victim. Sharing the thief's nude photos with the police or with any other third parties, where they are not necessary for recovery, is a violation of privacy.

At least morally. Who knows whether a government in the US will hold the police accountable for any immoral actions.

Zak 6 hours ago 1 reply      
I think it's always reasonable to access data that's on your computer unless you have authorized a third party to use it with the understanding that you will respect their privacy, and to authorize a third-party to access that data on your behalf. It may not be reasonable to make that data public, depending on the content of the data and the probability that its owner is the actual thief.
ugh 4 hours ago 1 reply      
I'm always disgusted when people share those photos with the public or third parties even though doing so is clearly not necessary for recovering the stolen property. It just seems immoral to me, no matter whether you know that the thief or someone else took the photos.

I can't comment on the law but I really don't understand people who are ok with this kind of behavior. Thieves don't suddenly become fair game for any treatment just because they are thieves.

nowarninglabel 5 hours ago 4 replies      
and she's got a strong case…

Uh, no she doesn't. Buying a working laptop for $60 is easily going to constitute knowing it was being sold for less than its true value. Sorry, I don't have the link, but this whole knowingly-receiving-stolen-goods thing was explained fairly well in some HN comments a couple months back.

ams6110 6 hours ago 2 replies      
Anyone who buys a used laptop should immediately DBAN it and set it up clean from bare metal. But it's not realistic to expect everyone to know this, and it requires the OS install media, which is often not included with used machines.
sliverstorm 6 hours ago 1 reply      
They had one expert witness, a criminal law professor, go a step beyond that, attesting that “there is no reasonable expectation of privacy in communications via the Internet.” Period. The idea that any electronic communication lacks privacy protection whether on a stolen laptop or not is a scary thought and a ridiculous conclusion.

A scary thought? That's why it's wrong, because it's scary? I've grown to expect slightly more sophisticated arguments from the people I agree with.

breck 5 hours ago 1 reply      
> “We're eager to bring this case to the jury,” says [an attorney for the plaintiff]. “It's a fascinating case and I'm eager to see what a jury will think.”

This is pure B.S. right? "I'm eager to see what a jury will think."

A jury is composed of randomly selected individuals. So he's saying "I'm eager to see what a group of random people will think." Clearly doesn't make sense.

Basically what he's saying is, "A group of random people are clearly going to side with us."

Of Gravatars and Robohashes codingthewheel.com
69 points by LiveTheDream  10 hours ago   10 comments top 5
e1ven 8 hours ago 1 reply      
That is really cool! ;)

I didn't realize that this was a use-case anyone was interested in, but it sort of makes sense. I've added a parameter you can pass to RH.org to deal with the gravatar pull on my side, so you don't have to.

This adds a bit of server load, since I need to make a bunch of requests, but it's not THAT bad.


If you pass gravatar=yes, it will make a pull to the gravatar URL for that address. If something exists there that isn't the default, it will issue a 301 over to the site.

Otherwise, it will return the robohash you requested.
It also passes the size param over to Gravatar, just to be nice ;)
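Anyone who wants this fallback on their own server, without waiting on robohash.org, can use Gravatar's documented d=404 parameter, which makes Gravatar return HTTP 404 for addresses with no custom image. A sketch — the `fetch` hook exists only to make the function testable, and the robohash.org URL shape follows the site's examples:

```python
import hashlib
import urllib.request
import urllib.error

def avatar_url(email, size=80, fetch=urllib.request.urlopen):
    """Return the user's Gravatar if they have a custom image,
    otherwise a RoboHash generated from the same email hash.

    With d=404, Gravatar answers 404 when the address has no
    custom image, so a single probe request settles the question.
    """
    # Gravatar hashes the trimmed, lowercased address with MD5.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    gravatar = "https://www.gravatar.com/avatar/%s?s=%d&d=404" % (digest, size)
    try:
        fetch(gravatar)          # 2xx: a custom image exists
        return gravatar
    except urllib.error.HTTPError:
        return "https://robohash.org/%s?size=%dx%d" % (digest, size, size)
```

This spends one probe request per lookup, so in practice you'd cache the answer rather than probe Gravatar on every page view.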

unfletch 3 hours ago 1 reply      
Nice work, but the "they look like they were drawn by programmers" comment makes me think you missed the point of Gravatar's specific default avatars.

They're intentionally ugly. The theory is that an ugly default avatar makes the user more likely to upload their own image.

nirvana 1 hour ago 0 replies      
I'm curious how many variations robohash uses. It seems you either have a very wide variety of custom graphic elements, or only a few elements and have to draw a very large number of images to fit in them.

Each pair of hex digits gives 256 values, and I count 16 pairs, but I can think of only 8 elements in the photos (eyes, ears, nose, mouth, head, body, arms, background).

If it's those 8 elements and they're derived from four digits in the hash, then you need 65,535 or so different ears.

I'm guessing the solution to this is to reduce the hash further... so you get more repetition where two texts produce the same robot, but apparently they've not made it too bad.

How far did robohash reduce it? Or how far should one reduce it? How many images does a robohash art set have?

bigethan 8 hours ago 1 reply      
yikes, Gravatar has an open redirect.


It's a fun trick, but it shouldn't be possible.

erikig 6 hours ago 0 replies      
Very nice. I was planning to implement a similar thing for 'anonymous' IP-based human-readable usernames, to identify users without forcing them to create profiles.

An excellent bonus - my robohash looks exactly like Bender from Futurama :^)

VimConf vimconf.org
239 points by flippingbits  20 hours ago   46 comments top 14
rudle 19 hours ago 1 reply      
As far as topics are concerned, I'd really like to see a vim plugin created from start to finish. The tutorials and docs that exist leave a lot to be desired, and this seems like a nice venue to demonstrate (and explain!) the capabilities of the vim plugin system.
petercooper 19 hours ago 3 replies      
Separate to Vim, this concept should become more popular. Real world conferences have their place, for sure, but some topics are avoided as they probably wouldn't break even, and seeing more things like this would be great.
xbryanx 18 hours ago 5 replies      
Anyone know some Vim luminaries who'd be good "keynote" speakers/demo-ers? I'd vote for Drew Neil - http://vimcasts.org/about
davidbalbert 18 hours ago 0 replies      
I am so excited about this! Watching other people use vim has been the most valuable part of my vim education. I think this is going to be a great venue for it.

Edit: I'm also clueless when it comes to vimscript. It would be nice to see some stuff on that.

illuminated 19 hours ago 1 reply      
When I saw the URL my first thought was that it's a collaborative repo of vim's conf files, snippets, etc...
Omnipresent 18 hours ago 0 replies      
I know this is off topic but would you mind sharing how you created a special link that can be passed to earn credit?

Are there plugins/gems for something like this or was this made from scratch?

PedroCandeias 19 hours ago 0 replies      
Cool idea. Signed up. I've been using vim just for editing config files remotely and now I'd like to start exploring it a bit more.
dadro 19 hours ago 0 replies      
This event looks promising. Interacting with other Vim users really helped me get over the initial Vim learning curve. For any devs that use OSX and want to test the Vim waters, checkout Vico (http://www.vicoapp.com). It is a Textmate-esque editor with Vim bindings.
wingerlang 19 hours ago 3 replies      
Signed up. I'd like to use vim since I like the concept, but it seems so damn hard to get into. The post yesterday, "Learn Vim Progressively", was interesting though.

Not sure I know what it is though. I really like the VS debugger and programming in C# and C++ (both with VS). So, what exactly will I give up if I switch to vim?

astrofinch 13 hours ago 3 replies      
I've never understood people's obsession with vim. I used it for a few months, then timed myself doing the same tasks with vim and gedit. Gedit was faster by a significant margin.
almost 19 hours ago 0 replies      
I'm an Emacs user but I'm really curious about modal editing and composability of Vim commands. Signed up!
ferengi31337 9 hours ago 1 reply      
Does it strike anyone else as a bit funny that a promoter of vim would send HTML e-mail?
d0m 19 hours ago 0 replies      
Signed up and I spread the word :) Great initiative!
davedx 19 hours ago 0 replies      
550 lines of css! I hope he has ctags :)
Pakistan bans VPNs tribune.com.pk
147 points by molecule  16 hours ago   57 comments top 14
emilsedgh 15 hours ago 2 replies      
In Iran, in the days we had protests, they dropped all encrypted connections as well. That makes internet simply unusable. I hope this would never come to Iran, although I believe it will. Soon.
reginaldo 15 hours ago 5 replies      
The goal of the Pakistani government seems to be the complete obliteration of all private communications. But the only way to do that is by banning all communication.

With the ban on VPNs, steganographic[1] techniques that make encrypted traffic look like regular traffic will become more and more common. The troubling thing is that these techniques are somewhat hungry for bandwidth.

[1] http://en.wikipedia.org/wiki/Steganography

mhlakhani 14 hours ago 0 replies      
They're doing this under the pre-text of monitoring all internet usage so that they can 'search traffic for terrorist communication'.

At my university, students are required to browse through an authenticated proxy (which we have to sign in to using our university IDs), which logs our browsing history. This is done so that they can comply with the PTA's requirement that an ISP should be able to provide browsing history of all users for the last 45 days upon request.

Never mind that it's trivial to get around that proxy, all it actually does is mess up most stuff like Windows updates, gaming, etc.

praptak 12 hours ago 1 reply      
If Pakistani government can easily spy on the communications of their citizens, so can other organizations. Israel, India, Iran, to name a few of them.
gilgad13 15 hours ago 2 replies      
To do this, wouldn't they have to effectively block SSL and SSH connections as well? SSL is used in OpenVPN and some Cisco implementations. And we all know that you can tunnel any port over ssh.

Or is the plan that the punishment for stepping outside the lines be enough to keep people from experimenting with these technologies?

77ko 14 hours ago 0 replies      
SSL works on most ISPs in Pakistan, though anti-state and "very bad for the children" websites like the Rolling Stones are blocked. Nice, Pakistani-friendly sites like redtube or child porn remain unblocked, of course.

VPNs work too, so far. I'm on one right now. As to why: the filtration system the government is using is so brain-dead that there is basically one Juniper router and a couple of Cisco routers (last time I looked) through which the entire country's traffic is routed.

Using a VPN makes web browsing much faster, with no annoying "waiting" moments - which I presume is the routers locking up under massive load.

The day VPNs are blocked is going to be a sad day indeed. I am going to explore alternatives to VPNs. Way back in the days of super-slow dial-up, I used services which would take a link and email the page, or the entire site, to you in a zip file, depending on the command you sent.

thethimble 10 hours ago 0 replies      
This does nothing to stop people who are intent on communicating privately (SSL, SSH, public key encrypted messages, etc.) and everything to hamper internet progress in Pakistan.

Why would a tech company even consider spreading/outsourcing to Pakistan after this?

77ko 13 hours ago 0 replies      
Besides censorship, another reason is the local telephone monopoly: PTCL is trying to shut off all voice gateways into Pakistan, which are causing it to lose money and are hard to tap into, as they are routed over VPNs to a local gateway connected to a bunch of landlines or cellphones that connect the local call.

Though of course they could just tap into the local last mile...

jasonjei 14 hours ago 2 replies      
Apple obviously uses VPN in its non-Cupertino locations, presumably too with its production lines in China. I know that China allows the use of VPN for businesses with a legitimate need. Even though Pakistan was never an ideal location for doing business anyway, they've essentially banned any technical business from conducting operations in Pakistan.
mkup 2 hours ago 0 replies      
OpenVPN sessions look like SSL traffic to the eavesdropper. So there's a good reason to use OpenVPN in Pakistan. They'll have to ban SSL at the state level as well.
maeon3 11 hours ago 0 replies      
I'm not sure how banning VPNs is going to stop the terrorists. Don't they use cellphones to coordinate their strikes? You would have to stop the internet and all forms of communication to slow them down, and even then you wouldn't slow them down much. We've got to get a "right to bear encryption" next to the "right to bear arms" in the constitution/bill of rights.
sturadnidge 14 hours ago 1 reply      
I am not sure this is that much more disturbing than the British Government even thinking about restricting the use of social media apps during times of civil disobedience. And this is extremely disturbing.

How can any global company now do business in Pakistan? Surely there is some kind of back door in there.

rorrr 11 hours ago 0 replies      
So no more online banking? Everyone's accounts are in danger.
lurkinggrue 11 hours ago 1 reply      
On the plus side: It's much easier to steal credit cards there.
Non-Admin Chrome Frame Reaches Stable chromium.org
7 points by franze  2 hours ago   2 comments top
nodata 1 hour ago 1 reply      
I don't want software writable by non-admin. It's a huge security hole.
Show HN: iGeek.at, Our Yahoo HackDay India App igeek.at
16 points by prateekdayal  4 hours ago   8 comments top 5
prateekdayal 4 hours ago 0 replies      
This is the app we made for the 2011 Yahoo Hack Day India. It's basically a usesthis.com for everyone. The idea is to show off the apps you use and discover more apps.

You can basically build and browse profiles like these - http://igeek.at/deryldoucette

We have a bunch of apps right now, but we wanted to put it out on HN and see what people think. If you want to see some app added, please let us know. Also, we would appreciate any feedback. This is a side project right now, but if there is a lot of interest, we will certainly keep working on it.

rb2k_ 1 hour ago 0 replies      
Isn't this basically iusethis.com ? (random profile: http://osx.iusethis.com/user/arne)
satyajit 4 hours ago 0 replies      
about.me for the geeks. Cool.
koopajah 3 hours ago 1 reply      
This is pretty nice! But is twitter sign-in a first step or will it always stay like this?
It could be nice to add a small post explaining why we love a specific app! And along the same lines, it could be nice to name the apps we will NEVER use again!
TobbenTM 3 hours ago 2 replies      
Is this sorta like Wakoopa?
( http://social.wakoopa.com )
Design Philosophies of Developer Tools stuartsierra.com
36 points by fogus  8 hours ago   10 comments top 6
SwellJoe 5 hours ago 1 reply      
I feel like a lot of problems in software come from lack of understanding of things that were figured out by previous generations of developers. That's not to say there aren't improvements to be found in modern software projects; just that sometimes there's a lot of reinventing the wheel, badly, because folks don't understand the beautiful simplicity and power of the UNIX system.

I suspect the fact that git exhibits a deep comprehension of that history is one reason git pretty much took over the mindshare for DVCS in record time. Where it took years for Subversion to oust CVS, and numerous DVCS systems had been plodding along for years, git was the obvious leader seemingly overnight.

So many projects have vastly over-engineered interfaces and component architecture and such, with a huge variety of interdependencies, often proudly, as though it is a benefit.

substack 4 hours ago 1 reply      
Reading the section on ruby reminds me of the things that I take for granted in node.js right now owing to the long history of package management and module systems that it builds upon.

It's super nice having dependencies specified by semvers in a package.json and installed locally in a project's node_modules directory, so that libraries can't step on each other's toes.

The "Don't use Bundler version X with RVM version Y" can be specified directly in the package.json and concurrent versions of module dependencies even work without incident in the same project.

drothlis 1 hour ago 0 replies      
The stability of most unix tools' interfaces is wonderful. With the rise of tools like bash-completion, even the output of a tool's '--help' option[1] or debug output[2] needs to be stable.

[1] http://anonscm.debian.org/gitweb/?p=bash-completion/bash-com...

[2] http://david.rothlis.net/tools/case_studies/#bash_completion

PLejeck 3 hours ago 0 replies      
I would be curious to see an analysis of the Node Package Manager (npmjs.org) and the general Node package structure.

NPM is basically encapsulated into one command (npm), and there are no ways for modules to modify the way Node itself works, without the permission of other modules, plus NPM automatically resolves dependencies and ensures that things Just Work.

zachrose 5 hours ago 1 reply      
How does Rubygems modify the behavior of the Ruby interpreter?
andrewflnr 5 hours ago 1 reply      
I'm not clear on exactly what design philosophy the Ruby ecosystem is supposed to embody. I guess it's something to do with the way everything modifies and uses everything else, but it doesn't coalesce into a single idea in my mind. Maybe it's just a matter of my not having much experience with it.
If I were a single founder... foysavas.com
110 points by foysavas  14 hours ago   39 comments top 15
jazzychad 13 hours ago 7 replies      
As someone who went through YC as a single founder, then got a co-founder, then became a single founder again after a year, I've been on both sides of this fence. I've been meaning to write an article about this experience, and I guess this post has given me the motivation to actually do it. There are so many misconceptions on both sides about the other. Hopefully I can find time in the next few weeks to write it all down. Are there any questions/topics you would want to see included in such a write-up?
limedaring 13 hours ago 2 replies      
This mirrors a lot of my own feelings (being a single founder myself). I originally applied for YC for the W11 round, and I rushed to find a cofounder before applying (even wrote this for HN: http://www.limedaring.com/technical-co-founder-wanted-for-di...)

I spent a few weeks "interviewing" cofounders and working with a few, before choosing one and starting to build a product, and applied for (and scored an interview with) YC. But during that time I realized that this person was not the person I wanted to start a startup with. Thankfully we didn't get in, and we parted ways. Since then, I've decided that having a partner would be awesome, but I'm not wasting time looking for someone (especially with the very small pool of people who want to work on wedding startups)... instead I'm focusing on building as fast as possible on my own (product: http://weddinglovely.com, launched and making small bits of revenue).

I'm applying again for YC this round as a single founder, and I would encourage people not to rush out and find last minute cofounders just to apply. Spend time now finding a cofounder, then apply for S12 after a good few months of working together " and if you can't find the right cofounder, then start building yourself (especially if you're not technical).

TrevorBurnham 11 hours ago 0 replies      
This article rings very true to me. I applied to YC in Summer 2010 with two fellow grad students I picked by sending out a "founders wanted" email, then meeting each of them for lunch. We got invited to interview, but weren't accepted. So we went with another accelerator, Betaspring, and proceeded to fall apart almost immediately.

The biggest problem was that we didn't really have a template for making decisions. Up to that point, our only goal had been getting into an accelerator, which we somehow expected to solve all our problems. And I'd already decided that I'd rather lead a startup than get a PhD, so I was very focused on that goal, while they were more focused on schoolwork. So the way decisions were made before the summer was that I'd say "I did some research, and think we should do this" and they'd say "OK." It was very unilateral. Once we started working together full-time, though, that wasn't tenable, and there was a very fine line I had to walk, to provide both autonomy and a sense of direction. And we never could come to an arrangement that made everyone happy.

That's not to say that I would have been more successful as a single founder"I had pretty minimal dev experience at the time"but there's a very good reason that one of the YC application questions is "Please tell us about an interesting project, preferably outside of class or work, that two or more of you created together." If you don't have a good answer for that, my advice is to hold off on doing YC and just figure out something substantial that you can build together.

idlewords 12 hours ago 1 reply      
I'd be curious to see any actual data about relative success rates with single and multiple founders (where success is measured as 'business profitable or sold after N years' rather than 'got funded').

Like in so many things, it's best to minimize the hand-wringing and make a decision based on your personal working style, who's available to join you, and how badly you want to be funded by an entity with a strong bias against solo founders.

jayliew 9 hours ago 0 replies      
Related: If you're a single founder (I am too), I'll help you make 10 cold sales calls to get feedback. No strings attached!


rdl 13 hours ago 1 reply      
I've seen a LOT of "single founder companies" just sort of sit around and never really make any irrevocable moves. It's easy, especially if you have other income, to just keep something at the hobby stage.

The absolute best way to prevent this is to have customers demanding your product now (or by a certain date). The next best is either to be dependent upon the startup for income, or to have other people pushing (and pulling) you forward (or pushing them, at other times). Not sure which is better.

tzm 8 hours ago 0 replies      
Being a single founder has made me tougher, sharper and focused. I've been through Hell and back for the past 4 years. Finance background turned brogrammer.. to mobile strategist / development firm working with large clients.

I've been focused on credibility and positioning. Waiting for the right co-founder. Forming strategic relationships, staffing up and making key investments. Book deal pending, just acquired a software company.

Crush it...

6ren 5 hours ago 0 replies      
YC summer 2011 had 9 single founders (out of 64). http://news.ycombinator.com/item?id=2938349
Although YC doesn't favour them, it's far from impossible.

However, I'm not sure what the odds are for successful single founders. And at least some acquire co-founders (like dropbox).

dkrich 13 hours ago 0 replies      
Awesome piece. Having great cofounders is certainly ideal, but scrambling to find them unnaturally because you think you need them to satisfy a requirement is a bad idea. Most successful founders seem to find each other naturally and work together over and over. It's pretty hard to find people you genuinely like, respect, and want to work with.

In my experience the best way is to start alone and actively promote your work to others. Pretty soon people who are serious about working with you will present themselves.

dmitri1981 12 hours ago 1 reply      
> A lone founder never wastes time

I find it quite difficult to agree with this statement, based on personal experience. For me, it is quite easy to try things to see if they work, which often leads to going around in circles until a solution becomes obvious. I found that when working with other people, it is a lot easier to put ideas out there, and you usually get the weak points shown to you right away. Having said that, I am also finding that most of my dev activity on my current project involves stripping out functionality that simply turned out to be unnecessary or half-baked. I feel that having a co-founder would be a great help in avoiding some of that unnecessary work in the first place.

dirkdeman 13 hours ago 2 replies      
Forgive my ignorance, but why is having a single founder such a disadvantage? Think about it: if the founder is a jack of all trades, gets the product out, and hires specialized staff when the startup gets revenue, why would he need a cofounder?
deniz 8 hours ago 1 reply      
I keep hearing the same story of how most people meet their co-founders: "x and y met in school".

I'm a few years out of university and have moved around a lot. Most of my trusted friends nowadays are non-technical, which makes my candidate pool nearly empty. I agree completely that getting on board with someone you don't know well is a bad idea.

It seems to me that all the rules around needing a team to get funded restrict startups to the realm of college/university students, and there are probably a lot of opportunities being missed.

foysavas 14 hours ago 4 replies      
Question for anyone: if you're not a single founder, how did you meet your co-founders?
Vaismania 7 hours ago 0 replies      
I'd argue that having two business-oriented co-founders can sometimes hurt you more than being a lone founder.
matusz13 13 hours ago 0 replies      
Encouraging and well-thought-out points.
Official enrollment for Stanford's online AI class has begun ai-class.com
210 points by epenn  21 hours ago   59 comments top 22
amirmc 19 hours ago 5 replies      
If you've signed up for these classes, how about letting others know here? Perhaps we might be able to form ad-hoc groups to help each other when we're stuck?

If I've missed this suggestion in another thread, please let me know.

In the meantime, I've made a google spreadsheet so please add your details if you want to find other HN readers taking part. http://bit.ly/pLCRzg

Edit: Also found this on reddit - http://www.reddit.com/r/aiclass

Panoramix 18 hours ago 3 replies      
I don't know whether to take this course or the machine learning one. They both seem very interesting, but I only have time for one. I don't care much about robots, and was partly sold by Ng's demo of separating music from background noise. OTOH I want to learn Bayes networks and natural language processing. I'd appreciate any advice.
jetbean 19 hours ago 2 replies      
Interesting. Did anyone who subscribed receive any news via email about registration being opened? I didn't.
AlexC04 19 hours ago 3 replies      
Does anyone have a link to the other classes? I see from the spreadsheet there are DB (database?) and ML (machine learning?) classes as well.
sliverstorm 5 hours ago 1 reply      
Does anyone know why the class is only something like 8 weeks long? Stanford is on a semester system, and even in the quarter system classes are 10 weeks.
jmspring 7 hours ago 0 replies      
I've signed up for the AI class. It should be interesting to revisit the topic after taking it several years ago with Bob Levinson @ UCSC. There, a good deal of the focus was on Lisp and playing Chess.

I'm likely to sign up for the ML one as well.

brosephius 11 hours ago 0 replies      
I like that they separated it into basic/advanced. Originally I was thinking of signing up and skipping the homework when I didn't have free time for it, but now I can just do the basic track and not feel bad :)
knarf55 10 hours ago 0 replies      
I love how Stanford is doing this for the public -- especially empowering those who really can't afford a degree but want to further their education. +1 to Stanford for pushing this, and to the instructors and TAs who will be dedicating their time to make it happen.

On a side note, I'm deciding whether to take this class or the ML one. In my line of work, I do believe the ML class would be more beneficial, but the AI one seems way more interesting.

vibragiel 16 hours ago 1 reply      
Does anyone know approximately how much dedication this course would require (in hours per week)?
drieddust 14 hours ago 0 replies      
This book looks interesting, but there are no reviews on Amazon.


I was wondering if someone can comment on the suitability of this book?

swah 19 hours ago 2 replies      
Are we gonna do this? I took AI in college but it was so-so, and I was thinking about doing this again. Anyone in a similar situation?
Evgeny 10 hours ago 1 reply      
The "terms and conditions" checkbox was not enabling the "Register" button for me under IE7.

It works under Chrome though.

zachgalant 12 hours ago 0 replies      
There are a lot of Stanford CS classes available online. Here's a list of the best ones according to Stanford CS majors - http://raunk.com/list/669,682,1364,2394,2395?filter=4,5
allanchao 15 hours ago 2 replies      
I can't find any list of prerequisites for this class, though their FAQ https://www.ai-class.com/registration/faq implies that there are some. Does anyone know if this class is noob friendly? (as in someone with no CS or programming background)
chegra 19 hours ago 0 replies      
I'm in...
shazam 12 hours ago 3 replies      
I'm all for the increasing availability of free online education, but it's interesting to note I'm paying 50 grand (partially) for this...
alanmeaney 18 hours ago 0 replies      
This is a great idea. I've signed up to the DB class. Looking forward to getting started!
steve_b 19 hours ago 1 reply      
I signed up. I'm guessing that we don't have to write the exams in person. Does anyone know?
klaut 14 hours ago 0 replies      
just signed up!
mjainit 18 hours ago 1 reply      
There is a Q&A community for this class at aiqus.com
cyphersanctus 15 hours ago 0 replies      
im in :)
ansy 13 hours ago 2 replies      
Why does this require students to submit a birthday? This seems like an unnecessary disclosure of personal information.
How did academic publishers acquire these feudal powers? monbiot.com
309 points by sasvari  1 day ago   77 comments top 23
jgrahamc 1 day ago 1 reply      
Reminds me of the time I wanted to read Chadwick's 1932 paper "Possible Existence of a Neutron" in which he mentioned the discovery of the neutron.


retube 23 hours ago 4 replies      
Yes, this is a problem. Occasionally I have to resort to emailing authors directly and asking for a copy of the paper - in most cases they have been obliging.

As far as I am concerned, publicly funded research papers should (must) be freely available. If the public is funding it, then the public has a right to the fruits of this investment. And newspapers must be able to link to or reference a source when they quote or review academic literature (in fact, I think it should be law that they have to).

A very simple solution would be for authors or institutions to make copies freely available on their websites. I can only assume that they are not allowed to, due to copyright imposed by the journals.

It's ironic that the invention of the www was driven by the need for an easy way to freely distribute and share academic literature.

P.S. There's also a strong case for privately funded research to be made public too. Companies who make product claims based on privately funded research for example absolutely must make this research ("research") available for the public to review. It is notoriously hard to get pharma firms to cough up the papers which support their claims for the latest wonder drug.

blahedo 15 hours ago 0 replies      
It doesn't have to be this way, and individual fields can break away (to a greater or lesser extent). For instance:

In Natural Language Processing / Computational Linguistics, the professional society (Association for Computational Linguistics, ACL) was its own publisher, with no profit motive, and so authors for its conferences and journal never signed over copyright (merely granted permission to ACL to publish the work). For years, it was quite standard for nearly all of the authors to post PS or PDF versions of their papers on their own websites. Then ACL started accepting PDF instead of camera-ready, and just posted the PDFs themselves; and then they started scanning the back-catalogue.

The result of this is that the vast majority of all NLP/CL papers ever written (excluding only those published elsewhere, e.g. in AAAI, and a very few missing proceedings from fifty years ago) are available online, for free, in PDF, at http://aclweb.org/anthology-new/ .

This is how science should be.

impendia 22 hours ago 3 replies      
I heard an interesting argument from my advisor (a very famous mathematician). I strongly disagree with it, but it is the only argument I have heard for keeping this system in place.

His argument was the following: In many fields such as laboratory science, research is expensive; one has to apply for grants and then spend the money, and these departments have large budgets, and this all looks good to deans. If a department is going through a lot of money, then it must be prestigious, important, and doing good work.

I heard a joke once that mathematicians are the second-cheapest academics to hire because all we require is a pencil, paper, and a wastebasket. But, in fact, we require online access to all these journals, for which we have to spend a ton of money. Spending all this money makes us look good to our deans, and lends prestige and the look of importance to our department, and allows us to compete with other departments for resources.

I think it's a bunch of BS, frankly, but it's the one time I heard the existing system defended, so perhaps it's worth bringing up.

dctoedt 22 hours ago 1 reply      
One possible disrupter is the open-access model used by the Social Science Research Network, http://www.ssrn.com, which was founded in 1994 and seems to be extensively used in the legal academic community.

SSRN makes posted PDFs available for free download. The Wikipedia entry says that "In economics, and to some degree in law (especially in the field of law and economics), almost all papers are now first published as preprints on SSRN and/or on other paper distribution networks such as RePEc before being submitted to an academic journal."

Quality and prestige metrics: SSRN ranks posted papers by number of downloads, and it also compiles citation lists---if I successfully find Paper X at SSRN, I can look up which other SSRN-available papers have cited Paper X. (Sounds like a job for Google's PageRank algorithm, no?)
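The PageRank aside is apt: a citation graph has exactly the link structure PageRank was built for, with "paper A cites paper B" playing the role of a hyperlink. A minimal sketch in Python (the paper IDs and graph below are made up for illustration; 0.85 is the conventional damping factor):

```python
# Tiny citation graph: keys cite the papers in their value lists.
citations = {
    "paper_a": ["paper_b", "paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": [],  # cites nothing: a "dangling" node
}

def pagerank(graph, damping=0.85, iterations=50):
    """PageRank by plain power iteration."""
    n = len(graph)
    rank = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        new_rank = {node: (1 - damping) / n for node in graph}
        for node, outlinks in graph.items():
            if outlinks:
                # A paper's rank flows equally to everything it cites.
                share = damping * rank[node] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling node: spread its rank evenly over all papers.
                for target in graph:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

ranks = pagerank(citations)
# paper_c is cited by both other papers, so it ends up ranked highest.
```

On a real corpus you'd build the graph from SSRN's citation lists and iterate until the ranks converge rather than for a fixed count; the point is just that the same recursion that ranks web pages ranks papers.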

According to SSRN's FAQ, it's produced by an independent privately held corporation. I assume that means they're a for-profit company. I don't know how they make their money, other than that they will sell you a printed hard copy of a paper, presumably print-on-demand.

impendia 22 hours ago 3 replies      
But surely, one might think, some of the price goes to offset the expensive cost of peer review?

The author neglected to mention that peer reviewers work for free, and that the editorial boards are also made up of scholars who work for next to nothing. (edit: see reply below, this was in the article and I missed it)

It used to be that it fell to the publishers to typeset the articles, but with the advent of TeX they don't have to do that either. (in my field anyway)

Speaking as an academic, these companies do nothing for us. The sooner we agree on an alternative model which doesn't go through them, the better.

rmc 22 hours ago 1 reply      
University libraries are still mostly money-grabbers as well, but are slowly changing.

In the 1830s Ireland was mapped in great detail (for tax purposes), at roughly 1:6500. These maps would still be very helpful for OpenStreetMap, and are obviously outside copyright. A few Irish universities have them (e.g. Trinity's Map Library, a copyright deposit library for Ireland & the UK), however they charge around €50,000 for a copy of the digital scans of the full set. Other libraries are similar.

Universities really are not pro-sharing.

DanielBMarkham 17 hours ago 0 replies      
I'm buying into his thesis, but heck, I really expected to see some causality explained. The title began, after all, with "how did...."

It would be great if somebody could provide an insider's account of how the academic publishing industry maintained those margins from 2000 to 2010. Did nobody propose legislation to stop them? Were there no criminal investigations? Who are these people connected with politically? What sorts of causes do they contribute to? Just how is the status quo maintained? The guy made a point I was already predisposed to agree with, then kind of went on a rant about how bad it all was. Hey, I'm with you. I'm just not sure my useful knowledge of the issue has increased any.

tylerneylon 17 hours ago 1 reply      
The obvious negative this has on folks outside of research-level academia is a significant contribution to tuition prices, which seem to be rising at about 5% per year.

But a less obvious - and personally very painful - consequence of greedy publishers is the inability to do serious academic research outside of academia/industry. I have the ability (math PhD) and the will (have published a few papers post grad school, though it's hard to find time), but I have all but given up due to lacking access to books and articles behind these paywalls. Yes, you can find a decent chunk of articles online -- but very often there are one or two (or more!) key papers you _need_ to read to be at the front of a field, and one of those will be behind a paywall. The worst part is that I never know how truly useful an article will be before reading it, so in the few cases where I've paid, I've found that it was worthwhile only a small percentage of the time.

In short, this system essentially kills research outside of academia / industry.

RyanMcGreal 22 hours ago 2 replies      
A professor emeritus recently sent me an article to publish on my web magazine. The turnaround time was around 18 hours and he replied to express his surprise at how quickly his piece was published. In contrast, he has had an article pending at an academic journal for three years now.
jtwb 15 hours ago 0 replies      
We forget the value of curation.

Why can a boutique shop sell a $50 dress for $200? Taste. One could simply walk into that boutique, confident that 20 minutes later a cute, fashionable and well-fitting dress would be acquired.

Why can top universities charge so much for tuition? Every year, %s University generates a curated list of individuals, and many hiring processes (not to mention ad-hoc interpersonal filtering processes) emphasize individuals in that list. Like boutique shopping, this is an expensive strategy that often excludes superior talent, but is fast.

Is it worth $200,000 to have one's name on that list? Apparently.

Is it worth application fees and an iron publishing agreement to have one's paper published in Nature? Apparently.

jpallen 23 hours ago 2 replies      
> The reason is that the big publishers have rounded up the journals with the highest academic impact factors, in which publication is essential for researchers trying to secure grants and advance their careers(16). You can start reading open-access journals, but you can't stop reading the closed ones.

This is the only problem standing in the way of open access publishing. While the arXiv doesn't offer peer review and so doesn't negate the need for journals, the ecosystem would quickly adapt to open peer review. Unfortunately the implied reputation of being published in certain journals is still something that's too ingrained in academia. It's getting better slowly but it's going to take at least a generation to go away at the current rate.

forkandwait 5 hours ago 0 replies      
Academic journals are the gatekeepers of academic promotion; the prestigious old journals are the only ones that count today, and they are owned by Wiley and friends.

To make matters worse, academics are among the most tradition-bound creatures in the universe, especially when there is no clear criterion for truth (which is the case in virtually all of the social sciences and humanities). The only thing they have to calibrate against is consensus, and consensus favors institutions already in place.

mathattack 15 hours ago 1 reply      
I agree with all the points about how government-sponsored research (including all public-school research) should be free or near free.

The unanswered question is "Why is the market failing?"

A couple unmentioned ideas:

- Until recently, tuition hikes went unchallenged.

- Faculty have a vested interest in maintaining the system. (If my publishing in Journal X marks my competence, what happens if it goes away?)

- An alternate system for rating a very hard to measure topic would be needed. Counting scarce publishing, and references in scarce journals is imperfect but nothing else has beaten it.

I don't have an answer but perhaps a couple bright entrepreneurs could figure out a better equilibrium, and find a way to cross the chasm to get there. Geoffrey Moore would say pick one vertical or academic discipline.

merraksh 18 hours ago 0 replies      
Interesting article. In 2011 there is no practical hurdle to web-based publishing portals. Given that papers are already peer reviewed on a volunteer basis, the middle man and its administrative staff has a high cost and a low benefit.

The system could keep volunteer-based peer review, and establish a (perhaps private) forum-like interaction for the authors to improve their article.

Google Scholar has solved many of my article-search problems and often gives me a direct link to the PDF of the article (sometimes just a preprint). However, the problem remains for the libraries, which may well be the largest contributors to publishers, and which may find it hard to cut a subscription and tell their users to rely on Google Scholar instead.

PaulHoule 19 hours ago 0 replies      
A lot of it is that academics are the most atomized and individualistic group of people you'll find. If there's any part of society where "Après moi, le déluge" is the slogan, it's academia.

Cornell has about 18 libraries and is slowly implementing a "Fahrenheit 451" plan to eliminate them. First they eliminated the Physical Sciences Library, next the Engineering Library, and they'll eliminate most of the others, one at a time, until there's nothing left but a remote storage unit, lots of computers, and a few pretty library buildings for show. Since it's happening slowly and only affecting one community at a time, they'll avoid a general uproar.

If I blame anything, I blame the institution of tenure, which can be seen more clearly as a cause of moral decay than ever.

Workers and capitalists alike will fight to the death to protect the interests of groups they are a part of because shifts in the rules can cause their personal destruction. A man with tenure knows he can't be ruined, so he's got no reason to ever take a stand.

Estragon 19 hours ago 0 replies      
This is simply a consequence of the fiscalization of academic values, and doesn't just apply to libraries. Professors need "high-impact" papers to justify their grants so they can get more grants. The fiscal imperative to acquire grants is very strong, because grant overhead is a major revenue stream for the host institution (probably THE major stream.) It's much easier to have a high-impact paper if you publish in a famous journal, so everyone shoots for Science, then Nature, and on down the hierarchy as their field sees it. And they will eat just about any kind of shit to get published there, including having their papers locked behind a paywall. Because they are plugged into a system where getting grant money takes priority over advancing knowledge. Don't get me started on how this skews research priorities and experimental designs...
rflrob 16 hours ago 0 replies      
Given that PLoS is only 8 years old, I think it's too soon to draw any meaningful conclusions from the fact that the open access movement "has failed to displace the monopolists". I think more important is that the trends are moving in the right direction: some high profile journals (like PNAS) have an open access publishing option, and it's unusual for new journals (at least in biology) not to be open access [1][2]. We aren't where we could be 20 years into the World Wide Web, but we're getting there.

[1] http://www.hhmi.org/news/20110627.html
[2] http://www.eurekalert.org/pub_releases/2011-06/gsoa-gln06211...

roadnottaken 21 hours ago 1 reply      
This is not quite right. The NIH does, in fact, maintain a Public Access Policy:


which states that all NIH-funded research must be placed in this database ("PubMed Central") within 12 months of publication. I do not know how widely it is obeyed, but I know that the several labs I've been in regularly deposited their papers there.

Cyranix 17 hours ago 0 replies      
Articles like this make me sympathize with Aaron Swartz a little more. http://news.ycombinator.com/item?id=2813870
merraksh 19 hours ago 0 replies      
rizumu 22 hours ago 1 reply      
This is in line with the True Cost Economics Manifesto: http://www.adbusters.org/campaigns/truecosteconomics/sign
omouse 20 hours ago 0 replies      
The military-industrial complex! Done, next question?

Why would a government want to remove the middle-man when the middle-man is making enough money to lobby the government to protect them? -_-'

Elastic Load Balancer SSL Support Options aws.typepad.com
20 points by jeffbarr  6 hours ago   1 comment top
wunki 50 minutes ago 0 replies      
Wonder if this makes it easier for Heroku to offer SSL support. Currently it's either:

  - SNI, doesn't work with the Cedar stack or < IE8
  - Hostname, strips headers
  - IP, costs 100 dollars p/m

Heroku Postgres heroku.com
121 points by ryandotsmith  17 hours ago   74 comments top 17
boundlessdreamz 15 hours ago 7 replies      
If there is anyone from heroku reading, please pay some money to http://www.sequelpro.com/ developers to develop postgresql compatibility. All GUIs available on OS X for postgres are horrible
swilliams 15 hours ago 1 reply      
It would seem that their acquisition by Salesforce hasn't slowed them down at all; in fact, I'd go as far as to say they've been even more productive since it happened. I'm curious how that whole process has gone; has Salesforce provided more support/funding/resources? Or just removed distractions and allowed them to focus on their product?

Nice job Heroku.

sgrove 16 hours ago 1 reply      
This is really exciting - the team behind this really cares about developers and their experience. They see a way things could be better, and regardless of technical difficulty, say, "why not?" Forking a db? Brilliant. Offload expensive task to passive followers? Obvious (in hindsight). Instant, brain-dead db provisioning? Of course.

Personally, the only terrifying thing about this is that when Heroku releases something, it's been in the works for a good long while, and they usually have a stream of incredible releases waiting right behind it. The mind boggles at what they're going to be doing next.

But we'll be first in line for it :)

olefoo 16 hours ago 4 replies      
Just a little usability nit: is the pricing per month, per year, or lifetime?

It's not clear from the pricing page. I'm assuming it's per month, in which case it's reasonable, especially given the value-added services being offered, such as the ability to fork the database and the automated creation of read slaves.

It looks like a solid offering. It's probably not right for companies subject to HIPAA or PCI compliance requirements, and there's no information on the use of compiled extensions in the db, which may limit its utility if you want to use PostGIS or other specialist datatypes.

firemanx 15 hours ago 2 replies      
Do they still limit the usage of user defined functions and contrib modules, or did that open up? I would love to use Heroku for some lightweight data warehousing that I've got going, but it's still pretty dependent on functions/sprocs for performance reasons.

I looked for any more documentation about that, but the only official word I have from them in the past is the support ticket I filed last year stating that they don't support "additions" like user defined functions or the various contrib modules.

andypants 16 hours ago 3 replies      
This is really awesome, just a few downsides (for me):

I don't use Postgres - I hope Heroku expands this kind of service to more databases (although I don't think it's likely in the near future).

The smallest database is also pretty expensive. I wouldn't mind a cheaper plan for less resources.

Only being able to create one database of each size is just weird, especially if forking or following. Can anybody confirm that this is a limitation? (I only read about this from one of the other comments here)

rbranson 14 hours ago 1 reply      
Is this still using the EBS RAID that you guys mentioned in a blog post a while ago? If so, how do you avoid any slow I/O requests (which plague EBS) stalling all I/O to the volume?
Fluxx 13 hours ago 1 reply      
This is really awesome! A couple questions though:

* What do I do if I need to tune/configure Postgres for my workload?

* How is the performance of transferring all of the queries and responses across the internet?

Keep in mind I've never used postgres before so these may be moot points.

drewcrawford 15 hours ago 2 replies      
This is ridiculous, but I would love a REST interface to a SQL server. I do a lot of AppEngine work and although I love the datastore and write super-optimal queries, I would really love the ability to pull a SQL datasource in every once in awhile.
pbh 14 hours ago 1 reply      
Any chance of PL/python or similar?

I guess I'm probably one of like three users of PL/python, especially since it's untrusted. Worth a check, though!

mberning 10 hours ago 0 replies      
This is cool, seems to have a pretty fair price structure. I still don't understand their shared database pricing. 5mb for free, then $15 for 20gb. I need like 100 megs for 5 bucks.
joevandyk 16 hours ago 1 reply      
Although they only let you create one database per plan, you can use schemas to emulate multiple databases.
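For anyone unfamiliar with the trick: Postgres schemas are namespaces inside a single database, so each app can get its own schema and a search_path pointing at it. A rough sketch (schema and table names are made up):

```sql
-- Two "virtual databases" inside one physical database.
CREATE SCHEMA app_one;
CREATE SCHEMA app_two;

-- Identically named tables can coexist, one per schema.
CREATE TABLE app_one.users (id serial PRIMARY KEY, email text);
CREATE TABLE app_two.users (id serial PRIMARY KEY, email text);

-- Point a connection at one schema so unqualified names resolve there.
SET search_path TO app_one;
SELECT count(*) FROM users;  -- reads app_one.users
```

It isn't full isolation -- roles, extensions, and the connection limit are still shared -- but for keeping several small apps on one plan it gets close.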


hopeless 15 hours ago 1 reply      
I just realised that Heroku have a problem: when I looked at the page, and looked at the subdomain URL, I couldn't tell if this was a real offering or a fake app put up by someone else.
jvehent 16 hours ago 3 replies      
"Forget daily backups, Continuous Protection redundantly archives data to high-durability storage as it is written, ensuring that it is safe no matter what."

I'm sorry, but daily backups are not only for high availability, but also for point-in-time recovery.

What if $dev drops the user table by mistake? Do they provide backups for that?

francoisdevlin 16 hours ago 1 reply      
I think the Fugu plan is a typo -- if you're reading this, Heroku guys...
njharman 7 hours ago 1 reply      
need GIS

And I'm confused by "one database only". Does that mean databases created with CREATE DATABASE? If so, why?

laran 11 hours ago 0 replies      
This blew my mind. Amazing stuff. Heroku is absolutely my favorite platform.