hacker news with inline top comments - 1 Feb 2013
1
Racket v5.3.2 racket-lang.org
89 points by racketlang  3 hours ago   17 comments top
1
DASD 2 hours ago 5 replies      
Am I the only one who forgets Racket? Every time I visit the site and browse through the documentation, I find myself wondering why I'm not using the language. Years ago I used Chicken Scheme, but this seems more robust and the standard library is adequate. Who's using this in production? What's your experience?
2
LLVM Tutorial llvm.org
43 points by thewarrior  2 hours ago   discuss
3
Congratulations Crunchies, Winners. GitHub Wins Best Overall Startup techcrunch.com
26 points by acremades  1 hour ago   8 comments top 5
1
aviswanathan 53 minutes ago 1 reply      
Very surprised that Elon Musk was runner-up for founder. The guy is tackling some of the toughest, most important problems in the world, problems most people fear to even think about. Not to diminish Kevin Systrom's accomplishments, but I just think that the amount Musk has accomplished with his (three?) companies is tremendous.
2
tantalor 20 minutes ago 0 replies      
Instagram was bought in April. Does it really qualify as a startup?
3
laluser 34 minutes ago 1 reply      
How does Google glass win runner-up for best tech achievement? Very few people have actually seen what it is capable of and what it actually looks like.
4
keyle 34 minutes ago 0 replies      
I don't understand how this TechCrunch post was posted two hours ago and has only one comment (and a rather spammy one at that).

I haven't been there for ages, so could someone enlighten me as to what happened there?

5
snogglethorpe 38 minutes ago 0 replies      
It would have been hilarious if grindr had won best mobile app...

Oh well, next year!

4
Someone got the natural gas report 400 ms early nanex.net
431 points by HockeyPlayer  13 hours ago   239 comments top 38
1
apaprocki 11 hours ago 4 replies      
http://invezz.com/news/alternative-investments/625-uk-report...

"Veteran traders would usually wait in anticipation for the weekly report of gas-inventory figures by the U.S. Energy Information Administration released on Thursday at 10.30 AM and then dive into the busiest trading window of the week. This is no longer true as most traders are now staying out of the market due to the HFTs new strategy - sending floods of orders in an effort to trigger huge price swings just before the data gets released, also known as “banging the beehive”."

edit: Fast algo puts orders out on multiple equity exchanges and then hedges itself with a slow algo in the futures market.

2
lrm242 11 hours ago 3 replies      
The most likely explanation: no one got anything early.

Venue timestamps can often disagree by a significant amount. It is very likely that SIAC (the distributor of CQS and CTS) is simply not well synced to the reference clock used to distribute the report.

Nanex spends a lot of time doing analysis based on precision timing without providing any sort of error analysis as to how well timestamps produced from different references synchronize. It would lend credibility to their hypotheses if they either provided their own reference or provided some measure of margin of error to the timestamps they so heavily rely upon.

For example, if Nanex had stated that they measured the time of release and the time of trades themselves as they appeared in CQS as observed from their machines then they could make a much more concrete statement as to whether: (a) the report was "early" or (b) the trades were "early" relative to one another. Most likely they would observe that the report appeared "early" as measured to their system clock (or possibly some other reference), but the trades did not appear before the report. Of course, Nanex would be smart enough to account for transit latency and other what-not.

3
aleyan 6 hours ago 1 reply      
Some back of the envelope calculation for UNG (ETF in the top Nanex graph) profits for the early mover.

In the second before the announcement UNG was trading at an $18.72 high and an $18.55 low. According to some minutely data I saw, 400,000 shares were traded between 10:29:00 and 10:30:00; I believe this ties out with the first Nanex chart, where there are 490,000 shares traded, with the majority coming in the 400 ms before the 10:30:00 mark. Four seconds after the 10:30:00 report release the price had stabilized at around $18.51. I will consider this $18.51 the fair price, with all reasonably fast algos having made their post-release moves.

Let us assume that there was a single trader/algo who got the report early and executed all of the 400,000 share sells 400ms before 10:30:00, and that all other market participants only bought. Additionally assume that the average fill for these sell trades was the average of the high and the low: (18.72 + 18.55)/2 = $18.635. I believe this is fair because, looking at the first Nanex graph, the early trades are somewhat uniformly distributed between the high and the low. In a simple arb on UNG, where the trader went short 400ms before the announcement and closed the position at the fair price a few seconds after the announcement, he stands to make a profit of (18.635 - 18.51) * 400,000 = $50,000.

For a trade that lasts 5 seconds, making $50,000 is nothing to sneeze at, but it is not that much in the grand scheme of things. Additionally, other ETFs and futures were impacted, and more or less money could have been made there.

TL;DR: If one guy captured all the profit from the early UNG trade, the max he made was roughly $50,000.
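A quick sanity check of the arithmetic above, with all figures taken from the comment (a sketch in Python):

    shares = 400_000
    avg_fill = (18.72 + 18.55) / 2     # assumed average fill for the early sells: $18.635
    fair_price = 18.51                 # post-announcement stabilized price

    profit = (avg_fill - fair_price) * shares
    print(round(profit))               # 50000 -> roughly $50,000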

4
zaphar 11 hours ago 2 replies      
This has nothing to do with the main subject matter of the article but those graphs are almost unintelligible. I didn't even try to decode what they were trying to communicate.
5
smcl 13 hours ago 4 replies      
Or someone got the report a few hours early, didn't want to be seen to jump the gun (insider trading, and all that) and kicked off a new trading strategy 400ms earlier than they intended...
6
petercooper 12 hours ago 4 replies      
It is worth pointing out that the EIA Natural Gas Report comes out weekly (every Thursday at 10:30) and the market reacts within a few milliseconds.

Since it's not the report producer's job to make the report easy for investors' computers to parse, they should have some fun phrasing and presenting the information differently each time. If nothing else, it could lead to an explosion in NLP and content-parsing technology ;-)

7
niggler 13 hours ago 4 replies      
After seeing some of their posts earlier and comparing them to live data I record at the colocations, I've concluded that they have clock issues, which makes these types of anomalies appear frequently. Or they have a bad data vendor.

Interestingly enough, even the regulators don't have good (only millisecond-resolution) trade data.

8
mixedbit 12 hours ago 3 replies      
I don't think that increased trading activity supports a conclusion that the report leaked.

People may have anticipated increased trading after the report release and may have prepared algorithms to try to profit during this event. The algorithms may have started working before the release.

I'm not saying this was actually the case, but my theory is as well supported as the claim in the post.

9
madaxe 12 hours ago 1 reply      
So, that initial downwards spike at -400ms that immediately kicks back up to the halfway point?

Those can only possibly be pre-programmed strategies. From the big boys. 'Cos you can't stack 'em exchange side last I checked, if they're price dependent. Which means they have a gloriously low-latency-close-to-the-exchange-link.

Which means this news was leaked well before the event.

10
zwischenzug 12 hours ago 1 reply      
Sorry if this is a question with an obvious answer, but how is the report delivered/obtained?
11
thechut 12 hours ago 0 replies      
Can someone explain what this means in practice? As in, how much money did these people make, or stand to make, by being 400ms early?
12
unreal37 13 hours ago 0 replies      
This shows that not all investors acted on the information at once. But doesn't show that the report got out early.

To the point of "private investors", if you are expecting to be able to see a news release the second it drops, spend 10 seconds reading it, make quick predictions on stock movements, and then buy/sell equities based on that... computers have had you beat for a long time on that game. Find another or automate that.

13
fnordfnordfnord 1 hour ago 0 replies      
For those who are skeptical of Nanex, or want some background on some of the problems with HFT, here is a Dec 2012 analysis by Credit Suisse: "HFT Measurement, Detection and Response".

http://www.scribd.com/doc/116761218/CS-HFT-DETECTION

14
danielweber 12 hours ago 4 replies      
Every sell is someone else's buy, and every buy is someone else's sell.

If you saw a bunch of activity happening milliseconds before it should, why would you be the other party to someone you suspect is committing fraud? You will be the primary victim of the fraud.

15
parfe 12 hours ago 3 replies      
Why would trade relevant data be released while trading is open?

Seems like it could screw tons of people with open orders who can't react within seconds of new information.

16
homosaur 13 hours ago 5 replies      
If we can't solve this problem, with the rise of machine driven microtrading, is there really any reason to place any faith in the stock market as a private investor?
17
benpbenp 9 hours ago 4 replies      
A fun thought experiment. Suppose someone invents a time machine that gives the correct price of all securities at all future points in time.

1) Would it be against current rules to trade on this information?

2) If someone did use this machine surreptitiously to their own gain, how quickly would they approach owning 100% of everything?

3) If the entire data set of future prices were made publicly available, what would happen to markets? I mean, exactly, what would happen to stock prices?

18
jmix 12 hours ago 4 replies      
Since it takes a while to digest the report after having seen it, chances are that they were in possession of the report far earlier than T-400ms but waited until they were in a time window where they knew the regulators would not come after them.

This is how fortunes are made. By taking advantage of loopholes in the regulatory mechanism.

19
geuis 2 hours ago 0 replies      
Would putting a minimum time barrier on HFT systems help? Basically: you can't make any trades closer together than 100ms. You still get the benefit of fast HFT systems, but not the insane microsecond stuff going on now.
20
eksith 3 hours ago 0 replies      
Delay every trade by 10 seconds. Not only does it prevent spikes like this, it also ensures that software doing thousands of transactions per second is slowed down enough not to cause stupid crashes: http://en.wikipedia.org/wiki/2010_Flash_Crash
21
achompas 12 hours ago 0 replies      
Is there any way to subscribe to these Nanex posts? Couldn't find an RSS feed, but I'd like to slowly dip into HFT, and they seem like a good source of trading news.
22
vincefutr23 7 hours ago 0 replies      
"There is the old story about the market craze in sardine trading when the sardines disappeared from their traditional waters in Monterey, California. The commodity traders bid them up and the price of a can of sardines soared. One day a buyer decided to treat himself to an expensive meal and actually opened a can and started eating. He immediately became ill and told the seller the sardines were no good. The seller said, "You don't understand. These are not eating sardines, they are trading sardines."
23
kjackson2012 6 hours ago 0 replies      
If the report is accessed via HTTP, I wouldn't be surprised if the clocks on the government server are off by a few ms, with all the HFTs pounding the URL trying to be first to the data. The first one who got it made their trades before everyone else.
24
Zenst 12 hours ago 0 replies      
Reminds me of the film Trading Places and the orange juice crop reports.

A pico- or nanosecond is an advantage in this digital age; it is the fairness of that advantage, be it routing or some more sinister underlying aspect, that is the real issue here.

25
mildavw 12 hours ago 2 replies      
To make shenanigans more obvious, what if 1 minute were the maximum resolution at which any trade could happen? Say every order gets a random number of seconds between 0 and 60 added to it before it is executed. Or even longer: what would happen if everyone got 10 minutes to digest any news?
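A minimal sketch of that proposal in Python (all names hypothetical): each order is queued with a uniformly random delay and only executes once the delay has elapsed, so sub-second speed advantages wash out.

    import heapq, itertools, random, time

    pending = []              # min-heap of (execute_at, seq, order)
    _seq = itertools.count()  # tie-breaker so orders never compare directly

    def submit(order, max_delay=60.0):
        execute_at = time.time() + random.uniform(0.0, max_delay)
        heapq.heappush(pending, (execute_at, next(_seq), order))

    def tick(execute):
        # called periodically by the exchange; runs every order whose delay is up
        now = time.time()
        while pending and pending[0][0] <= now:
            _, _, order = heapq.heappop(pending)
            execute(order)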
26
CarlTheAwesome 12 hours ago 0 replies      
Maybe the news was published 400ms or even a little earlier; human beings cannot notice such a small difference. Those who traded early were those who used AI systems to watch and analyze the report and trade accordingly, and 400ms is long enough for big expensive machines to accomplish that task. To add even more conspiracy to the story, you can imagine someone was paid to publish the report about 400ms earlier than scheduled by those who use computers to trade; not very many people would notice, or, as the post said, the feds don't even think this matters.
27
lefinita 2 hours ago 0 replies      
They really should learn some data visualization to make these charts more readable and beautiful.
28
camkego 4 hours ago 0 replies      
Those are some heavy graphs, but I wonder what the spread was before the release of the news?
Traders buying early on the news might be dealing with a big spread, as everybody is expecting the news to come out.
29
undrcvr 1 hour ago 0 replies      
if the market was architected to be fair, all data releases would happen at midnight london time. it just isn't and will never be...
30
dfc 12 hours ago 0 replies      
I always get a kick out of reading these Nanex reports. Reading a new report usually means I spend 30 or 40 minutes filling in the (giant) gaps in my knowledge. Are there any other organizations that put out reports of similar quality?
31
SanjayUttam 12 hours ago 2 replies      
Off-topic; what was used to generate those graphs?
32
pbhjpbhj 11 hours ago 4 replies      
What is the benefit of trading at ms resolutions? What problem is it solving?

Wouldn't trade be more efficient if it were lock-stepped - say, one trade per hour (per day?): you agree your trade and the exchange processes it on the hour.

What would be lost, that benefits the pseudo-capitalism of these systems, by having such a regime? How would this negatively impact production?

33
unclebucknasty 3 hours ago 0 replies      
There is a lot of philosophizing over whether HFT harms or helps the market, etc. Much of the pro camp centers around liquidity, but as someone else mentioned, much of that liquidity is absorbed by offsetting HFT.

Rather than get lost in all of the gnarly details, however, I think it is easier to simply look at the purpose of the market and ask whether HFT serves or harms that purpose. IMO, it is pretty clear that it represents a hijacking of the market's true purpose and functioning in the service of that purpose.

For example, is it helpful in setting a price which reflects true supply and demand that we have algorithms designed specifically to manipulate the pricing mechanism by creating artificial supply and demand? These algorithms place phony orders, never intended for execution, but merely to trigger a move from the other side. How can that possibly be helpful to such a fundamental market mechanism as pricing?

HFT uses the market for an entirely different purpose. Anyone who defends it must acknowledge this point and argue that the purpose is good if they wish to defend HFT honestly. Otherwise, to couch pro HFT arguments in terms of it being supportive of the market's true purpose and functioning is to mislead.

34
Raz0rblade 8 hours ago 0 replies      
BTW, 400ms rules out that someone merely had a closer internet connection: in 400ms, light travels 119,917 kilometers.

Someone sure knew something earlier. But those high-frequency traders don't go by names, and so can never be brought to normal justice.
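The distance figure checks out (and since signals in fiber travel at only about two-thirds of c, the real bound is even tighter). A one-line check in Python:

    c = 299_792.458      # speed of light in vacuum, km/s
    print(c * 0.400)     # ~119,917 km in 400 ms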

35
fideloper 11 hours ago 1 reply      
I really hate that I can't find articles when I need to. However, I have read that investment companies build datacenters as close to internet hubs as possible, to take advantage of the extra milliseconds gained.

If I recall correctly, this is notable in Manhattan.

36
iso-8859-1 12 hours ago 2 replies      
What would happen if that crucial number was published as a riddle instead?
37
arjn 12 hours ago 1 reply      
So is this evidence of insider trading? Why would they not prosecute, or at least investigate?
38
joering2 12 hours ago 2 replies      
Any idea what kind of profit we're talking about here? Perhaps this is just not worth it for the Federales to pursue...
5
Kim Dotcom puts up €10,000 bounty for first person to break Mega's security thenextweb.com
29 points by skeletonjelly  2 hours ago   26 comments top 10
1
incision 2 hours ago 1 reply      
It could be a standing offer with a hole found every other month, and they'd still be paying far less than the going rate for a quality, full-time security consultant.

Better yet, they get to indirectly watch, learn from, and adapt to the bounty hunters' work, and only pay out if someone stays fully ahead of them.

2
tptacek 43 minutes ago 1 reply      
http://www.schneier.com/crypto-gram-9902.html#snakeoil

See: Warning sign #9.

Read the whole thing, of course. Holding a contest doesn't make it snake oil...

3
enoch_r 1 hour ago 1 reply      
Why does Mega's security matter to Mega?

It seems like they've implemented encryption to provide plausible deniability that they're knowingly hosting pirated content, not to actually protect the privacy of users, the vast majority of whom will not be uploading confidential data.

4
nwh 1 hour ago 0 replies      
If anything, it'll be an attack against their entropy gathering (mouse movements) or a mis-implemented encryption spec. Given the number of XSS holes in the site, I bet someone will be claiming that bounty soon.
5
Attocs 10 minutes ago 0 replies      
does this have anything to do with me getting a 403 from http://kim.com/mega/ ??
6
ck2 2 hours ago 4 replies      
Isn't he a billionaire? Have you seen the photos of his home?

He might want to make the reward slightly higher than what the black market will pay out.

7
mike_herrera 2 hours ago 1 reply      
> It's been seven busy days for us since MEGA went live. As millions of users were hitting 50,000 freshly written and barely tested lines of code and dozens of newly installed servers, teething troubles were inevitable. -- Mega blog entry #4

If I were a gambler, my money would be on an infrastructure/man-in-the-middle or social-engineering attack vector. The people are likely stressed and the code is admittedly troublesome.

Godspeed.

8
sschueller 1 hour ago 0 replies      
He should have the reward go up every month it is not claimed.
9
nu2ycombinator 1 hour ago 0 replies      
mega-search.me already looks like it's having problems.
10
ForFreedom 22 minutes ago 0 replies      
only 10K?
6
C++ Grandmaster Certification cppgm.org
50 points by int3  5 hours ago   66 comments top 20
1
jevinskie 4 hours ago 3 replies      
This seems like an insane undertaking for one person. I could understand it with C - that would be difficult enough. But a compliant C++11 compiler? It has taken MSVC, GCC, and clang years to get just partial C++11 compliance! None of them are fully compliant in their current state. How is this course supposed to be tractable!?
2
adamnemecek 3 hours ago 2 replies      
Okay I signed up to see what it's all about. But I can't get rid of the feeling that this is a joke of some sort.

1.) There is not a single name of anyone involved in this endeavor.

2.) The sign-up confirmation is a simple alert box? It seems like an XHR request does go out, but no email is sent to the email address provided. Also, they don't even check for email uniqueness. That seems somewhat...strange.

If it is a joke, I will be pretty sad :-(.

3
jasonzemos 3 hours ago 0 replies      
Considering it normally takes 10 years to develop a reasonable standard library alone -- and that's about 10% of what's being asked here -- this is clearly a joke. But it's a humbling and enjoyable trolling for sure. It would be interesting to develop a very limited amount of material for each layer of the stack being addressed here, though; that would give some valuable experience.
4
jetti 3 hours ago 1 reply      
I signed up and am excited. It is definitely something that will be challenging, but the fact that it is free means that I have nothing to lose and everything to gain. (One would say that time is something I can lose, but this is a better use of my time than Diablo 3). The only hesitation I have is that this is just yet another joke that I'm too naive to recognize. We shall see!
5
thomasbk 4 hours ago 1 reply      
I think something like this might be a very cool project for those of us who have the time. You get intimate knowledge of C++, compilers, assembly and other close-to-the-metal aspects of programming. I wonder how much time it is going to take; the very simple compiler I had to write in college, using yacc/Bison, supplied boilerplate and a trivial output format, already took several days of full-time work. Full C++ compliance seems like a _huge_ project.

The website is missing some information however: who's behind it? It says:

  The CPPGM Foundation was formed by a software company that
recognized the value to programmer productivity that a good
knowledge of language mechanics had to new developers to
their team. The C++ Grandmaster Certification began
development as an internal training program, and the
foundation was founded to offer it publicly.

but fails to mention what that company is. Additionally, the domain is registered anonymously...

Edit:
After some digging, it seems that the only information available on the people behind this is the fact that they emailed the press release to the comp.lang.c++ newsgroup from a residential IP-address in Switzerland. (NNTP-Posting-Host header)

6
idupree 3 hours ago 1 reply      
I was tempted until "We ask you to agree, when you start the course, to not release your toolchain's source code anywhere but the cppgm site." Interesting work I do not-for-pay, I release as FOSS on GitHub. They're worried about plagiarism (which is fair); they should use anti-plagiarism tools to check whether submitted code matches previous submissions or any known C++ library and compiler code in the wild.
7
wglb 3 hours ago 1 reply      
So this seems like a pretty enormous task. Having written a couple of compiler code generation phases, I must admit that this challenge is very attractive.

However, there is something a little off about this proposal. First, the size of this effort is really quite substantial, even neglecting optimization. Secondly, the phrase "The C++ Grandmaster Certification began development as an internal training program, and the foundation was founded to offer it publicly" suggests some compiler-writing company heavily involved in the C++ space. How many of those are there, really? I mean, it has been 20 years since anyone made any money producing C++ compilers. All for-profit companies do it as a side effect. VC++, for example, in the 90s had 50 people working on just the compiler itself, not counting the Visual part.

So it presents a secondary challenge: 1) is this a real company? 2) what really is the end goal?

Edit: Also, the bootstrapping question is not well addressed. What do we have to start with? Regular C? Can I do the first phase in Lisp or Arc or Factor?

Am I allowed to look at other source, like that of g++ or clang or llvm or objective C?

Finally, the apparent copyright terms seem at least unacceptable, if not downright goofy.

8
cliffbean 3 hours ago 0 replies      
I borrowed a copy of the Dragon Book from a friend and I'm starting to go through some of the parsing exercises as a refresher. I think the undergraduate compiler course I took used MIPS, so this might be a little different, but I always did like AMD. C++ is a big language, but as the FAQ says, we won't be writing an optimizer, and hey, we're not even writing an OS or designing a CPU, so we're only doing a tiny fraction of a complete implementation.

I went to a C++ standards committee meeting once. They were having this fascinatingly complex discussion about temporary object lifetimes; it was so amazing how everyone there understood C++ so thoroughly. I'm hoping that taking this course will give me a better appreciation for their art.

9
jabits 3 hours ago 0 replies      
Subtle. A lot of effort just to prove that there will never be a "C++ Grandmaster".
10
dinkumthinkum 3 hours ago 2 replies      
Neat, but I will say this: I think the course is misnamed. This seems to be less a C++ master class and more a course on compilers. I would expect a C++ master class to be about using the language rather than compiling it. But that doesn't mean the course isn't good; I haven't seen the content. :)

- For those criticizing the amount of work ... I tend to agree with you all. However, this is addressed in the FAQ; take that for what it's worth. Still, it's free, so even if you fail at building a fully working compiler, you could learn a lot. I say good for them!

11
rorrr 3 hours ago 0 replies      
That's either a troll, or it will take each person decades.
12
cmccabe 1 hour ago 0 replies      
OK, I forked LLVM and libstdc++ on github, and compiled them.

In the process I learned about the joys of code reuse, surely what a true grandmaster would use.

When can I expect my certificate in the mail?

13
daurnimator 3 hours ago 3 replies      
off on a tangent: does anyone know a good resource/book for learning C++ from a C background? I've loathed C++ every time I've gone to use it, I'm sure there's something good in there....

i.e. is there a good book "C++ for C programmers who hate the thought of it"

14
DannyBee 2 hours ago 0 replies      
Gonna go with "recruiter phishing schemes" for $1000, Alex
15
gills 3 hours ago 0 replies      
I've got to agree with some of the comments here that this is insane. Sounds fun, but insane. There are probably 3 or 4 programming-intensive undergrad courses wound together here, with a more difficult substrate than your average compilers course.

Then I remember that this is the Internet, and if it's 'free' you're usually not looking at the product; but you can see it in the mirror.

16
gelisam 2 hours ago 0 replies      
"You will also earn the title Certified C++ Grandmaster [CPPGM]."

A certificate or title isn't worth anything unless the organization that issues it has the authority to do so. And if you can truthfully claim to have written a fully-compliant C++11 compiler all by yourself... you're already so badass that throwing in an extra title or certificate won't make a difference.

17
mikegirouard 3 hours ago 0 replies      
I'm so not a candidate for a class like this, but damn I wish I could take it.
18
jjm 4 hours ago 1 reply      
Side note, I forgot that CPTTv2 was $101 (for the Kindle version no less).
19
virtualwhys 3 hours ago 1 reply      
"Isn't this a huge undertaking, usually done by an entire team of programmers?" ... 'This is a "Grandmaster" level programming course for world-class senior software engineers.'

What? One of the prerequisites is 2+ years experience with C++ (or similar language), and on completion one shall demonstrate, "a complete, exhaustive knowledge of the C++ language and C++ standard library".

OK, world-class senior software engineers and a baseline of 2 years' experience generally do not go together.

Perhaps the author meant having had at least 2 years of C++ experience at some point in one's career.

20
crowhack 3 hours ago 0 replies      
This looks awesome! Now to read about compilers....
7
Limiting passwords to 12 characters is "secure enough" stardock.com
52 points by Jayschwa  4 hours ago   41 comments top 13
1
kijin 1 hour ago 0 replies      
If a 12-char password is "secure enough" today, then a 16-char password is obviously even more secure and future-proof. Not to mention a 30-char password, or a password that contains more special characters than what your dumb webapp allows.

Not to mention that a 30-char purely alphabetic passphrase such as xkcd.com/936 is so much easier to remember (i.e. less likely to be written on a post-it note) and type into today's mobile devices (i.e. more likely to log out properly, because it's easier to log in the next time) than a 12-char password with two numbers and one symbol in it. What I'm trying to say is that there is more to password strength than the time it takes for a botnet to brute-force it. Humans are the weakest link in most security systems. If you want strong security, you need to design your system so that humans can interact with it without too much mental strain.
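A rough entropy comparison behind that claim, assuming independently chosen characters (real passphrases carry less entropy per character, but the scale of the gap holds):

    import math

    print(30 * math.log2(26))   # ~141 bits: 30 chars, lowercase letters only
    print(12 * math.log2(95))   # ~79 bits: 12 chars, full printable ASCII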

Nowadays, anything less than [\x20-\xFE]{8,64} is just a lame excuse for storing passwords in a plain-text VARCHAR field without proper escaping. Therefore, if your password policy is any more restrictive than [\x20-\xFE]{8,64}, I'm going to assume that you store my password in a plain-text VARCHAR field without proper escaping.
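A sketch of the permissive policy described above, in Python: accept any 8-64 characters in the \x20-\xFE range, and reject nothing else.

    import re

    POLICY = re.compile(r"[\x20-\xFE]{8,64}")

    def acceptable(password: str) -> bool:
        return POLICY.fullmatch(password) is not None

    print(acceptable("correct horse battery staple"))  # True
    print(acceptable("short"))                         # False: fewer than 8 chars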

2
akg_67 5 minutes ago 1 reply      
Reading the comments in this thread has been very enlightening. I am wondering if there are best practices or guidelines for password storage for web service operators.

I currently manage a web service that has about 1,000 registered users. I have taken the most restrictive path to storing passwords in the database, except that I need to make sure the user/password database is portable from one host to another. Reading the comments, I am getting the impression that such portability may not be a good thing. But then how can I migrate from one host to another, or restore backups?

Are there documented good practices for storing passwords in a database that also allow portability?

3
jimrandomh 2 hours ago 3 replies      
As I see it, character limits aren't so much about security, as just a dumb way to be hostile to the user. All of my passwords are site-specific unique passwords generated by a password manager. I don't care if you store plain-text passwords, because if someone steals passwords out of your database then they already have all the access that my password to your site would've given.

But if a site rejects the password that my password manager generated (16 chars, [a-zA-Z0-9]), then I have to work around it and make a password manually, and it's generally a pain in the ass that shouldn't be necessary. And since I'm doing it right and these sites are doing it wrong, I'm not inclined to be forgiving.

4
omni 2 hours ago 1 reply      
Ah, yes, there's nothing quite like a condescending representative entirely out of his depth telling you to "do the maths" to show your customers that you really care about their security and privacy. I wish you good luck in getting them to listen to you.
5
tlrobinson 1 hour ago 0 replies      
When I complained to my bank about their 12 character limit they told me...

"12 characters is already hard enough to remember."

Sigh.

6
sp332 1 hour ago 0 replies      
Even if they were brute-forcing, a new GPU cluster can do 350 billion guesses per second. http://arstechnica.com/security/2012/12/25-gpu-cluster-crack... That means an average of 78 days to crack an individual password, even with no heuristics about which passwords are more likely.
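The 78-day average is consistent with a 12-character password drawn from a 36-symbol alphabet (lowercase plus digits); that charset is an assumption on my part, since the comment doesn't state one:

    rate = 350e9                         # guesses per second, per the comment
    keyspace = 36 ** 12                  # assumed: lowercase + digits, 12 chars
    print(keyspace / rate / 2 / 86_400)  # ~78 days: half the space on average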
7
eksith 1 hour ago 4 replies      
Get a whole heap of passwords from random.org. Create a text file listing the sites you use with their usernames/passwords. PGP-encrypt the whole ensemble with a good strong password; that's the only one you really need to remember.

Forget your password? Once you reset via email, as soon as you get access to that encrypted file, get a new random password and reset it again. Save the new password in the encrypted file.

Password managers often connect to the internet to retrieve your passwords, so if you lose your access, you're SOL. I also wouldn't trust a browser plugin, as that may be prone to compromise. Back up to your laptop or something if you need to take it with you, but keep it encrypted until needed.

Forget the password to the encrypted text file? Throw your life away and start a new identity.

Edit:
Pointed out below (and I agree wholeheartedly): use /dev/urandom (or /dev/arandom) instead.

And idupree's points are spot on.
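A sketch of generating those passwords locally from the OS entropy source, per the edit above, instead of fetching them from random.org (shown with Python's secrets module, a modern stand-in for reading /dev/urandom directly):

    import secrets, string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def new_password(length=24):
        # secrets draws from the OS CSPRNG (/dev/urandom on Unix)
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(new_password())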

8
yarianluis 1 hour ago 2 replies      
This is far from the worst offender.

Banks are typically the worst. All sorts of gimmicky password requirements. 8-12 characters. Must have one capital letter. Must have one number. No special symbols.

So "I can't believe it's not butter!" won't work, yet that would probably be a pretty secure password, and be entirely rememberable. In fact I could come up with a silly pun-filled sentence for each site I visit and make passwords fun again.

9
ck2 2 hours ago 1 reply      
In theory it is secure enough - you should not be allowing a password attempt every second on an account, nor unlimited attempts per day per account.

But of course we should be using pass-sentences by now.

10
rorrr 1 hour ago 2 replies      
Technically, if they are using bcrypt hashes with a high enough work factor, and salt them with something like the UserID, then 12 characters is pretty damn secure even if their whole DB gets leaked.

Of course, there's no good reason to limit passwords to any length.
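A sketch of the scheme described above, using the common Python bcrypt package. Note that bcrypt generates and embeds its own random salt; folding in a per-user value such as the UserID, as the comment suggests, is shown here purely as an illustration:

    import bcrypt

    def hash_password(password: str, user_id: str) -> bytes:
        data = (user_id + ":" + password).encode()
        return bcrypt.hashpw(data, bcrypt.gensalt(rounds=12))  # work factor 12

    def check_password(password: str, user_id: str, stored: bytes) -> bool:
        return bcrypt.checkpw((user_id + ":" + password).encode(), stored)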

11
Sami_Lehtinen 1 hour ago 2 replies      
I blogged about this a week ago. Some funny stuff for a change: I told my colleagues that we receive at least 5000 "hack attempts", aka failed logins, daily on any of our public-Internet-facing servers. One of my colleagues just said to me: "Well, you're having such a password policy that maybe those are actually failed login attempts and not hack attempts at all." - It really got me laughing. Yes, passwords, especially long, complex and random ones, are painful for users. Here's the password of the day (opening and closing quotes aren't included in the password): "^j'lb#K-€3,<_úgWJdXå(n_6=41Bµ%cj!" Btw, good luck guessing the password or finding it out using SHA-1 hashes or so. I know it's possible, it just might take a while. ;) P.S. This password still has less than 256 bits of entropy. I never really intended anyone to actually remember these passwords. I personally consider a password a "shared secret", which is just a blob of random bits.
12
atsaloli 3 hours ago 0 replies      
Bill Cheswick, of the "Firewalls and Internet Security; Repelling the Wily Hacker" fame, gives a great talk called "Rethinking Passwords" calling for a better solution:

http://www.youtube.com/watch?v=KRVRlhrLKkI [video, 22 min]
http://web.cheswick.com/ches/talks/rethink.pdf [slides]

13
stopcyring 53 minutes ago 0 replies      
Any user input needs to be filtered, sanitized, validated and limited.
Please, be my guest and pass raw user input to your magic hashing function; just don't cry about it later when, due to some special circumstance / framework bug / language bug / buffer overflow / extra hidden UTF char, your magic function opens a huge security hole. Oh, oops.
9
School of Haskell Goes Beta fpcomplete.com
136 points by dons  9 hours ago   38 comments top 9
1
Ixiaus 7 hours ago 4 replies      
Any Haskell web-application framework is awesome - Yesod in particular, as it has the most active community and the most written material.

I've lately been working on an (unpublished) project using Haskell+Yesod+Fay+Clay and it's remarkable how productive I am, in that most of my time is spent figuring out my types, writing the code, then...it's done? In Python I run into a lot of programmer-related bugs that Haskell's rigid type system prevents. It can't prevent logic/flow "bugs", but it does keep out a lot of other bugs that would normally have me spending hours fixing/debugging.

It's also not just the type safety that increases my productivity, it's also...Haskell. Abstraction in Haskell can make for very concise and correct programs.

Fay and Clay are particularly awesome too.

Haskell definitely has a learning curve; it was a steep one for me, and I already had significant functional programming experience (Erlang & Scheme). I'm really happy to see these guys taking that on!

2
skwosh 7 hours ago 3 replies      
Hope this goes into a lot of depth...

My main issue with existing books and tutorials is that I'm left in the dark about the more interesting/advanced parts of Haskell.

Some of the areas I would like to have a better understanding:

- More advanced kinds of monads e.g. Logic, Continuation

- Monad Transformers

- Arrows

- Rank-N types

- GADTs

- Category theoretical ideas e.g. Bananas and Lenses

- Type derivatives and TypeClass abuse c.f. Conor McBride

- Control patterns like Iteratees, Generic Zippers c.f. Oleg Kiselyov, Chung-Chieh Shan

I have an intuitive grasp of the above, but to me Haskell is more about programming with types than anything functional.

On the practical side, it seems like Haskell excels as a language processor, and parsing/compilation would be a great way to explore how to structure certain kinds of applications (like web servers, graphics pipelines, etc).

Please don't make it too real-world, I already have bash ;)

3
maximveksler 7 hours ago 2 replies      
HN never fails. Just as I was browsing the web after reading pg's essay about programming languages and the story of how he built what is now Yahoo Store using Lisp, and deciding that instead of Lisp I would learn Haskell, this comes along. Looks very good; I hope that it won't be /too/ easy a start, without any challenges...
4
arocks 5 hours ago 0 replies      
This is a very promising approach. I was blown away when they actually executed a web application and the working web page was shown inside an IFRAME. This means that I could learn programming from any device, even a tablet, without needing to install anything.
5
monkeyfacebag 8 hours ago 1 reply      
As excited as this makes me, I am way more excited to hear about the "full blown Haskell IDE" they mention. Is this a cloud service?
6
hosh 9 hours ago 1 reply      
Lady's Illustrated Primer, ha!

I've been working my way through "Learn You a Haskell For Great Good". I worked my way up to around functors and monads. Then I stopped for a while. This new School will be interesting :-)

7
bpolania 7 hours ago 0 replies      
I learned Haskell in my first algorithms course at university; mine was the last class ever to do it there, as the following semester they switched completely to Pascal, and later to Java.

This is good news. I just can't wait to see that IDE.

8
yarou 7 hours ago 0 replies      
Signed up. :)
This is my nth attempt at learning Haskell, but I have a fairly good handle on most of the basics.
9
hexonexxon 7 hours ago 1 reply      
alas, a waiting list

doubt i'll switch from scheme but still interested

10
Bill Gates 2013 Annual Letter gatesfoundation.org
72 points by sethbannon  7 hours ago   27 comments top 6
1
sethbannon 5 hours ago 9 replies      
Bill Gates always seemed like a more appropriate person to idolize, if you must do such things, than Steve Jobs. Not only did he build the world's largest technology company (in its time), but he's also making a global philanthropic impact.
2
tokenadult 3 hours ago 0 replies      
I don't know how many of you who followed the link were offered the reader survey as you visited it. I found that interesting, in that it asked a few of the same questions before reading the page, with my consent, and then again after I read it. I suppose the questions are designed to track attitudinal change from reading the page. I had read some of the page content earlier as the New York Times op-ed, which I think was also submitted here to HN.

I will always rail at Microsoft products, although I still use them. (Two of my four children have already switched to Linux for their PCs, but I'm still an old Windoze fuddy-duddy.) I do like Bill Gates's approach to philanthropy a lot. I particularly like the Gates Foundation research on effective teaching, some of which I apply to my own work as a mathematics teacher in private practice. Helping charities become more effective would indeed be a great contribution to society.

3
adnrw 6 hours ago 0 replies      
Kottke's report on it is very interesting: http://kottke.org/13/01/read-bill-gates-annual-letter
4
dr_ 5 hours ago 2 replies      
Gates' legacy will ultimately be his foundation, more so than Microsoft.
5
pkeane 3 hours ago 1 reply      
This five-part exchange between a public education advocate and a member of the Gates Foundation team working on education was eye-opening.

http://blogs.edweek.org/teachers/living-in-dialogue/2012/07/...

The Gates Foundation work is and will likely continue to be disastrous for public education in the U.S. It is simply an attempt to push education into the private sector.

6
zt 4 hours ago 0 replies      
The letter led me to wonder about measurement and results in the world of "doing good". I was wondering what this forum's thoughts on the matter are. Should we only fund the things that get results? How should we quantify our results? And probably most importantly: what do we choose to measure? (Decrease in rainforest destruction vs. companies that choose to use non-rainforest sources for wood or palm oil?)

I worry sometimes--and I co-founded an organization that brings data science to civic and social organizations--that this will be similar to No Child Left Behind (or World Bank like), where non profits do what is best for their metric instead of what is best for the population they are trying to serve.

11
What The Rails Security Issue Means For Your Startup kalzumeus.com
330 points by timcraft  16 hours ago   157 comments top 30
1
pifflesnort 15 hours ago 8 replies      
The "everybody has bugs" response is intellectually dishonest. Yes, everybody has bugs, but most people's bugs aren't an intentional feature that a trained monkey ought to have known was a bad idea.

- Someone implemented a YAML parser that executed code. This should have been obviously wrong to them, but it wasn't.

- Thousands of ostensible developers used this parser, saw the fact that it could deserialize more than just data, and never said "Oh dear, that's a massive red flag".

- The bug in the YAML parser was reported and the author of the YAML library genuinely couldn't figure out why this mattered or how it could be bad.

- The issue was reported to RubyGems multiple times and they did nothing.

This isn't the same thing as a complex and accidental bug that even careful engineers have difficulty avoiding, after they've already taken steps to reduce the failure surface of their code through privilege separation, high-level languages/libraries, etc.

This is systemic engineering incompetence that apparently pervades an entire language community, and this is the tipping point where other people start looking for these issues.

2
mikegirouard 14 hours ago 4 replies      
This quote caught my attention:

    There are many developers who are not presently active on a Ruby on Rails
project who nonetheless have a vulnerable Rails application running on
localhost:3000. If they do, eventually, their local machine will be
compromised. (Any page on the Internet which serves Javascript can, currently,
root your Macbook if it is running an out-of-date Rails on it. No, it
does not matter that the Internet can't connect to your
localhost:3000, because your browser can, and your browser will follow
the attacker's instructions to do so. It will probably be possible to
eventually do this with an IMG tag, which means any webpage that can
contain a user-supplied cat photo could ALSO contain a user-supplied
remote code execution.)

That reminded me of an incredible presentation WhiteHat did back in 2007 on cracking intranets. Slides[1] are still around, though I couldn't readily find the video.

[1]: https://www.whitehatsec.com/assets/presentations/blackhatusa...

3
bguthrie 14 hours ago 0 replies      
This was a hugely helpful big-picture overview of the recent vulnerabilities. Everyone, please go read it.

I had been meaning to get some context for the recent spate of security problems and this provided that in spades. Thanks for taking the time to write it up and post it.

4
firemanx 14 hours ago 0 replies      
I think the recent Rails, Java, RubyGems and other vulnerability issues have been an absolute boon to the industry. And not just because of the increased business I think most security consultants are going to be seeing.

The exploits have happened in ways that have exposed and hammered home the myriad places many applications expose unexpected side channels and larger attack surfaces than you'd think. These issues have opened a broader range of people to vulnerability, and I think opened a lot of people's eyes to the need for a sense of security and what that really means.

Top that with the level of explanation we've seen in at least the Rails and Ruby exploits, it's been a tremendous educational opportunity for a lot of people who will benefit greatly from it, and by proxy their users.

When the idea of a "SQL Injection" first became really prevalent, we saw an uptick in concern for security amongst framework developers, as far as I could tell. I think this will help get some momentum going again.

Speaking as a non-expert on the subject, security is all about a healthy sense of paranoia, across the board :)

5
jcampbell1 13 hours ago 1 reply      
It would be interesting if someone wrote a worm that just took all the vulnerable Rails apps offline. That way we would have less worry about a million compromised databases. It could be launched from a bookmarklet run from the Tor browser, and would probably exhaust every IP address in a few days. It would also land whoever did it in jail for a really long time.
6
jammycakes 7 hours ago 2 replies      
When I look at the Ruby/Rails community, the word that comes to my mind more than any other is hubris.

You see this in things such as security issues being marked as wontfix until they are actively exploited (e.g. the Homakov/GitHub incident), in the attitude that developer cycles are more expensive than CPU cycles, and on a more puerile level in the tendency towards swearing in presentations.

I've always had the impression that the Rails ecosystem favours convenience over security, in an Agile Manifesto kind of way (yes, we value the stuff on the right, but we value the stuff on the left even more). One of the attractions of Rails is that it is very easy to get stuff up and running with it, but some of the security exploits that I've seen cropping up recently with it make me pretty worried about it. I get especially concerned when I see SQL injection vulnerabilities in a framework based on an O/R mapper, for instance.

7
ontoillogical 15 hours ago 3 replies      
> The recent bugs were, contrary to some reporting, not particularly trivial to spot. They're being found at breakneck pace right now precisely because they required substantial new security technology to actually exploit, and that new technology has unlocked an exciting new frontier in vulnerability research.

What technology is he talking about here?

8
tomjen3 15 hours ago 1 reply      
This is a pretty good example of why I hate big frameworks. They are simply too big to prevent stupid issues like YAML extraction in JSON and XML.

If you are like me, you would expect that YAML was used in the configuration files and nowhere else. A small framework like Sinatra wouldn't have been big enough to hide an issue like this.

9
josephlord 15 hours ago 2 replies      
All the RubyGems stuff is happening at a high rate, and I understand that over 90% of the gems are now verified and it looks like nothing was backdoored, but I couldn't find a good summary of the current situation, so I have a couple of questions.

1) Is it currently safe to "bundle update" and be confident that only verified Gems will be provided? I don't mind errors on any unverified ones but don't want to download them.

2) Is there a drop-in replacement for RubyGems? The problems that have occurred this month would have been multiplied if RubyGems had been unavailable at the time Rails had an apocalyptic bug.

10
sneak 3 hours ago 0 replies      
Why do people write things like "We Should Avoid #'#(ing Their #()#% Up" instead of "We Should Avoid Fucking Their Shit Up"?

http://www.youtube.com/watch?v=dF1NUposXVQ

11
scarmig 10 hours ago 0 replies      
Given the severity, it'd almost be a public service to hit every public Rails server and exploit it to patch it with the security fix(es)...
12
true_religion 12 hours ago 1 reply      
> The first reported compromise of a production system was in an industry which hit the trifecta of amateurs-at-the-helm, seedy-industry-by-nature, and under-constant-attack. It is imperative that you understand that all Rails applications will eventually be targeted by this and similar attacks, and any vulnerable applications will be owned, regardless of absence of these risk factors.

Who was the first reported compromise of a production system?

13
jmount 15 hours ago 1 reply      
This blindness to how bad YAML was is causing "convention over configuration" to devolve into "security by convention".
14
lucian1900 13 hours ago 0 replies      
To me it seems that all of this is due to the obsession with implicit behaviour in Rails, and to some extent Ruby.

I hope they learn from this and stop chanting "convention over configuration" when told that explicit is better than implicit.

15
ph0rque 15 hours ago 1 reply      
So I went through my heroku closet and cleaned everything up (pulling the plug on unneeded apps and making sure needed apps were up to date).

My question: do these security issues affect Sinatra apps?

16
static_typed 14 hours ago 2 replies      
As much as I was a fan of developing Ruby apps, I was constantly shocked by the lack of engineering rigor, security concern, and API stability - basically, of serious software engineering - within the community.

It would be good if all this were a clarion call for the Ruby community to improve things holistically, rather than continuing the current trend of band-aid fixes.

17
kyllo 11 hours ago 1 reply      
Is this YAML vulnerability something that can be patched in relatively short order without Rails itself having to be completely rewritten?

Or should I basically just not run Rails on any machine ever anymore, get a different web server, and start implementing my own request routing and ORM without any sort of YAML-parsing magic?

>One of my friends who is an actual security researcher has deleted all of his accounts on Internet services which he knows to use Ruby on Rails. That's not an insane measure.

So anyone who uses Twitter, for example, could have their passwords and other data stolen through this exploit?

18
btown 14 hours ago 0 replies      
Is Rails moving to a YAML (or almost-YAML) parser that does not execute code for future major releases? I find it hard to believe that such functionality is used often. Until then, as the article says, people will just keep finding zero-days. This seems like the only logical choice for the Rails core team.
19
rlpb 15 hours ago 2 replies      
You do all deploy from your own cache of all the gems you depend on, right? No? Why not?
20
delinka 13 hours ago 3 replies      
"Any page on the Internet which serves Javascript can, currently, root your Macbook if it is running an out-of-date Rails on it."

Why are you running Rails as the root user? This is a bad idea.

EDIT: I'm not really into client-side JavaScript these days, but when did browsers start allowing JavaScript to connect to anything except the server from which it came? That would be yet another Bad Idea.

21
romaniv 13 hours ago 0 replies      
Some time ago I asked certain Ruby people how to dynamically load Ruby code (for configs). They told me it's Wrong. It seems that in practice the idea wasn't much worse than YAML after all.

I am still convinced that configs and templates should be treated as executable code and are best implemented in the same language they're used from. At least it makes certain things blatantly obvious. (It also makes a lot of other things possible without any extra coding/learning.)

22
sergiotapia 13 hours ago 2 replies      
I'm losing my Ruby and Rails faith here; what gives? This is just as bad as leaving SQL injection attacks open.
23
ams6110 3 hours ago 0 replies      
Why don't we have "building codes" for software?

There was a time when anyone who claimed to have the ability could design and build things like bridges and buildings. After enough of them collapsed due to repeated, avoidable mistakes, we said no, you can't do that anymore, you need to be licensed to design and build buildings, and furthermore you have to follow some basic minimum conventions that are proven to work. And you and your firm has to take on personal liability when you certify that your design and construction follows those basic best practices.

24
drawkbox 11 hours ago 0 replies      
New internet law: any sufficiently sized platform or framework will attract increasingly more compromising/malware attacks. Anyone still running WordPress knows this all too well.
26
djkz 6 hours ago 0 replies      
How feasible would it be to have a gem that sits in middleware and checks for possible attacks before the string gets any further, blocking and sharing the IPs of people fishing for exploits?

I could see it as a service company that shares blacklist info between sites and can even find new exploits from the "bad" requests.

27
s1kx 14 hours ago 1 reply      
Is there no hardened version of Psych that lets you either disable object deserialization or whitelist classes? That would seem like the safest option right now to guard against coming vulnerabilities in Rails in this regard.
28
hayksaakian 13 hours ago 0 replies      
So what are the good recent minor versions of Rails? And where can I find them in the future?
29
SkyMarshal 10 hours ago 0 replies      
I'm not a Rails developer, is JRuby on Rails affected by this?
30
hawleyal 13 hours ago 2 replies      
FUD much?
12
Reaching 200K events/sec aphyr.com
62 points by mattyb  7 hours ago   22 comments top 5
1
aphyr 6 hours ago 1 reply      
If you're wondering about the workload, this is the trivial Riemann config this benchmark uses:

  (streams
    (rate 5 (comp prn float :metric)))

Which means for each five-second interval, sum all the metrics of the events flowing through this stream, divide by the time elapsed, and print that rate to the console.
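A rough Python analogue of that stream, to make the semantics concrete (Riemann flushes on a scheduled task; this sketch flushes lazily as events arrive, a simplification):

    import time

    def rate_stream(interval, child):
        state = {"total": 0.0, "start": time.time()}
        def stream(event):
            state["total"] += event.get("metric", 0.0)
            elapsed = time.time() - state["start"]
            if elapsed >= interval:
                child(state["total"] / elapsed)  # metric units per second
                state["total"], state["start"] = 0.0, time.time()
        return stream

    prn_rate = rate_stream(5, print)  # call prn_rate(event) once per event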

I'm using this setup to put the heaviest load possible on Riemann's client and TCP server while I optimize those layers--it's not meant to stress the internal stream library, the state index, or pubsub. When I start optimizing those components, I'll have more "real-world" numbers to report.

I should also explain that this particular post explores the high-throughput, high-latency range of the client spectrum. End-to-end TCP latencies (not counting wire time) for single-event messages are on the order of ~100 microseconds to 1ms, with occasional spikes to ~30ms depending on JVM GC behavior.

2
revelation 6 hours ago 2 replies      
TLDR: Burned by framework magic. Talk about side effects.

Ten layers (and probably buffers) traveled through until your data hits the wire. Layer x decides to change its IO model and your throughput takes a dive. It's exactly why there was a post recently about building an operating system just to run some network daemon.

3
trekkin 6 hours ago 1 reply      
>> Throughput here is measured in messages, each containing 100 events, so master is processing 200,000-215,000 events/sec.

So in reality it is ~2k messages/sec. This is rather poor throughput, as even off-the-shelf generic web servers (e.g. nginx) have throughput an order of magnitude higher, and proprietary systems can reach 500k messages/sec over the network.
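The conversion being done here:

    events_per_sec = 200_000
    events_per_message = 100
    print(events_per_sec // events_per_message)  # 2,000 messages/sec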

4
dschiptsov 27 minutes ago 1 reply      

  (defn execution-handler
"Creates a new netty execution handler."
[]
(ExecutionHandler.
(OrderedMemoryAwareThreadPoolExecutor.
16 ; Core pool size
1048576 ; 1MB per channel queued
10485760 ; 10MB total queued
)))

It is a farce, isn't it?))

5
bad_user 1 hour ago 0 replies      
Offtopic, but it's refreshing to read blogs with clean designs and readable text, without ads or widgets inviting you to click on things and all that crap.
13
Chinese Hackers Stole NYT Employee Passwords nymag.com
5 points by angelohuang  49 minutes ago   1 comment top
1
tantalor 30 minutes ago 0 replies      
> Symantec… found just one of the 45 pieces of custom malware installed on the Times servers

Obviously general-purpose anti-virus software is completely ineffective against purpose-built malware.

14
That Daily Shower Can Be a Killer nytimes.com
313 points by danso  16 hours ago   211 comments top 40
1
daeken 13 hours ago 6 replies      
This is a bit of an aside, but I have to say this: even as a young person, take falls very seriously. I was 22 or so when I slipped in the shower. I was falling to the side and was going to hit my head, so I decided to twist so that I'd fall flat on my back, figuring I'd be fine. Well, I was, until a few hours later, when I started having chest pain and my left arm went numb and started getting shooting pains.

Thinking it was a heart attack, I went to the ER and was told I was fine, and that it was probably just a pinched nerve from the fall. Three years later, the pain hasn't stopped -- the chest pain isn't so bad these days usually, but my left arm is almost continually numb and, well, the body doesn't really get used to the pins-and-needles feeling. If I had taken it more seriously, had a CT scan taken at the time, etc., it might have been caught early. Unfortunately, now that so much time has passed, doctors are at a loss as to what's going on.

I'm still finding new doctors and doing my own research into what's going on, but this process has been excruciating. So, please, if you have a fall: go to the doctor, and have them do a real examination immediately. When I went, they focused on my heart and didn't even so much as look at my neck or my shoulder; had I gone after the fall, they may have figured out what it was, and I wouldn't be in pain years later.

Hope this cautionary tale helps someone!

2
JoeAltmaier 16 hours ago 4 replies      
A handy technique for evaluating situations is this: how many mistakes am I away from death/injury? If I drive without a seatbelt, I've put myself 1 accident away in many cases.

My Scouts are young, and love to climb things. I tell them, I know you're strong and skilled. But a loose rock or slippery foothold puts you at risk anyway. So wear the harness - now it takes two mistakes to kill you (e.g. loose rock + badly rigged harness). The risk goes down drastically.

So put some non-skid floor mat in your shower, or a chair as advised in this thread. The mis-step no longer carries the same risk.
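
The arithmetic behind that intuition is simple: independent failure probabilities multiply. A toy calculation in Python, with made-up numbers:

  p_slip = 0.01             # chance of a loose rock or slippery foothold
  p_harness_fail = 0.01     # chance the harness was badly rigged

  print(p_slip)                    # one mistake from injury: 1 in 100
  print(p_slip * p_harness_fail)   # two independent mistakes needed: 1 in 10,000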

3
noname123 14 hours ago 2 replies      
In the derivatives world, there's a saying for trying to bet against highly unlikely events with catastrophic risk for small gains: "picking up nickels in front of a steamroller." The issue is when you pick up enough nickels and watch for the steamrollers vigilantly first few times, you grow complacent and think that you are the master of nickel pickers and steamrollers are slow mofo's. You try to pick more nickels and linger longer in front of incoming steamrollers.

See the debacle of Long Term Capital Management. Options traders take profits/losses as soon as a humble target is hit. As in life, the biggest loss is the complete loss of your physical capital, which takes you out of the game permanently. Gamblers focus on the potential profits and get high on how their luck evaded fate in the nick of time; traders focus on preservation of capital.

4
ef4 11 hours ago 2 replies      
Speaking specifically about fall risk in older people: if you want to ensure a high quality of life when you're older, maintain strong muscles through exercise. This may seem like a no-brainer, and yet almost nobody actually does it.

Balance and strength are highly dependent on exercise. Even fairly old people can maintain very good balance and strength if they don't let their muscles deteriorate through inactivity.

Being frail in old age is not inevitable. A sedentary person after age 50 loses something like 5% of muscle mass annually. But that same person can boost their muscle mass 20% in a single year if they just get serious about strength training, and then slow the deterioration to 1 or 2% thereafter. Run the numbers, it makes a dramatic difference in outcomes.
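
Running the comment's own numbers over 20 years (a rough sketch; the percentages are the comment's figures, not medical data):

  start = 100.0                          # arbitrary units of muscle mass at 50
  sedentary = start * 0.95 ** 20         # lose 5%/year -> ~36 units left at 70
  trained = start * 1.20 * 0.985 ** 19   # +20% in year one, then -1.5%/year -> ~90 units
  print(round(sedentary, 1), round(trained, 1))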

5
jessedhillon 15 hours ago 2 replies      
Strangely not mentioned at all, so I'll put it here: Jared Diamond is the author of Guns, Germs and Steel, his most famous work. It addresses the question of how and why European societies were able to advance themselves so much farther ahead of all other civilizations. There is also an excellent four-part series streaming on Netflix.
6
protomyth 14 hours ago 4 replies      
The CDC keeps statistics on how people in the USA die: http://www.cdc.gov/nchs/fastats/lcod.htm

Looking at the 2011 prelims "Accidents (unintentional injuries)" comes in at #5 with 122,777 deaths and Intentional self-harm (suicide) is at #10 with 38,285. Assault (homicide) is no longer in the top 15.

Also, Assault (homicide) by discharge of firearms is 11,101 with all other Assault (homicide) totaling 4,852. To give some context to the Assault (homicide) numbers, "Accidental poisoning and exposure to noxious substances" totals 33,554 deaths or about 3x the Assault (homicide) firearm number or 2x the total.

Looking at the stats and what kills us, we spend a lot of time looking at the stuff that is actually going down versus the stuff that is increasing.

7
nathan_long 15 hours ago 2 replies      
Yep. This is why every dollar spent on the TSA would be better spent on preventing car accidents, heart attacks, and falls in the shower.

http://www.schneier.com/blog/archives/2010/01/the_comparativ...

But I like the OP's more practical point: personal attention to normal activities that are actually risky, based on a realistic view of those risks.

8
brudgers 13 hours ago 0 replies      
Mortality Data on Falls from the CDC shows the increase in risk as Americans age - and that is the direction of our demographics:

  Cause of death (based on ICD-10, 2004)   Falls (W00-W19)
  All ages                                 26,009
  Under 1 year                             10
  1-4 years                                24
  5-14 years                               28
  15-24 years                              211
  25-34 years                              299
  35-44 years                              493
  45-54 years                              1,283
  55-64 years                              2,011
  65-74 years                              2,988
  75-84 years                              7,249
  85 years and over                        11,412
  Not stated                               1

http://www.cdc.gov/nchs/data/dvs/deaths_2010_release.pdf

9
bambax 26 minutes ago 1 reply      
> If I'm to achieve my statistical quota of 15 more years of life, that means about 15 times 365, or 5,475, more showers.

Or, take fewer showers. You don't need to shower every day in winter, do you?

10
stcredzero 13 hours ago 1 reply      
The takeaway for programmers: every time you write a statement, the odds that you're introducing a bug are quite small, but you do this a lot, so the odds guarantee you will introduce a bug. The guys at NASA who came to TX/RX in Houston used to treat soldering defects as a statistical certainty. You empirically determine your solder failure rate, count the solder joins in the project, then look for your predicted number of failures. When they told me this, a light went off in my head. Why don't all programmers do this?

Think of it like this: if someone paid you $100 to take 2 steps balancing on a rail, you're certain to be able to do it. How about going 100 times as far for $10,000 over a 1000 foot drop? As they say: Quantity has a quality all its own.
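
A sketch of that NASA-style estimate, with made-up numbers -- the same arithmetic works for solder joins or lines of code:

  p = 0.002    # empirically measured failure rate per join (or per statement)
  n = 3000     # number of joins in the project

  expected_failures = p * n             # ~6 predicted defects to go looking for
  p_at_least_one = 1 - (1 - p) ** n     # ~0.9975: effectively certain
  print(expected_failures, p_at_least_one)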

11
dr_ 2 hours ago 0 replies      
The article ignores the fact that there is a variety of equipment available to further minimize the risk of falls in the elderly - and it's the elderly that really matter, because a fall, in their case, could very well turn out to be life threatening.
There are shower mats to increase friction. There are bars in the shower to hold on to. There are even tub benches to sit on while showering.
I don't really expect the sort of absurd headline grabbing articles like this from the NYTimes. I expect articles that responsibly describe the risks and ways to mitigate them.
12
shin_lao 15 hours ago 5 replies      
Author doesn't understand how statistics/probabilities work and it ruins the article.

If you roll a die six times, you have no guarantee of getting a six. Each time you roll the die, you have a 1/6 chance of getting a six, and this doesn't change no matter how many times you roll.

If you roll a die six times, you have around a 66.51% (1-(5/6)^6) chance of getting at least one six.

For the same reason, if you have a 1/1000 chance of dying in the shower, and you take 5,000 showers, you won't die 5 times...

You will have a 1-(999/1000)^5000 chance of dying -- that's 99.33%. That's not 1. So you won't die 5 times.
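
The numbers check out (Python as a calculator):

  print(1 - (5/6) ** 6)           # 0.6651... -> ~66.51% chance of at least one six
  print(1 - (999/1000) ** 5000)   # 0.9933... -> ~99.33%, very high but not certain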

13
EwanG 16 hours ago 0 replies      
For those equally concerned, there are "shower chairs" that you can get that greatly reduce the risk of falls. Since my SO had her amputation it was the only way she could take a shower. However I've found it to be pretty convenient as well.
14
IsaacL 15 hours ago 2 replies      
One of the reasons falls are so fatal for old people is that elderly people in our society rarely stay active. Look at cultures like the Chinese, where you see elderly women up in the morning doing Taichi, or working out on those weird outdoor gym sets the rest of the day. Maintaining strength and flexibility during old age helps mitigate damage from falls a lot.

Physical deterioration is often a self-fulfilling prophecy.

15
skittles 3 hours ago 0 replies      
I see someone on a motorcycle, and I think "idiot". I've known 7 people in my life who have had motorcycle accidents. One has had 2. He broke his neck in one of them. A couple spilled their bike on uneven pavement. A guy I met in college was like doctor House (dead bone in his leg causing great pain). Another guy was thrown 60 feet when he was rear-ended by a truck. And the last 2 are a father and son. The father has brain damage that destroyed his marriage. Go ahead and have your midlife crisis. I'll be in my car. I might die in a horrific accident someday, but a fender bender won't turn into road-rash and a concussion.
16
hammock 15 hours ago 2 replies      
Nothing against New Guineans, but the notion of attributing not sleeping under a dead tree to their specific culture is funny. It's basic risk management for anyone who camps frequently in the backcountry. In fact many learn the 4 W's of concern when picking a campsite: Wind/Weather, Water, Wildlife, and Widowmakers (i.e. a falling tree branch)
17
timruffles 15 hours ago 0 replies      
Reminds me a lot of Nassim Taleb's focus on payoff/cost (expected outcomes) vs probability. If the possible costs are high enough, it makes a lot of sense to take precautions. Hardly rocket science - seat-belts, hand-rails, smoking etc.
18
casca 11 hours ago 0 replies      
For a more detailed description of Jared Diamond's experience with the dead tree in New Guinea:

http://www.edge.org/conversation/tales-from-the-world-before...

And if you're in London, consider adding yourself to the waiting list: http://www.thersa.org/events/our-events/the-world-until-yest...

19
Qantourisc 3 minutes ago 0 replies      
You mean... you people in the US don't all have rubber anti-slip mats in your showers?
20
caf 11 hours ago 1 reply      
It's common knowledge in Australia that you don't pitch your tent under a tree, alive or dead. I imagine that's because Eucalyptus trees have a habit of dropping large limbs without warning (it's an adaptation to survive droughts).
21
symmetricsaurus 10 hours ago 0 replies      
Angband (the roguelike) is an excellent teacher of the point made in the article.

The game is quite long, and if you die you have to start over from the beginning. You can also be killed in one turn if you are unlucky and not careful. The only way to win is to lower the risk of dying each turn enough that you can play through the 100,000 or so turns it takes to win.

Playing it has really given me perspective on risks in a similar way to the author of the article. In real life you end up doing some things a lot of times and then the risk has to be damn low.

22
ajaymehta 8 hours ago 0 replies      
I initially thought this was going to be about long-term health risks of showering every day. I've heard that long hot showers are bad for the skin and/or hair, is that true? Or might they be bad for the body in other ways?

(But good piece nonetheless, Jared Diamond is brilliant.)

23
abraininavat 16 hours ago 3 replies      
Not discussed is how the author's hyper-vigilance impacts his life in the form of stress. Stress kills.
24
larrys 13 hours ago 2 replies      
The NYT headline is clearly link bait. If it had simply said "Some thoughts on everyday risks we underestimate" it would go nowhere.

That said, there is a clear benefit and need for many people to take a shower, as opposed to sleeping under a dead tree or crossing the street in the middle, for which there are workarounds.

25
eagsalazar2 10 hours ago 3 replies      
Seriously showering 365 times/year is, while normal in the US, really pretty bizarre and definitely decadent. Put on some deodorant, shower every other day and this guy could cut down his risk massively. And save colossal amounts of water!

Does he sweat regularly? Play sports? If not, in the winter, shower twice per week. Doing otherwise is really falling into one of the weirdest forms of American prissiness.

In general I shower when my wife tells me I need to. On average that means every 2-3 days. How strange and wasteful would it be if someone washed their car every single morning regardless of whether it was dirty or had even been moved from the garage that day! (I work from home so the analogy frequently works)

I know he's making a broader point about risk but if you want to get over some crazy warped American perceptions to improve your life and the world, obsessive showering is as good a one as getting over delusions of risk of terrorist attacks.

26
doki_pen 5 hours ago 0 replies      
When I hear a statistic like "your chance of falling and getting an injury in the shower is 1 in 1000", I assume they mean over my lifetime, not each day. There is no way in hell it's 1 in 1000 per day. I don't buy it. That said, I get the point of the article.
27
jokull 8 hours ago 0 replies      
Consider doing exercise that strains your bones and strengthens you in ways that will protect you when falling. Jiu-jitsu is excellent at this. You will learn to structure your body and create frames with your limbs in many positions, and you will fall to the ground multiple times. This prepares you.
28
dhughes 8 hours ago 0 replies      
CBC.ca mentioned a study by Simon Fraser University in BC, Canada.

> "We show that the most common causes of falls are incorrect weight shifting and tripping, and the most common activities leading to falls are forward walking, standing quietly and sitting down,"

http://www.cbc.ca/news/health/story/2012/10/16/falls-elderly...

29
samspot 11 hours ago 0 replies      
What else should we be vigilant about besides falling? I felt the article was a little incomplete w/o this information.

I've got one:
* Lifting heavy things correctly.

30
zobzu 6 hours ago 0 replies      
I like how this article is written, in good English. It's refreshing.
It might be that old people write better than we do. :)
31
eridius 11 hours ago 0 replies      
Amusingly, Lonely Island's latest music video, just posted 5 days ago, expresses pretty much this precise sentiment. It's called YOLO and is available at https://www.youtube.com/watch?v=z5Otla5157c.
32
taeric 15 hours ago 0 replies      
Oddly, I'm not sure why one would not consider skipping the daily shower if you had these concerns. Though, I realize that is a highly personal decision. (Meaning that personal influences matter a lot. I've known folks who would go without a shower for upwards of a month with nobody noticing. Others, if a day was skipped it was obvious.)
33
Irregardless 13 hours ago 0 replies      
"Cause of injury: Lack of adhesive ducks." - Sheldon

I'm not sure what the takeaway from this article is supposed to be. Always be conscious of your safety but not to the point of constant paranoia?

Isn't that basically -- dare I say it -- common sense?

34
stewbrew 15 hours ago 1 reply      
That's why they build special showers (that are on floor level) for elderly people. No reason to sleep under a dead tree.
35
vickytnz 12 hours ago 0 replies      
This stuff particularly applies to machinery: the horror stories about people getting hair and clothing caught in machinery or looking away for a second when using a bandsaw are always a reminder to be careful no matter how many times you've used the machine before.
36
eliben 11 hours ago 0 replies      
Boy, I love Jared Diamond's books. An absolute must read for any hacker.
37
mattmaroon 8 hours ago 0 replies      
I love it when people who don't understand the concept of expected value circle around it intuitively without quite getting there.
38
tomjen3 14 hours ago 2 replies      
Solution: a bathtub. I have a combination bathtub + shower, so I sit down and wash myself -- it doesn't take longer and I can't fall (since I am already at floor height).
39
abraininavat 13 hours ago 0 replies      
It remains to be shown that a hypervigilant attitude has any effect whatsoever on what you're being vigilant about, and in particular shower falls.
40
adamio 15 hours ago 0 replies      
Where can I get that snake-head shower-head?
15
Starting a Bike Shop priceonomics.com
144 points by rohin  12 hours ago   44 comments top 15
1
at-fates-hands 11 hours ago 3 replies      
Having worked for a bike shop for almost ten years, this article was interesting to me.

Full disclosure - this is who I worked for (http://www.eriksbikeshop.com/EriksHistory.aspx)

Some things about what made ERIKS so successful:

1) bike shops make almost zero money on their bikes. The margins are razor thin. They make a majority of their profit on accessories. For example, they buy kickstands for 25 cents and sell them for $12.

2) ERIKS actually used Macs in all their locations and used a custom built POS system. They only needed one IT guy who ran 8 stores when I was there (they have around 14 stores now). He said if they were using MS, they would've had to hire a bunch more support people. It helped keep costs down.

3) At first ERIK looked for lower rent, out of the way places to set up shops. What he called, "sub-prime" locations. Rent was lower, and because of their superior marketing, they didn't need a prime location to drive traffic.

4) They trained their sales staff. This was a biggie. As a new salesperson, you went through a full sales training. You were trained in sales techniques: how to close, how to get people to buy accessories, how to approach someone, how to go through the process and ask the right questions to get to a single bike to sell. By far this is really where they separated themselves from other stores. Most places would hire bike "enthusiasts" and let them casually sell a bike if a person was interested. They had a very pro-sales approach and it showed in their numbers.

5) Keeping wages low. This is probably a contentious subject for most. But as a salesperson, I made minimum wage, with a small percentage for commission. I think it was 1 or 2% of the total sales I had. By keeping the wages low and offering in store discounts and bike manufacturing discounts (which the manufacturers offer, not ERIKS) they were able to pitch potential employees that, "There's not a lot of money in the bike industry (lie), but people who work for us have a good quality of life and good perks of being a part of the bike community." By keeping wages low, they increased their own bottom line.

2
rjett 4 hours ago 0 replies      
While this article was noncommittal in offering up the reasons for HB's success, the pleasant sales experience was reiterated throughout the article. THIS IS HUGE IN RETAIL.

As of this coming Saturday, I'll be a year into operating a coffee shop I opened with two friends. Like HB, my shop has exceeded my expectations in the first year. FWIW, here are my observations on success:
If you have the best sales experience in town coupled with a great, consistent product, people will talk about you. I monitor every mention of keywords related to our shop on Twitter, Facebook, Yelp, Instagram, Tumblr, etc., and after a year in operation, it's pretty clear that people value the attitude [or lack thereof] of my employees, our approachability, the quality and consistency of our product, and the beauty of our shop. In combination, these things help create and maintain a lot of positive buzz about the shop.

Buzz begets buzz, too. We've never once paid for advertising and instead let word of mouth and good product do the talking. First there were just tweets, Facebook mentions, and word of mouth. Then bloggers started writing about us. Then we had local press do a few stories on us. With time, all this turned into positive reputation and people outside of the city began to mention us. Recently we've received national press. We also just climbed atop the #1 ranking on Yelp for coffee in Charleston, SC. Over the past year, I've made a concerted effort to keep people talking and it seems to have worked.

TLDR: If people see integrity in the ownership, quality in the product, and a pleasant sales environment, they will tell someone and they will be loyal to the brand. If you've done your job so well that you can inspire your customers to talk about you, you will grow, even if you aren't in a prime location.

3
jrockway 3 hours ago 3 replies      
I'm not from San Francisco, but is 7th/Market really "a bad neighborhood"? I mean, maybe it's not Midtown Manhattan nice, but I also doubt it's East New York bad.

(I used to work at the University of Chicago, and took the Red Line to the 55th St. bus. I only had my shoes stolen on the Red Line once, and only watched a high speed chase come to an end with guns-a-blazin' while waiting for the bus once... but I'd still consider that a sketchy neighborhood.)

4
fennecfoxen 7 hours ago 1 reply      
"There is no “google analytics” for a shop or measurable ways they can promote themselves offline or online. Somewhere in all of this, there is probably a billion dollar startup idea or two."

Startup idea or two? Don't mind if I do.

- http://www.ekahau.com/solutions/retail.html

- http://www.nearbuysystems.com/solutions/in-store-analytics.h...

And here's some additional coverage of that space in the media, with regards to car dealerships:

- http://online.wsj.com/article/SB1000142412788732478440457814... (google URL and click for HTTP Referer if you get paywalled, but I don't think you will)

5
JacobAldridge 10 hours ago 1 reply      
"it's impossible to measure what part of their marketing is driving new customers and what part is wasted spending."

The article, which I enjoyed, uses the word "impossible" 2-3 times in this context. I do not think that word means what you think it means - it's not impossible to ask at point of sale "How did you hear about us?" and record that. Or even have short survey forms for people to complete while they wait (you'd rather people didn't wait; but to the extent that they are they'd rather fill in info to enter a competition or similar than be bored.)
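
That point-of-sale tracking is genuinely tiny to build -- a sketch with made-up channel names:

  from collections import Counter

  referrals = Counter()

  def record_sale(heard_about_us_via):
      referrals[heard_about_us_via] += 1

  for answer in ["google", "friend", "walked by", "friend", "yelp"]:
      record_sale(answer)

  print(referrals.most_common())   # [('friend', 2), ('google', 1), ...]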

I disagreed with pg's distinction about startups v real world businesses at the time, and this is part of the reason why - assuming the different types of new businesses are worlds apart because one needed $250k to launch and another didn't.

6
malandrew 5 hours ago 0 replies      
There is an underserved niche in the biking industry and that is "urban chic" bike clothing from multiple vendors. Clothing is one area where brick-and-mortar retail wins out over online retail because nothing beats trying clothes on. In SF, I've tried on clothing from Chrome, Mission Workshop, SWRVE, Rapha, etc.

Sizing between the brands is notoriously fickle, and it's impossible to know whether one brand will fit just right when you can't even get your thighs into another brand's pants. For example, Chrome fits me really well, while Rapha doesn't fit at all. SWRVE, on the other hand, is only sold at 4 stores in the city (Huckleberry Bikes is one of them; the other three are MissionBike, BoxDog and PushBike). While I haven't been to HB, I have been to PushBike and MissionBike Co, and both have a very, very limited selection and sizes of SWRVE clothing. The only non-cycling company that has decent stuff is Lululemon. I'm shocked that in a city like SF, with lots of cyclists, it is so hard to find good biking clothing.

Given the growth of urban cycling, I'm honestly surprised no one has noticed that you could create just a store that has an excellent selection of fashionable everyday cycling clothing.

I hope the guys at HB are reading this, because I plan on going by there to see if their clothing selection is any better than PushBike and MissionBike Co.

7
grecy 9 hours ago 1 reply      
Did anyone else follow the google maps link only to find they were not looking at Huckleberry Bikes?

https://www.google.com/maps?q=&layer=c&z=17&iwlo...

Looks like Midtown rags to me.

8
bigiain 8 hours ago 1 reply      
"Talking to Huckleberry, it also seems clear that the most important software tools for small business haven't been invented yet. There is no “google analytics” for a shop or measurable ways they can promote themselves offline or online. "

Hmmm, I guess there's already people working on systems using cameras, face detection and recognition, and linking them to records of faces previously in the store (or even passing by), and to cash register sales (and credit card identities)?

I find the idea both fascinating and creepy.

And I'm now wondering how many stores I walk into are doing it already? What're the chances that most big casinos don't already have something like this automatically alerting customer service when whales arrive (and security when card counters arrive)?

9
cunninghamd 11 hours ago 1 reply      
What is this? I hate to be rude, but it reads something like "Hey, we wanted to know how to start a bike shop, and had some friends that did. ... Here's their renovations ... 1 year later, they rock, and they don't know why ... oh, and there's 1 or 2 BILLION dollar ideas in here, have fun!"

Have they done surveys? Have they asked their customers "how did you hear about us?" Did they bring on each of those employees with sales experience? What was their onboarding process? Did they just get lucky? How far away WAS that other bike shop?

It was a fine article, but I felt it lacked real details.

10
cheald 10 hours ago 0 replies      
I am desperately disappointed that there is no discussion on the best practices for deciding which color to paint the shed.
11
calbear81 8 hours ago 0 replies      
Interesting read as usual from the folks at priceonomics but it seems like a lot of the problems about pinpointing how people found them have solutions that work to some degree including:

- Provide a nominal discount with a coupon code "goog" so you know users came from Google. Or you can do a printed coupon with a tracking code embedded. I see coupons all the time for bike shops like Valencia and American Cyclery.

- Just ask how people heard about you. Most people will tell you because making up a story is harder and it'll be natural since your sales associates are so personable.

In terms of why the location works, my best guess would be that the mid-Market area is becoming gentrified as more and more startups move in and rents rise, and young professionals with disposable income will pass by the store on their way to work.

12
lamby 10 hours ago 0 replies      
There's a related thread on Slowtwitch about this:

http://forum.slowtwitch.com/gforum.cgi?do=post_view_flat;pos...;

13
guelo 10 hours ago 1 reply      
That neighborhood is not even close to one of the "worst areas" in America. Makes me wonder what the quote is from.
14
cmbaus 5 hours ago 0 replies      
It was a smart move to open at that location on Market, especially with the perks that the city was offering. It is right on a major bike commuter route, and I suspect hundreds of bike commuters ride by each day.

There are some other great shops in town, Box Dog for instance, but their access to commuters is probably better than any other shop in town.

15
hexonexxon 7 hours ago 0 replies      
All the bike shops in my city are also repair places, which is where they make all their money -- charging crazy hourly rates to change a tire or for tuning, and from triple-marked-up accessories. They all know each other too, and will collectively bulk-buy everything to get a cheaper wholesale price.
16
Jekyll CMS on Amazon S3 and MaxCDN netdna.com
37 points by jdorfman  5 hours ago   21 comments top 8
1
tantalor 50 minutes ago 0 replies      

  On the Obama campaign we made our donate pages 60%
faster and got a 14% increase in donation conversions.

Did they test this? For what purpose? We've known this is true since 2006.

http://www.zdnet.com/blog/btl/googles-marissa-mayer-speed-wi...

2
sudonim 3 hours ago 1 reply      
Current MaxCDN customer here. I'm really excited about this. I host multiple sites on Cloudfront but assets for a rails app on Maxcdn. I wanted to consolidate about a month ago. When I asked MaxCDN support why I couldn't use www, they just said no and that I should move to netdna (their other brand) to do it.

From the blog post:

"Note: MaxCDN does not automatically allow you to create a www CNAME in order to protect their system from DDoS attacks. To enable this functionality, email MaxCDN directly and mention this blog post."

I'm going to ask again and see if I can test against the experience of deploying to cloudfront. One of the things now with cloudfront is it often takes a bit of time to expire the cache.

Edit: Here's the response I received 5 mins ago from support after asking for this...

"Thanks for emailing us and I do apologize for the delayed response. I will need to escalate this to our support engineers for verification since we don't allow the use of "www" as part of custom domains for security purpose and also this needs higher level of access."

3
pclark 2 hours ago 3 replies      
Am I missing something or is Jekyll an outrageously user unfriendly blogging platform?

I have to create a text file, specify the layout, ensure the timestamp and title is in the filename, and then write HTML.

I am stunned no one has made a simple scaffold that lets me create a post with a button click and then use markdown to write, on a simple web app.

4
molecule 3 hours ago 0 replies      
Boto is awesome for interacting w/ AWS, but managing an application stack in ruby (jekyll) and a deployment stack in python (https://gist.github.com/4596766) does not seem optimal.
5
eli 4 hours ago 1 reply      
Any particular reason to use MaxCDN over CloudFront?

Edit: just realized why that's a silly question. Would still be curious to hear the answer though.

6
hayksaakian 4 hours ago 3 replies      
How does Jekyll + S3 + CDN compare to dynamic rails site with russian doll caching (cache digests) using something like redis?
7
kerno 2 hours ago 1 reply      
We're very interested in maximising site speed (which we are currently reviewing -- too many plugins slowing our Wordpress site down), but I don't think we could easily manage to produce a new post every day using a static site generator -- am I wrong?
8
the1 3 hours ago 0 replies      
Or just start blogging on gist.github.com. Your blog URL is http://gist.github.com/<your-github-id>
17
Should we talk about the fact that founder Jody Sherman didn't just die? launch.co
160 points by danboarder  13 hours ago   111 comments top 20
1
antirez 11 hours ago 3 replies      
I'm not sure the problem is with being a founder. Actually it can be a very stimulating experience in your life, and may even bring an economic boost that is not bad for stability.

I think the problem is with the culture of being a founder: there is agreement that you can sacrifice almost everything for your work. That is not true. As long as you enjoy what you do, it's OK to work long hours at critical stages of your startup, but it is also OK to take pauses, go drink with friends, and avoid the pressure in general.

Another thing that always makes me a bit suspicious is "we are going to change the world". This is a good recipe for pressure. Maybe there is another way to take it, which is simply: I work without too many expectations, I try to do my best, but it's not a drama if this does not work as expected. I'm also worth something as an individual, outside my work, outside the business.

Maybe we are building too aggressive a culture and this is the result? I think that "back to technology" should be the motto for the next 10 years.

2
ahoyhere 10 hours ago 4 replies      
Yes, we should. Thank you for bringing it up. The OP makes some good points about the risks and ridiculous time/energy expenditures of "startups," but he misses the most dangerous element.

Grandiosity is a problem in the startup space. Grandiosity can be a sign of personality disorders… or, if you ask me, it can be a sign of hanging out with people who exhibit grandiosity, tell you it's what you have to have to achieve what you want, who laud you for having it, and who mysteriously aren't there to help you when you fall on your ass. In fact, who tear you up when you do.

Yeah, it sounds like high school, doesn't it? Only the stakes are a lot higher.

Every time I hear somebody say they are going to "change the world," I cringe. I imagine those people as some combination of arrogant fucks (pardon my english) and/or depression waiting to happen.

Please, please be reasonable. If you want to change the world, start by volunteering at a soup kitchen. Don't expect your startup to "change the world." Don't think you have to, to achieve your dreams and help people, either. Don't talk yourself up. If you're insecure, let yourself be insecure, don't slap a layer of grandiosity and self-aggrandizement on top.

Please, don't be somebody to other people who you aren't… the more you pretend to be confident to others, while being insecure inside, the less you feel like anyone KNOWS you, or cares about you, the more alone you feel, the more likely you are to really, desperately feel the pain of isolation.

The best way to be happy is to be grateful for what you have, not to constantly anticipate becoming famous or rich or having an outsized impact on the world when you haven't even made a tiny, local impact first. Have perspective. Volunteer. Spend time with your friends, and your family if you like them. If your friends exhibit all of the above symptoms, make a few new friends who are totally disconnected to the whole "ecosystem" so you can simply be real with them. (Note: not saying you should drop your grandiose friends. But consider whether they're healthy for you.)

Otherwise you risk being caught up in a spiral of obsession and disconnection, a constant raising of stakes ("Who's going to change the world more!? Who's more ridiculously confident?! Who can work longer and harder?!") which will, statistically speaking, never pay off.

3
richardjordan 12 hours ago 3 replies      
Jason raises a good point. The stresses of startups often go unmentioned, or certainly minimized, in the myth-of-the-heroic-founder our startup narratives become after the fact of a success. Failure and hardship are rites of passage, right?

But the collateral damage is real. Going out on a limb financially is celebrated but most stories don't end with a win - just the ones we tell. Most startups fail and a lot of these risk takers we celebrate end up with financial strains for years to come, busted relationships or broken marriages. Anything can derail the startup process. Experienced entrepreneurs are acutely aware of this and if the end comes when you're out on that limb it can be devastating.

I am sure there are many more suicides and countless lives broken that we just don't hear about because they never achieved prominence or success. It's worth pausing a moment to think about that.

To the criticism of Jason's post. He has his haters for whatever reasons, though I find them to be highly unfair - I think it's clear that he genuinely loves startups and will do whatever he can to help the ecosystem succeed. I've met him twice and he's been nothing but generous. I think it's wrong to accuse him of using this for traffic. It's an important point he's making, and he's right nobody was talking about it in this case, for some reason.

4
arram 12 hours ago 3 replies      
"I keep saying how brutally hard this is. Each time you crest the rise in front of you, it just makes it clear the size of the even larger hill that looms beyond it. It goes on for a long time. I pissed blood for years keeping Netflix alive while we figured that shit out " as did every other successful entrepreneur in the valley." - Marc Randolph, Founder/CEO, Netflix.
5
nicksergeant 12 hours ago 3 replies      
I said this on Twitter and I think it's worth mentioning here.

This stuff is sad. And it makes me think that true startup founders aren't doing it right. If your startup is so central to your life experiences that without it, life isn't worth living, then you have a problem, and you should seek professional help.

Making this worse is the culture of startup founders gloating about how hard they work and how much of a mental toll it takes on them (and those around them).

Work smarter, not harder.

6
endlessvoid94 11 hours ago 0 replies      
I care very deeply for my mental health and part of that is balance. My parents always taught me to maintain balance, and it's turning out to be the most important (and hardest) pursuit of my life right now, as I do a startup of my own.

Get up early, go home at a reasonable hour (5 or 6pm) and leave your computer at work. Read a book, go out with friends, have a social life when you have the energy.

Do not sacrifice your health for extended periods of time -- it's not worth your mental, physical, and social health to get a bunch of money.

7
astine 11 hours ago 2 replies      
The reason, I think, that people are being so circumspect about Jody's death, is because it is so close on the heels of Aaron Swartz's own demise. Suicide tends to happen in waves where the first one tends to encourage copycats. Aaron received a lot of publicity and it's not impossible that Jody was partly inspired/encouraged to follow through as a result. I think a lot of people are concerned about encouraging more potential suicides in the startup community.

Whether or not this is the way to do that, I do not know.

8
orionblastar 7 hours ago 0 replies      
Look I have talked about this before but was ignored. There is a lot of stress in the industry, and if people don't know how to handle that stress a mental illness may develop from it.

In my case I developed schizoaffective disorder, and ended up in a hospital and on short-term disability. After that I was fired after having a panic attack. When others discovered I was mentally ill they bullied and harassed me. Yes, adult bullying and harassing, and adult social cliques and all that exist. It is not just teenagers who are abused by bullies but adults as well.

In your startup you have to have a way to treat people who develop a mental illness and find a way to get them therapy and medication to get better and accommodate them and support them. You should not consider them of less value and demote them and cut their salary, you should not fire them, or consider it a weakness or personality flaw.

The way classical management treats the mentally ill, it is no wonder that suicides are up, that some became workplace shooters, and that many others just go on disability or become homeless or end up in an endless cycle of jails and mental hospitals. You need to have management deal with mental illness better than it currently does, and it can even affect the CEO of your business as well.

It would do you well to hire some people with psychology, and sociology skills that can work with therapy that any employee can go to for help. You also need people who can watch out for warning signs as well. This should be a function of your HR department and your EAP (Employee Assistance Program) with the state or some other government agency.

Yes I've been suicidal in the past, yes I had friends kill themselves over issues of not finding work, stress from the job, and other stuff. I am a member of Generation-X the suicide generation and in my early 40's. It is a miracle that I am still alive, but since I am mentally ill no startup or community wants me. Being excluded can lead to suicidal thoughts as well you know.

9
klpa 11 hours ago 2 replies      
Mental health care is 1) unaffordable 2) inaccessible and 3) socially unacceptable.

I'm a well-paid white-collar professional and I'd still find the financial burden of between $150 to $250 a week or more hard to swallow. Seriously, who has an extra $600/month lying around? And that's a conservative cost - heaven help you if your insurance isn't up to snuff or if you need meds. Not to mention the extra time out of one's day, because most therapists will not do house calls. How much does an hour of your time cost? How much more that of an entrepreneur?

Let's talk social acceptability: how many people want to invest in a person (which is really what entrepreneurship is all about) who is seeing a therapist? How many people want to have relationships with those they know have mental health issues?

10
Mz 10 hours ago 1 reply      
You know, I know the copycat thing is a real phenomenon, but let me suggest that it could also just be that we all live in the same world and are often subjected to additional stress around the same time. The so called "January Effect", of a dip in sales, can be pretty directly tied to overspending for Christmas the month before. January is also a time when people start looking at the paperwork involved in filing taxes, a big UGH for most people. And many people put on a happy face for the holidays while feeling worse than ever because the merriment around them often reminds them how empty and unhappy they are. It isn't unusual for people to delay announcing ugly decisions, like a decision to divorce, until after the holidays.

Maybe this was a long time coming* for both Aaron and Jody, for completely unrelated reasons, and perhaps the close timing is "coincidental" in that we are all subjected to some of the same larger trends, no matter who we are.

* I do not necessarily mean years. I am more suggesting weeks, in other words maybe they both decided Christmas was not the time to do this to other people.

11
danbmil99 10 hours ago 0 replies      
I am reminded of the debate going on around American football. The big "concussions" are obvious, and in a good situation you will have friends, family and colleagues on your side (although disappointed), because the event and its impact are big and obvious to everyone around you.

But the slow buildup of damage due to the everyday smashing of your head against one brick wall or another, is both hard to measure, and difficult to communicate to people outside the startup bubble world. And even if you do get some sympathy, inevitably they say something like "you're so smart and talented, you can get a job anywhere and live a normal, relaxed life!" They're trying to help, but to your ears it just sounds like "PLEASE, QUIT NOW BEFORE YOU FAIL AND ALL YOUR DREAMS COME CRASHING DOWN AND YOU FALL APART IN FRONT OF EVERYONE WHO LOVES AND ADMIRES YOU!" and it has the paradoxical effect of making you even more depressed.

Just saying.

12
guard-of-terra 9 hours ago 0 replies      
Maybe Wil Shipley nailed the problem in his "On Being Crazy":
http://blog.wilshipley.com/2005/05/on-being-crazy.html

So, is genius linked with craziness? Is this why we aren't all geniuses? Is mankind only so smart because if we get any smarter, we cease to function correctly? Maybe it's just not evolutionarily advantageous to be smarter than we are; it makes us mopey, and we end up cutting our ears off when we're trying to woo girls, which rarely results in offspring.

13
benatkin 12 hours ago 0 replies      
No, all the other posts I've read about him are very good. When you give a eulogy you don't criticize the other people giving a eulogy.

The lack of details didn't take away from the message, in most cases. For an example of this, see Mark Suster's two posts. I think both of his posts stand alone.

14
dmoney 6 hours ago 0 replies      
Not to take away from the tragedy of these founders' deaths, or from the negative effects of unrealistic expectations, but is there evidence that these expectations or the pressure they were under caused their suicides? Is their any data on the suicide rate in the startup community vs. the population at large?
16
DuskStar 12 hours ago 1 reply      
The post title to me has the possibility of him still being alive... I find the full title much more clear.

"Should We Talk about the Fact That Jody Sherman Didn't Just Die, But That He Killed Himself?"

17
j_s 11 hours ago 0 replies      
I've never read a blog post at launch.co before, interesting that the author chooses not to include any bio.
18
hect0r 12 hours ago 4 replies      
This seems to me to simply be a gratuitous speculation on Jody's death with no other purpose than to try and generate traffic by appearing to be some sort of brave, dissenting voice. It is completely unnecessary and in bad taste to pontificate on his cause of death and, even if it was suicide, I really struggle to see the benefit of discussing what is ultimately a private matter for his family.

The argument that discussing the circumstances of Jody's death is necessary because there is a systemic issue of founders killing themselves is outrageous and an insult to the reader's intelligence. Is there any evidence at all that founders are more likely to kill themselves than, say, the unemployed or indeed any other vocational group? Sure, being a founder is stressful but then so are many other vocations in life...

19
Evbn 4 hours ago 0 replies      
Should we talk about the fact that President Obama didn't just die?
20
oh_sigh 8 hours ago 1 reply      
So what lawsuit was Jody Sherman involved in? That must have been the reason he killed himself. We should take down that prosecutor, where ever they may be.
18
Flight: A lightweight, component-based JavaScript framework from Twitter github.com
187 points by uggedal  14 hours ago   63 comments top 10
1
dos1 12 hours ago 2 replies      
>When you create a component you don't get a handle to it. Consequently, components cannot be referenced by other components and cannot become properties of the global object tree. This is by design. Components do not engage each other directly; instead, they broadcast their actions as events which are subscribed to by other components.

This is a maintainability nightmare. I've gone down this route on hand rolled JS frameworks before and it leads to untold headaches. Sometimes a little coupling is the right solution. Many times it is better for components to have knowledge of their environment. In most applications (other than iGoogle maybe) one thing on a page depends on another. I feel it's far better to explicitly call out those dependencies by having a reference to the other components and directly invoking methods on them with known arguments of a known type.

Edit: Also, what kind of memory allocation implications are there with event subscriptions and no references to the components themselves?

2
glymor 12 hours ago 0 replies      
This looks like Twitter's version of Web Components?

You might have heard of it shadow DOM etc. Basically the idea is to be able to add GUI components eg <progressbar> that are as integrated as the native ones are.

https://dvcs.w3.org/hg/webcomponents/raw-file/tip/explainer/...
https://plus.google.com/103330502635338602217/posts

Mozilla's X-Tags implementation seems closer to the goal though: http://www.x-tags.org/ IIRC it can do this because it's using Object.observe internally to detect DOM changes.

ie with X-tags you don't need the javascript definition part, just: <x-map data-key="39d2b66f0d5d49dbb52a5b7ad87aea9b"></x-map>.

3
dmragone 13 hours ago 8 replies      
Recently I discussed with colleagues why Ruby on Rails succeeded where no Python framework did. One hypothesis put forward was that there was 1 popular Ruby framework (Rails), but many popular Python frameworks.

I wonder if it will be the same for JavaScript front-end frameworks, with no one gaining the greatest mindshare.

I'm not necessarily making any claims about what's preferable (one framework to rule them all vs. many competing options) -- though I'm certainly curious about that as well for the front-end frameworks.

4
scottrblock 13 hours ago 1 reply      
I've played around with most of the JavaScript MV(whatever)'s and I'm just not sure about this.

"Flight is organized around the existing DOM model with functionality mapped directly to DOM nodes"

It seems the point of using backbone or which ever other framework (I'm partial to Angular so far), is to decouple from the DOM, so that if and when your markup changes, your JavaScript doesn't. This doesn't seem any better than well written jQuery.

Am I missing something?

5
karl_nerd 13 hours ago 0 replies      
What I think is interesting to see is that the patterns from Nicholas Zakas's "scalable JavaScript architecture" are spreading into new frameworks.

Enforcing components w/o return values, communicating via pub/sub, is also seen in aura.js and Backbone Marionette. I think this approach will lead to more stable, easier-to-change JS apps. Exciting!

6
WayneDB 9 hours ago 0 replies      
How can anyone build a serious front-end without proper keyboard support? Every single one of these new-fangled web UI kits is missing this important feature and if you're going to do it at all, it really does need to be a core consideration.

This is my biggest complaint about most web apps and the number one reason that I think web apps are perceived to be less powerful. Give me a break web devs! (Or, should I say browser makers?) Let's get serious already!!

(Also if we're talking browser manufacturers, I'd really, really like to see a completely separate abstraction for apps than documents. C'mon man!! We've been inventing the same thing for 20 years, let's get it right for once :)

7
vicapow 12 hours ago 1 reply      
why not just use component?

http://github.com/component/component

8
weareconvo 8 hours ago 0 replies      
Seeing as how Bootstrap fails at even the most elementary of tasks (using absolute instead of relative values for positioning, for example, screws everything up when the user's zoom is anything other than 100%, among many, many, many other flaws), anybody who uses this framework is a competitor I am... not worried about.
9
benregenspan 10 hours ago 1 reply      
Why not just use Flash?
10
bvcqw 14 hours ago 6 replies      
Why not just use Angular?
19
Trello uses an icon font and so can you fogcreek.com
160 points by mxk  13 hours ago   33 comments top 10
1
latitude 12 hours ago 3 replies      
I just went through a process of @font-facing a website for one of my projects and I have several things to add:

1. Don't bother with direct font editing. Instead use a vector editor like Inkscape [1] to save your icons as SVG and then convert them to a complete font-face kit with IcoMoon [2].

2. IcoMoon will also let you cherry pick icons from a large number of both free and commercial icon packs. This is a great starting point, especially at the sketching phase.

3. A better way to browse icon packs though is with Fontello [3]. Same packs, snappier interface.

4. FontSquirrel font-kit generator is really good, because it does a great job (re-)hinting TTFs so that they come out looking better in smaller sizes (10-14px)

  -- but --

There are fonts that FontSquirrel generator doesn't process correctly. It would happily spit out an @font-face kit, which will look normal most of the time. However on selected Windows boxes the font will render without anti-aliasing, naturally making it look like butt. It doesn't appear to depend on the Windows version (saw it happen on XP and W7) nor the browser (saw it on IE and Firefox). Also, same machines will render all other font-face kits just fine. I spent a couple of days chasing the cause, but then gave up because I found a simple workaround.

The workaround is to serve OTFs. Instead of specifying .eot, .woff, .ttf, .svg in your CSS, list .eot, .otf, .woff, .ttf, .svg. OpenType files are generally heavier, but they also compress better, so it's a wash in terms of an I/O hit if served over gzip'd HTTP.

In other words -

  +----------------------------+
| Make sure to test the hell |
| out of your font-face kits |
+----------------------------+

To that effect, Adobe Browser Lab [4] includes a version of Firefox that exhibits above behavior. However, the Lab also includes a version of Chrome that pixelates all fonts, because it appears to run on a machine with anti-aliasing disabled at the OS level (yes, it doesn't make any sense, must be some sort of an inside Adobe joke).

So, the testing strategy is to get a proven font-face kit, like Open Sans, and check that both your font and this litmus font render well. If both look aliased, then it's the OS issue. If only yours does, then it's a problem with the font-face kit.

[1] http://inkscape.org

[2] http://icomoon.io

[3] http://fontello.com

[4] https://browserlab.adobe.com

2
taylorfausak 10 hours ago 1 reply      
I'm a huge fan of icon fonts, but I have a nit to pick with the HTML they chose.

    <span class="icon-sm icon-org"/>

That is not valid HTML. The <span> element is not a self-closing tag, so the slash is essentially ignored [1]. Fog Creek's blog is XHTML, so the tag makes sense there, but Trello is HTML5.

[1]: http://dev.w3.org/html5/spec-author-view/syntax.html#syntax-...

3
Poiesis 37 minutes ago 0 replies      
What are the benefits of using an icon font compared to just using svg icons? As far as I can tell, the only thing that a font can do better is get served in a single http request. I'd love to know what other differences are.
4
Andrex 1 hour ago 0 replies      
I'm using the Batch icon font[1] in a prototype web app right now and it's literally the best thing to happen to my workflow in years.

No redoing the icons in Photoshop or Gimp to get them in a different color, or shadow, or emboss, or size.

No messing with background position and constantly updating your sprite file, being careful not to place anything in the wrong place.

Just... data-icon="&#whatever;" It's incredibly beautiful and useful and I know it's going to be hard to go back whenever I end up having to.

[1] http://adamwhitcroft.com/batch/

5
joshstrange 13 hours ago 4 replies      
We use icon fonts on my current side project and it is wonderful to work with. We use font awesome (http://fortawesome.github.com/Font-Awesome/) which works well with (or without) twitter bootstrap.

It had greatly improved our UI and sped up development.

6
mmhd 11 hours ago 1 reply      
I'm not entirely sold on icon fonts. They look good at large sizes, but as they get smaller they become blurry. I'll stick to creating pixel perfect icons at different sizes.
7
J_Darnley 8 hours ago 1 reply      
What happens when I force my browser to use font "X"? Answer: I won't get to see your fancy trick, use an image!
8
nirvanatikku 12 hours ago 0 replies      
Nice write-up. As a developer who has a keen design eye, but prefers not to actually deal with Photoshop (particularly icons!), this approach has been a huge relief for me.

I know there are many out there, but if anyone is looking for a nice font-pack, I use pictos (http://pictos.cc/font/). That said, it's $50.

9
atesti 4 hours ago 1 reply      
Does Trello support IE8 now? Or why do they have special code for that?
10
pla3rhat3r 10 hours ago 0 replies      
©∞∫¡
20
How the U.S. Military Uses IRC to Wage War publicintelligence.net
98 points by Jaigus  11 hours ago   62 comments top 15
1
angersock 9 hours ago 2 replies      
Perhaps the funniest/scariest snippet of the article is a chat transcript:

    [03:31:27] <2/1BDE_BAE_FSE> IMMEDIATE Fire Mission, POO, Grid 28M MC 13245 24512, Killbox 32AY1SE, POI GRID 28M MC 14212 26114, Killbox 32AY3NE, MAX ORD 8.5K
    [03:31:28] <CRC_Resolute> 2/1BDE_BAE_FSE, stby wkng
    [03:31:57] <CRC_Resolute> 2/1BDE_BAE_FSE, Resolute all clear
    [03:32:04] <2/1BDE_BAE_FSE> c
    [03:41:23] <2/1BDE_BAE_FSE> EOM
    [03:41:31] <CRC_Resolute> c

It's like a normal botnet, except it's commanding our troops.

I thought it would be cool to do something like this as a hackathon project proof-of-concept; apparently I'm too late. Doing something like this for civil defense purposes would be pretty cool.

2
AYBABTME 2 hours ago 0 replies      
The main advantages of using IRC over radio comms are that:

* It's way more reliable than radio comms (you don't need to ask people to repeat everything all the time, you read your grids right the first time).

* It's concurrent, you can have multiple units reporting at the same time.

* It's buffered, you can skip stuff and come back to it later.

* It's cheap, anybody can get a window opened on the current ops and see what's going on, without needing all the hardware of a radio.

* It makes the reporting very fast.

* It makes collation/data collection much easier.

* It's scriptable: you can automate the collection of some messages, or the emission of others (see the sketch after this list).

They got that integrated at pretty much every level, and I think it's one of the most enabling things available right now for C2C nodes, in many armies (not just US).
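
To make the scriptability point concrete, here is a minimal log-collecting IRC client in Python. This is a sketch only: the server, channel, and nick below are invented for illustration, and a real deployment would obviously sit on an authenticated, encrypted transport.

    # Minimal IRC log collector -- a sketch; server, channel and nick are invented
    import socket

    HOST, PORT = "irc.example.net", 6667   # hypothetical server
    CHANNEL = b"#ops"                      # hypothetical ops channel

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(b"NICK logbot\r\nUSER logbot 0 * :log collector\r\n")
    sock.sendall(b"JOIN " + CHANNEL + b"\r\n")

    buf = b""
    with open("ops.log", "a") as log:
        while True:
            data = sock.recv(4096)
            if not data:
                break                              # server closed the connection
            buf += data
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                if line.startswith(b"PING"):       # answer keepalives
                    sock.sendall(b"PONG" + line[4:] + b"\r\n")
                elif b"PRIVMSG" in line:           # channel traffic: collect it
                    log.write(line.decode("utf-8", "replace") + "\n")
                    log.flush()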

3
charonn0 10 hours ago 1 reply      
Upon reflection, I find that it's not at all surprising that IRC is so widely used in the military establishment.

Military operations are, by definition, carried out by different coordinating groups sometimes thousands of miles apart. IRC is ideal for such a task as it provides a conferencing platform that is easy to use, develop, and deploy (in both the software sense and the military sense.)

It's unconcerned with the transport layer, so a wide variety of transport systems (which can include any manner of authentication and security) can be used: from a General or Admiral at a desktop client to drones mid-flight to boots-on-ground soldiers carrying a pocket-sized device. IRC is also a common technique for C&C in certain malware families.

4
revelation 8 hours ago 0 replies      
From the Manning pretrial:

    Defense (Coombs): D6 machines used primarily for...?
    Fulton: Analysis.
    Defense (Coombs): mIRC chat as a baseline?
    Fulton: Yes.
    Defense (Coombs): In fact, mIRC chat was installed on your machine as an executable desktop application?
    Fulton: I think so.

I wonder if they have netsplits over there.. gives a whole new meaning to EPIPE (Broken pipe).

5
NelsonMinar 9 hours ago 3 replies      
It surprises me that IRC is used. In part because IRC is old and crufty and, I fear, not very secure. Also because IRC isn't some milspec contract that made some insider hundreds of millions of dollars. It's great to adapt an existing chat technology but it surprises me.

I recently read "Predator: The Remote-Control Air War over Iraq and Afghanistan: A Pilot's Story" and it talks a lot about how UAV pilots hang out in chat rooms sharing intel during operations. Asynchronous text is the perfect medium for this kind of thing; low bandwidth, doesn't require a lot of attention. Just crazy to think it'd be IRC.

6
dendory 8 hours ago 1 reply      
IRC is great, and relying on such a simple and well-understood process is smart. It provides group chat, persistence, and is much clearer than trying to decipher a dozen people talking on a radio frequency.
7
ryanmarsh 2 hours ago 0 replies      
The SF guys I hung out with had/were using IRC on their laptop back in 2004 to communicate with other elements and their HQ which was God knows where. I was airborne infantry and we had an SF team on our camp, we worked mostly separately but hung out together a lot. After he showed it to me he told me the call signs were never to be repeated. I couldn't remember them if I wanted to. They had the laptop hooked up to sat coms. It was pretty cool for back then.
8
rdl 8 hours ago 3 replies      
The really amusing thing is when they buy huge LCD TVs and projectors to run IRC clients in an operations center.

And then call it "mIRC" because that's the crappy Windows client they use.

9
manacit 7 hours ago 0 replies      
The creator and developer of mIRC - Khaled Mardam-Bey - is Syrian and Palestinian. I wonder how often his software is used for tactical purposes against those countries?
10
gnosis 9 hours ago 7 replies      
That my work may be used for highly unethical purposes such as waging war is one reason that makes me think twice before releasing software.
11
johngalt 7 hours ago 0 replies      
I wonder if they've compensated the developer of mIRC.
12
Zarathust 10 hours ago 2 replies      
What really saddens me is the high probability that they spend billions of dollars every year to build custom, fancy comm protocols that nobody uses.
13
i386 5 hours ago 0 replies      
Wow, Khaled Mardam Bey (author of mIRC) must be rolling in gigantic piles of US Military cash if they have standardised on it.
14
Eliezer 6 hours ago 0 replies      
If they fight on IRC... I can win.
15
dobbsbob 9 hours ago 1 reply      
Looks like a LARP channel
22
Mark Cuban Is Endowing A Chair To ‘Eliminate Stupid Patents' techcrunch.com
107 points by isalmon  12 hours ago   31 comments top 11
1
ScottBurson 5 hours ago 2 replies      
I would also like to see a “cold room” exception. If you can show you invented the idea using completely independent thought, you don't violate the patent and the patent is invalidated.

This seems to make a lot of sense at first glance. But how do you ever prove that you were unaware of the patented invention?

I propose a different solution. The burden of proof should be on the patent applicant to show nonobviousness, rather than the burden being on the PTO to show obviousness, and objective evidence should be required. The kinds of objective evidence that could be supplied to show nonobviousness have already been delineated by the Supreme Court:

* commercial success
* long-felt but unsolved needs
* failure of others

For software, if you managed to get a peer-reviewed paper accepted, I think that could also count as evidence of nonobviousness -- just how strong would depend on the prestige of the conference or journal in question. Or maybe you could point to a passage in a textbook or someone else's published paper to the effect that the problem you are solving had been open for some time.

2
lukejduncan 11 hours ago 3 replies      
Can someone please clarify what it means to "endow a chair"? I get from the article it has something to do with the EFF, but no idea what it actually means.
3
rogerbinns 6 hours ago 2 replies      
You know who always wins? The lawyers, who are used in the litigation by both parties. If you receive a letter written by one lawyer, you'll need to consult another in order to tell them to get lost. There are essentially no downsides for the lawyers, other than their clients no longer being able to pay.

40-50% of the House and Senate are lawyers. I would be astonished to find them voting against their own profession's best interests.

4
aidenn0 12 hours ago 1 reply      
He's wrong about AMD and Intel; it's not that they don't patent the stuff they use, but that they have had cross-licensing agreements for x86 for a long, long time.
5
sievert 1 hour ago 0 replies      
Why does this only seem to be a problem in the USA? This doesn't occur in Australia/NZ but we still have patents.
6
trotsky 10 hours ago 1 reply      
I guess it's the times we live in, but the EFF has usually managed to seem a little more subdued than folks who would be fine with pushing a brand in return for a donation. After all it was never "Lotus 123 presents the hacker defense fund"
7
gesman 11 hours ago 2 replies      
I think the quickest way to do that would be to convince a small country, like Antigua for example, to become a patent-free zone open for infinite innovation.
8
bborud 9 hours ago 0 replies      
The best way to solve a problem is by not having the problem in the first place.

Or let me rephrase that: who decides which patents are stupid?

9
fennecfoxen 4 hours ago 0 replies      
Mark Cuban nothing. You should see what Steve Ballmer did to a chair.

rimshot

10
jthurman 8 hours ago 1 reply      
I read that as 'eliminate stupid parents' and got just a little excited.

But eliminating stupid patents is a good cause too.

11
aniijbod 9 hours ago 0 replies      
I have this cerebral tic where I often read stuff wrong the first time.

Upon rereading, it wasn't about parents, and it didn't involve a time-travel paradox.

23
Computer Science PhD trends vivekhaldar.com
5 points by gandalfgeek  1 hour ago   discuss
24
Computing Fibonacci atgcio.blogspot.com
12 points by legaultmarc  3 hours ago   3 comments top 2
1
wging 2 hours ago 1 reply      
"The proof of this identity probably requires mathematical knowledge that is beyond my current capacity"

Not at all!

Let M = [[1 1] ; [1 0]], the matrix under discussion. What does M do to a column vector v = [x ; y] under the rules of matrix multiplication?

    M*v = [x+y ; x].

But what is this transformation, in terms of the input and output vectors? It's the same transformation as the Fibonacci transformation! We take [current, previous] --> [current + previous, current].

This tells us that multiplying the matrix n times will give us a matrix that gives the same result as applying the Fibonacci transformation n times:

    M * (M * v) = (M * M) * v,

etc. (Think of the left hand side as applying the transformation twice, one after the other, and the right hand side as applying once a single transformation that has the same effect as two Fibonacci transformations.)

Now this tells us that M^n [1 ; 0] (the n^th power of the matrix M, multiplied by the initial state vector Fib_1 = 1, Fib_0 = 0) equals [Fib_{n+1} ; Fib_n].

You should be able to work backwards from that to see that M^n must have the entries specified, since matrix multiplication is just a simple algebraic process.

I've probably made some sort of off-by-one error here. But that's the idea.

What this suggests is that any method of computing M^n will work to give you Fib_n. You could try repeated matrix multiplications, but why not an adaptation of the standard fast exponentiation algorithm? To compute M^k, either square M^(k/2) (if k is even) or multiply M by M^(k-1) (if k is odd). So M^13 would be

    M * M^12 = M * (M^6)^2 = M * ((M^3)^2)^2 = M * ((M^2 * M)^2)^2,

done in 5 multiplications instead of 12 the naive way

    (M * M * M * ... * M).

This is done in chapter 1, exercise 19 of SICP, although they never explicitly admit that the transformation under discussion is a linear transformation or write down its associated matrix. http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html...

By the way, this view of matrices--that they express transformations that can be applied to vectors, and that transformations that can be written down as matrices have special properties and can be manipulated and composed to form new transformations with the same properties--is why linear algebra will knock your socks off in the right hands. (By contrast, if all you're told is that a matrix is what we call it when you line numbers up in a pretty little row, you will begin to hate your math class.)
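
Putting the above together, here is a minimal Python sketch of Fibonacci via fast matrix exponentiation, with the 2x2 multiplication written out by hand so nothing beyond the standard library is needed:

    # Fibonacci via 2x2 matrix fast exponentiation (sketch of the idea above)
    def mat_mul(a, b):
        # multiply two 2x2 matrices given as ((a00, a01), (a10, a11))
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
                (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

    def mat_pow(m, k):
        # k >= 1; square for even k, peel off one factor for odd k
        if k == 1:
            return m
        if k % 2 == 0:
            half = mat_pow(m, k // 2)
            return mat_mul(half, half)
        return mat_mul(m, mat_pow(m, k - 1))

    def fib(n):
        # uses the identity M^n = [[Fib_{n+1}, Fib_n], [Fib_n, Fib_{n-1}]]
        if n == 0:
            return 0
        return mat_pow(((1, 1), (1, 0)), n)[0][1]

    assert [fib(i) for i in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]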

2
pluies 1 hour ago 0 replies      
The naive recursive implementation + memoization would be a cool addition. :)
25
Most tech startups acquired in 2012 had no VC funding zdnet.com
65 points by skreech  10 hours ago   17 comments top 8
1
gyardley 2 hours ago 0 replies      
I'd expect this to be the case every year.

Companies can only acquire companies they can afford. When you take outside investment, your investors want a significant return, which places a floor on your acquisition price. The value you have to create gets bigger, and the pool of companies that can acquire you gets smaller.

Raising money is hard, but should you want to and manage to, it's very easy to paint yourself into a high-valuation corner that blocks all sorts of opportunities to make life-changing amounts of money.

2
pg 10 hours ago 6 replies      
This is rather a meaningless statistic, because acquisitions have a power-law distribution. Most acquisitions are HR acquisitions.
3
nikcub 1 hour ago 0 replies      
Based on raw numbers, true - based on returns, not even close. It is hard to find a $1B+ or multi-hundred million acquisition that doesn't have a VC investment.
4
mappum 9 hours ago 1 reply      
This makes sense, because companies that don't take VC funding are the ones that didn't need it because they were turning profits from the start. However, there are a LOT more companies that don't get VC funding and never succeed.
5
pbreit 5 hours ago 0 replies      
I would need to see a list of the companies before drawing too many conclusions. Is there some reason the list is not available?
6
dangoldin 9 hours ago 0 replies      
Seems like a vanity statistic to me. Most companies don't have VC funding so this is to be expected. At least compare this data to prior years.
7
JDDunn9 8 hours ago 0 replies      
Less than 1% of companies get VC funding, so getting funded makes you more than 24x more likely to get acquired.

I wish the report had more Bayesian probabilities to account for survivor bias.
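
For what it's worth, the lift figure is a one-line Bayes-style calculation. A sketch in Python; both inputs below are assumptions chosen for illustration, not figures taken from the report:

    # Back-of-the-envelope: how much more likely is acquisition if VC-funded?
    # Both inputs are assumed for illustration, not taken from the report.
    p_funded = 0.01             # assumed share of all companies with VC funding
    acq_funded_share = 0.20     # assumed share of acquisitions that were VC-funded

    # Odds-style lift: P(acquired | funded) / P(acquired | not funded)
    lift = (acq_funded_share / p_funded) / ((1 - acq_funded_share) / (1 - p_funded))
    print(round(lift, 1))       # ~24.8 with these assumed inputs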

8
photorized 8 hours ago 0 replies      
I think many of those were acqui hires.
26
Draft. Version control for writing ninjasandrobots.com
118 points by revorad  14 hours ago   45 comments top 18
1
jmduke 13 hours ago 4 replies      
This looks cool, but my immediate thought was that the documents most in need of version control (and Word offers version control, but it's hardly as elegant as what this purports) are the most complex ones: annual reports, long-form works, things that require the complex document elements that make Word feel so bloated sometimes. (Not to mention that these are documents that need to be worked on offline.)

Branding this service as a webapp seems like it's going to make those use cases impossible.

2
jonnathanson 5 hours ago 2 replies      
Have you considered an industry vertical? Version control is a huge-beyond-huge annoyance in the legal profession, or frankly, in any profession involving group work on contracts. I imagine you could make a decent chunk of change going the enterprise route and selling this to big firms, or licensing it to schools, or what have you.
3
bonaldi 12 hours ago 1 reply      
Looks good. Similar in principle to how a lot of newspaper production systems (Quark Publishing System/Quark CopyDesk, Atex etc) work - store major revisions alongside autosaves.

One comment: the diff screen is a bit too programmery. "har" vs "av" is impossible to accept/reject just from looking at it - you have to parse the context and work out what the words are. "Sharing" vs "Saving" or "Insanley" vs "Insanely" is much easier to immediately yes/no.
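
The usual fix is to diff on word boundaries rather than characters. A minimal sketch of the idea using Python's difflib (this is not how Draft actually implements it):

    # Word-level diff: split into words first, then align whole words
    import difflib

    old = "Insanley useful Sharing features".split()
    new = "Insanely useful Saving features".split()

    sm = difflib.SequenceMatcher(None, old, new)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":    # whole words: 'Insanley' -> 'Insanely', 'Sharing' -> 'Saving'
            print("-", " ".join(old[i1:i2]), " +", " ".join(new[j1:j2]))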

4
masnick 13 hours ago 2 replies      
This is a great idea. Microsoft Word-style version control (track changes) is not nearly as simple and powerful as Git (assuming you have already wrapped your head around Git, which isn't a reasonable expectation for many non-programmers).

I would love to have something like this for collaborating with non-technical authors on academic papers. Having a granular view of changes in an academic paper (or really any technical document) is really important, especially if you have grad students or research assistants making changes that need to be approved by the principal investigator.

Google Docs sort of does this, but it doesn't support any kind of citation management. (All citation management, possibly excepting Papers2, sucks; BibTeX seems too technical; but this is a whole other thing.)

But this looks way better than version control in Google Docs. If it really is easy to use and it works with some system for citation/bibliography management, I think academics would love it.

(Academia is the place where I personally see the greatest need for this. I don't mean to ignore or detract from other use cases that may be more prevalent.)

5
newishuser 13 hours ago 3 replies      
Why not just build an editor that works off git? Then you could abstract it from the user and leverage all the existing ways of sharing text like github and bitbucket.

"But even as a developer it's full of headaches." is also just plain not true.

6
ismarc 8 hours ago 1 reply      
While this looks kind of interesting, I've seen a large number of things that fall into this vein, and a few concepts seem to be missing (conflict resolution being a big one). Independent of that, though, when I first started working on my book, there were two tools I quickly needed but could not find. The first was a way to save references that linked outside of the document without being part of the document (think fact checking, reference materials, etc.). The second was a way to edit the document(s) and compose sections/components together at a high level. I ended up using muse mode for emacs and fossil as a VCS (the wiki is particularly helpful for collecting resources). What I ended up doing is several files on different topics, then one for each chapter and then one for the book as a whole. Then it's versioned text copied and pasted between files. If anyone has a better system for linking to references and managing the structure of the book, I'd love to hear it.
7
intellection 11 hours ago 0 replies      
http://revisionator.com does too.

What we need is open source, so all that writing, and all that revision history, is downloadable, liberated data.

8
dbecker 2 hours ago 0 replies      
I haven't looked at this in depth, but I'm glad to see someone is doing this. We need it.
9
arikrak 2 hours ago 0 replies      
It would be good if a program let you view each sentence's history on its own, so you could see different alternatives without needing to change the whole document.
10
programminggeek 4 hours ago 1 reply      
This is probably not a brilliant question, but why this instead of git for version control and editing directly on GitHub? With GitHub zen mode, you can edit a document directly on the site and every save is stored in git.
11
sushimako 11 hours ago 1 reply      
lflux[0] is a new (open-source) journalism effort/platform, using similar version-control for their articles in order to provide a different view on how online journalism could work.

The core idea (as far as I understand it) is to have topic-based journalism and a single, evolving main article per topic. This article always represents the status quo and receives updates/changes/additions when news happens, instead of publishing multiple event-based articles over time and relating them by means of categories, tags or else. A "timeline" gives an overview of how the article evolved (using a diffing algorithm similar to the one mentioned in TFA).

So if you want to read about the current state of affairs of - say - Fukushima, you wouldn't have to search through the latest x articles to find out what's going on. You'd rather check the (singleton) "Fukushima" article and could see the chronological changes in its timeline.

They have a showcase install online [1], which covers a few topics ([2], German) and is in fact maintained and authored by participating journalists.

[0] https://github.com/luminousflux/lflux

[1] http://onon.at/

[2] http://onon.at/wehrpflicht/

12
Azrael 7 hours ago 0 replies      
We use MediaWiki. It tracks revisions, lets you roll back, lets you make comments, lets you provide reasons for changes, gives you easy diffs... you can do a private wiki for a specific project, or projects. It's free, open source and extensible, the data is portable, and it's easy to move from one SQL database to another.

Book it, done.

13
Johnyma22 8 hours ago 0 replies      
Check out https://www.youtube.com/watch?v=bbpbMeDgTF8 for historical document search :)
14
Giszmo 13 hours ago 0 replies      
Normally when I ask people to review a draft, I put it into an etherpad. Etherpad also features version control to the letter, concurrent editing, tagging of versions, editor colors.

Sure, for anything that requires formatting it would fail, but "draft" has yet to prove it is a full-bloated … uhm … featured word processor.

15
JacobJans 8 hours ago 0 replies      
I think this might be exactly what I need for working with the freelance writers that I hire. I'm looking forward to giving it a try!
16
af3 13 hours ago 3 replies      
OFFTOP: can someone enlighten me about the SVBTLE thing in the left column? I have seen this logo on several blogs and cannot figure out what it is.

thanks.

17
ctbeiser 13 hours ago 0 replies      
It's an interesting concept, but how does it differ (not technically, but practically) from, say, OS X's built-in document version control, "Versions"?
18
clintboxe 12 hours ago 0 replies      
Looks great Nathan! Can't wait to try it out.
28
Our take on RethinkDB vs. MongoDB rethinkdb.com
99 points by coffeemug  13 hours ago   71 comments top 17
1
desireco42 12 hours ago 3 replies      
I started using RethinkDB in one of my projects and am looking for excuses to use it in more of them. So far things have been great and honestly my impression is that RethinkDB doesn't get nearly the hype it deserves.

I used Mongo before and it is a fine db, and I don't think I would be sad to use it; however, rethink really does so many things better.

Again, I just started using it and things are really good; I didn't run into any obvious limitations or annoyances.

There are several features that I really like. For example, the web admin is really well done, it is easy and obvious how you create a cluster, and there are a lot of small things that let me jumpstart my development faster, as I can run queries in the admin to try them out and get data back to see how things will look.

The only thing I am somewhat missing is 'brew install rethinkdb'

2
optimiz3 12 hours ago 2 replies      
My company uses MongoDB. Our biggest pain points are:

1. MongoDB has massive storage overhead per field due to the BSON format. Even if you use single-character field names, you're still looking at space wasted on null terminators, and 32-bit fixed-length int32s also bloat your storage use. We solve this by serializing our objects as binary blobs into the DB, and only using extra fields when we need an index. (A quick measurement sketch follows at the end of this comment.)

2. In Mongo, the entire DB eventually gets paged into memory and relies on the OS paging system which murders performance. For a humongous DB, not so much.

3. #1 and #2 force #3, which is sharding. MongoDB requires deploying a "config cluster" - 3 additional instances to manage sharding (annoying that the nodes themselves cannot manage this, and expensive from an ops/cost standpoint).

What I would like to know is:

1. What is the storage overhead per field of a document in RethinkDB? If it's greater than 1 byte, I'm wary.

2. Where is the .Net driver?
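
Regarding pain point 1 above, BSON's per-field cost is easy to measure with pymongo's bson package; a quick sketch, assuming pymongo is installed:

    # Measure BSON's per-field cost (requires pymongo's bson package)
    import bson

    empty = len(bson.BSON.encode({}))         # 5 bytes: int32 doc length + trailing 0x00
    one   = len(bson.BSON.encode({"a": 1}))   # 12 bytes
    # 7-byte delta = 1 type byte + "a\0" name (2 bytes) + int32 value (4 bytes);
    # i.e. 3 bytes of pure overhead even with a single-character field name
    print(empty, one - empty)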

3
lucian1900 13 hours ago 1 reply      
It does indeed look very much like MongoDB, but made by people that actually know what they're doing. It's refreshing to see good database design for a change.
4
mintplant 11 hours ago 0 replies      
I'm eager to take RethinkDB for a spin, as soon as secondary and compound indexes are fully implemented.

[1] https://github.com/rethinkdb/rethinkdb/issues/88

[2] https://github.com/rethinkdb/rethinkdb/tree/jd_secondary_ind...

5
jcdavis 11 hours ago 1 reply      
Not mentioned: RethinkDB doesn't yet support secondary and compound indexes, which is a dealbreaker for a lot of setups

Definitely looks interesting though, and I look forward to playing around with it at some point.

6
rubyrescue 12 hours ago 3 replies      
Riak is NOT operations-oriented. It's nearly impossible to manage operationally at scale without dedicated staff, and the tools to introspect, analyze, and deal with failures aren't robust enough yet.

I know they're just trying to contrast Riak and Cassandra with Couch and Mongo, and that Riak is designed to shard easily without the developer having to think about it.

That philosophy actually is "developer-oriented" in that it SEEMS like an operational savings because it was designed by developers.

7
wiremine 13 hours ago 3 replies      
Most of the posts I've seen about RethinkDB focus on "hey, we're a better NoSQL solution than MongoDB." That could be true, but so far I see it mostly coming from RethinkDB themselves, or people who like the design in theory.

However, does anyone have any practical real-world experience using it? It's not production ready (from what I gather), but has anybody actually used it for real world stuff?

For my own part, I tried it out, and got stuck trying to implement a many-to-many style join. I did some searching, and it looks like that is not really possible at this point. Not a big deal, but it might be handy to have some example SQL-to-RethinkDB queries, just to help us newbies figure out the ropes.

8
islon 13 hours ago 2 replies      
Every NoSQL database is perfect and better than all the other options until you start using it in the real world. I'm not saying RethinkDB is not a good solution; the point is, NoSQL dbs are about compromise and specific problems.
9
mhd 11 hours ago 1 reply      
How's the general performance and memory consumption on smaller machines, e.g. entry-level VPSs or the lower spectrum of AWS VMs? I don't have any big projects in the pipeline that immediately require sharding etc., but I would like to play with it on a few weekend-scale items.
10
arunoda 5 hours ago 0 replies      
This comparison speaks better than this post - http://www.rethinkdb.com/docs/comparisons/mongodb/
11
bunkat 9 hours ago 1 reply      
Looks very interesting, but this statement in their FAQ is a red flag for me:

How can I understand the performance of slow queries?
Understanding query performance currently requires a pretty deep understanding of the system. For the moment, the easiest way to get an idea of why your query isn't performing well is to ask us.

Wish RethinkDB was a little further along because it seems like it might be a good fit for a new service I'm building.

12
bfirsh 12 hours ago 2 replies      
This just reads like marketing speak. What are the disadvantages of RethinkDB?
13
smagch 2 hours ago 0 replies      
Other than RethinkDB, BigCouch looks like both a developer- and operations-oriented database, since it is a Dynamo-like CouchDB. Does anyone have BigCouch experience?
14
munimkazia 11 hours ago 1 reply      
We have been looking at NewSQL (or even NoSQL) platforms for our databases at my place of work, and we also stumbled upon RethinkDB. While everything these guys say does sound amazing, we were looking for someone who has implemented it, or any third-party case study about it. Since we couldn't find any, we decided not to go with RethinkDB for now.

Does anyone here know any big website/service which uses RethinkDB?

15
etanol 10 hours ago 2 replies      

    «An asynchronous, event-driven architecture based on 
highly optimized coroutine code scales across multiple
cores and processors, network cards, and storage systems.»

It may be a dumb question, but isn't this statement a bit contradictory? As far as I understand, event-driven design and coroutines (i.e. cooperative multitasking, lightweight threads, etc.) are the techniques usually chosen to AVOID concurrency.

How does such a design imply multicore scalability? Obviously, coroutines and event loops don't prevent you from running in multiple cores. I just fail to see the correlation.

16
dennis82 11 hours ago 0 replies      
This is marketing cloaked in a developer portal. I think it's great that RethinkDB is trying to distinguish themselves from Mongo, but what's the real marginal utility of RethinkDB over Mongo?

Mongo has been around for years, and it still has problems.

RethinkDB is just launching a new product that essentially does the same thing as Mongo, but is maybe just a little easier to use.

I think the Yet Another Database (YAD) question still hasn't been answered by this post.

17
raxitsheth 6 hours ago 0 replies      
Geo support? I think no!
30
White House Considers Joining Publishers To Stamp Out Fair Use At Universities techdirt.com
105 points by sethbannon  13 hours ago   36 comments top 11
1
btilly 10 hours ago 1 reply      
I am puzzled that stories like this surprise people.

Ever since Obama picked Joe Biden as his VP, it was clear what his administration's default stance on copyright issues would be. If you wonder why I say that, https://www.google.com/search?q=joe+biden+copyright provides lots of starting places to learn more.

Am I the only one around here who actually researches the candidates that I'm presented with to see what they think about issues that I care about? I thought (and still do) that Obama was the lesser of 2 evils both times. But I'm well aware that I do not agree with this administration on a number of issues, and copyright is one of them.

2
danso 10 hours ago 2 replies      
> The very first Copyright Act in the US was actually titled "An Act for the Encouragement of Learning." Current copyright law is explicit that fair use covers this sort of situation:

(the OP links to here:)
http://www.copyright.gov/title17/92chap1.html#107

> Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include...

It appears to me that the OP is misreading the law. From what I can tell, the law says that if a copy/reproduction of a work is fair use...including for classroom purposes...then it is not an infringement of copyright.

The OP interprets this as: "If something is reproduced for classroom use, then it is fair use."

That is not at all correct. Check out this primer about fair use and education:
http://fairuse.stanford.edu/Copyright_and_Fair_Use_Overview/...

The short summary of it is: You can't just copy whatever you want for purposes of teaching and assume that it is "fair use".

Maybe someone more well-versed in the law can correct me, but from what I can tell (I've had some education in this, but not a formal law degree), the OP is quite a bit wrong in his assumptions.

3
tzs 12 hours ago 3 replies      
Hmmmm...hint of conspiracy theory; xenophobia; out of context quotes from law; no idea how fair use actually works; extreme hyperbole; general cluelessness.

In other words, a typical Mike Masnick piece. I do not understand why anyone takes Techdirt seriously.

4
csense 11 hours ago 1 reply      
If Democrats are in favor of expanding rights for copyright holders, why did the Republicans a few months ago back away so far and so fast from their policy memo exploring opposition, to the level of firing the person who wrote it (never mind that it appears that it went through the correct channels to be published with the party's name on it)?

What copyright reform needs is someone with deep enough pockets to be able to b̶r̶i̶b̶e̶ lobby politicians effectively. Or perhaps a general public that understands and cares about the issue...

5
richardjordan 11 hours ago 0 replies      
I get that there are decisions in elections, and I get that on the issues YOU care about YOUR party is infinitely better than THEIR party, but when both parties just do the bidding of anti-democratic big business elites it just seems impossible to ever make things better, and the accumulation of these seemingly small steps continues. Stories like this ruin my day, not in isolation, but precisely because of the pattern of behavior in our body politic that they represent.
6
GiraffeNecktie 12 hours ago 2 replies      
Damnit, Aaron, we still need you.
7
SeanDav 10 hours ago 0 replies      
America has some of the finest laws that money can buy.

Right there is the issue, and America is going to continue to get ridiculous laws while the lobby system exists.

8
homosaur 11 hours ago 0 replies      
I honestly just assume at this point if there's a way for the Obama administration to dick over the public in favor of their corporate masters, they're going to jump on it. This has become typical and expected behavior from this hypocritical "transparency"-baiting group.
9
aspensmonster 4 hours ago 0 replies      
Well... think about it. If a professor was allowed to post the textbook --or even just pieces of it, say, the homework problems or example problems-- what use then would a student have for the textbook? Nobody would buy Fundamentals of Chemistry 50th Edition!

Clearly such actions are not fair use.

10
mark-r 8 hours ago 0 replies      
This ties in perfectly with another link presented on HN today: http://www.schneier.com/blog/archives/2013/01/power_and_the_...
11
kzahel 12 hours ago 0 replies      
This is outrageous.
       cached 1 February 2013 08:02:01 GMT