Hacker News with inline top comments - 12 Apr 2014
1
CloudFlare's Heartbleed challenge cracked twitter.com
351 points by jmduke  9 hours ago   89 comments top 18
1
nikcub 6 hours ago 4 replies      
Reading Cloudflare's blog post[0], they keep referring to the exploit having a length of 65,536 bytes, and how an allocation of that size is unlikely to find itself lower in the heap.

That is true - but this exploit doesn't depend on setting a length of 65,536. The server takes whatever length the client gives it (which is, after all, the bug). Most of the early exploits just happen to set the maximum packet size to get as much data out as possible (not realizing the nuances of heap allocation). You can set a length of 8 bytes or 16 bytes and get allocated in a very different part of the heap.

The Metasploit module for this exploit[1] supports varied lengths. Beating this challenge could have been as simple as running it with short lengths repeatedly and re-assembling the different parts of the key as you find them.
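
For anyone curious what "setting the length" means concretely, here's a rough sketch of a heartbeat request at the packet level (Python, purely illustrative - the helper name is mine, and it assumes a TLS connection has already been established elsewhere):

    import struct

    def heartbeat_request(claimed_len, payload=b""):
        # HeartbeatMessage: type 1 (request), attacker-chosen payload_length, actual payload
        msg = struct.pack(">BH", 1, claimed_len) + payload
        # TLS record header: content type 24 (heartbeat), version TLS 1.1, record length
        return struct.pack(">BHH", 0x18, 0x0302, len(msg)) + msg

    small = heartbeat_request(16)      # lands in a very different part of the heap
    large = heartbeat_request(0xffff)  # the usual grab-as-much-as-possible variant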

edit: something that I want to sneak in here since I missed the other threads. Cloudflare keep talking about how they had the bug 12 days early. Security companies and vendors have worked together to fix bugs in private for years, but this is the first time I've ever seen a company brag about it or put a marketing spin on it. It isn't good - one simple reason why: other security companies will now have to compete with that, which pushes companies not to co-operate on bugs ("we had the bug 16 days early", "no, we had the bug 18 days early!", etc.).

As users you want vendors and security companies co-operating, not competing at that phase.

[0] Cloudflare - Can You Get Private SSL Keys Using Heartbleed? http://blog.cloudflare.com/answering-the-critical-question-c...

[1] see https://github.com/rapid7/metasploit-framework/blob/master/m...

2
tptacek 9 hours ago 4 replies      
3
danielpal 8 hours ago 2 replies      
The important thing to know here is that you not only have to change your current certs, you ALSO HAVE TO REVOKE THE OLD ONE.

If you only change your current cert to get a new key but don't go through the revocation process for the old certificate, then someone who managed to get the old one can still use it for a MiTM attack - as both certs would be valid to any client.

4
d0ne 8 hours ago 1 reply      
We have reached out via Twitter to this individual to coordinate the delivery of the $10,000 bounty we offered. If anyone is already in contact with them, please direct them to https://news.ycombinator.com/item?id=7572530
5
guelo 6 hours ago 2 replies      
"We rebooted the server at 3:08PST, which may have contributed to the key being available in memory, but we cant be certain.". https://www.cloudflarechallenge.com/heartbleed

That doesn't make sense to me; it seems like the key needs to be in memory all the time, or at least during every session.

6
tomkwok 7 hours ago 3 replies      
* From https://www.cloudflarechallenge.com/heartbleed *

So far, two people have independently solved the Heartbleed Challenge.

The first was submitted at 4:22:01 PST by Fedor Indutny (@indutny). He sent at least 2.5 million requests over the span of the challenge; this was approximately 30% of all the requests we saw. The second was submitted at 5:12:19 PST by Illkka Mattila using around 100 thousand requests.

We confirmed that both of these individuals have the private key and that it was obtained through Heartbleed exploits. We rebooted the server at 3:08 PST, which may have contributed to the key being available in memory, but we can't be certain.

7
ademarre 8 hours ago 2 replies      
https://twitter.com/eastdakota/status/454792635279220737

Pic of the CloudFlare team reviewing the attack. Ten guys crowded around one monitor.

8
aboodman 6 hours ago 0 replies      
It probably took longer to compose that blog post than it took @indutny to disprove it.
9
nodesocket 8 hours ago 1 reply      
Love to see a post on how it was done and the tools he used.
10
tszming 6 hours ago 1 reply      
So @indutny sent at least 2.5 million requests - should we start to think more about practical prevention techniques?
11
wrs 8 hours ago 0 replies      
Well, so much for wishful thinking.
12
capcah 7 hours ago 0 replies      
I am not sure how those guys did it, but I was talking to a friend of mine today, and I guess that it had something to do with forcing the server to use its private key to check information sent to it. Then you use the Heartbleed bug to intercept the intermediate forms of the information you sent to be decrypted/authenticated. Since you know the plaintext, the ciphertext and the intermediate forms, it should be possible to recover the key.

As I said, I am not sure that is right or that it was the method used to exploit Cloudflare, as I didn't have the time nor the knowledge of the OpenSSL implementation to test it out. I am just throwing my guess out there before the official exploit comes out.

edit: formatting

13
specto 7 hours ago 1 reply      
Considering he just pulled a shadow file as well, it's not pretty.
14
badusername 9 hours ago 1 reply      
So this does mean that I need to change my passwords on every damn site on the list? Oh bollocks, those passwords were a work of art.
15
tectonic 9 hours ago 2 replies      
Ah crap.
16
diakritikal 7 hours ago 0 replies      
Hubris is ugly.
17
yp_maplist 6 hours ago 0 replies      
IMO, CloudFlare is lame. Kudos to this guy for reminding me just how much so.
18
bitsteak 5 hours ago 0 replies      
Why did anyone need this challenge in the first place? Couldn't someone have just ASKED a good exploit developer what they would do and what the impact is? No, I guess we're all up for wasting people's time and creating potential false negatives.
2
How we got read access on Google's production servers detectify.com
951 points by detectify  1 day ago   177 comments top 24
1
mixmax 23 hours ago 5 replies      
In large production environments it's almost impossible to avoid bugs - and some of them are going to be nasty. What sets great and security conscious companies apart from the rest is how they deal with them.

This is an exemplary response from Google. They responded promptly (with humor no less) and thanked the guys that found the bug. Then they proceeded to pay out a bounty of $10,000.

Well done google.

2
numair 1 day ago 6 replies      
... And this is why you want to discontinue products and services your engineers can't be motivated to maintain. Amazing.

This should scare anyone who has ever left an old side project running; I could see a lot of companies doing a product/service portfolio review based on this as a case study.

3
msantos 22 hours ago 0 replies      
A few webcrawlers[1] out there follow HTTP redirect headers and ignore the change in scheme (this method is different from OP's but achieves the same goal).

So anyone can create a trap link such as

    <a href="file:///etc/passwd">gold</a>
Or

   <a href="trap.html">trap</a> 
Once trap.html is requested, the server issues a header "Location: file:///etc/passwd".

Then it's just a matter of sitting back and waiting for the result to show up wherever that spider shows its indexed results.

[1] https://github.com/scrapy/scrapy/issues/457
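
(For what it's worth, guarding against this on the crawler side only takes a few lines - a sketch using the requests library as an example client, names and URL made up:)

    from urllib.parse import urlparse
    import requests

    resp = requests.get("http://example.com/trap.html", allow_redirects=False)
    if resp.is_redirect:
        location = resp.headers["Location"]
        # follow redirects manually and refuse any scheme other than plain http(s)
        if urlparse(location).scheme not in ("http", "https"):
            raise ValueError("refusing to follow redirect to " + location)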

4
raverbashing 22 hours ago 5 replies      
This is another reason not to use XML, plain and simple

It's too much hidden power in the hands of those who don't know what they're doing (loading external entities referenced in an XML document automatically? what kind of joke is that?)

5
chmars 19 hours ago 2 replies      
The guys behind this report have an interesting pricing model: Pay what you want!

https://detectify.com/pricing

The pricing model has apparently worked so far. Are there any active users of Detectify here who can share their experience?

6
cheald 1 day ago 1 reply      
XML legitimately scares me. The number of scary, twisted things it can do makes me shudder every time I write code to parse some XML from anywhere - it just feels like a giant timebomb waiting to go off.
7
halflings 20 hours ago 1 reply      
I hope it doesn't go unnoticed that the guys who discovered this vulnerability created a really great product, Detectify:

https://detectify.com/

They also discovered vulnerabilities in many big websites (Dropbox, Facebook, Mega, ...). Their blog also has many great write-ups: http://blog.detectify.com/

8
njharman 19 hours ago 0 replies      
Takeaway: XML should not be used (at least not as user input). It is too powerful, too big. It is much too hard and expensive to test and validate.

Input from potentially malicious users should be in the simplest, least powerful of formats. No logic, no programability, strictly data.

I'm putting "using XML for user input" in same bucket as "rolling your own crypto/security system". That is you're gonna do it wrong, so don't do it.

9
raesene3 1 day ago 3 replies      
Interesting to see this hit big companies like google. The problem, I think, stems from the idea that most people treat XML parsers as a "black box" and don't enquire too closely as to all the functionality that they support.

Reading the spec which led to the implementations can often reveal interesting things, like support for external entities.

10
NicoJuicy 23 hours ago 0 replies      
Offtopic: the reply was generated with Google's internal meme generator. I read about it here: https://plus.google.com/+ColinMcMillen/posts/D7gfxe4bU7o

Actually dug it when I read it a few years ago, and it's awesome knowing that it was probably used for this reply :)

11
NicoJuicy 1 day ago 0 replies      
A job well done. This is actually impressive and quite interesting to see after what you are searching for (afterwards it seems logical :))
12
peterkelly 20 hours ago 1 reply      
I never understood why internal or external entities were included in XML. Can anyone explain what useful purpose they serve?
13
dantiberian 23 hours ago 1 reply      
Very cool hack. Is $10,000 around the top end of what Google will pay out? This seems like quite a serious bug as far as they go.
14
enscr 23 hours ago 4 replies      
Is there a startup that can help automate custom attacks on websites? Like guide the webmaster to look for holes in their setup. I'm guessing some security expert can do a good job educating new businesses on how to prepare for the big bad world.
15
kirab 19 hours ago 1 reply      
I think they couldn't read /etc/shadow, so it's not that bad at first. But then they could surely access some configuration file of the application itself, probably containing DB creds and of course more information which helps to find more vulns.
16
plq 20 hours ago 0 replies      
For those who'd like to know more about xml-related attack vectors, here's a nice summary: https://pypi.python.org/pypi/defusedxml
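
A minimal sketch of the drop-in usage (the evil string here is just the classic XXE example, not anything from the article):

    import defusedxml.ElementTree as ET

    evil = '<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]><r>&xxe;</r>'

    try:
        ET.fromstring(evil)       # drop-in replacement for xml.etree.ElementTree.fromstring
    except Exception as exc:      # defusedxml raises EntitiesForbidden here
        print("parse refused:", exc)
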
17
antocv 23 hours ago 4 replies      
So, when you have read access to Google's prod servers, what else would be fun to do besides reading /etc/passwd?

Getting the source?

18
ajsharp 15 hours ago 0 replies      
Cheers to google for properly compensating these guys for their findings.
19
yummybear 16 hours ago 0 replies      
You should be aware that pixelating or blurring screenshots is likely not sufficient to ensure that the contents are unrecoverable.
20
h1ccup 23 hours ago 0 replies      
Well done. I had to deal with some similar issues with my own project, and they weren't legacy code either. This should push me to go through some of my code again.
21
pearjuice 14 hours ago 0 replies      
That must have been a nasty call from Sergey to NSA headquarters earlier this week.

"Sir, I am sorry to inform you that another backdoor has been found. We will introduce two more as agreed upon in our service level agreement."

22
sebban_ 19 hours ago 0 replies      
Awesome work! The bounty is a bit low though.
23
blueskin_ 21 hours ago 0 replies      
I wonder how many of the blurred entries were NSA.
24
4ad 23 hours ago 19 replies      
Just $10k?

This sells for at least 10 times more on the black market. Why would one rationally choose to "sell" this to Google instead of on the black market?

Some people don't break the law because they are afraid to get caught, but I like to believe that most people don't break the law because of the moral aspect. To me at least, selling this on the black market poses no moral questions, so, leaving aside "I'm afraid to get caught", why would one not sell this on the black market? Simple economic analysis.

Very serious question.

3
Transcribing Piano Rolls, the Pythonic Way zulko.github.io
198 points by gcardone_  12 hours ago   23 comments top 11
1
msvan 1 minute ago 0 replies      
What a fascinating convergence of math, music and Python. Many people I meet who don't specialize in math but have taken university-level courses in it seem to remember the Fourier transform as a highlight, probably because of its many applications.
2
eliteraspberrie 8 hours ago 1 reply      
The faster way of doing this:

    def fourier_transform(signal, period, tt):
        """ See http://en.wikipedia.org/wiki/Fourier_transform
        How come Numpy and Scipy don't implement this ??? """
        f = lambda func: (signal*func(2*pi*tt/period)).sum()
        return f(cos) + 1j*f(sin)
is using the FFT.

What you want is the power spectral density in the discrete case, called the power spectrum. It can be calculated by multiplying the discrete Fourier transform (FFT) with its conjugate, and shifting. NumPy can do it. Here is an example: http://stackoverflow.com/questions/15382076/plotting-power-s...
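
Or, inline, something like this (a sketch with a made-up 50 Hz test signal; rfft is used so no shifting is needed):

    import numpy as np

    fs = 1000.0                               # sample rate in Hz (made up)
    t = np.arange(0, 1, 1 / fs)
    signal = np.sin(2 * np.pi * 50 * t)       # 50 Hz test tone

    X = np.fft.rfft(signal)
    power = (X * np.conj(X)).real             # power spectrum = FFT times its conjugate
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

    print(freqs[np.argmax(power)])            # ~50.0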

3
selmnoo 9 hours ago 0 replies      
That was a lovely read, thank you so much for writing and sharing it.
4
kbd 10 hours ago 2 replies      
I love the abundance of Python. For those unaware, even the youtube-dl command line utility he used to download the video is written in Python.
5
nanidin 10 hours ago 1 reply      
Interesting question - is the author's transcription a derivative work of the video? And if so, is he actually allowed to release his transcription into the public domain (without the permission of the author of the video)?
6
rfleck 9 hours ago 2 replies      
See a master at work making original rolls at QRS: http://www.youtube.com/watch?v=i3FTaGwfXPM

It was a fun place to see in the 70's after watching my father rebuild our player piano.

7
analog31 5 hours ago 0 replies      
I think this is a nice solution because it takes care of the hardware side of things by making use of a garden variety video camera.
8
elwell 11 hours ago 1 reply      
Really fantastic hack. Now try transcribing with just the audio track.
9
smortaz 11 hours ago 0 replies      
Fantastic. With your permission, I'd love to use this to demo Python!
10
peapicker 11 hours ago 0 replies      
This is really nice, thanks for sharing it with us.
11
evidencepi 8 hours ago 0 replies      
Nice post, thanks for sharing!
4
From L3 to seL4: What Have We Learnt in 20 Years of L4 Microkernels? [pdf] nicta.com.au
82 points by pjscott  7 hours ago   9 comments top 5
1
Rusky 4 hours ago 1 reply      
Microkernels are a nice step toward security, but they're a concept ahead of current hardware design and they don't really bring the flexibility typically promised.

Services (virtual memory/swapping, file systems, the network stack, etc.) in microkernel systems typically can't be modified or replaced by applications any more than in monolithic kernels, which is probably part of why microkernels have stayed in the realm of embedded systems, etc. where you have control over the whole system.

Exokernels bring the flexibility that microkernels don't, by moving the security boundary down the stack. Instead of moving services into trusted user-level processes, they manage protection at the level of hardware resources rather than services. This enables those services to be in untrusted shared libraries that can be securely modified or bypassed on a per-application basis.

Thus, instead of the lingering "eh, it's a little slower but we can ignore that," exokernels provide much better opportunities for optimization and tend to be much faster. For example, a database could choose to discard and regenerate index pages rather than swap them out to disk and back; a file copy program could issue large, asynchronous reads and writes of all the copied files at once; a web server could use its knowledge of HTTP to merge packets, or co-locate files from web pages to improve disk seek time.

Further, exokernels and microkernels are not mutually exclusive; they are rather orthogonal concepts (you could move an exokernel's drivers into user space processes if you wanted). If we had hardware that were more conducive to a microkernel design, for example with direct process switching rather than going through the kernel (32-bit x86 did this with task gates, but they weren't used much and were abandoned with 64-bit), this would probably be the optimal design, rather than a purist microkernel approach. Incidentally, the in-development Mill CPU design does this very efficiently, as well as a few other things that are good for both micro and exo-kernels.

2
kbenson 7 hours ago 2 replies      
Oh, the conundrum that is the very technical HN story. Do I dive in and devote the time to learn whether this paper is as interesting as it seems on the surface, or do I wait for some explanatory posts or even a TL;DR summary to help me decide?

Edit: The paper helpfully provides much of this itself, with boxed section footers with the change from then to now in how that component is handled. It makes for an interesting way to skim and zero in on sections you may find of interest.

e.g. 4.2 Lazy scheduling ends with "Replaced: Lazy scheduling by Benno scheduling"

3
jacobolus 2 hours ago 0 replies      
The OOTB/Mill people are apparently working on porting L4/Linux to their architecture: http://millcomputing.com/topic/security/#post-802

Their machine-supported security features will be very interesting to see realized.

4
greenyoda 6 hours ago 0 replies      
This article is a PDF document. Here's the abstract:

The L4 microkernel has undergone 20 years of use and evolution. It has an active user and developer community, and there are commercial versions which are deployed on a large scale and in safety-critical systems. In this paper we examine the lessons learnt in those 20 years about microkernel design and implementation. We revisit the L4 design papers, and examine the evolution of design and implementation from the original L4 to the latest generation of L4 kernels, especially seL4, which has pushed the L4 model furthest and was the first OS kernel to undergo a complete formal verification of its implementation as well as a sound analysis of worst-case execution times. We demonstrate that while much has changed, the fundamental principles of minimality and high IPC performance remain the main drivers of design and implementation decisions.

5
harry8 1 hour ago 0 replies      
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."--Richard Feynman

All of us need to learn this, re-learn it, revisit it, internalise it, live it and breathe it every day. I'm sure I could do better at attaining such an ideal. So too can these gentlemen.

5
This Tower Pulls Drinking Water Out of Thin Air smithsonianmag.com
35 points by ColdHawaiian  3 hours ago   11 comments top 6
1
shadowmint 10 minutes ago 0 replies      
Here's a link that describes the technology behind it: http://newsoffice.mit.edu/2013/how-to-get-fresh-water-out-of...

...and one showing the structure in detail: http://www.architectureandvision.com/projects/chronological/...

TLDR: Great for pulling moisture out of the air if the air already has a really high moisture content. Pretty much useless in other circumstances.

2
ghshephard 1 hour ago 1 reply      
Something doesn't seem internally consistent in this article. We read about a device that can be locally assembled and can draw up to 25 gallons of water a day (I would have been impressed with 1 gallon a day) - and then, "In all, it costs about $500 to set up a tower...His team hopes to install two Warka Towers in Ethiopia by next year and is currently searching for investors who may be interested in scaling the water harvesting technology across the region."

Why would "two Warka towers" be a target for a year, when, on the surface, reading this - it would make sense to go install a thousand of them and see how they played out over a year. If this device really could pull, even 10 gallons of water a day for $500 cost, it would have zero problem attracting funding on that kind of tiny pilot scale.

3
davidw 48 minutes ago 0 replies      
So does this one:

http://starwars.wikia.com/wiki/File:LukeMoistureVaporator-MO...

The ones in the article look like they're cheaper, possible to construct with local materials, and importantly: more user friendly - you don't even need a droid that understands the binary language of moisture vaporators.

4
tonylemesmer 2 hours ago 2 replies      
Has anyone tested the long term viability? Contamination, dust, mildew, flies etc.? Seems like a good idea but I venture the water would need further processing. Still, looks like a better starting point than where many communities are now.
5
riffraff 1 hour ago 0 replies      
notice the design/idea is, AFAICT, from 2012

http://www.architectureandvision.com/projects/chronological/...

6
patchhill 37 minutes ago 1 reply      
Are these not powered? How do these things overcome the laws of thermodynamics?
6
Learn CSS Layout learnlayout.com
83 points by ScottyE  7 hours ago   2 comments top 2
1
rafeed 26 minutes ago 0 replies      
Well done. Everything is accurately explained using simple terminology. I'd love to see this expand beyond just layouts. CSS is overwhelming to beginners, but this is dead simple while still delving into deeper, more complex topics.
2
subir 35 minutes ago 0 replies      
This was on HN some time last week: https://news.ycombinator.com/item?id=7521180

Good site, though.

7
Experiment: Eliminating Toast Sweat murrayhurps.com
60 points by MurrayHurps  7 hours ago   45 comments top 16
1
sbierwagen 5 hours ago 1 reply      
Infrared thermometers like the one used in this article are (typically) calibrated for an emissivity of 0.95. If you use one on a material with a lower emissivity, it'll give inaccurate readings. This can be cheaply solved by sticking a piece of electrical tape on the surface, and measuring that.

http://en.wikipedia.org/wiki/Emissivity

2
makmanalp 1 hour ago 1 reply      
The main problem here is that the moisture in the toast has not yet completely exited. So what I do is I toast at a lower temperature (if your device allows this) and / or leave the toast in the toaster for a few minutes after it's done. You'll notice that if you put your hands on top of the toaster after it's done, water is still evaporating like crazy. Imho this works better than a cooling rack or putting the bread on its side. Then you can reheat a bit if you want to, too. Is it weird that we've all thought so much about this?
3
zdw 5 hours ago 1 reply      
Were tests done beyond the 53C temperature, to see if this resulted in a total elimination of toast sweat?

I realize that this may make the plate "too hot to handle", but I'd gladly eat breakfast with one hand in an oven mitt if it would result in better toast.

4
csmattryder 5 hours ago 2 replies      
I'd enjoy a follow-up where they test different materials against the 'keep-toast-dry' 53 degree temperature of the clay plate.

Would something like neoprene/polystyrene plates (as insane as it sounds) provide a solution to slightly soggy toast at a lower temperature?

It's one thing to find an optimal temperature, but a completely different beast to find a practical solution for it!

5
eik3_de 53 minutes ago 0 replies      
I have never had problems with toast sweat: Just stand the two slices upright on the plate so that they look like a T from above. It's very easy and stable and after a minute the steam is out so that they won't sweat anymore.

I never measured it, but I had the feeling that the A form (looking from the side) slightly interferes with the rising steam.

How do you do it?

6
DannyBee 3 hours ago 1 reply      
Today we learn about dew points?

This is no different than why you have vapor barriers in certain climates.

Without trying to sound dismissive, I was not aware there was a lot of experimentation necessary here to figure out the temperature at which the water would stop condensing on the plate again.

Still a fun article, of course :)

7
Jedd 6 hours ago 4 replies      
For a pseudo-scientific analysis ... it's a shame the units of measurement weren't defined up front.

(I live in one of the 190+ countries that use Celsius, but I know that 99.8% of things on the Internets are written by people from just one of those three other countries that doesn't. I'm also aware that for reasons that are a bit bewildering, everyone in those 190+ countries politely goes out of their way to make it clear that we're talking metric, because we're now used to the idea that if people don't mention units then they're probably from North America, and consequently are almost definitely using gallons (US, not UK gallons), miles (US, not UK miles), Fahrenheit and other deprecated units. We should probably stop being so considerate.)

8
2muchcoffeeman 5 hours ago 2 replies      
Why not use a cooling rack? Then air can circulate under the toast, eliminating the moisture and removing the need to preheat a plate.
9
jordan0day 5 hours ago 1 reply      
A plate heated to Murray Temperature seems like it would be too hot to handle comfortably? Did you find this to be an issue?
10
switch007 3 hours ago 0 replies      
Toast sweat is on par with pizza sweat. As a pizza retains its heat for much longer than toast, I usually just put it on a rack or slide a large knife underneath to prop it up.
11
cgtyoder 5 hours ago 2 replies      
So - why is Toast Sweat bad? That question was never addressed.
12
leephillips 5 hours ago 1 reply      
I love this! I am also regularly saddened by the phenomenon of toast sweat. I looked in vain, though, for an RSS (or Atom) feed so I could follow your site, but didn't see one. Did I miss it?
13
zalzane 5 hours ago 0 replies      
does the suggested temperature also eliminate toast sweat in different varieties of bread?
14
jamespitts 5 hours ago 0 replies      
C'mon, no need to experiment. Just make a toast teepee -- stand those suckers up against each other and let them evaporate before making that sammich.
15
retroencabulato 4 hours ago 0 replies      
Can't you just use a rack?
16
kostyk 5 hours ago 0 replies      
Important information.
8
Japanese railway operator to license maglev tech to US for free nikkei.com
77 points by nkurz  9 hours ago   22 comments top 5
1
arrrg 6 hours ago 3 replies      
The technology has existed for so long but never really went anywhere. After half a century there are 30km in Shanghai, that's all.

Maybe it will work out better in the future.

The biggest problem I see is that the gains of this technology would be rather small anywhere with conventional high-speed rail or existing infrastructure that can be upgraded to that. With Maglev everything needs to be built from scratch. That's just not very attractive for any place that has consistently expanded its rail network ever since the height of the industrial revolution.

And for what? A 180km/h faster train? I personally very much want that, sure, but is it worth it for anyone building it? I'm pretty sure I know the answer in Western Europe (though I can always hope that maglev has a future there); I'm not so sure when it comes to the US.

2
Istof 6 hours ago 0 replies      
Japan also offers loans to the USA for building maglev trains: http://www.telegraph.co.uk/news/worldnews/asia/japan/1055533...
3
jlj_20 6 hours ago 3 replies      
What about Elon Musk's hyperloop venture - are they in the running?
4
ChrisNorstrom 5 hours ago 2 replies      
Japan's maglev technology (EDS) is very different from Germany's maglev technology (EMS) that's being used in Shanghai. One pulls the train up electromagnetically to a steel track and the other uses permanent magnets to push the train away from the track. Both are still going nowhere. And it's for good reason.

Both are extremely expensive per mile compared to HSR (high speed rail). Sure, it's faster than high speed rail, but a technology doesn't make it because it's better; it makes it because it's more practical to implement.

Service is not as tried and trusted as HSR. Germany's test facilities were torn down after the Shanghai maglev was built, and Japan's maglev hasn't been expanded. On an emotional human level it just doesn't feel trustworthy. If you're not growing you're dying.

Both technologies are proprietary, whereas HSR has more companies and manufacturers to choose from.

HSR could probably compete with maglev speeds by building a wider gauge track, using larger wheels, and implementing more aerodynamic designs to reduce drag and power consumption.

Germany and Japan are pitching their maglev trains while they themselves aren't avid users of them.

=== Lessons ===

If you want something to succeed sometimes you have to set it free.

If you're not expanding or growing you're dying.

If you want people to use your solution, instill trust by investing in and using your own solution.

5
markbao 7 hours ago 1 reply      
Japanese railway operator to license maglev tech to US for free
9
Windows is not a Microsoft Visual C/C++ Run-Time delivery channel msdn.com
67 points by nkurz  8 hours ago   49 comments top 8
1
mattgreenrocks 7 hours ago 5 replies      
A lot of 'real' (cough) hackers like to look down on MS, but Raymond Chen is one of those guys I'd hate to get into a technical argument with: he has experience, is extremely sharp, and is very sarcastic. Those three attributes make The Old New Thing my favorite MS blog even if I don't write Win32 anymore.

If anything, Windows' level of backward compatibility is a giant cautionary tale: enable poor behavior from devs, and it will proliferate. You cannot trust app devs to do the right thing; they need to be forced to, whether by gatekeepers at app stores or OS restrictions. It is a tragedy of the commons. Whether it's inane programs inserting themselves into the systray, 'preloaders' for bloated apps (which slow startup), browser extensions, Explorer add-ons, or other garbage, app devs still seem to do a fantastic job of gunking up a Windows install.

This is why it's a bit of a blessing that webapps can't do much; because the more powerful they become, the more annoying and inane they will be.

2
rossy 7 hours ago 2 replies      
The discussion in the comments is interesting. MinGW, the compiler for VLC, LibreOffice and most other FOSS projects on Windows, does exactly what Raymond says not to do. In fact, the entire purpose of MinGW is to make GCC able to target msvcrt.dll. Developers and users love this, since they don't have to distribute and install the CRT with the program. Developers following the GPL aren't even allowed to distribute the CRT with their program, so they have to use one already present on the system.

Though Raymond is correct. The MinGW guys should probably "write and ship their own runtime library," or at least use the msvcrt from ReactOS or Wine. That should make it possible to statically link to it.

Having said that, Microsoft probably won't change their msvcrt.dll in a way that breaks MinGW software, since their users will complain that they broke VLC.

3
gilgoomesh 6 hours ago 2 replies      
Why can't Windows officially include standard versions of this library? You know, like Windows already does with .NET versions since Vista, or every other OS does with libstdc++ or libc++? Forcing every C/C++ program to bundle its own MSVCRTXX.dll is pretty silly.
4
malkia 6 hours ago 1 reply      
71, 80, 90, 100, 110, 120 - did I miss any of these? - 6 different "C" runtime versions (apart from side-by-side sub-versions) for compilers released in the span of 10 years.

It's not that I like the MSVCRT runtime. It's just that I have to target it. Any popular commercial product that has some form of plugin architecture through DLLs (Autodesk for example) would more or less require one to compile its own plugins with the exact version the main application was compiled with.

It's a bit of a strange moment - one developer cries that OpenSSL shouldn't have used its own malloc implementation, and then another cries: don't expose a malloc/free interface (but do say your_api_malloc, your_api_free), and this way you can target any "C" runtime.

Now these are two completely different things, but not so much. What if, say, OpenSSL used the "malloc" runtime - what version of MSVCRT.DLL would they have targeted? Does anyone really expect to target all these different versions and all these different compilers, when you can't even find the free versions through MSDN now?

(Now I'm ignoring the fact that you can't easily hook malloc and replace it with a "clear-zero after alloc" function, but that's just a detail.)

What I'm getting at is that there are too many C runtimes - hell, DirectX was better!

I only wish MS actually somehow made MSVCRT.DLL the one and only DLL for "C" (C++ would be much harder, but it's doable).

5
rwallace 6 hours ago 2 replies      
Microsoft C++ supports static linking with the standard library, which you should be using for release builds. That way, your program will always be using the exact version of the standard library that it was tested with, and it's guaranteed not to interfere with anything else on the target system.
6
userbinator 7 hours ago 5 replies      
Sorry, but I'm not going to link to a huge convoluted mess of variously versioned DLLs just to get standard C library functions. Maybe the situation is different with C++ (probably due to no real ABI standard), but the C library functions shouldn't change since they were standardised. If an application breaks because the internals of a library were changed, that's nobody's fault but the application's.
7
csense 3 hours ago 1 reply      
Does Linux suffer from this problem?

I think the answer is "no" because Linux distros generally recompile the world with each new major standard library version. If any C standard library gurus are reading this, feel free to chime in!

8
jevinskie 4 hours ago 0 replies      
Could symbol versioning, like ELF has, help the situation? I know that glibc has made backwards incompatible changes and they up the symbol version when they do so. I don't know if that handles changes in struct sizes though.
10
Next Attenborough documentary being filmed for Oculus Rift wired.co.uk
34 points by shaneofalltrad  6 hours ago   12 comments top 8
1
rwmj 1 hour ago 0 replies      
This is an interesting Oculus app using 6 GoPro cameras to capture a vertical flight in a small drone copter of some sort:

https://share.oculusvr.com/app/hiyoshi-jump

It's kind of interesting to "play" with. It's an absolutely massive download however because of all the captured video that is necessary to allow the user to look in any direction.

2
aresant 4 hours ago 0 replies      
I subscribe to r/oculus, where this is the big thread, and the top comment is from the team behind the idea; it slightly debunks the claim while suggesting they'd like to Kickstart it:

Hey guys, I'm from Atlantic Productions and this whole article is about 60% correct. We're currently working with the rift and we're really excited by it. We've got a couple of things in development at the moment, maybe three things in fact. They're all potentially fantastic projects but as you all know it's quite a difficult thing right now to fund development of these things. We're considering putting out a kickstarter for a project but we'd only put it out there if we knew you guys were interested. So as a very simple show of hands kind of thing, if we were to make an immersive documentary, where you are in the scene, would you be interested in helping fund that in a kickstarter? Would love to hear your thoughts and suggestions.

http://www.reddit.com/r/oculus/comments/22rqvu/next_attenbor...

3
etiam 34 minutes ago 0 replies      
With Oculus' now rather unsavoury connections I really hope documentaries like these will be made available in some format that's easily portable to other VR devices. As for the plans to support VR at all: Great. I hope this is going to work out well. I can't think of many recordings more deserving of an immersive visual experience than those of Sir David Attenborough.
4
machbio 3 hours ago 1 reply      
After bashing Facebook for acquiring Oculus Rift, finally there is something to show the HN people - thinking beyond the previously imagined use cases will help bring the technology closer to consumers. The Oculus Rift use case of hardcore gaming is still alive, and Facebook acquiring it is a good thing in that people like David and his team can invest time to bring their content to the Oculus Rift. Thanks David for showing us new uses for the Oculus Rift.
5
badsock 4 hours ago 2 replies      
There's something I don't understand: I've heard that one of the keys for avoiding VR motion sickness is having both rotational and positional head tracking.

A naive interpretation would be that for that to be possible from a prerecording, you'd have to have a 360 degree recording from the perspective of each cubic millimeter within the volume of space in which you'd expect someone's head to move.

Of course that's impossible, and there are certainly ways to interpolate from fewer viewpoints, but I've not heard of any that sound like they've convincingly solved the problem. Is there one?

6
vdaniuk 4 hours ago 0 replies      
I want to express my great respect for David Attenborough and love for his documentaries. They are absolutely incredible. For me, this is killer content for the Oculus Rift, even if they are owned by Facebook now.
7
conchy 4 hours ago 1 reply      
Attenborough has been pushing the envelope of new visual technologies for a LONG time. Bravo to him for keeping at it. I can't wait to see it.
8
shaneofalltrad 6 hours ago 0 replies      
His first documentaries gave me a lifelong passion for nature and animals. I hope they do it right.
12
Let Me Google That For You Act loc.gov
88 points by cwisecarver  11 hours ago   28 comments top 10
1
primitivesuave 9 hours ago 3 replies      
NTIS has compensated for its lost revenue by charging other Federal agencies for various services that are not associated with NTIS's primary mission.

So when a federal entity isn't providing enough revenue to the federal government, it can compensate for it by charging other federal entities. This seems like a very convenient way for government businesses to misreport their actual income - I'd like to see how much of the NTIS revenue actually came from the real market.

2
mmmmax 9 hours ago 4 replies      
TL;DR: This bill attempts to disband The National Technical Information Service (NTIS), which collects and sells information and research. The bill asserts that the agency is no longer important, since you can basically just Google it now.
3
waterlesscloud 9 hours ago 2 replies      
Oh those wacky congressional staffers.

Also, they should Google the creation date of the internet. :-)

"(2) NTIS was established in 1950, more than 40 years before the creation of the Internet."

4
spankalee 9 hours ago 3 replies      
I wonder if this is such a good idea, since we all know that everything on the internet is true.

It takes some real skill to find reliable, accurate, up-to-date information on the internet. Could NTIS still serve a purpose by Googling for more critical research? Or maybe the idea is that that job is for the Congressional Research Service.

5
outside1234 9 hours ago 2 replies      
the capital G on www.Google.com is a nice touch that makes you feel extra good about the internet savvy-ness of our representatives.
6
nashashmi 7 hours ago 0 replies      
I am very concerned about what will become of the archives. A lot of articles stored by NTIS are not easily available on the internet. Some of those articles were produced by old disbanded research offices of the U.S. Government (and they were really good articles). However, my mind cannot recall what offices they were and what papers they had produced to be able to give you a proper example.
7
ErikTheViking 8 hours ago 1 reply      
My issue with this bill is that it ignores the crucial function of NTIS as a library. That is, an official source of these documents with a responsibility to maintain, categorize, and retain them. Various other public sites have no such responsibility.
8
talder 8 hours ago 0 replies      
> Effective on the date that is 1 year after the date of the enactment of this Act

So government documents are the inspiration for the way the Krang talk in the new TMNT cartoons...

9
RyJones 9 hours ago 0 replies      
It would be nice to pull the plug; color me skeptical it will ever happen, though.
10
seigel 7 hours ago 0 replies      
Are they advocating that we should not pay for movies and such as well? Read:

"No Federal agency should use taxpayer dollars to purchase a report from the National Technical Information Service that is available through the Internet for free."

It doesn't say 'legally' anywhere in there.

Anyhoo....

13
StartSSL, please revoke me - my private key has been compromised tonylampada.com.br
33 points by tonylampada  1 hour ago   23 comments top 7
1
techsupporter 42 minutes ago 2 replies      
Classic Big Lebowski moment: You're not wrong, you're just an asshole. Their stance is entirely correct. The customer used a file that StartCom provided in software that turns out to have had a security flaw. That's neither StartCom's problem nor liability. They didn't say "use this certificate with anything other than OpenSSL; you'll be sorry if you use OpenSSL," nor could they have foreseen it.

On the other hand, showing a cold unwillingness to help when doing so is by far the above-and-beyond response doesn't engender good customer loyalty. It's also how StartCom operates. This is the same cert authority that insisted that I send them a full, unredacted copy of a mobile telephone bill with every "family plan" member's full call, SMS, and data history in order to call me. Otherwise, they could only "verify" me by sending a snail mail letter from Israel to South America (where I lived at the time). Independently-linked, outside verification databases operated by local government entities weren't sufficient.

At least they're consistent with their "rules are rules" processes.

2
Nanzikambe 32 minutes ago 2 replies      
To better understand the stupidity in leaving the power with the CI for SSL/TLS :

    $ gpg --gen-revoke $(whoami)@$(hostname -f)
    gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    How would you like to pay us?
      (1) Mastercard
      (2) VISA
      (3) Other
    Your selection?
Also, a dark cynical part of me wants to ask exactly what the business model behind "free" SSL certs is. If you're not paying them, someone else is?

3
pritambaral 1 hour ago 0 replies      
Why is the power of revocation in the cert issuer's hands? As long as the private key is private, I don't see how a malicious entity could add your cert to the revocation list.

In fact, a place in the revocation list should be reserved every time a cert is issued, possibly with a mechanism to trigger it with the private key. For example, if I send a message encrypted/signed with my private key to the revocation authority, they can decrypt/verify it with my public key, which they received when the CA issued my cert.
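
Something along those lines is easy to sketch (using the Python cryptography library and an RSA key here; the "revocation request" format is made up):

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    request = b"REVOKE example.com serial=1234"          # made-up request format

    with open("server.key", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    signature = private_key.sign(request, padding.PKCS1v15(), hashes.SHA256())

    # the CA already holds the matching public key from issuance and verifies:
    public_key = private_key.public_key()
    public_key.verify(signature, request, padding.PKCS1v15(), hashes.SHA256())  # raises if forged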

4
tonylampada 1 hour ago 2 replies      
So now it's official. They got the evidence that the certificate is compromised, yet they refuse to take action. If that's not a violation of CA policy, I don't know what is.
5
jrockway 15 minutes ago 0 replies      
Personally, I'd just send a patch to my favorite browser removing their certificate from the trust chain, and then send StartSSL an email with a link to that. Although I doubt anyone will merge your change, it sends a cynical message about how their entire business lives and dies at the whims of people with commit access to the list of trusted CAs.
6
lstamour 52 minutes ago 2 replies      
I've used these guys in the past and quite like them, but yeah, this is poor PR and I hope they get pulled for not paying attention to, you know, the overall security of the trust product they're selling. I don't want lock-in on my SSL cert but it's effectively a contract if I have to pay a fee to break it and the SSL padlock on my domain is held hostage if I don't. Maybe someone should open a bug report on Bugzilla...
7
bananas 1 hour ago 0 replies      
Money trumps security always.

PKI as it stands is fucked up.

14
Write Code Every Day ejohn.org
854 points by slig  1 day ago   208 comments top 64
1
munificent 1 day ago 19 replies      
I am a total convert to the "don't break the chain" idea. I started writing a book on game programming[1] about four years ago. At the time, I was working at EA, miserable, and highly motivated to have the book done so it could help pad my resume. I got a book deal (O'Reilly) then, when that fell through, another (Apress). I had a real writing schedule, and a very supportive wife, and I would work on it for hours at a time.

Then I left the game industry, moved across the country, and had another kid. Suddenly, motivation and time were scarce. I backed out of the book deal and basically put it on hiatus for two years. I still really wanted to finish it but it just wasn't happening.

About a year ago, I realized that if I didn't finish it soon, I never would. My familiarity with the domain was fading every day. I didn't want the project to be a failure, so I decided to try writing every day.

I didn't have a set goal each day, but I try to do around 30-45 minutes. That ends up being ~500 words of first draft, ~1,000 words of later revisions.

In the past 309 days, I've finished 12 chapters. That's 59,568 words, plus a few thousand more for intro sections. I've redesigned the site twice, set up a mailing list, gotten a business license, and done a bunch of other grunt work.

I'm about halfway through the very last chapter now (!). In less than a month, I should be able to say the book is done. (Though what I mean is that the manuscript is done, I'll be doing ebook and print versions after that.)

I absolutely could not have done this without working on it every day.

    [1]: http://gameprogrammingpatterns.com/

2
hawkharris 1 day ago 5 replies      
Telling a programmer to write code every day is a bit like asking an aspiring carpenter to swing a hammer: it's a necessary component of improving your skills and building things, but it is also a narrow, technical task that has limited value in isolation.

Having said that, programmers should spend at least as much time reading and thinking about code as they do writing it. You can write code for hours each day and do nothing but revert to the technologies and techniques that you find most comfortable.

3
tibbon 1 day ago 2 replies      
I was onboard with this school of thought for a while, right now it isn't my flow.

I work hard as-is teaching WDI at GA. I commit code frequently, but I also really want to focus more on work-life balance at the expense of getting more done. This summer, I'm taking two months off to do Burning Man and travel the country via motorcycle. During that time I expect no code to be committed. Do I feel bad about that at all? Not in the least bit, in fact I'm super excited to do it.

Currently, I try to not do much work on weekends. I like working hard during the week and then stepping away from the computer. I'll go and play music, ride my motorcycle, hang out with friends, travel, etc. The more time spent on my laptop on weekends feels like I'm missing out on things that matter strongly to me right now.

Now I am nowhere near the prolific coder that John is, and nowhere near his skill. I don't think he's wrong for doing it this way, but it isn't right for me and I'm glad that its producing results for him. I also go through periods of wanting to code daily, and other times where I'm ok with not coding for several days at a time.

To each their own. Also, Hi John!!! I haven't seen you since betahouse or you holding a Jelly at your place in Cambridge.

4
kyro 1 day ago 0 replies      
Do everything you want to excel at everyday.

One big problem I've learned with not working consistently at any one task is that after dropping and returning to a project, I find myself being familiar enough with areas I last touched that I want to speed through them to reach a point where I begin working on new ideas and concepts. But in most cases, those areas I left off at were the very reasons I jumped ship, either because they were too difficult or mind-numbing to wade through, leaving them incomplete/unlearned, and resulting in me having to take a few steps back to fully refresh myself before I can continue building, which leads to a lot of frustration and feeling like I'm wasting a ton of time.

5
Lambdanaut 1 day ago 4 replies      
That was a beautiful post.

Currently I'm in the complete opposite modus operandi. I don't do a lick of side-project work during the week, and on weekends I take modafinil (a wakefulness-promoting medication) and stay up nights on end to crack out as much as I can.

I get an INSANE amount done on the weekends that I have the energy to pull this off, but it's horrible for my health. The rest of the week I have anxiety about the coming weekend, and it completely throws off my circadian rhythm. Not to mention that I'm only able to pull this off perhaps once or twice a month.

I'll definitely be changing my work schedule to be more in-line with a daily habit. Being able to look back and see a lot of consistent work being done sounds way preferable to being able to look back at a few weekends of consistent insanity.

6
gdubs 1 day ago 9 replies      
This "don't break the chain" approach has worked extremely well for me, particularly during busy periods of high stress. I first learned about it in my college writing classes, where you're supposed to write something, anything, meaningless jibberish even, every single morning. Recently I read about Seinfeld using this approach to great successs. Every day he works on material, and puts a big, fat, "X" on the calendar.
7
iSnow 1 day ago 2 replies      
In the long run, this is unhealthy.

Yes, it makes you more productive, but what if you fall in love, get sick, have a child...? Then you feel guilty about not catering to your side projects and guilt breeds procrastination.

I learned how to break down work into small pieces and to finish one small piece and call it a day rather than leaving something half-working for the next day. Because of this, I've left projects dormant for 3 months and then picked them up again.

Granted, my side-projects are for-fun and not for-money, that makes it easier...

8
antonius 1 day ago 1 reply      
"No more zero days" is a good quote to live by. No matter how busy I get, I try to code something everyday.
9
LanceH 1 day ago 0 replies      
Zero days are great. Enjoy them without guilt. Don't fear going back to day 1. Make your decision each day whether you're going to enjoy a zero day or get something done, and long streaks can follow.
10
beat 1 day ago 2 replies      
On a slightly related note, I'm trying to impose a new bit of behaviorist training on myself. I've never been one to listen to music when working, although I love music (as in "member of two active bands, produced many albums" love of music).

So now, I'm trying to do work in album-length increments. Put on the headphones, pick an album, and work on one task all the way through it. No breaks, no interruptions. It's kind of a Pomodoro technique variant, a bit longer and with the headphones involved for extra habit and insulation from the outside world.

11
chris_va 1 day ago 1 reply      
I just want to caution folks, from experience, that it is easy to miss the forest for the trees if you are constantly trying to code.

I think the key takeaway here is that sticking to a plan is helpful, and that a coding heavy plan is a productive one. This is a great post for that.

I would argue that a good plan should include time off for reflection, and to avoid burning out. I have seen too many engineers burn out because they were convinced that working constantly was optimal for progress.

12
balou 1 day ago 1 reply      
Really? I'm so surprised to see so many "Awesome! Go for it" answers to this.

While I admire the dedication and focus it takes to keep up such a routine, I am certainly concerned by the quality of life and the narrow-mindedness of forcing oneself to code on a daily basis. What about days off? Going out with friends/family for a weekend or holidays? Would one suggest bringing your laptop so you can stick to it? This is madness to me...

I love to code, contribute to OS projects, and code for a living and for myself - but for nothing in the world would I even attempt such a thing.

Setting yourself goals is great and required to some extent, but on a proper schedule. Going to the gym 3 times a week can be achieved without getting a complex about the fact you didn't go there every single day - and yet you can substantially improve yourself. I don't envy those buff dudes that stick to it.

I'll stick to enjoying evenings with my wife, coding maybe 1 or 2 times during the weekdays, spending an extra day on more complex issues on the weekend, and resting for the last day. Just saying.

13
tieTYT 1 day ago 0 replies      
This is a good article. I believe this idea comes from Jerry Seinfeld^1.

Here's an article that really complements the submission: http://start.jcolemorrison.com/how-i-fight-procrastination/ It's titled "How I fight Procrastination" and gives advice on how to break up tasks into day-sized activities.

Finally, I want to say I personally disagree with the OP's 2nd point:

    2. It must be useful code. No tweaking indentation, no code     re-formatting, and if at all possible no refactoring.     (All these things are permitted, but not as the exclusive work of the day.)
I've noticed that when I'm really tired or "not feelin' it", sometimes I just want to do something that takes 10 minutes so I can keep the chain going. When I spend a day (i.e. 10 minutes) refactoring some code, I don't lose my motivation to work on my project tomorrow. It's breaking the chain that makes me lose motivation, and if I forced myself to write something "useful" on a day I don't feel like it, I might just end up breaking the chain instead. It's of the utmost priority to lower the bar to work on your project, and rule 2 is an obstacle to that. Plus, I take mild offense at the idea that refactoring is not considered useful :)

And, if I had this rule I think I'd avoid refactoring a lot of code that needs it. I'd spend more effort squeezing that square feature into that round hole if refactoring "didn't count".

    ^1: http://lifehacker.com/281626/jerry-seinfelds-productivity-secret

14
Sindrome 1 day ago 2 replies      
Why is it so noble and healthy to be a workaholic if you are a Software Engineer?
15
nicholassmith 17 hours ago 0 replies      
I love coding, but the idea of doing it every day would make me hate it so quickly. I love learning new technologies when I want to, or fiddling with a concept, but I want it to be because I want to do it and not because I feel obligated to. I've put myself in that situation before and all it did was bum me out and push me close to burnout. It works for some people, sure, but people should be doing things because they enjoy them.

Don't write code every day; do something you want to every day.

16
zachlatta 1 day ago 0 replies      
Great post! A similar approach has been working really well for me. I'm on target to hit a year of consecutive days of coding this weekend. GitHub: https://github.com/zachlatta

I had a bit of a lower baseline than the author. My rules are as follows:

1. Commit something, anything. Even if it's just fixing a typo in a readme or phrasing some documentation better.

2. You must commit every day.

3. Every contribution must be useful.

17
redmaverick 1 day ago 0 replies      
This essay[1] made a deep impression on me and I rationalize not working if I don't have a long chunk of time available to work on my side projects.

[1]http://www.paulgraham.com/makersschedule.html

Will need to change my attitude and get more done. Good piece.

18
zhemao 1 day ago 0 replies      
While Resig's dedication is admirable, I'd caution against applying his advice too broadly. He is doing it because he has side projects that he wants to complete. There's no reason to force yourself to code every day for coding's sake.

I've been doing a lot of side-project hacking the past three months, as evidenced by my Github activity graph (https://github.com/zhemao), which, admittedly, is not as impressive as Resig's. However, this week, I finished up my latest side project and found myself at a loss for new ideas. At first, I did feel a bit guilty about not doing any coding, since it had been a long time since I had nothing to work on. But then I realized that there's more to productivity than a nice contribution graph and sometimes it's good to take a step back in order to think, reflect, and get inspiration.

I'm currently reading through Patterson and Hennessy's "Computer Organization and Design" to learn more about computer architecture. I'd also like to practice my saxophone some more, start learning how to draw, help a friend who is still in college find a job, and expand my social life a bit. My Github account will still be there when I am ready to get back into it.

19
gbhn 1 day ago 1 reply      
I'm curious if 30 minutes is the lower bound on meaningful project time if the constraint of writing code is lifted. I've been thinking about what kinds of projects could be decomposed into 20 minute work units, allowing some work units to basically be all thinking (design, specs, figuring out how to split up a problem, etc.) and others to be more coding.

I suspect that there are many projects that could be decomposed like this, or even into 10-minute blocks, but that it'd be really helpful to have tools that make this more achievable -- ones that basically remind you of where you were and help with the what-do-I-do-next decision.

Does anyone have any experience with this kind of development process?

20
mdoerneman 1 day ago 0 replies      
It's crazy that I found this today... I just started something similar, except instead of coding every day I set a goal of 4 hours per week, and I use Beeminder to track my progress. Many of the benefits you listed are spot on, especially "the feeling of making progress is just as important as making actual progress." Now that I have children, coding for 8 hours on a Saturday just doesn't work. To reach my current goal of 4 hours per week, I plan on coding for 30 minutes in the morning or at night where I can, but I have also arranged with my wife one weeknight where I leave the house and go code at a coffee shop for 2-3 hours.
21
josephschmoe 1 day ago 0 replies      
Don't focus on the how. You can produce good code by:

1. Coding every day

2. Hackathons

3. Coding on certain days

4. However you want.

What matters, though, is how -you- work. Are you the sort of person who prefers to code as much as possible? Code every day. Do you enjoy getting a big thing done fast? Hackathons are for you. Do you have children, a life, or a job? You might want to code whenever you can instead of trying to force yourself into something that might not work for you.

22
darkFunction 1 day ago 1 reply      
Absolutely this. You can also use a tool like Gitstats (if you don't use Github) to track your progress. A lot of my code is written inside thirty minutes on the bus and tube on the way to work. Sometimes you might feel like there's no point even pulling the laptop out of your bag since the time window is too small- but every time you will surprise yourself with how much you manage to get done.

The best thing about the 'little and often' approach is how you get drawn into fixing something big just by starting to fix something small. Getting into The Zone for hours at a time is great and everything but honestly I'm starting to view the whole process as just clocking in keystrokes.

My gitstats (http://notes.darkfunction.com/gitstats/index.html) is showing commits on 56 of 85 days. A week of the remainder I was on holiday, and I tend to rebase quite a lot so actual days committed should be higher. But in that time I have written over 18,000 lines of code and removed over 6000. Almost a full iPhone application since January in my spare time, now onto the home stretch and couldn't be more pleased with the results.

23
coolsunglasses 1 day ago 1 reply      
This doesn't improve you much unless you're really new, or skilled at challenging yourself and finding new things to learn.

If the latter is true, do you really need advice?

Said differently: flow is the opiate of the masses.

24
steveklabnik 1 day ago 2 replies      
The real problem with "don't break the chain" is that once the chain does break, things collapse.

See my graph: https://github.com/steveklabnik

As you can see, I'm about to lose a ton of green. I'm at 87 days as my longest, but July 6, 2013 was brutal for me. I was actually flying, and had saved a small bit of work to do during a layover, but then I totally forgot.

Once that chain was broken, it was super easy to justify taking some time off...

25
endlessvoid94 1 day ago 0 replies      
Related: for the first time in my 15-year programming career, I've spent the past year or so doing more engineering management than actual coding, and it has noticeably improved my programming ability.
26
thewarrior 1 day ago 0 replies      
I decided I would write a short story every day. I ended up forcing myself to write random gibberish for a few days before I gave up.
27
chewxy 1 day ago 0 replies      
As much as I like this idea, I do wish John talked more about HOW he did it. "No More Zero Days" is a good thing as a target, but it's often unachievable.

For me at least, the context switch required between what pg calls the manager's schedule and the maker's schedule is so huge that it takes hours to cross that gulf (that's what I'm mostly switching between anyway).

Do you just sit down and force yourself to hammer out code?

28
Cthulhu_ 1 day ago 0 replies      
I'm guessing this doesn't apply to the average developer; after all, we write code for a living. I sure do. I don't usually do any coding when not on the clock; after all, I've already done 8ish hours of work by then (and if I'm lucky, most of it spent coding).
29
karangoeluw 1 day ago 1 reply      
Inspirational post.

Last year, I [1] set a goal to teach myself git by committing at least once every day for a month. At the end of this, I saw the streak, and was too afraid to see it go down to 1 in a snap. Ever since, I've been committing code daily, and it's been about 40 weeks, and I'm still going strong. Being a full time student, this wasn't really easy for me, but I'm proud of myself.

The one thing I learned is that the problem isn't a lack of ideas or time, but a lack of motivation to work on them.

[1] https://github.com/karan

30
prezjordan 1 day ago 1 reply      
I tried this a few months ago and failed miserably: https://medium.com/lessons-learned/ab219377be93

Really enjoyed your post, though. I think I might give it another shot from a different perspective.

31
Bahamut 1 day ago 0 replies      
I don't write code every day, but that is largely due to my military obligations as a reservist. Other than that, I often work on various side-projects and/or help others with their coding woes as a way to learn & help keep sharp. I'm disciplined enough to learn what I need to on my own time whenever I want.

Sometimes I burn out, and in those instances I take my free time away from programming.

The most important takeaway is to figure out how you want to improve yourself, instill passion in doing so, and then execute.

32
ethanhunt_ 1 day ago 0 replies      
"An interesting side effect of writing side project code every day is that your current task is frequently running in the back of your mind. Thus when I go for a walk, or take a shower, or any of the other non-brain-using activities I participate in, Im thinking about what Im going to be coding later and finding a good way to solve that problem."

Is that not a negative? I find it hard to stop thinking about what I'm working on, and it negatively impacts my life. I leave the office after 8 hours, but the next 2 hours are spent turning over problems in my head, and the 2 hours before I sleep are spent on it too. The days that I work on a problem at the office for a few hours and can't unblock myself before leaving are hell. My brain won't turn off until I can get into work the next day and begin on the problem. Some days I will even wake up in the morning or night with answers to the problem. Why is the AWS instance in my head turned on all night long when I'm not even getting paid for it?

33
paisible 1 day ago 0 replies      
2 years ago some friends and I started writing one song each per week (and met every Thursday to listen to our respective masterpieces). We mostly ended up composing and recording the songs on iPhones the Wednesday night before (thank god for GarageBand), and after 3-4 weeks we were producing more creative content in this compressed timeframe than we'd been able to with no deadline before. A few months in we skipped a Thursday or two, and suggested the solution was to write one song per month instead. That was definitely not the solution. We didn't find more time to write, and the lack of schedule killed the momentum. My biggest regret of the last year is not sticking to it - however, a friend just moved to our city on the condition that we'd get it started again with the weekly frequency, so I'm optimistic :)
34
danso 1 day ago 0 replies      
I know for some people, TDD is the kind of friction-causing mechanism that kills the desire for everyday coding...but I've found it extremely helpful, even for small personal projects.

On nights when I absolutely cannot write a piece of working code, I scaffold out the tests. When I wake up the next morning and have 5 minutes with my coffee, I pass a test. Not much gets done, but by building the habit and ability to "jump into coding", no matter the time, place, or circumstance...that's how I've been able to build the coding-zen-mentality needed to write "real" code when the time comes.

35
legierski 1 day ago 0 replies      
This reminds me of my 'Half hour' productivity hack that I was testing last year: http://blog.self.li/post/34104114881/4-weeks-into-half-hour-...
36
moron4hire 1 day ago 0 replies      
I have the opposite problem. I need to code less per day and work on the other important things in my life.
37
Fenicio 1 day ago 0 replies      
Very inspiring, but take into account that John (most likely) lives in a good environment to proceed with this; his job is intellectually demanding, but he is not overworked or over-multi-tasked.

If your job leaves you depleted, and when you arrive home you're like a husk of a human being, you can't expect to do something like this.

Take into account that great developers like John live in a place where they can grow; you can't copy what they do and expect to have the same great results in a not-so-great environment.

38
rajlal 1 day ago 0 replies      
Great blog post, John.

I had this experience when I was working on a book and I had to spend a considerable amount of time every week on one example. The book had 100 examples, so it took me two years to complete the book, but the experience was amazingly satisfying because I was able to justify the effort, going slow and steady.

The other thing I noticed is the increase in quality when you do less but give yourself more time to think. Keeping the problem in your mind creates innovative solutions, which is impossible if you just want to hack up everything in one weekend.

My personal favorite is keeping a point system for all the good things you want to do in your day, adding them up for weeks, and at the end of the month checking the total to see where you are lagging behind - what percentage of your life you are actually able to live the way you want. I haven't gotten to 100% yet, but above 60% I give myself a pat on the back.

39
patrickford 1 day ago 0 replies      
I recently started the same discipline. I spent a good part of my career writing code, then worked my way up to executive management and stopped. After several years as Director of this or VP of that, my skills had eroded. Late last year I decided to take a sabbatical and get back into the game by applying to Hack Reactor in SF. It was an intense period of two months of pre-course work (18 Code School classes and a bunch of coding assignments), followed by three months of intense work on site where we went 11+ hours a day for 6 days a week. One of the disciplines there is to work on a short toy problem every morning for 30-60 minutes. Although I finished the program recently, I am keeping up that practice as well as working on my real project, a new startup for social video. I'll never stop coding again!
40
jhtan 1 day ago 0 replies      
I'm actually testing this strategy for competitive programming. It was very difficult to train for the ACM-ICPC when I was working as a developer, but now that I only study at the university I solve at least one problem a day to maintain my streak on GitHub. These are the results in some online judges for competitive programming:

http://community.topcoder.com/tc?module=MemberProfile&cr=227...

http://codeforces.com/profile/jhtan

It works!
41
mburst 1 day ago 1 reply      
I totally agree with this. About a month ago I created a site that puts up a new programming or logic puzzle every day Monday-Friday. The exercises usually take no more than 30min and the community has been steadily growing. If you're interested you can check it out at http://problemotd.com/
42
beat 1 day ago 0 replies      
I really needed to read this, today. Thank you.
43
midas007 16 hours ago 0 replies      
I find my daily intelligence is highest in the early morning, around 6:30 am, but productivity peaks at about 9:30 am.

Anything in the afternoon is a steady decline and by evening I should just do something that doesn't involve sitting in front of the glowing box. Trying to push yourself too hard results in overall productivity loss.

44
da02 1 day ago 0 replies      
I tried to do this, but in the end I ended up getting a job at a fast food place (part time). It uses different parts of my brain (as opposed to freelancing), and forces me to stick to the schedule. I can't exactly explain it, but it really gave me a BIG productivity boost.
45
jsutton 1 day ago 0 replies      
The pitfalls of context switching are mentioned often in this article. I have a "problem" where I'll work heavily on a side project for a few days before getting bored and wanting to move on to another project. The result is that I have many unfinished side projects.

Is it better to focus on one project until completion, even if you aren't as into it anymore? What do other HNers do regarding multiple on-going side projects?

46
jes5199 1 day ago 3 replies      
Is your side project really this important to you? It's your source of identity and self-worth?

I say: take three months off from even touching a text editor and practice guitar every day.

I think my system leads to happier, healthier human beings.

47
kf5jak 1 day ago 0 replies      
I've read multiple articles on people doing this. Practice is the best way to learn. Admittedly, I've tried and failed on this before. I love seeing people succeed and become better at their practice using this method. It can only serve as inspiration for others. Thanks and congrats!
48
vayarajesh 1 day ago 1 reply      
I am totally facing the same issue you faced with working only during weekends. Your idea of working every day seems nice; I will try giving it a go :)

Nice post!

49
ElHacker 1 day ago 0 replies      
I really like this approach. I'll do my best to write meaningful code every day for my side projects.
50
az0xff 1 day ago 0 replies      
Does working toward your side projects without necessarily writing code count? There are some days where I devote myself to figuring out something on my system that's essential for my side project, and those days I don't necessarily write any code.
51
Thiz 1 day ago 2 replies      
Hey John, just quit KA.

Life is too short to waste it on things you don't love. Remember, jQuery brought you fame not because you were chasing fame itself, but because of your love for jQuery and programming.

Love for what you do comes first; money is just a secondary effect.

52
cbp 1 day ago 0 replies      
It's best if you actually just _read_ code every day and the writing is just a side effect of tinkering with it. As in chess, you will save a lot of time by learning from other people's games before you actually do something on your own.
53
lukasm 1 day ago 0 replies      
This is exactly my approach on my side project, but rather than "write code every day" I say "make some progress every day" - simply because it's not an open source framework, but a wannabe product.
54
cnaut 1 day ago 0 replies      
Doing this helped me start my startup while working full time, and eventually feel confident enough to quit my job and work full time on my startup.
55
drderidder 1 day ago 0 replies      
That post was awesome... slow and steady wins the race. Inspiring!
56
osetinsky 1 day ago 0 replies      
How long do you spend every day on your side coding? Do you try to set a minimum/maximum amount of time?
57
ribs 1 day ago 0 replies      
"I realized that the feeling of making progress is just as important as making actual progress."

Yeah...no.

58
finalight 17 hours ago 0 replies      
There's a saying: practice makes perfect.

It applies not only to coding, but also to other areas.

59
shanwang 22 hours ago 0 replies      
Thank you, this is the best advice about side projects I have ever read. I'm going to practice this starting today!
60
mildtrepidation 1 day ago 0 replies      
ABC.

Always be coding.

Always.

Be.

Coding.

61
dstavis 1 day ago 0 replies      
Hey THIS IS AWESOME!
62
Avishai_Bitton 1 day ago 0 replies      
If you don't use it, you lose it...
63
chris_mahan 1 day ago 0 replies      
The best code is no code.
64
pipukamal 1 day ago 0 replies      
Highlanders vs Bulls Live Super 15 Game Free Streaming xv rugby Online http://storify.com/superrugbyoz/higvbulnzt
15
Amazon Will Pay You $5,000 to Quit Your Job time.com
216 points by scottkduncan  21 hours ago   131 comments top 31
1
jordn 20 hours ago 13 replies      
The first (and stated) effect of this policy is to weed out the unmotivated employees.

However, Dan Ariely has explained that the secondary effect is potentially more powerful. For those that choose to stay, they will forever live with their past action of having turned down lots of money to work there. So, when they're having a crappy day and hating their job, they're probably thinking "why didn't I take the money and quit?!". The only way to reconcile their thoughts and actions is to explain that, in fact, they must really love this job and therefore should work hard at it. This effect is known as Cognitive Dissonance[1] and is fascinating.

Here's a link to a video of Dan explaining this[2] and a really excellent Coursera course he does on Irrational behaviour[3].

[1] http://en.wikipedia.org/wiki/Cognitive_dissonance

[2] http://bigthink.com/videos/dan-ariely-zappos-and-the-offer

[3] https://www.coursera.org/course/behavioralecon

2
nemtaro 18 hours ago 1 reply      
As someone who actually worked at Amazon for a few long years, I'm always skeptical of such seemingly positive news, and often think "hmm, could this be another marketing trick to influence people's perception of Amazon rather than actually changing anything" - and 90% of the time I'm right :)

Here's how their typical financial offer is structured for new software engineers:

1st year: signing bonus + relocation bonus + 5% of stock grant

2nd year: signing bonus + 15% stock grant

3rd year: 40% stock grant

4th year: 40% stock grant

If you quit within the first year, you have to give the relocation and signing bonus back. That's much much more than $1k. So there's a strong financial incentive / golden handcuffs to keep you there for at least 1-2 years, even if you are unhappy!

After the 2nd year, the financial incentive to stay is still there in the form of the large stock grant (which has grown due to their stock price rising) that you've been promised and have been waiting on for a long time.

I can see someone rationally and happily taking the incentive after the third or fourth year and quitting (i.e. after they've done damage to the work environment as an unhappy/unmotivated employee, and no longer have to give a fortune back to the company)... but before then, I doubt it'll change the behavior of any currently employed, overworked, over-paged, under-paid, under-appreciated software engineers.

Who this policy might affect, though, is future hires and their perception of Amazon - people who have a choice between offers from MS and Amazon, for example. They might consider this an interesting policy and assume that it would have improved employee morale at Amazon, even though it's common knowledge that Amazon has terrible work-life balance, etc.

I should also note that the Zappos policy makes a lot of sense to me, but this is very different from that, as is the employee culture of Zappos from Amazon.

3
JackC 19 hours ago 2 replies      
Purely on a legal level, getting your most disgruntled employees to identify themselves and waive all claims in exchange for $2,000-5,000 is probably a pretty good deal.
4
pvnick 18 hours ago 2 replies      
A few years ago I worked at a place where the CEO instituted this policy in the wake of reading about Zappos doing it. I remember him standing up at the weekly company meeting and offering a few thousand dollars to anyone who quit. The thing is, a week earlier, another coworker had put in his two weeks, and the look on his face upon hearing that announcement... I'm not sure if he ever got the money, but oh well. C'est la vie.
5
orky56 15 hours ago 0 replies      
This reminds me of the unlimited vacation policy. Essentially, with both these policies, the company is deflecting issues regarding job satisfaction and burnout onto the employee. The employee almost gets bullied into not taking the offer so as to show that they are above the petty reward. These psychological games are not created by chance; they are instituted to keep everything black and white, with us or against us. By drawing a line in the sand, they are eliminating the necessary conversations employees should be having with HR or supervisors to improve the workplace and their own individual situations.
6
afterburner 20 hours ago 1 reply      
Although nobody else seems to have mentioned it, this sounds to me like they are trying to avoid employees being dissatisfied but sticking around in order to get severance pay. Obviously if you quit voluntarily, no severance pay, but Amazon gives you a bit of money anyways (much less than severance pay), so that maybe you won't stick around longer than is good for either of you.
7
donretag 19 hours ago 2 replies      
At my company, I am both the most unmotivated and most productive employee. Where would that leave me? :)
8
bobjordan 19 hours ago 3 replies      
About 5 years ago, during the worst of the downturn, I took a $20,000 USD option our consulting firm gave and voluntarily quit a $120,000 base salary job. At the time, I thought that job sucked pretty bad and was looking for an out anyhow. I've been an entrepreneur ever since.

No quick success story to tell - I've been bootstrapping for 5 years in China and it's been hard. But I've been happier overall focusing every day on pretty much whatever-the-hell I want to think about, and my business just broke $1M USD in revenue this year by doing that, so overall it seems like the right decision for me.

Policies like this are probably a win-win for all involved.

9
ninv 19 hours ago 3 replies      
It starts at $2,000, then increases by $1,000 per year up to $5,000.

This deal is for warehouse employees only, and most of the people (90%+) working in the warehouse are contractors.

They handpick employees once a year and offer them this deal.

BS!

10
smurph 20 hours ago 0 replies      
Big defense contractors have a yearly VRIF (Voluntary Reduction In Force), which is when they offer slightly better than average retirement packages to expendable older employees. Young people would never get the offer (because they couldn't retire) and important older engineers would also never get the offer, even though many of them wanted it. This is a big improvement over that since it can be used by younger employees and the employee decides unilaterally if they want the package.
11
ritchiea 19 hours ago 0 replies      
This is genius. Many people feel stuck in jobs they don't like for financial reasons. They're surely not as productive as they could be if they were happier. Providing even a small bit of assistance to help them out the door helps both sides. Employees don't feel trapped, and employers don't have to wonder if their employee is just having a rough time or if he/she does not want to be there any longer.

Not to mention, as another commenter pointed out [1], once you decline the money you will look back and remember you made the decision to stay when presented with an opportunity to leave.

1. https://news.ycombinator.com/item?id=7572688

12
ozh 20 hours ago 1 reply      
Zappos pays you $2,000 to quit... during the recruiting process

http://www.businessweek.com/stories/2008-09-16/why-zappos-of...

13
prbuckley 15 hours ago 1 reply      
I wonder what the chances are of labor coordinating and everyone deciding collectively to take the offer at the same time. That would put Amazon in a tough spot and allow labor to negotiate a better deal. Amazon must be very confident that won't happen.

Maybe this type of program says more about the weak state of organized labor in the US than it does about breeding a healthy and good company culture. There seems to be something you can read between the lines here.

14
muyuu 20 hours ago 0 replies      
I wonder where they apply this policy. There are plenty of sob stories of sweatshops in Germany (for instance).

If this is a global policy, then the argument that they're the scum of the Earth with regard to employees cannot hold much water.

15
codeonfire 13 hours ago 1 reply      
Most blue-collar workers turn over in a year, so this is just PR fluff. Companies can and do write them up at any time for the smallest of mistakes and then fire them, often within the first three months. Any statement made about a manual labor job and "After the first year" is ridiculous, as is the tuition plan.
16
vaadu 19 hours ago 1 reply      
Can we get the US federal government to institute this policy? With the caveat that once you quit you are prohibited from collecting a federal paycheck elsewhere.
17
yen223 19 hours ago 1 reply      
Isn't this just a cheaper version of a voluntary separation scheme? Makes brilliant sense actually.
18
MBCook 17 hours ago 0 replies      
A similar policy at Zappos was discussed a few years ago on the Freakonomics podcast. Here's the transcript:

http://freakonomics.com/2011/09/30/the-upside-of-quitting-fu...

19
donutdan4114 20 hours ago 0 replies      
I very much agree with this. A great way to weed out employees who aren't happy there, and as such, will be unmotivated, unproductive, and bring down overall morale.
20
everyone 18 hours ago 0 replies      
Yeah, Amazon are great to their permanent workers, management and so on. But what about the vast majority of permatemp workers who do all the moving, warehouse work, etc.?

https://www.youtube.com/watch?v=waeMkka60po

21
JTon 14 hours ago 0 replies      
I wonder what the tax implications of taking this offer are. My gut says it becomes considerably less desirable. Too bad
22
blazespin 15 hours ago 2 replies      
Isn't this just Severance?
23
dorfsmay 9 hours ago 0 replies      
If you leave and work somewhere else for six months or a year, can you come back to Amazon?
24
qwerta 19 hours ago 1 reply      
Voluntary redundancy offers are pretty widespread. But there is usually an exception for developers and other highly qualified people. Devs can't quit :-)
25
smackfu 19 hours ago 0 replies      
One tricky bit is that it seems to be only once a year. So someone just moving on normally wouldn't easily be able to take advantage.
26
ForHackernews 19 hours ago 0 replies      
Does this apply to Amazon's subcontractor warehouse employees? Because those are some jobs seriously worth quitting for $5,000: http://www.motherjones.com/print/161491

This would never be worth it for a developer working for Amazon proper.

27
nargz503 17 hours ago 0 replies      
I would think that it would be more of a tiered system. For some higher-paid employees it might be beneficial to remain in an unsatisfactory job just to make the big bucks. They are then draining Amazon and not contributing like they would be if they were truly pleased with their job.
28
_wdh 20 hours ago 2 replies      
I would be amazed if anyone accepted this offer; it's not enough money to justify making every other job interview afterwards harder. I bet it's just a PR trick to make them look like better employers after the warehouse conditions were exposed.
29
bowlofpetunias 18 hours ago 0 replies      
Seems to me that this can only work in countries with little in the way of job protection.

If you want to get rid of someone in most EU countries, it's going to cost you a lot more for them to sign away their rights by quitting. From that perspective, this is just an attempt to get rid of people cheaply.

But exactly those people you actually want to take the offer won't; they are much better off forcing their employer to either fire them or make them a better offer.

30
sharemywin 20 hours ago 1 reply      
Too bad the guy didn't take his design and patent it. Then turn around and license it to GE and all their competitors.
31
elwell 10 hours ago 0 replies      
So the OpenSSL team has an opportunity to make a $3K profit and not have to deal with Heartbleed-scale shenanigans?
16
The race to contain West Africa's Ebola outbreak wired.co.uk
78 points by Vik1ng  12 hours ago   29 comments top 7
1
mediaman 11 hours ago 2 replies      
There are two companies that have demonstrated cures for the Ebola and Marburg viruses: Tekmira, based in Vancouver, BC, and Sarepta, in Boston. Initial data shows that they have 85%-100% cure rates when the medicine is received up to 72 hours from initial exposure, as tested on monkeys.

Unfortunately a portion of this research was funded by the DoD, which decided to cut funding in 2012 at least for the Sarepta Marburg cure, which significantly slowed down progress and prevented any stockpiling of the medicine.

2
timr 11 hours ago 3 replies      
This outbreak is scary because it's finally hit a major city in Africa.

All you need is for one case to hop on an international flight, and things get much harder to control.

3
danmaz74 2 hours ago 0 replies      
This article is so poorly written/researched that it makes me doubt its content. If you know anything about Africa, how can you confuse Gaddafi's Libya with Tunisia? And, according to wikipedia, Ebola has a mortality rate of 68%; where does the 90% come from?
4
ibrad 9 hours ago 0 replies      
What's worse in Guinea right now is that anyone showing any symptoms even remotely related to the disease is quarantined with others who have it. It sucks, but what else is there to do?

The panic is spreading and people don't know what to do, hence the violence that is now breaking out.

5
dperny 2 hours ago 0 replies      
Ebola has a startlingly high fatality rate, but most cases of it are in developing nations. Does anybody know what fatality rate you could expect from advanced medicine and techniques in a developed nation?
7
chatman 11 hours ago 0 replies      
The title currently reads:"OpenStreetMap and The race to contain West Africa's Ebola outbreak"

It should be "OpenStreetMap and The Red Cross / MSF ..."

18
Testing with Jenkins, Ansible and Docker mist.io
118 points by cpsaltis  16 hours ago   20 comments top 5
1
SeoxyS 15 hours ago 2 replies      
You should optimize your RUN commands. Every time you RUN in a Dockerfile, it creates a new filesystem layer. There's a hard limit (42, iirc) to the number of layers that Docker can support.

Instead of doing:

    RUN echo bar >> foo
    RUN echo baz >> foo
You could do:

    RUN echo bar >> foo && \
        echo baz >> foo
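
(To make the idea concrete with a slightly more realistic - and purely hypothetical, not from the linked post - sketch, assuming a Debian/Ubuntu base image: chaining an install and its cleanup into a single RUN keeps the whole step to one layer, and the apt cache never gets baked into an intermediate layer.)

    # one layer: update, install and clean up in a single RUN
    RUN apt-get update && \
        apt-get install -y --no-install-recommends build-essential && \
        rm -rf /var/lib/apt/lists/*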

2
wiremine 14 hours ago 2 replies      
I've been using Ansible for a few production-related tasks lately, and think it's great. It provides the right level of abstraction, IMHO: you can crack open a playbook, read through it, and know exactly what it is doing. There's also a growing number of playbooks if you google around.

That said, the biggest downside I've seen with Ansible is reusable components. They have something called Galaxy in beta [1], which should help, although it still feels a bit rough...

[1] https://galaxy.ansible.com/

3
trjordan 15 hours ago 0 replies      
Cool stuff! We actually use a similar setup [1], but with the additional hassle of figuring out how to handle hardware connections back to the app. Docker + Jenkins has definitely been a big win.

[1] http://www.appneta.com/blog/automated-testing-with-docker/

4
Silhouette 14 hours ago 6 replies      
At the same time, we need to move fast with development and deliver updates as soon as possible. We want to be able to easily deploy several times per day.

Genuine question, not intended as any sort of troll: what benefits would people with this philosophy say their organisation gains from routinely deploying multiple times per day?

I have nothing against better testing tools or more efficient development processes, of course, and if you have a serious bug then being able to fix it as quickly as possible is obviously beneficial. I just don't understand where this recent emphasis on always trying to move fast has come from, or what kind of management strategy someone might use to take advantage of such agility.

5
regecks 9 hours ago 0 replies      
Take a look at github.com/drone/drone for an upcoming CI platform built around Docker. It's totally open source, but you can use the hosted version at drone.io as well.
19
Bitcoin Mining Boom Sputters as Prospectors Face Losses bloomberg.com
18 points by T-A  6 hours ago   11 comments top 5
1
ghshephard 1 hour ago 2 replies      
At one point, people used graphics cards (or, more specifically, their GPUs) to mine bitcoins. Over the last couple of years, dedicated ASICs eliminated graphics cards, to the point where (according to eBay) a $30 USB dongle [1] at 1.6 GH/s is more efficient than any graphics card [2], which top out at around 1 GH/s at the high end.

Given that a lot of graphics cards were purchased for the sole purpose of mining bitcoins, I'm wondering if there is a huge surplus of cheap graphics cards out there now.

Strangely enough, eBay still shows graphics cards that were popular for mining, like the ATI Radeon 5970, selling for around $300. [3]

I'm guessing that's evidence that the graphics card market wasn't wildly impacted by bitcoin mining?

[1] http://www.ebay.com/itm/Brand-new-AntMiner-U1-USB-Bitcoin-Mi...

[2] https://en.bitcoin.it/wiki/Mining_hardware_comparison#Graphi....

[3] http://www.ebay.com/sch/i.html?_odkw=AMD+5970&LH_Sold=1&_osa...

2
lelandriordan 55 minutes ago 0 replies      
I am a bitcoin miner and I have made an 8 BTC profit this year, taking into account electricity fees. A common misconception is that you need to actually mine bitcoins to earn bitcoins. Instead, over the last few months smart miners have been ditching their Bitcoin ASICs and using old GPU rigs and new scrypt (aka Litecoin) ASICs to mine through auto-profit-switching alt-coin pools. These pools automatically mine the most profitable alt-coins like Litecoin and Dogecoin, then take the earnings and automatically trade them on exchanges for bitcoins. While profits are obviously down compared to last fall, I am still making a nice ROI every day, even from the very first GPU rig I ever built.

This week CEX/GHash.io, the largest bitcoin mining pool, just launched its own auto-switching pool: https://ghash.io/MULTI (warning: you must sign up through CEX.IO). This will probably be the biggest one soon, as its hashrate has already tripled today, but for the last few months the big three have been the following:

http://www.clevermining.com/

http://wafflepool.com/

https://www.scryptguild.com/

This is an always up to date profitability comparison of these pools vs. straight Litecoin mining made by Bitcointalk user Suchmoon: https://docs.google.com/spreadsheets/d/1VOAhFX1XRizdaTp71qnY...

3
mantrax4 33 minutes ago 0 replies      
Well, don't worry, malware writers will generate enough bitcoins to sustain the system.

The cost of mining to them is practically zero.

4
api 5 hours ago 1 reply      
The technology of cryptocurrency is cool, but Bitcoin has definitely been in a ridiculous bubble. Seems to be popping, but you can never be sure.
5
clef 3 hours ago 2 replies      
There's no such thing as a free lunch, and many people believed they could mine one lunch at no (or very little) cost and expect that lunch to last forever. That's very unfortunate, but who wouldn't see that such a utopia would come to an end? It's like thinking that electric cars will replace other cars in 10 or 20 years (there's too much at stake, the environment being last on the list). Even if the public were to embrace such a "currency", governments would step in (and they did), realising they would have no control. The very fact that it became an object of speculation and was making people richer meant it wasn't a currency anyway; it was no better than property, and therefore it spawned a bubble as a natural phenomenon.

Bitcoin, like punk, certainly isn't dead.

Bitcoin sounded and looked like the Sex Pistols a year ago; now it sounds and looks like Green Day.

20
Memories of Steve donmelton.com
331 points by zekers  1 day ago   132 comments top 22
1
salgernon 1 day ago 9 replies      
Back in 1999 or 2000, shortly after Steve had rejiggered the cafeteria staff, I was walking back to my office in another building with an "afternoon doughnut" - that is, one that hadn't sold in the morning at the coffee place in the main lobby, and probably sold at a discount.

I passed Steve in the hall and he glared at me as I walked with my doughnut. Steve was in great health in those days, while I was pasty and obese. (Still am, sad to say.)

But I was happy with my doughnut. Steve glared at me but didn't say anything. I slunk away.

The next day, there were no more doughnuts at any of the cafs on the main campus. I don't think it's a coincidence.

2
baldfat 22 hours ago 1 reply      
I had to stop reading. I also worked for a micro-managing CEO/President and I HATED EVERY MINUTE. Knowing that if you made the slightest misstep or were falsely accused, you were fired, and there was a morning meeting the next day to tell everyone that so-and-so was no longer with the company. NO THANK YOU!!!
3
adrianoconnor 1 day ago 1 reply      
I always love to read Don's stories, they're always pretty great, and this post is no exception. The last few paragraphs are poignant, and not because it's about Steve, but because the emotion is real and you can relate to it.

Anyway, if you enjoyed this, you should read the history-of-Safari posts he did a while back, and also a podcast he was a guest on one time, though I forget who it was with -- ah, Debug I think -- that was really excellent and well worth listening to.

4
general_failure 1 day ago 4 replies      
It looks like some people like Steve are charismatic enough to get the complete devotion of very talented people. It's a great personality trait to have and pretty much guarantees success. We all know geniuses in our everyday lives like Wozniak, Bob, and Cook. But how many of us can get these guys to be terrified of us, make them change their lives for our vision, and make them give us their complete attention... That's the beauty of Steve. Despite the flaws in his character, people seem to feel privileged working for him.
5
ZeroGravitas 20 hours ago 0 replies      
If after working with him for a decade you have to take a deep breath before you can give him your honest opinion on something, then he's not a busy executive who prioritizes efficient information exchange; he's an asshole.
6
mildtrepidation 19 hours ago 0 replies      
It's certainly interesting to read this sort of reflection. The author discusses Jobs' mannerisms without either worshiping or demonizing him, which is refreshing.
7
xcntktn 15 hours ago 0 replies      
Stories like this one and Glenn Reid's essay[1] about working with Steve on iMovie seem to be vastly more informative than any movie or book on SJ.[2] One of the biggest takeaways from both of these essays is that working with Steve was an iterative process. Pop culture always highlights "eureka" moments where a problem is solved all at once in a brilliant flash of insight, yet when you read these first-hand accounts, the story is the opposite: that making something great is a slow and repetitive process, with lots of follow-up meetings and gradual improvement towards the final product. Eureka moments look good on TV, but in the real world, great things are built by long-term focus and hard work from highly talented people with uncompromisingly high standards. I have no idea how or even if that could be shown in a movie, but I'm very thankful we have these accounts. I hope more people who worked with Steve during his second tenure eventually put their thoughts down in writing and share them so that we can all gain more of these types of insights.

[1]http://inventor-labs.com/blog/2011/10/12/what-its-really-lik...

[2]There's also Andy Hertzfeld's folklore.org, however that is focused on Steve's original tenure at Apple, not the "comeback" from the late-90s on.

8
gdonelli 1 day ago 0 replies      
Don has always been such a positive person to be around. Great memories. Thanks for sharing.
9
ghiculescu 1 day ago 1 reply      
Some great stories there. I wasn't sure about the Apple stores presentation joke, though - can anyone explain the reference?
10
ksec 1 day ago 3 replies      
I was reading and hoping there was an explanation of why Safari for Windows was discontinued. It was the only popular WebKit browser on Windows (after Chrome forked to Blink).

Otherwise another great piece.

11
tareqak 18 hours ago 2 replies      
I enjoyed the recollections. I probably would have been afraid of his shadow if I had been there.

On another note, it would be interesting to see if a website containing all these memories of Steve Jobs ever comes about. A crowdsourced biography if you will: storiesabout/stevejobs .

12
hubtree 7 hours ago 1 reply      
This part sums up why I quit using OS X for my personal projects: "And if your software crashed, you didn't make excuses. You just made damn sure that particular scenario didn't happen again. Ever."

In making sure nothing ever crashes, Apple has moved more and more to an OS that is too restrictive for my taste.

13
mathattack 11 hours ago 0 replies      
Great stories. It says a lot about Apple at that time, in addition to Steve. The personal side is good too.

Yes, Steve could be intense at times. But he was also a real person. He had to deal with the ordinary and mundane aspects of life like everyone else. Maybe even enjoy them.

14
theRhino 21 hours ago 0 replies      
this is hilarious
15
SimHacker 13 hours ago 0 replies      
At the National Air and Space Museum reception during Washington DC EduCom in 1988, I took a big bite out of one lobe at the bottom of a three lobed red bell pepper so it looked like an alien's face, and held it up to Steve Jobs, and said "Earthman, give me your seed!"

He looked at me funny, but I couldn't tell if he got the reference to Bizarre Sex #10: http://silezukuk.tumblr.com/post/3151672333 [NSFW]

16
pskittle 1 day ago 0 replies      
Thanks for posting this!
17
luser 22 hours ago 1 reply      
Alternative title: Hagiography of a Dead Psychopath CEO
18
jayvanguard 16 hours ago 0 replies      
Sounds like you have to be a sycophant to work for him.
19
throwaway7548 1 day ago 5 replies      
According to Wozniak, Jobs told him that Atari gave them only $700 (instead of the offered $5,000), and that Wozniak's share was thus $350.[65] Wozniak did not learn about the actual bonus until ten years later, but said that if Jobs had told him about it and had said he needed the money, Wozniak would have given it to him.[66]

---

End of story. Before continuing to celebrate Jobs, ask yourself a question: do you want to promote that kind of behavior in the Valley?

20
jmnicolas 1 day ago 2 replies      
Am I the only one fed up with Steve Jobs stories?
21
misingnoglic 1 day ago 0 replies      
Lol, some of it seems a bit stockholm syndrome-y, but hilarious nonetheless.
22
normloman 20 hours ago 0 replies      
Why are we still talking about this guy? I'll bet my life savings that when Woz dies, we'll talk about it for around 2 months.
21
"Let me know how I can help" a proposal to HN tomcritchlow.com
144 points by topcat31  19 hours ago   67 comments top 22
1
dzink 18 hours ago 6 replies      
We have built http://DoerHub.com to tackle this head-on. You post whatever projects you are working on (hackers, researchers, and scientists are 50%+ of the community, but there are also marketers, subject matter experts, designers, etc). Examples:

http://www.doerhub.com/for/doerhub (dogfooding)

http://www.doerhub.com/for/robopaint

http://www.doerhub.com/for/surgery-boards-app

http://www.doerhub.com/for/coincashcard

http://www.doerhub.com/for/synaptor

http://www.doerhub.com/for/securityfirst

Whoever sees your project can help in little or big ways, from joining the team to becoming an advisor or a beta user. Teams are soon getting public/private collaboration tools inside projects as well.

At the same time, your profile shows what areas you are great at or looking for help in/learning in. Example:

http://doerhubassets.s3.amazonaws.com/assets/badge-67f14a8ee...

So you can really easily see people you have a lot in common with and share complementary skills with. An app with real-time chat and serendipity matching is in the works as well. It is entirely free; we haven't made a cent with it, but some amazing projects are now in beta because of our work, and people who would never have met otherwise (a hacker and a surgeon, for example) are now doing projects together. It has grown past 600+ doers and 80+ projects as of yesterday. You are welcome to join.

We don't spread it randomly. Instead we mention it only to communities of doers we respect and would want to work with, and I hope you will do the same if you join in.

2
petercooper 17 hours ago 2 replies      
I'd actually like to see a monthly post along these lines, in the same way as the "Who's hiring" or "Freelancers required/available" posts. I don't think it works as well on a post-by-post basis, but as a collection it'd make for good scanning.
3
basicallydan 18 hours ago 4 replies      
Just to clarify:

a) You're suggesting that we start an HMO meme on Hacker News which clearly means "I'm looking for help, this is what I'm looking for help with"

b) This post is the first one and it's your list of things?

If so: Cool :) I can't help with any, but I thought that this clarification may help others.

I was also wondering to myself: would this be a good idea for a monthly thread a la Jobs/Freelancers/Open Source? I decided that it probably wouldn't be, because you'd end up with a difficult-to-read list of things that people may or may not need help with.

Articles like Tom's which include specific requests are probably the best format for such things. We don't want any information overload, right?

4
lhnz 18 hours ago 2 replies      
I've actually been working on an app to help create these serendipitous situations for a month. Exactly the same concept, but perhaps developed further.

And for the record:

I can help with JavaScript things and with honing your ideas. And I need help with JavaScripty things and honing my ideas.

London-based if anybody wants to get in touch and discuss changing the world or just creating something awesome!

5
pbhjpbhj 18 hours ago 2 replies      
When you say "local" I assume you mean local to Brooklyn, NY. Might have been wise to mention that.

Framing does seem to be a dark art in the creative world. Framing locally to where I am in the UK is expensive, and there isn't anything [anymore] between the expensive, custom, wait-a-few-days framing and IKEA.

Perhaps I should try and make a robot-controlled cross-cutting mitre saw and start a new business.

6
gk1 18 hours ago 2 replies      
I'm still regularly amazed at the diversity of people who read and post on HN. There are artists, doctors, triathletes, cartoonists, real estate agents, veterans, vagabonds, marketers, ... And you may never know they're reading unless there's some catalyst to make the connection.

A monthly HMO post can be that catalyst.

And this can be the first. Just don't use this opportunity to purely pitch your product.

7
ams6110 8 hours ago 0 replies      
Wow, that turned out to be something different from what I was expecting. I cringe every time I hear (or read) the words "Let me know how I can help" because that was the SIGNATURE phrase of one of the most useless, incompetent people I've ever worked with in my life. Nearly every email and conversation with this guy ended with that phrase. I now view it as a throwaway line from someone who doesn't have anything really helpful to offer.
8
Justen 18 hours ago 0 replies      
I've been working on a website for almost 2 years (on&off). I'm right there at the final push to get it live, but I think I'm just a little burnt out from it. One of the things I'm struggling with is the pricing model I want to use. I'm trying to find that balance of a simple pricing scheme that scales well.

My site was made to run leagues & tourneys, and I'm a single founder. If anyone would like to talk with me on my business model ideas, my email is in my profile.

9
carrollgt91 18 hours ago 0 replies      
I've been working on some web programming with a coworking space in Nashville, TN that has a focus on artists and creatives. I think they would be interested in your service.

I can't provide feedback on their behalf, really, but I'd be happy to introduce you to them.

Also, I really love the idea of the HMO meme. The rate at which problems can be solved when pushed to a distributed, diverse audience such as HN is amazing.

10
adidash 12 hours ago 0 replies      
What a nice idea! Inspired me to quickly create a very simple and basic tool. If there is sufficient interest, I will add additional features like categories, leaderboards, profiles, social login, ratings etc. If anyone wants to help me out, email me. :)

http://www.helphero.co/

11
Vekz 11 hours ago 0 replies      
Need:

Help configuring an IPv6 public address pool for Ubuntu-hosted LXC containers

12
harvestmoon 17 hours ago 0 replies      
Cool idea!

As for me, I've been working on a career-finding tool to help people find good career fits. It's almost done, and I'm excited about launching it. But I'm not sure how to get the word out about it.

If you have any ideas, or would like to try it, my email is dgurevich5 [at] gmail.com

13
vijayr 18 hours ago 0 replies      
Like the 'Who is hiring' threads, maybe we could post a 'Help HN' thread once a month, at the beginning of the month.
14
JeremyMorgan 15 hours ago 0 replies      
> bearded, plaid-shirt-wearing startup guy

You just described every coder in Portland. I can send you a few truckloads if you need them.

15
ajiang 17 hours ago 0 replies      
This is great. For a community that so often looks to speak with target customers/audiences, we can really help each other out, whether it's by sharing our own experience or by connecting people.

Here's to HMO :)

16
nekopa 18 hours ago 1 reply      
HMO HN: security for a Node-based website.

How about this format?

17
vincvinc 18 hours ago 1 reply      
Here's hoping "who's hiring"-type recurring threads get popular; they bring a new sense of community and value to HN, IMO.

They could all be organised on the same day, saving lots of time for those who don't want to be on HN too often!

18
Ryel 19 hours ago 1 reply      
Sorry, I can't really provide an intro, but I'm sure these guys could answer some of your questions (they seem like good people): http://www.artsicle.com/
19
jasdeepsingh 10 hours ago 0 replies      
You should talk to my friend who's doing Wallrent here in Toronto: http://wallrent.com

his email: richardsondx [at] gmail.com

20
archildress 15 hours ago 0 replies      
Need:

- Some experience working with customer service. I will

- Any type of remote, non-technical (think business) work. I like finance and Analytics.

Offering:

- Analytics help (setup and mostly data interpretation, telling the story of your traffic)

21
joshdance 17 hours ago 0 replies      
I really like it. People want to help. Give them a way to help!
22
peterwwillis 18 hours ago 0 replies      
I always have people asking me if I can do X or Y, and I have to say no, and that I also don't know anyone who can do those things. This could be a good place to connect those dots.
22
In a typical year the OpenSSL project receives about US $2000 in donations groups.google.com
313 points by blazespin  14 hours ago   142 comments top 22
1
patio11 13 hours ago 4 replies      
Note the almost painfully predictable response to the thread. Instead of focusing on how OpenSSL can pull in, let me pick a number, $800k in revenue in the next year, they immediately zero in on $70 of PayPal fees as the organization's leading financial problem.
2
AaronFriel 12 hours ago 2 replies      
What other people have said in the comments is completely right: OpenSSL, or maybe just this Steve Marquess guy, is missing the forest for the trees - or in this case, the six-figure donations for the pennies. OpenSSL could raise more money in a few months of panhandling in a major city than they raise in a year[1].

A student group that I will soon be President of at the University of Northern Iowa[2] received more in donations and financial support. Our student group is not the best managed, but we care a lot about large sponsors, keeping good relations with them, and making asks that matter.

If someone told me that panhandlers and Midwest student organizations are out-fundraising OpenSSL, I would scoff and laugh. OpenSSL? That's mission-critical software running on nearly every PC and post-PC device in the world. You know what OpenSSL reminds me of in this respect? SQLite.

SQLite charges $75,000 for consortium members[3] to have 24/7 access to phone support direct to developers, guaranteed time spent on issues that matter to them, and so on.

The fact that this doesn't exist for OpenSSL is an embarrassment to project management. I made an offer in that email thread to try to raise $200,000 for OpenSSL by the end of 2014, and I'm repeating it here for visibility:

If you are an employee of a corporation that wants to donate to directly support OpenSSL development by funding staff time, send me an email right now: friela@uni.edu

If you are in the OpenSSL foundation, send me an email right now and I will try to solve your problem by finding a phone number at every major OpenSSL using corporation and making an ask. Want me to do that? Send me an email right now: friela@uni.edu

[1] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC121964/

[2] http://www.unifreethought.com

[3] http://www.hwaci.com/sw/sqlite/prosupport.html

[4] https://sqlite.org/consortium.html

3
tptacek 13 hours ago 4 replies      
A sponsored bug bounty might be just as useful as more money directly to the project (especially if Google is porting Chromium to it). The nice thing about sponsoring a bug bounty is that anybody can do it; it doesn't require coordination with the project.
4
Nelson69 12 hours ago 0 replies      
The donations are one aspect. I'm on the dev mailing list - I've been lurking for a few years, I've used OpenSSL for various things for years, and I have had an interest in when some newer TLS standards were going to be supported. It's a pure bazaar as best I can tell. It's nearly magical how releases happen. I don't know if there is a secret mailing list for the core developers or some IRC channel or something; people post patches to the list, there are some occasional questions and answers, and it's insanely low volume for a project as popular as it is. Every now and again some big patches with a lot of new stuff drop. Every now and again someone ponies up some big money and FIPS certification happens. It just sort of keeps meandering along without a benevolent dictator.
5
kenrikm 13 hours ago 2 replies      
Wow, I'm surprised that something so crucial to the well-being of so much of our internet security is funded on $2000/year in donations. I think I'm going to start donating more to stuff like this.
6
paulbaumgart 13 hours ago 7 replies      
Soo, throwing a little bit of economics out there: BSD-licensed open source software is pretty much a Public Good (http://en.wikipedia.org/wiki/Public_good). There are basically two ways we've figured out how to create public goods: taxation and assurance contracts (like Kickstarter).

Thoughts on the pros and cons of either approach with respect to improving information security infrastructure?

7
saurik 13 hours ago 2 replies      
So, first: I agree with patio11. But past that, this thread also bugs me because it is so ill-informed: the very first question that has to be asked is "what is the distribution of donation amounts", as the way to minimize processing fees of "we got one donor who gives almost $2k, and then a handful of people we choose not to turn away who give a few dollars each" is very different than how you handle "we have 2,000 donors who all give a dollar". PayPal's micropayment fees are $0.05+5%, which is a massive difference from the default $0.30+2.9% quoted.

And if you have only one really large donor, you get them to give you a check. And then you put their name somewhere. And you send them some thank-you letters. And you ask for their advice on how to talk to their friends, as maybe they might also want to donate. Because patio11 is just dead-on right: it is more useful to increase the incoming money here, not avoid losing some fees :/. But again: even if we choose to nitpick fees... this conversation is still going nowhere if the distribution of donations and the process of receiving them (if you have mostly random donations, having them do bank transfers is going to massively increase the loss rate ;P) is not where the discussion started.
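
To make the fee arithmetic concrete, here is a rough comparison of the two rate schedules quoted above. It is a minimal sketch; the two donation distributions are invented for illustration and are not OpenSSL's actual numbers.

    #include <stdio.h>

    /* Rough fee comparison for the two PayPal rate schedules quoted above.
     * The donation distributions below are made-up examples. */
    static double standard_fee(double amount) { return 0.30 + 0.029 * amount; }
    static double micro_fee(double amount)    { return 0.05 + 0.050 * amount; }

    int main(void) {
        /* Scenario A: one donor giving $2000 */
        printf("one $2000 donor:  standard $%.2f, micro $%.2f\n",
               standard_fee(2000.0), micro_fee(2000.0));

        /* Scenario B: 2000 donors giving $1 each */
        printf("2000 x $1 donors: standard $%.2f, micro $%.2f\n",
               2000 * standard_fee(1.0), 2000 * micro_fee(1.0));
        return 0;
    }

On these invented numbers the micropayment schedule is cheaper for many tiny donations but more expensive for one large check, which is exactly why the distribution question has to come first.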

8
wnoise 1 hour ago 0 replies      
That's unfortunately still too much. Raising any more money will only delay the death of a project that has suppressed the use of better written projects by dominating that niche in the ecosystem due to first-mover advantage.
9
socalnate1 12 hours ago 0 replies      
I'm surprised I haven't seen anyone mention the "tragedy of the commons" economic theory yet. Though in this case it seems to be happening in reverse, rather than depleting the common resource, we are all neglecting to invest in it.

http://en.wikipedia.org/wiki/Tragedy_of_the_commons

10
teemo_cute 13 hours ago 1 reply      
OpenSSL is like a guardian angel who's invisible to a person. The guardian angel has been helping the person all the time even though he/she doesn't know it. Then the time came that the guardian angel made a little unintentional mistake that led to large consequences. The person then starts blaming the guardian angel, forgetting all the good things the angel has done for him/her.
11
dpweb 11 hours ago 0 replies      
The OpenSSL debacle exposes a real problem with open source software. There is massive financial incentive to break it, none to make it safe. Funding its development does little. Fund people to break it who will tell you how they did it.
12
mercurial 11 hours ago 0 replies      
My usual suggestion would be "that's part of the infrastructure, so governments should get together and foot the bill", but this approach doesn't work for this particular use case.
13
lazylizard 7 hours ago 0 replies      
I think, generally, the tendency to think OpenSSL needs help right after seeing OpenSSL need help is ignoring the problem that there might be other projects similar to OpenSSL that need help. It's like donating to one disaster victim because she appeared in a news story. This thing should be left alone and looked into after a few months (I don't know how long it takes for people to forget, actually) of no stories in the press about OpenSSL.

On the other hand, if there were a foundation that collected money and funded many projects, it'd look like Apache, perhaps.

Personally, I wouldn't mind an option to donate to Apache or OpenSSL in a Humble Bundle, nor would I mind an option to stick a donate button/widget on my website... or even better, have the widget rotate recipients.

14
btbuilder 13 hours ago 0 replies      
I'm interested in how the payments by third-party companies to the OpenSSL foundation for white-labeled FIPS-mode OpenSSL are accounted for. Maybe it's a separate entity?
15
higherpurpose 13 hours ago 1 reply      
Shameful that so many billion-dollar corporations rely on it in such a vital way, yet so little is being donated to it.

I think we need a score card for donating to open source projects, in the same way we have score cards for using green materials in devices, or using renewable energy for data centers. We should see periodic reports of how much money these companies donated to open source projects.

16
betadreamer 11 hours ago 0 replies      
I'm very surprised at how low the donations are. This shows that OpenSSL was maintained more by volunteer contribution than professionally. No wonder they were not the first to find the Heartbleed bug...
17
jokoon 12 hours ago 1 reply      
Why not rewrite the whole thing ?
18
dalek2point3 9 hours ago 0 replies      
this might not necessarily be a good thing. see: http://en.wikipedia.org/wiki/Motivation_crowding_theory
19
keithgabryelski 11 hours ago 0 replies      
It's time for the community (and possibly all major open source projects) to have code review parties.

One week before, a module is declared the subject. At the time of the party, the major owners are on the hook for function-by-function questions, and line-by-line when it merits.

Reddit? Or even a special GitHub community service.

20
nobodyshere 13 hours ago 2 replies      
Is it really so undervalued, or does it just work so well that it does not need much improvement?
21
ry0ohki 13 hours ago 6 replies      
Dumb question perhaps, but what do they need money for? What would they use it for? It says they pay it out to team members, but if people are doing this work for the money, doesn't that defeat the point?
22
raverbashing 13 hours ago 2 replies      
Underfunding is not an excuse for code that gives people headaches, for lack of testing, or for blind acceptance of "new features" just for the sake of it.
23
Heartbleed Bug's 'Voluntary' Origins wsj.com
11 points by T-A  4 hours ago   4 comments top 4
1
simonster 1 hour ago 0 replies      
The article seems to suggest that there is something unreasonable about Stephen Henson's "Before you email me..." page on his website, which is here: http://www.drh-consultancy.demon.co.uk/contact.html

It turns out that the way that he "compares his responsibilities to those of Bill Gates when he managed Microsoft" is by stating:

The occasional person sends this query to both mailing lists (in almost all cases only one mailing list is appropriate) and when they do not get an immediate response email the entire core and development team. Presumably this is the same kind of person that emails Bill Gates whenever they have a Windows problem.

Emailing open source developers who you do not know at their personal email addresses is rarely appropriate when a public mailing list for the project exists. The tone is a little prickly but what Henson says seems reasonable to me.

2
lifeisstillgood 1 hour ago 0 replies      
I struggle to work out the tone here:

it varies from "My god, we are all dependent on half a dozen volunteers" to "why doesn't someone pay these guys?" to "what a bunch of fools - we cannot all use the same code".

3
PhantomGremlin 53 minutes ago 0 replies      
I hate articles like this. They're so one sided. Here's a money quote: "Errors in complex code are inevitable". Even the headline sets the same tone and calls the flaw a "fluke". Oops, this programming stuff is hard, be thankful that it "works" at all.

Bullshit. There's something that was left unsaid in the article, specifically "best practices". Why wasn't the length validated at all? There's nothing new or "complex" about simple defensive programming. How can anyone (even a part-timer) working on software that's so security-critical be so clueless? Forget about more obscure stuff like the Full Disclosure mailing list; just reading CERT alerts should make this abundantly clear to anyone in security. Hell, the xkcd cartoon [1] makes it abundantly clear. If you can't take that cartoon to heart, you have no business writing Internet-facing software.

I think Marco Peereboom got it right oh so many years ago when he said that OpenSSL was written by monkeys. [2]

However, the article does get something right. It's insane that something so critical to internet commerce is essentially a hobby project by a few people mostly in their spare time. That's not simply crazy, that's totally fucking insane. That's the biggest takeaway of this entire fiasco.

[1] http://xkcd.com/327/
[2] https://news.ycombinator.com/item?id=7556407
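
For readers wondering what "validating the length" would look like in practice, here is a minimal sketch of a heartbeat-style echo with the missing bounds check added. It is a simplified illustration, not the actual OpenSSL code or patch; the function name, parameters, and error convention are invented for the example.

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified sketch of a heartbeat-style echo with the bounds check the
     * comment above is asking for.  record_len is how many bytes actually
     * arrived; claimed_len is the length field taken from the peer's message. */
    int echo_heartbeat(const unsigned char *record, size_t record_len,
                       size_t claimed_len, unsigned char **out)
    {
        /* The bug class: trusting claimed_len without comparing it to
         * record_len lets the peer read past the end of the buffer. */
        if (claimed_len > record_len)
            return -1;                      /* drop malformed requests */

        unsigned char *resp = malloc(claimed_len);
        if (resp == NULL)
            return -1;
        memcpy(resp, record, claimed_len);  /* copy only data we really have */
        *out = resp;
        return 0;
    }

The real fix in OpenSSL amounts to the same idea: discard any heartbeat message whose claimed payload length exceeds the length of the record that actually arrived.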

4
logn 3 hours ago 0 replies      
>Last decade, Steve Marquess, a former U.S. Defense Department consultant living in Maryland, started the OpenSSL Software Foundation to secure donations and consulting contracts for the group.

That's kind of like a former Christian preacher living in Alabama raising money for Planned Parenthood.

24
Heartbleed certificate revocation tsunami yet to arrive netcraft.com
18 points by soundsop  6 hours ago   6 comments top 3
1
rikacomet 24 minutes ago 0 replies      
2
nnx 1 hour ago 1 reply      
The first graph shows an interesting bump in SSL cert reissue activity on April 2nd - 5 days before public disclosure.

Could this be the day that Google, CloudFlare, and the other major internet companies in the know before the public disclosure patched their servers?

Is this graph generally available, for any time range, from NetCraft or another monitoring service?

I'm aware the graph shown has a time range too narrow to conclude anything, but this made me think that monitoring this graph, or noticing unusual reissues from major internet services (Google/CloudFlare/AWS/Facebook), could be used as an advance warning mechanism that a significant SSL flaw is about to be publicly disclosed.

3
mkonecny 5 hours ago 2 replies      
Out of curiosity, is there really any benefit to revoking a certificate? Most (all?) of the leading browsers do not check the revocation list, so this move seems like an empty gesture. Is the Internet vulnerable to MITM attacks until this generation of certificates expires?

Do you think Firefox and Chrome will release an update in the next few weeks with revoked-certificate checks enabled?

25
Dudley Buck's Forgotten Cryotron Computer ieee.org
67 points by spectruman  14 hours ago   9 comments top 3
1
cyanoacry 9 hours ago 1 reply      
This is absolutely fascinating. I didn't realize that today's nanofab labs used technology that was available in the 1950's (e-beam lithography is pretty common, with a second contender for nano-scale structures being FIB[1] milling).

The article's amazing for being a look into a future that could've been. A number of physics tools that are around now depend on microscale cryogenics, but they're still fairly rare (like SQUIDs[2]).

[1] http://en.wikipedia.org/wiki/Focused_ion_beam
[2] http://en.wikipedia.org/wiki/SQUID

2
wglb 11 hours ago 1 reply      
3
dang 11 hours ago 1 reply      
It's hard to imagine a more perfect HN post than this one.
26
Is There Anything Beyond Quantum Computing? pbs.org
95 points by ca98am79  17 hours ago   37 comments top 10
1
dmunoz 15 hours ago 0 replies      
> Whats more, there are recent developments in quantum gravity that seem to support the opposite conclusion: that is, they hint that a standard quantum computer could efficiently simulate even quantum-gravitational processes, like the formation and evaporation of black holes. Most notably, the AdS/CFT correspondence, which emerged from string theory, posits a duality between two extremely different-looking kinds of theories. On one side of the duality is AdS (Anti de Sitter): a theory of quantum gravity for a hypothetical universe that has a negative cosmological constant, effectively causing the whole universe to be surrounded by a reflecting boundary. On the other side is a CFT (Conformal Field Theory): an ordinary quantum field theory, without gravity, that lives only on the boundary of the AdS space. The AdS/CFT correspondence, for which theres now overwhelming evidence (though not yet a proof), says that any question about what happens in the AdS space can be translated into an equivalent question about the CFT, and vice versa.

If you would like to read more about this, the author of this article has another blog post [0] that discusses the Susskind paper "Computational Complexity and Black Hole Horizons" [1] in its first half.

The key point, for those who don't have time to read the post:

> On one side of the ring is AdS (Anti de Sitter), a quantum-gravitational theory in D spacetime dimensionsone where black holes can form and evaporate, etc., but on the other hand, the entire universe is surrounded by a reflecting boundary a finite distance away, to help keep everything nice and unitary. On the other side is CFT (Conformal Field Theory): an ordinary quantum field theory, with no gravity, that lives only on the (D-1)-dimensional boundary of the AdS space, and not in its interior bulk. The claim of AdS/CFT is that despite how different they look, these two theories are equivalent, in the sense that any calculation in one theory can be transformed to a calculation in the other theory that yields the same answer. Moreover, we get mileage this way, since a calculation thats hard on the AdS side is often easy on the CFT side and vice versa.

[0] http://www.scottaaronson.com/blog/?p=1697

[1] http://arxiv.org/abs/1402.5674

2
saalweachter 5 hours ago 0 replies      
Hah.

So Roger Penrose, not content with the now-seemingly-attainable quantum computing, speculates there is an even more magical quantum gravity computing, which brains just happen to use, that makes them special, that Turing machines can't compute?

The man just really hates the idea of AI, doesn't he?

3
calhoun137 14 hours ago 0 replies      
The most important application of quantum computers seems to actually be to enable scientists who do fundamental research to answer questions from reporters such as "What practical purpose does your research serve?" with a single catch all buzz word. So the answer to the title of this article is clearly: Yes.
4
rhth54656 15 hours ago 3 replies      
"But the more you accelerate the spaceship, the more energy you need, with the energy diverging to infinity as your speed approaches that of light. At some point, your spaceship will become so energetic that it, too, will collapse into a black hole."

That is simply wrong. It is called relativity for a reason. An object might be traveling arbitrarily fast, but in its reference frame it is not moving.

That pretty much invalidates that part of the article.

For curious readers: http://physics.stackexchange.com/questions/3436/if-a-1kg-mas...
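
For readers following the dispute, the standard special-relativity expression both sides are implicitly appealing to (stated here for reference, not taken from the article or the comment) is

    E = \gamma\, m c^2, \quad \gamma = 1/\sqrt{1 - v^2/c^2}

The energy measured in the lab frame does diverge as v approaches c, which is the article's point; in the ship's own rest frame v = 0 and \gamma = 1, which is the comment's point. Whether the lab-frame energy is the relevant quantity for gravitational collapse is exactly what the linked Stack Exchange question discusses.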

5
jamiis 15 hours ago 0 replies      
If you liked this, check out the interview "Scott Aaronson on Philosophical Progress" [0]. Possibly my favorite read of last year.

[0] http://intelligence.org/2013/12/13/aaronson/

6
trhway 15 hours ago 1 reply      
> If so, then much like in Zeno's paradox, your computer would have completed infinitely many steps in a mere two seconds!

The next target isn't infinitely many. The target is infinitely infinitely many.

Like something along the lines of forking an infinite number of parallel universes - that is pretty much the "many-verse" interpretation of quantum superposition (and thus computing) - enhanced with forking of infinitely many time dimensions inside each of the said universes...

7
z3phyr 16 hours ago 1 reply      
8
diziet 16 hours ago 2 replies      
Actual article is here: http://www.pbs.org/wgbh/nova/blogs/physics/2014/04/is-there-... but Scott's blog has the comment section
9
Havoc 15 hours ago 2 replies      
Scott is making it really difficult for people to follow along. My initial reaction was "wtf am I looking at" followed shortly by "why wasn't the HN link pointed straight to the PBS article?". And clearly others agree given that the top comment at the moment is a PBS link.

No doubt the guy is competent but this just smells like a poorly executed attempt to move traffic to the blog. I fully expect it to get ignored as such despite the PBS article being good.

10
mrtriangle 16 hours ago 1 reply      
Yes. Super Duper Computing.
27
Apple declines to join Microsoft in funding patent troll Intellectual Ventures gigaom.com
133 points by dashausbass  19 hours ago   80 comments top 15
1
chasing 18 hours ago 5 replies      
When you talk about IV, use the name Nathan Myhrvold. Hell, they have a picture of him right there.

One of the best measures we have against Mr. Myhrvold -- given that he seems interested in portraying himself as a public genius of some sort -- is to drag his name through the mud over this. He's not the guy who studied with Stephen Hawking. He's not the guy who wrote the molecular gastronomy tome. He's the very, very rich guy who wants to drag down the entire tech industry to get even richer.

2
rajbala 14 hours ago 1 reply      
From an interview of Nathan Myhrvold on Fareed Zakaria:

Zakaria: How worried are you that the United States is no longer going to be the place that invents the future?

Myhrvold: I'm very worried. Current course and speed --- we're very good at inventing, uh, but we're also undermining our ability to do that in lots of ways.

<facepalm>

3
LukeWalsh 18 hours ago 2 replies      
I'm hoping this has to do with a change in policy in regards to these types of issues, but I'm skeptical. It could just be an anomaly.

We need a major tech company like Apple to take a stand against these types of lawsuits before we will see any real policy change.

4
amaks 17 hours ago 3 replies      
Why is the new, open, honest Microsoft still doing this? Or, judging by this action, maybe it's still the same large company, and the openness is just a facade?
5
jgamman 1 hour ago 0 replies      
Not even sure if it's off topic, but ex-IV senior manager Chris Somogyi is now a GM at NZ's newly created R&D/innovation/commercialisation agency: http://www.callaghaninnovation.govt.nz/about-us/key-people/e...

I have no idea what he's up to or why he was head-hunted to lil ol' NZ, but an ex-IV guy in a major role in a central funding hub of an entire country's R&D system kind of weirds me out. My conspiracy tendencies are high normally; this takes it to 11. Any comments from a community that might have worked/interacted with him?

6
tzs 14 hours ago 3 replies      
How come it is news that Apple invested in an IV fund in the past, but is not investing in this new IV fund, but it is not news that Google invested in an IV fund in the past, but is not investing (as far as we know) in this new IV fund? Or Yahoo? Or Nvidia?
7
mercurial 12 hours ago 0 replies      
That's the sort of thing that I keep in mind whenever I see Microsoft moving in the right direction (e.g., by opening their C# compiler). I can't help but liken it to the mob giving some of its extortion money to charity in a bid to show that they're good people.
8
yp_maplist 5 hours ago 0 replies      
I always thought there was a humorous irony in Mr. Myhrvold's half-baked efforts to eradicate malaria.

That's because IV itself is a parasite.

9
jgable 18 hours ago 3 replies      
Can someone explain the motivations of the major players here? The posted article references another Reuters article that goes into a little more detail: "Several large tech companies previously invested in IV, which gave them low-cost licenses to IV's vast patent portfolios as well as a portion of royalties IV collected." However, investments like this seem pretty short-sighted, and I would have thought that all players in the tech space would have woken up to the dangers of patent trolls years ago. What explains the continued behavior of Microsoft? Is this just another version of paying off the trolls to make them leave you alone?
10
higherpurpose 18 hours ago 1 reply      
Well, that's a first. But good on them, I guess. Now if only they did the same with Rockstar.
11
yuhong 15 hours ago 0 replies      
Another one to put on my wishlist for Satya.
12
Karunamon 19 hours ago 2 replies      
Is it admirable when someone does the right thing for the wrong reason?
13
Bahamut 17 hours ago 2 replies      
This is misleading - Apple previously invested in IV.
14
collyw 18 hours ago 1 reply      
I thought Apple were king of the patent trolls these days.
15
will_brown 18 hours ago 2 replies      
>in February, [Apple] complained that it has had to go to court with trolls 92 times in the last three years.

Yet Apple holds a $1 billion+ judgment against Samsung for violation of Apple's design patent for a rectangular device with rounded corners, in addition to "pinch and zoom" and "bounce back".

28
Frama-C is a suite of tools dedicated to the analysis of software written in C frama-c.com
61 points by nkurz  14 hours ago   14 comments top 6
1
nullc 10 hours ago 1 reply      
I like Frama-C a lot in theory, though in practice I found it hard to use except on very small code segments.

When it's unable to prove a range, the feedback you get from the solvers is inscrutable enough that it's very hard to figure out what additional data it would need to satisfy the analysis. (Sort of like parsing C++ compiler template-related errors.)

I'd hoped that language features like the typestate stuff that used to be in Rust would someday make the work required to use sound analysis tools in production code smaller. I'm not sure if much thought has been given to what kinds of accommodations languages could give to ease static analysis while still being programmer friendly.

It seems that newer languages have actually moved away from analysis friendliness in some respects, however. E.g. in C a signed overflow is always a bug, so if analysis can prove one is possible you have something to fix. Several modern languages have defined signed operations to wrap, so that obvious safety test is no longer available. (You could define in your own code that it should never wrap, effectively writing in a subset of the language, but as soon as you call into third-party code you never know if an overflow was intended and safe or not without extensive analysis.)
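
To make the "obvious safety test" concrete: in C you can write additions so that a would-be signed overflow is reported instead of executed, which is exactly the kind of property a static analyzer can then try to prove unreachable. A minimal sketch, with the function name and error convention invented for illustration:

    #include <limits.h>

    /* Illustrative only: add two ints, reporting would-be overflow instead of
     * relying on wrapping, which is undefined behaviour for signed types in C. */
    int checked_add(int a, int b, int *result)
    {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return -1;          /* caller decides how to handle overflow */
        *result = a + b;
        return 0;
    }

A tool like Frama-C's value analysis can then try to show that the error branch is dead for every input the program actually passes in.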

2
TorKlingberg 10 hours ago 0 replies      
It would be good to hear other HNers' experiences with various static analysis tools for C.

I have had good experiences with Flexelint (PC-lint). It does not attempt to deeply analyze control flow; it is more like additional compiler warnings. It flags a lot of common mistakes and can basically turn C into a more strictly typed language. I feel a lot more confident in C code if I know that it passes lint, since it warns if you try to mix unsigned and signed ints, cast away const, call functions with wrong types, etc.

Like many static analyzers it takes some work to set up and to tune which warnings you actually care about. It is definitely business-priced and feels a bit old (although command line tools age well).

There is a clear lack of good open source tools. I tried all I could find, but Splint was the only one that would flag switch cases without break. It was last updated in 2010.
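
As a concrete illustration of the kinds of constructs these tools complain about, here is a small made-up fragment (not from any real project) that a lint-style checker would typically flag: casting away const, comparing signed with unsigned, and a switch case that falls through without a break.

    #include <stdio.h>

    void demo(const char *name, unsigned int count, int delta)
    {
        char *p = (char *)name;          /* casting away const */
        if (count > delta)               /* signed/unsigned comparison */
            puts(p);

        switch (delta) {
        case 0:
            puts("zero");                /* missing break: falls through */
        case 1:
            puts("one");
            break;
        default:
            break;
        }
    }

The code compiles cleanly with default compiler settings, which is exactly why a stricter checker earns its keep.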

3
eliteraspberrie 8 hours ago 0 replies      
One feature you can start using right now: prove an assertion with the WP plugin. [1] It's especially useful to prove the absence of some undefined behaviour, bounds on variables, or invariance conditions.

[1] http://frama-c.com/wp.html
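
To give a flavour of what "proving an assertion with WP" looks like, here is a small sketch of an ACSL-annotated function. The function itself is made up, and whether the proof goes through automatically depends on the Frama-C version, the WP options, and the back-end solvers.

    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
    */
    int max_of(const int *a, unsigned n)
    {
        int m = a[0];
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> m >= a[k];
            loop assigns i, m;
            loop variant n - i;
        */
        for (unsigned i = 1; i < n; i++)
            if (a[i] > m)
                m = a[i];
        return m;
    }

Running something like `frama-c -wp file.c` asks WP to generate the proof obligations for these annotations and discharge them with its provers.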

4
xvilka 11 hours ago 1 reply      
It is very handy - http://xvilka.me/frama-c.png - a screenshot of an integer overflow being found.
5
rhth54656 12 hours ago 3 replies      
The only option for Windows is manual compilation using a POSIX library.

Any suggestions for a similar tool?

6
glifchits 12 hours ago 1 reply      
What would this say on the OpenSSL code that had the Heartbleed bug? (Edit) this: https://news.ycombinator.com/item?id=7571506
29
OpenBSD Foundation reaches funding goal for 2014 marc.info
74 points by adamnemecek  15 hours ago   9 comments top
1
diziet 13 hours ago 2 replies      
To put the number in perspective -- this is about how much a single full-time engineer will cost to employ. Let's lowball and imagine that OpenBSD is 1/100th the size of Firefox. Mozilla has annual revenues of >300 million dollars, compared to OpenBSD's $150 thousand.

They are asking for donations to cover electricity costs. The real donation, of course, has been the time the community has put into this.

30
Fake audiophile opamps: OPA627 (AD744?) zeptobars.ru
131 points by atomlib  22 hours ago   64 comments top 7
1
leephillips 18 hours ago 5 replies      
Counterfeiting of chips is such a big problem that the US DARPA has a major program to develop tiny cryptographic chips that can be embedded inside chip packages to prove their authenticity. It's called the SHIELD program (solicitation number DARPA-BAA-14-16 if you want to ask for money).
2
natejenkins 20 hours ago 4 replies      
Can someone point out where the laser-trimmed resistors are in the photos, and maybe explain some more of the components?
3
joosters 20 hours ago 7 replies      
If you can't tell the difference without dissolving the chip in acid... perhaps then there is no difference?
4
boise 18 hours ago 0 replies      
Best to only buy from TI-franchised suppliers. Can't find any that go down to $16, though.

http://octopart.com/opa627au-texas+instruments-420817

5
njharman 19 hours ago 1 reply      
TIL that resistors are trimmed with lasers: http://en.wikipedia.org/wiki/Laser_trimming
6
GFK_of_xmaspast 17 hours ago 0 replies      
"Hey! This is lizard oil, not snake oil!"
7
darksim905 19 hours ago 3 replies      
How do they get such high resolution? An electron microscope or something? Such a cool blog, I've seen it featured on HaD a few times.