That is true - but this exploit doesn't depend on setting a length of 65,536. The server takes whatever length the client gives it (which is, after all, the bug). Most of the early exploits just happen to set the maximum packet size to get as much data out as possible (not realizing the nuances of heap allocation). You can set a length of 8 bytes or 16 bytes and get allocated in a very different part of the heap.
The Metasploit module for this exploit supports varied lengths. Beating this challenge could have been as simple as running it with short lengths repeatedly and re-assembling the different parts of the key as you found them.
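To make the length field concrete: a TLS heartbeat request is just a type byte, a claimed payload length, and (optionally) a payload, and the vulnerable server echoed back as many bytes as the client claimed, regardless of how many were actually sent. A minimal sketch of building such a record (field layout per RFC 6520; the framing details and `tls_version` default here are illustrative, not a working exploit):

```python
import struct

def heartbeat_record(claimed_len, tls_version=0x0302):
    """Build a TLS heartbeat request that claims `claimed_len` payload bytes
    while sending none -- a vulnerable server echoes claimed_len bytes back."""
    # HeartbeatMessage: type=1 (heartbeat_request), 2-byte payload_length
    hb = struct.pack('>BH', 1, claimed_len)
    # TLS record header: content type 24 (heartbeat), version, record length
    return struct.pack('>BHH', 24, tls_version, len(hb)) + hb

# A claimed length of 8 or 16 bytes lands the echo buffer in a very
# different part of the heap than the usual 64 KB requests.
small = heartbeat_record(16)
```

The point is that `claimed_len` is entirely attacker-controlled; nothing forces it to be 65,535.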
Edit: Something I want to sneak in here since I missed the other threads. Cloudflare keeps talking about how they had the bug 12 days early. Security companies and vendors have worked together to fix bugs in private for years, but this is the first time I've ever seen a company brag about it or put a marketing spin on it. It isn't good, for one simple reason: other security companies will now have to compete with that, which pushes companies not to co-operate on bugs ("we had the bug 16 days early", "no, we had the bug 18 days early!", etc.).
As users you want vendors and security companies co-operating, not competing at that phase.
 Cloudflare - Can You Get Private SSL Keys Using Heartbleed? http://blog.cloudflare.com/answering-the-critical-question-c...
 see https://github.com/rapid7/metasploit-framework/blob/master/m...
How do you not love this guy.
If you only change your current cert to get a new key, but you don't go through the revocation process for the old certificate, then anyone who managed to get the old one can still use it for a MITM attack, as both certs would be valid to any client.
That doesn't make sense to me; it seems like the key needs to be in memory all the time, or at least during every session.
So far, two people have independently solved the Heartbleed Challenge.
The first was submitted at 4:22:01 PST by Fedor Indutny (@indutny). He sent at least 2.5 million requests over the span of the challenge, which was approximately 30% of all the requests we saw. The second was submitted at 5:12:19 PST by Ilkka Mattila, using around 100 thousand requests.
We confirmed that both of these individuals have the private key and that it was obtained through Heartbleed exploits. We rebooted the server at 3:08 PST, which may have contributed to the key being available in memory, but we can't be certain.
Pic of the CloudFlare team reviewing the attack. Ten guys crowded around one monitor.
As I said, I am not sure that is right, or that it was the method used to exploit CloudFlare, as I had neither the time nor the knowledge of the OpenSSL implementation to test it out; I am just throwing my guess out there before the official exploit comes about.
This is an exemplary response from Google. They responded promptly (with humor, no less) and thanked the guys who found the bug. Then they proceeded to pay out a bounty of $10,000.
Well done, Google.
This should scare anyone who has ever left an old side project running; I could see a lot of companies doing a product/service portfolio review based on this as a case study.
So anyone can create a trap link such as
Then it's just a matter of sitting and waiting for the result to show up wherever that spider shows its indexed results.
It's too much hidden power in the hands of those who don't know what they're doing (automatically loading external entities referenced in an XML document? What kind of joke is that?)
The pricing model has apparently worked so far. Are there any active users of Detectify here who can share their experience?
They also discovered vulnerabilities in many big websites (Dropbox, Facebook, Mega, ...). Their blog also has many great write-ups: http://blog.detectify.com/
Input from potentially malicious users should be in the simplest, least powerful of formats. No logic, no programability, strictly data.
I'm putting "using XML for user input" in the same bucket as "rolling your own crypto/security system". That is, you're going to do it wrong, so don't do it.
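For context, the XML "feature" being complained about upthread is external entity resolution (XXE). A quick sketch of the classic payload, fed here to Python's stdlib parser, which, unlike parsers that auto-resolve external entities, refuses to fetch the file:

```python
import xml.etree.ElementTree as ET

# Classic XXE payload: the external entity points at a local file.
payload = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<foo>&xxe;</foo>"""

# A parser that resolved external entities automatically would splice
# the contents of /etc/passwd into the document; ElementTree raises instead.
try:
    ET.fromstring(payload)
    print("entity resolved (vulnerable behavior)")
except ET.ParseError as err:
    print("refused:", err)
```

This is exactly the "hidden power" complaint: whether that file gets read depends on a parser default the developer may never have thought about.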
Reading the spec that led to the implementations can often reveal interesting things, like support for external entities.
I actually dug it when I read it a few years ago, and it's awesome knowing that it was probably used for this reply :)
Getting the source?
"Sir, I am sorry to inform you that another backdoor has been found. We will introduce two more as agreed upon in our service level agreement."
This sells for at least 10 times more on the black market. Why would one rationally choose to "sell" this to Google instead of the black market?
Some people don't break the law because they are afraid to get caught, but I like to believe that most people don't break the law because of the moral aspect. To me at least, selling this on the black market poses no moral questions, so, leaving aside "I'm afraid to get caught", why would one not sell this on the black market? Simple economic analysis.
Very serious question.
    def fourier_transform(signal, period, tt):
        """See http://en.wikipedia.org/wiki/Fourier_transform
        How come Numpy and Scipy don't implement this???
        """
        # assumes: from numpy import pi, cos, sin
        f = lambda func: (signal * func(2 * pi * tt / period)).sum()
        return f(cos) + 1j * f(sin)
What you want is the power spectral density in the discrete case, called the power spectrum. It can be calculated by multiplying the discrete Fourier transform (FFT) with its conjugate, and shifting. NumPy can do it. Here is an example: http://stackoverflow.com/questions/15382076/plotting-power-s...
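For instance, a minimal sketch of that recipe with NumPy (the 5 Hz test signal is my own; the `fftshift` step is only needed if you want the zero frequency centered for plotting):

```python
import numpy as np

fs = 100                                 # sample rate in Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)            # toy signal: a 5 Hz sine

X = np.fft.fft(x)
power = (X * np.conj(X)).real            # power spectrum: FFT times its conjugate
freqs = np.fft.fftfreq(len(x), d=1 / fs)

# For plotting, shift so the zero frequency sits in the middle:
centered = np.fft.fftshift(power)

# The peak lands at +/- 5 Hz, as expected for this signal.
peak_hz = abs(freqs[np.argmax(power)])
```

No hand-rolled transform needed; `np.fft` handles the discrete case directly.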
It was a fun place to see in the '70s, after watching my father rebuild our player piano.
Services (virtual memory/swapping, file systems, the network stack, etc.) in microkernel systems typically can't be modified or replaced by applications any more than in monolithic kernels, which is probably part of why microkernels have stayed in the realm of embedded systems and other settings where you have control over the whole system.
Exokernels bring the flexibility that microkernels don't, by moving the security boundary down the stack. Instead of moving services into trusted user-level processes, they manage protection at the level of hardware resources rather than services. This enables those services to be in untrusted shared libraries that can be securely modified or bypassed on a per-application basis.
Thus, instead of the lingering "eh, it's a little slower but we can ignore that," exokernels provide much better opportunities for optimization and tend to be much faster. For example, a database could choose to discard and regenerate index pages rather than swap them out to disk and back; a file copy program could issue large, asynchronous reads and writes of all the copied files at once; a web server could use its knowledge of HTTP to merge packets, or co-locate files from web pages to improve disk seek time.
Further, exokernels and microkernels are not mutually exclusive; they are rather orthogonal concepts (you could move an exokernel's drivers into user space processes if you wanted). If we had hardware that were more conducive to a microkernel design, for example with direct process switching rather than going through the kernel (32-bit x86 did this with task gates, but they weren't used much and were abandoned with 64-bit), this would probably be the optimal design, rather than a purist microkernel approach. Incidentally, the in-development Mill CPU design does this very efficiently, as well as a few other things that are good for both micro and exo-kernels.
Edit: The paper helpfully provides much of this itself, with boxed section footers with the change from then to now in how that component is handled. It makes for an interesting way to skim and zero in on sections you may find of interest.
e.g. 4.2 Lazy scheduling ends with "Replaced: Lazy scheduling by Benno scheduling"
Their machine-supported security features will be very interesting to see realized.
The L4 microkernel has undergone 20 years of use and evolution. It has an active user and developer community, and there are commercial versions which are deployed on a large scale and in safety-critical systems. In this paper we examine the lessons learnt in those 20 years about microkernel design and implementation. We revisit the L4 design papers, and examine the evolution of design and implementation from the original L4 to the latest generation of L4 kernels, especially seL4, which has pushed the L4 model furthest and was the first OS kernel to undergo a complete formal verification of its implementation as well as a sound analysis of worst-case execution times. We demonstrate that while much has changed, the fundamental principles of minimality and high IPC performance remain the main drivers of design and implementation decisions.
All of us need to learn this, re-learn it, revisit it, internalise it, live it and breathe it every day. I'm sure I could do better at attaining such an ideal. So too can these gentlemen.
...and one showing the structure in detail: http://www.architectureandvision.com/projects/chronological/...
TLDR: Great for pulling moisture out of the air if the air already has a really high moisture content. Pretty much useless in other circumstances.
Why would "two Warka towers" be the target for a year when, on the surface, it would make sense to install a thousand of them and see how they played out over a year? If this device really could pull even 10 gallons of water a day for a $500 cost, it would have zero problem attracting funding on that kind of tiny pilot scale.
The ones in the article look like they're cheaper, possible to construct with local materials, and importantly: more user friendly - you don't even need a droid that understands the binary language of moisture vaporators.
Good site, though.
I realize that this may make the plate "too hot to handle", but I'd gladly eat breakfast with one hand in an oven mitt if it would result in better toast.
Would something like neoprene/polystyrene plates (as insane as it sounds) provide a solution to slightly soggy toast at a lower temperature?
It's one thing to find an optimal temperature, but a completely other beast to find a practical solution to it!
I never measured it, but I had the feeling that the A form (looking from the side) slightly interferes with the rising steam.
How do you do it?
This is no different than why you have vapor barriers in certain climates.
Without trying to sound dismissive, i was not aware there was a lot of experimentation here necessary to figure out the temperature at which the water would stop condensing on the plate again.
Still a fun article, of course :)
(I live in one of the 190+ countries that use Celsius, but I know that 99.8% of things on the Internets are written by people from just one of those three other countries that doesn't. I'm also aware that for reasons that are a bit bewildering, everyone in those 190+ countries politely goes out of their way to make it clear that we're talking metric, because we're now used to the idea that if people don't mention units then they're probably from North America, and consequently are almost definitely using gallons (US, not UK gallons), miles (US, not UK miles), Fahrenheit and other deprecated units. We should probably stop being so considerate.)
Maybe it will work out better in the future.
The biggest problem I see is that the gains of this technology would be rather small anywhere with conventional high-speed rail, or with existing infrastructure that can be upgraded to that. With maglev, everything needs to be built from scratch. That's just not very attractive for any place that has consistently expanded its rail network ever since the height of the industrial revolution.
And for what? A 180 km/h faster train? I personally very much want that, sure, but is it worth it for anyone building it? I'm pretty sure I know the answer in Western Europe (though I can always hope that maglev has a future there); I'm not so sure when it comes to the US.
Both are extremely expensive per mile compared to HSR (high-speed rail). Sure, it's faster than high-speed rail, but a technology doesn't make it because it's better; it makes it because it's more practical to implement.
Service is not as tried and trusted as HSR. Germany's test facilities were torn down after the Shanghai maglev was built, and Japan's maglev hasn't been expanded. On an emotional, human level it just doesn't feel trustworthy. If you're not growing you're dying.
Both technologies are proprietary, whereas HSR has more companies and manufacturers to choose from.
HSR could probably compete with maglev speeds by building a wider gauge track, using larger wheels, and implementing more aerodynamic designs to reduce drag and power consumption.
Germany and Japan are pitching their maglev trains while they themselves aren't avid users of them.
=== Lessons ===
If you want something to succeed sometimes you have to set it free.
If you're not expanding or growing you're dying.
If you want people to use your solution, instill trust by investing in and using your own solution.
If anything, Windows' level of backward compatibility is a giant cautionary tale: enable poor behavior from devs, and it will proliferate. You cannot trust app devs to do the right thing, they need to be forced to; whether by gatekeepers at app stores, or OS restrictions. It is a tragedy of the commons. Whether it's inane programs inserting themselves into the systray, 'preloaders' for bloated apps (which slow startup), browser extensions, Explorer add-ons, or other garbage, app devs still seem to do a fantastic job of gunking up a Windows install.
This is why it's a bit of a blessing that webapps can't do much; because the more powerful they become, the more annoying and inane they will be.
Though Raymond is correct. The MinGW guys should probably "write and ship their own runtime library," or at least use the msvcrt from ReactOS or Wine. That should make it possible to statically link to it.
Having said that, Microsoft probably won't change their msvcrt.dll in a way that breaks MinGW software, since their users will complain that they broke VLC.
It's not that I like the MSVCRT runtime; it's just that I have to target it. Any popular commercial product that has some form of plugin architecture through DLLs (Autodesk, for example) more or less requires one to compile one's own plugins with the exact version the main application was compiled with.
It's a bit of a strange moment: one developer cries that OpenSSL shouldn't have used its own malloc implementation, and then another cries "don't expose the malloc/free interface (but do provide your_api_malloc, your_api_free), and this way you can target any C runtime".
Now these are two completely different things, but not so much. What if, say, OpenSSL used the runtime's malloc: which version of MSVCRT.DLL would they have targeted? Does anyone really expect them to target all these different versions, and all these different compilers, when you can't even find the free versions through MSDN anymore?
(Here I'm ignoring the fact that you can't easily hook malloc and replace it with a "zero after alloc" function, but that's just a detail.)
What I'm getting at is that there are too many C runtimes; hell, DirectX was better!
I only wish MS had somehow made MSVCRT.DLL the one and only DLL for C (C++ would be much harder, but it's doable).
I think the answer is "no" because Linux distros generally recompile the world with each new major standard library version. If any C standard library gurus are reading this, feel free to chime in!
It's kind of interesting to "play" with. It's an absolutely massive download however because of all the captured video that is necessary to allow the user to look in any direction.
Hey guys, I'm from Atlantic Productions, and this whole article is about 60% correct. We're currently working with the Rift and we're really excited by it. We've got a couple of things in development at the moment, maybe three things in fact. They're all potentially fantastic projects, but as you all know it's quite a difficult thing right now to fund development of these things. We're considering putting out a Kickstarter for a project, but we'd only put it out there if we knew you guys were interested. So, as a very simple show-of-hands kind of thing: if we were to make an immersive documentary, where you are in the scene, would you be interested in helping fund that on Kickstarter? Would love to hear your thoughts and suggestions.
A naive interpretation would be that, for that to be possible from a prerecording, you'd have to have a 360-degree recording from the perspective of each cubic millimeter within the given volume of space in which you'd expect someone's head to move.
Of course that's impossible, and there are certainly ways to interpolate from fewer viewpoints, but I've not heard of any that sound like they've convincingly solved the problem. Is there one?
So when a federal entity isn't providing enough revenue to the federal government, it can compensate by charging other federal entities. This seems like a very convenient way for government businesses to misreport their actual income; I'd like to see how much of the NTIS revenue actually came from the real market.
Also, they should Google the creation date of the internet. :-)
"(2) NTIS was established in 1950, more than 40 years before the creation of the Internet."
It takes some real skill to find reliable, accurate, up-to-date information on the internet. Could NTIS still serve a purpose by Googling for more critical research? Or maybe the idea is that that job is for the Congressional Research Service.
So government documents are the inspiration for the way the Krang talk in the new TMNT cartoons...
"No Federal agency should use taxpayer dollars to purchase a report from the National Technical Information Service that is available through the Internet for free."
It doesn't say 'legally' anywhere in there.
On the other hand, showing a cold unwillingness to help when doing so is by far the above-and-beyond response doesn't engender good customer loyalty. It's also how StartCom operates. This is the same cert authority that insisted that I send them a full, unredacted copy of a mobile telephone bill with every "family plan" member's full call, SMS, and data history in order to call me. Otherwise, they could only "verify" me by sending a snail mail letter from Israel to South America (where I lived at the time). Independently-linked, outside verification databases operated by local government entities weren't sufficient.
At least they're consistent with their "rules are rules" processes.
    $ gpg --gen-revoke $(whoami)@$(hostname -f)
    gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    How would you like to pay us?
       (1) Mastercard
       (2) VISA
       (3) Other
    Your selection?
In fact, a place in the revocation list should be reserved every time a cert is issued, possibly with a mechanism to trigger it with the private key. For example, if I send a message encrypted/signed with my private key to the revocation authority, they can decrypt/verify it with my public key, which they received when the CA issued my cert.
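A rough sketch of that trigger mechanism using the `cryptography` package (the key names and message format are mine; real revocation requests are more structured, but the verify step is the core idea):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair generated at cert-issuance time; the CA/revocation
# authority keeps the public half on file.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The cert holder signs a revocation request with the private key...
request = b"REVOKE cert-serial=1234"
signature = private_key.sign(request, padding.PKCS1v15(), hashes.SHA256())

# ...and the revocation authority verifies it with the public key it
# already holds; verify() raises InvalidSignature on a forged request.
public_key.verify(signature, request, padding.PKCS1v15(), hashes.SHA256())
```

Since only the key holder can produce that signature, the authority can flip the pre-reserved revocation-list slot without any other proof of identity.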
PKI as it stands is fucked up.
Then I left the game industry, moved across the country, and had another kid. Suddenly, motivation and time were scarce. I backed out of the book deal and basically put it on hiatus for two years. I still really wanted to finish it but it just wasn't happening.
About a year ago, I realized that if I didn't finish it soon, I never would. My familiarity with the domain was fading every day. I didn't want the project to be a failure, so I decided to try writing every day.
I didn't have a set goal each day, but I tried to do around 30-45 minutes. That ends up being ~500 words of first draft, or ~1,000 words of later revisions.
In the past 309 days, I've finished 12 chapters. That's 59,568 words, plus a few thousand more for intro sections. I've redesigned the site twice, set up a mailing list, gotten a business license, and a bunch of other grunt work.
I'm about halfway through the very last chapter now (!). In less than a month, I should be able to say the book is done. (Though what I mean is that the manuscript is done, I'll be doing ebook and print versions after that.)
I absolutely could not have done this without working on it every day.
Having said that, programmers should spend at least as much time reading and thinking about code as they do writing it. You can write code for hours each day and do nothing but revert to the technologies and techniques that you find most comfortable.
I work hard as-is teaching WDI at GA. I commit code frequently, but I also really want to focus more on work-life balance at the expense of getting more done. This summer, I'm taking two months off to do Burning Man and travel the country via motorcycle. During that time I expect no code to be committed. Do I feel bad about that at all? Not in the least bit, in fact I'm super excited to do it.
Currently, I try to not do much work on weekends. I like working hard during the week and then stepping away from the computer. I'll go and play music, ride my motorcycle, hang out with friends, travel, etc. The more time spent on my laptop on weekends feels like I'm missing out on things that matter strongly to me right now.
Now, I am nowhere near the prolific coder that John is, and nowhere near his skill. I don't think he's wrong for doing it this way, and I'm glad that it's producing results for him, but it isn't right for me. I also go through periods of wanting to code daily, and other times when I'm OK with not coding for several days at a time.
To each their own. Also, Hi John!!! I haven't seen you since betahouse or you holding a Jelly at your place in Cambridge.
One big problem I've noticed with not working consistently on any one task: after dropping and returning to a project, I find myself familiar enough with the areas I last touched that I want to speed through them to reach the point where I can begin working on new ideas and concepts. But in most cases, those areas I left off at were the very reason I jumped ship, either because they were too difficult or too mind-numbing to wade through. Leaving them incomplete and unlearned means I have to take a few steps back and fully refresh myself before I can continue building, which leads to a lot of frustration and the feeling that I'm wasting a ton of time.
Currently I'm in the complete opposite modus operandi. I don't do a lick of side-project work during the week, and on weekends I take modafinil (a wakefulness-promoting medication) and stay up nights on end to crank out as much as I can.
I get an INSANE amount done on the weekends when I have the energy to pull this off, but it's horrible for my health. The rest of the week I have anxiety about the coming weekend, and it completely throws off my circadian rhythm. Not to mention that I'm only able to pull this off perhaps once or twice a month.
I'll definitely be changing my work schedule to be more in-line with a daily habit. Being able to look back and see a lot of consistent work being done sounds way preferable to being able to look back at a few weekends of consistent insanity.
Yes, it makes you more productive, but what if you fall in love, get sick, have a child...? Then you feel guilty about not catering to your side projects and guilt breeds procrastination.
I learned how to break down work into small pieces, and to finish one small piece and call it a day rather than leave something half-working for the next day. Because of this, I've left projects dormant for 3 months and then picked them up again.
Granted, my side-projects are for-fun and not for-money, that makes it easier...
So now, I'm trying to do work in album-length increments. Put on the headphones, pick an album, and work on one task all the way through it. No breaks, no interruptions. It's kind of a Pomodoro technique variant, a bit longer and with the headphones involved for extra habit and insulation from the outside world.
I think the key takeaway here is that sticking to a plan is helpful, and that a coding heavy plan is a productive one. This is a great post for that.
I would argue that a good plan should include time off for reflection, and to avoid burning out. I have seen too many engineers burn out because they were convinced that working constantly was optimal for progress.
While I admire the dedication and focus it takes to keep up such a routine, I am certainly concerned by the quality-of-life cost and the narrow-mindedness of forcing oneself to code on a daily basis. What about days off? Going out with friends or family for a weekend or on holiday? Would one suggest bringing your laptop so you can stick to it? This is madness to me...
I love to code; I contribute to OS projects and code both for a living and for myself, but for nothing in the world would I even attempt such a thing.
Setting yourself goals is great, and required to some extent, but on a proper schedule. Going to the gym 3 times a week can be achieved without beating yourself up over the fact that you didn't go every single day, and you can still substantially improve yourself. I don't envy those buff dudes who stick to it.
I'll stick to enjoying evenings with my wife, coding maybe once or twice during the weekdays, spending an extra day on more complex issues on the weekend, and resting on the last day. Just saying.
Here's an article that really complements the submission: http://start.jcolemorrison.com/how-i-fight-procrastination/ It's titled "How I fight Procrastination" and gives advice on how to break up tasks into day-sized activities.
Finally, I want to say I personally disagree with the OP's 2nd point:
2. It must be useful code. No tweaking indentation, no code re-formatting, and if at all possible no refactoring. (All these things are permitted, but not as the exclusive work of the day.)
And, if I had this rule I think I'd avoid refactoring a lot of code that needs it. I'd spend more effort squeezing that square feature into that round hole if refactoring "didn't count".
Don't write code every day, do something you want to everyday.
I had a bit lower baseline than the author. My rules are as follows:
1. Commit something, anything. Even if it's just fixing a typo in a readme or phrasing some documentation better.
2. You must commit every day.
3. Every contribution must be useful.
Will need to change my attitude and get more done. Good piece.
I've been doing a lot of side-project hacking the past three months, as evidenced by my Github activity graph (https://github.com/zhemao), which, admittedly, is not as impressive as Resig's. However, this week, I finished up my latest side project and found myself at a loss for new ideas. At first, I did feel a bit guilty about not doing any coding, since it had been a long time since I had nothing to work on. But then I realized that there's more to productivity than a nice contribution graph and sometimes it's good to take a step back in order to think, reflect, and get inspiration.
I'm currently reading through Patterson and Hennessy's "Computer Organization and Design" to learn more about computer architecture. I'd also like to practice my saxophone some more, start learning how to draw, help a friend who is still in college find a job, and expand my social life a bit. My Github account will still be there when I am ready to get back into it.
I'm suspicious that there are many projects that could be decomposed like this, or even into 10 minute blocks, but that it'd be really helpful to have tools that makes this more achievable -- ones that basically remind you of where you were and help with the what-do-I-do-next decision.
Does anyone have any experience with this kind of development process?
What matters though, is how -you- work. Are you the sort of person who prefers to code as much as possible? Code every day. Do you enjoy getting a big thing done fast? Hackathons are for you. Do you have children, a life or a job? You might want to code whenever you can instead of trying to force yourself into something that might not work for you.
The best thing about the 'little and often' approach is how you get drawn into fixing something big just by starting to fix something small. Getting into The Zone for hours at a time is great and everything but honestly I'm starting to view the whole process as just clocking in keystrokes.
My gitstats (http://notes.darkfunction.com/gitstats/index.html) is showing commits on 56 of 85 days. A week of the remainder I was on holiday, and I tend to rebase quite a lot so actual days committed should be higher. But in that time I have written over 18,000 lines of code and removed over 6000. Almost a full iPhone application since January in my spare time, now onto the home stretch and couldn't be more pleased with the results.
If the latter is true, do you really need advice?
Said differently: flow is the opiate of the masses.
See my graph: https://github.com/steveklabnik
As you can see, I'm about to lose a ton of green. I'm at 87 days as my longest, but July 6, 2013 was brutal for me. I was actually flying, and had saved a small bit of work to do during a layover, but then I totally forgot.
Once that chain was broken, it was super easy to justify taking some time off...
For me at least, the context switch required between what pg calls the manager's schedule and the maker's schedule is so huge that it takes hours to cross that gulf (that's what I'm mostly switching between, anyway).
Do you just sit down and force yourself to hammer out code?
Last year, I set a goal to teach myself git by committing at least once every day for a month. At the end of it, I saw the streak and was too afraid to see it drop back to 1 in a snap. Ever since, I've been committing code daily; it's been about 40 weeks, and I'm still going strong. Being a full-time student, this wasn't really easy for me, but I'm proud of myself.
The one thing I learned is that the problem isn't a lack of ideas or time, but a lack of motivation to work on them.
Really enjoyed your post, though. I think I might give it another shot from a different perspective.
Sometimes I burn out, and in those instances I take my free time away from programming.
The most important takeaway is to figure out how you want to improve yourself, instill passion in doing so, and then execute.
Is that not a negative? I find it hard to stop thinking about what I'm working on, and it negatively impacts my life. I leave the office after 8 hours, but the next 2 hours are spent turning over problems in my head, and the 2 hours before I sleep are spent on it too. The days that I work on a problem at the office for a few hours and can't unblock myself before leaving are hell. My brain won't turn off until I can get into work the next day and begin on the problem. Some days I will even wake up in the morning or night with answers to the problem. Why is the AWS instance in my head turned on all night long when I'm not even getting paid for it?
On nights when I absolutely cannot write a piece of working code, I scaffold out the tests. When I wake up the next morning and have 5 minutes with my coffee, I pass a test. Not much gets done, but by building the habit and ability to "jump into coding", no matter the time, place, or circumstance...that's how I've been able to build the coding-zen-mentality needed to write "real" code when the time comes.
If your job leaves you depleted, and when you arrive home you're like a husk of a human being, you can't expect to do something like this.
Take into account that great developers like John live in a place where they can grow; you can't copy what they do and expect the same great results in a not-so-great environment.
I had this experience when I was working on a book and had to spend a considerable amount of time every week on one example. The book had 100 examples, so it took me two years to complete, but the experience was amazingly satisfying because I was able to justify the effort of going slow and steady.
The other thing I noticed is the increase in quality when you do less but give yourself more time to think. Keeping the problem in your mind creates innovative solutions, which is impossible if you just want to hack everything up in one weekend.
My personal favorite is keeping a point system for all the good things you want to do in your day, adding them up over the weeks, and checking the total at the end of the month to see where you are lagging behind: what percentage of life you are actually able to live the way you want. I haven't got to 100% yet, but above 60% I give myself a pat on the back.
Anything in the afternoon is a steady decline and by evening I should just do something that doesn't involve sitting in front of the glowing box. Trying to push yourself too hard results in overall productivity loss.
Is it better to focus on one project until completion, even if you aren't as into it anymore? What do other HNers do regarding multiple on-going side projects?
I say: take three months off from even touching a text editor and practice guitar every day.
I think my system leads to happier, healthier human beings.
Life is too short to waste it on things you don't love. Remember, jQuery brought you fame not because you were chasing fame itself but because of your love for jQuery and programming.
Love for what you do comes first, money is just a secondary effect.
It applies not only to coding, but also to other areas.
Always be coding.
However, Dan Ariely has explained that the secondary effect is potentially more powerful. For those that choose to stay, they will forever live with their past action of having turned down lots of money to work there. So, when they're having a crappy day and hating their job, they're probably thinking "why didn't I take the money and quit?!". The only way to reconcile their thoughts and actions is to explain that, in fact, they must really love this job and therefore should work hard at it. This effect is known as Cognitive Dissonance and is fascinating.
Here's a link to a video of Dan explaining this and a really excellent Coursera course he does on Irrational behaviour.
Here's how their typical financial offer is structured for new software engineers:
1st year: signing bonus + relocation bonus + 5% of stock grant
2nd year: signing bonus + 15% stock grant
3rd year: 40% stock grant
4th year: 40% stock grant
If you quit within the first year, you have to give the relocation and signing bonus back. That's much much more than $1k. So there's a strong financial incentive / golden handcuffs to keep you there for at least 1-2 years, even if you are unhappy!
After the 2nd year, the financial incentive of staying is still there in form of the large stock grant (which has grown due to their stock price rising) that you've been promised and waiting on for a long time.
I can see someone rationally and happily taking the incentive after the third or fourth year and quitting (i.e. after they've done damage to the work environment as an unhappy/unmotivated employee, and no longer have to give a fortune back to the company)... but before then, I doubt it'll change the behavior of any currently employed, overworked, over-paged, under-paid, under-appreciated software engineers.
Who this policy might affect though is future hires, and their perception of Amazon. People who have a choice between offers from MS and Amazon for example. They might consider this an interesting policy and assume that it would have improved employee morale at Amazon even though it's common knowledge that Amazon has terrible work life balance, etc.
I should also note that the Zappos policy makes a lot of sense to me, but this is very different from that, as is the employee culture of Zappos from Amazon.
No quick success story to tell - I've been bootstrapping for 5 years in China and it's been hard. But I've been happier overall focusing every day on pretty much whatever-the-hell I want to think about, and my business just broke $1M USD in revenue this year by doing that, so overall it seems like the right decision for me.
Policies like this are probably a win-win for all involved.
This deal is for warehouse employees only, and most of the people (90%+) working in the warehouses are contractors.
They handpick employees once a year and offer them this deal.
Not to mention, as another commenter pointed out, once you decline the money you will look back and remember you made the decision to stay when presented with an opportunity to leave.
Maybe this type of program says more about the weak state of organized labor in the US than it does about breeding a healthy and good company culture. There seems to be something you can read between the lines here.
If this is a global policy then the argument that they're the scum of the Earth with regard to employees cannot hold much water.
This would never be worth it for a developer working for Amazon proper.
If you want to get rid of someone in most EU countries, it's going to cost you a lot more for them to sign away their rights by quitting. From that perspective, this is just an attempt to get rid of people cheaply.
But exactly those people you actually want to take the offer won't, they are much better off forcing their employer to either fire them or make them a better offer.
Unfortunately a portion of this research was funded by the DoD, which decided to cut funding in 2012 at least for the Sarepta Marburg cure, which significantly slowed down progress and prevented any stockpiling of the medicine.
All you need is for one case to hop on an international flight, and things get much harder to control.
The panic is spreading and people don't know what to do, hence why violence is now breaking out.
Here is also a great visualization of the development after satellite images were available: https://wiki.openstreetmap.org/w/images/a/a8/Gueckedou-mappi...
And the changesets: https://wiki.openstreetmap.org/w/images/f/fe/Gueckedou-chang...
It should be "OpenStreetMap and The Red Cross / MSF ..."
Instead of doing:

    RUN echo bar >> foo
    RUN echo baz >> foo

you can combine them into a single layer:

    RUN echo bar >> foo && \
        echo baz >> foo
That said, the biggest downside I've seen with Ansible is reusable components. They have something called Galaxy in beta, which should help, although it still feels a bit rough...
Genuine question, not intended as any sort of troll: what benefits would people with this philosophy say their organisation gains from routinely deploying multiple times per day?
I have nothing against better testing tools or more efficient development processes, of course, and if you have a serious bug then being able to fix it as quickly as possible is obviously beneficial. I just don't understand where this recent emphasis on always trying to move fast has come from, or what kind of management strategy someone might use to take advantage of such agility.
Given that a lot of graphics cards were purchased for the sole purpose of mining bitcoins - I'm wondering if there is a huge surplus of cheap graphic cards out there now?
Strangely enough - eBay still shows graphic cards that were popular for mining, like the ATI Radeon 5970, still selling for around $300. 
I'm guessing that's evidence that the graphic card market wasn't wildly impacted by bitcoin mining?
This week CEX/Ghash.io, the largest bitcoin mining pool, launched their own auto-switching pool https://ghash.io/MULTI (warning: must sign up through CEX.IO). This will probably be the biggest one soon, as its hashrate has already tripled today, but for the last few months the big three have been the following:
This is an always up to date profitability comparison of these pools vs. straight Litecoin mining made by Bitcointalk user Suchmoon: https://docs.google.com/spreadsheets/d/1VOAhFX1XRizdaTp71qnY...
The cost of mining to them is practically zero.
Bitcoin, like Punk certainly isn't dead.
Bitcoin sounded and looked like the sex pistols a year ago, now it sounds and looks like Green Day.
I passed Steve in the hall and he glared at me as I walked with my doughnut. Steve was in great health in those days while I was pasty and obese. (Still am, sad to say.)
But I was happy with my doughnut. Steve glared at me but didn't say anything. I slunk away.
The next day, there were no more doughnuts at any of the cafés on the main campus. I don't think it's a coincidence.
Anyway, if you enjoyed this, you should read the history of Safari posts he did a while back, also a podcast he was a guest on one time, though I forget who it was with -- ah, Debug I think -- that was really excellent and well worth listening to.
There's also Andy Hertzfeld's folklore.org, however that is focused on Steve's original tenure at Apple, not the "comeback" from the late-90s on.
Otherwise another great piece.
On another note, it would be interesting to see if a website containing all these memories of Steve Jobs ever comes about. A crowdsourced biography if you will: storiesabout/stevejobs .
In making sure nothing ever crashes, Apple has moved more and more to an OS that is too restrictive for my taste.
Yes, Steve could be intense at times. But he was also a real person. He had to deal with the ordinary and mundane aspects of life like everyone else. Maybe even enjoy them.
He looked at me funny, but I couldn't tell if he got the reference to Bizarre Sex #10: http://silezukuk.tumblr.com/post/3151672333 [NSFW]
End of story. Before you continue celebrating Jobs, ask yourself a question: do you want to promote that kind of behavior in the Valley?
Whoever sees your project can help in little or big ways, from joining the team to becoming an advisor or a beta user. Teams are soon getting public/private collaboration tools inside projects as well.
At the same time your profile shows what areas you are great at or looking for help in/learning in, example:http://doerhubassets.s3.amazonaws.com/assets/badge-67f14a8ee...
So you can really easily see people you have a lot in common with and who share complementary skills. An app with real-time chat and serendipity matching is in the works as well. It is entirely free; we haven't made a cent with it, but some amazing projects are now in beta because of our work, and people who would never have met otherwise (a hacker and a surgeon, for example) are now doing projects together. We're past 600 doers and 80+ projects as of yesterday. You are welcome to join.
We don't spread it randomly. Instead we mention it only to communities of doers we respect and would want to work with and I hope you will do the same if you join in.
a) You're suggesting that we start an HMO meme on Hacker News which clearly means "I'm looking for help, this is what I'm looking for help with"
b) This post is the first one and it's your list of things?
If so: Cool :) I can't help with any, but I thought that this clarification may help others.
I was also wondering to myself: would this be a good idea for a monthly thread a la Jobs/Freelancers/Open Source? I decided that it probably wouldn't be, because you'd end up with a difficult-to-read list of things that people may or may not need help with.
Articles like Tom's which include specific requests are probably the best format for such things. We don't want any information overload, right?
And for the record:
London-based if anybody wants to get in touch and discuss changing the world or just creating something awesome!
Framing does seem to be a dark art in the creative world. Framing local to where I am in the UK is expensive and there isn't anything [anymore] between the expensive, custom, wait-a-few-days, framing and IKEA.
Perhaps I should try and make a robot-controlled cross-cutting mitre saw and start a new business.
A monthly HMO post can be that catalyst.
And this can be the first. Just don't use this opportunity to purely pitch your product.
My site was made to run leagues & tourneys, and I'm a single founder. If anyone would like to talk with me on my business model ideas, my email is in my profile.
I can't provide feedback on their behalf, really, but I'd be happy to introduce you to them.
Also, I really love the idea of the HMO meme. The rate problems can be solved when pushed to a distributed, diverse audience such as HN is amazing.
Help configuring IPV6 public address pool to Ubuntu hosted lXC containers
As for me, I've been working on a career finding tool to help people find good career fits. It's almost done, and I'm excited about launching it. But I'm not sure how to get the word out about it.
If you have any ideas, or would like to try it, my email is dgurevich5 [at] gmail.com
You just described every coder in Portland. I can send you a few truckloads if you need them.
Here's to HMO :)
How about this format?
They could all be organised on the same day, saving lots of time for those who don't want to be on HN too often!
his email: richardsondx [at] gmail.com
- Some experience working with customer service. I will
- Any type of remote, non-technical (think business) work. I like finance and Analytics.
- Analytics help (setup and mostly data interpretation, telling the story of your traffic)
A student group that I will soon be President of at the University of Northern Iowa received more in donations and financial support than OpenSSL. Our student group is not the best managed, but we care a lot about large sponsors, keeping good relations with them, and making asks that matter.
If someone told me that panhandlers and Midwest student organizations are out-fundraising OpenSSL, I would scoff and laugh. OpenSSL? That's mission-critical software running on nearly every PC and post-PC device in the world. You know what OpenSSL reminds me of in this respect? SQLite.
SQLite charges $75,000 for consortium members to have 24/7 access to phone support direct to developers, guaranteed time spent on issues that matter to them, and so on.
The fact that this doesn't exist for OpenSSL is an embarrassment to project management. I made an offer in that email thread to try to raise $200,000 for OpenSSL by the end of 2014, and I'm repeating it here for visibility:
If you are an employee of a corporation that wants to donate to directly support OpenSSL development by funding staff time, send me an email right now: firstname.lastname@example.org
If you are in the OpenSSL foundation, send me an email right now and I will try to solve your problem by finding a phone number at every major OpenSSL using corporation and making an ask. Want me to do that? Send me an email right now: email@example.com
Thoughts on the pros and cons of either approach with respect to improving information security infrastructure?
And if you have only one really large donor, you get them to give you a check. And then you put their name somewhere. And you send them some thank you letters. And you ask for their advice on how to talk to their friends, as maybe they might also want to donate. Because patio11 is just dead-on right: it is more useful to increase the incoming money here, not avoid losing some fees :/. But again: even if we choose to nitpick fees... this conversation is still going nowhere if the distribution of donations and the process of receiving them (if you have mostly random donations, having them do bank transfers is going to massively increase the loss rate ;P) is not where the discussion started.
otoh, if there were a foundation that collected money and funded many projects..it'd look like apache perhaps..
personally, i wouldn't mind an option to donate to apache or openssl in a humblebundle, nor do i mind an option to stick a donate button/widget on my website..or even better, have the widget rotate recipients..
I think we need a score card for donating to open source projects, in the same way we have score cards for using green materials in devices, or using renewable energy for data centers. We should see periodic reports of how much money these companies donated to open source projects.
One week before, a module is declared the subject. At the time of the party, the major owners are on the hook for function-by-function questions, and line-by-line when it merits.
reddit? or even a special github community service.
It turns out that the way that he "compares his responsibilities to those of Bill Gates when he managed Microsoft" is by stating:
The occasional person sends this query to both mailing lists (in almost all cases only one mailing list is appropriate) and when they do not get an immediate response email the entire core and development team. Presumably this is the same kind of person that emails Bill Gates whenever they have a Windows problem.
Emailing open source developers who you do not know at their personal email addresses is rarely appropriate when a public mailing list for the project exists. The tone is a little prickly but what Henson says seems reasonable to me.
It varies from "My god, we are all dependent on half a dozen volunteers" to "why doesn't someone pay these guys?" to "what a bunch of fools - we cannot all use the same code".
Bullshit. There's something that was left unsaid in the article, specifically "best practices". Why wasn't the length validated at all? There's nothing new or "complex" about simple defensive programming. How can anyone (even a part timer) working on software that's so security critical be so clueless? Forget about more obscure stuff like the full disclosure mailing list, just reading CERT alerts should make this abundantly clear to anyone in security. Hell, the xkcd cartoon  makes it abundantly clear. If you can't take that cartoon to heart, you have no business writing Internet facing software.
I think Marco Peereboom got it right oh so many years ago when he said that OpenSSL was written by monkeys. 
However, the article does get something right. It's insane that something so critical to internet commerce is essentially a hobby project by a few people mostly in their spare time. That's not simply crazy, that's totally fucking insane. That's the biggest takeaway of this entire fiasco.
 http://xkcd.com/327/ https://news.ycombinator.com/item?id=7556407
That's kind of like a former Christian preacher living in Alabama raising money for Planned Parenthood.
Could this be the day Google, CloudFlare, and other major internet companies in the know before the public disclosure patched their servers?
Is this graph generally available, for any time range, from NetCraft or another monitoring service?
I'm aware the graph shown has a time range too narrow to conclude anything but this made me think that monitoring this graph or noticing unusual reissues from major internet services (Google/CloudFlare/AWS/Facebook) could be used as an advance warning mechanism that a significant SSL flaw is about to be publicly disclosed.
Do you think Firefox, Chrome will release an update in the next few weeks with revoked certificate checks enabled?
The article's amazing for being a look into a future that could've been. A number of physics tools that are around now depend on microscale cryogenics, but they're still fairly rare (like SQUIDs).
 http://en.wikipedia.org/wiki/Focused_ion_beam http://en.wikipedia.org/wiki/SQUID
If you would like to read more about this, the author of this article has another blog post  that discusses the Susskind paper "Computational Complexity and Black Hole Horizons"  in its first half.
The key point, for those who don't have time to read the post:
> On one side of the ring is AdS (Anti de Sitter), a quantum-gravitational theory in D spacetime dimensions: one where black holes can form and evaporate, etc., but on the other hand, the entire universe is surrounded by a reflecting boundary a finite distance away, to help keep everything nice and unitary. On the other side is CFT (Conformal Field Theory): an ordinary quantum field theory, with no gravity, that lives only on the (D-1)-dimensional boundary of the AdS space, and not in its interior bulk. The claim of AdS/CFT is that despite how different they look, these two theories are equivalent, in the sense that any calculation in one theory can be transformed to a calculation in the other theory that yields the same answer. Moreover, we get mileage this way, since a calculation that's hard on the AdS side is often easy on the CFT side and vice versa.
So Roger Penrose, not content with the now-seemingly-attainable quantum computing, speculates there is an even more magical quantum gravity computing, which brains just happen to use, that makes them special, that Turing machines can't compute?
The man just really hates the idea of AI, doesn't he?
That is simply wrong. It is called relativity for a reason. An object might be traveling arbitrarily fast, but in its reference frame it is not moving.
That pretty much invalidates that part of the article.
For curious readers: http://physics.stackexchange.com/questions/3436/if-a-1kg-mas...
The next target isn't infinitely many. The target is infinitely infinitely many.
Like something along the lines of forking an infinite number of parallel Universes - that is pretty much the "many-verse" interpretation of quantum superposition (and thus computing) - enhanced with forking of infinitely many time dimensions inside each of the said Universes...
No doubt the guy is competent but this just smells like a poorly executed attempt to move traffic to the blog. I fully expect it to get ignored as such despite the PBS article being good.
One of the best measures we have against Mr. Myhrvold -- given that he seems interested in portraying himself as a public genius of some sort -- is to drag his name through the mud over this. He's not the guy who studied with Stephen Hawking. He's not the guy who wrote the molecular gastronomy tome. He's the very, very rich guy who wants to drag down the entire tech industry to get even richer.
Zakaria: How worried are you that the United States is no longer going to be the place that invents the future?
Myhrvold: I'm very worried. Current course and speed --- we're very good at inventing, uh, but we're also undermining our ability to do that in lots of ways.
We need a major tech company like Apple to take a stand against these types of lawsuits before we will see any real policy change.
I have no idea what he's up to or why he was head-hunted to lil ol' NZ, but an ex-IV guy in a major role in a central funding hub of an entire country's R&D system kind of weirds me out. My conspiracy tendencies are high normally; this takes it to 11. Any comments from a community that might have worked/interacted with him?
That's because IV itself is a parasite.
Yet Apple holds a $1Billion+ judgment on Samsung for violation of Apple's design patent for rectangular device with rounded corners in addition to "pinch and zoom" and "bounce back".
When it's unable to prove a range, the feedback you get from the solvers is inscrutable enough that it's very hard to figure out what additional data it would need to satisfy the analysis. (Sort of like parsing C++ compiler template-related errors.)
I'd hoped that language features like the typestate stuff that used to be in Rust would someday make the work required to use sound analysis tools in production code smaller. I'm not sure if much thought has been given to what kinds of accommodations languages could give to ease static analysis while still being programmer friendly.
It seems that newer languages have actually moved away from analysis friendliness in some respects, however. E.g. in C a signed overflow is always a bug, so if analysis can prove one is possible you have something to fix. Several modern languages have defined signed operations to wrap, so that obvious safety test is no longer available. (You could define in your own code that it should never wrap, effectively writing in a subset of the language, but as soon as you call into third-party code you never know whether an overflow was intended and safe, not without extensive analysis.)
I have had good experiences with Flexelint (PC-Lint). It does not attempt to deeply analyze control flow, more like compiler additional warnings. It flags a lot of common mistakes and can basically turn C into a more strictly typed language. I feel a lot more confident in C code if I know that it passes lint, since it warns if you try to mix unsigned and signed ints, cast away const, call functions with wrong types etc.
Like many static analyzers it takes some work to set up and tune which warnings you actually care about. It is definitely business-priced and feels a bit old (although command-line tools age well).
There is a clear lack of good open source tools. I tried all I could find, but Splint was the only one that would flag switch cases without a break. It was last updated in 2010.
Any suggestions for a similar tool?
They are asking for donations to cover electricity costs. The real donation, of course, has been the time the community has put into this.