hacker news with inline top comments - 31 Aug 2014 - Best
Coffee naps are better than coffee or naps alone
451 points by dctoedt  2 days ago   157 comments top 41
pqs 1 day ago 4 replies      
This is a well-known practice in Spain. I come from Spain and I have always seen my family (parents and grandparents) drinking coffee after lunch and then taking a nap on the couch. Usually, we sit on the couch, take the coffee together, and then every person takes a newspaper or magazine, and after a few minutes of pseudo-reading everybody is sleeping or in a state of deep relaxation. We sleep for 20 minutes. That's all. I guess this is very common in Spain.

This works well because we often take lunch at home. But now that I have lunch at the office, I do the same. I have a 1-hour pause. The first half hour I have lunch, then I take the coffee and I go to the office to sleep on the floor, on a very thin mattress. If my brain is too active, I listen to a foreign radio station with my iPhone. I like to listen to ICI Radio Canada (in French). When I have lunch here, they broadcast the morning news and commentary. The news is about stuff happening in Quebec. It is interesting enough for me to forget the work stuff, and dull enough to induce a deeper relaxation state, which allows me to fall asleep fast. The broadcast being in a foreign language also helps me fall asleep. I often dream during these 20-30 minute naps.

An interesting detail is that I had to learn this habit. I remember being a child and being pissed off because I wasn't allowed to make noise after lunch. Now that I'm a father, the roles are changing and I'm the one sleeping after lunch.

By the way, at night, in order to fall asleep, I never take phones, tablets or computers to my bedroom. Instead, I take a shortwave radio and I tune in the BBC World Service news (fortunately, in Spain we can hear the broadcast directed at Africa). I put a 30-minute timer on the radio and I almost never hear it stop, because I fall asleep first. The day the BBC shuts down its SW broadcasts, I guess I will take a Bluetooth headset or speaker to my bedroom, but not the iPhone. It is very important to avoid computers in the bedroom.

ArcticCelt 2 days ago 13 replies      
I wish I was able to take naps. Can people really fall asleep on command just like that when they have some spare minutes in their day? Shit, even when I am sleep deprived, I can barely fall asleep in my own bed; sometimes it literally takes me hours.

When I manage to take a nap, it's usually involuntary: I fall asleep in front of the TV.

300 2 days ago 2 replies      
I found this by experience a few years ago. It works. At first, I started with naps and drank coffee after. A few times I accidentally had coffee before the nap, and noticed the difference.

But I don't recommend this kind of nap all the time - because after a while it's not that effective.

So, after years of napping (I'm an expert - I became so good at napping that I can fall asleep in 1-2 minutes and wake up without an alarm after 15-20 minutes), my advice would be: take a nap after lunch, between 1 and 3pm, without coffee before the nap. Only in special situations, when you are under high pressure and lots of work, take a "coffee nap" as they call it in the article.

virtuabhi 2 days ago 7 replies      
I have read this before. I do not understand how you can sleep for 20 minutes. It takes more than 15 minutes just to fall asleep.
danelectro 1 day ago 0 replies      
I support napping for productivity without having artificial stimulants (OK, natural organic drugs) involved.

The studies likely involved mostly habitual users rather than unindoctrinated subjects, but from the abstracts' documentation it is difficult to be sure. Perhaps the researchers did not recognize the distinction and just selected random participants, therefore likely to include mostly habitual users.

When I was a caffeine addict (habitual user) I could also sleep after ingestion.

In fact, without a fix right before bedtime it was more difficult to get to sleep, because the nominal concentration level was unsatisfied, resulting more in agitation than identifiable craving. A nice hit would actually help me relax. I was not the only one who could relate to this as an indication of true addiction, where you need the substance just to be normal.

Complete withdrawal took a few weeks (of hellish tiredness, achiness, and irritability) but after this it was even easier to more alertly conduct high-stamina activities and out-perform my still-addicted colleagues without the toxic load on my system.

I sleep and nap much more productively without it too, after kicking the habit.

Sure can write a lot better code when drug (addiction) free as well.

When I do feel it's necessary (never for marathon coding), a single cup of coffee (after I have already been awake 24hrs and need a little boost) naturally keeps my otherwise drug-free system awake for the next 24hrs easily, which can be helpful for things like long-distance driving once or twice per year. As an addict I would have needed two or three times my nominal habitual doses to feel as alert on those same lonely roads.

To me coding does not benefit from this type of non-stop alertness, even if you are working to exhaustion. When tiredness truly comes, a plain nap is better, whether it is after 6hrs, 12hrs, 20hrs, whatever; then go freshly into another session, exhaustion relieved.

Same with driving too, but if the schedule is too tight, a couple times a year will not make you a habitual user like everyday dosage does. I'd rather drive slow for long hours than exceed the speed limit, waste energy, and prematurely wear out my machinery.

YMMV [0] but just because everybody does caffeine won't make it good for you, especially in the long run.

[0] depending largely on body weight, metabolism, and dosage, and for driving, road speed

blutoot 2 days ago 0 replies      
What is the tolerance level for naps in silicon valley companies (both big and small)? Does it depend on the role/position? Wouldn't they rather have all their employees down mugs of coffee instead of taking even a quick nap?
Multiplayer 2 days ago 0 replies      
The Jawbone up is the BOMB of all bombs for taking naps. Put it in nap mode and it waits for you to fall asleep before setting the alarm. So you can get a proper nap without wondering whether you're taking too long to fall asleep, etc.

It can also attempt to time your nap based on your latest sleep patterns. No idea how effective this is.

I love this thing just for its nap mode.

tempestn 2 days ago 1 reply      
Is there any downside to doing this? It seems like there must be a reason why the adenosine signals that it's time to sleep - like your brain actually needs that rest to recuperate properly after activity. Does a "coffee nap" (or caffeine in general) lead to increased fatigue once the effects wear off?

Although I guess to some extent if you're going to ignore the sleep signal otherwise, there likely isn't much harm in feeling alert while doing so, except inasmuch as the alert feeling causes you to tax your brain more than you otherwise would.

mmanfrin 2 days ago 3 replies      
After a couple months of protracted late nights at a startup I worked for, I came up with an idea for a place which would offer by-the-hour nap+shower+coffee pods. You'd go in, pay $20 or whatever, have access to a shower and a bed, and at the end (or beginning) you'd get a coffee of your choice (if after, it would be ready for you when your nap was over).

I think this could do really well in SoMa or the Financial District in SF. Just gotta enforce a one-person-per-room rule so that it doesn't become a place for prostitution.

staunch 2 days ago 2 replies      
> ...it takes around 20 minutes for the caffeine to get through your gastrointestinal tract and bloodstream anyway.

This isn't really true, right? On an empty stomach a strong coffee seems to get me high as a kite in about two minutes.

I've been doing this coffee+nap thing for a long time. It definitely works for me. My trick for falling asleep reliably and quickly is listening to an audiobook.

duffx 14 hours ago 0 replies      
Keep in mind that after 2-3 weeks of habitual coffee drinking, your body acclimates to the effects of caffeine and you no longer receive a performance benefit. At that point all drinking coffee does for you is ward off the effects of withdrawal and return you to baseline performance levels - until the effects wear off and withdrawal kicks in again, at which point your performance is below average.

Caffeine is best used strategically, which means not every day. Save it for days where you're feeling particularly low energy, or need a boost to tackle a problem you're not particularly excited about doing.

That said, the half-life of caffeine is around 6 hours, which means if you have a coffee at 3pm, at 3am you still have a quarter-cup's worth of caffeine in your system, preventing you from entering deep sleep. So even then I'd keep caffeine as a first-thing-in-the-morning type of deal.
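The arithmetic here is simple exponential decay: every half-life, the amount remaining halves. A tiny sketch, taking the ~6-hour half-life figure from the comment above and a hypothetical ~100 mg cup:

```javascript
// Exponential decay: the amount remaining halves every half-life period.
// The ~6-hour half-life and ~100 mg dose are the comment's assumptions.
const HALF_LIFE_HOURS = 6;

function caffeineRemaining(mg, hours) {
  return mg * Math.pow(0.5, hours / HALF_LIFE_HOURS);
}

// A 3pm coffee by 3am is 12 hours, i.e. two half-lives, later:
console.log(caffeineRemaining(100, 12)); // 25 (a quarter of the dose)
```

Two half-lives leave exactly a quarter, matching the "quarter-cup's worth" above.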

Naps I'm a big fan of. I take one every day after lunch. The trick is to simply lie down and close your eyes for 20 minutes. Don't worry about whether you fall asleep or not - just give your eyes a rest. For the first few days/weeks, you might not fall asleep. But eventually your body grows accustomed to the ritual and it becomes easier to actually fall asleep at that time.

rootbear 1 day ago 0 replies      
I have read before that drinking coffee before a short nap is a good combination. It's nice to read that someone has actually tested it. I can't always do it at work but I have used this method at home and it works well. One other suggestion I follow is that I don't take short naps lying down. That encourages my body to go into a longer, deeper sleep. I nap sitting in a comfy chair, in a quiet, dim room if possible. It's very refreshing; I'm more alert and productive for the rest of the day.
copperx 2 days ago 2 replies      
I have found that a Provigil/Nuvigil nap works wonders. Take a Provigil and take a nap. When you wake up you'll be crisp morning-fresh.
cmdrfred 2 hours ago 0 replies      
No shit.
MichaelGG 2 days ago 0 replies      
This works well with other stimulants and other medications too. Take methylphenidate and an opioid, take a quick nap, wake up feeling fantastic.

It's also got emotional benefits. If you are feeling down, then simply waiting for the medicine to do its job can make things worse, as your mind goes in a negative spiral. Taking a nap can give you peace that you're just putting everything aside and going to concentrate on the warmth of the nap and a dream. And before you know it, bam - everything's perfect again.

djtriptych 1 day ago 0 replies      
I thought I invented this 5 years ago during a weeklong coding sprint before a live demo. I called it a red bull nap but otherwise it was exactly the same. They are amazing.
vegancap 1 day ago 0 replies      
I don't know if it's just me, but how the hell do you nap?! I tried once, and I was still wide awake when the alarm that was supposed to wake me up from my hour-long nap went off. I'd have to be really drunk, or not have slept for days, in order to nap. I don't understand how people just nap on cue; I feel like I'm missing a trick here :(
mzs 1 day ago 0 replies      
Works great on road trips, though I need to be just the right amount of tired for it to work. I stop at a diner like Waffle House, get a bite and a coffee, head back to the car, and nod off for a spell. You wake up peculiarly alert.
smegmalife 2 days ago 0 replies      
Hardest part for me is falling asleep in such a short period of time, especially during the day. When I have succeeded with these naps, it's been amazing.
ollysb 2 days ago 0 replies      
This is definitely how siesta time goes in Southern Spain. It seemed an odd habit when I was introduced to it, but it definitely does the job.
jdnier 2 days ago 0 replies      
I have noticed this occasionally on weekends, when the luxury of a nap is more practical. I'll have a small late-morning coffee, feel cozy/sleepy, then nap briefly and wake up refreshed. Seemed crazy at the time. I'd guess you need to feel somewhat tired for this to work (and I am able to fall asleep quickly).
iandanforth 2 days ago 0 replies      
I use this napping mp3. White noise for about 23 minutes followed by silence, then animal sounds. It's fun, give it a try. http://www.grelly.com/napping/Polynap5_23_minutes.mp3
cheald 2 days ago 0 replies      
Caffeine naps are my secret "crunch time" weapon. It's astounding how effective they are - it feels like a reboot. I wake up feeling like I've been out for 8 hours. The effect only lasts for a couple of hours post-nap, but it's often enough to get you over the hump.
giardini 2 days ago 0 replies      
For those who have trouble falling asleep quickly I recommend good earplugs, a quiet place where you can nap undisturbed and a dark opaque towel or cloth with which you can cover your eyes and block light completely. Given that, there's little to prevent sleep once you've settled in.
rhema 2 days ago 0 replies      
I've done this many times. It's really quite refreshing. I made a habit of this after lunch for a time. It helped give me an extra 3 or 4 hours of high productivity. The only downside is that it's better to chug the coffee, rather than sip it slowly.
darkmethod 2 days ago 0 replies      
I can attest to this. I've done this several times for the energy boost. I didn't know there is a name for it; "coffee nap". I tend to add dance/techno music to know when to wake up and get back to work.
Bahamut 2 days ago 0 replies      
This is anecdotal, but I discovered this in undergrad - I would drink coffee before a class, and end up falling asleep briefly during a lecture, and then end up wide awake the rest of the time and more. It really does wonders.
educated_idiot 2 days ago 0 replies      
Can confirm it works. Have done this often over the past 3-4 years. It's easy to fall asleep really fast if you are actually tired (physically or mentally)...doesn't work if you are just bored and wanna pep yourself up.
cgtyoder 2 days ago 1 reply      
Serious question: what can I do when I really need to do this in the afternoon (and it would of course really increase productivity), but I'm not allowed to nap at work?
jonwachob91 1 day ago 0 replies      
Common practice in elite military units for putting off proper sleep for long periods of time.
fluff3141592653 1 day ago 0 replies      
Hmmph! No Thanks. Coffee makes me sort of nervous when I drink it. -Slingblade
shire 2 days ago 0 replies      
It's hard for me to just take a nap on command in the middle of the day. I wish it was easy for me because a nap would help a lot.
shimon_e 2 days ago 0 replies      
Sounds like this would pair well with specially designed time release caffeine capsules that release 20 minutes before you wake up.
scelerat 2 days ago 0 replies      
A 20 minute nap does way more for me than a cup of coffee, but this is a promising approach I'll definitely try.
bernardlunn 1 day ago 0 replies      
I knew this worked in practice, now I understand how it works in theory.
snarfy 2 days ago 0 replies      
When I tried this as a teenager I ended up having lucid dreams.
kbart 1 day ago 0 replies      
I thought this practice was common knowledge; strange that it's "news". I have been using this for years.
halfcat 22 hours ago 0 replies      
How long can coffee napping be used, and what are the after effects?

The article says the test subjects used it to good effect for 24 hours. I wonder if the following day was a complete productivity waste.

jqm 1 day ago 1 reply      
(from the article) "energy drinks are disgusting.."

I disagree. Monsters in particular I find delicious. (I try not to drink them often though... they tend to really hype me up then wear me out not long after).

Dewie 1 day ago 0 replies      
I'm thinking of putting a mattress beneath my desk/cubicle. I have a hard time falling asleep on command or taking naps at all. Maybe properly lying down instead of lying kind of uncomfortably on the floor will help with that.

This is in a university setting, so I have no managers to worry about. The only problem might be noise (I often sleep with earplugs anyway), and weird looks.

x0x0 2 days ago 0 replies      
coffee naps? It's like Joseph is talking to my heart.
Yahoo stopping all new development of YUI
440 points by traviskuhl  1 day ago   216 comments top 32
clarle 1 day ago 3 replies      
First posted on /r/javascript, but I think it's worth posting here too:

I was a member of the YUI team until a few months ago. I'm still at Yahoo now, just on a different team, but just wanted to give my own thoughts on this (I don't represent the company or the YUI team).

My software engineering career started with the YUI team - I actually joined as an intern at Yahoo because of a Reddit post on /r/javascript. I was pretty new to engineering in general back then, and as a biology major with no real professional experience, I didn't have an easy time getting internships. Jenny, the manager of the YUI team back then, really took a chance on me, and that really changed my entire career path. I solved a bunch of YUI bugs, added a few features here or there, and I always tried to help other folks on #yui on IRC, the mailing list, or in-person here at Yahoo, which I really enjoyed. I learned a crazy amount of JavaScript, some pretty advanced debugging / performance profiling techniques, and even gave some talks. Eventually, a lot of people came to me first whenever they had a question about YUI, which was pretty cool.

From the view of some people in the JavaScript community, YUI was always considered a huge, monolithic framework that was only good for widgets. I never thought that was the case - YUI pioneered a lot of the techniques that are popular in advanced JavaScript development today, like modules, dynamic loading, and creating logical view separation in your code. A lot of the influence in RequireJS / CommonJS / ES6 modules can be seen from what YUI did first, which people used to consider "over-engineering".
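The module and dynamic-loading ideas credited to YUI above can be illustrated in plain JS. This is a minimal sketch of an add/use-style registry, not the actual YUI API; the `registry`, `add`, and `use` names are illustrative inventions:

```javascript
// Minimal sketch of an add/use module pattern (illustrative, not YUI's
// real API): modules are registered with their dependency lists and
// attached lazily, dependencies first, when a consumer calls use().
const registry = {};

function add(name, deps, factory) {
  registry[name] = { deps, factory };
}

function use(name, api = {}) {
  const mod = registry[name];
  mod.deps.forEach((dep) => use(dep, api)); // resolve dependencies first
  mod.factory(api);                         // attach this module's exports
  return api;
}

add('core', [], (api) => { api.greet = (w) => `hello ${w}`; });
add('shout', ['core'], (api) => { api.shout = (w) => api.greet(w).toUpperCase(); });

console.log(use('shout').shout('yui')); // "HELLO YUI"
```

The point is the declared dependency graph: a consumer asks for one module and the loader pulls in what it needs, which is the shape RequireJS/CommonJS/ES6 modules later standardized.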

With a lot of new development in JavaScript though (data-binding, tooling like Grunt / Yeoman, promises and other async handling techniques), it was always hard for YUI to keep up with new features while still being able to maintain backwards compatibility with the constantly deploying products that people were building at Yahoo. We had to support product teams while also building out the framework at the same time, and making sure the user-facing products were the best was more important. Eventually, it was hard when developers who were familiar with newer JavaScript tools tried to use YUI, but ended up having to spend quite some time with the framework just to get it working with the rest of the JS ecosystem.

In the end, I wasn't involved with this decision, but I think it was the right thing to do. A lot of the YUI (now YPT) team and other front-end teams at Yahoo are now working on helping out with more cutting-edge core JavaScript work, like internationalization (https://github.com/yahoo/intl-messageformat) and ES6 modules, as well as building out components for newer frameworks like React and Ember (https://github.com/yahoo/flux-examples). Yahoo still has a lot of really strong front-end developers, and working on these more important core components is more beneficial to both Yahoo and the JS community as a whole, than continuing to maintain a framework that's a walled garden.

The one thing to take away from this is that no technology lasts forever, and in the end, what the user sees is the most important, whether it's JavaScript, Android / iOS, or holographic smartwatches.

I'll be a bit melancholy today, but I'll raise a glass to YUI tonight. RIP.

columbo 1 day ago 8 replies      
I know Wells Fargo went full-bore with YUI to the point of creating their own derivative (http://www.yuiblog.com/blog/2013/11/08/yuiconf-2013-an-amazi...)

I have to say, enterprise companies like WF really have it tough. With thousands of applications and tens-of-thousands of developers by the time they implement anything it's already been rendered obsolete.

At least they didn't go 100% Flex like some other companies

kingmanaz 1 day ago 8 replies      
"Node.JS", "isomorphic single page applications", "npm", "bower", "Grunt", "Broccoli", "Gulp", "Backbone", "React", "Ember", "Polymer", "Angular", "Mocha", "Casper", "Karma", "evergreen web browsers", ad infinitum.

While the above bouquet of random monikers may excite the cutting-edge startup developer, try pitching such an amalgamation to management in an enterprise environment.

Inevitably, this week's fashionable web technologies will be supplanted by next week's fads. YUI was nice because it channeled the Borg in absorbing the good from multiple technologies while attempting to provide users with some form of a migration path, usually through its better-than-average documentation. YUI evolved. Many of the above technologies will be cut-and-run like the projects they supplanted last week.

Perhaps the answer to this industry's flightiness will be found in the increasing use of transpilers. JavaScript, with its callback-heavy code reading like so much thread through so many needle-eyes, does not seem to engender longevity in its creations. A framework built around something like gopherjs may be more liable to grow and adapt rather than becoming yet more abandonware.

sam-mueller 1 day ago 2 replies      
I think Yahoo is moving in the right direction on many fronts, and this decision is definitely welcomed within the company. At the breakneck pace of the web, every framework must eventually meet its demise; YUI is no different.

As the person who first brought Ember to Yahoo a year ago, I can tell you that both the mood and perspective towards SPA development has changed significantly; it's refreshing. Developers are definitely beginning to embrace the direction of web components, and the majority now see the value that these newer frameworks provide.

There was a time (not too long ago) where Yahoo mandated the use of YUI. We are now seeing the pendulum swing in the other direction, and teams have more freedom to choose which framework works best for their situation.

Out of all the modern SPA frameworks, Ember is currently leveraged the most right now here at the 'hoo, with almost two dozen projects using it. This is mainly because our division adopted it early, and we were fortunate that our UI engineers were able to get over the learning curve and build some impressive apps pretty quickly. Besides Ember, there are pockets of Backbone and even Angular apps. However, it's pretty clear that the YUI team is especially intrigued with React right now, mainly because it is the most lightweight (again, pendulum) and allows more freedom for the developers to do things their way without opting into a more opinionated framework.

Some on this thread have expressed that they wished Yahoo would have recommended an alternative. Well I can give you my personal answer:

Choose the best framework that fits the job.

Each brings its own strengths and weaknesses, and the best approach you could take is to understand the nature and scope of your project to know which one makes the most sense for your needs. For example, many projects at Yahoo have a ton of code that can't immediately be replaced or refactored. For those projects, React may make more sense because it only solves just one piece of the puzzle (albeit very well), and can add a ton of incremental value. If you are starting a project from scratch, choosing Ember or Angular might be the better choice if you want a more mature framework that addresses the many facets of SPA development. We happened to put more weight behind Ember for our greenfield projects because it provided more structure than Angular, and that helped us immensely when our apps grew in complexity.

It's really great to see the state of JavaScript development in 2014. Even though we are losing a great framework in YUI today, the future does indeed look bright. Cheers!


ecaron 1 day ago 3 replies      
When teams halt development on projects, I really appreciate when they say "People should go use project X" instead. I know it is difficult to put their full weight behind a single endorsement, but the team is obviously picking an alternative, and since their followers trusted their original code they should trust the successor.

It would be great if the YUI team stood up and said "We're moving to something, and think you should too."

mythz 1 day ago 12 replies      
Abandonment is a risk facing any heavy "all-or-nothing" framework; not only is this bad for existing apps built on YUI, but it's also bad for developers' skill-set investments, which will soon become obsolete.

It's hard to imagine heavy popular frameworks like AngularJS falling to the same fate, it would need something far superior with a lot of traction to displace it. But it's still a risk if you build your application the "Angular Way", "The React Way" or "The Ember Way", etc where if the primary developers halt development for whatever reason, your app dev stack becomes obsolete making it harder to attract great devs (who don't want to invest in a dying platform).

It's less of a risk with lightweight frameworks and libraries like Backbone.js, where the code-base is so small and extensible that anyone can easily maintain their own fork. It's also less of a risk for WebComponents, as the component model leverages the browser's DOM and lets you use modularized, encapsulated components built with different technologies; so if one of the technologies ever becomes obsolete, you can always start writing new components with newer tech and integrate them with your existing app, without having to rewrite it.

cousin_it 1 day ago 2 replies      
First they say:

> New application frameworks (Backbone, React, Ember, Polymer, Angular, etc.) have helped architect web applications in a more scalable and maintainable way.

Then they say:

> The consequence of this evolution in web technologies is that large JavaScript libraries, such as YUI, have been receiving less attention from the community. Many developers today look at large JavaScript libraries as walled gardens they don't want to be locked into.


jdelic 13 hours ago 1 reply      
The one thing that YUI does better than any other library is isolation. You can have multiple versions of YUI in the same page, each sandboxed against each other. That means that if you deliver JavaScript for other developers to include in their pages, YUI is an awesome framework or even the only real option. Thanks to YUI Loader it's even self-repairing. jQuery's noConflict is a far cry from that.

Can any of the other libraries mentioned here (especially the newer ones like React and EmberJS) provide the same thing?

When I read threads like this and I see pages like polygon.com that pull in tens if not hundreds of external JavaScript resources, I always think that the YUI sandbox model is still 5 years ahead of the status quo in other libraries.

Again, does anyone know here of alternatives for sandboxing?
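The isolation property described above can be sketched in plain JS. This is not YUI's real sandbox implementation, just a minimal illustration of the idea; `createSandbox`, `add`, and `use` are hypothetical names:

```javascript
// Minimal sketch of sandbox-style isolation (not the real YUI API):
// each createSandbox() call yields an independent namespace, so modules
// registered in one sandbox never leak into another, the way two YUI
// instances on the same page stay isolated from each other.
function createSandbox() {
  const modules = {}; // private to this sandbox via closure
  return {
    add(name, mod) { modules[name] = mod; },
    use(name) { return modules[name]; },
  };
}

const a = createSandbox();
const b = createSandbox();
a.add('logger', { log: (msg) => `A:${msg}` });

console.log(a.use('logger').log('hi')); // "A:hi"
console.log(b.use('logger'));           // undefined - b never saw 'logger'
```

Real YUI adds versioning and self-repairing loading on top of this, but the closure-per-instance idea is the core of why two versions can coexist on one page.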

onlywei 11 hours ago 0 replies      
There are some core YUI methods that I think were really good. The one that comes to mind the most is Y.extend(), which is different from _.extend() in that it actually subclasses a class for you, and not just "mixes" two objects.

I know some people have some kind of hate or disgust for JavaScript's emulation of classical inheritance in this manner, but I liked it a lot!
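What Y.extend does under the hood can be approximated in a few lines of plain JS. This is a rough sketch of the prototype-chain wiring, not YUI's actual implementation (which also copies statics and extra prototype members):

```javascript
// Rough plain-JS approximation of Y.extend(Child, Parent): wire Child's
// prototype to Parent's so instances inherit methods, and keep a
// 'superclass' pointer for calling up the chain - true subclassing,
// not a shallow property mix like _.extend().
function extend(Child, Parent) {
  Child.prototype = Object.create(Parent.prototype);
  Child.prototype.constructor = Child;
  Child.superclass = Parent.prototype;
}

function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + ' makes a sound'; };

function Dog(name) { Animal.call(this, name); } // chain the constructor
extend(Dog, Animal);

const d = new Dog('Rex');
console.log(d.speak());           // "Rex makes a sound" (inherited)
console.log(d instanceof Animal); // true - a real prototype chain
```

The instanceof check is the difference: a `_.extend`-style mix copies properties onto one object, while this sets up an actual inheritance relationship.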

preinheimer 1 day ago 0 replies      
I remember teaching a JS class with YUI like 9 years ago. At the time it was well featured, and had documentation that blew the competition out of the water.

Good documentation was critical, I couldn't in good conscience teach a class where my students would be out of luck for more help after they left.

zmmmmm 22 hours ago 0 replies      
Makes me a bit sad; I built some very detailed and rich products on YUI. It was, at the time, the most well-documented and comprehensively supported (for browsers such as IE6) framework around, and it had a complete set of widgets for every task. It was one of the few completely free JS frameworks I could show to enterprise and professional customers and not be embarrassed about.

Unfortunately YUI 3.x was a complete derailment for me. They tried to match the expressiveness of jQuery (which they only half achieved), but along the way the documentation got much worse, and half the widgets I was relying on disappeared and never got migrated to 3.x (with the excuse that you could run legacy 2.x alongside 3.x - do not want!). I invested a lot of time, sweat and tears into this library and ultimately it turned out to be a big negative, as my resume suffered from not having more lightweight technologies like jQuery on it.

jashkenas 1 day ago 1 reply      
This is interesting, and more than a little bit sad, given that one of the big recent pushes that YUI had done was to build out their own "MVC" App Framework:

Docs: http://yuilibrary.com/yui/docs/app/

Video: https://www.youtube.com/watch?v=wCexiX_eUJA

slg 1 day ago 2 replies      
I honestly didn't know it was still actively being developed. It has long been surpassed by other options, but I do have some [mostly] fond memories of working with YUI. In the early days it was a lot more feature-packed than most other frameworks I tried. In the first big professional project I helped build, some 7 or 8 years ago, I even fought to use YUI over jQuery. Maybe it's time to send a mea culpa over to my old company...
dmitrygr 1 day ago 2 replies      
[...] Node.JS [...] JavaScript [...] isomorphic single page applications [...] npm [...] bower [...] ecosystem [...] use cases [...] Grunt [...] ecosystem of plugins [...] Broccoli [...] Gulp [...] cohesive [...] Backbone [...] React [...] Ember [...] Polymer [...] Angular [...] Mocha [...] Casper [...] Karma

Reading this buzzword soup makes me so happy to be an embedded guy who gets to work in C i think i'll go dance a little just to celebrate. Wow... seriously just wow...

tszming 22 hours ago 0 replies      
This is the right approach to sunset an open-source project from a big company, so people don't need to discover the project is dead by checking the last commit date, which happens quite often these days, especially when the key developer of the open-source project has left the company.
soseng 1 day ago 1 reply      
I work in Liferay Portal and AUI, which is a fork of YUI. Liferay will probably be impacted greatly by this. My company has done a few large scale Enterprise application implementations in recent years and continue to do more Liferay work (It's actually booming). YUI is a huge framework and not just a library. It contains a lot of neat UI Components and Utilities. Although styling the components and making them responsive always seemed tough.
rip747 1 day ago 1 reply      
Honestly, at this point I think most people are using jQuery with either jQuery UI or Bootstrap. Still, it's really sad to see such an established framework become EOL. Thank you Yahoo for all the work you've done on YUI.
joeblau 1 day ago 1 reply      
I'm hoping they still continue work on Purecss. Pure is a far more manageable framework that doesn't come with all of the YUI bloat.
abruzzi 1 day ago 1 reply      
YUI is used heavily in an enterprise application I work with (Alfresco). I wonder how this will impact them.
BrandonM 1 day ago 1 reply      
Why do people use "the number of [...] issues and pull requests" to measure the usefulness of an open source project? Shouldn't a project gradually trend toward maturity, when the vast majority of bug fixes and big-win features are already part of it? Is that really the best point at which to start spinning it down?
gorkemyurt 1 day ago 1 reply      
I wonder if YUI was solving any Yahoo-specific problems. Their goal is (or should be) to make front-end dev easy for Yahoo employees, not to build a framework that's a good fit for the rest of the world. So who cares if it's losing traction in the open source community?
progx 1 day ago 0 replies      
Good decision, and there is no further argument to make; you said everything in your post.

Hope Yahoo continues to develop more small libraries; purecss is a really nice example of a small, clean project.

jamesbowman 1 day ago 0 replies      
"Isomorphic". I do not think it means what Yahoo thinks it means.
gourneau 1 day ago 1 reply      
YUI still has the best data grid around IMO. Thanks y'all.
noelwelsh 1 day ago 0 replies      
All is change. Only those who don't create don't create legacy.

Abandoning YUI is a sensible move by Yahoo for a technology whose time has passed.

_RPM 1 day ago 1 reply      
I thought YUI was obtrusive when I saw a line of code that looked like this:

YUI.util.Event.addListener('el', 'click'...);

shawndumas 1 day ago 0 replies      
I for one welcome our new EmberJS overlords at Yahoo.
Touche 1 day ago 0 replies      
Dojo is next. When is that going to happen?
pesto88 1 day ago 0 replies      
Interesting to see what Squarespace is going to do in response to this.
misterbishop 1 day ago 0 replies      
Looks like YUI is going to have bright years ahead!
like_do_i_care 1 day ago 0 replies      
In other news, a UI toolkit becomes abandonware. World still spins, over to you Kent for the weather.
laurentoget 1 day ago 4 replies      
Tumblr is an interesting choice of venue for an official Yahoo announcement. You would think they could host their own blog.
Citymapper is what happens when you understand user experience
351 points by ryanwhitney  2 days ago   166 comments top 38
bonaldi 2 days ago 7 replies      
Citymapper has this bizarre flaw (in London at least) where it relies on average times to give you journey times.

So, for example, if your train takes 10 minutes to reach your destination, leaves your local station every 30 minutes, and you've just missed one, Citymapper will show an estimated journey time of 25 minutes (avg wait 15 minutes + 10-minute journey), instead of the actual 39.

On multi-train journeys like many Londoners have, that error compounds quickly, and it can make choosing the fastest route near-impossible. And I don't know why they do it, because they have all the timetable data (including live timetables).
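
The estimation flaw described above is easy to see in a few lines (a sketch using the comment's numbers; nothing here is Citymapper's actual logic):

```python
def avg_wait_eta(headway_min, ride_min):
    """ETA using the average wait: half the headway, ignoring the timetable."""
    return headway_min / 2 + ride_min

def timetable_eta(minutes_until_next_train, ride_min):
    """ETA using the actual timetable."""
    return minutes_until_next_train + ride_min

# Train every 30 minutes, 10-minute ride, and you just missed one,
# so the next departure is ~29 minutes away.
print(avg_wait_eta(30, 10))    # 25.0 -- what the app reportedly shows
print(timetable_eta(29, 10))   # 39 -- what actually happens
```

On multi-leg journeys the gap compounds, since each leg's average-wait estimate can be off by up to half a headway.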

bhouston 2 days ago 9 replies      
Slow page. Ah, I see. One of the GIFs on that page is 24 MB:


And this one is 32MB:


Total size of GIFs on this page: 149 MB.

Is Gfycat embeddable? We should encourage its adoption.

buro9 2 days ago 6 replies      
Google Maps is for car drivers.

CityMapper is for multi-modal transport within a few metropolises.

The UX of Google Maps for routing when driving is incredible and beats everyone: Apple, TomTom, Garmin... everyone. Down to showing labels on side streets like "3 minutes slower", allowing you to evaluate every decision you could make when faced with traffic.

But... for public transport, or mixed transport solutions like crossing a city when so many external factors are at play... CityMapper wins.

It's not an either/or, and it's not that Google Maps doesn't understand UX. Google Maps just has a different focus... the car.

josephschmoe 2 days ago 3 replies      
Honestly, the biggest fault of Google Maps is that I can't find something "along the way" to a destination, only nearby. I'd be much happier if I could specify "TGI Friday's on the way to Alice's house" or "gas stations along the way to Las Vegas near the halfway point."

I can't get too mad at it, since I can just zoom out and it'll search from the center of the map - but it's not very effective if the distances are long or if I'm using the text interface.
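
The "along the way" search the commenter wants can be approximated by filtering candidates on added detour. A minimal sketch (made-up coordinates, with straight-line distance standing in for real road routing):

```python
import math

def dist(a, b):
    # Straight-line distance as a stand-in for real road routing.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def along_the_way(origin, dest, candidates, max_detour):
    """Keep candidates whose detour (origin->poi->dest minus origin->dest) is small."""
    direct = dist(origin, dest)
    return [p for p in candidates
            if dist(origin, p) + dist(p, dest) - direct <= max_detour]

origin, dest = (0, 0), (10, 0)
pois = [(5, 0.5), (5, 4), (9, 0)]
print(along_the_way(origin, dest, pois, max_detour=1.0))  # [(5, 0.5), (9, 0)]
```

A real implementation would measure detour in extra driving time along the road network rather than straight-line distance, but the filtering idea is the same.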

mcphage 2 days ago 7 replies      
It's fantastic that they understand user experience, but they don't seem to understand that 95%+ of the world doesn't live in 8 specific cities. Google Maps might be an obnoxious app (it is!) but at least I can use it.
TD-Linux 2 days ago 3 replies      
OpenStreetMap has begun to add routing information, and it's becoming very good lately. It would be nice to see Citymapper work with them, as both sides could benefit: OpenStreetMap quality in Citymapper's supported cities would improve (though OSM is already quite good in those cities), and Citymapper could gain support for more cities. This is the route MapQuest has gone with MapQuest Open.

Still, I'm glad to see Google Maps competitors alive and kicking. Google Maps is one of the main lock-in apps that keeps people inside the closed Google Play Services ecosystem.

austenallred 2 days ago 0 replies      
Not everyone has the same needs.

When I use Google Maps all I really want is for it to take me from point A to point B in my car.

I type the name of a place and it loads directions, time, and traffic, and guides me there using the best route possible. One click. Quite literally the best UX I could imagine.

For most of Google Maps users (the ones who don't live in those 8 cities), Google Maps is almost perfect. It's not that Google Maps doesn't have a good UX, it just wasn't built for you.

keehun 2 days ago 0 replies      
Great, but the fact that Citymapper is only available in eight cities makes it completely useless to those outside of those cities. From this post, however, it does seem like a very well put together experience.
solutionyogi 2 days ago 3 replies      
I love Citymapper! I live in NYC and it's the only app I use for navigation now. Highlights for me:

1. Get me Home/Get Me To Work quick links. It takes your current location as starting point and tells you your options for going home/work.

2. You can configure it to monitor your subway line during commute times (7 AM to 10 AM and 4 PM to 6 PM) and it will notify you if the subway is delayed for any reason. I use the 6, which breaks down more often than I'd like, and thanks to the app I can take the N/Q/R if the 6 is down.

3. The way 'Get Me Somewhere' works. In addition to being able to type an address, you can move the map around to point the destination pin at a particular spot on the map. It's hard for me to explain, but it works amazingly well, especially in NYC where people often go by cross streets and don't have the actual street address.

I completely agree with the author that Citymapper has exceptional UI/UX.

xwowsersx 2 days ago 0 replies      
I agree that the UX of Google Maps isn't great, but one thing in the critique that might not be quite fair is the point about having to enter the origin. On mobile, I'd venture to guess that most of the time you simply want to get somewhere from where you are right now, so I personally don't see the origin field as something that needs to be quickly accessible. I enter the address, tap when it auto-populates, and tap to navigate. It's usually 2-3 steps for me on Android and I'm ready to go in 5 or so seconds.
darren_ 2 days ago 2 replies      
It's nice to see a thoughtful critique/comparison (dev on iOS gmaps here). Interestingly I don't think anyone who works on iOS maps lives in a citymapper-enabled city.

Couple of points that might be helpful if the writer reads this:

- you can switch off the shake-to-send feedback (it even prompts you to do so if you dismiss it a few times without leaving feedback).

- I'm not sure how the Citymapper UI in scenario 2 (getting directions from A to B) differs from using the directions search screen in Google Maps (reached via the little button with arrows next to the profile button). It presents the directions search screen immediately and would cut down the time in that scenario drastically. (But I guess if the person writing the article doesn't know about that button, that's rather a UI smell. I wonder how many people don't know that our things that don't look like buttons are buttons?)

- Scenario 5 is also fixed by using the directions non-button; your previous searches show up in the zero-type suggestions there.

I am, however, going to try to convince the powers that be that Citibike integration is a must, so I can have a holiday in New York.

Jabbles 2 days ago 1 reply      
"OK Google, how do I get to X?"

"It takes 22 minutes to get to X, here are your directions."

I guess that's an unfair comparison. Fair enough about actually using the app though.

smackfu 2 days ago 1 reply      
I would sure hope that a site that only works in seven cities would be better than a site that works worldwide. Otherwise what's the point?
Peroni 2 days ago 0 replies      
Citymapper's founder, Azmat Yusuf, gave a talk at the Hacker News London meetup earlier this year and it was categorically one of our most popular talks in a long time. Unfortunately Azmat (very politely) requested that we not record his talk for our Vimeo page (vimeo.com/hnlondon), which is a shame because it was incredibly insightful.

If you get the opportunity to attend a talk by Azmat or any of the team, take it. They are an exceptionally clever and humble bunch and I'm thoroughly enjoying being a witness to their growth and success.

itisbiz 2 days ago 0 replies      
"Scenario 3: What are the trains near me?" is a great feature.

Even better would be "Where do trains/buses go from where I am?"

It would be very useful to see all route lines radiating out from my location, with next departure times and frequencies.

This would help in understanding the multiple ways to get to another location.

One route might get me directly to my destination, but leave in 15 min.

Another route might get me 2 blocks from my destination, but leave in 2 min.

crazygringo 2 days ago 0 replies      
I use Citymapper to use Citibike, and it has the following big flaws:

- Reloading the app, it never remembers I was on the biking screen, and always goes to the main screen. Minor detail, but annoying, and so easy for them to fix!

- Every time I move the map even slightly, it removes all bike stations from the map and queries the API again for a new set of stations, even if the last request was only five seconds ago. If there's any wireless interference, it completely destroys the functionality. I mean, cache the station locations at the very least! And don't hide the old data while waiting for the new data if it's less than a couple of minutes old.

- The screen real estate devoted to the map is only ~50%! The bottom third is taken up by a totally unnecessary textual list of stations, which can't be hidden.

- And you can't even rotate the map!

Granted, it's a million times better than the default Citibike app. But it's still got a ways to go to achieve "ideal" UX.
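
The staleness-based caching the commenter is asking for is only a few lines of client code. A generic sketch (hypothetical names; not Citymapper's implementation):

```python
import time

class StationCache:
    """Serve cached station data unless it is older than max_age_s."""
    def __init__(self, fetch, max_age_s=120):
        self.fetch = fetch          # function: map tile -> station data
        self.max_age_s = max_age_s
        self._cache = {}            # tile -> (timestamp, data)

    def get(self, tile):
        now = time.time()
        hit = self._cache.get(tile)
        if hit and now - hit[0] < self.max_age_s:
            return hit[1]           # still fresh: no network round-trip
        try:
            data = self.fetch(tile)
        except OSError:
            if hit:                 # network failed: fall back to stale data
                return hit[1]
            raise
        self._cache[tile] = (now, data)
        return data

calls = []
cache = StationCache(lambda tile: calls.append(tile) or f"stations:{tile}")
print(cache.get("A"))   # fetches over the "network"
print(cache.get("A"))   # served from cache, no second fetch
print(len(calls))       # 1
```

This also covers the commenter's interference complaint: when a refetch fails, the user keeps seeing the last known station data instead of an empty map.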

rumble_king1 2 days ago 2 replies      
Bye Google Maps? It uses Google Maps...
EmilLondon 2 days ago 1 reply      
I'm a backend engineer at Citymapper. Just to let everyone know that we're hiring... https://citymapper.com/london/jobs
djhworld 2 days ago 0 replies      
Citymapper is okay, it sometimes screws up with its notifications feature though. I live in London and have it set to notify me if my "favourited" tube lines are suffering problems.

A signal failure happened earlier this week on my line and I received no notification of it, only finding out when I got to the station.

It's probably an issue with Android's background notification system or something, but I dunno.

yid 2 days ago 2 replies      
I was in London recently and was astounded at how prevalent Citymapper has become there. So I'm now taking bets on who will acquire them:

1. Google --> unlikely, they have a large maps team already, and have recently gorged on Waze

2. Apple --> most likely, IMO

3. Facebook --> who knows why? But probably in the running too.

dazbradbury 2 days ago 0 replies      
Feature Request / Suggestion - So I tried the "Share Destination / Meet me somewhere" feature the other day and thought it was going to do something completely different.

How it worked: sent a link to the end location to a friend.

Expected / Hoped-for usage: Citymapper would track who clicked the link (perhaps letting you add a name), then show where everyone else was on the map, along with their ETAs.

So I can say, let's meet at this place, send the link, and know everyone's ETA. A bit like Latitude, but smarter. Once there, or on my way, I can track progress.

Sure, people might shout about privacy, but it would be insanely useful and save constantly being punished for being on time!

Any Citymapper devs out there fancy championing this one?

matthewmacleod 2 days ago 1 reply      
Citymapper's UX is really awesome, and I always use it as my example of the degree to which attention to detail is so absolutely vital in this sort of app. Like, first-class, best-app-I've-ever-used level of UX.

I'm still not totally sold on their routing algorithm in London, but I'm not sure exactly what's wrong with it; all I know is that it often suggests routes no sane person would take. It definitely seemed to get a little worse maybe 6 months or so ago; it's possible it's just been tuned in a way that doesn't get on with my usage pattern.

dalek2point3 1 day ago 0 replies      
"Turn-by-turn directions"

One of the reasons I'm guessing it doesn't have this is that the Google Maps API quite directly prohibits building turn-by-turn applications. If only they trusted OSM, they could have this up and running pretty quickly.

HugoDias 2 days ago 2 replies      
Btw, the Citymapper website doesn't load in Firefox Aurora, v33.0a2 :/ (https://citymapper.com/apps)
omouse 1 day ago 0 replies      
I don't understand why they can't use Google Maps or OpenStreetMap or some other API to grab city data for other cities. It only covers NYC, London, Barcelona and a few other cities. Nearly useless for anyone else in the world!
gburt 2 days ago 0 replies      
This looks great. It needs more coverage and I don't think the silly options are a good selling feature for a real map app. It's just more visual clutter.
pthreads 1 day ago 0 replies      
Meh! I find these features to be just incremental improvements. Most apps these days quickly catch up on UI/UX features. Even OSes are mostly the same these days.
kin 2 days ago 0 replies      
If I hit "snap to location", Google decides it needs to auto-zoom to a default zoom-level. It's annoying to have to re-zoom to whatever zoom level I'm actually interested in.
alttab 2 days ago 0 replies      
I haven't ever tried Citymapper. That said, if Google Maps re-routes me one more time during my transit, I will kill an innocent puppy.
serve_yay 2 days ago 0 replies      
Yes! I've really been wanting something like this, maps apps are only good at doing one thing at a time and not good for comparing.
bundaegi 2 days ago 0 replies      
Anyone know what screen recording software might have been used to create the gifs/vids in this post?
eli 2 days ago 0 replies      
It would be interesting to compare to Ridescout, which seems like a more direct competitor than Google Maps.
robot 2 days ago 0 replies      
I hate the popup in google maps when I search for a location. It takes up 1/3rd of the screen.
VikingCoder 2 days ago 0 replies      
https://citymapper.com/ is down. Google Maps is up. I don't think I can say "Bye, Google Maps" just yet...
qxmat 1 day ago 0 replies      
How do I buy you a beer?
farmdve 2 days ago 0 replies      
And yet Google Maps has mapped the world, even my little town; Citymapper is but a fraction of what Google Maps has done.
hnriot 2 days ago 0 replies      
I wanted to try it but it doesn't seem to support San Francisco! What kind of app doesn't start with San Francisco, let alone not even include it :)
blutgens 2 days ago 1 reply      
I suppose when you're on an iOS device it doesn't take much improvement in the UX department to get you fired up.
C3.js: D3-based reusable chart library
339 points by nekgrim  3 days ago   78 comments top 25
capkutay 3 days ago 8 replies      
Out of raw d3.js, nvd3.js, cubism, and rickshaw.js, I've by far had the best experience with nvd3.

I was particularly pleased with the way nvd3 supports sliding data off line charts without any extra work.

D3.js is an excellent, low-level visualization library, but you will find yourself spending days to a couple of weeks on custom styling, tooltips, legends, etc. Many high-level charting libraries are nice because they have this out of the box.

However, I want a library that lets me make visualizations I can run on my monitor for 10 days straight without running into obscure bugs. Rickshaw failed me in this regard. I have a caching scheme in my client-side application; Rickshaw keeps its own local copy of the data, which requires the developer to write custom, messy JavaScript to evict. I found that Rickshaw actually has some custom 'sliding window' logic. I was unhappy because I had to go to Stack Overflow to discover that feature rather than finding it in the documentation.

nvd3.js simply worked for me.

muyueh 2 days ago 2 replies      
I build visualizations with d3.js for my clients (http://muyueh.com/42/), and I have always wondered whether I will one day be replaced by a library such as c3.js.

I have been watching the different libraries closely, and my feeling is that most of them just provide default values for d3.js, plus some nice plug-ins (such as tooltips). If we are looking for a bar chart, then it's very fast to use a bar chart library to build what we want. Yet some problems may occur:

1. Control: we thought we needed a bar chart, but we actually need more than a bar chart. In the process of developing an effective visual, I often need access to different parts of the API.

2. Choice: the charts currently provided by most of these libraries are also available in Google Charts and Excel. This might be a question of time: these libraries are fairly new, and maybe in the near future they will provide all the visualizations available on the d3.js page. But maybe it's not a matter of time but of complexity: if these libraries provided better abstractions, it would be easier to develop more complex charts.

3. Extensibility: we probably don't just need the bar chart as a standalone visual, but as a building block that can interact with other components (texts or maps).

An interesting question is why d3.js was designed to be so low level; why not directly release a high-level library? My hypothesis is that maybe this is the right level of abstraction. When you try to go higher, you actually lose power/control over your visual. Maybe that's why people share so much of their work on http://bost.ocks.org/, doing small experiments and laying down building blocks; perhaps that is currently the most efficient way of building re-usable charts.

I share my thoughts on the issue in the hope that people will challenge my hypothesis: my current work depends on it.

cereallarceny 3 days ago 1 reply      
This totally goes against the whole point Mike Bostock (creator of D3) laid out when he talks about creating reusable charts...


"To sum up: implement charts as closures with getter-setter methods. Conveniently, this is the same pattern used by D3's other reusable objects, including scales, layouts, shapes, axes, etc."

You get none of that with C3.js...

AYBABTME 3 days ago 7 replies      
The type of chart I've found most useful is the heat map, and none of the charting libraries I've looked at provide it:

Here's an example from a Datadog dashboard: https://cloud.githubusercontent.com/assets/1189716/4064956/6...

I wish such a chart style were more common; it's so much more useful than a typical line chart of the median/p9x. Unfortunately, I'm not familiar enough with D3 and such to write my own; it would likely be crap.

ryanmarsh 3 days ago 2 replies      
I don't intend to diminish the hard work you put into this. It looks clean and usable. Congratulations.

If you (people in general) are using d3 for simple charts you are doing it wrong.

d3 is a visualization library. It excels at helping you make non-trivial things. For the trivial there are some great chart libraries such as Highcharts. I build visualizations at work, and for those I use d3. Whenever I need something as simple as a stacked area chart, I simply use Excel or Highcharts. Every time I've tried to use Highcharts for a complex viz, or d3 for a simple chart, it's been a waste of time. I've used NVD3 as well as some other D3-based charting packages. None of them are as simple as Excel or Highcharts IMHO.

aw3c2 3 days ago 2 replies      
Careful, this has pretty much zero documentation and inconsistent options available for each chart type. I started a project with it and after a great start realised I had to scrap it all and start from scratch with some more complete alternative (haven't started looking yet).
webXL 3 days ago 0 replies      
I like C3, but I discovered it has some major performance issues with thousands of data points a couple months back. It has something to do with the pie chart rendering code: https://github.com/masayuki0812/c3/issues/172#issuecomment-4...

I've been evaluating HighCharts, Vega, Rickshaw, NVD3, C3, but the one I've been impressed with in terms of performance thus far is Epoch: http://fastly.github.io/epoch/

daigoba66 3 days ago 1 reply      
I also like http://nvd3.org/. We now almost need an index of the various D3-based charting libraries.
ne8il 3 days ago 0 replies      
We evaluated Rickshaw and Dimple (http://dimplejs.org/) for a while (to migrate some custom charts initially built with D3.Chart http://misoproject.com/d3-chart/). We went with Dimple and I've been very pleased with it. There is ample documentation, the source code is readable, and the developer behind it is responsive to questions and issues. Also, it's very easy to get access to the underlying axes/series/legends/etc. if you want to do more complicated work not provided out of the box and are familiar with d3. I would highly recommend it if you're looking for a d3 chart library.
jbogp 3 days ago 2 replies      
This is nice and easy to use. It is, however, using only an extremely limited subset of d3's capabilities, which makes me wonder what added value comes from making this d3-based compared to other "pure" and lightweight chart libraries that have the same functions [1]

My usual go-to library: Flot http://www.flotcharts.org/

frik 2 days ago 0 replies      
Great, but I dislike the combined JS filesize. 267 kB is simply still too big for mobile websites, given that it is only one part of a website. It also seems to consume a bit too much memory for my iPad 2: the first time, I saw a blank demo and had to re-open Safari. (Combined: c3 is 120 kB and d3 is 147 kB.) c3.min: https://github.com/masayuki0812/c3/blob/master/c3.min.js , d3.min: https://github.com/mbostock/d3/blob/master/d3.min.js

A charts library that has no dependencies (no d3, no jQuery, etc.), has a filesize of less than 100 kB, and still looks great and offers useful features would be awesome. SVG vs. Canvas2D is another hot topic.

auvrw 2 days ago 0 replies      
I'm not familiar with all the available chart libraries built on top of d3.js, but having used vanilla d3 on several projects, my first response to this article was that c3 looks really useful/cool... That said, it's probably worth mentioning that there are some useful examples of "reusable charts" on Mr. Bostock's website, and constructing a customized version of this JSON chart format probably wouldn't be too difficult, although making it performant on large datasets might be.
thinkersilver 3 days ago 0 replies      
This looks like an interesting project. The struggle I've had in the past with similar libraries is the level of extensibility provided by the API; it's not clear from the docs how to add your own custom viz. I would love a d3-based chart library that provides an easy-to-use chart API, such as we see in c3js (which I do like), but also a DSL that is not as low-level as D3 yet provides enough flexibility for creating new visualizations. There have been some nice declarative charting libs posted on HN, whose links I can't find right now. Keep up the good work; I'll be following this project closely.
state 3 days ago 0 replies      
This is how demos are supposed to be done. Seriously. Spare me your video, your soundtrack, and all of your BS. Just show me the thing working and let me play with it.
Alex3917 3 days ago 0 replies      
I'm currently using this for a project. My only complaint is that there is no API documentation, so you just have to look for a relevant example every time you want to do something.
pselbert 3 days ago 0 replies      
I could have sworn there was a ClojureScript wrapper for D3 that was also called C3.

Taking a quick search shows that it is actually called C2[1], from Keming Labs.

[1]: http://keminglabs.com/c2/

dsjoerg 3 days ago 3 replies      
I'm a long-time highcharts user. What are the advantages of this over highcharts?
based2 3 days ago 1 reply      
j_s 3 days ago 1 reply      
dc.js has similar goals (though more focused on large, multi-dimensional data sets), with 1 extra year to mature


cereallarceny 3 days ago 1 reply      
C3 looks nice, but it'd be really great if the library didn't demand passing things in object format. Why can't we have something that implements chaining like D3? It seems a bit counter-intuitive...
novaleaf 2 days ago 0 replies      
I like hearing about other options, but I wish the top rated posts were about the OP, not alternatives....
omouse 3 days ago 1 reply      
annnd it suffers from the same problem as other JavaScript libraries: the documentation is awful.

If you want your reusable library to be adopted widely, you have to write good docs. I was hoping I could replace NVD3.js with C3.js but the docs are on par.

deutronium 3 days ago 4 replies      
I'm looking for a chart library that can plot real-time data. Can anyone give any suggestions?
EGreg 2 days ago 0 replies      
How does this compare with nvd3?
Death Valley National Park: First Observation of Rocks in Motion
341 points by McKittrick  3 days ago   53 comments top 10
IvyMike 3 days ago 3 replies      
Lorenz (one of the co-authors of this piece) has disproven his previous theory--that the rocks were temporarily embedded in a slab of ice which then floated. [1] I mean, I know that's how science is supposed to work, but in these days of politicized everything it's cool to see someone say "new data disproves my previous hypothesis" and continue working to find the real truth.

[1] http://www.livescience.com/37492-sailing-stones-death-valley...

steven2012 3 days ago 1 reply      
I've been to Racetrack Playa, and it's awesome. I'm glad a rational, scientifically plausible explanation has been found for the movement of the rocks.

You need to rent a Jeep with reinforced tires, otherwise you could find yourself in the same predicament as some people we met, who got a flat tire on their rented SUV at Racetrack Playa.

Luyt 2 days ago 0 replies      
Ice sheets moving rocks, this has been seen before: http://skeptoid.com/episodes/4021

"The surrounding mountains were still covered with snow, and the playa itself was firm but had a large lake covering about a fifth of its surface, perhaps an inch or two deep at its edges, concentrated at the playa's south end where it's lowest. We ventured out, armed with cameras, shortly before sunrise. The temperature was just above freezing. The wind, from the south, was quite stiff and very cold. When we reached the lake, we found to our great surprise that the entire lake was moving with the wind, at a speed we estimated at about one half of a mile per hour. The sun was on the lake by now and we could see a few very thin ice sheets that were now dissolving back into water. This whole procession was washing past many of the famous rocks. It's easy to imagine that if it were only few degrees colder when we were there as it probably had been a couple of hours earlier the whole surface would be great sheets of thin ice. Solid ice, moving with the surface of the lake and with the inertia of a whole surrounding ice sheet, would have no trouble pushing a rock along the slick muddy floor. Certainly a lot more horsepower than wind alone, as has been proposed. The wind was gusty and moved around some, and since the surface is not perfectly flat and with rocks and various obstructions, the water didn't flow straight; rather it swapped around as it moved generally forward. Ice sheets driven by the water would move in the same way, accounting for the turns and curves found in many of the rock trails."

duncancarroll 3 days ago 3 replies      
Still not quite sure how the trails themselves are formed if the rocks are riding on ice sheets tens of meters in area, or am I reading this wrong / too quickly?

Edit: n/m, it appears the ice sheets are massive enough, and slide with little enough friction, to actually push the rocks around. Cool!

nickhalfasleep 3 days ago 0 replies      
Great to see off-the-shelf technology creatively applied to assist in good research in any field of science.
jonah 3 days ago 0 replies      
Here's a human interest story on the study and its authors in a local paper: http://www.independent.com/news/2014/aug/27/death-valley-mys...
brunorsini 3 days ago 0 replies      
"In addition, rock movement is slow and relatively brief -- our GPS-instrumented stones traveled at speeds of 2-5 m/minute for up to 16 minutes -- so casual observation is likely to miss rocks in motion." Sounds like a passage that could be in Einstein's Dreams :)
bmoresbest55 2 days ago 0 replies      
I would really enjoy seeing a time lapse video of this stuff. It is incredibly fascinating.
hellabites 3 days ago 1 reply      
They use the term velocity to mean speed a lot :(.
JoeAltmaier 3 days ago 9 replies      
A little puzzled by all the effort put into this. Some wind moved some rocks - call out the physicists? Where is this research going?
Building 3D with Ikea
330 points by panarky  3 days ago   61 comments top 19
fizixer 3 days ago 5 replies      
In case anyone has about 30-60 minutes and is interested to get a quick glimpse of how easy (I guess meaning free and accessible at the least) it has become to do such graphics:

- Download and install Blender 2.71 (http://blender.org/download). On Linux (Ubuntu) I did not even have to install it; I just extracted the tarball and ran the blender binary.

- Go through this two part ceramic mug tutorial (30-60 minutes): http://youtu.be/y__uzGKmxt8 ... http://youtu.be/ChPle-aiJuA

As someone who does not have graphics training, I was blown away when I did this. Apparently there is this thing called 'path tracing'-based rendering, which takes care of accurate lighting as long as you give it a correct specification of geometry and materials.

Some interesting videos:

- Octane 2.0 renderer: http://youtu.be/gLyhma-kuAw

- Lightwave: http://youtu.be/TAZIvyAJfeM

- Brigade 3.0: http://youtu.be/BpT6MkCeP7Y

- Alex Roman, the third and the seventh: http://vimeo.com/7809605

Brigade is an effort towards real-time path tracing, and it's predicted that within 2-3 GPU generations, such graphics will be possible in games.
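
At its core, the path tracing mentioned above is Monte Carlo integration of incoming light. A toy sketch of the idea: estimate the irradiance from a constant-radiance sky by sampling random directions over the hemisphere (for constant radiance L the exact answer is L * pi, so the estimate should converge to pi here):

```python
import math, random

def irradiance_estimate(n_samples, radiance=1.0, seed=0):
    """Monte Carlo estimate of the irradiance integral over the hemisphere:
    E = integral of L * cos(theta) d(omega)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # For directions sampled uniformly on the hemisphere, cos(theta)
        # is itself uniform on [0, 1], so we can sample it directly.
        cos_theta = rng.random()
        total += radiance * cos_theta
    # Divide by the pdf of uniform hemisphere sampling, 1 / (2 * pi).
    return (total / n_samples) * 2 * math.pi

print(irradiance_estimate(100_000))  # ~3.14, converging to pi
```

A real path tracer does this recursively per pixel, with the "sky" replaced by whatever surface or light each sampled ray hits, which is why renders converge from noisy to clean as samples accumulate.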

bhouston 3 days ago 0 replies      
There was more information given on this in Martin's talk at SIGGRAPH, as part of V-Ray Days, but I cannot find out whether it is online anywhere:


The renderer used by Ikea is V-Ray, the same renderer we have integrated into our online 3D modeling & rendering tool: http://Clara.io :)

Here are two simple Ikea-like furniture scenes which if you click "Edit Online" you can edit it in your browser, both the geometry, the materials and the lighting setup, as well as rendering it photoreal via V-Ray:



groovur 3 days ago 0 replies      
Even IKEA doesn't like putting their furniture together ...
bjelkeman-again 3 days ago 1 reply      
I'll never be able to look at an IKEA catalogue the same way ever again.
mixedbit 3 days ago 2 replies      
johansch 3 days ago 6 replies      
So when do they take the obvious step of providing an easy to use 3d modeller where customers can model their homes using ikea furniture?

They do have a tool sort of like this for kitchen design (developed by Configura in Linköping, Sweden). But I want something for the entire home!

cyberjunkie 2 days ago 0 replies      
I wonder how active the 3D-CG scene is these days. In the mid-2000s there was so much activity on CGsociety (then cgtalk.com). The kind of work people posted there was just out of this world. Absolutely impressive attention to detail. I was an enthusiast too so I would visit the site many times a day.

Of late, I haven't been in touch. Good to see stuff like this on Hacker News.

mentos 3 days ago 0 replies      
Here are some realistic renders from Unreal Engine 4: https://www.youtube.com/watch?v=iO7Bp4wmd_s

Epic is encouraging all kinds of applications such as architecture simulations and not just video games. I'm interested to see how the engine can be used to do something similar to what Ikea is doing.

jlarocco 3 days ago 2 replies      
I wish they had given a bit more information about the actual workflow.

Specifically, I wonder if they leverage the original CAD models? And if so, how are they converted to 3D Studio Max, and if the process is automated in any way?

cluster1 2 days ago 1 reply      
> We use every computer in the building to give power to rendering as soon as they are not being used. As soon as someone goes to a meeting their computer-power is used, and of course there is overnight when people go home.

I'm very curious how they manage the distribution of computation.

swalsh 3 days ago 2 replies      
It would be great if ikea sold some of these model libraries for use in videos etc.
emmanueloga_ 2 days ago 0 replies      
I was hoping this was about an app to build 3d things by mixing and matching ikea parts. I know there's at least one community around that idea [1] (and I've done it myself :).

1: http://www.ikeahackers.net/

neovive 3 days ago 1 reply      
The entire concept is very interesting and is a logical extension of the product catalog business (think about the impact of 3D and CG on movies, architecture, etc.).

I've been experimenting with Blender and Sculptris lately and 3D modelling is quite amazing. A wonderful mix of technical and artistic skills. I wonder if IKEA will ever rethink their large super-store model and move towards smaller stores where you virtually walk into and interact with rooms and furniture.

mcpherrinm 3 days ago 0 replies      
A lot of my furniture is from Ikea: It would be great to have this software for seeing how new room layouts look!
igl 2 days ago 1 reply      
I am amazed at how this could be cheaper and faster than actually doing real-life photos. The scenery and lighting quality is amazing though. Can't do that in a warehouse full of ikea products and fake housings either.
trhway 3 days ago 0 replies      
the photograph in the office with people - are these real people in the office or rendered models too?


A model of model rendering itself....

By the way, instead of selling home furnishings in different colors, for people with Google Glass or similar devices IKEA could just sell an app which colors a furnishing (only in the image projected onto the retina) in the "bought" color whenever the owner looks at the piece, Emerald City style.

wmf 3 days ago 0 replies      
This is becoming pervasive in the industry: http://www.wired.com/2013/03/luxion-keyshot/
johnnyio 3 days ago 0 replies      
They should use Sketchfab for that. It is lighter and better thought out for the basic user viewing the 3D model.
borgchick 3 days ago 0 replies      
I am Jack's 3D hatred.
Useful Unix commands for exploring data
338 points by aks_c  3 days ago   150 comments top 38
zo1 3 days ago 7 replies      
"While dealing with big genetic data sets I often got stuck with limitation of programming languages in terms of reading big files."

Hate to sound like Steve Jobs here, but: "You're using it wrong."

Let me elaborate. If you're coming across limitations of "too-big" or "too-long" in your language of choice: Then you're just a few searches away from both being enlightened on how to solve your task at hand and on how your language works. Both things that will prevent you from being hindered next time around when you have to do a similar big-data job.

Perhaps you are more comfortable using pre-defined lego-blocks to build your logic. Perhaps you understand the unix commands better than you do your chosen language. But understand that programming is the same, just in a different conceptual/knowledge space. And remember, always use the right tool for the job!

(I use Unix commands daily as they're quick/dirty in a jiffy, but for complex tasks I am more productive solving the problem in a language I am comfortable in instead of searching through man pages for obscure flags/functionality)

etrain 3 days ago 2 replies      
Some more tips from someone who does this every day.

1) Be careful with CSV files and UNIX tools - most big CSV files with text fields have some subset of fields that are text quoted and character-escaped. This means that you might have "," in the middle of a string. Anything (like cut or awk) that depends on comma as a delimiter will not handle this situation well.

2) "cut" has shorter, easier to remember syntax than awk for selecting fields from a delimited file.

3) Did you know that you can do a database-style join directly in UNIX with common command line tools? See "join" - assumes your input files are sorted by join key.

4) As others have said - you almost inevitably want to run sort before you run uniq, since uniq only works on adjacent records.

5) sed doesn't get enough love: sed '1d' to delete the first line of a file. Useful for removing those pesky headers that interfere with later steps. Not to mention regex replacing, etc.

6) By the time you're doing most of this, you should probably be using python or R.
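For tip 3, a minimal sketch (file names and contents made up for illustration); join does an inner join on the first field by default, and both inputs must already be sorted on that key:

```shell
# two toy files keyed on column 1, both sorted on the key
printf '1 alice\n2 bob\n3 carol\n' > ids.txt
printf '1 90\n2 85\n' > scores.txt

join ids.txt scores.txt   # inner join on field 1
# 1 alice 90
# 2 bob 85
```

If the inputs aren't sorted, join can miss matches (or complain), so run them through sort first when in doubt.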

CraigJPerry 3 days ago 5 replies      
>> If we don't want new file we can redirect the output to same file which will overwrite original file

You need to be a little careful with that. If you do:

    uniq -u movies.csv > movies.csv
The shell will first open movies.csv for writing (the redirect part) then launch the uniq command connecting stdout to the now emptied movies.csv.

Of course when uniq opens movies.csv for consumption, it'll already be empty. There will be no work to do.

There's a couple of options to deal with this, but the temporary intermediate file is my preference provided there's sufficient space: it's easily understood, and if someone else comes across the construct in your script, they'll grok it.
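The temp-file construct being described is, roughly (file name assumed):

```shell
# write the result somewhere else first, then swap it into place;
# '&&' keeps the original intact if uniq fails for any reason
uniq -u movies.csv > movies.csv.tmp && mv movies.csv.tmp movies.csv
```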

WestCoastJustin 3 days ago 2 replies      
My personal favorite is to use this pattern. You can do some extremely cool counts and group-by operations at the command line like [1]:

    grep '01/Jul/1995' NASA_access_log_Jul95 |
      awk '{print $1}' |
      sort |
      uniq -c |
      sort -h -r |
      head -n 15

Turns this:

    - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
    unicomp6.unicomp.net - - [01/Jul/1995:00:00:06 -0400] "GET /shuttle/countdown/ HTTP/1.0" 200 3985
    - - [01/Jul/1995:00:00:09 -0400] "GET /shuttle/missions/sts-73/mission-sts-73.html HTTP/1.0" 200 4085
    burger.letters.com - - [01/Jul/1995:00:00:11 -0400] "GET /shuttle/countdown/liftoff.html HTTP/1.0" 304 0
    - - [01/Jul/1995:00:00:11 -0400] "GET /shuttle/missions/sts-73/sts-73-patch-small.gif HTTP/1.0" 200 4179

Into this:

    623 piweba3y.prodigy.com
    547 piweba4y.prodigy.com
    536 alyssa.prodigy.com
    463 disarray.demon.co.uk
    456 piweba1y.prodigy.com
    417 www-b6.proxy.aol.com
    350 burger.letters.com
    300 poppy.hensa.ac.uk
    279 www-b5.proxy.aol.com
[1] https://sysadmincasts.com/episodes/28-cli-monday-cat-grep-aw...

CGamesPlay 3 days ago 1 reply      
For working with complex CSV files, I highly recommend checking out CSVKit https://csvkit.readthedocs.org/en/0.8.0/

I've just started using it, and the only limitation I've so far encountered has been that there's no equivalent to awk (i.e. I want a way to evaluate a python expression on every line as part of a pipeline).

pessimizer 3 days ago 0 replies      
I'd like to repeat peterwwillis in saying that there are very Unixy tools that are designed for this, and update his link to my favorite, csvfix: http://neilb.bitbucket.org/csvfix/

Neat selling points: csvfix eval and csvfix exec

also: the last commit to csvfix was 6 days ago; it's active, mature, and the developer is very responsive. If you can think of a capability it doesn't have yet, tell him and you'll have it in no time :)

tdicola 3 days ago 4 replies      
If you're on Windows, you owe it to yourself to check out a little known Microsoft utility called logparser: http://mlichtenberg.wordpress.com/2011/02/03/log-parser-rock... It effectively lets you query a CSV (or many other log file formats/sources) with a SQL-like language. Very useful tool that I wish was available on Linux systems.
hafabnew 3 days ago 0 replies      
Not to sound too much like an Amazon product page, but if you like this, you'll probably quite like "Unix for Poets" - http://www.lsi.upc.edu/~padro/Unixforpoets.pdf . It's my favourite 'intro' to text/data mangling using unix utils.
ngcazz 3 days ago 2 replies      
No one gives a shit about cut.

    $ man 1 cut

zerop 3 days ago 1 reply      
sheetjs 3 days ago 1 reply      
caveat: delimiter-based commands are not quote-aware. For example, this is a CSV line with two fields:

    foo,"bar,baz"
However, the tools will treat it as 3 columns:

    $ echo 'foo,"bar,baz"' | awk -F, '{print NF}'
    3

xtacy 3 days ago 0 replies      
I am surprised no one has mentioned datamash: http://www.gnu.org/software/datamash/. It is a fantastic tool for doing quick filtering, group-by, aggregations, etc. Previous HN discussion: https://news.ycombinator.com/item?id=8130149
nailer 3 days ago 2 replies      
I love Unix pipelines, but chances are your data is structured in such a way that using regex based tools will break that structure unless you're very careful.

You know that thing about not making HTML with regexs? Same rule applies to CSV, TSV, and XLSX. All these can be created, manipulated and read using Python, which is probably already on your system.
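As a sketch of that point, even mid-pipeline you can hand the quote-aware parsing to Python's csv module rather than splitting on raw commas:

```shell
# a quoted comma is field content to a real CSV parser, not a delimiter
echo 'foo,"bar,baz"' |
  python3 -c 'import csv, sys; print(len(next(csv.reader(sys.stdin))))'
# prints 2 (awk -F, would report 3 fields here)
```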

jason_slack 3 days ago 2 replies      
The author states:

    uniq -u movies.csv > temp.csv
    mv temp.csv movie.csv

Important thing to note here is uniq won't work if duplicate records are not adjacent. [Addition based on HN inputs]
Would the fix here be to sort the lines using the `sort` command first, then `uniq`?
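Yes; a sketch of that fix, reusing the article's file names:

```shell
# deduplicate non-adjacent records: sort first, then uniq
sort movies.csv | uniq > temp.csv && mv temp.csv movies.csv

# or let sort deduplicate in one step
sort -u movies.csv > temp.csv && mv temp.csv movies.csv
```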

letflow 3 days ago 0 replies      
To run Unix commands on terabytes of data, check out https://cloudbash.sh/. In addition to the standard Unix commands, their join and group-by operations are amazing.

We are evaluating replacing our entire ETL with cloudbash!

LiveTheDream 3 days ago 2 replies      
I use this command very frequently to check how often an event occurs in a log file over time (specifically in 10-minute buckets), assuming the file is formatted like "INFO - [2014-08-27 16:16:29,578] Something something something"

    cat /path/to/logfile | grep PATTERN | sed 's/.*\(2014-..-..\) \(..\):\(.\).*/\1 \2:\3x/' | uniq -c
results in:

    273 2014-08-27 14:5x
    222 2014-08-27 15:0x
    201 2014-08-27 15:1x
    171 2014-08-27 15:2x
    349 2014-08-27 15:3x
    230 2014-08-27 15:4x
    236 2014-08-27 15:5x
    339 2014-08-27 16:0x
    330 2014-08-27 16:1x
This can subsequently be visualized with a tool like gnuplot or Excel.

emeraldd 3 days ago 4 replies      
uniq also doesn't deal well with duplicate records that aren't adjacent. You may need to do a sort before using it.

   sort | uniq
But that can screw with your header lines, so be careful there too.
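One way to protect a single header row (file names assumed): print line 1 untouched, and sort only from line 2 on.

```shell
# header stays on top; everything after it gets sorted and deduplicated
{ head -n 1 data.csv; tail -n +2 data.csv | sort | uniq; } > sorted.csv
```

Both head and tail reopen the file, so this works on plain files (not on a pipe).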

bmsherman 3 days ago 0 replies      
I may as well plug my little program, which takes numbers read line-by-line in standard input and outputs a live-updating histogram (and some summary statistics) in the console!


It's useful if you want to, say, get a quick feeling of the distribution of numbers in some column of text.

aabaker99 3 days ago 0 replies      
You should mention this behavior of uniq (from the man page on my machine):

Note: uniq does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use sort -u without uniq.

Your movies.csv file is already sorted, but you don't mention that sorting is important for using uniq, which may be misleading.

$ cat tmp.txt






$ uniq -d tmp.txt


zo1 2 days ago 0 replies      
Just one other thing I'd like to mention before everyone moves on to another topic. Not all of the unix commands are equal, and some have features that others don't.

E.g. I mainly work on AIX, and a lot of the commands are simply not the same as what they are on more standard linux flavors. From what I've heard, this applies between different distros as well.

Not so much the case with standard programming languages that are portable. E.g. Python. Unless you take into account Jython, etc.

forkandwait 2 days ago 2 replies      
"rs" for "reshape array". Found only on FreeBSD systems (yes, we are better... smile)

For example, transpose a text file:

    ~/ (j=0,r=1)$ cat foo.txt
    a b c
    d e f
    ~/ (j=0,r=0)$ cat foo.txt | rs -T
    a d
    b e
    c f

Honestly I have never used in production, but I still think it is way cool.

Also, being forced to work in a non-Unix environment, I am always reminded how much I wish everything were either text files, zipped text files, or a SQL database. I know for really big data (bigger than our typical 10^7 row dataset, like imagery or genetics), you have to expand into things like HDF5, but part of my first data cleaning sequence is often to take something out of Excel or whatever and make a text file from it and apply unix tools.

billyhoffman 3 days ago 3 replies      
For last line, I always did

   tac [file] | head -n 1
Mainly because I can never remember basic sed commands

(Strange, OS X doesn't seem to have tac, but Cygwin does...)
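For reference, the last line is also reachable without tac and without remembering much sed (log file name made up):

```shell
tail -n 1 access.log    # portable: print only the final line
sed -n '$p' access.log  # the sed spelling of the same thing
```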

michaelmior 3 days ago 1 reply      
It's good to note that `uniq -u` does remove duplicates, but it doesn't output any instances of a line which has been duplicated. This is probably not clear to a lot of people reading this.
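A quick demonstration of the difference, with made-up input:

```shell
printf 'aa\naa\nbb\n' > lines.txt
uniq lines.txt      # collapses runs of duplicates: prints "aa" and "bb"
uniq -u lines.txt   # keeps only never-repeated lines: prints just "bb"
```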
baldfat 3 days ago 1 reply      
Certain people might miss the point of why to use the command line.

1) I use this before using R or Python and ONLY do this when this is something I consistently need to be done all the time. Makes my R scripts shorter.

2) Some things just need something simple to fix them, and these commands are just great for that.

Learn awk and sed and your toolkit for munging data grows much larger.

vesche 3 days ago 1 reply      
Using basic Unix commands in trivial ways, am I missing something here?
dima55 3 days ago 0 replies      
Then you can make plots by piping to https://github.com/dkogan/feedgnuplot
known 3 days ago 0 replies      
sort -T your_tmp_dir is very useful for sorting large data
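A sketch (directory and file names assumed); -T points sort's temporary spill files at a disk with enough free space:

```shell
mkdir -p /var/tmp/bigsort                      # scratch space on a roomy filesystem
sort -T /var/tmp/bigsort huge.txt > huge.sorted
```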
OneOneOneOne 3 days ago 0 replies      
awk / gawk is super useful. For C/C++ programmers the language is very easy to learn. Try running "info gawk" for a very good guide.

I've used gawk for many things ranging from data analysis to generating linker/loader code in an embedded build environment for a custom processor/sequencer.

(You can even find a version to run from the Windows command prompt if you don't have Cygwin.)

forkandwait 3 days ago 0 replies      
There is a command on freebsd for transposing text table rows to columns and vice versa, but I can't remember or find it now. It is in core, fwiw.
peterwwillis 3 days ago 0 replies      
You can also find tools designed for your dataset, like csvkit[1] , csvfix[2] , and other tools[3] (I even wrote my own CSV munging Unix tools in Perl back in the day)

[1] http://csvkit.readthedocs.org/en/0.8.0/ [2] https://code.google.com/p/csvfix/ [3] https://unix.stackexchange.com/questions/7425/is-there-a-rob...

CharlesMerriam2 2 days ago 0 replies      
Do you have a pastebin of the CSV file? Time to play...
squigs25 3 days ago 0 replies      
sort before you uniq!
geh4y806 3 days ago 0 replies      
just checking!
cyphunk 3 days ago 0 replies      
really HN? if you find yourself depending heavily on the recommendations in this article you are doing data analysis wrong. Shell foo is relevant to data analysis only as much as regex is. In the same light depending on these methods too much is digging a deep knowledge ditch that in the end is going to limit and hinder you way more than the initial ingress time required to learn more capable data analytics frameworks or at least a scripting language.

still, on international man page appreciation day this is a great reference. the only thing it is missing is gnuplot ascii graphs.

gesman 3 days ago 0 replies      
Use splunk.

'nuff said.

lutusp 3 days ago 3 replies      
Quote: "While dealing with big genetic data sets ..."

What a great start. Unless he's a biologist, the author means generic, not genetic.

The author goes on to show that he can use command-line utilities to accomplish what database clients do much more easily.

Hal Finney being cryopreserved now
298 points by mlinksva  2 days ago   267 comments top 29
declan 2 days ago 0 replies      
This is sad news, but less sad than a funeral and cremation would have been. I met Hal in the 1990s via the cypherpunks list, where a young Julian Assange was also hanging out. Hal went on to work for PGP Corp. in its glory days, and was involved in the early stages of Bitcoin as well. He is (not was!) the consummate cypherpunk and extropian.

A lot of those discussions have been lost to time, but here's a note from Hal that I posted to Politech in 1999 where he warned against building surveillance backdoors in Internet standards:

http://seclists.org/politech/1999/Oct/24

"If the IETF sets the precedent of acceding to the wishes of countries like the US and Europe, it may find itself forced to similarly honor the desires of less open societies."

And here's Hal responding to one of my Wired articles by pointing out the absurdity of the MPAA's claims against Napster:

http://extropians.weidai.com/extropians.1Q01/3833.html

"Looking at it over the history of Napster the amount would have to run well into the quadrillions. Surely this would be the largest legal claim in history! I wonder if the record companies can present this figure with a straight face."

I'll miss Hal. At least there's a very slim, but non-zero, chance he'll log back on again.

FiloSottile 2 days ago 2 replies      
It's fascinating, and personally a bit disturbing, to read about his death as an announcement of cryopreservation ("being cryopreserved now") instead of the sad news it is anyway ("died today"). Also the discussion here revolves around him coming back, not him leaving.

Maybe it impresses me because he seemed so hopeful to be able to choose life in his post back then when he was diagnosed: "I may even still be able to write code, and my dream is to contribute to open source software projects even from within an immobile body. That will be a life very much worth living." http://lesswrong.com/lw/1ab/dying_outside/

I guess one can argue it is a good thing about cryonics, less mourning, more hope. Anyway, I'd like to write down a regular epitaph:

Hal Finney (May 4, 1956 – August 28, 2014), second PGP developer after Zimmermann, first Bitcoin recipient, cypherpunk who wrote code.

cousin_it 2 days ago 0 replies      
In 2009, he described his experience with ALS in a poignant post titled "Dying Outside": http://lesswrong.com/lw/1ab/dying_outside/ . I'm very sad to see him go.
WalterBright 2 days ago 0 replies      
Hal was one year ahead of me, and the next dorm room over at Caltech. Hal was scary smart - but you had to get to know him for a while before you'd find that out. He was completely unpretentious, just a regular guy.

And a great person - I never knew anyone who had anything but good things to say about Hal. It was a privilege to know Hal.

rdl 2 days ago 0 replies      
Hal Finney was one of the best people on the cypherpunks list -- wrote frequently, great developer, involved in some of the most interesting products of the past 30 years. He was also remarkably friendly and civil, even more amazing in a place like the cypherpunks list. A really great person, and will be missed. (but hopefully only for a few decades until the reverse-cryopreservation thing is worked out...)
mef 2 days ago 3 replies      
Interesting that someone would choose to be cryopreserved somewhere where there's no legal assisted suicide. Wouldn't your chances of being successfully revived improve if you got cryopreserved while still alive?
donohoe 2 days ago 1 reply      
Hey Hal, when you Google all of this and stumble across this old page I hope you find this note and get in touch for a drink.

I'd love to hear your story!

ericb 2 days ago 9 replies      
I'm curious--in these cryopreservation arrangements, what is done to incentivize future people to resurrect you? Anyone know?
haakon 2 days ago 0 replies      

If you haven't read his post "Bitcoin and me", now is a good time. https://bitcointalk.org/index.php?topic=155054.0

strlen 2 days ago 0 replies      
Here's an interesting discussion about near-term studies on inducing hypothermia as a way to save gunshot victims from brain death:

The article -- http://www.newscientist.com/article/mg22129623.000-gunshot-v...

HN discussion with some insightful commentary -- https://news.ycombinator.com/item?id=7477801

bryanstrawser 2 days ago 0 replies      
I knew Hal back in the early days of remailing (94-96) where he ran several remailers and I had one nestled within the hidden confines of Indiana University (the gondolin remailer).

He will be missed.

owenversteeg 2 days ago 1 reply      
Black bar anyone? I think he is very worthy of one.

Either way I just set my topcolor to 000000. Seems it doesn't take hex triplets.

aerovistae 2 days ago 1 reply      
The thing I don't understand is how circulating cryoprotectant chemicals through the brain doesn't destroy the tissue. How could anything other than blood safely circulate through the brain?
simonebrunozzi 2 days ago 0 replies      
Thanks Hal Finney. You've done some good in the world. Hope to see you alive again.
jdnier 2 days ago 3 replies      
Ancillary to the sad news, for a fictional treatment of cryopreservation (circa 1993) -- one that I couldn't help think about as I read that email/release -- Gregory Benford's novel, Chiller, works through many of the thought-provoking implications. http://www.gregorybenford.com/product/chiller/
ryan-c 2 days ago 1 reply      
I'm surprised nobody has made the point that "death" is not a binary thing - it is a progression. If we consider a scale where "1" is alive (breathing, heart beating, higher brain functions working, etc) and "0" is when your brain has been unambiguously destroyed, we have a probability distribution of whether a person can be returned to "1".

At one time if someone left "1" (e.g. heart stopped), that was pretty much it. Now we can often recover someone whose heart has been stopped for several minutes with little to no long term damage through medical intervention. A cryonic procedure pushes a person to a place on the scale where the probability distribution provided by current technology is a big fat zero. There is some hope that as technology advances that probability distribution will look favorable to the those frozen. We're pretty much just guessing about that last part though.

pron 2 days ago 1 reply      
I didn't know Hal Finney, but condolences to his family.

Comparing different religions' various flavors of afterlife is very interesting, but perhaps this isn't the right forum for it (though maybe it is; I don't know). One thing is certain, though: cryonics's promise of an afterlife is definitely the most materially expensive of all religions -- on average, that is (some Christians spent what probably amounts to more than the cost of cryopreservation to expunge their sins). It is also the most strictly transactional since Catholicism prior to the reformation. The burial practice itself, however, bears a lot of resemblance to ancient Egyptian religion, and probably some other religions of antiquity.

morpheous 2 days ago 0 replies      
Sad, and in a way beautiful (I hope that's not an inappropriate word to use), at the same time. This man lived his life to the full, and faced the end of his life (as he has known it), with the courage of an adventurer.

I must admit I had never heard of the man himself, although I do know of bitcoin. But I am humbled by how he is smiling in all of his pictures, despite his physical body slowly giving up on him; his wife constantly by his side, through thick and thin.

I get the impression that Hal would have been a genuinely nice man to know. His life and the way he has faced his challenges head on is (should be) an inspiration to all.

TeMPOraL 2 days ago 0 replies      
Seeing people going all "cryonics is another religion" saddens me in a way. It's a tragedy that we've learned to accept mortality to the point that as a species we're not only unwilling to try and fix it, we're calling those few who try nutcases.

Even if current cryonics won't work, how about focusing on trying to find another, better way to fix death, instead of throwing the towel and sneering?

cowmix 2 days ago 0 replies      
Any time cryogenics comes up I'm reminded of this story from This American Life:


ogig 2 days ago 0 replies      
In this post[1] Hal Finney writes about ALS and his involvement with bitcoin. Recommended read.

[1]: https://bitcointalk.org/index.php?topic=155054.0
letstryagain 2 days ago 0 replies      
Hal Finney's ALS Ice Bucket challenge
dghughes 2 days ago 1 reply      
Funny how the mind plays tricks: I read that as cryptopreservation.
staunch 2 days ago 0 replies      
I'm sad for him that we haven't progressed far enough to treat him. I'm glad he has a shot in the future.
jaekwon 2 days ago 0 replies      
Oh, now I understand mummies.
Schwolop 2 days ago 0 replies      
Hal Finney (May 4, 1956 – August 28, 2014 [probably])
techdragon 2 days ago 1 reply      
So rare to hear about this kind of thing so "present tense"

And is it just me or is it just a little bit funny to hear about an ALS patient getting frozen cryogenically right as the ice bucket challenge sweeps the globe... I dare say he's truly taken the ultimate ice bucket challenge!

NhanH 2 days ago 7 replies      
On the topic of cryopreservation, to borrow a question from Scott Aaronson [1]: have you signed up for cryopreservation? And regardless of your answer, how do you defend yourself against the charge of irrationality?

On one hand, it seems like a different version of Pascal's wager - if you can afford it, the upside is potentially far more beneficial than the downside. On the other hand, well, it is crazy...

I can think of one reason for not doing it (personally): I don't necessary want to live in a society that can perform my revival. Not to say that there's anything wrong with that society, they would just be so far away from me that I can't fathom how that would be like.

[1]: http://www.scottaaronson.com/blog/?p=455

amalag 2 days ago 1 reply      
He is legally dead, so good luck bringing him back to life.
US cable giants call on FCC to block cities' expansion of high-speed internet
293 points by primelens  18 hours ago   104 comments top 18
sehrope 7 hours ago 3 replies      
> Chattanooga has the largest high-speed internet service in the US, offering customers access to speeds of 1 gigabit per second, about 50 times faster than the US average. The service, provided by municipally owned EPB, has sparked a tech boom in the city and attracted international attention. EPB is now petitioning the FCC to expand its territory. Comcast and others have previously sued unsuccessfully to stop EPB's fibre optic roll out.

I did a quick search to see pricing for EPB's internet service and wow is it cheap[1]. The two plans listed on the site are 100 Mbps for $57.99/mo or 1 Gbps for $69.99/mo. Oh and both plans are symmetric so you get that for upload as well.

I don't know of any city where the local cable co offers anything like that, let alone at those price points. No wonder they want to block this through legislation; competing in the market would not be pleasant for them.

[1]: https://epbfi.com/internet/

acjohnson55 8 hours ago 2 replies      
My opinion of those companies is so poor at this point that my knee jerk reaction is to assume that whatever they want is the opposite of what's good for me.
raarts 6 hours ago 1 reply      
I think by now it's hard to deny that Comcast & friends are stifling the progress of the US economy. The US is 31st worldwide on bandwidth speed (which really can't be explained by the US's size alone), consumers and businesses are complaining across the board about high pricing, bad service, and lack of options.

These companies do the country and its inhabitants a big disservice. I think the only reasonable option for the FCC and the government would be to increase competition, and remove existing roadblocks.

rayiner 7 hours ago 6 replies      
This is a really good example of how lobbying works. On one hand, these telecom companies have a clear interest in not having to "compete" with cheap, publicly funded broadband. On the other hand, their position just happens to tie in with a broader political debate: states and municipalities are on the verge of bankruptcy, and municipal infrastructure projects are a disaster outside a few well-managed cities like NYC. The telcos aren't incorrect in saying that these sorts of projects often turn into boondoggles, or disproportionately benefit parts of the population,[1] or cost too much and end up underfunded and poorly maintained in the future.

With these lobbying measures, the telcos can tie something that benefits them to a position (municipal infrastructure projects), that many people oppose for entirely different reasons.

[1] It is worth noting that only 1/3 of adults 65+ subscribe to broadband at home, and that many poor people access the internet through libraries or cellular phones instead of home computers.

golemotron 3 hours ago 0 replies      
> "The success of public broadband is a mixed record, with numerous examples of failures," USTelecom said in a blog post. "With state taxpayers on the financial hook when a municipal broadband network goes under, it is entirely reasonable for state legislatures to be cautious in limiting or even prohibiting that activity."

That's the cable companies for you - always looking out for the consumer. It's almost as if they see themselves as a public utility.

stretchwithme 1 hour ago 0 replies      
I'm all for people owning their own last mile for all kinds of services, if that's what they wish to do. Cooperatives are a great thing.

And the marketplace where these coops buy their internet access, electricity or water can still be competitive. It would actually be MORE competitive than the monopolies handed out by governments that you cannot easily leave.

And the decision to grant such a monopoly or revoke or allow an increase would not be in hands of the few. It would be in many different collections of hands and be much harder to pay off all those hands for some sweetheart deal.

djokkataja 6 hours ago 3 replies      
2 questions:

1. Why aren't cities everywhere doing this?

2. Who do I have to bother in my city to get this to happen?

ams6110 3 hours ago 2 replies      
Ugh. The answer is not municipally owned ISPs.

Sure it's easy for a municipal ISP to have "cheap" rates when you consider that they are taxing authorities and don't have to run a profit. How exactly would this service be put into place? It would take a massive capital investment by the municipality, full of opportunities for fraud. I live in a fairly small, highly liberal town and yet in the past few years we've had financial scandal after financial scandal, from city staffers abusing credit cards to money losing investments in everything from technology parks to parking garages to outright fraud by contractors with inside partners in the government.

And what does your typical city council know about running an ISP business? Nothing. Where I live, they can't even stay ahead of the pothole repair. It takes a team of a dozen laborers a month to install a block of sidewalk. The incompetence and inefficiency is just staggering. I'm simply unwilling to believe that they would do any better trying to offer internet service. I also don't buy the argument that municipalities provide better "utility" services than businesses can. My parents lived in a neighborhood with poor water pressure for nearly two decades before the city finally got around to installing a booster pump station that resolved the problem.

I'm not in favor of the status quo either. Regulated monopolies have given us the current mess with indifferent and expensive providers, product packaging that forces you to buy services you don't want, and little competition.

Remove the monopolies. Let providers provide backbone, last mile, or both. Let them buy and sell bandwidth in bulk, and compete over customers from house to house. I don't see anything else that can possibly resolve the problem.

orbitingpluto 4 hours ago 0 replies      
Law of unintended consequences. Cable giants lobby to abolish net neutrality and for the FCC to not designate Internet service as a common carrier. Consequently other people band together to provide their own service.

Now the cable giants, each a pure local geographic monopoly, lobby to control municipal remediation of the crappy service. The hypocrisy is, unfortunately, unsurprising.

astrocat 5 hours ago 0 replies      
Nothing, not even the risk of municipal-infrastructure-shenanigans-leading-to-disaster, should stand in the way of more competition entering the ISP market. Google Fiber is awesome and all, but they're pretty much the only company big enough to tackle large-scale rollouts nationally. These smaller, municipal efforts have much lower barriers and will ultimately provide the market the kind of competitive landscape it needs to begin improving.
a3n 5 hours ago 0 replies      
Good thing we don't get our water from Comcast.
davesque 6 hours ago 1 reply      
Telcos are a disease in the American economy.
pessimizer 8 hours ago 0 replies      
I wish there had been a lot more stimulus, and a lot more projects like Chattanooga.
dsjoerg 5 hours ago 0 replies      
Most interesting part of the article: USTelecom would like "public entities [to] bear the same regulatory burdens as private service providers".

What's that about? What regulatory burdens are these public providers avoiding, and are those burdens in the public interest or not?

Tloewald 3 hours ago 0 replies      
And then, in a fit of pique, he napalmed Cheltenham. Even the police began to take notice.
jgalt212 7 hours ago 3 replies      
I remember the day in the US when the consumer was king, and its interests came before all others (govt, big business, etc.). Sadly that day has passed. Without getting too political, the root cause is unrestrained campaign giving.
hoilogoi 6 hours ago 1 reply      
Counterpoint: try googling "Volstate EPB".

I'm no expert, but I think it would be a shame if small struggling ISPs get the short end of the stick. It's hard for me to make a judgement on this specifically.

whoisthemachine 6 hours ago 0 replies      
* sigh
Leading Anti-Marijuana Academics Are Paid by Painkiller Drug Companies
288 points by Multics  1 day ago   89 comments top 10
refurb 21 hours ago 4 replies      
Drug companies are not trying to stop the legalization of marijuana. Painkiller manufacturers have gotten into a lot of shit for the abuse of their products (some execs almost went to prison and paid $34M in fines personally; see Purdue Pharma [1]). How do they remedy that? By funding anti-drug groups. Unfortunately there are no "bud is OK, but OxyContin is bad" groups, so they fund the ones that are anti-all-drugs. The main goal of funding these groups is to stop abuse of their own drugs, while looking good doing it.

I work in the drug industry. Trust me, none of them consider marijuana a threat. There may be one or two exceptions, but they certainly aren't the companies making narcotic painkillers.


hkmurakami 1 day ago 4 replies      
This kind of indirect monetary "investment" to maintain the moats around your market reminds me a lot of how ridiculously cost-effective lobbying can be for companies like Intuit. $10mm/year in lobbying can virtually guarantee that legislation remains in their favor, which blows any kind of product R&D out of the water in terms of ROI.

It's frustrating and disheartening to read things like this, and yet the evil business side of me can't help but think, "damn that's evil but so smart of them..." :(

robg 23 hours ago 0 replies      
Sounds like they knew what would happen:

States with Medical Marijuana Have Fewer Painkiller Deaths: https://news.ycombinator.com/item?id=8245373

danelectro 13 hours ago 0 replies      
With no single known cure available for a given condition, one of the most useful parameters, if treatment is chosen, might just be simple toxicity itself, depending on how the outcomes are judged by the treated (and the treater, if involved).

In corporations which historically benefit enormously from regulatory and media influence, it should not be unexpected for them to obfuscate or propagandize to influencers and the public on topics such as harm vs. benefit to consumers, especially when the public is becoming threateningly powerful politically on that exact subject.

If you are in the toxic materials business, nature may be against you and depending on ethics, a very profitable approach has been shown to be not only playing unfairly but underhandedly tilting the playing field in your favor at the same time.

Not like there's any question.

shanev 1 day ago 0 replies      
Pairs well with this book review of Bad Pharma:


kazinator 1 day ago 1 reply      
The drug companies fight against any effective remedy that isn't covered by a patent. Besides marijuana, another example is the substance DMSO (dimethyl sulfoxide).


If the remedy has even the slightest shred of controversy attached to it, opponents can latch on to it and blow it out of proportion.

3rd3 23 hours ago 0 replies      
Da passt mal wieder alles wie den Arsch auf den Eimer. (German, roughly: "Once again everything fits like the arse on the bucket," i.e., it all fits together perfectly.)

Not sure what the English equivalent would be.

bayesianhorse 1 day ago 6 replies      
So, the big news here is that pharmaceutical researchers (researching small-molecule drugs like THC) are largely funded by companies who earn their money from a large number of small-molecule drugs. Vice doesn't actually compare pro-legalization and anti-legalization scientists.

I'll also get on the record that I would not recommend using THC containing products for pain relief without medical supervision or advice.

Don't get me wrong: funding bias is a problem, but it gets overstated. The scientific process has to deal with much worse problems, like personal egos, perverse publishing incentives, badly structured career mechanics, and outright fraud. Still, "paying for the right results" is a lot harder than it is often taken to be.

jedanbik 1 day ago 0 replies      
Follow the money.
MichaelGG 1 day ago 7 replies      
The article loses some credibility by saying Zohydro is a new opioid. It's just hydrocodone, without the harmful acetaminophen additive. Nothing special.

Is pot an effective painkiller if you need to really think while getting relief? Opiates don't have psychedelic effects or even the general mental impairment of pot.

Microsoft Defies Court Order, Will Not Give Emails to US Government
263 points by xamlhacker  8 hours ago   86 comments top 22
jnbiche 8 hours ago 5 replies      
The casualness with which the U.S. Government asks a private company to violate EU and Irish law is truly disturbing.

The U.S. Gov has gone mad with power.

And for perhaps the first time ever: bravo Microsoft! I don't even care if you did it for the PR, it's still a brave stand.

burgers 7 hours ago 1 reply      
> Judge Preska of course feels differently, and she has consistently agreed with the prosecution argument that the physical location of email is irrelevant because Microsoft controls the data from its base in the United States.

I find this bit very interesting. The issue is not that Microsoft is a US company, but that its operations are located in the US. I wonder what effects this decision could have on the US labor market if companies relocate operations in the same way they relocate certain things for tax avoidance.

spydum 49 minutes ago 0 replies      
So I wonder, if Microsoft wins this appeal, how practical would it be to stripe encrypted data across data centers in two or more countries? The idea being that obtaining the stored data would require legal authorization in each country.
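A naive version of that idea can be sketched as a two-way secret split: XOR the data with a random pad, store the pad in one country and the XORed result in the other. (A hedged illustration only; the function names are made up, and a real system would use proper threshold secret sharing and key management.)

```python
import os

def split_across_jurisdictions(data: bytes) -> tuple[bytes, bytes]:
    # One-time-pad split: each share alone is indistinguishable from
    # random bytes, so a court order served in only one country
    # yields nothing readable.
    share_a = os.urandom(len(data))                        # stored in country A
    share_b = bytes(x ^ y for x, y in zip(data, share_a))  # stored in country B
    return share_a, share_b

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    # Recovering the plaintext requires both shares, i.e. legal
    # authorization in both jurisdictions.
    return bytes(x ^ y for x, y in zip(share_a, share_b))

email = b"Subject: quarterly numbers"
a, b = split_across_jurisdictions(email)
assert reconstruct(a, b) == email
```

Shamir's secret sharing generalizes this to k-of-n shares, which would let a provider choose how many jurisdictions must cooperate before the data can be produced.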
ryanburk 8 hours ago 2 replies      
if the ruling is upheld, web services that face legal discovery like google, dropbox, facebook, microsoft, etc will face an amazing burden of data retention cost.

there is an amazing tax already on these services having to implement per government specific retention policies based on where they do business. for example in ireland, by law you need to be able to produce up to a year of content even if an account has been deleted. in the u.s. the period is much shorter. so if other countries create similar legislation after seeing a u.s. version of this law stick, everyone will have to implement a myriad of retention policies, or worst case retention, in every datacenter they operate. it drives up cost and complexity in the services.

this might not be popular to say, but microsoft taking a stand here is an amazingly good thing for our industry.
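The worst-case-retention point can be made concrete with a toy sketch (only the Ireland figure comes from the comment above; the other numbers are purely illustrative assumptions):

```python
# Hypothetical per-country retention requirements for deleted accounts, in days.
# Only the Ireland value reflects the one-year figure mentioned above.
RETENTION_DAYS = {
    "ireland": 365,
    "us": 180,
    "germany": 90,
}

def required_retention(countries) -> int:
    # A provider that cannot segment data per jurisdiction must keep
    # everything for the longest period any country it operates in demands.
    return max(RETENTION_DAYS[c] for c in countries)

print(required_retention(["us", "germany"]))  # 180
print(required_retention(RETENTION_DAYS))     # 365: one strict country sets the bar for all
```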

mercurial 8 hours ago 3 replies      
> Despite a federal court order directing Microsoft to turn overseas-held email data to federal authorities, the software giant said Friday it will continue to withhold that information as it waits for the case to wind through the appeals process. The judge has now ordered both Microsoft and federal prosecutors to advise her how to proceed by next Friday, September 5.

> Let there be no doubt that Microsoft's actions in this controversial case are customer-centric. The firm isn't just standing up to the US government on moral principles. It's now defying a federal court order.

Whoever wrote this clearly didn't bother wondering if, just maybe, handing out customer data "overseas" ("overseas" apparently means Ireland) would be illegal under EU and Irish law. But let's not let minor details like this get in the way of good PR.

thrownaway2424 2 hours ago 1 reply      
This is interesting, but let's give the cheerleading a break. What we're really talking about here is corporations testing the size of their stick versus the government's. The feds are pursuing a US case against an American entity, and the data in question is held by another American entity, which happens to have moved it to Ireland. Well, why did they do that, and when? Was it always there, and will it always be there? In what country is the data chiefly accessed? If it is sent and received by Americans exclusively, then perhaps the place where it is nominally stored might not even matter. In that case the place of storage would be just the kind of corporate fiction that courts are happy to pierce.

What if the data is striped among all the countries where Microsoft has datacenters? Do you get the union of all possible data protections? Or the intersection?

There are actual legal questions here and Microsoft's position is not neutrally good.

mindvirus 46 minutes ago 0 replies      
This raises the question: if the judge's opinion ends up being upheld, would any non-US-based company buy services from a US company?
serve_yay 7 hours ago 3 replies      
Not that I don't respect the decision, but something tells me that we would be less happy, in other instances, to see giant companies like MS decide when the law should apply to them.
mnglkhn2 5 hours ago 1 reply      
Maybe I've missed it, but does the requested data belong to a US or a non-US resident?
jrapdx3 3 hours ago 1 reply      
This case may be the leading edge of a huge wave with a global sweep.

The sticky point may be that the locality of data is impermanent and ambiguous. In the MS case, though the data is said to be stored on a server in Ireland, it could just as well be distributed, moved or duplicated anywhere, and for all we know it already has been.

Eventually laws will have to come to terms with the implications of the Internet: data, like a flock of migratory birds, goes from one place to another for its own reasons and knows nothing about national boundaries.

simonblack 2 hours ago 0 replies      
I cancelled my Dropbox subscription several years ago for precisely this sort of situation. Not that any of my files are particularly wonderful, but the point being that I would no longer have control over other people having any and all access to them.

Microsoft will eventually roll over.

aikah 4 hours ago 0 replies      
Impressive move by Microsoft. Frankly, I'm more inclined to use MS cloud services if they challenge US court orders on a regular basis. Does anyone know what they risk?
wfunction 3 hours ago 0 replies      
Am I the only one who's worried this may make the government less careful about giving orders in the future? (i.e. won't they figure "hey, let's just give the order; if they disagree then they'll defy it"?)
hartator 8 hours ago 2 replies      
Maybe I am some kind of sheep, but this kind of stand makes me strongly reconsider Microsoft as a platform of choice over Apple.

Bravo Microsoft.

notastartup 7 hours ago 2 replies      
Good job for Microsoft being the first to stand up against a surveillance government. If only everyone else was brave enough to follow, we would see change.
edoceo 8 hours ago 0 replies      
+1 to MS!
jrochkind1 3 hours ago 0 replies      
Thank you, Edward Snowden.
hellbanner 6 hours ago 0 replies      
This is because the USG already has backdoors, right?
yutah 3 hours ago 1 reply      
So I guess the US government is not logging everything yet... so that's two pieces of good news.
niels_olson 7 hours ago 0 replies      
Watch my left hand waving while my right fist delivers a body blow.
okasaki 7 hours ago 1 reply      
They give it to the NSA, and the NSA shares it with other govt. bodies through that search engine (and probably a dozen other ways).

Anyway, it always freaks me out a bit when people cheer a megacorp like MS. They're not fighting for you, they're fighting for your perception of them. The faster you cheer, the less they'll do.

venomsnake 7 hours ago 1 reply      
I have a feeling that the USG already has the data they need and is just running "parallel discovery/whitewashing" here.

Still it is nice to see MS take a stand.

Readings in Databases
262 points by luu  2 days ago   44 comments top 14
tokenrove 2 days ago 1 reply      
Great list. Almost anything Jim Gray wrote, or any of Michael Stonebraker's C-store stuff, is worth reading.

Database Systems: The Complete Book by Garcia-Molina, Ullman, and Widom seems to be one of the few full books with decent coverage of implementing a database, even though anyone wanting to implement a modern database will have to go a long way from what it covers.

There are a ton of great papers out there easily accessible, though, at least in part thanks to the VLDB Endowment. The people at MS Research working on databases publish a lot of great papers, too.

rxin 1 day ago 2 replies      
Author of the GitHub repo here.

I wanted a list of papers that would be essential in building database systems. It was a little bit sad to see many members of the community rediscovering and reinventing wheels from 30+ years of relational database and systems research. This list of papers should help build a solid foundation.

etrain 2 days ago 1 reply      
See also: http://www.cs286.net/home/reading-list - Joe Hellerstein's graduate database class.
walterbell 1 day ago 1 reply      
These lists (of lists ..) are reinventing early Yahoo/DMOZ/webrings. If lists were available in a parseable format with accurate metadata (title, author, date, publisher), one could monitor github RSS and generate a local master list for analysis.

Related: https://github.com/inukshuk/jekyll-scholar

AlisdairO 1 day ago 1 reply      
Wonderful list! The C-Store papers are particularly fine reading.

For people who looked at this wanting to get going with using database systems (as opposed to creating them), I'd recommend:

    Learning SQL by Alan Beaulieu
    SQL Performance Explained by Markus Winand
    SQL Cookbook by Anthony Molinaro

The three of these will take you up to a pretty useful level of SQL knowledge/understanding. The first two are fairly easy reading to boot.

St-Clock 1 day ago 2 replies      
I really loved reading "CAP Twelve Years Later: How the "Rules" Have Changed".

This sentence nailed what I thought was wrong with some early decisions in NoSQL systems: "because partitions are rare, there is little reason to forfeit C or A when the system is not partitioned."

ahmett 1 day ago 1 reply      
The Paxos paper is in the "Basics" category as if it's meant to be a joke. Even the "Paxos Made Simple" paper is not easily understood by graduate students, as many studies have shown (see the Raft paper for a study on this).
handojin 1 day ago 1 reply      
Chris Date, Database in Depth

No idea if legal...


checker659 1 day ago 1 reply      
If you'd like to learn about databases (at least RDBMSs, anyway), there's a paid course at Harvard's Extension School. It's offered online for graduate credit at $2k a pop.

Link : http://www.extension.harvard.edu/courses/database-systems

jmcatani 1 day ago 1 reply      
Is there anything like this list for Operating Systems?
justinmk 1 day ago 1 reply      
link to "Anatomy of a Database System: Joe Hellerstein"[1] appears to be broken.

[1] http://mitpress.mit.edu/books/chapters/0262693143chapm2.pdf

sean_the_geek 1 day ago 0 replies      
Nice list, bookmarked! I am going through the SHARK paper and would say that it's definitely worth a read.
ExpiredLink 1 day ago 1 reply      
Readings in database research.
jedanbik 1 day ago 0 replies      
Really looking forward to reading these. Thanks for contributing.
What happened to Motorola
254 points by marban  3 days ago   92 comments top 10
m_throwaway 3 days ago 1 reply      
(Sorry for the throwaway; I still work for the same company and we still have various relationships with each other.)

I worked from about '04-'08 building third-party mobile software for a large variety of manufacturers and devices. Remember, this was pre-iPhone, so there was a big business in building component X for device Y on platform Z.

In my experience of device manufacturers:

Nokia was insanely arrogant; the google of its day. Sony Ericsson were quite good, it showed that they wanted to build a quality experience.

The Japanese manufacturers were absolutely insane. They would routinely produce P1 bug reports in the style of "we opened the X application repeatedly. On the 1154th time it crashed on start-up. Sometimes."

We had a few projects with Motorola together with US carriers. They were total nightmares - the Motorola engineers seemed like they had been picked up from the streets of Hyderabad the day before. (This is not a racial prejudice: most of the Indian engineers I've come across in the industry have actually been remarkably talented. This is not valid for the Motorola engineers though.) Most of these device projects were very high profile in the US, and presumably important for Motorola; yet they couldn't or wouldn't muster better staffing.

cwal37 3 days ago 4 replies      
This is really a fantastic, in-depth article, and highlights a lot of pieces about Motorola's history of which I was totally unaware. The breadth of their innovations was far greater than I realized, and the early decision to set up a world class shop in China seems to be very visible today in that area of the world.

That the internal competitions eventually led to internal war between different "tribes" doesn't totally surprise me. It seems you always read about that eventually happening in the post-mortem of any company with that type of structure. I wonder if there's a good way to balance internal competition. I imagine you would have to keep close watch on the overall silo-ing of each department.

I'm from the Chicago area, and I remember driving by their campus many times over the years as a kid, only really having an idea that they were somehow involved in phones. I was in high school when the Razr came out, and based on how popular it was I thought Motorola was an absolutely world-dominating company, not a business on the rebound from extremely heady heights.

bane 3 days ago 2 replies      
It really is shocking what a shadow of its former self Motorola has become. There was a time, not all that long ago that Motorola CPUs were a really valid alternative for a huge percentage of personal computers, that's really impressive.

Atari, Commodore, Sega, Apple, SNK, Sharp, Texas Instruments, Sun, and more all made significant systems with their chips.

sirkneeland 3 days ago 1 reply      
As a Nokia employee, reading this article is like viewing my own corporate history through a funhouse mirror.
MIKEMAC972 3 days ago 1 reply      
Great article and very interesting comments. One big takeaway for me is the fact that Google retained ownership of all patents after they sold the business to Lenovo. It'll be interesting to watch Google utilize those across their entire business, not just on the mobile side.
fndrplayer13 2 days ago 1 reply      
As a Chicagoan who worked in another closely related telecom company, the presence of Motorola was huge. Not so anymore, but it's certainly a 'hallowed' name in the area. You definitely hear a lot of stories regarding the way bonuses used to work, and what it was like working for a company that was sort of "Google-ish" for its day. It's sad what's happened to Motorola, but that is the way the economy seems to work. Another great unmentioned Chicago telecom name here is Bell Labs. Chicago Mag could certainly write an entire article about the storied rise and fall of that company too.

It just seems like all the Chicago technology greatness has melted outwards to the coasts. Silicon Valley and NYC (as well as foreign businesses) tend to dominate the areas that Motorola, Bell Labs, etc used to rule.

immy 2 days ago 0 replies      
I interned in PCS 2002-2004. Android wasn't out yet (but the Danger Sidekick was) and Moto was already working on a Linux+Java OS. That project was the division's great hope, but it missed deadline after deadline.

Cramming a Java OS onto 2004 mobile hardware, very risky choice of a savior.

Half the interns used Treo, Sidekick, LG. Half had a Razr.

silverlake 3 days ago 1 reply      
I was at Motorola from 90-98. I recall that they were a bloated engineering bureaucracy that didn't understand software at all. The other thing is everyone knew Chris Galvin was going to run the company. Lots of executives left during that period. Finally, the iPhone killed everybody: Nokia, Ericsson, Blackberry. It was inevitable. When markets make big shifts, big companies can't adapt.
mikeash 3 days ago 4 replies      
The rise and rapid fall of Iridium fascinates me. They poured so much money into something with so little chance to succeed. And yet, it survives to this day. Essentially, the original investors inadvertently gave the system as a gift to the world.

One story that has stuck with me goes that there was a presentation pitching the business case for Iridium. One part of the presentation goes, cell phone usage is projected to rise by X by the time Iridium becomes operational, leaving the market of Y - X for Iridium. Another part of the presentation goes, cell phone usage has consistently outstripped expectations by a factor of four, therefore our upside is even greater than we might expect. Apparently not realizing that the two claims contradicted each other, and that if cell phones really were growing so fast, it would mean little would be left for Iridium, which is of course what happened.

But I can't remember where I saw it, and I can't dig it up now. Anyone happen to know if I'm remembering anything remotely close to reality, and where I might find info on it?

rayiner 3 days ago 2 replies      
tl;dr: We gave away our competitive edge to the Chinese in return for goosing short term profits, and are now paying the price.
Myths About Working as an Engineer at a Startup
256 points by karlhughes  2 days ago   135 comments top 23
rayiner 2 days ago 5 replies      
From the perspective of someone who worked for a startup as his first real job:

> You are also likely to get some input on the way future engineers are hired and the way your technology team interacts with the business team.

This is true, and not entirely a good thing. I was interviewing new technical hires within a year or two at the organization. I asked stupid nerd questions (if you knew C++ like you claimed, you'd know the ins and outs of this template expansion), instead of important ones like: if you're stuck, will you ask for help, or just silently beat your head against the wall and not have anything to show weeks later?

> An environment that encourages learning and experimentation keeps engineers more motivated than one that stifles its technical talent.

It's also an environment that encourages reinvention of the wheel, especially by younger engineers with little supervision that don't necessarily realize that a particular wheel already exists and is ready to use.

> One of the best skills you can learn if you intend to work for a startup is the ability to figure out things on your own.

This is an existential issue at a startup, and particularly important if you're fresh out of college, because you don't know anything and thus must learn everything. This is both good and bad. On one hand, I learned a lot more in a short time than my friends that went to work for big organizations. On the other hand, I always felt like I never really learned how to do things "right" because a startup is a setting where you don't necessarily have the time to dot your I's and cross your T's.

> It seems like startups move faster and create solutions to difficult problems more efficiently than large companies, but the truth is that they normally have a lower quality threshold than their corporate counterparts.

This rings true to me. It was really a revelation to me when I joined a large organization for the first time. It was a well oiled machine, with roles for everyone and someone whose job it was to do anything that needed to be done. Startups have many advantages, but they're not necessarily productive or efficient places.[1] You can spend a lot of time yak shaving at a startup.

Anyway, I'd do it over again in a heartbeat, but I'm not sure if I'd advise someone to go to a startup straight out of school.[2] Maybe one of those well-funded, well-established ones where they're far enough along to have real internal processes and real management, but I'd save the "three guys in a basement" stuff for later, when you know what you're doing. Again, YMMV.

[1] Of course, I imagine they can be if we're talking about a shop run mostly by highly experienced engineers. But in general, I think big organizations are better at getting more output from less-skilled labor.

[2] At least if you intend to make programming a long-term career. If your intention is to jump over to the business side relatively soon, that advice would change. You'll get far more exposure to the business side at a startup than you will at a large company.

silverbax88 2 days ago 10 replies      

Rock star engineers sometimes work at startups, but usually they work for big companies who can pay them twice the salary and have multiple indoor water polo pools and racquetball courts.

THIS is something the 'startup world' and a lot of people on HN are not aware of. Really, really bad-ass programmers usually work for big corporations for good salaries. Not always, but usually. The reason I mention it is because there is a myth that startups are built with the latest, greatest tech by the brightest minds. Often this just isn't true...many startups are hacked together by inexperienced programmers fresh out of college. Sometimes this works, but often it means that nobody on staff is really a master coder, nobody on staff has ever had to fend off Russian hackers attempting to take down your code at 3 am, nobody on staff has had to deal with thousands of users who wake up one morning, can't log in and no money is coming into the company. These are the battles that most programmers learn in the trenches of big companies, fighting tough battles, fixing complex problems under fire. It's not ideal, but I'm not sure if you can really become battle-hardened without going through real battles with real jobs on the line.

This is what happened at Twitter - when the government came after them because they were getting hacked too often, Twitter went out and hired 'real' programmers and got things working. That's something that gets missed a lot in the 'startup story'.

bane 2 days ago 0 replies      
I've bounced back and forth between startups (5), small companies (2) and megacorps (3), and so might have some unique perspectives.

The truth is that for most of the world, bigcos are the default, and startups have to sell themselves as employers in order to get employees. They have to find some way to offset the inherent employment instability that working for a startup entails. This has created many of the myths in this article that continue to be perpetuated.

Some observations:

Like a few other comments, I've noticed that the absolute best-of-the-best work for "real companies" and not startups. The reason working for a bigco doesn't get the reputation of having the best engineering talent is simple: talent, like any other attribute of a group of people, follows a pyramid distribution. Most of the people are terrible and at the bottom of the pyramid, and as you move up the pyramid you end up with smaller and smaller populations of very good people. Except you can't have fractions of a person, so in a bigco, where a project might involve 100 engineers, that top-notch engineer accounts for only 1% of the staff. Sure, maybe 80% of the staff may not be any good, but that still leaves you with 20% who are at least decent work-a-day engineers.

In a startup, your entire engineering staff might be 10 people, and your top-notch engineer now accounts for 10% of all the staff time spent on a project. It's almost like having 10 top engineers instead (if you try to scale the numbers to match). It's easier to find 3-4 above average folks to round out the top of the pyramid and the rest are just there to fill in the mortar and won't last long.

This means that your individual contribution is magnified in a startup. Everything is magnified in a startup. Decision making power, individual contribution, etc. But so are negative aspects of working in a company. When things go bad, they go bad really fast in a startup. The worst political dramas I've ever seen were all in startups. Points of failure are enormous in small startups.

One of the thing that happens as companies mature and grow is they learn they can no longer tolerate so much magnified failure, so they build in controls to spread around responsibility so they can better survive what would be a catastrophic failure in a startup (founder leaving, tech decision turns out to be wrong 18 months in etc.).

The side effect is of course that in order to turn down this magnified negativity, the magnified good stuff also gets squashed.

gaius 2 days ago 2 replies      
3 times I have worked for a small company that had a liquidity event, as an early employee. That already makes me incredibly lucky, that those companies didn't sink without trace. In each case, my stock was worth between 3-6 months of my then salary. Which was a nice chunk o'change to be sure, but not FU money, not by a long shot.

Now I work for big companies for a good salary. I put in the hours to get the work done, but I'll never do 80+ hour weeks again unless I own a ton of equity in the venture.

The truth is that startups, like the music biz, video games, and PR, require a steady stream of young people with unrealistic ideas who can be exploited. Don't let yourself be played is my advice now to young engineers. Especially don't take career advice from a VC or a founder...

hox 2 days ago 4 replies      
Things not addressed that I think are important:

1. Salary. Startups always seem to offer lower salary in return for said equity.

2. Career development. Startups demand that you build somewhat hacky solutions for the sake of time. In many instances you might not get the opportunity to learn how to do things right. Too much of this and you get used to building hacky products.

3. 80-hour work weeks might not be the norm, but what about 60-hour work weeks? That's a deal breaker for some.

4. Diversity. Startups are notorious for hiring people just like the founders.

cik 2 days ago 0 replies      
Trading salary for equity is almost never worth it, as a basic reality. Clearly it occasionally is, or the system would collapse.

Let's say you give up $40k in salary a year (in exchange for equity), for 5 years (call that $200k, just for fun). Assuming you're diligent, that salary difference can be invested, even in something as simple as preferred shares, yielding a compounded return of +/- 5%, or ~$232k. So we're not really talking about high levels of risk.

Assuming your startup, like most others, has 6 funding rounds before an exit, and that insiders hold the traditional 4% at exit, we'd be looking at someone who started with a full 1% of the company (at the initial, pre-dilution point) yielding ~$341k over the same period.

Given that 1% holdings are rather high (0.5% or less is more typical), that in turn means that unless your company exits for ~$275M over this 5-year period while you hold 0.5%, it is FINANCIALLY not worth being part of that endeavour.

That being said, there are several other great reasons to enjoy startups.
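The back-of-envelope comparison above can be sketched in a few lines. This is a rough illustration only: the 5% return, 5 years, and 6 rounds come from the comment, while the ~26% dilution per round is a hypothetical figure chosen to land near the comment's ballpark numbers.

```python
# Future value of the forgone salary, invested yearly at ~5%
# (annuity-due: each year's $40k is invested at the start of the year).
salary_gap = 40_000
years = 5
rate = 0.05
fv_salary = salary_gap * ((1 + rate) ** years - 1) / rate * (1 + rate)
print(f"Invested salary after {years} years: ${fv_salary:,.0f}")  # ~$232k

# Equity side: a 0.5% initial stake diluted over 6 funding rounds.
# The ~26% per-round dilution is an assumption, not from the comment.
initial_stake = 0.005
dilution_per_round = 0.26
rounds = 6
final_stake = initial_stake * (1 - dilution_per_round) ** rounds

# Exit value at which the diluted stake merely matches the invested salary.
breakeven_exit = fv_salary / final_stake
print(f"Breakeven exit: ${breakeven_exit / 1e6:,.0f}M")
```

With those assumptions the breakeven exit comes out in the same neighbourhood as the comment's ~$275M figure, which is the point: the salary side wins unless the exit is large and the stake survives dilution.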

jacquesm 2 days ago 3 replies      
A myth I miss from the list: "profit sharing is just as good as equity."

And: "don't worry, you will get your shares."

ehurrell 2 days ago 0 replies      
I'd agree with almost all of those. The biggest thing I'd say is that, as with all startup jobs, risk is a factor you shouldn't ignore, which is why I somewhat disagree with the last point.
computerjunkie 2 days ago 2 replies      
As a recent graduate who is actively looking for a job, this is an eye opener. Many thanks to the author for explaining what really happens in "Start-up life".

What really struck me was the "full-stack rock star hacker" part. As someone who has little to no experience in commercial software engineering and then sees this phrase in many start-up job listings (along with ninjas and gurus), it really sets an image that start-up work is only for incredibly bright individuals who know it all.

I also believe some of the unrealistic expectations start ups have are one of the major reasons why they fail. [0] "Rome wasn't built in a day".

Tolerance for long hours depends on the kind of person you are. 80-hour work weeks may be productive for some people whilst others perform better with 37-hour work weeks (or less). We are all different: some athletes are born marathon runners, whilst others are natural short-distance sprinters. I think this is where most start-ups also get it wrong, and it's something that contributes to their failures. Long hours once in a while are fine, as long as you are compensated for them.

The media is also a major contributor to perceptions of what start-ups are like. As mentioned in the article, you only hear about major acquisitions, huge crowdfunding successes, and crazy company valuations. It's a lot less common to hear about the failures of start-ups, or people burning out before their first product release. Why does the media never cover these failures?

As much as I would like to join a start-up or even start one, I will continue looking for work at small-to-medium companies and big companies first, to get good experience and still have a personal life outside work. I guess it's different for other people, but I value work/life balance and I also think it's the key to having a successful career, be it entrepreneurial or employee-based.

[0] https://en.wikipedia.org/wiki/Rome_wasn%27t_built_in_a_day

karcass 2 days ago 1 reply      
Having done nine startups now, all that rings pretty true to me. :)
frobozz 1 day ago 0 replies      
>Truth: you'll have more freedom to choose the projects you work on and how you do them

This is contrary to my experience. In startups, any time you spend not developing the core product in the quickest way possible (i.e. the currently established way; don't go exploring whether a new way might be quicker) is seen as time ticking away on the death clock.

In a large corporate environment, I've been much freer, because we can afford to play the long game. I've been able to choose projects, define completely novel ones, choose different technologies to use, rather than the established stack (and indeed, change what the "established stack" is).

Ragnarork 2 days ago 1 reply      
As someone who's currently in his second ever job, which is in a startup (the first was in a startup as well), I find most of it quite accurate, but there'll always be edge cases. The startup world is really heterogeneous and it's hard to draw conclusions about it that won't be really vague.

Just a small point I'd like to comment:

> Truth: you'll get to set the culture, standards, and technologies used

This is quite funny because both startups I worked for were completely different on this point. In the first one I was the only developer, working on an application I created completely from scratch. I could decide entirely on my own which technologies/tools to use (heh, being able to start a project in C++11 was really sweet). The startup I'm in currently, on the other hand, is rather resistant to change in what we use, even though we have tried to push changes with arguments for how they could speed up work and eliminate some recurrent pain in the development process.

Apocryphon 2 days ago 3 replies      
"Big companies put junior engineers through training programs, send them to advanced classes, and make them sit through certification tests, but I've never seen any of those at a scrappy startup."

They do? Is employee training (outside of on-the-job learning) even a thing in engineering anymore?

breaksyourheart 2 days ago 1 reply      
What about the myth that you're doing something important?
mrbird 2 days ago 0 replies      
Andy Rachleff has great advice for people looking to start a career in startups: don't make a brand-new startup your first job; rather, look for a mid-sized, growing company, where you'll learn some valuable skills and set yourself up for better opportunities in the future.


The list is getting a bit out of date, and he's writing for business students, but I think the advice is very sound for engineers as well. If I could do it over again, I would have tried to find a similar path.

rokhayakebe 2 days ago 8 replies      
Would you rather work for a startup (vs. an established co.) that pays you 130% of market salary without equity, or take 90% with the possibility of great upside?

Edit: I am asking because I think startups should pay more than established companies, for several reasons: 1) the work environment is usually shitty (nice, clean offices are nice), 2) there is no financial security because they are not established, 3) no one outside the startup world wants to hire someone who has failed numerous times before, 4) it is much easier to go from a big co to a small/tiny co than the opposite. (4) is particularly true, and quite dangerous for recent grads. Young recent grads are better off founding their own company or joining a big co instead of joining fresh startups.

steven2012 2 days ago 4 replies      
I disagree with the assertion that most "rock star" engineers work at big companies. The only big companies with true "rock stars" are Google and Facebook, and probably Microsoft too. All the other "big" companies in the valley started with rock stars, who probably got rich from options/IPOs, then got bored and left for another smaller company. Most companies have a huge brain drain after the 4-year vesting period following the IPO. And this happens especially if the nature of the business changes, or if management comes in that loads the company down with process, slows down innovation, etc., basically making the company less fun to work at.
legohead 2 days ago 0 replies      
As for culture: it's great in the beginning, but as the company grows to medium and large, communication begins to break down and bureaucracy creeps in, poisoning the old culture.

With more people being brought in (usually at a rapid pace as you hit your growth spurts), you have to adapt to more personality types, and company culture and feel will suffer as a result.

swayvil 2 days ago 0 replies      
Engineering school : Work em like mules. Weed out the misfits.

Engineering job : Work em like mules.

squidmccactus 2 days ago 1 reply      
So will I get a big payday from company shares even if my startup isn't very successful?
porter 2 days ago 1 reply      
How much equity does the average early employee get? How about the average late employee? And what about a company like Twitch, which just exited?
quarterwave 2 days ago 0 replies      
Well written, tallies with my start-up experiences.
yeahsathish 2 days ago 0 replies      
I Agree! Totally!
Ask HN: How to start earning $500/month in passive income in next 12-18 months?
234 points by rtcoms  13 hours ago   169 comments top 42
simonhamp 12 hours ago 8 replies      
I run Built With Bootstrap (http://builtwithbootstrap.com), which brings in close to $2,000 on average each month in "mostly passive" income.

How? Here are a few things that have helped:

1) I got lucky - I jumped on something early and got picked up quickly and rode on the back of a giant gorilla as it kept getting higher and higher - right place, right time

2) I keep costs low - I use Tumblr (free) and cheap, pluggable services like Campaign Monitor, Wufoo, Buffer, and IFTTT to automate a lot of the process.

3) Don't be afraid to ask for money - I started doing this very early on, but even that wasn't soon enough! People will pay if there's a benefit, so don't be afraid to ask.

4) Keep it simple - I write brief emails, I don't respond to everything and I only spend a few hours each week on the site - updating content, checking stats, emailing etc.

5) Get multiple sources of revenue - I use a number of affiliates as well as offering traditional advertising.

6) Be social - Twitter, Facebook, Tumblr, Pinterest, Google+ - BWB is on all of these. I try to engage with people and respond to things

7) Start a mailing list - this is not to be overlooked!

I hope this gives you some helpful ideas :) don't be afraid to ask any questions.

patio11 11 hours ago 2 replies      
If you're willing to work five hours a month on your passive income project, then I suggest doing five months of work on Passive Rails Consulting, as that has minimal execution risk.

Can I make a suggestion? Most people who say "passive income" spend a lot of time fantasizing about the lifestyle relative to actually producing value. It is not terribly difficult to produce value as a programmer. Concentrate on producing value. If you achieve that, you'll trip over $500 a month. Most of the straightforward ways involve identifying a problem for a class of business that you're positioned to solve and then solving it in return for money.

If you want concrete suggestions with regards to markets and form factors, in lieu of repeating myself I'll post a link to an old comment: https://news.ycombinator.com/item?id=5904316

apdinin 9 hours ago 0 replies      
Since you're a developer, you already have the most important asset, which is the ability to basically create anything (web services related, obviously). From there, it's "just" a matter of finding something to build that other people will value and then telling as many people as possible about it.

The best way to figure out what to build is by thinking about what YOU would find useful, and then build that. Don't get too big with the idea, though. Just analyze your day-to-day routine and ask yourself what kind of little piece of software would make your day 1%-5% less annoying.

For example, I'm a developer but also a startup founder. I've wasted entire days doing repetitive email follow-ups to investors, partners, customers, etc., which means I wasn't committing code. So I put aside a weekend and built a system to automate my email follow-ups. After it worked well for me, I showed it to some colleagues, they started using it, too, and before I knew it, I had a nice little SaaS app going. With another weekend of work, I added a frontend and billing system, and I launched it as https://autopest.com.

(I'm including the link at the suggestion of some of the other folks in this thread, and also to show how it matches well with their advice -- target B2B, build a SaaS app, keep it simple, rely on quick solutions like Bootstrap and Stripe, etc.)

Step two is getting people to it. Best way I found for that is social media -- especially Twitter. It only takes me 15 minutes a day to be "active" on Twitter, I can easily target BizDev people and GrowthHackers (my target audience), and slowly but surely, they start signing up. It's been a few months and I'm on pace to hit your $500/mo target in the next 30 days.

Best of all, because I built something that I REALLY WANTED, even if no one ever pays me another penny, I'll still come out ahead because the thing I built works really well for me.

xinwen 8 hours ago 0 replies      
It's not easy. I had the same plan a few months ago, so I thought I'd share my own (ongoing) story; maybe it will give you some ideas. Some background: I'm an engineer at a YC company in San Francisco. I was also looking for a passive-income side project, and after a few attempts from scratch sputtered, I happened upon a website auction on flippa.com for an interesting webapp: www.postrgram.com

I spoke with the owner over Skype about it. He'd run the site for a couple of years, had invested a lot of time into getting the licensed mosaic software tuned correctly, but had put virtually nothing into marketing and was still printing orders himself on a giant industrial Canon printer. Not surprisingly, he was tired of it. I realized the printing process could be automated and the business could potentially be expanded by integrating with Facebook and offering a free digital option if the customer allowed a post on their wall. Long story short, I bought 80% of the business a few weeks ago and I'm working on those things right now. I'm sure there will be snags along the way, but I'm on my way toward my primary goal: getting a product on the market that will not require my time to run day-to-day. Currently income is less than $1,000/month, but I hope to see that grow.

My advice when thinking about a project like this yourself (and it's fine to start from scratch, though that's not what I did) is to take the basic tenets of running a startup to heart and just apply them at a micro scale:

1) Let the real world inform your choices. In my case I happened upon a product that already had some validation. In your case maybe you just need to find that one pain point you can help solve. Always be thinking of ideas, ask your friends, read a lot.

2) Be efficient. Get good at rapid prototyping and shipping ideas for validation. Always ask yourself: is this the most valuable thing I can be doing with my time right now? Force yourself to move fast. You'll get better at learning what works and what doesn't.

3) Consider finding a partner you can join forces with. Two people can be more effective than the sum of their parts. Not to mention that expanding your network of friends and contacts is in many ways more valuable than wealth.

4) Follow the money. It sounds crass, but after all it is the goal, and it's also the most tangible effect of providing value to someone. Even at a micro scale, if you're not converting customers it's a red flag.

5) My personal style is to be wary of saturated markets like social networks, mobile apps, etc. On the flip side, I personally feel there's potential in the blogging landscape and in popular product integrations (like widgets).

Sorry that ran a little long; I'm on a road trip right now (not driving), so these are just some stream-of-thought ideas. Good luck!

Red_Tarsius 13 hours ago 7 replies      
I'd give a read to Start Small, Stay Small (www.startupbook.net) and other startup literature. Study Paul Graham's essays (www.paulgraham.com).

Random ideas:

- Start something like skoshbox.com, but with Indian goodies. You may not realize it, but foreign countries crave Indian tastes and flavors. I would probably pay for a monthly delivery of Indian spices and sweets.

- Start a podcast on a passion of yours. Offer free newsletter and video lessons, then charge for premium content or up-to-date episodes and consulting.

- Omegle meets paint. A collaborative canvas for children of different nationalities to meet on the Internet. No chat, no video/audio just the canvas. Revenue may come from ads, sponsors, educational programs within the platform.

EDIT: to whoever downvoted my post, may I ask why?

santhoshr 12 hours ago 1 reply      
From Bangalore too. Amateur programmer, but I have a ton of experience and networks in a niche sector. I pull in a decent amount of money in passive income.

(1) I would say focus first on value rather than money. I delivered my service for free or on a trial basis for almost 6 months before clients signed up to pay.

(2) As a freelancer, focus on long-term retainer relationships, built on your value-proposition. And work with clients who have solid reliable cash flows. This way, your income would be guaranteed via 1-2 year contracts.

(3) Please stay away from consumer-focused businesses/services if you are looking for a small side income (those can be the focus of your big main start-up idea). B2B is always better. The only exceptions, I think, are if you get lucky in the app economy or if you can build a 1M+ page-view site (Amit Agarwal).

IgorPartola 12 hours ago 1 reply      
Meta: if you respond with "I run a small project and it is generating X per month for me", would you please provide links? I know it seems self-promoting but in reality in discussions like this it is very interesting.
jasonwen 12 hours ago 1 reply      
I have several passive income sources. I started a community website in 2006 which generates around $1,500/mo in passive income.

Do you have any ideas for tools that might make your (dev) life easier? You can create a small productivity/utility SaaS app. Use Bootstrap if you don't have any design skills. Use PayPal/Stripe to set up a (recurring) payment system in a day.

Test using lean methodology: buy a domain; create a landing page with Bootstrap, a pricing page, and a fake sign-up page. That takes a week, max. Then spend another week driving some traffic from Google/Facebook ads to see if there's any interest. Usually Facebook, if targeted well, is way cheaper. Start using Google once you know your LTV (lifetime value of a customer) and conversion rate. Start by spending, for example, $20.

One tip: don't target US customers on Facebook when polling for interest; they are harder to convert and, from my experience, around 10-20x more expensive than Asian and Spanish-speaking countries.

After you see interest, try to find a quick way to check whether people are willing to pay. This might not be easy, as most users want to use your product before paying. You might explain the product in more detail after the signup and ask what they would pay for it.

After you are able to confirm that people are willing to pay for your service, only then spend serious time building.

You can apply the same tactic if you're selling an e-book, or maybe the spices idea someone mentioned before.

Good luck!
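The ad-spend sanity check described above boils down to comparing cost per acquisition against lifetime value. A minimal sketch, with every number below made up for illustration:

```python
# Did the $20 test ad spend pay for itself? Compare CPA to LTV.
ad_spend = 20.0
signups = 4                        # hypothetical conversions from the test
cpa = ad_spend / signups           # cost per acquisition

monthly_fee = 10.0                 # hypothetical subscription price
avg_months_retained = 6            # hypothetical retention
ltv = monthly_fee * avg_months_retained

# Worth scaling spend only while LTV comfortably exceeds CPA;
# a 3x margin is a common rule of thumb, not a law.
verdict = "scale up" if ltv > 3 * cpa else "rethink"
print(f"CPA ${cpa:.2f} vs LTV ${ltv:.2f}: {verdict}")
```

With these toy numbers a $5 CPA against a $60 LTV clears the bar, which is exactly the kind of signal the comment suggests gathering before building anything serious.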

silver1 12 hours ago 3 replies      
Either you can sell your own product/services or sell someone else's product/services ...

- Take advantage of growing e-commerce in India and sell other people's products: buy directly from manufacturers and sell to consumers through platforms like Flipkart, Snapdeal, etc. Most manufacturers in tier-2 and tier-3 cities don't have a clue about e-comm selling, so help them out and make a good return.

- Start an affiliate business (selling other people's products/services) online.

- Create your own product/service and sell it through your own e-comm store, or through Amazon, eBay, or Yahoo stores.

- Start writing a blog and make it so popular with your amazing content that you can make money through ads, affiliate marketing, and/or email marketing.

- You can also start a drop-ship store and sell to consumers in North America, Europe, and/or Australia.

- There are small businesses for sale on Flippa (though you need to learn how to find a good one) that can easily make you $6k a year. Find a business that matches your interest/passion.

Good Luck!

csomar 13 hours ago 2 replies      
You don't want to earn a passive income; you want to leave your job. Fine. Except that looking for a passive income stream is not the best idea out there.

If you can contract at $80/hour for US/EU clients (really possible), you'll need to work only 40 hours to cover 6 months of expenses.

But that probably won't be enough for even one month once you start earning it. You've been warned ;)

revorad 11 hours ago 3 replies      
There is a strange obsession here on HN with passive income. Yet I haven't seen any good examples of passive income that were built without significant effort or time. Income is anything but passive.

Here's a better way to look at it:

You want to earn $500/month? As a good Ruby developer, you can earn that much in two days doing contract work. And no, it doesn't matter if you're based in Bangalore, you could be based in Belgaum and it wouldn't matter, as long as you had a good internet connection.

You can make more money doing work you enjoy, instead of trying so hard to find passive income. And you can save up, invest that money, and buy solid chunks of time to totally focus on your crazy startup ideas.

kephra 12 hours ago 1 reply      
Build a pile of small (Ruby) applications: small in the sense that the prototype can be delivered in weeks and only needs a few more weeks till production. So the first step is to find "easy jobs" that you can solve faster than others, and to sort out the bad customers. I offer a 2% discount if they pay invoices within a week. Only two types of customers pay late: those with a cash-flow problem and those with an accounting problem. Drop them! Also drop any kind of toxic customer. Keep the good ones.

Offer the good customers a maintenance contract of $50 per month for the server and the software. A year later you will have 4 or 5 contracts paying half of your bills, in exchange for doing backups once a week, installing security updates, writing invoices, and perhaps occasionally writing an email to the customer or even patching a bug.

You can play around with your own ideas once you have a semi-passive income. I would try to solve a problem with software as a service.

DanBC 13 hours ago 0 replies      
Good-quality videos of construction equipment aimed at children. Put those on YouTube with ads. Link to a website that has affiliate links to stores selling toy versions of the equipment (Bruder is one manufacturer to investigate). Gently AdSense that page. Careful SEO.
mattdlondon 10 hours ago 0 replies      
1) Find a niche area of interest and build a web site and put AdSense ads on it

2) Promote the website a bit

3) Profit.

The trick is working out a niche that hasn't already been done by someone else.

I lucked out with some websites in the UK about fuel consumption (for cars) a few years ago. I invested a weekend of my time and a little promotion effort, and it snowballed: initially it made maybe £1 a month for the first 12 to 24 months, but now I get about 150,000 hits a month, which with ads earns a decent amount of money (more than you are looking for).

euroclydon 13 hours ago 1 reply      
Build curated lists of small businesses in the US. I mean really quality spreadsheets, with city, state, address, owner name, contact name, email, phone, etc. Do this for business categories, like: bakeries, caterers, dentists, doctors, plumbers, etc. For the doctors and such, I would focus on small town entities who are not part of a large medical practice.

Then you sell exclusive, limited access to the list. The internet is virtually littered with small SaaS applications which are targeted toward small businesses. They are selling software for bookkeeping, time tracking, shift planning, appointment reminders, practice management, etc. These SaaS products have LTV numbers such that direct calling is worth their time.

meric 13 hours ago 7 replies      
$500 per month is $6,000 per year. $100,000 in high-dividend stocks may generate close to that much in dividends per year. So contract at $100 per hour for 1,500 hours over the next 18 months, then make a good investment.
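The arithmetic in that comment can be checked in a few lines. The dividend yields below are illustrative assumptions, not recommendations:

```python
# Principal needed so that dividends alone cover $500/month.
monthly_target = 500
annual_target = 12 * monthly_target  # $6,000/year

for dividend_yield in (0.04, 0.05, 0.06):
    principal = annual_target / dividend_yield
    print(f"{dividend_yield:.0%} yield needs ${principal:,.0f}")

# Hours of contracting at $100/hour to gross the comment's $150k figure
# (taxes and living costs ignored, as in the comment).
hours = 150_000 / 100
print(f"{hours:.0f} hours")
```

At a 6% yield the required principal is exactly the comment's $100,000; at a more conservative 4% it rises to $150,000, which is why the comment's 1,500 hours at $100/hour is a rough upper bound rather than a guarantee.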
benmorris 9 hours ago 0 replies      
I haven't given up on passive income, but I try to work smarter, not harder. That said, what I do would be considered semi-passive income. Since consulting work isn't consistent, I've spent about 2 years building up a network of online vinyl lettering and graphics design sites that require little to no daily work on my end. I outsource production and shipping and take a good piece of each sale. On average I devote 30 minutes a day to handling emails, phone calls, and submitting POs. I've automated nearly everything once an order is placed, from generating the vector cut files to submitting purchase orders. My image-generating API does all that magic: http://ionapi.com (closed beta), plus a few of the sites: http://boatdecals.biz, http://letteringhq.com, and http://racegraphics.com. Slowly building on past work over these two years, I've gone from making nothing to being able to live off the income of these businesses.

So my main advice is: start somewhere and don't be so indecisive that you do NOTHING. There are lots of opportunities, especially in small niches. Pick one you love and try to tap into something. Myself, outside of being a developer I know signs and graphics pretty well, so it was a logical direction to go.

AJ007 9 hours ago 0 replies      
#1 This is way easier, and can last longer, if the business involves user retention.

#2 Study other micro-businesses carefully.

#3 Apply an existing model to a brand new area or technology.

Personally I think the passive income concept is crap. It is real, but it's a concept that people who don't want to work hard eat up, so it's used heavily in business-opportunity marketing.

There are things that can produce revenue for a long period after your initial upfront investment, where you work really hard for a while for free or at a great loss (investing money in assets, not just your own time). However, given the nature of online traffic flows and technology, if you use the revenue for personal consumption rather than reinvesting it in the business, one day a few years from now you will wake up no better off than the day you started. In cases where the individual has increased their standard of living or taken on debt, they will end up much worse off.

Not speculation, I have watched this happen to friends.

As a programmer, your best opportunities are most likely writing tools and code that can be resold multiple times. Huge chunks of redundant work are done by freelance developers.

lnreddy 8 hours ago 0 replies      
I'm a Rails developer from Hyderabad, India. I did some freelance work for a client from the USA at $12/hour. Of course, I realised I was making next to nothing after taxes (20%) and oDesk fees, so I quit.

I've been searching for a simple income-generating idea too for the past few months. I'm torn between trying to build a well-paying freelancing career and building a SaaS product/service that will pay over time.

I tried to contact you but your email isn't public. Ping me if you want to bounce some ideas around sometime!

progx 12 hours ago 1 reply      
Play the lottery, win, and live off the money.

This advice is as good as 90% of the other postings I read here ;-)

malditojavi 12 hours ago 1 reply      
Is it just me, or do passive income posts on HN flourish on weekends?
mjnaus 13 hours ago 2 replies      
Since March this year, I have built a semi-passive income of around $1,500 per month (lowest around $1,000, highest $2,000) by building and selling items on CodeCanyon.

Granted, it's not truly passive income, rather semi-passive. That said, with minimal monthly effort, the money comes in each month.

shabinesh 12 hours ago 1 reply      
I am from Bangalore. I freelance as a python developer.

Though this is not passive income, this might give you an idea: billing $20/hr (that's what we get in Bangalore), getting $500 a month from a single client is very easy, and finding a few more clients will get you more than what you want. You will also have a lot of free time. The toughest part is getting the client; good clients usually come from your contacts.

Another thought: finding customers for your SaaS product will not be difficult if you have a good circle. Attend conferences and workshops in Bangalore, which will build you this circle.

I do a day job as an OpenStack dev, freelance on Django, and also work on my own ideas. I am doing this to pursue my passion for travel (digital nomad). :)

zeynalov 9 hours ago 0 replies      
I started a YouTube channel 2 years ago, just for my personal use: when I want to show someone something, I upload it if it's not on YouTube yet. So I invested 10 minutes per month uploading videos I thought were interesting to share. After a year I saw that many people were watching my videos, so I activated ads on my channel. My last video was a year ago, I think, and Google still sends me $200-300 monthly. I've invested only 2-3 hours total on the channel.
lnanek2 10 hours ago 0 replies      
It's easy to make that much from ads and ad-removal upgrade fees on a semi-popular mobile game (Android, iOS). You don't have to be on the charts; I make that easily and only ever made the top 200, I think. Games are a nice category for new entrants because Flurry reports have shown that as a genre they aren't super sticky: users flow in and out of new offerings constantly.
ca98am79 10 hours ago 0 replies      
You could buy domains on park.io and then sell them immediately on Flippa. For example, see: http://blog.park.io/articles/park-io-users-making-money-flip...
bennesvig 13 hours ago 1 reply      
Listen to James Altucher's podcast with SJ Scott. He made $40k last month by continuing to publish several books on Amazon.
tim333 11 hours ago 1 reply      
Sell Indian pharmaceuticals by post? There's some margin there, I'm sure.
duiker101 13 hours ago 2 replies      
Make something that people are willing to pay for. I have a series of projects; each makes a relatively small amount per month, but all together they make a decent passive income. It's not enough for me to live on, but I am sure you can do something similar. Try many ideas and see if you can find something people need.
johnnyio 13 hours ago 4 replies      
Become a French resident and you will have a guaranteed living income of 434 euros (~$600).
thegrif 13 hours ago 0 replies      
Echoing the prior comment: you have to actually do something of value before it can be positioned as the gift that keeps on giving :)
cauterized 12 hours ago 0 replies      
What on earth made you interpret the original post as "exactly $500" instead of "at least $500"? Or are you being facetious?
CraigJPerry 11 hours ago 1 reply      
Kindle singles.
longtime-lrkr 10 hours ago 0 replies      
www.possiblestartups.com A startup idea generator.
scheff 12 hours ago 0 replies      
forget it. you guys are negative.
TomGullen 13 hours ago 2 replies      
Passive income is generally a unicorn
sainib 12 hours ago 0 replies      
What I suggest you do first is stop working on the ideas that will not generate any money...
Thiz 12 hours ago 0 replies      
Everybody in the world loves incense.

Make a beautiful webpage with a mystic style and offer all kinds of incense, scents, shivas, buddhas, elephants, spiritual stuff all over the world.

In no time you'll be as big as amazon.

silver1 12 hours ago 1 reply      
I'm interested in training as well ... what's your area of expertise and how do you make your passive income?
mozilla 10 hours ago 1 reply      
Step 1: make 500/mo after 2mo of work. Step 2: repeat every 2mo... Be a multimillionaire.

Bonus: Believe in fairy tales. Get a free, pink flying pony.

Also - all developers are now multimillionaires with the passive income magic formula.

I'm astonished this keeps popping up.

heart-os 12 hours ago 0 replies      
No, no, it is not a unicorn: Open Source, Open Company, and helping others make money will do it, by growing a very large network of people. This way you too can use the software to make a few dollars. An open company is many minds in, and very few well-made products out: https://github.com/regenerate/snippets

Everybody takes the product, and as long as it was aimed at money making, they make money. You need a lot of products of this kind, similar to how a supermarket has many products for sale: https://github.com/revenue/awesome-revenue

Go ahead and ask about specifics; I think a lot about this. I dedicated this whole decade to helping people with passive income, without asking for anything in return. My focus is non-programmers and young families. The big idea is: a global, paying Mechanical Turk similar to this: https://www.mturk.com/mturk/welcome
Google's end-to-end key distribution proposal
260 points by abdullahkhalids  3 days ago   87 comments top 15
Semaphor 2 days ago 2 replies      
The first comment by Mailpile seems to highlight the biggest problem to me:

>Hello! Bjarni from the Mailpile team here.

>This is an interesting proposal and sounds like a significant improvement over the current centralized key-server model.

>The main quibble I have with it, is it seems there's no concern given to the privacy of user's communications - the proposed gossip mechanism seems designed to indiscriminately tell everyone who I am communicating with. That's a pretty severe privacy breach if you ask me, worse even then PGP's web-of-trust because it's real-time, current metadata about who is interacting with whom.

>Am I misunderstanding anything here?

>- Bjarni

jfindley 2 days ago 0 replies      
I was amused to see that on neither of the links in the post does SSL work correctly. convergence.io identifies as whispersystems and www.certificate-transparency.org produces an ssl error on every version of openssl and browser I have to hand (I didn't try anything with non-standard ECC curves).

That said, however, the entire end-to-end project is for me one of the most interesting and exciting practical innovations in security in years.

IgorPartola 2 days ago 2 replies      
Interesting. It has bugged me for some time that if the Web of Trust was bigger, it could grow exponentially and become universal: once someone you personally know has entered you into the WoT, you can be trusted and can trust others based on a number of public signatures on their public key. However, currently the WoT is so sparse that you cannot do this.

My idea was to use the existing web TLS platform to bootstrap the WoT to a sufficiently large level. I run my email on my own domain. Why can't I tell the WoT (and have it trust that it is true) that my public key is XYZ by putting it at https://igorpartola.com/pub.pem? GMail could do something similar and at least to start, we could get enough emails validated to start having the WoT spread on its own. Then we could modify the infrastructure to remove the public CA's and central authority entirely, by using the WoT itself. Google's HTTPS cert would then be based on its PGP key and be verified by humans inside the WoT.

I also think that the important part of the WoT is verifying emails/digital identity, not government docs. I don't care if I am talking to "Bob Bobber", I care that I am talking to bob@bobber.com. I may never have met bob@bobber.com, but I see his/her public git repos, blog, etc. and I want to connect to them securely.

bostik 2 days ago 1 reply      
Reading through the spec, there is something eerily familiar with the key directory implementation. Quoting:

Alice will obtain a proof from the Key Directory that demonstrates that the data is in the permanent append-only log, and then just encrypt to it.

Within the message to send to Bob, Alice includes a super-compressed version of the Key Directories that fits in a 140 characters (called STHs which stands for Signed Tree Heads). This super-compressed version can be used later on by anyone to confirm that the version of the Key Directory that Alice saw is the same as they do

Append-only log. Global. Hash of the log's tip state at the time of use...

Smells like a mixed blockchain/git type approach - which is a good thing. The "super-compressed" version of the log tip sounds like git revision hash. The append-only, globally distributed log is pretty much like a blockchain.

And it attempts to solve a really hard global problem. I like it.

michaelt 2 days ago 1 reply      

  The special thing about this Key Directory, is that   whatever is written in the directory can never be   modified, that is, it's impossible to modify anything from   there without having to bring the service down, or telling   to everyone that's looking about what is being modified. 
The MIT PGP key server has eleven different keys in my name which I created in 1997, when I was about 12 years old. Of course, I have long since lost the private keys and e-mail addresses.

I guess with this proposal, the fact you used to go by benstillerfaggot69@verizon.net will be part of your permanent record.

jonttaylor 2 days ago 1 reply      
I have been through this thought process before. The conclusion I came to was that the implementations should be transparent, but that the user information should not.

Basically, I was not going to put up a list of everyone's email addresses and keys anywhere, and certainly not who they connect with.

The more I looked into the problem, the more I realised that the vast majority of users would rather sacrifice security for usability. Even in my implementation people would rather not see the "Please verify this key with the recipient" page. They just want to get something done. I think this proposal from google would work well so long as their base implementation involves no additional steps beyond that of a normal email client.

My implementation uses a central key authority; however, the application is pure JavaScript, and the entire JavaScript is downloaded to the browser before the user enters their email address and password. After that no more code gets sent to the client. You can verify it won't steal your data.

I have the same problem of initial key exchange that everyone else does, but I give the user options to verify the keys themselves. Once they have, they encrypt their own contact list (along with keys) and re-upload it, thereby limiting the attack vector to the initial key exchange.

If anyone wants to have a look check out http://senditonthenet.com/

rbobby 2 days ago 1 reply      
I hope this works out. If enough of the big email vendors (gmail, outlook, yahoo, etc.) get on board the network effect could be enough to push adoption to a very high percentage.

Once that happens my "unencrypted" email folder would be viewed about as often as my junk mail folder.

And then... maybe just maybe... spammers will be faced with a serious challenge.

kennu 2 days ago 1 reply      
How will the Key Directories and third party Monitors verify that I'm the real owner of my self-hosted email address user@myowndomain.com, uploading my real public key to the directory?
SudoNick 1 day ago 0 replies      
Wouldn't proper compartmentalization dictate that email providers be explicitly eliminated from the end-to-end encryption process?

Apart from being able to inspect email and recognize that it contains content that looks like an encrypted form of something, I think we wouldn't want them to be explicitly informed that encryption was used, or know anything about the encryption algorithm, or know anything about how keys were distributed, or see any keys even public ones.

I think this would apply to the general case where someone uses an email service run by another party. In the common cases of major email providers with business models that conflict with privacy and security in various ways, the risks would be higher, even before factoring in their being high-priority targets for hacking, government surveillance of questionable legality, etc.

blueskin_ 2 days ago 2 replies      
So it seems they invented PGP keyservers with a monitoring protocol as a bag on the side?

I've had keys on keyservers for years. The monitoring side is interesting though.

It's also unclear how the whole directory will be compressed to 140 bytes - iirc, the best compression algorithms reduce text by ~80-90%, so it might work for a week or so, I guess.

higherpurpose 2 days ago 0 replies      
My suggestion is for them to work at least another 12 months on this, before they even begin to publicly test it on Gmail accounts. We need more privacy quickly, but let's not rush into a protocol that could last for another 20 years, if all the major e-mail providers adopt it. We need to get this right.

I wish the DIME (former Dark Mail) protocol was out already, too, so we can compare. There's also Adam Langley's Pond [1], but sounds like it's too complex, and it only works over Tor. And TextSecure/Axolotl could probably be used as a secure e-mail protocol, too, if you add a proper e-mail like interface. I hope the team behind End-to-End is looking at all of them.

[1] - https://pond.imperialviolet.org/

bjornsing 2 days ago 1 reply      
Slightly worrying to see the word "checksum" used to refer to a one-way cryptographic hash function...
contingencies 2 days ago 0 replies      
TLDR is: Distributed (as in DNS) directory with append-only entries keyed by hash-of-email and third party replication/validation/logging.

Discussion of spam implications at https://moderncrypto.org/mail-archive/messaging/2014/000727....

Widespread encryption could be a recipe for Google (as the largest of the proposed directories) sliding in an identity reputation scheme (eg. based on location / communications history) and brokering it as a service.

peterwwillis 2 days ago 0 replies      
"The model of a key server with a transparency backend is based on the premise that a user is willing to trust the security of a centralized service, as long as it is subject to public scrutiny, and that can be easily discovered if it's compromised (so it is still possible to compromise the user's account, but the user will be able to know that as soon as possible)."

So it's a central service where we can - eventually - find out if a key changes or is invalid, though not necessarily if anyone just breaks into the service. An attacker can still break in and monitor authentications to gather intel on users. Or they can automate an attack such that the keys are compromised and the attacker gets the access they want while you're asleep at 3am.

"a Monitor could notify the user whenever a key is rotated, and the user should be given a chance to revoke the key, and rotate it once again."

Now the user needs to see this e-mail about their key getting rotated, realize it's a compromise, issue a new rotation, and hope nothing bad happened in the meantime.

"making it so hard to hide a compromised key directory, it would almost require shutting down all internet access to the targeted victim."

This is completely within the scope of many different types of attacks used today, though often you only have to limit it to specific servers.

"The model envisioned in this document still relies on users being able to keep their account secure (against phishing for example) and their devices secure (against malware for example), and simply provides an easy-to-use key discovery mechanism on top of the user's existing account. For users with special security needs, we simply recommend they verify fingerprints manually, and we might make that easier in the future (with video or chat for example)."

So it really is just a PGP keyserver. For users who care about security, do something else to make yourselves more secure.

blueking 2 days ago 1 reply      
The last people in the world you want managing your keys. Why don't you just send them directly to the NSA?
Inside Google's Drone-Delivery Program
235 points by jackgavigan  2 days ago   109 comments top 22
ryankshaw 1 day ago 7 replies      
Both when I read this and when I heard amazon's announcement, I, like most people I see here, thought about the "get whatever I order off the internet to my doorstep, asap" use case--and I think that's still a ways off (because of the 0.01% edge cases, the regulation, etc).

But the use case that this article touched on, which to me was an "ah ha! we could do that today" moment, is the case where the sending and receiving parties are more fixed (not just any random person's door), where they could actually build a drone-pad (like a heli-pad, but much more basic), where you could program in all of the specific obstacles on the route (exactly where the electrical lines are, if any, and what flight pattern to take), where you could train both the sender and receiver, and where you could cordon off the landing area (so you'd never run into the problem of a random guy or dog trying to grab the drone).


* delivery of medical supplies/medicine to a facility in a very remote area where it would not make sense to stock the items there permanently.

* very expensive restaurant at a ski resort in the middle of Colorado that gets a drone delivery of fresh seafood from a CA pier every day (an existing business transaction they do anyway that could be done faster, more efficiently than a human operated aircraft)

* the delivery of anything else to a ski resort (you have very wealthy people wanting things in a very remote location)

* something on top of a cruise ship, for medical items or whatever other thing a passenger really needed while at sea.

* something at the Ebola treatment areas right now in Africa.

jpatokal 2 days ago 3 replies      
The key differentiator to Amazon (and most any other delivery drone system I've seen) is that this is a tailsitter: it takes off vertically, and can hover in place for deliveries, but it flies horizontally.


This means it's much faster and more efficient in flight than your standard quadcopter design.

Also, Google's full official promotional video; the BBC clip seems to be taken from it.


jonjenk 2 days ago 4 replies      
Former Amazonian here. If anyone wants to understand the key difference between Amazon and Google culture there's a great quote in the article.

"In all the testing, Roy had never seen one of his drones deliver a package. He was always at the takeoff point, watching debugging information scroll up the screen, and anxiously waiting to see what would happen. Sergey [Brin] has been bugging me, asking, What is it like? Is it actually a nice experience to get this? and Im like, Dude, I dont know. Im looking at the screen, Roy told me."

Google and Amazon are both great companies. But at Amazon the drone program would start with a description of what the customer experience is when they receive a drone delivery and you'd work backwards to the technology solution. At Google the technology precedes the customer experience.

waterlesscloud 2 days ago 0 replies      
Previous discussion of Amazon's drone plans.

Initial announcement: https://news.ycombinator.com/item?id=6830547

Don't believe the hype: https://news.ycombinator.com/item?id=6833223

Delivery drones are nonsense: https://news.ycombinator.com/item?id=7005702

Is Amazon drone delivery real? https://news.ycombinator.com/item?id=6834561

femto 2 days ago 1 reply      
I'm not sure that Australia has "progressive" rules about the use of drones by design. More likely it's just that the government hasn't yet got around to putting rules in place. Mind you, the more high-level drone research we have going on here, the more examples there will be to counter attempts to impose crazy rules.

Also worth noting is that Australia has a "UAV Outback Challenge" [1], where UAVs compete to carry out a simulated rescue of a tourist lost in the outback. This September the challenge is in Kingaroy, Queensland. (Maybe that's why Google is testing up that way??) Of further interest, there's at least one open source team competing in the UAV challenge [2].

[1] http://www.uavoutbackchallenge.com.au/

[2] http://www.canberrauav.com/


Edit: A bit more detail in the local news: http://www.smh.com.au/digital-life/digital-life-news/google-...

ChuckMcM 1 day ago 3 replies      
While I find this technologically interesting, I wonder if a drone bucket brigade might not be a better use for a technology that can pick up 10 lbs at point A and drop it over point B. Five thousand such drones, even at $10,000 each ($50M total), could keep 3,000 gallons of water[1] per (speed/distance) seconds dropping continuously on a wildfire.

[1] That being somewhat more than a http://en.wikipedia.org/wiki/Sikorsky_S-64_Skycrane skycrane can deliver, and with better failover characteristics.

rdlecler1 1 day ago 1 reply      
Agriculture is probably a better petri dish for all this technology. Farms are already dangerous, you need to cover large swaths of land, and it's unlikely that you're going to hit anything, or anyone if it falls from the sky.

That said, I could imagine a world where a FedEx drone is permitted to provide last-meter service from the roof of a truck, taking the package from the street to the doorstep. Additionally, widespread use of drones may reshape where core resources are stored and allocated. In Manhattan, for instance, maybe lightweight warehouses are placed on rooftops for quick delivery of common products within a few minutes of ordering.

suprgeek 2 days ago 1 reply      
The logical next step would be to predictively send these drones up with the most likely packages that the customers could possibly order.

The mobile app would then tell you: here are 50 drones within a 10-mile radius, and here is the list of items that are on them. So forget same-day delivery; how about same-hour delivery?

selectout 2 days ago 4 replies      
Great to see efforts like this not primarily focused on just delivering books or groceries but being put to use for aid and resources that otherwise wouldn't be able to get there in a practical manner.

I'm curious what kind of running time these have though and distance they can travel. Would be great if they could just have 2-3 go for a dozen roundtrips each in circumstances like Katrina.

Wingman4l7 1 day ago 0 replies      
What struck me about this design is that the drone is a lot less susceptible to end-user interference / damage / theft -- the drone hovers way out of reach and drops the package from a quickly-retracted line.
drcode 2 days ago 1 reply      
Interesting that they use a hoverable airplane design instead of a quadrocopter... I wonder if this is in part because a failed engine en route is less likely to cause a human injury, since an airplane is able to glide to the ground.
abad79 1 day ago 0 replies      
I think Google's intentions for the drone program are much bigger: they will try to create a great value proposition for buyers to use their eCommerce and payment solutions. This way you will be more eager to choose Google and not Amazon.
ngoldbaum 2 days ago 0 replies      
This reminds me, even more than Amazon's program did, of Vernor Vinge's prediction of rock deliveries from FedEx in his short story "Fast Times at Fairmont High." There's something about a package being lowered onto my doorstep by a VTOL drone...
foobarqux 2 days ago 1 reply      
Anyone have a bio of Dave Voss, the new project leader? I couldn't find one.
damian2000 1 day ago 1 reply      
Could something like this eventually replace the postman? I'm thinking of something on top of your roof that could be an aerial target for the device to drop stuff onto.
maw 1 day ago 0 replies      
"A zipping comes across the sky."

Good allusion, and I didn't even like that damn book!

orasis 1 day ago 0 replies      
One question: How loud is it? The quad rotors that I have been around are disturbingly loud.
trhway 1 day ago 0 replies      
31reasons 1 day ago 0 replies      
Robots , Drones and AI. Sounds familiar!
dang 2 days ago 0 replies      
We changed the url from http://www.bbc.com/news/technology-28964260 because the Atlantic article seems to be the most substantive of the ones submitted on this topic.
crucifiction 2 days ago 1 reply      
When is Google going to start innovating again instead of just copying other big tech companies? They are the new Microsoft it seems like, cash cow being spent on also-rans.
spiritplumber 2 days ago 5 replies      
Good for them? I did this in 2010 using a G1 phone as a controller.


Delivery was of a salami sandwich (we later cleared it out with the police).

I even gave Ryan Hickman at Google the software and hardware specs, when he was trying to fix up the Cellbots project in 2011. We were at maker faire 2011, which was funny because their android 'bot didn't work and ours did... Never heard much since, though.

Sorry for bragging, but it's a pet peeve I got. Anyway, the source and schematics are at http://obex.parallax.com/object/116 the Android side software can be had if you email me at mkb@robots-everywhere.com

We still use this to do autonomous deliveries! https://www.youtube.com/watch?v=urs68vf7ZFY

Facebook's std::vector optimization
215 points by iamsalman  19 hours ago   75 comments top 21
userbinator 13 hours ago 3 replies      
When the request for growth comes about, the vector (assuming no in-place resizing, see the appropriate section in this document) will allocate a chunk next to its current chunk

This is assuming a "next-fit" allocator, which is not always the case. I think this is why the expansion factor of 2 was chosen - because it's an integer, and doesn't assume any behaviour of the underlying allocator.

I'm mostly a C/Asm programmer, and dynamic allocation is one of the things that I very much avoid if I don't have to - I prefer constant-space algorithms. If it means a scan of the data first to find out the right size before allocating, then I'll do that - modern CPUs are very fast "going in a straight line", and realloc costs add up quickly.

Another thing that I've done, which I'm not entirely sure would be possible in "pure C++", is to adjust the pointers pointing to the object if reallocation moves it (basically, add the difference between the old and new pointers to each reference to the object); in theory I believe this involves UB - so it might not be "100% standard C" either, but in practice, this works quite well.

jlebar 2 hours ago 1 reply      
If you're interested in these sorts of micro-optimizations, you may find Mozilla's nsTArray (essentially std::vector) interesting.

One of its unusual design decisions is that the array's length and capacity is stored next to the array elements themselves. This means that nsTArray stores just one pointer, which makes for more compact DOM objects and so on.

To make this work requires some cooperation with Firefox's allocator (jemalloc, the same one that FB uses, although afaik FB uses a newer version). In particular, it would be a bummer if nsTArray decided to allocate space for e.g. 4kb worth of elements and then tacked on a header of size 8 bytes, because then we'd end up allocating 8kb from the OS (two pages) and wasting most of that second page. So nsTArray works with the allocator to figure out the right number of elements to allocate without wasting too much space.

We don't want to allocate a new header for zero-length arrays. The natural thing to do would be to set nsTArray's pointer to NULL when it's empty, but then you'd have to incur a branch on every access to the array's size/capacity.

So instead, empty nsTArrays are pointers to a globally-shared "empty header" that describes an array with capacity and length 0.

Mozilla also has a class with some inline storage, like folly's fixed_array. What's interesting about Mozilla's version, called nsAutoTArray, is that it shares a structure with nsTArray, so you can cast it to a const nsTArray*. This lets you write a function which will take an const nsTArray& or const nsAutoTArray& without templates.

Anyway, I won't pretend that the code is pretty, but there's a bunch of good stuff in there if you're willing to dig.


CJefferson 12 hours ago 0 replies      
I'm glad to see this catch on and the C level primitives get greater use.

This has been a well-known problem in the C++ community for years; in particular, Howard Hinnant put a lot of work into it. I believe the fundamental problem has always been that C++ implementations always use the underlying C implementations of malloc and friends, and the C standards committee could not be persuaded to add the necessary primitives.

A few years ago I tried to get a realloc which did not move (and instead returned failure) into glibc and jemalloc, and failed. Glad to see someone else has succeeded.

shin_lao 8 hours ago 0 replies      
I think the Folly small vector library is much more interesting and can yield better performance (if you hit the sweet spot).


From what I understand, using a "rvalue-reference ready" vector implementation with a good memory allocator must work at least as good as FBVector.

darkpore 14 hours ago 1 reply      
You can get around a lot of these issues by reserving the size needed up front, or using a custom allocator with std::vector. Not as easy, but still doable.

The reallocation issue isn't fixable this way however...

pbw 9 hours ago 1 reply      
Are there benchmarks or speedup numbers? It seems strange to leave out that information. Or did I just miss it?
cliff_r 12 hours ago 1 reply      
The bit about special 'fast' handling of relocatable types should be obviated by r-value references and move constructors in C++11/14, right?

I.e. if we want fast push_back() behavior, we can use a compiler that knows to construct the element directly inside the vector's backing store rather that creating a temporary object and copying it into the vector.

malkia 2 hours ago 0 replies      
For big vectors, if there is obvious way, I always hint vector with reserve() - for example knowing in advance how much would be copied, even if a bit less gets copied (or even if a bit more, at the cost of reallocation :().
ajasmin 14 hours ago 0 replies      
TL;DR: The author of malloc and the author of std::vector never talked to each other. We fixed that!

... also, most types are memmovable

14113 10 hours ago 1 reply      
Is it normal to notate powers using double caret notation? (i.e. ^^) I've only ever seen it using a single caret (^), in what presumably is meant to correspond to an ascii version of Knuth up arrow notation (https://en.wikipedia.org/wiki/Knuth's_up-arrow_notation). I found it a bit strange, and confusing in the article having powers denoted using ^^, and had to go back to make sure I wasn't missing anything.
xroche 14 hours ago 2 replies      
Yep, this is my biggest issue with C++: you now have lambda functions and an insane template spec, but you just cannot "realloc" a new[] array. Guys, seriously?
thomasahle 5 hours ago 1 reply      
The factor-2 discussion is quite interesting. What if we could make the next allocated element always fit exactly in the space left over by the old elements?

Solving the equations suggests a Fibonacci-like sequence, seeded by something like "2, 3, 4, 5" and continuing 9, 14, 23, etc.

jeorgun 5 hours ago 0 replies      
Apparently the libstdc++ people aren't entirely convinced by the growth factor claims:


general_failure 11 hours ago 0 replies      
In Qt, you can mark types as Q_MOVABLE_TYPE and get optimizations from a lot of containers
johnwbyrd 7 hours ago 0 replies      
If you're spending a lot of time changing the size of a std::vector array, then maybe std::vector isn't the right type of structure to begin with...
judk 5 hours ago 0 replies      
How is it reasonable to expect that previously freed memory would be available later for the vector to move to?
ck2 14 hours ago 1 reply      
Then the teleporting chief would have to shoot the original

As an aside, there was a great Star Trek novel where there was a long range transporter invented that accidentally cloned people.

(I think it was "Spock Must Die")

chickenandrice 10 hours ago 5 replies      
Greetings Facebook, several decades ago welcomes you. Game programmers figured out the same, and arguably better, ways of doing this long before each version of std::vector was released. This is but a small reason most of us have had in-house STL libraries for decades now.

Most of the time if performance and allocation is so critical, you're better off not using a vector anyway. A fixed sized array is much more cache friendly, makes pooling quite easy, and eliminates other performance costs that suffer from std::vector's implementation.

More to the point, who would use a c++ library from Facebook? Hopefully don't need to explain the reasons here.

boomshoobop 11 hours ago 1 reply      
Isn't Facebook itself an STD vector?
johnwbyrd 7 hours ago 2 replies      
Show me a programmer who is trying to reoptimize the STL, and I'll show you a programmer who is about to be laid off.

The guy who tried this at EA didn't last long there.

kenperkins 11 hours ago 1 reply      
> ... Rocket surgeon

That's a new one. Usually it's rocket scientist or brain surgeon. What exactly does a rocket surgeon do? :)

The executive order that led to mass spying, as told by NSA alumni
217 points by ra  3 days ago   87 comments top 11
toyg 2 days ago 4 replies      
You can't make this stuff up. According to TFA:

1. CIA created black propaganda in USSR linking Soviets and international terrorists to foment dissent in Russia.

2. Somebody picked it up from outside and wrote a book about Soviet-terrorism links.

3. CIA director read the book, took it seriously, freaked out, lobbied for more powers.

4. CIA and NSA got more powers.

It's clearly not the first time in history that an intelligence organisation engineered a privilege escalation from fraudulent circumstances, but doing it by accident seems almost funny.

bakhy 2 days ago 6 replies      
The same offensive narrative repeats itself.

- The government is bulk collecting data on foreigners.

- Meh...

- They're also incidentally spying on US citizens in the process.


If the USA could try to acknowledge the human rights of non-US citizens, that would be really nice.

daigoba66 2 days ago 2 replies      
I recommend that anyone interested in the topic watch this film: http://www.pbs.org/wgbh/pages/frontline/united-states-of-sec...

It's incredibly detailed, though focusing primarily on executive action post-2001.

Padding 2 days ago 0 replies      
The real issue here is that the public has no control over the spying. Imho, many would probably be fine with other people listening in on their phone calls, or going over their browsing history, if the only effect of that would be terrorists getting caught.

The problem is that currently there is too much potential for the surveillance data to be commercialized or abused for trivial legal matters.

I think much of the debate could be settled by allowing the public to scrutinize the surveillance process, and putting any abuse of the data (i.e. its usage in non-terror-related situations) under extraordinarily humiliating punishments (like denationalization).

mikeash 2 days ago 3 replies      
This quote struck me:

"After the United States faced another existential threat in the immediate aftermath of the September 11 attacks...."

I'm amazed at how accepting the media is of the threat of terrorism. That statement implicitly compares al Qaeda to the Soviets. The Soviets who, if they had had a bad day and decided to say "screw it" and wipe out capitalism for good, could have done so within an hour. A country which, at any moment, could have pressed a button and killed a billion people. (And to be fair, the US of course had/has the same ability.)

This is being compared to a loose organization of fanatics where the worst-case scenario was pretty much, "What if they crashed the fourth plane into the White House and killed the President!"

I think we need an enemy. The country is built on it. After four decades of the Cold War, we needed something to put in the "USSR" slot. Terrorists are a terrible fit, but it's the best we could find.

lasermike026 2 days ago 0 replies      
End to end encryption for everything. Anonymizing technology especially when you need it. Governments always tap everything.
andrewljohnson 2 days ago 0 replies      
Reminds me of the spy novels that led the British to create MI5.


coldcode 2 days ago 1 reply      
There is no room in a supposed democracy (republic) for law by fiat. Either the people are ultimately in charge or we live in a dictatorship. The Constitution is not optional.
higherpurpose 2 days ago 1 reply      
Figures that the cause for one of the biggest government abuses in history is the extension of the president's power. It's like humans never learn. Never give a single man too much power.

The AUMF is just as bad. Did you know the US is effectively still in a "state of emergency" today? And it will probably continue to be for a long, long, LONG time if nobody does anything about it.


duncan_bayne 3 days ago 1 reply      
Interesting article, but it's a pity they chose to lead with weasel words.

"One thing sits at the heart of what many consider a surveillance state within the US today."

Dolimiter 2 days ago 3 replies      
Another top-voted article about the NSA on Hacker News that is thin on evidence to say the least. It appears to be hearsay from a disgruntled ex-employee.

However I understand that the story fits the political narrative of the forums here, and will therefore be upvoted despite any lack of sense or evidence.

People suck at technical interviews
209 points by themonk  3 days ago   171 comments top 35
tokenadult 3 days ago 1 reply      
I read the comments already posted in this thread before I read the fine blog post. Almost everybody sucks at interviewing. Research shows that even though job applicants think that an interview is one of the more fair procedures for hiring a new worker for almost any kind of a job, it is one of the least effective.

There are many discussions here on HN about company hiring procedures. Company hiring procedures and their effectiveness is a heavily researched topic in industrial and organizational psychology, but most hiring managers and most job applicants haven't looked up much of the research. After reading the blog post kindly submitted here, I'll make some comments on its tl;dr summary at the end of the post.

1. many interview techniques test skills that are at best irrelevant to real working life

Yep, that's why you want your company hiring procedures to be based on research and what really matters for finding good workers.

2. you want somebody who knows enough to do the job right now

That's the ideal. That's why work-sample tests are, by replicated research, a very good hiring procedure, one of the best possible hiring procedures.

3. or somebody smart and motivated enough that they can learn the job quickly

Yep, and that's why tests of "general mental ability" are also a very effective hiring procedure, although there are some legal requirements surrounding use of those that you have to be careful about in the United States.

4. you want somebody who keeps getting better at what they do

For sure, as that is the only way your company can meet new challenges as they come up in the company's business.

5. your interview should be a collaborative conversation, not a combative interrogation

I'm not sure that the author here has provided evidence for the "should" statement in this summary, although I actually don't disagree as a matter of how I do job interviews.

6. you also want somebody who you will enjoy working with

Basically, almost all hiring managers fall victim to overemphasizing likability and underemphasizing ability to get the job done, but, yeah, you don't want to hire someone who makes the workplace miserable--that might cost you other good workers.

7. it's important to separate "enjoy working with" from "enjoy hanging out with"

Absolutely. The best worker in your company may not be the same person you go out with socially after work.

8. don't hire assholes, no matter how good they are

The trick here is to figure out how much annoying behavior qualifies a person as an "asshole" in a particular context, and that is not easy.

9. if your team isn't diverse, your team is worse than it needed to be

There is an increasing body of research to back up this idea.

10. accept that hiring takes a really long time and is really, really hard

Hiring is hard. It may or may not be time-consuming, depending on how efficiently you do it.

The review article by Frank L. Schmidt and John E. Hunter, "The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings,"[1] Psychological Bulletin, Vol. 124, No. 2, 262-274 sums up, current to 1998, a meta-analysis of much of the huge peer-reviewed professional literature in industrial and organizational psychology devoted to business hiring procedures. There are many kinds of hiring criteria, such as in-person interviews, telephone interviews, resume reviews for job experience, checks for academic credentials, personality tests, and so on. There is much published research on how job applicants perform after they are hired in a wide variety of occupations.[2]

EXECUTIVE SUMMARY: If you are hiring for any kind of job in the United States, with its legal rules about hiring, prefer a work-sample test as your hiring procedure. If you are hiring in most other parts of the world, use a work-sample test in combination with a general mental ability test.

The overall summary of the industrial psychology research in reliable secondary sources is that two kinds of job screening procedures work reasonably well. One is a general mental ability (GMA) test (an IQ-like test, such as the Wonderlic personnel screening test). Another is a work-sample test, where the applicant does an actual task or group of tasks like what the applicant will do on the job if hired. (But the calculated validity of each of the two best kinds of procedures, standing alone, is only 0.54 for work sample tests and 0.51 for general mental ability tests.) Each of these kinds of tests has about the same validity in screening applicants for jobs, with the general mental ability test better predicting success for applicants who will be trained into a new job. Neither is perfect (both miss some good performers on the job, and select some bad performers on the job), but both are better than any other single-factor hiring procedure that has been tested in rigorous research, across a wide variety of occupations. So if you are hiring for your company, it's a good idea to think about how to build a work-sample test into all of your hiring processes.

Because of a Supreme Court decision in the United States (the decision does not apply in other countries, which have different statutes about employment), it is legally risky to give job applicants general mental ability tests such as a straight-up IQ test (as was commonplace in my parents' generation) as a routine part of hiring procedures. The Griggs v. Duke Power, 401 U.S. 424 (1971) case[3] interpreted a federal statute about employment discrimination and held that a general intelligence test used in hiring that could have a "disparate impact" on applicants of some protected classes must "bear a demonstrable relationship to successful performance of the jobs for which it was used." In other words, a company that wants to use a test like the Wonderlic, or like the SAT, or like the current WAIS or Stanford-Binet IQ tests, in a hiring procedure had best conduct a specific validation study of the test related to performance on the job in question. Some companies do the validation study, and use IQ-like tests in hiring. Other companies use IQ-like tests in hiring and hope that no one sues (which is not what I would advise any company). Note that a brain-teaser-type test used in a hiring procedure could be challenged as illegal if it can be shown to have disparate impact on some job applicants. A company defending a brain-teaser test for hiring would have to defend it by showing it is supported by a validation study demonstrating that the test is related to successful performance on the job. Such validation studies can be quite expensive. (Companies outside the United States are regulated by different laws. One other big difference between the United States and other countries is the relative ease with which workers may be fired in the United States, allowing companies to correct hiring mistakes by terminating the employment of the workers they hired mistakenly. 
The more legal protections a worker has from being fired, the more reluctant companies will be about hiring in the first place.)

The social background to the legal environment in the United States is explained in various books about hiring procedures,[4] and some of the social background appears to be changing in the most recent few decades, with the prospect for further changes.[5]

Previous discussion on HN pointed out that the Schmidt & Hunter (1998) article showed that multi-factor procedures work better than single-factor procedures, a summary of that article echoed in the current professional literature, for example "Reasons for being selective when choosing personnel selection procedures"[6] (2010) by Cornelius J. König, Ute-Christine Klehe, Matthias Berchtold, and Martin Kleinmann:

"Choosing personnel selection procedures could be so simple: Grab your copy of Schmidt and Hunter (1998) and read their Table 1 (again). This should remind you to use a general mental ability (GMA) test in combination with an integrity test, a structured interview, a work sample test, and/or a conscientiousness measure."

But the 2010 article notes, looking at actual practice of companies around the world, "However, this idea does not seem to capture what is actually happening in organizations, as practitioners worldwide often use procedures with low predictive validity and regularly ignore procedures that are more valid (e.g., Di Milia, 2004; Lievens & De Paepe, 2004; Ryan, McFarland, Baron, & Page, 1999; Scholarios & Lockyer, 1999; Schuler, Hell, Trapmann, Schaar, & Boramir, 2007; Taylor, Keelty, & McDonnell, 2002). For example, the highly valid work sample tests are hardly used in the US, and the potentially rather useless procedure of graphology (Dean, 1992; Neter & Ben-Shakhar, 1989) is applied somewhere between occasionally and often in France (Ryan et al., 1999). In Germany, the use of GMA tests is reported to be low and to be decreasing (i.e., only 30% of the companies surveyed by Schuler et al., 2007, now use them)."

tptacek 3 days ago 4 replies      
This is a really excellent post. It talks about a bunch of things that are hobby-horses of mine (I help run recruiting for a large software security firm), and I find myself agreeing with more of it than I disagree with.

I would go a little further than Laurie does. I think several of the goals he sets up for his process are not in reality achievable in an interview process.

Starting axiom: job interviews are among the most hostile experiences professionals endure in our industry. I think back to my own interviews, and compare them to public speaking, professional disasters, death march projects with insane deadlines, intractable politics, and it's pretty much the interviews alone that increase my heart rate. For the past two years, I've tried to make an effort to peek in on the interviews we do here at Matasano, and what I've seen corroborates the belief. Several of our best hires were physically shaking during their interviews. And we try to be nice! In no other common situation does a tech worker find themselves interrogated by a panel of strangers whose implicit goal is to knock them out of contention.

Given that interviews are a psychologically challenging experience, and thinking about things like the concept of "depletion" of ego or willpower, it's straightforward to see some severe limitations to what can be accomplished in an interview setting. If you're spending lots of energy trying to keep from jumping out of your skin in an unpleasant situation, it's much harder to solve problems that themselves require lots of energy.

Past that, a hypothesis, which is unpleasant for some tech workers to hear (cold comfort: I'm 100% certain it applies to me as well). Software developers are not, as a cohort, particularly effective intuitive psychologists. Virtually none of us have any training in the subject. We tend sharply towards introversion. We train our brains day-in and day-out by repeating tasks that do nothing to develop our ability to read in-person social cues. For that matter, we tend as a group to eschew forms of communication in which tone of voice, body language, and emotional cues are even transmitted!

But several of the objectives Laurie sets out demand exactly that kind of analysis. "Can the candidate intelligently discuss technology?" Well, that's subjective, and worse, vague and abstract. Laurie tries to nail "intelligently" down, but I think we can all see that there are other ways in which someone can be "intelligent" about technology that evade those criteria. Since we all intuitively know that, we substitute our own cognitive biases for "intelligently". All of the sudden, we're gauging "confidence" and "comfort level"... we've decided to be psychologists instead of engineers.

So, two changes I would urgently suggest Laurie consider for his process:

* Audit the whole process for tasks that could generate false positives from a nervous candidate. You aren't interviewing people to determine how good they are at interviewing, because interviewing doesn't generate money for your company. Try to build a process that is immune to discomfort and lack of confidence. It can be done! Another thing that we've found very effective: "prime" candidates early and repeatedly with non-adversarial conversations that aim to disarm them. We start our whole (multi-week) process with an hour-long version of this. We also try to inoculate our process by communicating in as much excruciating detail as we can what it will entail.

* Eliminate all subjective questions and standardize what you're left with. Engineers, in my experience, fucking hate this. But it's the right thing to do: ask every candidate, as much as possible, the same set of questions. We have a question structure that minimizes open-ended questions but has some structured "exercise" questions that give the candidate some freedom to move around --- the results of those questions can be put on a scoresheet and, to some extent, compared apples-apples to other candidates.

Jemaclus 3 days ago 3 replies      
My primary criteria when interviewing junior candidates are:

1) Do you have basic problem solving skills?

2) Can you communicate clearly?

3) Do I want to sit next to you for the next 6 months or longer?

If you don't know Ruby, I can teach you. If you don't know Elasticsearch, I can teach you. What I can't and don't have time to teach you is how to solve a problem on your own without me holding your hand, and I especially don't have time to waste trying to communicate poorly with you.

And obviously, I want to work with someone who is pleasant and interesting. I don't wanna sit next to someone for 40 hours a week who stinks or is rude or can't carry a conversation.

If you meet those three criteria, you've beaten 90% of candidates that walk through the door. The last 10% is what gets you the job. (Obviously, if you DO know Ruby or Elasticsearch, that's a huge plus... but it's not one of the bare minimum requirements.)

benaston 3 days ago 1 reply      
Oh how I agree with this post. And I've been the asshole on the hiring end before - but I (hope) have learned my lesson.

I recently interviewed with a consultancy (role was UK-based) that is desperate (quite literally) to hire experienced developers in the technical area I have experience in. I came highly recommended by a recent senior hire of theirs (he had been my manager).

The technical filter question in the face-to-face interview was (paraphrased and simplified): "write an algorithm on the whiteboard to check for mismatched braces".

I described an approximate solution aloud, but did not complete the solution in the interview.

As it happens I solved it in my sleep the following night - but of course they will never, ever know that.

Now everyone will have their pet theories as to why this is a good or bad question, but these are frankly moot because I am confident the hiring developer had no idea what he was testing for with it. He may have a vague "I know the answer, so he should too" mindset, but not much more.

What does this question, presented verbally in an interview, really test? A recursive solution sprang to mind, but I can recall implementing recursive solutions in production code only a few tens of times over a ten-year career. The question does not test for experience with the technology they are hiring for. It certainly filters out developers who cannot come up with an algorithm while standing at a whiteboard in front of a hostile audience. I could go on, but the biases implicit in this question are as numerous as they are irrelevant.
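For reference, the conventional non-recursive answer to this kind of question is a stack; a minimal Python sketch (the function name and bracket set are my own choice, not the interviewer's):

```python
def braces_balanced(text):
    """Return True if (), [] and {} in `text` are properly matched and nested."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in text:
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            # A closer must match the most recently opened, still-unclosed opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers also mean a mismatch
```

Matching the most recent unclosed opener is exactly a stack's LIFO discipline, which is why the iterative version is usually simpler than the recursive one.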

After a few years in the industry you typically get to work directly with these so-called "top end" consultancies anyway, so you get to know what these kinds of questions don't filter for. I can say with confidence and from experience that they don't filter for the ability to create high quality software.

The thing is, this 'clever' 'does he know recursion?!' question is almost insultingly trivial when compared to the type of problems most development teams face.

Like the fact that although "Joe" has an IQ of 150, he speaks in sentences so impossibly convoluted he may as well be mute. Or "Alice" the sole domain expert, who begrudges her position in some way and will only communicate after you have negotiated the massive chip on her shoulder.

Hell, most organisations can't even provision accurate testing environments and force their developers to run underpowered Windows laptops.

Let's walk before we can run yeah?

cjslep 3 days ago 3 replies      
For my first job out of uni, I was asked to write pseudocode on a whiteboard to solve a simple scripting problem (call an executable repeatedly, changing the command line args). My background is nuclear engineering, and I was interviewing at a cloud/networking business, so I was already slightly outside my experience comfort zone[0]. I was explaining my thought process while writing on the whiteboard, trying my best to be transparent about how I was thinking, when the interviewer interrupted me.

"What is that?" he asked, pointing at the whiteboard.

"...pseudocode?" I replied, hesitantly, frantically looking for some mistake where he was pointing.

"That's not pseudocode..." he said as he started to berate me for not writing bash.

After that (I did not get an offer), every interview I went to when someone asked me to write pseudocode I'd always clarify "Is there any particular language you want me to use?" because I never want to relive that experience again.

[0] I had previous experience interning at Cisco.

bcantrill 3 days ago 1 reply      
I think a more apt title is "technical interviews suck", and speaking personally, I have entirely given up on them. (I would say that I gave up on them after two decades, but the truth is that I gave up on them a decade ago -- and I really tried hard in that first decade to develop the perfect technical interview.)

My belief has become that the only way to hire the traits that I'm looking for (high technical ability, sufficient education, predisposition to rigor, and -- most importantly -- indefatigable persistence) is by judging a candidate on their work, not their performance in an interview. (After all, software engineering isn't done via pop quiz -- and it's not even a timed test.)

The problem then becomes: how do you judge the works of someone you've never worked with before? Three ways that have worked for me:

(1) Rolodex. This is an easy way to hire reliably: someone on the team has worked with the person in the past, and vouches for them as one of the tribe. Assuming that you trust the person vouching for them, the interview consists of you selling them, not them selling you. This method has high fidelity (though is still fallible), but also suffers from obvious limitations in terms of scale and breadth.

(2) Known curricula. There are some schools where I know (or someone on the team knows) the computer science curriculum very well, and can assess a student simply by the courses that they have taken (or, better, TA'd), the labs they have worked in, or the professors that they have worked for. The fidelity of this method will depend on the school, how well one knows the curriculum, etc. -- and it has all of the strengths and weaknesses of hiring university grads. (It also suffers from serious scale problems and runs the risk of creating monoculture.)

(3) Open source. If you lead open source projects that attract community contributors, you may find it surprisingly easy to coax the best of those contributors into working for you full-time. While I know that this isn't a fit for every company, it has become my preferred way to hire: you are hiring someone who can obviously do the work (because, um, they're already doing it) and who has demonstrated interest in the problem space (especially if their contributions have come during their free time). Importantly, you are also not limiting yourself to a particular geographic area or school or company history or whatever; the people I have hired via open source are people I never would have found otherwise. And, it must be said, they have proven (without exception, actually) to be great hires. There are obvious problems with this as well in terms of scale and (especially) predictability, but I view open source as the farm system for software engineering: it's a way to find and develop talent at lowest opportunity cost.

Edit: I forgot a fourth method that has actually worked for me.

(4) Homework. When interviewing someone who I don't know and who is otherwise a stranger, I will ask them some exceedingly basic questions to assess technical viability, and then proceed to discuss the problems that we're solving and why I personally find them exciting. If they are sufficiently interested at the end of this conversation (which is really more me selling them than them selling me), I will assign homework -- which is generally to take some fixed amount of time (I have usually said no more than eight total hours) to build something fun in one of the technologies that we have built or otherwise use. (When node.js was new, this was to build something in node.js -- but more recently, it's been to build something fun with Manta.[1]) If a candidate comes back with completed homework, you each know something about the other: they were sufficiently interested in you to actually play around (and they like playing around to begin with -- which is essential for us) and you now have some works by which to judge them.

[1] http://www.joyent.com/blog/introducing-kartlytics-mario-kart...

dopamean 3 days ago 0 replies      
I am an instructor at a coding bootcamp and I conduct technical interviews of our applicants pretty regularly. I have recently been very concerned with tackling the idea the author first addresses: not hiring (or in my case admitting) for what people already know. Our program only has 12 weeks with the students and so what I am most concerned with is the applicant's ability to pick up new concepts quickly.

In the past we have tested students on the little bit of Ruby or Javascript they had studied to prepare for the interview. I am of the belief that that method has helped determine who knows a little bit of Ruby, but not who can ramp up on complicated topics quickly during the program. My attempts to address this have led me to give a 15-minute lesson on something totally new to the applicant and then have them answer questions based on that lesson. So far I've found it to be useful.

Technical interviews are hard. It's easy to suck at them.

nkcpiytdhauwuhc 3 days ago 2 replies      
Fizzbuzz isn't supposed to be a test of your knowledge of the modulo operator. It's supposed to be a very low bar that can filter out a large percentage of candidates who have no business being there.

If you spend 20 minutes on Fizzbuzz, you're doing it wrong. It should take 10 tops, and be limited by how quickly the candidate can write.
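For scale, the whole exercise is only a few lines; a typical Python version (returning a list rather than printing is my own choice):

```python
def fizzbuzz(n):
    """Classic screening exercise: for 1..n, multiples of 3 -> "Fizz",
    multiples of 5 -> "Buzz", multiples of both -> "FizzBuzz"."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```

The only "trick" is checking the combined case first; there is nothing here that should take a working programmer more than a few minutes.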

greyskull 3 days ago 2 replies      
I'm just about to finish a B.S. in CS, so I've recently been on the other side of the table. I like to think I'm "aware" enough to give good feedback about my experiences.

I've interviewed with two of the large "top" companies. They were two very different experiences.

One decided to have me do multiple interviews, with whiteboard coding. The first interviewer was my favorite, because we got to talk about the design of my internship project, how it could be extended, pitfalls to consider, etc. That actually let me stretch my legs a bit and show that I can make intelligent software decisions. The rest of the interviews were basically worthless; the classic small algorithm problem that I could easily figure out with a bit more time or by working with another more experienced engineer. The code I wrote on the board showed that I could write for loops and use a standard library; it was very difficult to modify or refactor if we saw an issue with what I wrote. Why not at least a laptop, basic text editor, and a projector?

The other company gave me a few hours to write up a solution, in an IDE, to a somewhat beefy problem. We had a couple of discussions about my approach, potential pitfalls, cases that I couldn't handle, optimality, etc. I liked this, since it was much closer to being representative of real software work; collaboration, discussion, and I also got to show how I would actually create a solution.

That's a stark contrast. I'm sure there are flaws in the latter approach, but it is if nothing else a much lesser evil that will likely bring in more valuable engineers.

jbogp 3 days ago 2 replies      
Having just had a 3h long technical interview for Google Deepmind, I cannot agree more with a lot of points raised in this post.

Deepmind being a machine learning/statistics/maths/computer science fuelled company, it made sense for the interview process to follow this simple organisation.

I was, however, very disappointed by the questions asked for each part. Not a single one of the ~100 questions asked during these 3h of my life demanded any "problem solving" skills, only encyclopaedic knowledge (describe this algorithm, what is a Jacobian matrix, define what an artificial neural network is, what is polymorphism, give examples of classifiers, what are the conditions to apply a t-test...)

So what if someone doesn't remember every definition of the stats/ML/CS/Maths respective bibles as long as they're clever enough to look it up and understand quickly what's needed?

I mean, I get it, these are very basic questions, but as a highly qualified interviewee who necessarily has other offers given this set of skills, this fastidious, back-to-school, time-wasting process does not reflect well on the company and makes me consider my other options even more seriously.

yardie 3 days ago 4 replies      
My first technical interview they asked me to write a constructor and I completely panicked. It still haunts me to this day that I think the guys in the room thought I was some sort of idiot.
bcoates 3 days ago 0 replies      
These essays make me think I don't understand why people are doing technical interviews. I give a (brief) technical question/coding question interview when I'm asked to, and I suspect it's the kind the author doesn't like. But all I'm looking for is:

If candidate lists lots of skills/experience, I do a self-assessment where I get them to claim that they're really good at something, then ask them to do something that requires bare-minimum skills in it. The idea is not that they need these skills to do the job, but that if you claim a CS degree, eight years work experience, and being a python expert, but you can't crack out a basic, non-trick whiteboard coding exercise blindfolded, something smells bad.

If a candidate doesn't have much experience, I'm trying to figure out just how green they are; low-experience candidates cover a huge range of skill all the way down to knowing literally nothing but a few buzzwords. If you don't know a language or a library or a technology I can teach you, but if you don't know anything it's going to be years before I can get anything shippable out of you.

I wish these weren't a major worry but the majority of people who come in the door (after a resume and phone screen!) don't pass.

What do those of you who are doing intensive whiteboard-coding interviews think you're getting out of it?

ionwake 3 days ago 1 reply      
This is the most god damn cathartic thread I have read on this site after taking 1.5 years out to work on my own stuff and having failed 4 interviews getting back into the workforce, with the most random god damn shit technical tests.
btipling 3 days ago 0 replies      
I've been interviewed by seldo before, when he was at awe.sm. It was a good technical interview. His questions were relevant and forced me to think. I like interviews in which I walk away learning something and I definitely did learn something in this case.

As for interviewing, I cringe when I think about the poor questions I have asked and the poor decisions I have made, and I try to learn from them. We ended up not hiring qualified people because of bad interviews, although that wasn't obvious to us at the time.

I was thinking for a future interview the candidate and I would try to learn some new technology and try to build something together during the interview or talk about it at least.

euphemize 3 days ago 0 replies      
I'm currently studying for multiple interviews at "established" tech companies, and I wonder where the interviewers like the author are. I've already finished a few, and between having to implement a linked list, analyze a matrix, describe hashmaps and regurgitate sorting algorithms, the questions have not had much to do with everyday coding.

I'd be really happy to discuss how to implement feature X in an app, or how to design the moving parts of Y for a particular infrastructure and talk about the tradeoffs each implies. But I guess I'll get back to implementing a queue using 2 stacks, because that's what technical interviews seem to be for.
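For anyone who hasn't seen that last exercise, a minimal Python sketch (class and method names are my own):

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks (plain Python lists)."""

    def __init__(self):
        self._in = []    # receives enqueues
        self._out = []   # serves dequeues, holding elements in reversed order

    def enqueue(self, x):
        self._in.append(x)

    def dequeue(self):
        if not self._out:
            # Flip the inbox: reversing a stack turns LIFO order into FIFO.
            while self._in:
                self._out.append(self._in.pop())
        if not self._out:
            raise IndexError("dequeue from empty queue")
        return self._out.pop()
```

Each element is moved at most once from the in-stack to the out-stack, so both operations are amortized O(1), which is the usual follow-up question.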

jakozaur 3 days ago 2 replies      
I disagree with the author about coding questions. In my experience, if somebody is not able to quickly code a simple task (like fizzbuzz, or reversing a string), it's a terrible sign. It's a simple, objective test that filters out a lot of candidates.

Yeah, they may be able to be productive in some specific environment (e.g. deep in some framework, writing templates), but in general they likely don't have a solid programming foundation and will produce lower-quality code.
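As a sketch of the kind of warm-up being described, reversing a string in place in C might look like this (an illustrative version, not the only reasonable one):

```c
#include <string.h>

/* In-place string reversal: two indices walk inward from the ends,
 * swapping characters as they go. Handles the empty string because
 * i < j is immediately false when n is 0 or 1. */
static void reverse_string(char *s) {
    size_t n = strlen(s);
    for (size_t i = 0, j = n ? n - 1 : 0; i < j; i++, j--) {
        char tmp = s[i];
        s[i] = s[j];
        s[j] = tmp;
    }
}
```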

TallGuyShort 3 days ago 1 reply      
>> Somebody who can intelligently discuss technology

>> Somebody who knows what they don't know

I believe in these principles especially. Most of my interview questions are vague (and I tell the candidate this up-front, and explain why). For instance, I'll ask them to explain how they would debug a very slow cluster, or to explain everything that happens between me typing 'google.com' and viewing the web page. This gives them a huge range of topics to cover in as much detail as they want. If they know a lot about Linux administration, or how hardware interfaces with the OS, or a lot about network protocols, they have a chance to show it off. I'll drill deeper into wherever they take the conversation, and a person scores major points with me if I drill down far enough that they get lost and say "I don't know, but my guess would be... and I would confirm that by...". I've found it to work exceptionally well.

Ryel 3 days ago 1 reply      
I've interviewed with dozens and dozens of startups in NYC. I'm an entry-level front-end dev with a passion for a lot of things I (in hindsight) probably should have gone to college for. For example, my current side project is a completely client-side image metadata reader/writer that has forced me to learn new things like binary parsing, endianness, metadata formats, and a lot about organizing JS, because I've never built this large a project completely by myself, even though it is very, very small. A weekend project for most of you.

Throughout all of my interviews I've noticed one common trait.

Either I do exceptionally well on the personal questions during the interview, or I do really well on the technical questions. There is no in-between.

Most of my interviews could be broken up into 2 sections. The first section is usually where they open up and try to make things more comfortable. Asking me where I'm from, how long I've been writing code, what kind of side-projects I'm working on, etc...

The second part of the interview is when they try to figure out how much I know. Usually we start off by just talking about technology, and this is where they try to figure out if I'm BS'ing my way through things. If you can maintain a fluent conversation about technology, you're good to go. After that they will generally try to slip in a few questions, usually about event delegation in Javascript.

The problem for me comes from when we switch from personal questions to technical ones, or vice-versa. I will pass either one with flying colors but I will rarely, if ever, pass both. If that particular day I can't sell my personal story very well, I do very well on the technical questions. If I do well on the personal questions, I do terrible on the technical questions. I believe there is a vast cognitive gap when you go from an emotional conversation talking about significant things in your past (like side-projects, old employers, hometown, etc..) and then jump to such rigid topics like code where there is no emotion, it's purely analytical thinking.

What I would prefer is for a company to pay me $100 to come into the office and work a half day. If I can keep up, I get the job.

aliston 3 days ago 3 replies      
I think there are some good points here, but I would argue that a lot of these are essentially points for hiring more junior engineers. There are cases where hiring for what someone already knows DOES matter.

I have come across too many codebases that were clearly written by folks who were in the "get stuff done" mentality. Frankly, when you're learning a new technology in addition to trying to do your job, your code/architecture just isn't going to be that good off the bat. If the company had instead hired someone who knew what they were doing from the get go, they could have started with a more solid foundation and potentially avoided pain down the road.

adricnet 3 days ago 0 replies      
One of the things done right at one place where I did a great many interviews for technical candidates was at a late phase to break out the technical interview and the management interview.

This allowed a group of potential peers to have a friendly discussion of varying depths about not only the recent work and advertised skills of the candidate but about current events and community engagement.

We became adept at evaluating candidates based on how they answered problems we presented, whether we believed they actually had the skills and experience on their CV, and how committed and engaged they were to the profession.

This is especially important for security related careers because community engagement is vital, most people can't talk in detail about previous work and may not have been able to publish public material, and work is highly specialized. It is tough to evaluate someone on skills and knowledge they have and you don't, but we got good at it.

And then the managers could talk to the candidate in a separate panel and ask ... well I have no idea to this day, actually :)

I say more about all of this in a presentation I did in February, Breaking Into Security: some InfoSec Career tips, presented at DC404; slides here: http://www.atlbbs.com/sharkin/breakin-dc404.pdf

ap22213 3 days ago 3 replies      
Instead of technical interviews, I would love to just pay a potentially great candidate to come in and work for the day. Not only would it give them an opportunity to demonstrate what they know and show off their other skills, it would give our team a chance to see if they're a good fit. And, there would be some real financial incentive for them to give us a try.

But, trying to get management and HR to change from the conventions is extremely difficult. I have not been successful.

pmiller2 3 days ago 0 replies      
Followup question: Where can I find a list of companies (other than npm and Matasano) that don't interview with the stupid "write code on the whiteboard" method?

I've got some (1.5 years) experience, and I can write code. But put a whiteboard marker in my hand and ask me questions you know the answers to, in a high-stakes, high-pressure environment, and I fall apart.

Case in point: in a recent technical interview, I did reasonably well until the last 1-hour session, which was the "code on the whiteboard" problem. I got code that functioned but wasn't fast enough. At home, after 1 Google search and a few minutes, I solved the problem.

I'm definitely in favor of the "do a small project" type of code interview. It has its own pitfalls, but it's way better, IMO.

SnacksOnAPlane 3 days ago 2 replies      
My go-to question for technical interviews is "how would you write Scrabble?". It's very open-ended. I expect them to ask me questions like "what do you mean, write Scrabble?", to draw out some class diagrams, and to implement some scoring. It shows how self-directed they are and how much they trust themselves to develop architecture.

I hate questions about bit-twiddling and sorting algorithms. I wouldn't remember that stuff because I never do it. I do expect candidates to be able to analyze an algorithm for time and space requirements, because that's important for pretty much any coding job.
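The "implement some scoring" step of the Scrabble question could be sketched in C along these lines. The table holds the standard English Scrabble tile values; the function names are illustrative, not part of any real API:

```c
#include <ctype.h>

/* Standard English Scrabble tile values, indexed A..Z. */
static const int TILE_VALUE[26] = {
    1, 3, 3, 2, 1, 4, 2, 4, 1, 8, 5, 1, 3,  /* A-M */
    1, 1, 3, 10, 1, 1, 1, 1, 4, 4, 8, 4, 10 /* N-Z */
};

/* Sum the tile values for a word; non-letters (e.g. blanks) score zero. */
static int word_score(const char *word) {
    int score = 0;
    for (; *word; word++)
        if (isalpha((unsigned char)*word))
            score += TILE_VALUE[toupper((unsigned char)*word) - 'A'];
    return score;
}
```

In an interview, the interesting follow-ups are the parts this sketch punts on: premium squares, blank tiles, and validating words against a dictionary.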

darkstar999 3 days ago 1 reply      
> I used to ask people to write code in interviews. This is terrible.

Bull! Last time I interviewed someone, they looked good on paper and decent on the phone. When it got down to solving a _simple_ problem on the whiteboard, he totally flopped. This is a totally realistic situation; we get together at least weekly and hammer out a solution on the whiteboard. Nerves could be an issue, but a good candidate should be able to solve an easy problem in a handful of minutes in an interview. (I'm talking _super_ easy, like joining one SQL table to another after saying that you are proficient)

lazyant 3 days ago 0 replies      
This is one of the best write-ups on tech hiring I've seen in a while. For me it's about "get the job done, don't be a dick"; the problem is how to test for that under time and other constraints.
blutoot 3 days ago 0 replies      
Not sure if this is off-topic but my concern is related to technical interviews.

How do software engineers/devs/etc "stay in shape" for technical interviews? Let's say I wanna switch jobs and the first hurdle to cross is inevitably algorithms and data structures (for dev-centric roles) or something like operating systems/networks/whatever. How does one stay sharp on those _fundamental_ skills on a regular basis while working on a job, to avoid cramming everything in one week before the interview?

darklajid 3 days ago 1 reply      
The other side to that story: I'm scared like shit whenever (and that's often enough) I consider applying for a new thing. And back out, don't even try.

Most of the reasons are listed in the article, but let's be honest: I'd face most of these things. So the pipe dream at the moment is to find some time for personal projects and FOSS contributions, and use that to get to more reasonable opportunities. Reaching ten years with this company, these interview stories might keep me here for another decade.

russelluresti 3 days ago 3 replies      
I'm not sure I agree with the "team fit" thing. I get his point - most people don't understand what team fit is or can't separate it from personal bias. But here's my counter-argument.

Every company and team has different core values. "Team fit" means matching company and team values. For example, I work on educational software for teachers and students. My definition of "team fit" (for this particular team/company) is that the person cares about improving the quality of education. If you're applying just because you need a job but don't care about what it is we're making, I don't care about any of the other criteria listed in this article - I'm not hiring you. You may be smart, able to learn, and a great communicator - but you don't care about what we're building, so you don't have the same core values as the rest of the company and team.

This is what "team fit" means - not what they look like, what they do in their off time, or anything else; "team fit" is "do they personally value the same things we value as a team"?

If you value transparency and collaboration and they prefer to work as a lone gunman, they're not a team fit. If you value rapid iteration and user feedback and they don't want to release anything until it's 100% perfect in their eyes, they're not a team fit. So on and so forth with the examples.

jiggy2011 3 days ago 0 replies      
> Variation across teams in big companies is enormous. Just because a company was successful doesn't mean your candidate had anything to do with that.

This might be an overgeneralisation. It's probably true enough at large consulting body shops, but I'd certainly punt on a lower-end ex-Google dev comparing favourably to the industry average.

ilyanep 3 days ago 0 replies      
All of these exact same points (including a very similar title) have been floating around my head as an idea for an article I wanted to write. I was so pleased to find that you had basically written my article for me. Very well said on all accounts :D
yeukhon 3 days ago 0 replies      
As someone who recently graduated from college, by far the best interview I ever had was one where I was asked questions all day: about projects I would want to work on, and to explain on the whiteboard how I would go about designing my project. No, I didn't get that job; I screwed up the last part (well, I did answer some questions incorrectly, but I think they didn't care much about that). For what it is worth, the position had to do with security research and engineering. The coding part I had to go through was fairly quick (but the interviewer, IMO, didn't do a good job at it and caused me a lot of trouble). Geesh, I wish I had been given an offer, but looking back I wasn't ready to deliver ANYTHING valuable as a research engineer yet.

When people just say a bunch of smart words like async and parallelism, ask them to define them with concrete examples. Ask them further to break down their "textbook knowledge." That's by far, IMHO, the best way to evaluate whether a junior prospective engineer knows more than average. You want a hard question? Just ask people to go as deep as they can in explaining what happens after a URL is entered into an HTTP client such as a browser or curl.

I am terrible at coding under pressure. Actually, I am terrible at coding. I constantly read books and GitHub projects, and I end up asking a lot of questions: is this a good practice? You see, I am all for style, and I want to get better at writing code close to what people prefer to read. I went to talks, learned tips on how to write better code, and I will immediately employ those new techniques in my new projects.

I will end up with questions, and I always wish someone could be around to help. I usually go to IRC or Stack Overflow for opinions, but I hope when I do get a job soon, I will be able to ask my senior colleagues and hope they can provide feedback.

I haven't done everything some superstars already have at my age. I suck and I am sorry, but I think I am one smart-ass person capable of delivering a project with some guidance, day-to-day meetings and mentorship.

In fact, there are senior engineers I've met or worked with very briefly during hackathons, and when I asked them certain design questions they would blurt out their ideas without really knowing how to contribute, or leave it up to me to explore what could be done. In essence, neither of us knows everything. Seniors know more because they have done more, almost certainly something repetitive. It is like asking me to write hello world every day: I will have no problem responding to it. Sometimes when you look at code written by senior engineers, their code smells even worse than what I write (but there are definitely good ones I can always use as a reference).

The conclusion is: when you want to hire a new engineer, especially a junior engineer, ask yourself: are you ready to be a mentor, and is there anyone who believes they can be a good mentor? Actually, when you hire someone, ask your co-workers whether they are ready to be each other's mentors. I don't want to work at a company where everyone is hiding in a cave and thinks everyone is smart and autonomous. There are times you need to be there to guide someone through his or her obstacles. A company that encourages brown-bag sessions and training would be ideal for me.

I am serious; don't ask me to code quicksort. I am not going to do it. I know I am just going to memorize it, and after I've written one a dozen times it will just become hello world. No, I am kidding. I need a job, so I will do it. I will, but I'll feel a time bomb ticking next to me. I will either fail or go on with another few rounds of writing more quicksort.

seiji 3 days ago 0 replies      
Technical interviews are a glaring example of a problem we don't really recognize: being stereotypically "male technical smart" is a disability.

Having smart people try to determine if another person matches their same definition of "smartness" is fraught with peril. There are a dozen dimensions to "smartness," and not everybody aligns properly.

It's similar to the quote "You have to be twice as smart to fix broken code as you were when you wrote it." Extremely clever code with bugs is provably unfixable. Extremely clever interviews are almost provably bad filter criteria.

But, in the context of interviewing, evaluating _a person_ isn't the same as evaluating _their immediate output_. Evaluating the future output of a person isn't the same as evaluating "can this person replicate intro-to-CS exercises they haven't touched in 12 years?"

Correct interviews require, gasp, strong compassionate people skills in addition to domain knowledge where you can challenge candidates. You've always got to figure out what they actually know versus what they think they know versus what they say they can do versus what they can actually do.

Then there's an entire other issue of "Smart People" versus "Capable People." Most people in power end up being "smart," but not necessarily capable any longer (by "failing upward" and now having magic protections). Some people end up being decision makers with little actual creation responsibilities (read: anything they could actually be judged against), so they are free to just be amazing with little detriment for their decisions. But, sometimes you need a number of "smart but not capable" people to balance out half a company being head-down technically but not necessarily aware of larger issues plaguing them. (Did I just invent managers?)

Todoed 3 days ago 1 reply      
Mostly do!
shaunrussell 3 days ago 0 replies      
From my experience this is how everyone is hiring... sounds like you've had some bad experiences.
lnanek2 3 days ago 1 reply      
> The famous fizzbuzz test simply asks "are you aware of the modulo operator?

No it doesn't. It could be implemented with counters you reset when needed. Didn't really read any further, I don't think this person should really be giving technical interviews anyway.
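A modulo-free FizzBuzz along the lines this comment describes might look like the sketch below. It writes labels into an array instead of printing, purely to keep the logic easy to test; all names are illustrative:

```c
#include <stddef.h>

/* FizzBuzz using reset counters instead of the modulo operator.
 * out[i-1] receives the label for i; NULL means "print the number". */
static void fizzbuzz_labels(int n, const char **out) {
    int three = 0, five = 0;
    for (int i = 1; i <= n; i++) {
        three++; five++;
        if (three == 3 && five == 5) { out[i-1] = "FizzBuzz"; three = 0; five = 0; }
        else if (three == 3)         { out[i-1] = "Fizz"; three = 0; }
        else if (five == 5)          { out[i-1] = "Buzz"; five = 0; }
        else                           out[i-1] = NULL;
    }
}
```

Each counter simply tracks the distance to the next multiple and resets when it gets there, so no `%` is ever needed.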

Submarine Cable Map 2014
214 points by bhaumik  1 day ago   107 comments top 27
analyticsjam 1 day ago 3 replies      
I spent a few years supporting these from the financial end. Modern cables are built in a loop so that they automatically fail over. For instance, most trans-Atlantic cables have PoPs (points of presence) in New York, Florida, England, and mainland Europe. If there is a cut in one segment, traffic automatically switches to the other.

Most customers these days are also on products that mux across these cables. So, if your Trans-Atlantic cable has two simultaneous outages, your traffic would automatically route itself across the Pacific. Your latency would go up, but your service would continue.

When I was there, construction was beginning on SMW-4, which had the dubious honor of being the first billion-dollar cable. They are typically incorporated via international treaties between states and companies that run roughly 100 pages long. Each partner has to cover maintenance on the part between the main cable and their drop, while everyone chips in on the main part.

It really is fascinating; outages are typically caused by boat anchors close to shore or large earthquakes. Once a guy in Hawai'i cut the Southern Cross wire with a pair of clippers while doing yardwork.

stevewilhelm 1 day ago 2 replies      
I recommend 'A Thread Across the Ocean: The Heroic Story of the Transatlantic Cable' by John Steele Gordon. [1]

From description,

"But in 1866, the Old and New Worlds were united by the successful laying of a cable across the Atlantic. John Steele Gordon's book chronicles this extraordinary achievement -- the brainchild of American businessman Cyrus Field and one of the greatest engineering feats of the nineteenth century. An epic struggle, it required a decade of effort, numerous failed attempts, millions of dollars in capital, a near disaster at sea, the overcoming of seemingly insurmountable technological problems, and uncommon physical, financial, and intellectual courage."

[1] http://amzn.com/0060524464

RankingMember 1 day ago 3 replies      
It still amazes me that we laid giant bundles of cabling at the bottom of the ocean and they're as reliable as they are.
arc_of_descent 1 day ago 4 replies      
Does each cable actually lie at the sea bed? What if the route comes across a really deep part of the ocean?
hcarvalhoalves 9 hours ago 0 replies      
It's interesting to see how much politics weighs on the connections. E.g.:

1. There are more connections between the USA and the UK than to non-English-speaking European countries.

2. Venezuela is the only country connecting Cuba.

3. Brazil connects with the Cape Verde islands and will connect with Angola, another Portuguese-speaking country. Those will be the only connections crossing the South Atlantic.

4. Southern Asia connects to Europe circumventing the Middle East, by connecting to Egypt and then crossing the Mediterranean.

rbanffy 1 day ago 2 replies      
When a telco hired me (they were the owners of the portal I worked for) I got, as a gift, a book with beautiful maps detailing their fiber network, down to the street-corner level. I called it the Modern Terrorist Manual.

With that book in hand (and I don't know how many copies were actually made) any resourceful bad guy could knock off about 60% of Brazil's phone and data network.

corv 1 day ago 1 reply      
bladedtoys 1 day ago 1 reply      
I wonder whether they just drop the trans-Atlantic cables blindly over the Mid-Atlantic Ridge and hope for the best, or check that it's not a hot spot or something.

And four cables connect Alaska to the lower 48? I wonder why.

aluhut 15 hours ago 0 replies      
I wonder what the locations where they land look like, especially when there are so many landing in one place. It must be a really well-secured place.
incanus77 1 day ago 0 replies      
What sort of tech is it built on? Looks like Mapbox.js plus probably designed in TileMill, then self-hosted? What else went into making it?
yuribit 1 day ago 1 reply      
Some days ago we were discussing sharks eating these cables and how Google stops them: http://www.forbes.com/sites/amitchowdhry/2014/08/15/how-goog...
ursusk 1 day ago 1 reply      
I imagine you can use wget or curl to download the images (or other batch download tools that let you put in ranges), then ImageMagick to stitch them together. Not too hard, but all command line tools that may take a bit of experimenting to get right.
frandroid 1 day ago 1 reply      
$250 for the paper copy? Really?
kissickas 1 day ago 1 reply      
Does anyone know of a map where I can select two locations and find the shortest path (via submarine cable, obviously) between them?

By the way, this is a nice one. Happy to see that I can scroll indefinitely in one direction.

dalek2point3 1 day ago 2 replies      
anyone have any idea what happens to user experience when one of these guys breaks? is it a big deal? what about islands who are surviving off of one cable connection?
rootuid 1 day ago 1 reply      
Ireland has multiple undersea cables landing there, but none are on this map. Makes one wonder about the reliability of this particular map.
nilsimsa 1 day ago 1 reply      
How do they handle it when cables cross over each other and they need to do repair on the one on the bottom?
dmix 1 day ago 0 replies      
I'm curious how many of them are tapped? Or would need to be tapped to get total coverage?
andyford 1 day ago 0 replies      
Totally awesome. But can't buy the wall map until they fix the "asterix" typo!
ksec 1 day ago 1 reply      
Does anyone know the total capacity of all these submarine cables?
ChrisArchitect 1 day ago 0 replies      
hey what is the difference between this and http://submarinecablemap.com/ just interactivity?
mentat 1 day ago 3 replies      
The lack of interconnect between Africa and South America is interesting. Wouldn't that be a better route with other transatlantic failures than moving to Pacific transit?

Surveillance issue?

junto 1 day ago 2 replies      
Poor New Zealand!
snake_plissken 1 day ago 3 replies      
Why does Saudi Arabia have so many?
ck2 1 day ago 2 replies      
Hard to imagine NSA intercepts most of that.
filipoll 1 day ago 4 replies      
Isn't this incredibly dangerous to have publically available? Please take this down.
Proposal for a Friendly Dialect of C
217 points by 0x09  3 days ago   116 comments top 17
Verdex 2 days ago 4 replies      
I think it's interesting that the famous tech companies are all developing programming languages that are less managed than Java/C#, but more predictable (read: less undefined behavior) than C/C++. Facebook seems interested in D, Apple has Swift, Microsoft has some sort of secret project they are working on, Google has Go, and Mozilla has Rust. Even C++ seems to be attempting to modernize with the new additions to its spec. And now we see a desire for C itself to change. I wonder if our industry is at a turning point where managed languages aren't quite cutting it, but no one is comfortable going back to the 'good old days'.

On a personal note, I like the idea of friendly C so much that I finally made an HN account. One of my favorite things to do is to take things apart and understand them. I was mortified when I learned the real meaning of undefined behavior in c/c++. It seems like the only way to be sure you understand a C program is to check the generated machine code. Even worse is that when I try to talk to other developers about undefined behavior, I tend to get the impression that they don't actually understand what undefined behavior means. I can't think of a way to verify what they think it means without insulting their intelligence, but hopefully the existence of something like friendly C will make it an easier discussion to have.

userbinator 3 days ago 1 reply      
I really like these suggestions since they can be summed up in one sentence: they are what C programmers who write code with UB would already expect any reasonably sane platform would do. I think it's definitely a very positive change in attitude from the "undefined behaviour, therefore anything can happen" that resulted in compilers' optimisations becoming very surprising and unpredictable.

Rather, we are trying to rescue the predictable little language that we all know is hiding within the C standard.

Well said. I think the practice of UB-exploiting optimisation was completely against the spirit of the language, and that the majority of optimisation benefits happen in the compiler backend (instruction selection, register allocation, etc.) At least as an Asm programmer, I can attest that IS/RA can make a huge difference in speed/size.

The other nice point about this friendly C dialect is that it still allows for much optimisation, but with a significant difference: instead of basing it on assumptions of UB defined by the standard, it can still be done based on proof; e.g. code that can be proved to be unneeded can be eliminated, instead of code that may invoke UB. I think this sort of optimisation is what most C programmers intuitively agree with.
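To make the UB-exploitation concern concrete, here is a minimal sketch (the function name is ours) of an overflow check that a standards-conforming optimizer is allowed to fold away:

```c
/* A check that standard C allows an optimizer to delete: `a + 100 < a`
 * can only be true after signed overflow, which is undefined behavior,
 * so the compiler may assume it never happens and compile this function
 * as `return 0;`. Under a "friendly" wrapping dialect the comparison
 * would instead be evaluated honestly. */
static int overflows_when_adding_100(int a) {
    return a + 100 < a;   /* intended overflow check; UB if it overflows */
}
```

Note the test below only exercises the well-defined case; calling this with a value near `INT_MAX` is exactly the situation where standard C and the friendly dialect diverge.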

simias 3 days ago 1 reply      
I can fit the proposed changes in two categories:

* Changes that replace undefined behaviours with undefined values. This makes it easier to catch certain types of coding errors at the cost of certain kinds of optimizations.

* Changes that remove undefined behaviours (wrapping arithmetic, memcpy/memmove, aliasing rules).

I'm comfortable with the first kind, although you can already achieve something very similar to that with most compilers (as far as I know) by building with optimizations disabled. Also, stuff like missing return values generates a warning in any compiler worth using; if you ignore that kind of warning you can only blame yourself.

The 2nd kind bothers me more, because it makes otherwise invalid C code valid in this dialect. I'm worried this makes things even more difficult to explain to beginners (and not so beginners, I still have to check the aliasing rules from time to time to make sure the code I'm writing is valid).

Even if you're very optimistic this friendly C is not going to replace daddy anytime soon. There'll be plenty of C code out there, plenty of C toolchains, plenty of C environment where the definition of friendliness is having a dump of the registers and stack on the UART in case of an error. Plenty of environments where memcpy is actually memcpy, not memmove.

For that reason I'd be much more in favour of advocating the use of more modern alternatives to C (and there are a bunch of those) rather than risking blurring the lines some more about what is and isn't undefined behaviour in C.

twoodfin 3 days ago 4 replies      
Can someone give a rationalization of why a "friendly" dialect of C should return unspecified values from reading uninitialized storage? Is the idea that all implementations will choose "0" for that unspecified value and allow programmers to be lazy?

I'd much rather my "friendly" implementation immediately trap. Code built in Visual C++'s debug mode is pretty reliable (and useful) in this regard.

EDIT: It occurs to me that this is probably a performance issue. Without pretty amazing (non-computable in the general case?) static analysis, it would be impossible to tell whether code initializes storage in all possible executions, and using VM tricks to detect all uninitialized access at runtime is likely prohibitively expensive for non-debug code.

cousin_it 3 days ago 1 reply      
Some time ago I came up with a simpler proposal: emit a warning if UB exploitation makes a line of code unreachable. That refers to actual lines in the source, not lines after macro expansion and inlining. Most "gotcha" examples with UB that I've seen so far contain unreachable lines in the source, while most legitimate examples of UB-based optimization contain unreachable lines only after macro expansion and inlining.

Such a warning would be useful in any case, because in legitimate cases it would tell the programmer that some lines can be safely deleted, which is always good to know.
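The kind of source-level unreachability described above can be illustrated with a classic pattern (the function name is made up for the example):

```c
/* The null check here becomes unreachable once the compiler exploits
 * the earlier dereference: since `*p` is undefined behavior when p is
 * NULL, the optimizer may assume p != NULL and delete the check
 * entirely. A warning that a source line was made unreachable would
 * flag the real bug: checking the pointer after dereferencing it. */
static int length_or_minus_one(int *p) {
    int len = *p;          /* dereference happens first */
    if (p == 0)            /* may be silently removed by the optimizer */
        return -1;
    return len;
}
```

The fix, of course, is to hoist the check above the dereference; the proposed warning would point a programmer at exactly this line.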

DSMan195276 3 days ago 0 replies      
The kernel uses -fno-strict-aliasing because they can't do everything they need to do by adhering strictly to the standard; it has nothing to do with it being too hard. (The biggest example probably being treating pointers as integers and masking them, which is basically illegal to do in C.)

IMO, this idea would make sense if it were targeted at regular software development in C (and making it easier not to shoot yourself in the foot). It's not as useful to the OS/hardware people, though, because they're already not writing standards-compliant code nor using the standard library. There's only so much it can really do in that case without making writing OS or hardware code more annoying than it already is.

Roboprog 2 days ago 1 reply      
Back in 1990, it didn't take long to figure out that (Borland) Turbo Pascal was much less insane than C. Unfortunately, it only ran on MS-DOS & Windows, whereas C was everywhere.

Employers demanded C programmers, so I became a C programmer. (now I'm a Java programmer, for the same reason, and think it's also a compromised language in many ways)

For anybody who is willing to run a few percent slower so that array bounds get checked, there is now an open source FreePascal environment available, so as not to be dependent on the scraps of Borland that Embarcadero is providing at some cost. Of course, nobody is going to hire you to use Pascal. (or any other freaky language that gets the job at hand done better than the current mainstream Java and C# languages)

jhallenworld 3 days ago 1 reply      
I have strong feelings that the C standard (and, by extension, C compilers) should directly support non-portable code. It means many behaviors are not "undefined"- instead they are "machine dependent". Thus overflow is not undefined- it is _defined_ to depend on the underlying architecture in a specific way.

C is a more useful language if you can make machine specific code this way.

I'm surprised that some of the pointer math issues come up. Why would the compiler assume that a pointer's value is invalid just because the referenced object is out of scope? That's crazy.

Weird results from uninitialized variables can sometimes be OK. I would kind of accept strange things happening when an uninitialized bool (which is really an int) is not exactly 1 or 0.

Perhaps a better way to deal with the memcpy issue is this: make memcpy() safe (handles 0 size, allows overlapping regions), but create a fast_memcpy() for speed.

revelation 3 days ago 2 replies      
Plenty of architectures do not trap at null-pointer dereferencing (they don't have traps). Some (like AVR) are not arcane, they are one of the best excuses for still writing C nowadays.
otikik 3 days ago 5 replies      
I haven't done C in at least a decade, so bear with me.

> 1. The value of a pointer to an object whose lifetime has ended remains the same as it was when the object was alive

> 8. A read from uninitialized storage returns an unspecified value.

Isn't that already the case in C?

> 3. Shift by negative or shift-past-bitwidth produces an unspecified result.

What is the meaning of "an unspecified result" there?

> 4. Reading from an invalid pointer either traps or produces an unspecified value.

What is the meaning of "traps" here? Is it the same as later on ("math- and memory-related traps")?

sjolsen 3 days ago 4 replies      
I don't really see what's to be accomplished by most of the points of this. A program that invokes undefined behaviour isn't just invalid; it's almost certainly _wrong_. Shifting common mistakes from undefined behaviour to unspecified behaviour just makes such programs less likely to blow up spectacularly. That doesn't make them correct; it makes it harder to notice that they're incorrect.

Granted, not everything listed stops at unspecified behaviour. I'm not convinced that that's a good thing, though. Even something like giving signed integer overflow unsigned semantics is pretty effectively useless. Sure, you can reasonably rely on twos-complement representation, but that doesn't change the fact that you can't represent the number six billion in a thirty-two bit integer, and it doesn't make 32-bit code that happens to depend on the arithmetic properties of the number six billion correct just because the result of multiplying two billion by three is well-defined.

Then there's portability. Strict aliasing is a good example of this. Sure, you can access an "int" aligned however you like on x86. It'll be slow, but it'll work. On MIPS, though? Well, the compiler could generate code to automate the scatter-gather process of accessing unaligned memory locations. This is C, though. It's supposed to be close to the metal; it's supposed to force-- I mean, let you do everything yourself, without hiding complexity from the programmer. How far should the language semantics be stretched to compensate for programmers' implicit, flawed mental model of the machine, and at what point do we realize that we already have much better tools for that level of abstraction?

dschiptsov 2 days ago 1 reply      
Show us, please, how the dialect of C which is used for development of Plan 9 is unfriendly and not good enough.
Someone 3 days ago 1 reply      
"Reading from an invalid pointer either traps or produces an unspecified value."

That still leaves room for obscure behavior:

  if (p[i] == 0) { foo(); }
  if (p[i] != 0) { bar(); }
Calling foo might change the memory p points at (p might point into the stack or it might point to memory in which foo() temporarily allocates stuff, or the runtime might choose to run parts of free() asynchronously in a separate thread), so one might see cases where both foo and bar get called. And yes, optimization passes in the compiler might or might not remove this problem.

Apart from truly performance-killing runtime checks, I do not see a way to fix this issue. That probably is the reason it isn't in the list.

(Feel free to replace p[i] by a pointer dereference. I did not do that above because I fear HN might set stuff in italics instead of showing asterisks)

rwmj 3 days ago 0 replies      
I would think also something like "if I write a piece of code, the compiler should compile it", perhaps "or else tell me with a warning that it isn't going to compile it".
Ono-Sendai 3 days ago 1 reply      
Not sure this is a good idea. Since a lot of behaviour becomes implementation-defined, code written in this dialect will not be portable.
kazinator 3 days ago 1 reply      
> 1. The value of a pointer to an object whose lifetime has ended remains the same as it was when the object was alive.

This does not help anyone; making this behavior defined is stupid, because it prevents debugging tools from identifying uses of these pointers as early as possible. In practice, existing C compilers do behave like this anyway: though any use of the pointer (not merely dereferencing use) is undefined behavior, in practice, copying the value around does work.

> 2. Signed integer overflow results in twos complement wrapping behavior at the bitwidth of the promoted type.

This seems like a reasonable request, since only museum machines do not use two's complement. However, by making this programming error defined, you interfere with the ability to diagnose it. C becomes friendly in the sense that assembly language is friendly: things that are not necessarily correct have a defined behavior. The problem is that people then write code which depends on this. Then when they do want overflow trapping, they will have to deal with reams of false positives.

The solution is to have a declarative mechanism in the language whereby you can say "in this block of code, please trap overflows at run time (or even compile time if possible); in this other block, give me two's comp wraparound semantics".

> 3. Shift by negative or shift-past-bitwidth produces an unspecified result.

This is just word semantics. Undefined behavior, unspecified: it spells nonportable. Unspecified behavior may seem better because it must not fail. But, by the same token, it won't be diagnosed either.

A friendly C should remove all gratuitous undefined behaviors, like ambiguous evaluation orders. And diagnose as many of the remaining ones which are possible: especially those which are errors.

Not all undefined behaviors are errors. Undefined behavior is required so that implementations can extend the language locally (in a conforming way).

One interpretation of ISO C is that calling a nonstandard function is undefined behavior. The standard doesn't describe what happens, no diagnostic is required, and the range of possibilities is very broad. If you put "extern int foo()" into a program and call it, you may get a diagnostic like "unresolved symbol foo". Or a run-time crash (because there is an external foo in the platform, but it's actually a character string!) Or you may get the expected behavior.

> 4. Reading from an invalid pointer either traps or produces an unspecified value. In particular, all but the most arcane hardware platforms can produce a trap when dereferencing a null pointer, and the compiler should preserve this behavior.

The claim here is false. Firstly, even common platforms like Linux do not actually trap null pointers. They trap accesses to an unmapped page at address zero. That page is often as small as 4096 bytes. So a null dereference like ptr[i] or ptr->memb where the displacement goes beyond the page may not actually be trapped.

Reading from invalid pointers already has the de facto behavior of reading an unspecified value or else trapping. The standard makes it formally undefined, though, and this only helps: it allows advanced debugging tools to diagnose invalid pointers. We can run our program under Valgrind, for instance, while the execution model of that program remains conforming to C. We cannot valgrind the program if invalid pointers dereference to an unspecified value, and programs depend on that; we then have reams of false positives and have to deal with generating tedious suppressions.

> 5. Division-related overflows either produce an unspecified result or else a machine-specific trap occurs.

Same problem again, and this is already the actual behavior: possibilities like "demons fly out of your nose" do not happen in practice.

The friendly thing is to diagnose this, always.

Carrying on with a garbage result is anything but friendly.

> It is permissible to compute out-of-bounds pointer values including performing pointer arithmetic on the null pointer.

Arithmetic on null works on numerous compilers already, which use it to implement the offsetof macro.

> memcpy() is implemented by memmove().

This is reasonable. The danger in memcpy not supporting overlapped copies is not worth the microoptimization. Any program whose performance is tied to that of memcpy is badly designed anyway. For instance if a TCP stack were to double in performance due to using a faster memcpy, we would strongly suspect that it does too much copying.

> The compiler is granted no additional optimization power when it is able to infer that a pointer is invalid.

That's not really how it works. The compiler assumes that your pointers are valid and proceeds accordingly. For instance, aliasing rules tell it that an "int *" pointer cannot be aimed at an object of type "double", so when that pointer is used to write a value, objects of type double can be assumed to be unaffected.

C compilers do not look for rule violations as an excuse to optimize more deeply, they generally look for opportunities based on the rules having been followed.

> When a non-void function returns without returning a value, an unspecified result is returned to the caller.

This just brings us back to K&R C before there was an ANSI standard. If functions can fall off the end without returning a value, and this is not undefined, then again, the language implementation is robbed of the power to diagnose it (while remaining conforming). Come on, C++ has fixed this problem; just look at how it's done! For this kind of undefined behavior, which is erroneous, it is better to require diagnosis rather than to sweep it under the carpet by turning it into unspecified behavior. Again, silently carrying on with an unspecified value is not friendly. Even if the behavior is not classified as "undefined", the value is nonportable garbage.

It would be better to specify it as zero than to leave it unspecified: falling out of a function that returns a pointer causes it to return null; out of a function that returns a number, 0 or 0.0 in that type; out of a function that returns a struct, a zero-initialized struct; and so on.

Predictable and portable results are more friendly than nonportable, unspecified, garbage results.

allegory 3 days ago 7 replies      
It is friendly, if you're not an idiot. Not becoming an idiot is the best solution (practice, lots).

Edit: perhaps I made my point badly, but don't assume that you'll ever be good enough to not be an idiot. Try to converge on it if possible, though. I'm still an idiot with C and I've been doing it since 1997.

How to walk through walls using the 4th dimension [video]
216 points by numo16  3 days ago   95 comments top 16
panarky 3 days ago 2 replies      

  Flatland: A Romance of Many Dimensions is an 1884 satirical novella
  by the English schoolmaster Edwin Abbott Abbott. Writing pseudonymously
  as "A Square", the book used the fictional two-dimensional world of
  Flatland to comment on the hierarchy of Victorian culture; but the
  novella's more enduring contribution is its examination of dimensions.
"A Square" feels sorry for poor, limited "A Line", but can't wrap his two-dimensional head around what "A Sphere" can do.


metadept 3 days ago 4 replies      
I was really hoping for a continuous (rather than discrete) fourth dimension. The concept is cool and looks to be well executed, but it's not very different from other alternate reality implementations such as the Dark World in Legend of Zelda.
ericHosick 3 days ago 3 replies      
As a game, this is really cool.

But when he got to the 4th dimension it didn't make much sense as an explanation (rubble from 3rd dimension showing up in the 4th).

However, if the 4th dimension was time, then you would walk back (or forward) in time to when the wall wasn't there (isn't there anymore cause it is now rubble), move "past the wall", and then walk forward/backward to the original time.

biot 3 days ago 1 reply      
This is a very interesting way of explaining the 4th dimension by relating it to the difference between the 2nd and 3rd dimensions. However, I think what would really solidify my understanding is to see the 4D level editor.

I'm not sure how truly 4D this world is, but it appears to let you switch between a fixed set of different 4 dimensional positions, each of which has a different 3D representation. Just as a plane in a 3D environment is a 2D slice, a plane in a 4D environment is a 3D slice. You could effectively fake this by imagining a triangular prism (like a cylinder, but triangular instead of round). Each face of the prism contains a fully defined 3D world. Switching to the alternate face of the prism is effectively doing the swap to the other 4th dimension, where you can walk along the alternate 3D environment to avoid the obstacle, then switch back to the original 3D environment.

archagon 3 days ago 3 replies      
I've been getting progressively more excited about Miegakure over the past year, but I haven't heard anyone who's played the game talk about how easy it is to "grok". Does moving between dimensions become intuitive over time? Or do you always have to consciously think about what you're doing?
dschiptsov 2 days ago 2 replies      
Why not consider the idea that any "dimensions" are nothing but "[useful] concepts of the mind", like "universal time" and even "limited space", the way some ancient mystics (?) of various cultures believed?

The simplest example is about time - we (humans) have notion of time "entirely" due to the certain properties of our environment in which we have evolved - that there is day/night, Moon phases, seasons, periodical changes (due to rotation of the planet and its motion around the Sun, of course, but this is a very recent discovery).

Now imagine that you are somehow "suspended" in the outer space without any motion relatively to the Sun. What would be your notion of time?

The time of a certain process (a mass in motion) has nothing to do with time of another, completely different process (like radioactive decay) and the notion of some "common, universal time" is just a "creation of the mind" which is very handy and useful but "does not exist in reality".

Ancient Buddhist notions of "emptiness" or "void", and pre-Buddhist (Upanishadic) notions of "everything is mind" (which is wrong, but very close and accurate) are insights to the same "truths".

Your "dimensions" are "primitive concepts" of the same kind. Any coordinate system imposed on so-called "reality" is nothing but a "concept of the mind".

smtddr 3 days ago 2 replies      
I've been watching this vid on repeat trying to fully comprehend what's happening.

This is another vid trying to explain the same thing - https://www.youtube.com/watch?v=UnURElCzGc0

I really want to believe this is possible and that this is how we'll be able to travel far into space. Like how we have to spend time walking around a very long wall when a bird can just fly over it in a fraction of the time, we wouldn't have to think about traveling light-years because there would be something else we could do that's faster than a straight line.

anoother 3 days ago 3 replies      
The game looks fascinating. I'm still trying to get my head around the analogies, but I get the feeling that playing the game for real will bring a level of intuition to the experience.

In the meanwhile, can anyone help me out?

In the 2D->3D example, you have one shared axis (Z -- vertical), and an alternating pair for the second axis (X and Y -- horizontal). In effect, from the 2D character's point of view, 'jumping' into the third dimension is just swapping your horizontal axis. Your previous horizontal axis is a vertical section through your current one, and vice versa. As such, when swapping axes, you would expect one 1D line (ie. a 1-pixel-wide vertical band) to be the same before and after the swap.

When going 3D-> 4D, you have 2 shared axes. Your third axis alternates, as before. When switching between your 'third' axes, then, you would expect one entire 2D plane of your perceived world-view to remain the same[1]. This doesn't seem to be the case.

What am I missing?

[1] EDIT: This plane is likely to be a section through some concrete objects, so you won't necessarily have 'seen' it before, though it will have been there.

alpeb 3 days ago 1 reply      
If this topic interests you, you should definitely check "Imagining the Tenth Dimension" https://www.youtube.com/watch?v=zqeqW3g8N2Q
aoldoni 2 days ago 0 replies      
I saw something similar on the demos of what was supposed to be the Sonic X-treme game. This was supposed to be the debut of the Sonic franchise in the 32-bit world.


laxatives 3 days ago 0 replies      
I remember being really excited about this game a few months ago. Part of me is a little concerned whether there is enough to gain some sort of intuition on where you are and what exactly is transforming with each input. I'm a little concerned it will just become a game where you try things and hope for the best, with a very incomplete mapping of what's really going on. But anyway, it's neat to see such a well-produced game attempt to take on these challenges.
jader201 3 days ago 1 reply      
This is exactly what Guacamelee does:


nanoleoo 2 days ago 0 replies      
That reminded me a lot of this really nice open source game: http://stabyourself.net/orthorobot/
yutah 3 days ago 2 replies      
Can you travel faster then the speed of light if you use the fourth dimension? (my Internet ping is too high)
kazinator 3 days ago 0 replies      
People use 4D to go through walls all the time, when those walls take the form of sliding doors. They stand in one place in space, in front of the door, while naturally moving forward in the fourth dimension (time) to arrive to a slice of space-time in which the door isn't there. In this range of space-time they traverse space by a small distance (a meter or two), and also slip forward in time by a several seconds (which is not long enough to leave this special range of space-time). By so doing, they reach a point in space on the other side of where the door was. Then they slip forward in time again, to a time when the door is there again, at which time they smugly find themselves on the other side of the door.
mattwad 3 days ago 2 replies      
Looks like a neat game, but I'm skeptical how it would work in reality. For example, it looks like the 4th dimension (time) gets frozen (for the observer) when you want to traverse it. That means, the second you 'travel in time,' the Earth, at its current velocity, would already be 30km away.
Microsoft filed patent applications for scoped and immutable types
206 points by chx  3 days ago   86 comments top 12
tiffanyh 3 days ago 2 replies      
For those who are claiming prior art, I suggest you open a question up at "Ask Patents" (StackExchange site).

"Ask Patents" has been used by the USPTO to overturn patent requests.


more_original 3 days ago 3 replies      
I haven't read the immutability patent in detail, but it seems like the problem they are considering is not completely trivial. I've seen research papers on similar topics. Example: http://www.cs.ru.nl/E.Poll/papers/esop07.pdf

I think one should have a careful look at it to check if there is anything new here. Maybe there is.

Edit: Found a recent paper by some of the patent authors that looks related:https://homes.cs.washington.edu/~csgordon/papers/oopsla12.pd...

That being said, it clearly shouldn't be possible to patent such ideas.

readerrrr 3 days ago 2 replies      
Prior art: typedef const int cint ;

How wrong am I and what is Microsoft smoking?

webXL 3 days ago 0 replies      
The first thing that came to mind in terms of prior art for the immutable object type was JavaScript's Object.freeze(obj) which was released in ES 5.1 (2011):


Edit: replaced Object.seal with Object.freeze

WalterBright 3 days ago 0 replies      
The D programming language introduced transitive immutable types in 2007 with version 2.000.


hyperliner 3 days ago 0 replies      
It's easy to see how this patent application can actually come to life:

At the end of every sprint or dev milestone, an email goes out from some PM or dev manager, asking people for "ideas to be patented." Somebody then collects all of these ideas. Because applications cost money and units have budgets, then there is a preliminary amount of "filtering": ideas that are "cool," the latest shiny feature (who cares if it has been developed elsewhere for ages), etc. This goes from things like tiles for UI to really deep advancements in cloud computing. The specs are a source of leads. Obviously in many cases the list includes a bunch of relatively minor advances (since a lot of development at MS is mostly incremental) that may be prior art except to the junior dev and PM who think they just discovered how to square the circle. There may be some preferential treatment here for patents to special people, given that there is a little bit of money that, if approved, goes back to the employees too. There is a column in some spreadsheet that classifies the patent ideas based on their perceived awesomeness.

At some point the list of patents has to be sent to some IP lawyer. The IP lawyer does real work here, but is also looking at things like competitive aspects the team might have missed: i.e., if it is in an important business area, then it gets an extra boost to become an application. An exec probably approves the list too.

When the list is approved for applications, the job is then to make sure the application is written by the members of the team and the lawyers. This takes a lot of time and a lot of work, but since the specs already exist, people use that as the basis.

When you get a patent application granted, you get a little plaque which you hang on your wall and/or a little cube you stack on your desk. The higher the stack the more your patent prowess. Given how some people have been at the company for long, they can accumulate 20+ of these cubes. You get an aura of inventor.

Obviously, people find that it is beneficial to get a patent (the plaque, the cube, the little cash), and would lobby for these ideas to be in the approved set, or at least to participate in the shiny specs because the rule is that everybody who was even remotely involved in the patent gets to be listed in it (therefore, managers get a lot of patents because they were in meetings where the idea was discussed and maybe contributed to it).

jskonhovd 3 days ago 0 replies      
Jared Parsons is the VsVim guy. I heard they are working on a systems programming language for Microsoft.


duncan_bayne 3 days ago 0 replies      
A while back, someone was on HN asking why trust levels were so low when it came to MS. If that person is still reading HN, the answer is "behaviour like this".
polskibus 3 days ago 7 replies      
Does anyone know how do companies incentivize employees to come up with such patents? Does the patenting initiative come from the employee that "invents" or is this decision made by someone else?
shmerl 3 days ago 0 replies      
What are they smoking? It could be hilariously funny (like this http://www.theonion.com/articles/microsoft-patents-ones-zero... ) if it wasn't so crazy but true.
jokoon 3 days ago 1 reply      
I still need to understand how the patent office can still validate such patent.

I've read over and over that the patent system is bad, but isn't it just the patent office being incompetent instead? Isn't that rather an expertise problem? Or are experts not even asked at all on those subjects?

Can't anybody sue the patent office for abusive patents? Or is it just that the people at the patent office depend on Congress, thus making it biased?

Intel Unleashes Its First 8-Core Desktop Processor
192 points by joaojeronimo  1 day ago   140 comments top 22
reitzensteinm 1 day ago 6 replies      
Even the 5960X, the $999 8-core part, has a maximum memory size of 64 GB, unchanged since Sandy Bridge E.

That's disappointing, because while the CPU will likely remain close to state of the art for quite some time to come, you'll most likely max out the memory on day one and be stung by an inability to upgrade.

Of course, this was probably by design, so that they can sell you another, virtually identical 8 core processor in two more years for another $999.


rsiqueira 23 hours ago 0 replies      
Intel's disclaimer says at the end of the page: "products do not contain conflict minerals (tin, tantalum, tungsten and/or gold) that directly or indirectly finance or benefit armed groups in the Democratic Republic of the Congo (DRC) or adjoining countries."
duskwuff 1 day ago 4 replies      
And where did they unveil this new processor?

At Penny Arcade Expo.

Times really have changed.

spiritplumber 1 day ago 2 replies      
Also, Parallax has just open-sourced theirs!


8-core microcontroller in 2006, not bad. They're releasing a better one later this year, so they've opened the verilog design for the current one.

Xcelerate 1 day ago 6 replies      
Could someone give me a simple explanation of what exactly hyperthreading does? They tout 16 logical cores and 8 physical cores in this new chip. I've read the Wikipedia page on it, but it gets too technical.

I do molecular dynamics simulations with LAMMPS, and I've noticed performance on my laptop is best with 4 cores. Using all 8 "virtual cores" is actually quite a bit slower.

Kompulsa 9 hours ago 1 reply      
Why didn't they do this sooner?

AMD already has a 16-core Opteron processor. I'm not saying that AMD is any better, but I thought Intel would have started selling these long ago, judging by the pace of the computer industry.

lucb1e 17 hours ago 3 replies      
I'm both excited and not. This is more power in a CPU and that's great progress, but for a desktop? I mean, servers, games, and graphical applications would be faster, but the majority of our waiting time when using a computer is on single-threaded calculations. As someone who doesn't game a lot and uses GIMP only for the most basic of purposes, I would much rather have an improved dual-core CPU that produces less heat in total (compared to 8 cores) and can be clocked higher because of that.
qwerta 20 hours ago 3 replies      
This thing supports 8 DDR4 slots. Finally we are moving beyond 32GB RAM limit.
tracker1 8 hours ago 0 replies      
Of course HP will now include it in a desktop with half the features disabled, and no option in the BIOS to enable them.
auvrw 12 hours ago 1 reply      
this is a pretty naive comment, but it's really intended to be totally serious: what's up with cores? like, why do we really need cores? is it really fundamentally better architecture to have a RISC sitting at the front of the instruction pipeline to distribute x86 instructions to some internal set (particularly wrt. power consumption), or do we in fact just have cores in order to increase fab yield [/ootbcomp.com-bootcamping]
Corrado 18 hours ago 2 replies      
I wonder if Apple will announce anything that uses this processor in the Sep. 9th event? I could possibly see it being used in a refreshed Mac Pro or iMac.
coldcode 1 day ago 1 reply      
How does this compare to a 3.0GHz 8-core Xeon E5?
graycat 1 day ago 5 replies      
Why just "client"? Why not use it in a server? What am I missing?

Cost per operation? Can get an AMD 8 core processor, 125 Watts, 4.0 GHz clock, for about $180. So, $1000 for an Intel processor with 8 cores with hyper threading stands to be cost effective? In what sense?

programminggeek 1 day ago 2 replies      
I really don't keep up on this stuff much, but why is this still Haswell based? Why not just do this on Broadwell?
Zardoz84 16 hours ago 0 replies      
And the FX 8120 eight core CPU ??
ck2 14 hours ago 0 replies      
I'd still rather have 6ghz 4-core but I guess that isn't going to happen (anytime soon for a reasonable price).
higherpurpose 15 hours ago 0 replies      
Intel is getting disrupted by the book (they keep moving upmarket now). The funny thing is they know it. But they can't stop it at this point. So they just go along with it.
surak 6 hours ago 0 replies      
5yrs late
Alupis 1 day ago 2 replies      
> Intel's first client processor supporting 16 computing threads and new DDR4 memory will enable some of the fastest desktop systems ever seen.

Not necessarily -- as AMD fans (I'm one) have seen, "more cores is better" is not always true -- it heavily depends on the workload, and frankly, most games and programs are not utilizing these CPUs fully (yet). Now, put something like 2 x 16-core Opterons in a server and you have yourself quite a powerful virtualization platform.

With that said, I'm interested in seeing its price point and performance compared to AMD's offerings.

zapt02 1 day ago 0 replies      
fdsary 19 hours ago 0 replies      
Title Caps And "Unleaches". Intel Unleaches 8-Core Paralel Marketing On News.Ycombinator
With $30M More in Hand, IFTTT Looks to the Internet of Things
202 points by novum  2 days ago   89 comments top 28
idlewords 2 days ago 6 replies      
IFTTT strikes me as the canonical example of a useful, small project that people would pay for that instead got stars in its eyes. For the moment, it's a great example of venture capitalists subsidizing a useful adapter cable for Internet services. But I get nervous when infrastructure gets delusions of grandeur. If 'cron' suddenly acquired a staff of 40, I would start looking for alternatives.
jeswin 2 days ago 5 replies      
> While IFTTTs dream is for all companies to play nicely together via its open platform...

A closed-source integration service is not an open platform, and this is especially relevant when it is an integration platform. I am betting (and building something towards the same ends) on more people embracing and promoting open alternatives than at any time before. IFTTT's market is huge. There are millions of programmers around the world who'll never write any code outside work, and they vastly outnumber the rest of us. The next step in open and free software will be making coding, forking and deploying apps as easy as editing Wikipedia.

nreece 2 days ago 1 reply      
@Pinboard: "Right now the IFTTT business model is to charge one user $30M, rather than lots of users $2. The challenge will be with recurring payments"


LokiMH 2 days ago 1 reply      
My personal feelings on IFTTT after using it are that it has a massive potential as an incredible handy tool. However, at the moment the focus seems to be primarily on social media interaction and optimization. This is all well and good, but I would love to see more integration with the features of my Android device.

At this precise moment an example is not jumping to mind, but I know several times now I have seen a trigger option in the list that I could make use of, but could not find the universal Android feature I wanted to use listed as the resultant action.

Conversely, when trying to create rules for something like having my phone muted in a specific location, I had to create two triggers: first, a trigger to mute my phone upon entering a location, and second, a trigger to turn the volume up upon leaving it. Standing back to look at what I was trying to do, it is easy to understand that yes, there are two Ifs that I want my phone to react to (entering an area and leaving it), but having the option to choose "while in area" or "while outside of area" would be great. I think this comes down to the difference between a literal "if This action then That action" and "if This idea then That action".

IFTTT is clearly designed around how it operates internally rather than how people tend to think. As someone who has done programming, I get that and can work with it, but it doesn't seem the ideal model for the average person.

All this having been said I think it is a fun, wonderful gadget and I look forward to seeing how it grows. Now if only I could figure out why my triggers have been failing of late...

kenrose 2 days ago 3 replies      
Was going to ask how IFTTT differs from Zapier, then found the following article somewhat useful:


In case others had the same question.

ceejayoz 2 days ago 1 reply      
@Pinboard: "Maybe the $30 M will allow IFTTT to stop asking people for their Pinboard passwords, and use the API token"


jazzychad 2 days ago 1 reply      
And still no webhooks - http://blog.jazzychad.net/2012/08/05/ifttt-needs-webhooks-st...

This one thing would blow up usage by developers across the board.
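For what it's worth, the receiving side of a webhook is tiny. A sketch using only Python's standard library; the endpoint and payload shape are invented for illustration, not anything IFTTT defines:

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Minimal webhook receiver sketch: POST a JSON body and a callback
# fires with the parsed payload.

def make_handler(callback):
    class WebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            callback(payload)
            self.send_response(204)   # accepted, no response body
            self.end_headers()

        def log_message(self, fmt, *args):
            pass                      # keep the sketch quiet
    return WebhookHandler

def serve(callback, port=0):
    """Return a server bound to 127.0.0.1 (port 0 = ephemeral);
    call serve_forever() on it to start handling requests."""
    return ThreadingHTTPServer(("127.0.0.1", port), make_handler(callback))
```

The point of the parent's complaint stands: developers already know how to receive these; what's missing is IFTTT sending them.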

scintill76 2 days ago 0 replies      
Anyone else want the phrase "internet of things" to die? I guess it's not rational, but it just bugs me more and more every time it comes up. Too buzz-wordy or something.
carsonreinke 2 days ago 1 reply      
Too bad they don't open their platform up without requiring official partnership.
namityadav 2 days ago 2 replies      
IFTTT is at a great spot to bring simple programming & logic to non-programmers who just want to customize how they interact with technology just a little bit. Looks like Yahoo Pipes missed a big opportunity there.

However, the IFTTT homepage and WTF page are still too complicated for a casual visitor. The WTF page throws out terms like Channels, Triggers, Actions, Ingredients, Recipes, etc., instead of focusing on telling the visitor what they can do with IFTTT. The name (If This Then That) is amazing for explaining what IFTTT is, but beyond that, they seem to be making things look more complicated than they need to be.

phireph0x 1 day ago 0 replies      
Number one feature request for IFTTT: support multiple conditions, i.e. if this AND this AND this then that

As it stands now, for me, IFTTT is cute (I've played around with the app and website) but not really useful for many things without multiple conditions.
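Until something like this ships natively, compound conditions are easy to express client-side. A hypothetical sketch of "if this AND this AND this then that"; none of these names are IFTTT's API:

```python
# Sketch: a recipe fires only when every trigger predicate is true at
# poll time. Purely illustrative; IFTTT exposes nothing like this.

def make_recipe(triggers, action):
    """triggers: iterable of zero-arg predicates; action: zero-arg callable.
    Returns a poll function that runs the action when ALL triggers hold."""
    def poll():
        if all(trigger() for trigger in triggers):
            action()
            return True
        return False
    return poll
```

A compound recipe is then just `make_recipe([at_home, after_6pm, phone_idle], dim_lights)`, polled on whatever schedule the platform supports.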

afro88 1 day ago 1 reply      
Interesting that no one has mentioned Huginn [1]. It's open, and perfect for "internet of things" stuff. I've done some of the basic IoT things mentioned in the article with Huginn (open doors at specific times, turn on lights, etc.).

Sure, it might not be as pretty as IFTTT (and I'm sure whatever interfaces IFTTT builds for IoT stuff will look great... $30MM great), but they're working on it.

[1] http://github.com/cantino/huginn

ruff 2 days ago 1 reply      
I'd bet $100M on IFTTT if I had it. They've got an amazing ecosystem of often proprietary devices and services feeding event information into them.

v1 is rules by end-users; v10 is intelligence by the system.

What if they could take that data and create concepts around the identity of people using those devices, where they are at certain times, what they're doing, etc. etc. Turn all that data around and broker it back to services/devices, making each of the providers that much more powerful but dependent upon IFTTT. It's super-charging the internet of things in a way that perhaps only Google (with Google Now) seems to be thinking.

chickenandrice 10 hours ago 0 replies      
I really like automators like IFTTT, as they save you from having to keep up with yet another breaking API change from some crappy company (e.g. Facebook). The problem with all of them seems to be that they don't actually work reliably.

I tried IFTTT a number of times. I'd love for it to work, but I only use it as an "it's OK if this breaks or misses data" tool. Admittedly this is not 100% their fault, as you are at the mercy of the shoddiness of other APIs, but bugs upon bugs and the less-than-reliable nature of IFTTT actually executing things in a timely, durable manner make it 100x less useful and impossible to build on top of for real apps.

I also feel that from a business point of view, relying on other people's services is consistently a trap. Whether it's social networks like Twitter, COTS like SharePoint, or something like Evernote, it's always a trap. These businesses can break your code at any time they choose, shut off your service, or simply go under. It takes a lot of work circumventing their bugs and terrible APIs as well. Hats off for trying, but I don't feel like a company like this is usually a good long-term investment.

Add to these problems the fact that most "typical" internet users would never use IFTTT or "get it" the way it's presented today. Moreover, most programmers want more control, something more powerful, or can just build everything here better themselves, exactly to their use cases. I'm not seeing how this company is worth much even if it is "useful." Profitable != useful, many times.

beamatronic 2 days ago 0 replies      
I developed an IFTTT integration in the last couple of weeks. Their support team was very helpful. They even shared some sample code with me via GitHub. Their documentation was great and I really liked their built in regression test.
JoeAtArrayent 1 day ago 0 replies      
IFTTT is in an excellent position to help users and the IoT industry solve the problem of non-standardization of clouds/APIs. For now they just enable a lot of simple "recipes", but they can now go WAY BEYOND this and make the "everything app" that lets me view/control/mashup all of my devices and social media, etc. Having the huge ecosystem already nailed down puts them so far ahead of everybody. I would pay for this app, but many will not. They could be a great acquisition play for a company that wants to dominate at this layer and is late to the game.
joshdance 2 days ago 3 replies      
How is IFTTT going to make money?
willhinsa 2 days ago 0 replies      
Hopefully this happens:

Google Now integration

Answer whether I need an umbrella today (will have to know where I work, when I go to work, lunch, home.)

Currently I can only build an alert on whether it'll rain tomorrow. I end up having to check Google Now, and sliding across the hours of the day to make a judgment about whether it's worth it or not to bring an umbrella.

bluetidepro 2 days ago 1 reply      
Congrats to IFTTT. I really love all the integrations they have going for them. They are making some great progress with their product.

Slightly off topic, but has anyone figured out a good way to use IFTTT to do "If this AND this then that" type of recipes?

digitalsurgeon 2 days ago 0 replies      
IFTTT surely has a lot of potential. IoT device management might become a problem for normal users if IoT does become mainstream in the future.

The normal scenarios we can think of right now (for example, coming home and things turning on to appropriate settings, or blocking phones during meetings) are surely easy for us to understand. But if my parents can figure these things out and set up such tasks themselves, I see a huge market and use for such services. It is still too geeky for normal people; it needs to provide use cases for normal folks, not just tweeps or the Facebook generation.

IFTTT needs the money, their servers used to be slow when I last used it.

Havoc 2 days ago 0 replies      
Using IFTTT strikes me as a bit of a security risk or am I missing something?
Discrete 2 days ago 0 replies      
Having developed a couple of "wearable" computing products, I can appreciate what IFTTT offers for early adopting consumers. Most companies making new products don't have the time to work on too many integrations (not to mention all the contractual stuff to even be able to start). They give the early adopter a way around at least some of that problem, and I know a significant number of users appreciate that.
tmarthal 2 days ago 0 replies      
What is interesting to me is that IFTTT is running as a cloud platform, yet some of these devices won't be accessible outside the wireless LAN. I wonder if they are going to have to create a hardware piece, software service or something that communicates to their servers and translates the scripted commands and relevant statuses.

But also, this is a huge market and someone needs to do this.

rrggrr 2 days ago 0 replies      
IFTTT and Zapier are two of the most exciting companies to watch in tech right now. Why? Because automation and programming are still inaccessible to most people, but the need and interest are widespread. IFTTT and Zapier, both of which I use, are racing for breadth of services they integrate, as opposed to depth of each service's API they expose/support. I think that is a mistake. I also believe the first of the two to support industrial devices, a robust security model, and open source hardware will dominate the space. There are only so many Mailchimp, Salesforce, and Google Docs integrations of value, but infinite use cases for device integration. On this score perhaps IFTTT has a lead.

IFTTT and Zapier are what, IMHO, Apple's Automator should have been. I think Apple may have missed the mark throwing resources at Swift, when the much larger market is end-user integrations. I might add that applications like Tasker and Locale on Android add a ton of value to those platforms, at the expense of the iPhone, and Apple remains largely absent on the integration front.
abad79 2 days ago 0 replies      
IFTTT looks like a great concept that will be replicated in the near future by other services, following the massive growth that the internet of things, especially wearable devices, will bring to the internet ecosystem.
zubairov 2 days ago 0 replies      
Very interesting... a paid IFTTT account? I'm quite sceptical about that. I would expect the licensing contracts with Philips, Belkin, et al. to be the main revenue source, and to remain so in the future...
SilasX 2 days ago 0 replies      
Wow, I'm amazed by the progress. I remember when IFTTT was above the DevBootcamp hq on 5th and Market and I stopped by to visit (which would have been early-mid 2012). I remember the cute dog and still have some craigslist alerts going that I set up shortly after. Also remember talking about their goal of getting the average person to do programming-like tasks in terms of setting up chains of conditionals and actions.

Also, the name still makes me think of Riften from Skyrim.

brettcvz 2 days ago 0 replies      
Requisite comment on these sorts of posts - we are hiring for positions across the board: https://ifttt.com/jobs

It's a fun place with cool stuff going on.

The State of NAT Traversal
185 points by api  4 days ago   95 comments top 18
simmons 4 days ago 2 replies      
This is a nice summary; I was actually thinking about doing my own writeup of this very thing. I have a few thoughts beyond what was covered in the blog post:

For many protocols and modes of communication, UDP is really lousy for the developer -- we end up re-inventing TCP (congestion control, retransmission, etc.). Unfortunately, if we want to traverse NAT, we often must rely on UDP hole punching and thus use UDP. There's a TCP hole punching technique called "TCP simultaneous open", but it seems too flaky to expect to work in the real world. I've been thinking about trying UDT [1] as a user-mode TCP-like connection layer over UDP.

In addition to the ugly UPnP technique for asking a router to automatically port-forward, there have been attempts to standardize a much simpler solution in NAT-PMP and its successor, PCP. (PCP even works for IPv6 stateful firewalls.) Many people do have security concerns about this capability. I don't know to what extent NAT-PMP/PCP has made inroads into the router market; it would be interesting to conduct some sort of survey.

[1] http://udt.sourceforge.net/
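The hole-punching flow discussed above is short enough to sketch. The wire format (4-byte IPv4 plus 2-byte port) and the rendezvous flow here are invented for illustration; real systems like STUN/ICE add keepalives, retries, and candidate pairing on top:

```python
import socket
import struct

# Sketch of UDP hole punching. A rendezvous server tells each peer the
# other's public (ip, port); each peer then fires datagrams at that
# endpoint, which opens a mapping in its own NAT so the peer's packets
# can get through.

def pack_endpoint(ip, port):
    """Encode a public (ip, port) as the rendezvous server observed it."""
    return socket.inet_aton(ip) + struct.pack("!H", port)

def unpack_endpoint(data):
    return socket.inet_ntoa(data[:4]), struct.unpack("!H", data[4:6])[0]

def punch(sock, peer_ip, peer_port, attempts=5):
    """Send datagrams toward the peer's public endpoint. The outbound
    packets create a NAT mapping on our side; once both peers do this,
    traffic flows directly in both directions (for most NAT types)."""
    for _ in range(attempts):
        sock.sendto(b"punch", (peer_ip, peer_port))
```

As the parent notes, after the punch succeeds you are still left with raw UDP, which is why a reliability layer like UDT on top is attractive.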

zurn 3 days ago 2 replies      
So true in the kittens dept. The worst part is that while this situation has been developing, Stockholm syndrome has taken hold and half the battle is convincing people to be receptive to having real IP addresses again! Few people know the difference between a NAT box and firewall anymore, and "router" now means NAT box to many people.
captainmuon 3 days ago 2 replies      
> Another difficult situation arises if two peers are actually on the same local network behind the same NAT router.


> It might be tempting for peers to encode their private IP address and send it to the intermediate server and/or to the other peer. I thought about this when writing ZeroTier. On the plus side, this would work even on large segmented networks where UDP broadcasts don't make it everywhere. But the problem is that this exposes internal configuration details about the network to potentially random external peers. Many network security administrators are not going to like that, so I decided against it. I tried to think of a way to anonymize the data but couldn't, since the IPv4 private address space is so small that no form of hashing will adequately protect against exhaustive search.

I wonder if it would be possible to encrypt the local address with the internal MAC address of the router/gateway. Then only nodes seeing the same router could decode the information. I also wonder how Dropbox does this (it can tell when two connected computers are on the same LAN, and transfers files directly).

zrm 3 days ago 0 replies      
> Some NAT devices support various methods of intentionally opening ports from the inside. The most common of these is called Universal Plug-and-Play (UPnP).

UPnP should be avoided whenever possible. It's a hideous protocol. NAT-PMP (originally from Apple, now RFC6886) is dramatically simpler and ideal for small networks. Its successor NAT-PCP (RFC6887) is only slightly more complicated in order to better deal with large enterprise networks.
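The simplicity claim is easy to demonstrate: an RFC 6886 mapping request is 12 bytes of UDP sent to the default gateway on port 5351. A sketch of the request builder (gateway discovery and reply parsing omitted):

```python
import struct

NATPMP_PORT = 5351            # gateway listens here (RFC 6886)
OP_MAP_UDP, OP_MAP_TCP = 1, 2

def natpmp_map_request(internal_port, external_port=0, lifetime=3600,
                       opcode=OP_MAP_UDP):
    """Build an RFC 6886 port-mapping request. external_port=0 lets the
    gateway pick; lifetime is in seconds. Send over UDP to the default
    gateway on port 5351 and parse the 16-byte reply."""
    return struct.pack("!BBHHHI",
                       0,              # protocol version
                       opcode,         # 1 = map UDP, 2 = map TCP
                       0,              # reserved, must be zero
                       internal_port,
                       external_port,
                       lifetime)
```

Compare that with UPnP's SSDP discovery plus SOAP-over-HTTP control messages, and the "dramatically simpler" point makes itself.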

cdoxsey 3 days ago 0 replies      
Actually this is entirely achievable right now, with very little code. You can use WebRTC in chrome / firefox, which implements all the nasty details of peer-to-peer communication for you (and uses TLS-esque security). All you need is a STUN / TURN server and something to do the initial handshake and you're off to the races.

I've not gotten around to it, but someone should set up a webserver that speaks the protocol (perhaps with this: https://code.google.com/p/libjingle/). Then you serve your site as a static HTML file on AWS or similar, spin up WebRTC, and communicate with your server that way. Now you can serve a whole website sitting behind a firewall.
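The STUN piece mentioned above is small enough to sketch by hand. Per RFC 5389, a Binding request is just a 20-byte header; parsing the XOR-MAPPED-ADDRESS attribute out of the reply is omitted here:

```python
import os
import socket
import struct

STUN_MAGIC = 0x2112A442        # fixed magic cookie, RFC 5389
BINDING_REQUEST = 0x0001

def stun_binding_request():
    """Build a minimal STUN Binding request (20-byte header, no
    attributes). Returns (packet, transaction_id); the server echoes
    the transaction id in its response along with our reflexive
    (public) address."""
    txid = os.urandom(12)
    header = struct.pack("!HHI", BINDING_REQUEST, 0, STUN_MAGIC)
    return header + txid, txid

def send_to_stun(server, port=3478, timeout=2.0):
    """Fire the request at a STUN server; reply parsing is left out
    of this sketch."""
    packet, txid = stun_binding_request()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(packet, (server, port))
    return sock, txid
```

WebRTC does all of this for you, which is the parent's point; the sketch just shows how little is underneath for the simple case.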

2bluesc 4 days ago 0 replies      
I think the vast majority of my IPv6 traffic is to sites who also care about security (or care about networking in general?) and consequently use TLS where possible.

Anyone know of any stats regarding popular sites reachable over IPv6? For example, what percentage redirect requests to TLS by default?

2bluesc 4 days ago 0 replies      
If only I had someone else to talk to on IPv6 other than Gmail or YouTube.

+1 to Google for caring more than most, and thanks to Comcast for giving me an IPv6 /60 prefix delegation natively. :)

Zarathust 4 days ago 4 replies      
But as a side bonus, nearly every home computer now has a poor man's firewall. I find this insanely important for global internet security.
dxhdr 3 days ago 3 replies      
It's surprising that the number of non-traversable hosts is only 4-8%. Aren't most cellular (3G/4G) users behind symmetric NAT, or is this considering only PC users?
JoeAltmaier 4 days ago 2 replies      
It's lots more complicated than ZeroTier would have you believe, especially for enterprise clients. VPNs, firewall filtering, and non-symmetric packet forwarding all add wrinkles. And there's the case of two clients inside the same subnet: going P2P using the described algorithm requires something called router hairpinning, which has spotty support.

There are ways around all that, and Sococo supports them all. (Caveat: I work at Sococo).

asdffdsajkl 4 days ago 0 replies      
I found this Twitter post interesting: the dude did a writeup of how to use Metasploit when both the attacker and the victim are behind NAT and you are somehow able to get code execution on the victim: https://twitter.com/b1tripper/status/383085600040947712
cornewut 3 days ago 2 replies      
I'm somehow not convinced IPv6 is the solution; someone somewhere will just decide to NAT IPv6.
unwind 3 days ago 0 replies      
"The terminology used is somewhat confusing. I'm not really sure what is meant by a 'cone.'"

I think the linked-to Wikipedia page makes it pretty clear, it's simply a reference to the conical (funnel-like) shape of the right-hand side of the illustration (http://en.wikipedia.org/wiki/Full_cone_NAT#Methods_of_port_t...).

Full-cone NAT allows inbound connections from any external host, i.e. it's an unrestricted funnel (or cone).

maqr 3 days ago 1 reply      
> Ziggy then sends a message to both Alice and Bob. The message to Alice contains Bob's public IP and port, and the message to Bob contains Alice's.

Is this done by modifying the source address on the outgoing UDP packet? If so, wouldn't modern filtering prevent the spoofed packets from getting to their destination?

danbruc 3 days ago 0 replies      
In some situations Teredo [1] may get the job done.

[1] http://en.wikipedia.org/wiki/Teredo_tunneling

yry4345 3 days ago 2 replies      
> "Lots of people think NAT is a show-stopper for peer to peer communication, but it isn't. More than 90% of NATs can be traversed, with most being traversable in reliable and deterministic ways."

All the traversal methods require coordination with a 3rd party (ie: centralized) server so - yes - this is a show stopper for P2P.

As public addresses become more scarce, and carrier NAT becomes common, the problem of finding that intermediary will only get worse.

IPv6 should be a solution, but it won't get off the ground if carrier NAT gets priority, for example. (Or if ISPs just put firewalls everywhere, and other "best practices"...)

julie1 3 days ago 1 reply      
NAT may kill kittens, but IPv6 is killing polar bears massively.

I recommend reading this: https://www.nanog.org/meetings/nanog46/presentations/Sunday/... (an IPv6 deployment document), then proceed to the glitches section.

I recommend reading the NANOG mailing list to get a glimpse of the added complexity of dealing with IPv6 in real life.

Just pick random mails; it pops up once a month. Like, for instance, providers turning to LS (Large Scale) NAT or Carrier Grade NAT rather than going IPv6, for a rational reason: IPv6 engineers are hard to find, and LSNAT (eventually with dual stack), which China has proven valid, is both a way to handle the scarcity of IPv4 addresses and a way to control users' traffic.


IPv6 is not bad; it is kind of bloated and requires a tinge more training than IPv4. Plus there are the concerns about privacy (but some IPv6 guru will come up with ULA or another obscure trick; that is the IPv6 realm: everything is clear on paper, but it seems confusing to quite a few real-life engineers, since it requires a lot of RFC stacking in the brain).

States with Medical Marijuana Have Fewer Painkiller Deaths
189 points by robg  23 hours ago   35 comments top 4
codeshaman 17 hours ago 2 replies      
I haven't read the scientific study on JAMA, but I would venture to speculate that there could be several reasons for such sharp drops in mortality due to opioid overdoses:

- fewer people resort to opioids for recreational purposes, due to higher availability of marijuana

- alcohol + opioids is a deadly combination. People who are high on marijuana tend to drink less, because alcohol doesn't go well with marijuana either. Marijuana, however, can be combined with many drugs and tends to enhance the experience, requiring less of the drug.

- Marijuana makes people more cautious and health-conscious due to its capacity to induce temporary paranoia and anxiety at larger doses.

A quick Google search for 'marijuana combined with heroin' led me to a forum where people report full-blown panic attacks when combining the two, and where users generally report that the two drugs tend to enhance one another. Such panic attacks are powerful experiences and can be life-changing in a positive way.

Several countries have eased marijuana prohibition due to the opioid addiction problem getting out of control (Switzerland, Portugal, Spain, and probably others) and as a consequence saw decreases in overdoses and HIV infections, so this study just corroborates what has been observed in other places around the world.

dbbolton 21 hours ago 1 reply      
Side note: acetaminophen is a much bigger threat than most opioid analgesics themselves. It's directly involved in something like 10% of all overdoses in the US. There's no practical reason for opioids like hydrocodone to be mixed with it (as opposed to just being prescribed/administered concurrently) except that you get sick if you try to take enough of the opioid to get high. Personally, I feel that hepatotoxicity is a poor abuse deterrent, both in theory and practice.
ac29 20 hours ago 2 replies      
"However, the study authors caution that their analysis doesn't account for health attitudes in different states that might explain the association."

As the article suggests, correlation does not equal causation. Looking at this map: https://en.wikipedia.org/wiki/File:Map-of-US-state-cannabis-...

one can imagine numerous other reasons that might lead to differential rates of painkiller (ab)use between states. Socioeconomic status is fairly clear to me (states with higher status are more likely to have medical marijuana), other correlations are a bit more of a can of worms.

Crito 22 hours ago 4 replies      
I'm a big fan of recreational weed (ideally sativa strains), but my experience with indica strains for pain relief is not good. I find that rather than making me forget about the chronic pain I currently experience (neck cramps, sore muscles, etc.), it makes me remember and re-experience pain from years-old injuries. Surgical scars I got in my teens and haven't felt since suddenly have dull, unignorable sensations of pain. A toe I broke a year ago starts hurting again. Etc.

Nobody I have mentioned this to has ever said they experienced the same, so I've usually just written it off as some sort of strange psychosomatic thing, but the article's passing mention that "marijuana doesn't replace the pain relief of opiates. However, it does seem to distract from the pain by making it less bothersome" makes me wonder if something is actually going on here.

Firefox 32 Supports Public Key Pinning
184 points by jonchang  3 days ago   100 comments top 10
zdw 3 days ago 2 replies      
I wish that this sort of stuff would come down to API-level interfaces.

For example, for the longest time Python's SSL library wouldn't even verify SSL certs:


And would gladly connect to MITM'ed sites. I think this has since been rectified, but the information I've found is conflicting.

It seems to me that securing API endpoints would be even more important than securing end users, as there could be a much larger quantity of data flowing through an API than through a browser.
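For the record on the Python point: since PEP 476 (Python 2.7.9 / 3.4.3), the stdlib HTTPS clients verify certificates and hostnames by default. A quick way to check, and to make the secure settings explicit in API client code:

```python
import ssl

# create_default_context() returns a context with certificate and
# hostname verification enabled, backed by the system trust store.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED   # reject bad certs
assert context.check_hostname                     # reject bad hostnames

# An API client can pass this context explicitly rather than relying
# on whatever the library's default happens to be, e.g.:
#   urllib.request.urlopen("https://api.example.com/", context=context)
#   (api.example.com is a placeholder, not a real endpoint)
```

Passing the context explicitly guards against older interpreters or libraries that still default to unverified connections.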

lucb1e 3 days ago 4 replies      
I wonder how this is going to work. I've been using an add-on to pin certificates for half a year now, and it's hell on some websites. It worked nicely for my bank for a while, but they now employ the same technique as Google, Twitter, Facebook, Akamai, etc.: changing certificates and even their CA seemingly at random. You'd think I'm being MITM'd, but I'm pretty sure that's not actually the case.

Edit: I should read more closely, found it:

> the list of acceptable certificate authorities must be set at time of build for each pinned domain.

So it's directly in Firefox' source code right now. Pretty much useless for anyone but a few big sites.

And the pinning RFC doesn't sound much better. It makes the client store something about sites they visited, which roughly translates to supercookies.
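Whether pins are built in at compile time or announced dynamically, the pinned value itself is small: per the pinning spec (later published as RFC 7469), it's the base64 of the SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo. A stdlib-only sketch; extracting the SPKI bytes from a certificate is left to openssl or a crypto library, since the stdlib alone can't parse X.509:

```python
import base64
import hashlib

def spki_pin_sha256(spki_der):
    """Compute a pin value from DER-encoded SubjectPublicKeyInfo bytes
    (RFC 7469: base64 of the SHA-256 of the SPKI). Pinning the public
    key rather than the certificate lets a site rotate certificates
    freely as long as the underlying key pair stays the same."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()
```

Pinning the key instead of the certificate is exactly what should make the parent's bank scenario workable: the cert and even the CA can change at random without breaking the pin, provided the key is reused.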

StavrosK 3 days ago 1 reply      
Does anyone know why they didn't go with TACK?
cornewut 3 days ago 3 replies      
Maybe FF and Google should just become CAs? It would remove the extra step, as currently both pinning and registering with an existing CA are required.
mkal_tsr 3 days ago 0 replies      
> Other mechanisms, such as a client-side pre-loaded Known Pinned Host list MAY also be used.

Fantastic addition IMO. You could distribute/sync hash lists on and offline, awesome.

cpeterso 3 days ago 4 replies      
The problem I've run into with public key pinning is captive portals. Mobile operating systems or browsers need to provide a better user experience for captive portals.
eplsaft 3 days ago 1 reply      
This sounds identical to Google's CRLSet: basically a list of pinned certs inside the source code.

> In the future, we would like to support dynamic pinsets rather than relying on built-in ones. HTTP Public Key Pinning (HPKP) [1] is an HTTP header that allows sites to announce their pinset.

ok cool. Requires initial safe connection once. Like HSTS.
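For reference, the header HPKP announces looks like the following (placeholder values; the spec, later published as RFC 7469, requires at least one backup pin so a key rotation can't lock returning visitors out):

```http
Public-Key-Pins: pin-sha256="<base64 SHA-256 of current SPKI>"; pin-sha256="<base64 SHA-256 of backup SPKI>"; max-age=5184000; includeSubDomains
```

As with HSTS, the browser caches the pins for max-age seconds after the first safe connection, which is the trust-on-first-use trade-off the parent describes.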

jbb555 3 days ago 0 replies      
Yeah, makes sense, I guess. But I'm more and more worried about the web "platform"; it just gets more and more complex each day.
tete 3 days ago 2 replies      
Meanwhile Chrome doesn't even support OCSP (Certificate Revocation) for performance reasons, not even after Heartbleed.

I hope that doesn't sound like fanboyism, but not being able to communicate a certificate revocation properly is worrisome.

Firefox OS is a developer's best friend
199 points by damian2000  4 days ago   40 comments top 12
LowDog 3 days ago 0 replies      
I'm really glad Firefox OS is getting attention, and this post highlights the features Mozilla should be advertising in more developed countries instead of focusing solely on appealing to developing nations. Competition is good, and there really needs to be another player in the game apart from Apple, Google, and Microsoft. We especially need a completely transparent and open mobile operating system like Firefox OS, so I really hope this project succeeds.

I would also really like for Mozilla to focus on higher end hardware and to also address some core features that are completely absent from the OS. Some issues regarding these missing features have been sitting open for YEARS on Bugzilla, and it has prompted me to start learning how to develop for the OS, but I have a long while yet before I can make any meaningful contribution.

I got a ZTE Open C as my first smartphone, and I was really impressed with how capable it is, but there are some incredibly annoying issues that seem like they are never going to be addressed. For one, the screen brightness returns to 100% every single time I wake my phone from standby. I really hope Mozilla partners with more manufacturers down the line, because I've heard nothing but negative things about the Geeksphone and ZTE phones, and my experiences so far confirm those findings.

daveloyall 3 days ago 0 replies      
Firefox OS sounds great, but I don't think that's what this post is really about.

It is about the OP gaming his/her carrier's promotional gimmick to get free data usage added to his account, 2 mbit (mbyte?) at a time.

I did something like this once; I wasn't caught, but it nonetheless didn't turn out well.

Back in 2005, AT&T would do something like take a dollar off my bill for each dropped call.

At the time, I happened to work in a building that had some kind of Faraday cage built into the walls (EMSEC), so I had a reliable mechanism for producing dropped calls on demand.

I would just dial the local Time & Temp line on my way to the front door, and then when I stepped inside, the call would drop and I would get the dollar credit. This worked, until it didn't.

After about a month, my phone service was terrible and I was unable to place or receive international calls. I ended up talking to tier-III tech support and was informed that the cell tower closest to me was rejecting my device, so I was talking to a more distant tower. (I never learned why that prevented international calls.) Apparently the tower had automatically weighed and measured my device and found it lacking!

After the tech reconfigured the tower, my cell phone service was good again. I chalked it up to karma and ceased to commit that particular form of fraud.

wesleyy 3 days ago 1 reply      
I wonder if Mozilla purposely chose to adopt a developer-first strategy with its mobile OS, similar to Stripe. I know that this strategy works, at least somewhat, on me: I recently needed a payment processor for a hackathon and, knowing nothing about payment processors, after some quick googling for the most developer-friendly payment API, I went with Stripe. Though it's probably going to be a lot tougher for Mozilla to use this strategy effectively in such a consumer-facing product, especially when the mobile OS market is already so saturated with mature ecosystems.
darklajid 3 days ago 1 reply      
Hrm. While I mostly blame Vodafone for this exploitable and weird offer, I question the idea of using this hack to advertise Firefox OS to developers.

It's novel mostly because of the strange environment, not due to the couple of lines of code to implement the hack.

And finally, I have a hard time believing you couldn't do something comparable with Tasker etc., or with a tiny Android app of your own (though I haven't tried managing calls so far, so I might be off).

pudo 3 days ago 4 replies      
I think one basic problem with Firefox OS being a "developer-first" thing is the devices it's being shipped on. Of course, a $33 phone is a very cool thing to be able to do.

But as a developer, a lot of my disposable income goes toward gadgets, and a $33 phone is just a decade behind the kinds of phones I would actually use myself.

I think that to attract developers, Mozilla should aim to find a partner that will ship a current-generation phone with Firefox OS. Asking people to install it themselves on a Galaxy based on a slightly screwed-up wiki page isn't exactly what I want for my personal every-minute-of-the-day phone.

niutech 3 days ago 1 reply      
Another benefit is that Firefox OS is fully open source, so you know it is not spying on you, and you can customize every aspect of the OS.
plicense 3 days ago 0 replies      
"On{x}" can be used to achieve the same functionality on Android. The JS might be a bit verbose, but it's as simple as the JavaScript code the OP has written.

I used to set auto-reply messages etc. for my girlfriend with On{x}. I even wrote some JS to auto-call her at specified times, just to wake her up.

frowaway001 3 days ago 2 replies      
I'd say that Firefox OS's biggest issue is its reliance on JavaScript.
SixSigma 3 days ago 1 reply      
A developer's phone would operate as a modem.
sobkas 3 days ago 1 reply      
If only there were an ssh client for it...
yowmamasita 3 days ago 0 replies      
Simple yet powerful
tsbardella2 3 days ago 0 replies      
This is why we can't have nice things
       cached 31 August 2014 02:11:01 GMT