hacker news with inline top comments (8 Nov 2014)
Doing Business in Japan
367 points by waffle_ss  7 hours ago   143 comments top 30
shinymark 6 hours ago 3 replies      
I lived and worked in Japan for a few years and can vouch for everything the author said.

I worked in the game industry in Japan and the salaries were terrible. Salaries were half or less compared to California, while the cost of living in Tokyo was similar (at least back then, it could easily be more expensive in the Bay Area now). I ended up leaving my job to do consulting from home for clients both in Japan and abroad and more than doubled my income.

Living abroad was a great experience though. The process of studying a new language and successfully using it in day-to-day life was incredibly satisfying. I'd love to do that again in another country.

fenomas 10 minutes ago 0 replies      
Dev based in Tokyo since '99 here - this is a great article across the board.

If I could offer an addendum, it would be that working for a small non-$MEGACORP company is more common than the article might suggest. I work mainly with web designers, and the median engineer I meet typically works at a 5-50 person company, often a startup. Also, small places tend to be more progressive than megacorps in some ways (e.g. not expecting lifetime employment, Saturdays off, etc.). But most of the article (company as family, etc.) applies equally to small companies.

Also as an aside, there's plenty of modern cutting edge web design here! Many of my JP colleagues have TheFWA awards and so on. It's fair to say that webdev inside the megacorps is behind the times, but I imagine that's true of most places outside silicon valley.

dkrich 3 hours ago 2 replies      
The overall theme (as an American who has never visited Japan) seems to be one of risk-aversion vs. risk-insensitivity.

In the U.S., the culture embraces risk and risk-takers, even if most of us don't actually live out these ideals ourselves.

It seems employees in Japan don't want to risk unemployment, while companies don't want to risk losing employees, while apartments don't want to risk extending credit to anybody who doesn't have a job with a recognized company.

In the U.S. it could certainly be argued that we're way too credit- and risk-embracing. However, it's also very likely that most of our economic growth in the last century happened because we've been able to get credit and because we're a society that takes risks. Americans can be irresponsible, but this description sounds like the other extreme.

aragot 3 hours ago 6 replies      
> No, really, the most formidable Japanese low-touch SaaS entrepreneur I know figured out how to sell SaaS door-to-door in Tokyo.

Seriously, given all the inefficiencies pointed out in this article, how is the economy of Japan not on the verge of collapse?

- They spend hours choosing the text of buttons,

- they're expected to learn the Way We Do Things In This Company until their 30s,

- they attend work for extended hours,

- the workforce capacities are planned dozens of years in advance,

- decision-making is centralized in the hands of the major companies, investments are interlocked between company-arranged rents, company-arranged investments and company-arranged paperwork,

- and they face competition from foreign products produced by more efficient economies, e.g. the iPhone.

I guess there are other employment markets, like France and Russia, which have friction too, and others like China which may lack the special salt of SV to be exactly as dominant, but how does Japan still manage to sustain major companies and keep selling products?

drzaiusapelord 5 hours ago 2 replies      
This really does sound like working in a union shop in the US, especially as a public sector worker. Seniority, loyalty, etc matter and things like competence and productivity are someone else's problem. Jobs are wielded as political weapons (The Democratic Party leader in Illinois, Mike Madigan famously has a list of every union job he's gifted and calls on favors from that list), etc. Inefficiencies are continually introduced.

I find the far left often bemoans a lack of paternal aspects in US society, but my god, this blog post horrified me. I would feel too trapped and powerless in that structure. I think it also explains the milquetoast offerings, especially in regards to software, these types of companies deliver.

There's something wonderfully rebellious and wild about US culture, in general, that leads to enough weirdness that somehow gets results. All the early pioneers of the things I love were pretty out there and let their freak flag fly. I can't imagine personalities like these thriving in that type of environment.

petecooper 4 hours ago 0 replies      
Related to this, Derek Sivers (formerly CD Baby, now of Wood Egg) has a range of annual guidebooks [1] aimed at entrepreneurs moving to and/or starting a business in Asia. 14 countries covered individually, or the whole range in a single volume. I have read a selection of them, more for curiosity than need, and I'd recommend shortlisting them should a relocation be on the cards.

[1] http://woodegg.com

e_modad 6 hours ago 8 replies      
The whole time I was thinking "Man, this is crazy, don't they see how that hurts productivity?" And then I thought about Europe compared to the United States. To our European colleagues: Do you think of us as working ourselves to death?
jrockway 2 hours ago 0 replies      
First off, this was a fascinating read. I've never worked in Japan (on any permanent basis), but I did go to high school in Tokyo. So this is like another side of the world that I knew existed, but didn't know much about.

Somehow Google got dragged into this. Google has some benevolence (advice about how to eat healthy, paid gym memberships, etc.), but it's not quite like the Japanese companies he describes. Nobody is managing your rent, or setting you up with dates. (Too bad, I hate dealing with landlords.) You can quit, do a startup, and come back. People that can't program don't end up doing planning meetings and making spreadsheets.

Certainly, if you like your job, your responsibilities will probably expand to take more than the 40 hour workweek. It's been that way everywhere I've ever worked in the US. (I probably err on the side of spending too much time at work, but I'm rewarded in many ways for that time, so I'm not complaining too much. But I'm not the "show up at 9 and set my alarm for 5 and leave when the alarm goes off" type of person.)

I worked in the Tokyo Google office for a month last summer. People go home at 6. People come in as late as 2. It's very much like working in the US. I came in on Saturday a couple times and didn't see many other people around. If 60 hour workweeks are endemic to all companies in Japan, the fact was hidden from me. I don't know if working for Google counts as being a salaryman, though, I didn't ask anyone.

Japan is a big country. There are certainly paths you can take that lead to 60 hour weeks and low salaries. There are other paths that don't.

Edit: and oh yeah, my one piece of advice for living in Japan: just because someone tells you you're good at Japanese, doesn't mean you are.

gknoy 6 hours ago 1 reply      
From reading previous things of patio11's, this has not yet surprised me much -- but I am blown away by the depths of details he's included here. I've only finished the section on the salaryman relationship, and it's phenomenal reading.

Edit: the link to "An Introduction to Japanese Society" seems to be broken. Try this if you're interested:


tptacek 6 hours ago 1 reply      
This is the longest setup to the best punch line about Google ever.
pheon 1 hour ago 0 replies      
As a "foreigner" who also owns and runs a company here in Japan, much of the article is so entertaining... now... but was so not funny at the time.

Unfortunately the vast majority of foreigners here are 20-something-year-old assholes who demand that Japan be the same as their home country, or clueless tourists enjoying the sights and sounds. It's through this lens that Japanese people experience foreigners, directly or indirectly, and like everywhere on the planet, people remember (and exaggerate) the negative aspects like it was yesterday, but quickly forget the neutral and positive experiences.

If you can digest the above, then it's easier to understand the Japanese perspective, which is: all interaction with foreigners has substantial risk attached. All this means is that if a task involves a local interaction, it's likely got an additional risk-management component to it.

This risk component is probably different, possibly even offensive, by your own cultural norms, but if everyone had the same expectations and culture as you, then you wouldn't be living in a foreign country.

smanatstpete 6 hours ago 1 reply      
Interesting read and captivatingly well-written. OT question: How long does it take you to write such pieces? I am trying to better myself in expressing written opinion and looking at these examples I am feeling more intimidated than encouraged. Thanks.
bmmayer1 2 hours ago 2 replies      
This article seems to suggest it's near impossible to leave a megacorp and start a company. So how did the first megacorps get started in the first place? Presumably there is a fair amount of entrepreneurship in Japan, otherwise they would not have been able to achieve their vast economic success. But I'm curious what the traditional path to entrepreneurial success is in Japan, given the way this piece makes entrepreneurship there seem impossible.
notax 3 hours ago 1 reply      
Becoming a salaryman sounds about as appealing as doing time. On the plus side conjugal visits are a bit more frequent.
jacquesm 6 hours ago 2 replies      
> Your company loves you and wants you to be happy, though, so they'll suggest two days for your honeymoon, two if a parent passes away, and one if your wife passes away. You can take that Saturday off, too, because the company is generous. There, that's like four full days. Five, if you time it with a public holiday.

I can't help but wonder how one would time the demise of one's wife to coincide with a public holiday.

oldspiceman 2 hours ago 0 replies      
I'd love to see Doing Business in Taiwan / Korea / Hong Kong.

On a personal note, I found life in Osaka/Tokyo depressing compared to Taipei and Hong Kong. It felt like Japan's time had come and gone.

myth_buster 4 hours ago 0 replies      
I think the trick to having a good work-life balance in Japan is to work for an MNC from a different country. The reason for this is that they bring their own work culture there and don't mimic the Japanese one. This lets you have time to explore the fascinating society that is Japan. If I get an offer to work there again, I would take it in a heartbeat (provided my wife agrees to relocate). I had life-altering experiences there.

Edit: Just remembered that there was a company policy that no one should stay in the office after 6 PM, otherwise your manager would be called in the next day to explain why.

robbiet480 6 hours ago 2 replies      
Is there as great of a guide as this for the US? I'm an American citizen (by birth) but have many friends that want to move over for work here and I am a horrible explainer.
kazinator 5 hours ago 0 replies      
Language lesson, how to answer the bank manager (and not have him believe you):

Q: Will you use the card to buy alcohol?



rheide 5 hours ago 1 reply      
This was utterly fascinating. Is there a Japanese version of this? I would love to rub it in some people's faces.
crdr88 6 hours ago 2 replies      
I was just reading what Andreessen had to say about why America is such an innovative country. This was a nice supplement. Thanks for the article.
badname 3 hours ago 1 reply      
A bit into the article I had the strangest feeling I was reading a dark Orwellian story. OMG, is _that_ how Japanese employees spend their lives?! I wonder what the suicide rates are over there.
hardwaresofton 5 hours ago 1 reply      
Fantastic in depth guide.

The salaries have kept me from returning to Japan for years now.

One question: I was informed that only a Japanese citizen (national) can create a traditional company... Is that not true? (The OP sort of touches on it but doesn't quite answer my question, as far as I read.)

buster 3 hours ago 0 replies      
How did you learn japanese? Any tips or recommendations?
html5web 4 hours ago 0 replies      
matthewwiese 4 hours ago 0 replies      
Fantastic article, read all the way through it.
notastartup 4 hours ago 4 replies      
While reading the article, I'm also reminded of the working culture in Vancouver tech scene without any of the benefits a worker might receive in Silicon Valley.

Just crappier pay, crazy uncompensated overtime, next to nothing in terms of salary bump. You don't like the long hours of work with meager pay and high living costs? You don't like coming in on the weekend? You don't like leaving at 9+ pm? Quit and good luck finding a job in the tech scene here, forever banished to the life of high hourly wages from contract jobs serving some American overlord, getting paid for overtime, not having to commute for hours in the crap public transportation system.

jacquesm 5 hours ago 2 replies      
You made an account just for that?
tzakrajs 1 hour ago 0 replies      
May as well be titled, "Reasons to never work in Japan."
beavershaw 3 hours ago 1 reply      
> That said: is racism a bigger problem in Japan than e.g. in the United States? Oh, yes. Unquestionably.

This is totally ridiculous. I lived in Japan for a year and as a white male, it was the first time I experienced being on the receiving end of racism. It sucked, but was a good lesson into how it feels.

That said, I'm pretty sure minorities in America have to deal with the same experience on a daily basis often with far more tragic results: http://www.vox.com/michael-brown-shooting-ferguson-mo/2014/8...

Rust and Go
176 points by mperham  6 hours ago   118 comments top 16
Animats 4 hours ago 3 replies      
The article is a lightweight analysis by someone who writes small programs. He does get that, for Rust, "If the compiler accepted my input, it ran fast and correctly. Period." That was a common experience with the very tight languages, such as Ada and the various Modulas. It's been a while since a language that tight was mainstream. We need one now, badly.

Go isn't bad for writing routine server-side web stuff that has to scale and run fast, which is why Google created it. Go is a modern language with a dated feel. No user-defined objects, just structs. No generics or templates. It was designed by old C programmers, and it looks it. Go has generic objects - maps and channels - and syntax for creating object instances - "Make". Only the built-in generics are available, though; you can't write new ones.

Go's "reflection" package thus tends to be overused to work around the lack of generics. This means doing work at run time, per data item, for things that could have been done once at compile time. "interface{}" (Go's answer to type Any from Visual Basic) also tends to be over-used.
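
The reflection workaround looks roughly like this; a minimal sketch (the function name and values are illustrative, not from the comment or article), showing both the per-item run-time work and the lost compile-time checking:

```go
package main

import (
	"fmt"
	"reflect"
)

// containsAny accepts any slice type, but the compiler can no longer
// verify that needle matches the element type: every comparison is
// resolved at run time via reflection.
func containsAny(haystack interface{}, needle interface{}) bool {
	v := reflect.ValueOf(haystack)
	for i := 0; i < v.Len(); i++ {
		if reflect.DeepEqual(v.Index(i).Interface(), needle) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsAny([]int{1, 2, 3}, 2))  // true
	fmt.Println(containsAny([]string{"a"}, 42))  // false, yet not a compile error
}
```

Note the second call: passing an int needle against a []string is nonsense, but it compiles fine and silently returns false.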

Go (especially "Effective Go") has a lot of hand-waving about parallelism. Go's mantra is "share by communicating, not by sharing", but all the examples have data shared between threads. The channels are just used as a locking mechanism. Race conditions are possible in Go, and there's an exploit which uses this. (That's why Google App Engine limits Go programs to single threads.) Go doesn't use immutability much, which is a lack in a shared-data parallel language with garbage collection. If you can make data immutable, you can safely share it, which is a way to avoid copying without introducing race conditions.
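
For contrast, the style the mantra actually intends can be sketched as a single goroutine owning the mutable state and serving requests over channels, so no data is ever shared between threads (illustrative code, not from the article):

```go
package main

import "fmt"

// query asks for the current count of a key; the answer comes back on reply.
type query struct {
	key   string
	reply chan int
}

// counter is the sole owner of the map: no other goroutine ever touches it,
// so no locking is needed and no race is possible.
func counter(inc chan string, get chan query) {
	counts := make(map[string]int)
	for {
		select {
		case k := <-inc:
			counts[k]++
		case q := <-get:
			q.reply <- counts[q.key]
		}
	}
}

func main() {
	inc := make(chan string)
	get := make(chan query)
	go counter(inc, get)

	inc <- "hits" // unbuffered sends, so each completes before the next
	inc <- "hits"

	reply := make(chan int)
	get <- query{"hits", reply}
	fmt.Println(<-reply) // 2
}
```

Because the channels are unbuffered, the owner goroutine processes each message before the sender can proceed, which makes the example deterministic.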

Rust, like Erlang, takes a much harder line on enforcing separation and locking. I haven't used Rust myself yet, so I can't say more on what it's like to use it. My hope is that Rust will provide a solution to buffer overflows in production code. After 35 years of C and its discontents, it's time to move on. I really hope the Rust crowd doesn't fuck up.

eridius 3 hours ago 2 replies      
Not a bad write-up. The Rust code snippets can be slimmed down very slightly though. Here's main():

    fn main() {
        let args = os::args();
        let washed_args = args.iter().map(|arg| arg.as_slice()).collect::<Vec<_>>();
        match washed_args.as_slice() {
            [_, "review", opts..] => review(opts),
            _ => usage()
        }
    }
although I might actually suggest the alternative approach:

    fn main() {
        let mut args = os::args().into_iter();
        args.next(); // skip program name
        match args.next().map(|s| s.as_slice()) {
            Some("review") => review(args.collect::<Vec<_>>()),
            _ => usage()
        }
    }
(this approach requires `review()` to take a `Vec<String>` instead of a `&[&str]`, but that's not a difficult change, and we could fix it using a second line of code if we wanted but at the cost of introducing a new allocation, like the original code does)

For review() I'd suggest changing the original code:

    let cwd = os::getcwd();
    let have_dot_git = have_dot_git(cwd.clone());
    let dot_git_dir: &Path = match have_dot_git.as_ref() {
        Some(path) => path,
        None => { panic!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()) },
    };
to the following:

    let cwd = os::getcwd();
    let dot_git_dir = have_dot_git(&cwd)
        .expect(format!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()).as_slice());
This actually leaves `dot_git_dir` as a `Path` instead of a `&Path`, but I think that's better anyway. It also requires `have_dot_git()` to take a `&Path` instead of (what I assume it takes now,) a `Path`, which is an appropriate change as there's no need for cloning the path.

freyr 3 hours ago 5 replies      
It's not clear how much experience the author really acquired with each language, and whether that experience was sufficient to justify his statement:

> Go felt that way to me: it was good at everything, but nothing grabbed me and made me feel excited in a way I wasn't already about something else in my ecosystem.

He's apparently using each language to write relatively small command-line utilities. If Go is "amazing" at anything, it's usually cited as a language of choice for (1) networked systems, and (2) large yet maintainable systems. I'm not sure his initial foray into the language would have provided enough experience to accurately assess those merits one way or the other.

Rob Pike once expressed surprise that people migrating to Go weren't C++ programmers, but Ruby/Python/etc. programmers who needed more performance. That leads you to wonder: (EDIT: removed pejoratives) if a programmer desired to switch from C/C++ to another language but hasn't by now, why not?

1. They require the performance benefits of C/C++ (and as humanrebar pointed out, manual memory management).

2. They're tied to legacy code, with too little incentive to switch.

3. They have an organizational mandate.

Any programmer who wasn't subject to the above constraints and wanted to switch could have done so before Go showed up on the scene. And if a programmer uses C or C++ solely because of the above constraints, Go isn't likely to change that.

Rust may have a better chance of converting C++ programmers, if it offers the performance and control demanded by programmers who are using C++ by necessity. It will be interesting to see if people migrating from Python/Ruby to a higher performance language will choose Go or Rust in the future. Kind of like the OP, I like Go but I'm excited about Rust.

alkonaut 4 hours ago 6 replies      
If you are considering Go, or just want a good laugh, just read discussions where higher order functions are discussed. Or for that matter, generics.

Here is a gem: https://groups.google.com/forum/#!topic/golang-nuts/RKymTuSC...

There's a chance you'll laugh at the people dismissing higher order functions as nonsense, in which case Go might not be for you. This is a good test of whether you want to try it out or not.

burntsushi 4 hours ago 2 replies      
> A good (trivial) example was a great command parsing library just doesn't exist yet.

There is a Docopt implementation in Rust[1], which is used by Cargo. It tracks master and is regularly updated.

Interestingly, I've found people either love or hate Docopt, so maybe you knew about it but don't like it. :P

/shameless plug

[1] - https://github.com/docopt/docopt.rs

peterevans 5 hours ago 0 replies      
I really like the idea of testing a language by writing a small command-line utility with it, even one that -- as the author mentioned -- already exists.

Way back when, when I first was learning C, I didn't comprehend it very well. I was OK with it. A friend gave me some CDs with FreeBSD on it, one with the OS, and one with program sources. It was that source code which really opened my eyes, and you could digest small programs (like chmod) and see, you know, this is working, production code, and it's not hard, and you can do this.

codezero 5 hours ago 2 replies      
I was worried when I saw "I decided to write a little Rust and, because everyone in my world is seems swoony over it, Go."

That's a pretty bad reason for using a language and usually leads to some pretty ridiculous criticism.

This post was not that; I think they nailed a lot of the good and bad things about Go. In fact, they could have been a lot more harsh. There is a depth lacking in just checking out a language this way, though, as there's no evaluation of some of the larger reasons Go exists, like concurrency and fast compile times. Then again, most people probably don't even need/care about these.

exacube 3 hours ago 2 replies      
I think to properly write a language comparison, you need to have extensively used both languages and with multiple use cases.

For example: I've recently attempted writing a small service in Go and it only took a few hours for me to figure out how weak a language can be without some sort of type abstraction or generics: I had to implement a FindValueInArray() twice for two different types. This should be a big issue in any respectable language. (I mostly like Go otherwise, though!)
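
The duplication described here looks roughly like this (illustrative names, not the commenter's actual code); the bodies are identical and only the signatures differ:

```go
package main

import "fmt"

// findInt returns the index of x in xs, or -1 if absent.
func findInt(xs []int, x int) int {
	for i, v := range xs {
		if v == x {
			return i
		}
	}
	return -1
}

// findString is the same algorithm, retyped: without generics,
// every element type needs its own copy of this loop.
func findString(xs []string, x string) int {
	for i, v := range xs {
		if v == x {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(findInt([]int{10, 20, 30}, 20))      // 1
	fmt.Println(findString([]string{"a", "b"}, "z")) // -1
}
```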

twtwtaway 2 hours ago 0 replies      
Comparing Go and Rust doesn't feel right. They are obviously designed for solving different kinds of problems. Go is a simple language, maybe even too simple for my taste. But simplicity is its greatest strength. And I can understand people who would prefer Go as their go-to language for dealing with specific kinds of problems. Go is a boring language but gets you where you want to be in a short time and without many surprises. Rust, on the other hand, is designed for systems programming. It's got some nice features but it's also a much more complex language than Go. I don't want to fight the compiler all the time. Sometimes I don't need that kind of safety.
bsaul 2 hours ago 2 replies      
I'm surprised at the paragraph about Rust having Erlang-style actor-based parallelism... From what I've read, parallel programming was still heavily a work in progress in Rust (I had a very recent discussion on HN about using Rust for HTTP server-side coding, with people confirming this to me).

Go's goroutines let me build a standalone binary with an embedded HTTPS server and websocket support. Would Rust be able to do that, even at the 1.0 release, without relying on low-level C library wrappers?

toomanymike 2 hours ago 1 reply      
I wonder what actor library the author is using with Rust. Given the status of github issues like https://github.com/rust-lang/rust/issues/3573 I thought there weren't any real options.
amelius 4 hours ago 3 replies      
How mature is Rust and its compiler actually at this moment? Is it in a state ready to replace C++?

Edit: Also, I missed a good overview of features present in one language and lacking in the other. In that respect, I find the Wikipedia page [1] deeply broken, but that aside.

[1] http://en.wikipedia.org/wiki/Comparison_of_programming_langu...

jeorgun 4 hours ago 3 replies      
Does 'the language prevents errors at compile time' really mean 'better function signatures in the standard library'? Because that's all I'm getting out of the regex example given. So far as I know the fail! macro still exists in Rust.

Edit: Evidently I should have put a /s at the end of that first sentence. I know what the idea behind compile-time checking is, but I don't see that the given example actually illustrates it.

programminggeek 4 hours ago 1 reply      
Having spent a little time with Go, I ended up feeling like it was both better and worse than Ruby. In many ways it has many of the things that I want from a language: static typing, a fast compiler, pretty sensible defaults and so on. A lot of things I wish Ruby did, Go does great.

I think Go does really well in tooling, but it doesn't feel as great in syntax or language features that I would really like. The two big ones I wish Go had were named parameters on functions, and immutable data structures.

It's probably a blub paradox thing, but my ideal language has immutable data structures and named function parameters. Kotlin, Scala, and Swift all kind of nailed these features, but Go has not.

For me, named parameters and a little verbosity goes a long way to allow me to communicate intent in my code. Immutability allows me to have a built in sense of safety that once the data is set, it stays that way.

In my experience, a whole class of problems goes away when you have those two features (alongside the many other features we'd expect from a language like Go).

What I felt from Go, when I was playing with it a year ago, is that they didn't care much about either of those two features, which is fine, but it keeps Go from being my ideal language.

Swift I wish was a bit more general purpose, Kotlin and Scala don't compile as fast as Go, and Ruby doesn't have a compiler to do some of the checking I wish it would.

Go's fast compile times with something like Ruby's syntax and a feature set similar to Kotlin/Swift would be darn near perfect.

For me, there is no perfect language.

fideloper 3 hours ago 1 reply      
tl;dr: Guy who knows Rust better than Go prefers Rust over Go.
jff 4 hours ago 2 replies      
"Where line 9 there just blindly assumes the regex found a match, and causes quite the run-time error message."

You blindly assume the regex found a match, because you ignored the part of the docs where they tell you how to check for a match: http://golang.org/pkg/regexp/#Regexp.FindStringSubmatch
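
A minimal sketch of the check the docs describe (the regex and helper name are illustrative): FindStringSubmatch returns nil when nothing matches, so test for nil before indexing into the result.

```go
package main

import (
	"fmt"
	"regexp"
)

var addrRe = regexp.MustCompile(`^(\w+)@(\w+)$`)

// splitAddr returns the user and host parts of s, and whether s matched.
// Indexing into a nil result would be the run-time panic the article hit.
func splitAddr(s string) (user, host string, ok bool) {
	m := addrRe.FindStringSubmatch(s)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	if u, h, ok := splitAddr("patrick@example"); ok {
		fmt.Println(u, h) // patrick example
	}
	if _, _, ok := splitAddr("not-an-address"); !ok {
		fmt.Println("no match")
	}
}
```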

The Best Investment Advice You'll Never Get (2008)
53 points by milesf  3 hours ago   40 comments top 8
jhulla 1 hour ago 2 replies      
Assuming you believe the underlying assumptions (and they very well may not be true), modern portfolio theory allows you to build a mathematically ideal portfolio for a given amount of risk.

The math behind MPT might be hand-waved as follows: goal-seek a maximum portfolio return by combining assets with minimal correlation under a fixed risk scenario. In the end, you will have a portfolio that will give you the maximum theoretical return for your selected amount of risk.
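
That hand-wave corresponds to the standard mean-variance program (symbols here are the usual textbook ones, not from the article): choose weights w to maximize expected return, holding portfolio variance at or below a chosen level, with the weights summing to one:

```latex
\max_{w \in \mathbb{R}^n} \; \mu^{\top} w
\quad \text{subject to} \quad
w^{\top} \Sigma w \le \sigma_{\max}^{2},
\qquad \mathbf{1}^{\top} w = 1
```

where \mu is the vector of expected asset returns and \Sigma the covariance matrix. Low cross-correlations shrink the off-diagonal terms of \Sigma, which is what "combining assets with minimal correlation" buys you: more expected return per unit of allowed variance.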

In practice, outside of running a hedge fund, or a mutual fund with explicit investment guidelines (e.g. we invest in emerging market energy companies), a responsible asset manager has no choice but to follow MPT. In other words, if you are not a specialized fund, there is no mathematical justification for deviating from an MPT-constructed portfolio. By definition, any deviation from an MPT-balanced portfolio means either that you have a) taken on more risk than necessary or b) reduced your potential return, or c) that you do not believe in the underlying assumptions of MPT.

So what is the amateur person worth $25M to do today? As with all things, you should seek professional advice. There are many nuances of tax efficiency, estate efficiency, asset protection, personal needs, etc. that a professional advisor should guide you through.

Apparently the folks at WealthFront and FutureAdvisor are selling MPT driven portfolios to employees of SF bay area tech firms.

edit: As a couple of users point out below, there is controversy over the effectiveness of MPT including: whether the models effectively capture the distribution of risk vs return and whether the values desired by the models can be calculated with proper accuracy. PMPT (post-modern portfolio theory) builds upon MPT. Lastly there are critics such as Nassim Taleb (of Black Swan fame) who find some of the core assumptions flawed.

MCRed 1 hour ago 4 replies      
Regarding "Don't beat the market", Warren Buffet says "The game is really easy when your opponent decides not to play".

Unfortunately, giving up has a lot of appeal: It means it's not your fault you didn't beat the market. It lets people say the game is rigged and take comfort.

How many times have you seen buying company stock compared to gambling? Even though, on its face, this comparison is ludicrous. (Especially here on HN, where so many of you are giving up $40k a year for 0.0004% of a company that is 98% likely to be worthless.)

It becomes like a religion, where anyone saying "Hey, it's easy to beat the market" becomes a "bad guy" who is advising "risky strategies".

The reality is, the tools available to the individual investor are such that you can control the level of risk you want. You can take a stock of high risk and make it low risk with option hedges. You can take a stock of low risk and make it high risk with different options.

"90% of options expire worthless, the real money is in selling them!" -- Abraham Lincoln

Of course, that's true -- if you have the ability to borrow from the Fed at the overnight rate. Most options are bought as they are intended: as a hedge to minimize risk. If the underlying asset goes in the expected direction, then the option will expire worthless.

My point is really that you have to read books, and hear many points of view, and figure out what's right for you.

Almost every professional and every ideologue is going to lead you down a bad path for you, because there is no one-size-fits all. There is no "best" advice.

sharkweek 2 hours ago 0 replies      
I loved the first bit of this: that Google invested the time in their employees' education about what was about to happen with sudden riches.

I know it's starting to happen, but professional sports needs this same educational process. Recently watched the 30 for 30 documentary 'Broke' and it was so heartbreaking to see what happens to these athletes who have come from nothing, get a ton of cash all of a sudden, and find themselves back where they started after the paychecks stop.

oldspiceman 2 hours ago 3 replies      
tl;dr: If you're suddenly rich, pay $10 and buy $1 million of Vanguard Total Stock Market Index. Giving your money to anybody else over the last 10 years would have yielded the same or worse performance at a higher cost.
Apocryphon 55 minutes ago 3 replies      
General question about index funds: if a market is about to go into a steep correction or even a recession, wouldn't it be more advantageous to invest in specific stable stocks, rather than an index fund that tracks the entire market?
orofino 1 hour ago 0 replies      
This is great advice and what I have been doing for the last 3-4 years for our retirement portfolio. Prior to that we were using an 'investment guy' who had us in a bunch of high-cost funds.

I sent the guy an email and ended the relationship, and took a couple of months to read and learn about this. I recommend "The Investor's Manifesto" by William Bernstein for anyone interested.

jcdavis 56 minutes ago 0 replies      
Decent article, but awfully linkbait-y headline. You'll get that advice at many places around the internet - I recommend those curious to read the wiki at bogleheads.org
dd36 1 hour ago 0 replies      

Also, the Frontline episode, "The Retirement Gamble"

Why Broken Sleep Is a Golden Time for Creativity
104 points by benbreen  4 hours ago   25 comments top 10
gwern 1 hour ago 0 replies      
Kind of a tenuous argument. Yes, some famous people do it now - but you could as well argue from, say, the Paris Review interviews that because so many great writers write first thing in the morning, you want block sleep so you don't get up late. It may also be natural, but lots of things are natural and don't cause greater creativity. And one may be able to cite personal anecdotes, but the causation could very easily be the other way: someone gets up because they have a burning idea, not because they got up and then also had a breakthrough. I think it would be much more compelling if they could point to even basic experimental verification; for example, showing greater solution rates to 'insight problems' in a segmented condition.
julianpye 3 hours ago 6 replies      
Ideally, when I wake up at 5am and my head is racing with ideas, I head to my whiteboard in my home office. I write and sketch it all down, and three hours later I fall asleep again, waking up at 10 refreshed. Most of that work has really been my best, with proposals that were refined and made it straight to top management. In some interviews, when the question 'what do you need to work effectively?' came up, I have tentatively asked about this - no early meetings, etc. - but there was always pushback. My last company, a standalone Vodafone Group R&D lab, allowed all of us researchers these luxuries, but was later consolidated and streamlined.

Is there any company that allows people to work this way? What are the best flexibilities that people have encountered? I'd be curious to know....

johnloeber 1 hour ago 2 replies      
I used to have a pretty irregular sleep pattern, and one thing I definitely noticed was the feeling of euphoria that comes with sleep-deprivation. If I stayed up deep into the night, I would feel great: energetic and ready to get things done. Such a state was also creatively useful: I did a lot of pretty good work in the deep of night. Somewhat paradoxically, I found it easier to focus at that hour.

However, these benefits were only the case for light or mild sleep deprivation. If I was more sleep-deprived, e.g. 4 hours a night for a few days, then the pendulum would swing the other way and I would feel pretty useless and slow.

I should also note that the sleep-deprived, energetic creative state was only useful for projects that I could do in a single push. If I could do some project from start to finish in a single creative burst, then an all-nighter wouldn't be a terrible way of getting it done. There would be few distractions, it'd be easy to get "flow" going, and I suspect that the euphoric, sleep-deprived state marginally reduces inhibitions, which can lead to better work.

On the other hand, it's not that great for tackling a section of a larger project, which I think is better crafted slowly during the day, over the course of a few days.

bitL 2 hours ago 0 replies      
I noticed that when I push myself a bit deeper into the night (I normally go to sleep at ~10pm), then at around midnight, when I am already tired, I have the best ideas for composing electronic music. If I stay awake, I often continue until 2-3am with an outline of a song, and while very, very tired, I often finish something surprisingly good and balanced, so that I just need to refine/add "candy" around it to finish the song. Basically I trade enhanced creativity for discomfort and feeling awful the next day, and it works surprisingly well.
earlz 3 hours ago 0 replies      
I really wonder what this says about children who wake up in the middle of the night, and the effect on parents. If you follow through with "everyone did segmented sleep", then at least the parents of children might be somewhat mismatched. Children wake up earlier, or take longer to wind back down to sleep. Is that still segmented sleep in the same vein? Maybe it was at one time natural for everyone to wake up at the same time basically, because they were so close together.

Either way, the explicit no children "rule" for experimenting with segmented sleep is an interesting thing to think about.

Of course, 9-5 jobs, which are assumed to go along with children in many cases, make it practically impossible anyway.

scott_karana 2 hours ago 0 replies      
He's made a reasonable historical case for the commonness of split-cycle sleep.

However, I wanted more evidence, so I looked to an ancestor: research on chimpanzees in the wild seems to indicate that they start their days at dawn, and sleep mainly uninterrupted through the night...

Did humans really diverge that much? I suppose further studies will tell. (After all, the creative differences seem undeniable, anecdotally...)

japhyr 3 hours ago 1 reply      
This is a wonderful reflection on the joys and productivity of getting up when you wake up, and running with your naturally wakeful energy while it lasts.

My job is the only thing keeping me on a set schedule. When I retire, or if I make a move to freelancing full-time, I'm really curious to see what becomes of my schedule. I'll stay grounded in time somehow, but I'll also lose myself in time on a regular basis.

smokey_the_bear 2 hours ago 3 replies      
I dunno, I haven't slept four hours straight since my baby was born, and this may be the least creative I've ever felt.
contingencies 2 hours ago 0 replies      
I have been changing routine to get some more exercise recently... I've started sailing again. The general routine is work the morning, sail the afternoon, sleep damn early (8:30PM?), wake up 2AM, work awhile, sleep again about 5AM, wake again about 8AM, eat, rinse, repeat.

Yesterday I woke up early for a cancelled meeting, which broke my pattern. As a result, I sailed too early and too long .. and slept right through. This morning three dogs woke me up, apparently they were having sex. I have no doubt they will sleep again :)

Google's phone number handling library
170 points by wslh  6 hours ago   31 comments top 13
jackocnr 4 hours ago 2 replies      
Ha, good to see this here - so incredibly useful, and already ported to lots of useful languages. Handling phone numbers gets messy really quickly: formatting/validation for national/international numbers, in different forms (land-line/mobile/premium etc), in hundreds of different countries... These guys have done a great job, and are also super responsive/helpful when you raise issues.

A word of warning: if you ever set out to handle international numbers in a web frontend, you may think "it can't be that hard - maybe it'll take me a couple of hours" (like I did). Do yourself a favour, save yourself a week of unexpected work, and use the jQuery plugin that I ended up creating (I don't understand why this didn't already exist), which uses libphonenumber for all the magic: https://github.com/Bluefieldscom/intl-tel-input. Hope it saves you some time.
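To illustrate why it gets messy: even a "simple" US-only normalizer (a naive, hypothetical sketch - not how intl-tel-input or libphonenumber actually work) already needs several cases, and it still ignores extensions, short codes, number types, and every other country:

```python
import re

def naive_e164_us(raw):
    """Naive sketch: normalize a US number to E.164.
    Real code should use libphonenumber; this ignores extensions,
    short codes, number types, and all non-US regions."""
    digits = re.sub(r'\D', '', raw)
    if len(digits) == 10:
        return '+1' + digits
    if len(digits) == 11 and digits.startswith('1'):
        return '+' + digits
    raise ValueError('cannot normalize: %r' % raw)
```

Multiply those branches by hundreds of country plans, each with its own lengths and prefixes, and the week of unexpected work becomes obvious.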

makeramen 6 hours ago 4 replies      
Not entirely sure how this made front page, but it is a super handy lib.

I don't understand why Android includes this lib in their source but makes it internal so you have to provide your own copy if you want to use it in your app: https://android.googlesource.com/platform/external/libphonen...

groby_b 3 hours ago 1 reply      
If you care about i18n and the physical world, there's also https://github.com/googlei18n/libaddressinput and googlei18n in general
rdegges 4 hours ago 1 reply      
I've been using this for quite a while -- it's excellent -- without a doubt the best library around for dealing with phone numbers in E.164 format (the international standard).

As someone who does a lot of telephony work, <333

adpreese 1 hour ago 0 replies      
I've used this to great effect to help handle partially obfuscated phone numbers and validate whether they could be valid or not. There are far too many edge cases to try to handle on your own if it's not core to the problem you're trying to solve.
vinhboy 1 hour ago 1 reply      
Is there such a thing as an address library? I imagine something with a standard format, and a table schema you can just put into any app and have a functioning contacts list. Preferably a ruby gem maybe?
boydjd 4 hours ago 2 replies      
I use the PHP version from time to time.


wclax04 4 hours ago 0 replies      
I've been using the python port lately, its great.


sqren 2 hours ago 0 replies      
I ported this to Github last year, due to my frustration with Google Code :p https://github.com/sqren/libphonenumber
ashmud 5 hours ago 0 replies      
I see they added carrier lookup by phone number since I last looked at it.
hughlang 3 hours ago 0 replies      
Also try the iOS port. https://github.com/iziz/libPhoneNumber-iOS

Really nice.

lifeisstillgood 3 hours ago 3 replies      
I guess I shouldn't be surprised, but one thing that surprises me is the lack of developer-friendly libraries like this coming out of major corporations that should know better.

Why, for example, is Stripe leading the charge on decent credit card front ends when Visa could spend a few million and produce the one and only Unicode-ready, works-everywhere, address-lookup credit card entry form?

Why is an advertising company doing this when any of the major telcos have this internal knowledge lying around?

The blindness does seem almost wilful at times.

Jarred 6 hours ago 0 replies      
super helpful library
Pulling JPEGs out of thin air
302 points by atulagarwal  15 hours ago   65 comments top 17
tux3 8 hours ago 3 replies      

If it's smart enough to learn how to build a JPEG in a day, use it with netcat and it could probably send quite a lot of things down in flames.

Who needs static analysis :) ?

vinhboy 7 hours ago 5 replies      
At the risk of sounding really stupid. Can someone ELI5 what's going on here and why everyone thinks its so amazing?
Mchl 4 hours ago 0 replies      
I like to imagine that given enough time it eventually generates the Lenna [1] jpeg and exits

[1] : https://en.wikipedia.org/wiki/Lenna

zackmorris 4 hours ago 0 replies      
Potential instructions for trying this on Mac (I was unable to make it work, perhaps we can build upon this):

    curl -LO http://lcamtuf.coredump.cx/afl.tgz
    tar zxvf afl.tgz
    rm afl.tgz
    cd afl*
    make afl-gcc
    make afl-fuzz
    mkdir in_dir
    echo 'hello' >in_dir/hello

    # there is a glitch with the libjpeg-turbo-1.3.1 configure file that makes
    # it difficult to compile on Mac, so I tried regular libjpeg:
    curl -LO http://www.ijg.org/files/jpegsrc.v8c.tar.gz
    tar zxvf jpegsrc.v8c.tar.gz
    cd jpeg-8c/
    CC=../afl-gcc ./configure
    # error: C compiler cannot create executables

    # if the above command worked to build an instrumented djpeg, then this
    # should work
    cd ..
    ./afl-fuzz -i in_dir -o out_dir ./jpeg-8c/djpeg

im2w1l 7 hours ago 2 replies      

>if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;

How impossible would it be to look at the branching instruction, perform a taint analysis on its input, and see if there is any part of the input we can tweak to make it branch/not branch? Like, we jumped because the zero flag was set. And the zero flag was set because these two bytes were equal. Hmm, that byte is hardcoded. This other byte was mov'd here from that memory address. That memory address was set by this call to fread... hey, it came from this byte in the input file.
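A toy sketch of the coverage-guided idea (not afl-fuzz's actual algorithm): when a check compares byte by byte, each matching prefix byte shows up as new coverage, and random mutation plus "keep anything that makes progress" recovers the magic string. A single strcmp branch, as in the quoted code, leaks no such gradient - which is why it stops a fuzzer cold:

```python
import random

SECRET = b"h4ck3d by p1gZ"  # the magic value from the quoted check

def coverage(data):
    # Stand-in for branch coverage: a byte-at-a-time comparison means each
    # matching prefix byte takes a new branch the fuzzer can observe.
    n = 0
    for got, want in zip(data, SECRET):
        if got != want:
            break
        n += 1
    return n

def fuzz(max_iters=500_000, seed=0):
    rng = random.Random(seed)
    best = bytearray(len(SECRET))
    best_cov = coverage(best)
    for _ in range(max_iters):
        cand = bytearray(best)
        cand[rng.randrange(len(cand))] = rng.randrange(256)
        cov = coverage(cand)
        if cov > best_cov:  # keep any mutation that unlocks a new branch
            best, best_cov = cand, cov
            if best_cov == len(SECRET):
                break
    return bytes(best)
```

Against a genuine one-shot strcmp, `coverage` would only ever return 0 or full-match, and the random search degenerates to guessing the whole string at once.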

userbinator 4 hours ago 1 reply      
I remember a very similar technique being used successfully for automatically cracking software (registration keys/keyfiles, serial numbers) before Internet-based validation and stronger crypto became common; the difference is that method didn't require having access to any source code or recompiling the target, as it just traced execution and "evolved" itself toward inputs producing longer and wider (i.e. more locations in the binary) traces.
rainforest 7 hours ago 0 replies      
See also: Microsoft Code Digger [1], which generates inputs using symbolic execution for .net code, and EvoSuite, which uses a genetic algorithm to do the same for Java [2].

[1] : http://blogs.msdn.com/b/nikolait/archive/2013/04/23/introduc...

[2] : http://www.evosuite.org

raisedbyninjas 6 hours ago 3 replies      
I'm not familiar with how the fuzzer was monitoring the executed code path. Would this be thwarted by address space layout randomization?
gear54rus 8 hours ago 0 replies      
I had a brief 'It's alive :O' moment when reading this, imagine seeing face looking at you in one of those pics :)

Nice article, concept of fuzzers was new to me.

bane 7 hours ago 0 replies      
Wow, two awesome ideas in a week. Reminds me of this posted just a couple days ago http://reverseocr.tumblr.com/
ionforce 7 hours ago 1 reply      
Sounds very much like a genetic algorithm/evolutionary computation.
byEngineer 7 hours ago 2 replies      
This is totally amazing! Wondering if it would be possible to go the other way around: from generated JPG to a string. If yes, what a cool way to send your password as a... JPG over email.
vitamen 3 hours ago 0 replies      
The beginning of this article reads eerily similar to the beginning of Greg Egan's Diaspora, though in a much more limited context.
stevebot 6 hours ago 0 replies      
You can throw afl-fuzz at many other types of parsers with similar results: with bash, it will write valid scripts;

^ that seems fun, I just don't think I would run it on my machine for fear of what it might create (oh.. rm -rf * ok!)

1ris 7 hours ago 1 reply      
OT: Is there a simple, little fuzzer that just uses grammars as templates for their outputs?
JonnieCache 6 hours ago 2 replies      
Now to try this with midi...

But what to feed it into? I could make some musical analysis stuff, but do I need to write it in C to avoid accidentally fuzzing my interpreter?

slvn 3 hours ago 0 replies      
This what a hacker be.
AudioKit: Open-source audio synthesis, processing, and analysis platform
66 points by adamnemecek  5 hours ago   4 comments top 3
cannam 3 hours ago 1 reply      
Looks like a very nice project. I've only glanced at the pages so far but it looks like you've really been putting good work into the packaging and documentation, which is refreshing.

It might have been useful to learn a little sooner that this is a Csound wrapper. I can understand your reasons for not advertising it as such, but I think there is a genuine utility for your users in knowing that the library they choose for iOS/OSX development has a structural analogue on any other platform they may ever want to write for.

AceJohnny2 2 hours ago 0 replies      
See also: "Essentia is an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPLv3 license (also available under proprietary license upon request). It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. In addition, Essentia can be complemented with Gaia, a C++ library with python bindings which implement similarity measures and classifications on the results of audio analysis, and generate classification models that Essentia can use to compute high-level description of music (same license terms apply)."


(which I discovered from the game Crypt of the Necrodancer which uses it to do (at least?) beat detection for your custom music)

kastnerkyle 2 hours ago 0 replies      
If anyone is interested in playing with this kind of stuff in Python, I have been doing a lot of work in a simple gist for LPC coding and synthesis, as well as sinewave synthesis. No musical synthesizers yet (focused on voice) but many of the ideas are similar. It might even work on music (I haven't tried).


This looks really cool and clean! Nice documentation. I will definitely look at this if/when I start the next iOS project.

New Uber Funding Round Could Value the Company at $25B
20 points by prostoalex  2 hours ago   17 comments top 4
johnloeber 1 hour ago 2 replies      
It's not easy to correctly value Uber, because it's not so easy to value the taxi industry, either. By some metrics, the global taxi industry is worth no more than $25B.[0] This seems spurious, and Nate Silver, using a different metric, values the taxi industry at about $100B.[1]

So it might seem as if Uber would have to seize a large fraction of the taxi market in order to justify its valuation. Some (myself included) say this is unrealistic, because Uber is unlikely to hold on to a monopoly or majority share of the rideshare market.[2]

The important factor, however, is that Uber is rapidly expanding what previously was the conceived size of this market. People who were not taking taxis previously are now using Ubers. There was an article on here recently about parents using Uber to ferry their children around. Some folks are leaving their cars at home, and are using Uber instead, because it makes sense financially (gas+car maintenance costs, in some cases, are higher than those of taking an Uber).

Beyond this, I expect to eventually see the integration of the rideshare market (Uber, Lyft, etc.) with other services, like delivery and last-mile transportation. This is already being done by Uber Rush. I would not be surprised to see Uber and its competitors greatly expand what we previously thought of as the Taxi industry, such that even a small share of this very large industry could be well worth the $25B valuation.

[0] http://iterativepath.wordpress.com/2014/05/22/if-you-most-ag...

[1] http://fivethirtyeight.com/features/uber-isnt-worth-17-billi...

[2] http://johnloeber.com/w/uber.pdf shameless plug

steven2012 41 minutes ago 1 reply      
Just out of curiosity, how do people who invest at $25B valuation expect to make money? Are they okay with it going from $25B to $50B and getting 2x, or are they also looking for 10x+ return?
swartkrans 26 minutes ago 1 reply      
I've never used uber or lyft, but I used sidecar yesterday for the first time, and given this was my first experience with this kind of service I have to say this is so much better than having to hail a taxi. It sucks that you have to have a smartphone (what do you do when the battery is dead?), but otherwise it's easy to see how transformational this kind of new service is. You can pick your driver, your driver can accept or reject based on your destination, you can see how much they will charge up front, and you can tip right from your phone making the thing entirely cashless. It's pretty amazing.
dana0550 1 hour ago 5 replies      
I'm a heavy Uber user and I even sold my car about a year ago because I no longer needed it. I use it exclusively for my trips around San Francisco and the East Bay. I've noticed recently (past ~1-2 months) that there is always surge pricing. I wonder if this has to do with drivers not being paid enough or if they internally redefined what surge means.

Either way, I like Uber, but I just don't understand how there is always surge pricing. This still has not deterred me from using it, but still I wonder...

Fuzzing TCP Options
5 points by luu  48 minutes ago   discuss
Dart and Google Cloud Platform
48 points by pranavpr  4 hours ago   8 comments top 5
woven 1 hour ago 0 replies      
More specifically, Dart on App Engine via Managed VMs. (Dart was already runnable on Compute Engine instances.)

App Engine is a subset of Cloud Platform services and is a platform-as-a-service (PaaS) offering: auto scaling, a scalable datastore, logging, and other managed services and guarantees that aim to substantially reduce your DevOps.

By contrast, Compute Engine (and its closest competitor, AWS) is more an infrastructure-as-a-service (IaaS) offering: you provision your instances and run anything you'd like on them, and choose from available resources like load balancers and firewalls to meet your needs.

Managed VMs attempt to offer the best of both worlds: the managed scalability of App Engine with the choice and flexibility of Compute Engine. Through a Docker-based container approach, App Engine scalability guarantees are still kept while allowing you to run your own language, like Dart or Node.js, and services.

My Dart app, http://woven.co, has been running on Compute Engine, and I plan to move it to App Engine.

JoshMilo 2 hours ago 0 replies      
This is good news; I'm still waiting for the DartVM to get integrated into Chrome.
tosh 1 hour ago 0 replies      
Looking forward to playing around with this over the weekend. Great to have a low-maintenance way to run a full Dart stack.
kreas 1 hour ago 0 replies      
This is exciting news!
andyl 1 hour ago 3 replies      
"Runs best in Chrome", coming soon to a browser near you.

The spirit of IE lives!


Common C++ Gotchas
44 points by vickychijwani  4 hours ago   20 comments top 6
nly 1 hour ago 1 reply      
#3 should really say "make public base class destructors virtual".

#6 sometimes you actually will want an A(A&) constructor... for example, if you have a constructor overloaded with a greedy template argument.

#8 objects returned by value will also bind to an rvalue reference

#9 Remember in C++11 there's also new T{};

#10 isn't absolutely correct either; dynamic_cast can actually be used in some circumstances where static_cast or implicit casting could have been used.

    struct A {
    };
    struct B : A {
    };
    int main() {
        auto b = new B();
        auto a = dynamic_cast<A*>(b); // fine
    }

Suncho 1 hour ago 0 replies      
#1 should really say, "Avoid new and delete." new and delete are for experts. The average C++ programmer should go nowhere near them. If heap memory is used internally by a class that was written by someone who knew what they were doing, then fine. Use your std::vector. But new and delete are huge red flags in code.
mpyne 59 minutes ago 0 replies      
I would actually revise #1 to be to always use a smart pointer for heap-allocated objects. C++11 is 3 years old now after all.
kazinator 1 hour ago 0 replies      
It's not only C++ that has problems with throws out of destructors or similar logic. How about Common Lisp:


"The ambiguity at issue arises for the case where there are transfers of control from the cleanup clauses of an UNWIND-PROTECT. ..."

It is not clear what exit points are visible when the cleanup-forms (morally similar to C++ destructor calls) are invoked. Are the exit points that are skipped torn down all the way to the control transfer's target, or is the tear-down interleaved with the unwinding?

wfunction 2 hours ago 3 replies      
Why are you using delete so often in C++...? 'delete' should be rare in your code.
santaclaus 3 hours ago 1 reply      
> Option 1 is bad because: (a) it involves an unnecessary copy constructor invocation

Shouldn't rvalue move semantics kick in and prevent the copy?

Extracting timestamp and MAC address from UUIDs
17 points by mooreds  3 hours ago   3 comments top 3
geofft 45 minutes ago 0 replies      
There's a story that the creator of 1999's "Melissa" virus was found via a GUID in the Word document that included their MAC address.

(I'm having trouble confirming the veracity of this, since the web is full of citogenesis that links to http://www.zdnet.com/news/tracking-melissas-alter-egos/10197... which isn't super clear, but it's a good story nonetheless.)

elmin 28 minutes ago 0 replies      
An alternative that was posted here a week or so ago: https://eager.io/blog/how-long-does-an-id-need-to-be/
pbbakkum 58 minutes ago 0 replies      
We encountered some of these same issues and wrote this library to mitigate them: https://github.com/groupon/locality-uuid.java. I think UUIDs make good unique ids overall, particularly in distributed environments where id generation can't be coordinated, but should be used carefully, as the article notes.
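As a concrete illustration of the extraction the article describes, Python's standard library already exposes the relevant fields of a version-1 UUID (the `time` and `node` properties are from the stdlib `uuid` module):

```python
import datetime
import uuid

def decode_uuid1(u):
    """Pull the embedded timestamp and node (MAC) out of a version-1 UUID.
    The 60-bit timestamp counts 100 ns ticks since 1582-10-15 (the start
    of the Gregorian calendar); the 48-bit node field is often the MAC."""
    assert u.version == 1
    gregorian_epoch = datetime.datetime(1582, 10, 15)
    ts = gregorian_epoch + datetime.timedelta(microseconds=u.time // 10)
    mac = ':'.join('%02x' % ((u.node >> shift) & 0xff)
                   for shift in range(40, -1, -8))
    return ts, mac
```

For example, `decode_uuid1(uuid.uuid1())` should return a timestamp close to the current UTC time. (Note that Python may substitute a random multicast node when the real MAC is unavailable.)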
Java Functions: Every Java FunctionalInterface You Want
39 points by BrandonM  4 hours ago   14 comments top 4
norswap 1 hour ago 1 reply      
Does someone know of a good tree-shaking packager for Java? Something that builds a jar (or a directory) with only the needed .class files (or even better, versions of the .class files with the used functions)?

Looks like it would perfectly complement this project.

ZitchDog 4 hours ago 2 replies      
It's so sad that this is necessary. Proper function types would have made lambdas so much more useful.
maaaats 3 hours ago 2 replies      
What is the use for this?
nardi 3 hours ago 2 replies      
Over 16,000 classes. Better increase your PermGen.
Basic colour theory for programmers
49 points by actraub  7 hours ago   17 comments top 6
drcode 3 hours ago 4 replies      
Some other points programmers should be familiar with (not an expert here, everyone feel free to correct me)

1. In typical situations, 256 shades of brightness of a single color hue are the most we can distinguish and will form a smooth gradient without banding. If you're dealing with green and related colors in the RGB spectrum (i.e. yellows and blue-greens, and near-white colors) it can go a bit higher, to 512 shades. This is why many compression algorithms compress reds and blues more than greens, because it's less noticeable.

2. Many artists will use opposite colors on the color wheel to darken colors- So to make a darker yellow, they'll mix in purple paint (or use purple dithering, depending on the medium.) How this relates to color theory from a psychological standpoint is unclear, but using this approach in UI design (for instance, adding purple detail to the shadow of a yellow object in an icon) often gives clean, elegant results.

3. Sometimes, when designing something, you say to yourself, "I want a red here that is just as bright as the blue over there". However, the psychological basis as to when colors of different hues appear to have the same brightness is a very complex problem- You can't just add R+G+B intensities and think that will tell you that two different colors have the same brightness, perceptually. Your best bet is to just "eyeball" colors in this case, unless you have the time to go down some very deep rabbit holes.

4. If you are designing for paper printing, expect all colors in an RGB image to look much, much darker and less vibrant than on a monitor. Going from RGB->CMYK and getting perfect results is another really hard problem out of reach for most programmers (I actually think there's low hanging fruit for people to write libraries that help optimize this conversion- Everything I've personally seen that is designed for a layman, such as Photoshop's conversion process, seems to be sub-par at helping laymen do this well.)

5. There are exotic color spaces like the TSL color space that most programmers are completely unfamiliar with and are designed for better modelling of human perception of color. We programmers should probably be using these color spaces a lot more.

6. A good rule for UI design is to only use two colors at most (besides gray tones) on the screen at a single time and maybe one extra color in a very limited way. If you don't do this, you'll end up with an "angry fruit salad" interface (http://www.urbandictionary.com/define.php?term=angry%20fruit...) Microsoft Metro and some of the new "flat UI" stuff is a major deviation from this rule, for better or worse.

7. When in doubt, leave the background of a website white. You should have a very, very good reason before using a black or dark green/blue/brown background on a website. (or so most UI designers will tell you, though of course you're free to ignore their advice.)
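On point 3, a common first approximation (one step into the rabbit hole, not the bottom of it) is relative luminance: linearize the sRGB channels, then weight them with Rec. 709 coefficients. A sketch:

```python
def relative_luminance(r, g, b):
    """Approximate perceived brightness of an 8-bit sRGB colour:
    linearize each channel, then weight with Rec. 709 coefficients."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return (0.2126 * linearize(r)
            + 0.7152 * linearize(g)
            + 0.0722 * linearize(b))
```

This at least captures that full-intensity green looks far brighter than full-intensity blue; matching perceived lightness across hues is the deeper problem point 3 alludes to, and "eyeballing" remains a reasonable fallback.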

gtaylor 4 hours ago 1 reply      
I took a very heavy emphasis on color math while at school, figured I'd dump a few more useful resources on the pile:

* John the Math Guy - http://johnthemathguy.blogspot.com/ - Breaks down advanced color math/science concepts in very plain English. Complete with helpful charts and graphs. He's very approachable if you have questions. If you are in the print/GC industry, you'll probably run into him at trade shows.

* Bruce Lindbloom - http://www.brucelindbloom.com/ - Look past the tacky website and into the "Math" and "Calc" sub-sections. Tons of great theory, formulas, and calculators. Make sure you double-check the formulas against other sources; I did run into a small discrepancy or two (that I can't recall). I've also been able to get responses from him through email, which is nice.

* EasyRGB - http://www.easyrgb.com/index.php?X=MATH - More transform/delta E functions. Some of these are kind of crude, so I tend to look at Bruce Lindbloom or CIE directly when possible, but EasyRGB can sometimes fill in some gaps.

* python-colormath - http://python-colormath.readthedocs.org/ - And of course, I can't resist shamelessly plugging my Python color math module.

Color math is funky stuff, and at times very hard to get help with. There aren't many easily-reached people out there that have a deep understanding of it. But if you keep hammering your head against it, you'll learn through osmosis over time.

If this is something that interests you enough, RIT and I think Cal Poly both have color science degrees and certificates. Imaging companies eat these graduates up in a hurry.

zefei 3 hours ago 0 replies      
This is an old article, and so wrong and vague in so many aspects of digital colors. It contains not much more than simple device-dependent RGB/HSV colors.

To use colors correctly (not many programmers can do this; in fact Chrome/Firefox/Safari don't show the same color for the same RGB value on my machine), a programmer needs to consider color profiles, gamma correction, white point, etc. The sad truth is that RGB is mostly misunderstood from intuition, and most graphic libraries don't deal with device-dependencies at all. Luckily, most engineers don't have to deal with colors outside sRGB.

The absolute best place to read about digital colors (or image processing in general) is efg's references: http://www.efg2.com/Lab/Library/Color/Science.htm. It contains overwhelmingly large amount of info, but they are good info and quite essential for professional work.

carlsednaoui 2 hours ago 0 replies      

On a related note, I recently helped write a "Color Theory Basics" guide for Thinkful (my current employer). Some of you might enjoy this: http://www.thinkful.com/learn/color-theory-basics/

This guide is more applicable towards branding and marketing.

hcarvalhoalves 3 hours ago 1 reply      
About colour temperature:

Subjectively (or maybe culturally?), red tones are warm and blue tones are cool.

There's another definition of "colour temperature" (derived from a physical property, rated in Kelvin), in which blue-ish tones are warmer than red, white being the hottest. This is based on the light emission of very hot bodies (you can tell the temperature by the color it glows).

This second definition of "colour temperature" might be counter-intuitive but is used to rate light sources and in photography, so you have to know the context the term is used.

AnimalMuppet 2 hours ago 1 reply      
> This is the traditional adding of colours together to produce new colours. (Said of RGB.)

But it's only "traditional" in computer graphics. Paint is subtractive, and so is printing. If your color background is one of these more traditional fields, then additive is most definitely not traditional.

Khan Academy Lite
61 points by aronasorman  6 hours ago   10 comments top 4
jamalex 5 hours ago 0 replies      
Thanks for the very sweet blog post, Kamens!

We would love to answer any questions about KA Lite or Learning Equality, and where we're headed. I'll be on a plane for the next few hours, but others on the team can chime in and I can respond a bit later as well.

Also, note that we're hiring! We're building a scrappy team of passionate, dedicated devs down in San Diego: https://learningequality.org/about/jobs/

AjithAntony 4 hours ago 2 replies      
KA Lite is great. I was working on an educational computer lab in a prison, and it was moderately easy to deploy on our Windows MultiPoint server. Unfortunately we had to delete it because there was too much video content for the prison administration to screen and approve. FWIW, we also had to delete the offline Wikipedia (Kiwix) for similar reasons.
sp332 4 hours ago 1 reply      
You can install KA Lite and other educational packages on a LibraryBox, which is basically a router with a web server on it. http://www.hackersforcharity.org/librarybox/
q4 3 hours ago 0 replies      
Thanks a bunch!
Receiving Dead Satellites with the RTL-SDR
133 points by pavel_lishin  10 hours ago   48 comments top 9
32faction 7 hours ago 3 replies      
hey, I did my senior undergraduate thesis on orbital debris removal and mitigation, specifically on mitigating the effects of the Kessler Syndrome.

..and it's a lot harder than just shooting another satellite up to bring the old one down. you have to match not only the speed of the target satellite, but the altitude and orbit it's in too.

once you've done that, you can't just attach to it, fire off thrusters and deorbit it; Newton's law still applies. You'll bump into it and according to the 3rd law the target will apply an equal and opposite force. you can't shoot it with a harpoon cause that'll cause more debris.

but let's say you do attach to it somehow or utilize something like electrodynamic tethers (which is what the Japanese space agency, JAXA, is using). you can't just throw it back to earth and hope the atmosphere does the rest; you have components that may not completely break up and may rain down upon populated areas (c1).

usually once a target satellite is acquired, it is moved to a graveyard orbit away from operational satellites.

i was going to submit an application to YC to startup a space company dedicated to space debris mitigation but it seemed a bit too complex.

(c1) - This is why most of our spaceflight launch locations (Cape Canaveral, Vandenberg, Wallops) are on the coasts; they fire away from CONUS so that if there is a catastrophic failure, the debris doesn't rain down on your house.

CapitalistCartr 9 hours ago 0 replies      
For those of you interested in orbital debris, there is a newsletter put out by NASA on the subject worth subscribing to. It's a free quarterly PDF.


diydsp 9 hours ago 0 replies      
> An audio example of Transit is over here https://dl.dropboxusercontent.com/u/124465398/Transit5b_5_20... (my recording) it sounds like some kind of melody song.

This is in fact quite incredible to listen to. There seems to be a lot of "character" encoded in it. It goes through long and slow periods, parts with rapid notes, etc. Definitely worth a listen and pondering what kind of data it's "talking" about.

frandroid 8 hours ago 5 replies      
> There are many shutdown Satellites who apparently having a life of their own varying from Military, Navigation, Experimental, Weather, and also Amateur ones.

Wait, there are amateur satellites?

hapless 10 hours ago 2 replies      
Transit 5B-5, one of the two very early satellites mentioned, seems to have carried a very early RTG.

I wonder if it is still running on its nuclear power.

S_A_P 10 hours ago 2 replies      
So I know there are some initiatives and even a Swiss company that wants to clean up our space junk. Has there been anything written (and I'm sure there has; I'm curious to read it) about when we will hit a tipping point where it becomes dangerous to even launch into space?
taylorbuley 7 hours ago 0 replies      
This chip -- designed as a digital TV radio receiver -- is truly fantastic. So is the story behind its instruction set "discovery."
kitd 5 hours ago 0 replies      
Wow! Didn't realise these chips could be used for SDR.
nickthemagicman 8 hours ago 1 reply      
Would it be possible to have a two-way conversation? Get a command line somehow?

That would be so cool.

Also, what are the speeds of these signals? How long would it take to load The Pirate Bay site from a satellite?

Badass potential.

What can I only do in Erlang?
300 points by davidw  16 hours ago   153 comments top 22
rvirding 10 hours ago 1 reply      
Some people have already been mentioning this but I wish to clarify some things.

When Erlang was designed/invented the goal was never to make a new language; the goal was to design a way of building systems with a set of specific characteristics: massive concurrency, fault tolerance, scalability, etc. One part of this was the language, Erlang. But at the same time as we were developing the language we were also looking at how you would use the language and its features to build such systems. The language and the system architecture went hand-in-hand, with support in Erlang for the type of patterns you would need in the system architecture.

These ideas and patterns existed before OTP. So there were Ericsson products built on top of Erlang which used these ideas and patterns before OTP. OTP "just" took these ideas and design patterns and formalised them in a generic, cohesive way. Plus of course a lot of useful libraries.

So we were never really out to design a functional language based on the actor model, we were trying to solve the problem.

(We had actually never heard of the actor model but were told later that Erlang implements it)

radmuzom 15 hours ago 1 reply      
In 2005, I worked in software development for 6 months before abandoning it for another field.

In my new job, I was given the task of writing a server for a messaging application - which would allow users to send a "hand-drawn" message from our own proprietary handheld device to Windows phones. I was told to learn Erlang and get it done - the company was barely 30 people and had no formal training programs. While I have a degree in theoretical computer science, I hadn't done much functional programming before - learnt some Haskell in 2001 in college (just the basics, equivalent of the first 6 chapters of LYAH, no monads). I remember learning Erlang over the weekend and delivering the server in the first week of my job. Obviously, the code was neither great nor scalable - but I write this not to boast but to tell people that Erlang was so beautiful and easy that even an average intelligence person like me could use it to produce functional software in a week. Today, my only regret is that I am not a programmer.

bosky101 6 hours ago 0 replies      
here is a daemon that runs forever:

    foo() ->
        foo().

1. Processes as tail-recursive functions

Its call stack will never grow. You can have multiple of these running and the scheduler will still split work between them. A process is a tail-recursive function whose argument becomes its state. A process ceases to exist when it stops being tail-recursive. Erlang's distributed computing strengths can in some ways be attributed to this tail-recursive approach to network programming and to beginning with concurrency in mind; everything else came as a side effect.

For anyone interested in more examples of expressiveness/composition, this is from my talk at functionalconf: http://www.slideshare.net/bosky101/recursion-and-erlang/43

2. Nodes (or machines) are first-class citizens.

You can decide to replicate data over to them, make the data on one node location-transparent, or abstract it all away in a distributed system.

3. Binary pattern matching

You can pattern match not just [A,B] = [1,2] but also the contents of A and B, or do guards against contents, e.g. if the contents of this binary variable begin with "foo".

4. You never have to import any header or module. The VM uses all modules available in its (ebin) path. Goodbye, big cyclic-dependency headaches.

5. As many schedulers as there are cores in your machine (configurable).

6. Hot code swapping.

Albeit a little contrived for the beginner on production systems.

7. OTP

Comes bundled with the FSM, client-server, etc. skeletons called "OTP", the same framework that runs 40% of the world's telecom traffic.


PS: the root link describes more such features you may find useful
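The binary pattern matching in point 3 is easier to appreciate next to a workalike in another language. Erlang destructures a packet declaratively, e.g. `<<Len:16, Body:Len/binary, Rest/binary>> = Packet`; a rough Python equivalent (a sketch, not from the post) has to spell the same thing out by hand:

```python
import struct

def parse_frame(packet: bytes):
    """Roughly what Erlang's <<Len:16, Body:Len/binary, Rest/binary>> does,
    written out manually: a 16-bit big-endian length, then that many bytes."""
    (length,) = struct.unpack(">H", packet[:2])
    body, rest = packet[2:2 + length], packet[2 + length:]
    if len(body) != length:
        raise ValueError("short packet")
    return body, rest

body, rest = parse_frame(b"\x00\x03fooEXTRA")
print(body, rest)  # b'foo' b'EXTRA'
```

The Erlang version is a single pattern and can also appear in a guard; that is the expressiveness being claimed.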

davidw 16 hours ago 0 replies      
This is a pretty good summary of what makes Erlang unique. There's definitely a learning curve, as he says, but once you start to 'get it', you can make some really solid stuff with it.

I've been having a lot of success with remotely debugging things with observer ( http://www.erlang.org/doc/apps/observer/observer_ug.html ) and Recon ( http://ferd.github.io/recon/ ). I've been using the latter to compile code locally and quickly deploy it to a remote machine (while it's running, of course) to debug problems without going through a whole commit/pull/build cycle.

Some of the other responses in the thread are pretty good too:


willvarfar 15 hours ago 5 replies      
Curious how much people think Go will eat into Erlang? Many of the points made were about Erlang as CSP and that's Go-territory.

Loïc Hoguin (author of Cowboy, among other projects) said:

  For me Erlang is first fault tolerant, then concurrent, then functional
As Go gains libraries like groupcache will it become more and more go-to for networked and shared systems?

Is Go moving up from concurrent to fault-tolerant in a real way?

dpeck 10 hours ago 2 replies      
I'm just starting to do some work on a system that is going to be very distributed with many endpoints, often on unreliable networks connecting back to a server.

Erlang (Elixir, really) looked promising for this, so I've been investigating off and on while we do some prototyping and flesh out some of the details. So far it seems that you get nearly all the advantages of Erlang by running something like RabbitMQ and then writing message consumers in whatever language you desire.

Everywhere I think Erlang would be a good fit, seems that relying on RabbitMQ instead and having less code to maintain in-house makes more sense. I'm still very early in my Erlang journey but so far haven't been able to convince myself to use it directly. I must be missing something.

john2x 13 hours ago 1 reply      
I love Erlang (and its younger sibling Elixir). I love reading about it. I enjoy following tutorials about it. I love its simplicity.

But when I try to come up with scenarios/ideas where Erlang might be a good fit, I realize I'm not smart/motivated enough (yet?) to tackle such problems.

Erlang isn't difficult. It's the problems it was designed to solve that are difficult.

amelius 14 hours ago 11 replies      
Erlang looks interesting. However, the only problem I have with starting in Erlang is that the great computer language shootout shows that Erlang is about 10x slower than C++ on most examples, and about 3x slower than Go. [1] I know that the problems on this website are not specifically "concurrent" problems, but still, even a distributed web server must do some non-concurrent stuff at times :) Are my concerns correct/justified?

[1] http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...

Edit: even Haskell seems to be about 7x faster than Erlang (according to [1]). Suddenly, Haskell looks like a good candidate for writing a distributed server :) Can somebody comment on this?

cowabunga 15 hours ago 9 replies      
Not sure I agree with the conclusions:

1. Binaries: this isn't really an issue. We ship the JVM with anything that we do in Java. Unzip, run, done. Go is probably ideal there but it doesn't support embedded resources without some hideous hacks so you're still going to be deploying more than just a single binary in a lot of cases. CLR is pretty good at this as well.

2. Sockets: 0MQ/WCF/JMS/any stream abstraction wired up correctly.

3. ASN.1: Everything has ASN.1 libraries these days. I've never had to use one in the real world in any of the sectors I've worked in.

4. Let it crash: we do this. In fact we force threads to crash if they do anything bad by throwing an exception that isn't caught. All threads are designed to be idempotent, restartable and transaction aware. This is true in JVM/CLR at least.

5. Supervision: New thread, new state, no shared state. Not hard. PHP does this...

6. Inspection: I've seen the Erlang toolchain and it's pretty good but, no joke, inspecting and debugging the state of a system is better when there is lots of pooled knowledge (Google) and lots of diverse tools. JVM wins there every time.

7. Messaging is not RPC: it's not in JMS or WCF either. Both abstract the transport away. In fact we have in-proc transports in some cases that use concurrent collections to throw messages around.

8. Dialyzer: compiler, static typing (Java is, to be honest, less strong here than, say, C#).

I really like the idea of Erlang but the worse is better mantra applies here. It's easier to get staff, the knowledge when something goes pop is available, we benefit from the reuse of billions of lines of code written and tested by others if we pick a JVM. If we have to think a bit or throw some more kit at it, so be it.

Edit: just to add, I'm not knocking Erlang and quite like it in principle (I spent 2 months disassembling CouchDB to see if we could maintain it so it's not a foreign language to me), but the justifications in the reply aren't particularly unique ones nor are they major advantages.
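The "let it crash" and supervision points in the list above are indeed small to sketch in any language; what Erlang adds is making them the default. A hypothetical minimal supervisor (all names mine, purely illustrative) showing the pattern: crash the worker, throw away its state, start a fresh one:

```python
import traceback

def supervise(make_worker, max_restarts=3):
    """Run a worker; on any crash, discard its state and start a fresh one.
    make_worker() returns a zero-argument callable that owns all its state."""
    restarts = 0
    while True:
        worker = make_worker()  # fresh state every time, nothing shared
        try:
            return worker()
        except Exception:
            traceback.print_exc()
            restarts += 1
            if restarts > max_restarts:
                raise  # escalate, like an Erlang supervisor giving up

attempts = []
def flaky():
    def run():
        attempts.append(1)
        if len(attempts) < 3:
            raise RuntimeError("transient failure")
        return "done"
    return run

print(supervise(flaky))  # crashes twice, succeeds on the third fresh worker
```

The interesting part of OTP is not that this loop is hard to write, but that supervision trees compose: supervisors supervise supervisors, with restart strategies as declarative configuration.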

querulous 8 hours ago 0 replies      
erlang's whole design is what OO promised but failed to deliver. it's completely trivial and natural to represent programs as a collection of loosely coupled state machines that communicate asynchronously. it turns out this is a fantastic model for anything network related but also for programming in general
StavrosK 16 hours ago 3 replies      
Only semi on-topic: Anyone hiring experienced remote Erlang developers? 30 years of experience or so.
mahmud 14 hours ago 3 replies      
The OP's work environment is exceptional in many ways. It's a quintessential hacker workplace. Unbelievable flexibility. I am sure a place like that attracts people who can turn any technical edge, however minuscule, into a real felt advantage. I want to learn about their management, hiring and culture-building practices more than their actual hacking triumphs.
ColinWright 13 hours ago 3 replies      
Added in edit: Seeing that this is getting down-votes suggests either that I've not been sufficiently clear in the point I'm trying to make, or that you think I've phrased it in an unacceptable way. Either way, sorry you don't think this fits here, but I'll leave it for others to down-vote, correct, reply to, or support, as they see fit.

Another edit: Getting more down-votes, but not learning anything. Do you think this is the wrong place to say this? Or do you think I'm just wrong to worry about this level of precision in "normal" speech? Reply! Engage! Tell me why I'm wrong.


I'm going to be "that guy" ... warning: Rant ahead.

The wording of this question is like sandpaper on my brain - for me, the "only" is misplaced. Assuming he is asking:

    What is there that I can do in Erlang,    but cannot do (or is significantly more    difficult to do) in other languages?
With that reading, I feel the question should be:

    What can I do only in Erlang?
The question as phrased, to me, admits the alternate interpretation of:

    What does Erlang force me to do?
English, and natural languages in general, are weird things. To quote the great Terry Pratchett:

    It's very hard to talk quantum using a language    originally designed to tell other monkeys where    the ripe fruit is. -- (Night Watch, 2002)
I'm sure I've seen it before, but Google searches are just turning up places where I've re-quoted it:

    When using plain language it is difficult to construct a sentence that a determined adversary cannot misconstrue.
You may argue that in this case it doesn't matter, but I would suggest that practising precision in language is important, and helps to avoid sloppy thinking.

This comment is intended to be constructive, although I admit freely that it is off-topic for the submission. Even so, I think it's useful to think about these things. I wonder if Erlang makes it easier to be truly precise about something.

TheMagicHorsey 8 hours ago 0 replies      
Someone correct me if I'm wrong, but it seems to me the main advantage of Erlang over something like Go, is that the logic for your entire cluster is contained in Erlang itself, as opposed to in some external scripts or configuration. What I mean specifically is that it seems like you don't need anything like Mesos or Kubernetes to monitor and relaunch processes across the cluster. OTP within Erlang does that for you.

Now whether this is worth the tradeoff of switching to a new language and framework (OTP is a framework even if it is lightweight to use) depends on the team I suppose.

This is just based on my reading of the docs. I hope someone more experienced will correct me if my take is wrong here.

halayli 6 hours ago 0 replies      
For those interested in scalable and fast socket handling in C/C++ using a threaded-like approach, take a look at lthread & lthread_cpp bindings.



philip1209 7 hours ago 1 reply      
Technically, among Turing-complete languages, there is nothing that one language can do that another is incapable of, correct?
davexunit 8 hours ago 0 replies      
Erlang aside, I was pleasantly surprised to see that this person uses Guile Scheme for internal tools at their workplace. That's awesome!
hiring_sucks 8 hours ago 0 replies      
How does Akka (Scala/Java framework) compare to Erlang? It seems to deliver on the same promises that Erlang does.
mtdewcmu 11 hours ago 1 reply      
I got the sense that Erlang overlaps somewhat with node.js in terms of what one might use it for. Could anyone familiar with both compare them?
koloron 13 hours ago 1 reply      
How does Haskell compare to Erlang regarding these features?
bachback 15 hours ago 3 replies      
ZeroMQ gives you the same but better: messaging in any language. That means you can interface to any component you want written in any language you want. Also hiding native sockets, which means networking is not 1:1 but based on scalability patterns. At some point people will realize what a big deal that is.
Why do we need modules at all? (2011)
142 points by thomas11  11 hours ago   66 comments top 24
derefr 10 hours ago 9 replies      
A function's true name should be its content hash. (Where that content hash is calculated after canonicalizing all the call-sites in the function into content hash refs themselves.) This way:

- functions are versioned by name

- a function will "pull in" its dependencies, transitively, at compile time; a function will never change behaviour just because a dependency has a new version available

- the global database can store all functions ever created, without worrying about anyone stepping on anyone else's toes

- magical zero-install (runtime reference of a function hash that doesn't exist -> the process blocks while it gets downloaded from the database.) This is safe: presuming a currently-accepted cryptographic hash, if you ask for a function with hash X, you'll be running known code.

- you can still build "curation" schemes on top of this, with author versioning, using basically Freenet's Signed Subspace Key approach (sort of equivalent to a checkout of a git repo). The module author publishes a signed function which returns a function when passed an identifier (this is your "module"). Later, they publish a new function that maps identifiers to other functions. The whole stdlib could live in the DB and be dereferenced into cache on first run from a burned-in module-function ref.

- function unloading can be done automatically when nothing has called into (or is running in the context of) a function for a while. Basically, garbage collection.

- you can still do late binding if you want. In Erlang, "remote" (fully-qualified) calls don't usually mean to switch semantics on version change; they just get conflated with fully-qualified self-calls, which are explicitly for that. In a flat function namespace, you'd probably have to make late-binding explicit for the compiler, since it would never be assumed otherwise. E.g. you'd call apply() with a function identifier, which would kick in the function metadata resolution mechanism (now normally just part of the linker) at runtime.

Plug: I am already working on a BEAM-compatible VM with exactly these semantics. (Also: 1. a container-like concept of security domains, allowing for multiple "virtual nodes" to share the same VM schedulers while keeping isolated heaps, atom tables, etc. [E.g. you set up a container for a given user's web requests to run under; if they crash the VM, no problem, it was just their virtual VM.] 2. Some logic with code signing such that calling a function written by X, where you haven't explicitly trusted X, sets up a domain for X and runs it in there. 3. Some PNaCl-like tricks where object files are simply code-signed binary ASTs, and final compilation happens at load-time. But the cached compiled artifact can sit in the global database and can be checked by the compiler, and reused, as an optimization of actually doing compilation. Etc.) If you want to know more, please send me an email (levi@leviaul.com).
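The "true name" scheme described above is small enough to prototype. A hedged sketch (the helpers `register` and `canonical_hash` are invented for illustration, not from any real system): substitute each call-site with the callee's hash before hashing, so a function's name pins its entire transitive dependency tree:

```python
import hashlib, re

db = {}  # hash -> (source, {local_callee_name: callee_hash})

def canonical_hash(source: str, deps: dict) -> str:
    # Replace each call-site name with the callee's hash before hashing, so
    # changing any transitive dependency changes this function's name too.
    canon = source
    for name, h in sorted(deps.items()):
        canon = re.sub(rf"\b{re.escape(name)}\b", h, canon)
    return hashlib.sha256(canon.encode()).hexdigest()

def register(source: str, deps: dict = {}) -> str:
    h = canonical_hash(source, deps)
    db[h] = (source, deps)
    return h

double = register("f(X) -> 2 * X.")
quad_v1 = register("g(X) -> double(double(X)).", {"double": double})

# "Updating" the dependency yields a *new* name; quad_v1 still pins the old one.
double_v2 = register("f(X) -> X + X.")
quad_v2 = register("g(X) -> double(double(X)).", {"double": double_v2})
print(quad_v1 != quad_v2)  # True: callers never change behaviour silently
```

This is the property behind "a function will never change behaviour just because a dependency has a new version available": the version pin is the name itself.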

jbert 10 hours ago 1 reply      
So, immutability and/or api contract is important here.

If I'm pulling in a function, I want it to do what I think I want. Sometimes I want that to change (get a bug fix), but sometimes I don't (someone introduces a bug, or makes the func more general and introduces slowdowns).

This feels like a job for a content-addressable git-like tool. How about this:

I can discover my function (via whatever means). The function is actually named 8804ea505fda087da53b799434c377f015933707 (the sha-something of its (normalised?) textual representation).

I then import it into my codebase as "useful_fun". My code reads like:

    useful_fun("do it", "to it")
but I have some kind of dependencies/import record which says that "useful_fun" is actually 8804ea505fda087da53b799434c377f015933707. That means one and only one thing across all time, the func with that hash.

So how do we handle updates? If we want a golang-like model, the developer could run something like "update deps". This would:

- go back to the central repository, looking for updates to 8804ea505fda087da53b799434c377f015933707. It might find 5. Local policy then determines what happens. Could be "always choose the original authors update" or "choose the one with the most votes" or "always ask the dev, showing diffs".

Note that because the unique name is based on the function content, any change to it creates a new item in the db. (Content-addressability, same way git and other systems do it.)

- stuff can be grouped and batched. If I pull in 10 functions tagged with the same project ('module') and they've all been updated, I can say "and do the same with all the others".

- This kind of metadata allows all kinds of good stuff. I can subscribe to alerts on the functions I've imported and get told about new versions, or security warnings. This kind of subscription information can be used as a popularity contest to solve the "which fork on github do I want to use" problem.

- people can still publish modules. They now look like a git directory or tree. A git tree is a blob which contains the hashes of the files within it. A 'module' could be a blob which specifies which (immutable) functions are in it.

If we use normalised functions, we've now got a module representation which allows arbitrary functions to be pulled together. At fetch time, we can denormalise into the user's preferred coding style. At push time, we renormalise. We aren't grouping stuff into files, so a 'project' or a 'module' consists solely of the semantic contents, nothing to do with artificial grouping for the file system.

Seems like an interesting future.
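The import-record half of this is also small to sketch (all names here are hypothetical, for illustration): a lockfile-style mapping from local aliases to content hashes, with every fetch verified against its hash, git-style:

```python
import hashlib

registry = {}  # simulated central repository: hash -> source

def publish(source: str) -> str:
    """Store a function under its content address (sha1, like git blobs)."""
    h = hashlib.sha1(source.encode()).hexdigest()
    registry[h] = source
    return h

def fetch(deps: dict, local_name: str) -> str:
    """Resolve a local alias via the deps record and verify content integrity."""
    h = deps[local_name]
    source = registry[h]
    if hashlib.sha1(source.encode()).hexdigest() != h:
        raise ValueError(f"{local_name}: content does not match its hash")
    return source

h = publish("useful_fun(A, B) -> A ++ B.")
deps = {"useful_fun": h}  # the project's dependency record
print(fetch(deps, "useful_fun"))  # one and only one meaning, across all time
```

"update deps" then becomes a policy decision about which new hash to write into the record; the code that calls `useful_fun` never changes.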

thomas11 10 hours ago 0 replies      
Armstrong's proposal reminds me a bit of Emacs extensions. Since Emacs Lisp doesn't have namespaces or modules, all functions must be uniquely named, which is done by prefixing them: foo-replace. This is not that different from having a module foo, as Armstrong notes: "managing a namespace with names like foo.bar.baz.z is just as complex as managing a namespace with names like foo_bar_baz_z".

But what it enabled is an Emacs community where single functions are freely shared, for example on http://www.emacswiki.org/emacs/. People just copy them into their Emacs init file. Sometimes they modify them a little and post them again with their own prefix. This has obvious downsides such as lack of versioning and organization. But it provides a low barrier to entry and creates a dynamic community.

inflagranti 10 hours ago 0 replies      
To me this is the same question as whether we need directories in a file system. Ideally, your file system is a flat database and files are indexed by a vast array of automatic and manually added metadata that allows them to be easily retrieved. Microsoft tried to go in this direction with WinFS, which was eventually cut from Vista, maybe because it wasn't practical (yet). Looking at how people use the Internet, though, where 90% of browsing starts at Google, this does seem a very reasonable approach for many things in the future. In the end, why should humans do manual indexing and retrieval if the computer can facilitate this part?
zo1 10 hours ago 1 reply      
I don't know Erlang, so I might be missing something key here.

"I am thinking more and more that it would be nice to have all functions in a key_value database with unique names."

Yeah, sure... Sounds good, right. Until you have naming conflicts.

So then the patch is "oh, let's just add another column to make it more unique", without realizing that you've just, in essence, created a "module" of sorts except it's stored in some sort of giant key/value database.

And then you've come full-circle back to the dilemma the author complains of which is that he doesn't know where to put a function that seems to belong in two modules.

Eventually, I'd say this is a general failing of modules that could potentially solved by some sort of inheritance. Maybe even a tagging mechanism if you really want to be "patch-work joe" about it.

Verdex 5 hours ago 1 reply      
I saw Joe's strange loop talk [1] a while ago and I get the same vibe reading his post as I did when watching the video. It sounds very cool, but I can't shake the feeling that it only works for 85% of the code. That is to say if you program in exactly the right way, you will be able to do everything you want and it will work with this system, but there are ways of programming that won't work with this system.

More specifically I feel like there are two problems. 1) It feels suspiciously like there's a combination of halting problem and diagonalisation that shows there are an uncountably infinite number of functions that we want to write that can't be named (although I would want to have a better idea of how this is supposed to work before I try to hammer out a proof). 2) I don't understand how it's possible for any hashing scheme to encode necessary properties of a function such that the function with necessary properties has a different hash than an otherwise identical function without these properties. For example, can we hash these functions such that stable sort looks different than unstable sort? Wouldn't we need dependent typing to encode all required properties? And if that's the case couldn't I pull a Gödel and show that there's always one more property not encodable in your system?

[1] - https://www.youtube.com/watch?v=lKXe3HUG2l4 [2]

[2] - https://news.ycombinator.com/item?id=8572920 (thanks for the link)

brianshaler 2 hours ago 0 replies      
Is the author's use of the term `module` specific to erlang? To me, it sounds like he's advocating for modules that are comprised of a single function, rather than utility belt modules that contain many functions. As I understand it, I agree with what the author proposes, and I feel like a subset of npm already provides what he's talking about. The best example is probably underscore.js versus lodash.js, which both have many functions and a wide API surface area. What's notable is that you can cherry-pick individual lodash functions and depend on a specific version[0]. (Admittedly, I lazily pull in the full lodash module instead of importing only the function(s) I'm using)

Lately, I've been moving more toward the proposed design in my Node.js projects. It keeps individual files concise, makes code sharing trivial, encourages stateless methods, and it makes writing tests a breeze.

[0] https://www.npmjs.org/browse/keyword/lodash-modularized

shaurz 10 hours ago 0 replies      
I quite like the idea. I think it would probably still make sense to have "collections" where a bunch of related functions can be grouped together, discovered and worked on as a unit (this would just be an optional extra layer on top of the global function database). Although there would be no exclusivity in collections, so a function might appear in more than one, or zero, collections.

Another idea: Unit tests could be stored as function metadata.

cwmma 10 hours ago 2 replies      
JavaScript works similarly to this, and apps/libraries that wrap themselves in a giant closure work almost exactly like this. The disadvantage of this over using modules shows up in dependencies between functions. When you don't have modules and you try to refactor, you get this annoying tendency for function a in file b to break when you change function y in file z. When you have modules you can easily tell, before changing function a, whether it is exported, and if it is, see in file z whether file a is imported.

Not saying this Erlang idea isn't good or won't work, just that these are the pitfalls besides the obvious namespacing and conflicts.

andrewstuart2 6 hours ago 1 reply      
Because humans suck at serialized content.

7 ± 2. [1] That's the number of things our prefrontal cortex/short-term memory can track at once. That's why we (humans) organize things into hierarchies. That's why the best team size is around that number. Et cetera.

Heck, everything in the world on a computer is serialized into memory or onto disk. Or addressed as some disk in a serial array of disks. Serialized as in, "there's some data somewhere in these 2TB that tell me where in the same 2TB the rest of the data is." Computers excel at this. Humans are terrible at this.

I guess my point is, humans are the reason we need modules.

[1] http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_...

ryanisnan 5 hours ago 0 replies      
While you're talking about Erlang specifically, the concepts you bring up can be applied to programming in general.

Why does Erlang (or any other language) have modules?

The biggest reason for me (and I think the one with the most merit) is for clarity and usability.

Modules exist as ways of grouping units of code by the responsibilities of that code. If you removed this hierarchy, wouldn't things become a lot more difficult to navigate and understand as a developer?

hyp0 1 hour ago 0 replies      
reminds me of gmail: instead of hierarchical directories ("modules"), just search, and have multiple tags, so an email can be in more than one directory ("metadata").

Seems especially applicable to fp (like erlang), where code reuse is more often of small functions.

felixgallo 6 hours ago 0 replies      
I think a lot of people are focusing on the implementation details here, which is fun and great, but the real deep insight here is the idea of a global registry of correct functions.

If you postulate for a minute that the (truly nontrivial) surface problems are all solved, and concentrate only on the idea of a universally accessible group of functions that accretes value over time -- like a stdlib that every language on every runtime could access -- that seems like a pretty exciting idea worth thinking about.

I had something like that idea almost two decades ago (http://www.gossamer-threads.com/lists/perl/porters/26139?do=...) but at the time it was all in fun. But these days, that sort of thing starts looking pretty possible, especially for the group of pure functions.
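A minimal sketch of the content-addressed idea in Python (the registry, the bytecode-hash keying scheme, and the `double` function are all invented for illustration; a real system like this would hash a canonical AST, not CPython bytecode):

```python
import hashlib

# A toy content-addressed registry: every function is keyed by a hash of
# its compiled bytecode rather than by a name, so names become mere
# metadata and identical definitions collapse into one canonical entry.
REGISTRY = {}

def register(fn):
    key = hashlib.sha256(fn.__code__.co_code).hexdigest()
    REGISTRY[key] = fn
    return key

def lookup(key):
    return REGISTRY[key]

def double(x):
    return x * 2

key = register(double)
assert lookup(key)(21) == 42
```

Registering the same definition twice yields the same key, which is the property that would let a global store accrete functions without name collisions.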

philbo 7 hours ago 0 replies      
To answer the question in the title directly, I think modules are to aid reading and discovery.

The fact that it is difficult to decide which module a function belongs in doesn't make them pointless. People who have to read or debug your code use them to quickly zero in on areas of likely interest.

protomyth 5 hours ago 0 replies      
Lambda the Ultimate's discussion http://lambda-the-ultimate.org/node/5079 is pretty interesting.
al2o3cr 10 hours ago 0 replies      
In my experience, telling programmers "all functions must have unique names" means you get a half-ass module system tacked on via common prefixes. In other words, you get "foo_bar_function1", "foo_bar_function2" etc.
rymohr 7 hours ago 0 replies      
The problem with this approach is you need to consider every existing function name in order to define a new one.

The beauty of commonjs modules is they allow you to focus on implementation, rather than identification. All functions can be anonymous, identified only by their path and named at the whims of the caller.

tel 7 hours ago 0 replies      
The problem is now you either have zero data abstraction or uncontrolled data abstraction without even a convention like "these functions work together as a bundle" to save you.

That said, a nice SML module probably could work as the base abstraction here.

the_cat_kittles 5 hours ago 0 replies      
this talk about modules as a way to organize similar code makes me wonder- if you had all the functions in a global namespace, you could probably automatically generate some kind of organization by extracting relevant features from each function and doing some kind of clustering. maybe some features could be the function's dependencies, who depends on it, what it returns, its signature, and maybe even nlp in the hope that people are actually using descriptive variable names.
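A rough sketch of what that clustering might look like, assuming a made-up call graph and a simple Jaccard similarity over each function's dependencies (the function names and threshold are invented for illustration):

```python
from itertools import combinations

# Hypothetical flat namespace: each function mapped to the set of
# functions it calls. In practice these features would be extracted
# from real source code.
calls = {
    "parse_header":  {"read_bytes"},
    "parse_body":    {"read_bytes", "decode_utf8"},
    "render_header": {"write_bytes"},
    "render_body":   {"write_bytes", "encode_utf8"},
}

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Greedy single-link clustering: merge any two functions whose
# dependency sets overlap enough.
clusters = [{name} for name in calls]
for f, g in combinations(calls, 2):
    if jaccard(calls[f], calls[g]) >= 0.5:
        cf = next(c for c in clusters if f in c)
        cg = next(c for c in clusters if g in c)
        if cf is not cg:
            cf |= cg
            clusters.remove(cg)
```

With this toy data the two `parse_*` functions and the two `render_*` functions each end up grouped together, which is roughly the "module" structure a human would have drawn by hand.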
Alex3917 10 hours ago 1 reply      
This is basically what Urbit is doing, among other things.
endergen 9 hours ago 0 replies      
Related to this would be all the cool content-addressable third-party metadata. Services could automatically generate pre-compiles of things or alternate optimizations. Or autocomplete data, or statistics, test suites, behavioral diffing, example code, documentation; the options are endless.
moron4hire 10 hours ago 0 replies      
I think what you're discussing is really just namespacing ala C++, Java, or .NET. Especially with Java and .NET, you don't import a self-contained module directly from individual source files. The modules are technically all accessible at any time (or at least, the ones linked in to the build, which in the case of the Java and .NET standard libraries is quite a smorgasbord). You just reference the class/function you want in some way: either with using statements or with fully qualified names.

Because, really, if you start throwing everything into one store, you're going to run into the naming conflict issue, and any attempt at addressing the naming conflict issue is going to either look like importing modules or look like namespaces. You either have to explicitly state what your program has access to, or you explicitly state what function you mean when you have access to everything. Realistically, if you give every function a unique name and don't use namespaces, then there will start to be functions called system_event_fire() and game_gun_fire() and disasters_house_fire() and you're right back to having namespaces, just not in name or with a syntax that makes things nice when you know you're dealing with specific things.

Though, it'd be nice if types weren't the only thing that could be placed into a namespace directly in .NET. I'd like to put free functions in there. The Math class in the System namespace only exists because of this. I'd have preferred there to be a System.Math namespace with Cosine and Sine as members of it. Then I could "using System.Math;" and call "Cos(angle)". Instead, I'm stuck in a limbo of half-qualified names.

And I like it. I like it a lot more than Python, Racket, Node.js, etc. and having to import this Thing X from that Module Y. I like the idea that linking modules together is defined at the build level, not at the individual source file level. These languages are supposed to be better for exploratory programming than Java and C#, but actually, you know, doing the exploring part is harder!

Sometimes, I really do just want to blap out the fully qualified name of a function, in place in my code. System.Web.HttpContext.Current.User. If I'm doing something like that, it's a hack, and I know it's a hack, and having the fully qualified name in there, uglying it up, makes clearer that it's a hack. Though, I suppose I'm one of the rare people who actually do go back and clean up my hacks.

EDIT: I thought I wrote more, weird.

The network-accessible database of every library, ever, is definitely a great idea. I think it's where we're heading, with tools like NPM, NuGet, etc. It seems like a natural progression to move the package manager into the compiler (or linker, rather, but that's in the compiler in most languages now). Add in support in an editor to have code completion lists include a search of the package repository and you're there.

tracker1 8 hours ago 0 replies      
dibs on create_uuid_v4!!
fat0wl 10 hours ago 0 replies      
isn't this issue sort of analogous to the expansion/contraction of a language core?

Except in this case the core is user-generated and ever-expanding.

I bet there are a lot of issues in Java history that could predict possible bumps in the road for such a system (since it was essentially concurrently designed by a bunch of actors -- except in that case they were corporate entities)

BlinkOn3: State of Google Blink
22 points by cpeterso  4 hours ago   5 comments top 3
xcyu 2 hours ago 0 replies      
"Blink is the rendering engine used by Chromium."
http://www.chromium.org/blink
serve_yay 2 hours ago 1 reply      

These slides aren't so useful without someone talking about what they mean.

bsimpson 1 hour ago 1 reply      
Anyone know what the Page Transition API is?
Website Streams Camera Footage from Users Who Didn't Change Their Password
29 points by BrandonMarc  7 hours ago   4 comments top 3
com2kid 1 hour ago 1 reply      
This has been around for nearly a decade. Many years ago I remember seeing fancy Google queries that popped up a list of these cameras.

The fascination now is about the same as it was then. Mostly driveways.

gojomo 1 hour ago 0 replies      
I wonder if "Hacktivist's Advocate" defense-attorney Jay Leiderman would agree this is "a stunningly clear violation of the Computer Fraud and Abuse Act (CFAA)", as he is quoted in this article, if he were defending someone accused of the practice.
The Dark Market for Personal Data
33 points by prostoalex  4 hours ago   8 comments top 3
otoburb 4 hours ago 3 replies      
>These problems can't be solved with existing law. The Federal Trade Commission has strained to understand personal data markets, a $156-billion-a-year industry [...]

It kills me that this is a $156B/yr industry, yet asymptotically zero of the money paid for various lists are going to the actual individuals whose information is being bought, whether mundane emails or more detailed profile attributes.

The adage "if I had a penny for every time ..." would seem to align various parties' incentives. Perhaps somebody should release a product based on the underlying premise that some emails tied to some shadow of your identity will inevitably be sold, so might as well set a floor price of "a couple of cents" as per the article and watch in bittersweet resignation as the pennies flow in.

lovemenot 2 hours ago 0 replies      
As unpleasant as this industry is to my own sensitivities, I think it will continue to thrive until it eventually burns itself out. This will happen as a result of poor data quality and competition driving down prices in the race to the bottom. Deliberate acts of mass data poisoning could hasten that day.
srcmap 3 hours ago 0 replies      
How do these prices compare to what one gets from Facebook?


Codecademy launches Reskill USA
20 points by hirokitakeuchi  3 hours ago   7 comments top 6
fourstar 2 hours ago 1 reply      
> ReskillUSA will close the gap between technical education and employment.

No it won't.

> By connecting students with accessible vocational education programs and employers eager to hire from them, we're training more Americans for the jobs of today.

Yeah because the "jobs of today" all require you to know:

HTML and CSS
Javascript
jQuery
Github

chrismealy 1 hour ago 0 replies      
There once was an island with a population of 100 dogs. Every day a plane flew overhead and dropped 95 bones onto the island. It was a dog paradise, except for the fact that every day 5 dogs went hungry. Hearing about the problem, a group of social scientists was sent to assess the situation and recommend remedies.

The social scientists ran a series of regressions and determined that bonelessness in the dog population was associated with lower levels of bone-seeking effort and that boneless dogs also lacked important skills in fighting for bones. As a remedy for the problem, some of the social scientists proposed that boneless dogs needed a good kick in the side, while others proposed that boneless dogs be provided special training in bone-fighting skills.

A bitter controversy ensued over which of these two strategies ought to be pursued. Over time, both strategies were tried, and both reported limited success in helping individual dogs overcome their bonelessness -- but despite this success, the bonelessness problem on the island never lessened in the aggregate. Every day, there were still five dogs who went hungry.

-- http://www.philipharvey.info/directjob.pdf

brandonmenc 29 minutes ago 0 replies      
> 3 months: How long it takes a dedicated beginner to learn the skills to qualify for computing and web development jobs.

I'm genuinely curious as to how many people here, who either program or hire programmers, will say that the above is a true statement.

mtmail 1 hour ago 0 replies      
Are they targeting outplacement companies?

Microsoft Certification centers used to target job centers. I've been to a Microsoft training and couldn't believe somebody without an understanding of MS Windows would want to become a certified Windows NT server administrator. Simple answer: "the job center is paying for the training." Well, in this case it was for soldiers to find jobs in the open market after their 12-month service ended.

moeedm 1 hour ago 0 replies      
Unfortunate domain name.
puredemo 1 hour ago 0 replies      
$12k seems a bit steep.
How Apple Pay Saved My NFC Startup
32 points by grundyoso  6 hours ago   5 comments top 5
smoyer 1 hour ago 0 replies      
Several times I've been too far ahead of the market and even in a reasonably small niche, it's impossible for a smallish start-up to move a B2B market towards a new solution or technology. Except for my last attempt, I've been lucky enough to have other products to sell.

What's most interesting to me is that he had both the foresight AND tenacity to wait for the market to come to him. Of course, there's some luck involved too ... what if Apple had waited even another 12 months? Would we be hearing the same story?

Anyway ... thanks for sharing a very interesting story.

ludicast 21 minutes ago 0 replies      
This is a great story. I actually was trying to remember flomio's name the other day (wanted to use their technology for something).

Really perfect timing for their technology, kudos to their persistence.

habosa 55 minutes ago 0 replies      
That's real perseverance, way to hold out. In 2012 I also thought NFC was just about the coolest thing ever when I got my Galaxy Nexus. I started working at night with a friend on an NFC rewards system for Android and was really passionate about it. I was sure that the upcoming iPhone 5 would definitely have NFC support. Why would it not?

Anyway when it came out and had no NFC support I just stopped working on my side project. I know this is not at all on the scale of your story (mine was a few hours a week, this was your life) but I can relate.

Anyway I'm glad Apple went ahead and "invented" NFC again, because the tech is awesome and I'd like to see it wide spread.

dangayle 3 hours ago 0 replies      
Best line in that story: "That's why it's essential to have founders that are truly passionate about the problem you're trying to solve and not just ambitious about its success."
jolohaga 25 minutes ago 0 replies      
Sometimes, it's just time to go home
529 points by johns  1 day ago   149 comments top 48
yason 19 hours ago 3 replies      
Life is now.

If you're doing something for money so that you can live your life the way you want later, that doesn't work, because it takes time to learn to live your life the way you want. Typically it takes a lifetime to do that, so you must start early: you must start now if you wish to make it. Otherwise you'll end up with a lot of cash and no idea of what kind of life makes you fulfilled.

If you're doing something to make a difference or to make the world a better place, then if you feel like you're running out of time, you're probably doing it wrong. Those sorts of goals are rarely, if ever, undertakings constrained by time. Paradoxically, life is short yet there's always plenty of time, so there's no need to hurry things. If you work madly for a few years, what is it you think you'll win compared to doing that thing you can't not do for decades? Or maybe you weren't doing a thing you can't not do in the first place?

Rush is a sign that you have grown an inflated sense of importance. Nothing is that important, not even the important things.

If you're building a company, you have all the time to grow the business. Some families have been doing that for centuries, many individuals for decades. If you have a really great idea, take the time to work on it. If you're lucky, someone else does the same thing and finishes faster than you, which means you don't have to do it at all! You can just enjoy the benefits of the addition of widget X or feature Y to the pool of things tangible in our lives. Maybe the idea wasn't that unique after all. But some ideas are, and those you can work on for years. For many things in life, building them slower makes them last longer. Even a good relationship or a family tree.

For these reasons, I consider it a better approach to weave the things you can't not do and your life together. This removes a divide: life goes on at its own pace and along comes your calling, but they're not in conflict. There's no contrast between work and family, or business and private life, because everything is scattered around the timeline of your life in small pieces. There's always some work, but there's always home and family too. Neither is more important than the other, and none of the areas can continuously hog a big chunk of your focus.

"Daddy must work now, daddy has to write an important email because those people need daddy's opinion."

"I can't have the meeting today, I need to cook dinner and water the garden with my daughter."

famousactress 22 hours ago 4 replies      
The saddest thing about this post to me is the fact that he could only write it because his startup is apparently doing great.

Most startups aren't, despite the fact that lots of the folks there are in effectively the same personal position and feeling the exact same thing - in addition to the added pressure of not feeling like they measure up to stories like this one because their quarter didn't go nearly as well. I really feel for those people, and I wish they felt more freedom to be honest and public about these kinds of feelings.

Rather tangentially - I have a one-and-a-half-year-old daughter who recently turned a question we posed her ("Woah, kiddo. Are you freaking out!?") on its head. Now sometimes when she gets wound up she goes full-meta and runs back and forth in the kitchen waving her hands in the air yelling "freaking out!", in self-parody.

That reminds me, a little bit, of our startup-culture's relationship to overwork... and don't get me wrong, I don't think articles like this are the ridiculous self-parody, I think they're the really troubling and all-too-real consequence.

Put simply I think we need to do a little more wiggling our toes in the carpet and chilling the fuck out. The world's not suffering from a shortage of first-world martyrs.

Take a breath, go home, do great work tomorrow, and for God's sake appreciate the fact that you're in the hilariously small fraction of people who get to blog about the pains of working too hard by choice.

Htsthbjig 16 hours ago 4 replies      
When I was young I met a Spanish tennis player called Rafael Nadal when he was a kid.

He was managed and coached by his uncle who had experience in being a professional football player.

What shocked me is that he did not sacrifice Rafael's life like most other kids who could be good (examples like Michael Jackson, or in tennis the Williams sisters, or Arantxa Sanchez Vicario in Spain).

At the time everybody believed that if Rafael was not sent to a tennis boarding school in Barcelona, away from his family and giving everything to tennis, he couldn't make it.

Toni, his uncle, believed that if being a good tennis player meant sacrificing everything else, it was not worth it.

Rafael got to be the best Spanish player of all time.

When I created my company I wanted to be rich, but rich meant not just money: it meant having time to make love to my wife, see my children grow, or read and write on HN.

At the end, you discover you could delegate lots of work.

Animats 20 hours ago 2 replies      
This guy is the CEO. So, by definition, this is a management problem. Dwolla has some money now; they can hire people. This guy needs to learn how to build a staff and delegate. He's only done startups, and hasn't worked in a long-established company where they have this figured out.

Some founders have trouble letting go of control. They want to do it all themselves. That doesn't scale.

orofino 18 hours ago 3 replies      
I'm reading this while sitting in a restaurant in Nepal. My wife and I are here for around three weeks to hike to Everest Base Camp. After this, we are headed to China for 11 days. In total, I'm going to be out from work for 5 weeks. Almost three years ago we quit our jobs, sold our house and travelled for 9 months through South America, Antarctica, and Europe.

I work at a startup. I'm the product manager and we are rebuilding the product from the ground up, in December we will have been working on the rebuild for a full year and our first beta customers will be starting on the new platform. The five weeks immediately previous to that, I'm out of the office for an extended period.

This is to say, you have to make the time for yourself. We both work hard, both of our new jobs (which are way better than our pre big trip jobs btw) allowed us to take this five weeks without much hassle. The team will survive and I'll come back refreshed and ready to tackle new problems.

Perhaps some think our startup will fail because someone took time off for this long, I'll tell you that I sure don't.

abuteau 22 hours ago 1 reply      
It shouldn't be this way. We met Brad Feld last year in Las Vegas (Up Summit) and he talked about his depression [1]. I suggest everyone read the depression archives on Brad Feld's blog [2]. Pretty insightful posts. I tend to be obsessive, and I lost someone I really care about because of focus on money and too much work. At the end of the day, you have to focus on your priorities; your wife/family is one and deserves a decent amount of attention. Brad said that when Amy calls him, even if he's in a meeting, he will still answer, because she's a priority. Outworking yourself is not likely the way you will succeed in the long run. Work hard != work smart

George Bernard Shaw said: "I learned long ago, never to wrestle with a pig. You get dirty, and besides, the pig likes it."

1. http://www.inc.com/magazine/201307/brad-feld/many-entreprene...

2. http://www.feld.com/archives/tag/depression

smoyer 1 hour ago 0 replies      
A start-up is a sprint to profitability ... except it's a distance run that tests your endurance. I did this for a long time and managed a reasonable balance by paying very close attention to my pace.

When I was sprinting at full pace, I learned that I could only reasonably expect about 5-6 weeks before I crashed (hard). I always planned for a much more relaxed pace for a couple months afterwards.

On the other hand, if we had a long-term project I could plan about 55-60 hours per week maximum, but I could sustain that pace almost indefinitely while still maintaining a life.

If you're doing continuous 80+ hour weeks you need to stop now ... if you don't, your body will fail you and other parts of your life will degrade (relationships). It's simply not worth it ... a few extra bills in your pocket is not going to compensate for missing life.

trevmckendrick 23 hours ago 4 replies      
I don't want to be a wet blanket, but when I read how hard he and others work, the first thought that comes to me is:

"I'm not working even close to that level."

In a way, it's a glimpse into the reality required to do Great Things. Followed by the painful self-awareness that you're nowhere close.

8f2ab37a-ed6c 21 hours ago 4 replies      
The startup thing has been both amazing and incredibly devastating.

Haven't seen my folks in years. Spouse dumped me because I practically didn't exist outside of work. Absolutely no hobbies or traveling in years. No guarantee the company will be anything but a massive drop in my financial history. You're constantly running out of money, so you're cutting every unnecessary expense, and you're living on scraps for years, knowing that you could be making 10x that much in the workforce. You can't quit though because so many people look up to you for direction, they need you to be there and lead them, to be certain of where you're going. The stress can be so high you just want to roll up in fetal in a corner and disappear. You understand why so many founders end their lives.

The experiences however and the connections are priceless. The feeling of playing the game on your terms is liberating and the hope of the upside is exhilarating, but I still wish the price wasn't so high. The amount of personal growth is astounding: once you go through the above, everything else feels like easy mode. You have to develop resilience, charisma, diplomacy, discipline and so many other traits or you will sink fast.

In hindsight, I don't know if I'd do anything differently. When you're this deep, you try not to dwell on hypotheticals.

I remember many years ago when I was sold to the startup lifestyle by the sexiness of the message, but the dozens of PG essays and stories of success and freedom. Nowadays I caution people to truly dig deep and understand if this is worth it to them, ask them if they're ready to sacrifice everything for likely nothing at all. Ultimately we're slaves of power law and similar to Hollywood talent in our outcomes: a very small fraction of us is going to becomes gods through either luck or huge sacrifice, and 99.9% of us will be waiting tables for the rest of our lives, or decide to do something else.

gerbilly 6 hours ago 0 replies      
It's just a startup: a type of work organization operating in the first world, staffed by people who can find another job in a few weeks if it should fail.

It's basically an exciting game for privileged people with too much time on their hands. The worst outcome is you have to go work at a regular job.

Relax, no one is being sent to the scaffold over this!

KobaQ 14 hours ago 1 reply      
That's the inevitable consequence of being overambitious. This pattern is the same, regardless of the area within which these folks try to become more powerful, rich or famous than others.

At BMW a former director told some trainees that he's the most lonely person in the world. Lost his wife, kids and friends. Hobbies? None. Money? More than can be spent. Power? You bet. "Too soon old, too late wise" is true even for extraordinary achievers like this guy.

I always like to point out to those younger folks that they need to be aware of their true motivation. For 99% of people in the startup scene or at the big companies, it's not to make the world a better place. It's not to be creative and productive. The main drive is ambition. That's OK, it's human. But it needs to be controlled. When you are aware of your true selfish motivation, it's easier to stop when it's all too much. You do it for yourself, not your family (you would choose a solid 9-to-5 job) or the world.

ChuckMcM 21 hours ago 0 replies      
This was a great read. Something to consider in that moment is scale. It is impossible to be an entire company and survive. A CEO I highly respect told me that you can move a truck with a go kart engine, a big transmission, and a lot of coolant, but adding more cylinders is what makes it a truck. He was explaining to me that 'scaling' people was about figuring out how to take what someone was doing really really well, and turning that into a process for doing that thing. Then handing over that thing to the process.

The first startup I joined was run by a guy who had not, to my knowledge, ever been more than a line manager (directly managing people who did things, versus managing people who managed people who did things). He had a really, really hard time working with the indirection, unable to feel comfortable that things were "in control" unless he went and talked to the actual people doing the work himself. As the company got bigger that became a bigger and bigger issue for him. He didn't scale, and I could tell that there would come a time when he would be 110% subscribed with tasks that he couldn't figure out how to delegate.

The message about getting home, though, is really, really important. Too many people are sleeping at work because they have no home to go home to any more. Set a goal for 'quality hours with the family per week', and when that is in jeopardy due to work commitments, restructure work to reduce its impact.

lumberjack 18 hours ago 1 reply      
Perhaps there could be something in between a startup and a job as an employee.

Startups are rewarding because you are not simply an employee but also part owner and they are hard because you have to undertake a lot of extra work and responsibility and risk.

Maybe there could be a third way where you are part owner but undertake less risk, less responsibility and less work. Imagine a startup where instead of the typical 2-3 founders and 3-15 employees you have 15 employees who all share an equal amount of ownership and responsibility.

The point of such a startup would be to provide better working terms and better pay than one would get when working as an employee or contractor but at the same time lessen exposure to risk and responsibility.

I don't know how doable it is, but I always thought it was strange that most startups were trying to emulate the same corporate structure the founders themselves probably wanted to avoid because it didn't suit them very well as employees.

Sorry for going off on a tangent, but it's just a random idea that crept into my head and I wanted to share.

3pt14159 22 hours ago 0 replies      
Great article.

There are so many different angles to startup burnout, that sometimes it seems like we're playing a really rough game like American football, only instead of head and spine injuries we've got mental health problems and repetitive stress disorders.

Early in a startup the big enemy is yourself, and poverty. Try to get yourself to finish that feature; to push the product over the line. To persevere after the launch basically goes unnoticed. The burnout is emotional because it's rooted in self-doubt. Once a startup starts getting traction, a different type of mental stress sets in: a fear of squandering an opportunity. All the late nights are rooted in the fear that your startup has had some luck and some traction, and maybe if you don't push so hard it will become another Excite. Used, but left by the wayside while a better contender came in.

yesimahuman 23 hours ago 0 replies      
Wow, much respect for writing this. While my company isn't quite at Dwolla scale, I've had my own version of too-much-travel this fall, and I am over it. I realized I was making the most impact back home, helping the team create the foundation for us to do more with less.

All these conferences and these meetings are rarely world changing. They hold potential opportunities and the start of relationships, but we never know if those opportunities would have come to us through cheaper, more effective means.

Travel bothers me so much because it feels incredibly suboptimal, like I'm working harder not smarter.

andrea_s 20 hours ago 0 replies      
Aren't we done yet with the rhetoric of "making the world a better place"/"improving people's lives"? I can't help but think about HBO's Silicon Valley every time I read this kind of thing...
partisan 19 hours ago 0 replies      
It's pretty standard to take a break after a long period of hard work. Especially if it starts to feel like burnout.

I'll probably write an enlightening blog post one day about how sometimes you have to take a nap during the day, especially if you feel sleepy. I'll encapsulate it into a life lesson: Don't lie to yourself, you are sleepy so just take that nap.

dmak 18 hours ago 0 replies      
Almost 2 years ago, I had three major things going on in my life: I was working at another startup in San Francisco, finishing my Computer Science degree at SJSU, and in a 4-year relationship with my then-girlfriend. At the time, I was really worried about doing well in my career as well as making sure I didn't fail my courses; otherwise I would be held back and be even more miserable. I had to commute between San Jose and San Francisco every other day. I put much of my time into the startup, and naturally that took time away from other things. In hindsight, and after reading this blog post, I realised how much I overlooked and have developed a further understanding of how things played out. But yeah, great blog post; it is important to learn to identify what is happening and realise the gradual damage taking place before it's broken.
durbin 3 hours ago 0 replies      
How one extremely successful entrepreneur manages his time. - https://news.ycombinator.com/item?id=8574978
barbudorojo 10 hours ago 1 reply      
What surprised me is the "Buy me a coffee" banner. Supposedly they are having big success with BBVA selling their product/application. The business sounds really interesting, but "buy me a coffee" doesn't give the impression I would want from such a product. I was expecting to see something about how their product is the top one in security or any other required feature.

Perhaps I am a little harsh, but if you guys are having success now and you want to give a good impression, I would take the banner out.

Also, I find it completely right to rebalance your life; now is the moment in which you can take a little rest and recover from the strenuous effort and stress. It's not only a desire but a necessity for your enterprise to go on. Don't get burned out!

Enjoy family, recharge your batteries, let your mind and mood recover. Cheers.

calinet6 13 hours ago 0 replies      
People ignore the systems surrounding them: their life, the things supporting their happiness, and the things put in place to ensure their work functions in their absence, even just the absence of being home for dinner every night.

Put the right systems in place, whether delegation, or software, or a personal task management system (GTD is all about systems), and you don't have to worry about work all the time. You lead by getting the right systems to function and continually improve themselves.

A systems-focused approach to work would improve every company because it's closer to reality. The reason we don't have the ability to put things down is because we make ourselves into the critical system of function, and that makes us part of the machine. Build a better machine you can trust instead. Remove yourself as a dependency.

Start with Deming: http://en.wikipedia.org/wiki/W._Edwards_Deming

AndrewKemendo 6 hours ago 2 replies      
I am constantly the contrarian in these cases, but not for sake of being contrary, it's just that I genuinely do not relate.

I would kill to be in Ben's place. He is making an impact. We aren't reading his words because he is a great dad or husband or whatever; he is impacting us and making us spill thousands of words because he is making something impactful. I could never turn that off; nothing is as great, in my opinion. So like I said, I don't get it. If anything, the more substantive work that is piled on (not bullshit bureaucratic stuff), the more effective I am across all components. Maybe I'm weird.

Anyone who does this type of work for a living and responds negatively to you saying "I'm unable to make it, I miss my family and want+need to spend some time at home" isn't a friend, partner, or investor you should want to work with anyway.

See, at the same time I understand this too, because not all people are like me and just love doing good work they are passionate about. So when my co-founders or partners say things like this, it's great, because I know that they need that time, and we will adjust things to give it to them.

The last thing I will say is this: No one's legacy is based on how good of a dad/partner they were.

bakhy 16 hours ago 0 replies      
With the advent of startups, this is the new normal for IT workers.

The best part: after all this effort, the majority will nonetheless fail. But in the meanwhile they will be doing huge amounts of work, while their investors can breathe easy, with no need to worry much about overtime, vacations, sick leave, finding work for people after the project ends, etc.

This system is rigged against us, but a truly IT-specific thing is that so many of us enjoy exploiting ourselves like this. Sure, many people would start their business anyway, but now we're dealing with a trend: "everyone" is doing it, it's the hip way to be, and there's very little criticism of this model, if any. The failures are merely presented as lessons to learn; it's rarely considered acceptable to call the emperor naked, to stop the show.

This text is nice, and family is important, but this is from someone who can afford it, now that he has succeeded. I see no retrospection, no reflection here: could he have cancelled trips like this years ago, when he was still fighting? Is that what he is suggesting to people now? I don't think so, and that makes it all sound just so hollow.

phesse14 14 hours ago 0 replies      
I really liked this post. Many thanks for sharing, since most of us, including me, are reluctant to write this kind of thing down.

Is the price of being an entrepreneur high? Yes, it is. I do not know anyone involved in "creating something from scratch" who might contradict that statement.

Do we have to learn to put limits on the price we are paying? Definitely yes, and this guy is giving a name to a lot of muted voices. Personally, I do not distinguish much between my personal and professional time, since in my case I love what I do, but I try not to bother friends and family with conversations about startups, and I do try to talk about other topics. This has double benefits: one, not bothering others (as mentioned), and two, opening your mind by talking about things you don't usually have the opportunity to discuss.

cimorene12 6 hours ago 0 replies      
Reminds me of Startups Anonymous, except he has the courage to put his name out there.


bpmilne 19 hours ago 1 reply      
Hey all. You folks killed my weak little blog server. Memory was maxed out and the server rebooted. Hopefully it will stay online for a bit.
robinduckett 18 hours ago 0 replies      
Newsflash: Startup founder struggles with Work / Life balance and ends up not seeing his wife over Halloween.
dddrh 9 hours ago 0 replies      
If you have never read The Monk and the Riddle, I would highly recommend it. The experiences shared and the story woven through it lend some really good perspective on our inability to focus on what is important now versus what will be important later.

It's a short read and worth every page.

nevergetenglish 8 hours ago 0 replies      
What I see, and what makes me reflect, is that we allow ourselves to be rewarded (go home) only when we are on the way to success. But what happens when we are in the worst part of a start-up? Harsh, very harsh.
scottndecker 8 hours ago 0 replies      
Great post. Wish more people would be honest that we aren't invincible. Use things, love people; not the other way around. Our society needs more influence on family. Good work.
startupgrinder 18 hours ago 2 replies      
I wish I had the balls to write this post. Mine would certainly not be so honest and raw. It's also sad that it comes to this. I get home for dinner but work till midnight. I'm in town for birthdays but leave at 6am the next morning. I still talk to my very best friends but no one else. I hope to heaven that the startup never ruins my marriage, but it's not impossible to see how it could happen.

Maybe these "compromises" mean I will never quite get to that level, but regardless, I'm not willing to compromise on them. At any price.

javajosh 23 hours ago 2 replies      
Last year I worked for a startup and allowed myself to be pushed past the breaking point. The founders tempted me with equity (which never actually materialized), and I ended up doing many all-nighters and many 90-hour weeks prior to an early release (another company was doing the front-end while we did the server in-house).

I don't have a wife or a puppy, but I shouldn't have allowed this to happen. I was hired to work for a set salary under the assumption of a 40-hour work-week - although I am naturally obsessive so I knew that wouldn't actually last. But I didn't expect my boss to say things like, "If we don't have a working build by 3pm tomorrow, we're all fired," (which turned out to be false, actually).

Pacing is important, and I think it would be interesting to do a startup where pacing is actively encouraged. I think this might be possible at a startup with strong IP protection, good funding, and leadership that sets realistic goals and trusts its people to be motivated to get it done. It may mean that the public has to wait a bit longer before having a cool new thing available to them, but it also means that the thing will have been constructed by fresh, happy minds.

There is a moral imperative to not buy goods made in sweatshops. There is also a moral imperative to not use software produced under inhumane conditions. Congratulations on discovering this for yourself - and I hope you encourage your coworkers to be humane to each other and themselves as well.

nickbauman 21 hours ago 0 replies      
I can't help but feel the whole startup culture is a conflict of mission writ large in two ways. First, you're trying to make people's lives better while deleting the idea of a better life for yourself. Second, execution is everything. But when you're working like this the quality of execution inevitably degrades.

I've done startups. At the end I got better at what I was doing but I wasn't ultimately proud of the work I did. The startup even succeeded (I helped other people get rich, I just made a living). Many startups don't even do that.

edpichler 12 hours ago 0 replies      
The best path is the middle.

I learned this from a book about Buddhism. We must have a balanced life. Life is a long trip; the destination is the objective and, of course, very important, but we also need to enjoy the journey.

lorenzfx 14 hours ago 0 replies      
Looking at Dwolla's site and seeing "Eliminate paper checks", I realised how lucky we in Germany are that, even though Germany is a pretty backwards country in some regards, at least we managed to get rid of paper checks a long time ago.
jayantsethi 16 hours ago 0 replies      
One of the most beautiful articles I've read lately.

Reminds me of how I made the decision to come back from a far-off place to stay with my family, leaving behind an excellent opportunity, just to spend more time with my lovely family.

benjaminwootton 12 hours ago 0 replies      
Great article. I've been doing a startup this year, and despite the best intentions it really does take a toll on you and the people around you.
tomasien 19 hours ago 0 replies      
I just re-booked my flight to get home from Vegas (where Ben just was) a day earlier as well. My team was super confused about this - but y'all, sometimes you just need to go home and sometimes you just need to be alone.
facepalm 16 hours ago 0 replies      
What is driving the stressfulness of startup life? Is it the fear of being overtaken by the competition? Or the excitement of unlimited possibilities, so you can always try more?
seletz 12 hours ago 0 replies      
So I'll stop pretending to "be at the office" now and actually go home to see my kids. No joke.
lovelearning 19 hours ago 0 replies      
Good for you. If the CEO himself is burning out like this, can't help wondering how much worse their engineers are burning out.
moron4hire 22 hours ago 0 replies      
One reaction: "grow up". Stop letting other people dictate how you live your life. That is what parents do for children. Becoming an adult means making your own decisions about how to spend your time, and when enough is enough. Grow up. Stop putting other people, ones not even that close to you, first. Grow up.
johnvschmitt 20 hours ago 1 reply      
2 big points I've learned & want to share on this topic:

1) In the smart long-term play, family & few close friends are most important. Everything else (including your startup) is trivial.

2) Even if you value your startup at 100% (over family/friends/health/life), you're incredibly short-sighted and killing your own startup baby if you don't have some balance in your life. Startup life is NOT a sprint; it's a marathon. To give the BEST to your startup, you need to bring your BEST every day.

Phrased another way, I often ask, "Let's say you're going to be interviewed on the Colbert Report, or other big-huge-friggin' deal tomorrow, what would you do today?" Often, people say, "I'll eat well, build something, hug my family, build something, help someone, build more, chat with a friend, take a walk, build more, play a game, go to bed early" So, if that's what you do to bring your BEST tomorrow, then what would you do if you wanted to bring your BEST EVERY day?

That tends to drive home the message that work-life balance isn't just helping your life, it's truly what matters to helping your work too.

Corollary: If you're overstressed, you aren't helping your work. Often, overstressed people at startups will add much more friction in the small team, hurting efficiency as arguments & disrespect poisons the day's actions. Not to mention the zombie-brain mistakes in execution when you're not taking proper care of yourself.

notastartup 16 hours ago 1 reply      
This article really made my stomach churn because it really hit home.

If I had a one tenth of the success these guys are having I'd tell myself to keep going.

However, working on the project alone for the past couple of years has been devastating. I've really no friends left as a result of blindly pursuing my passion. I admit it hasn't been worth it to this date. But something, this lizard brain, keeps telling me to go go go, don't stop.

I've been working on my project on and off, with about 3 years of full-time development and 2 years of working at a job to support myself.

I don't know. I'm super confused and agitated after reading this article. I had this gut feeling not to read the article but I did it anyways.

I will probably continue. Crazy.

xpop2027 22 hours ago 1 reply      
Not loading for me, :(
yegor256a 14 hours ago 0 replies      
am I supposed to cry?
Editor's note: Reader comments in the age of social media
37 points by minimaxir  6 hours ago   35 comments top 10
imgabe 4 hours ago 1 reply      
Comments on major news sites are horrendous. They tend to be either astroturfers, trolls, or people trying in vain to talk sense to one of the above. Good riddance.
minimaxir 6 hours ago 7 replies      
There's a new design trend that has been popular on news websites such as The Verge and FiveThirtyEight: the comments section is hidden, and you explicitly have to click "open comments" for them to appear.

The motivation for this trend is to hide the comments, as blog comment sections tend to be of low value for controversial stories, despite algorithms that can correct for low-value comments (e.g. upvoting/downvoting). The fact that Reuters is removing comments from news stories completely indicates that Reuters is lazy and doesn't want to deal with negative comments at all or find a more practical solution for user interaction. Note that the Reuters Facebook page has just as many, if not more, low-quality comments. (example: https://www.facebook.com/Reuters/posts/837804776239880)

The "everyone is talking about it on social media anyways" rationale is a terrible excuse.

NB/Disclosure: I received most of my internet fame/infamy through my comments in the comments section on TechCrunch articles.

gobengo 3 hours ago 0 replies      
IMO a question all blogs and media sites need to ask is something like: "Am I creating a community or a television channel?"

I grew up on the web, and the multi-directionality of it as a medium (including blog comments) has set my standards for the way I want to consume information (and interact with it). I don't trust information flowing to me that can't be annotated, corrected, and augmented by the wisdom of the crowd it's reaching.

It's Reuters' prerogative to do this, but I think it just makes Reuters.com exactly like Reuters TV: it gives them a lot more control, but they're leaving a lot of opportunity on the table. That's usually what I expect from $30b companies, though.

I look forward to humanity having plenty of interactive communities to learn the same stuff from, even if Reuters is no longer one of them.

codva 5 hours ago 0 replies      
I just have a dorky personal blog, but the comments moved from the blog to Facebook on their own. I didn't do anything different, but my friends sort of decided they preferred to discuss the blog posts on Facebook. I didn't see any reason to fight it. It's not like I have ad revenue on the line.
danso 6 hours ago 0 replies      
Based on Reuters's past technological problems with the Web, I'd have to say this is more based on inability rather than principle.

Example: the Reuters Next project, a redesign that took several years before they abruptly killed it because of being unable "to meet delivery deadlines and stay within its budget":


Among the revelations of that fiasco was that Reuters Next was necessitated out of a failure to iterate on their web platform, such that "even putting in a hyperlink, one Reuters source said, was a very complex issue. Reuters had to put its blogs and opinion columns on a Wordpress platform so they could easily link to outside sources and embed videos."

barbudorojo 4 hours ago 1 reply      
There are some useful discussions about online comments on http://www.abc.net.au/radionational/programs/futuretense/

Online comments: a view from the trenches (podcast, 30 min)
Online comments: a wicked problem (podcast, 30 min)

Both podcasts try to frame the problems, suggest some possible solutions, and estimate the costs associated with dealing with online comments. I think they give a useful perspective from a multitude of points of view (I also use them to learn English).

lomocotive 6 hours ago 1 reply      
There should be an aggregation of the best high quality public comments gathered across social media per article instead.
jsmthrowaway 5 hours ago 0 replies      
The comments on the post really drive the point home, don't they?
programminggeek 3 hours ago 0 replies      
Discussions can be great, but I'm not sure that putting an article and a forum together (which is what most blogs tend to do) is a wise thing.

Think of it in the context of say someone giving a speech or presentation. In many cases the comments act like someone yelling "YOU SUCK!" in the middle or end of a speech. People tend not to do that in real life, but online the rules are different I guess.

There are so many places to leave comments, I just don't know that every article needs a comments section. The negative side of blog comments usually outweighs the positives. Leave commenting to more dedicated communities like HN, Reddit, LinkedIn, etc.

6stringmerc 4 hours ago 1 reply      
With so many people out of work in the United States, it strikes me as selfishly ignorant to simply can the comments section instead of 1) using an industry standard platform, 2) moderating that platform, and 3) cultivating a culture of utility.

I don't call this move "smart" in the grand sense. I'd go with "lazy" and/or "cheap" in this context. I mean, ask me how I know Disqus has the ability to ban users from specific sites...

Mapping Freight
18 points by sethbannon  4 hours ago   discuss
How Google Works, by Eric Schmidt and Jonathan Rosenberg
48 points by sonabinu  7 hours ago   5 comments top 3
xexers 5 hours ago 1 reply      
I bought this book, started to read it, then returned it because it was so bad. It reads like an advertisement and it seems to be aimed at old ladies who occasionally use "the googles". People in the industry will find it to be a lot of information that you already know.

If you want to read about the war between Google & Microsoft, I'd suggest this one:


cromwellian 1 hour ago 1 reply      
I don't like these kinds of MBA-driven business advice books because they all suffer from survivorship bias.

But I think the NYTimes review at the end imparts its own bias onto the Gundotra story. I have nothing but speculation, but I don't think either Gundotra or Marissa Mayer left because of 'failure'; I think they had simply risen as far as they could and had no internal political path for advancement. At one point, Gundotra was apparently in the running to be Microsoft's CEO (he was an ex-MS VP), and with Sundar taking over everything, Vic may have just decided to do other things.

Besides, Google Plus didn't fail, no matter how often this is repeated. It failed only if you assume the goal was to beat Facebook's news feed. However, if you consider the other things it does, it achieved a lot: Unified login, G+ now has 34% of logins across mobile and the web compared to Facebook connect at 46%. G+ photos became Google's photo hosting service. Even the G+ "stream" has hundreds of millions of active users posting. It's nothing compared to Facebook, but if it is not #2, it is #3, and who wouldn't want a business with a few hundred million people posting to it every day?

Google doesn't fire people for trying something bold and then failing, especially not execs. If Gundotra was "let go", it wasn't because G+ wasn't a roaring success.

josefresco 5 hours ago 0 replies      
Pretty critical review, although I already discount "how to" books written by wildly successful people, as often their new-found business wisdom cannot be tied directly to their success.
In South Carolina, a Program That Makes Apprenticeships Work
62 points by hotgoldminer  8 hours ago   7 comments top 3
thomaskcr 3 hours ago 2 replies      
I just started a software development apprenticeship program at my company and it's going amazingly. Recruitment for other development positions generally costs $1,000+ and really didn't get us anyone impressive (our good candidates came from active recruiting). I used AdWords for $120 and Indeed for $40 and got over 100 applications for my development apprenticeship.

We chose people with the skills and experience we wanted, and selected for traits we felt would make good developers (passion for learning, for example). From the start we've been able to teach good habits as well as focus on how we do things (testing, continuous integration, version control, etc.). Some of the exercises involve rebuilding key components of our management systems, with the hope that we'll have not only great programmers but experts on our business by the time they go from apprentices to full-time developers. We're also going to get exactly the employees we want, starting from desirable people with little to no programming experience.

If anyone is interested in starting a similar program, I will be more than happy to share my materials, some of the things I would change about what I've done so far, etc. I highly recommend it. The cost is relatively low; we balance the pay against the fact that the people instructing can't do billable work while they're teaching, so during the apprenticeship it's basically a decently paid internship. The apprentices are already producing higher-quality work than we've gotten from any outsourcing we've done. We'll be finishing a project by the end of the program whose quality we completely controlled, we got an MVP done pretty inexpensively, and we'll have custom-built the team for that application, all while teaching the group an entirely new career. I've seen what investing in career growth via continuing studies can do for employee retention, and I really think that taking it to this level will give us some great retention numbers in the near future as well.

sq1020 5 hours ago 0 replies      
Apprenticeships give people more viable skills than probably 75% of majors at four-year universities. The fact is that there is an enormous glut of college graduates who studied sociology, political science, and communications, who can't find work in anything related to what they studied, so they end up doing customer service at a tech company, entering a nursing program, working as bartenders, or, as we all know, learning how to program.
oddevan 4 hours ago 0 replies      
As a born-and-raised-and-still-living-here South Carolinian, it's always nice to see "positive" coverage of my state.

Speaking to the article itself, I can't imagine having gone through one of these instead of college; I was too set in the college mindset at the time and still treasure the experience (though I'm well aware of its shortcomings). On the other hand, any sort of "learn on the job" scenario would have been TREMENDOUSLY welcome a year and a half ago when I was suddenly without work and struggling to break out of the ".NET Programmer" label. Keeping this sort of option open to everyone can only be a good thing.

Who ordered memory fences on an x86? (2008)
76 points by luu  13 hours ago   23 comments top 8
userbinator 9 hours ago 1 reply      
Reminds me of this interesting paper (from 2 years later), which found that at least one of the x86 memory-ordering guarantees was not true: http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf
mattnewport 8 hours ago 4 replies      
This article is pretty old and I suspect if you asked Bartosz now he'd explain it slightly differently. You certainly need to use instructions that impose additional ordering guarantees on x86 in these kinds of situations but you don't need an mfence and in general it will be slower than an appropriate locked instruction. The appropriate uses of mfence are actually much more limited, it's only really needed with special memory types (e.g. write combined) or when you need ordering on certain non temporal loads and stores (mostly vector instructions) AFAIK.

In regular code you should never require the hammer of mfence for correct synchronization. You can implement C++11 atomics without it.
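To make this concrete, here is a sketch (my code, not from the original comment) of the store-buffer litmus test underlying the whole discussion, written with C++11 atomics. Each thread stores to its own flag and then loads the other's; with plain stores and loads the x86 store buffer can delay the stores so that both threads read 0, but with default sequentially consistent atomics the compiler emits a locked instruction (typically an xchg) for each store, the cheaper alternative to mfence described above, and the both-zero outcome becomes impossible.

```cpp
#include <atomic>
#include <thread>

// Shared flags for the two threads (names are mine, for illustration).
std::atomic<int> x{0}, y{0};

// Runs the store-buffer test `iters` times; returns how many runs
// observed the forbidden r0 == r1 == 0 outcome.
int run_store_buffer_test(int iters) {
    int both_zero = 0;
    for (int i = 0; i < iters; ++i) {
        x.store(0);
        y.store(0);
        int r0 = -1, r1 = -1;
        // Thread 0: store to x, then load y. Thread 1: the mirror image.
        std::thread t0([&] { x.store(1); r0 = y.load(); });
        std::thread t1([&] { y.store(1); r1 = x.load(); });
        t0.join();
        t1.join();
        if (r0 == 0 && r1 == 0) ++both_zero;
    }
    return both_zero;  // sequential consistency guarantees this stays 0
}
```

Swapping the seq_cst operations for `memory_order_relaxed` lets the both-zero outcome appear in practice on x86, which is exactly the reordering the article's fences exist to prevent.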

davidtgoldblatt 7 hours ago 0 replies      
By the way, in case anyone was curious about the comment in the article:

> Since fences are so expensive, couldn't you add a dummy assembly instruction that prevents the x86 from re-ordering? So the pseudo-assembly for thread 0 might become:

    > store(zeroWants, 1)
    > store(victim, 0)
    > r0 = load(victim)   ; NEW DUMMY INSTRUCTION
    > r0 = load(oneWants)
    > r1 = load(victim)
> Since the dummy instruction is a load, the x86 can't reorder the other loads before it. Also, since it loads from victim, the x86 can't move it before the store to victim.

> If you do this to both threads, does that solve the problem?

This doesn't work. Intel specifically calls out these sorts of attempts to get a fake memory barrier: "The memory-ordering model allows concurrent stores by two processors to be seen in different orders by those two processors; specifically, each processor may perceive its own store occurring before that of the other." This is true in practice as well as in theory.

A related trick that does work in practice (though it is also banned by Intel) is to write to the low-order bytes of a word, read the entire word, and get the high-order bytes. It's sort of a single-word StoreLoad barrier. There's a paper from Sun that documents it further: http://home.comcast.net/~pjbishop/Dave/QRL-OpLocks-BiasedLoc... .

redraga 6 hours ago 0 replies      
Remember that x86 (and SPARC) offer the strongest memory-ordering guarantees among modern processors. The POWER and ARM memory models are weaker than x86's. This actually leads to correctness issues when virtualizing a multi-core x86 guest on a weaker host (cross-ISA virtualization). Of course, the problem only shows up in truly parallel emulators using multiple host threads to emulate a multi-core guest, such as COREMU (http://sourceforge.net/projects/coremu/).
jhallenworld 4 hours ago 0 replies      
I'm pretty sure the strong ordering of x86 is all in support of backward compatibility (probably to the first multi-core x86). One related example of this is the cache coherent I/O system. If a PCIe card writes to memory, there is really not much that the driver code needs to worry about compared with other processors. Why is this? So the ancient device drivers in MS-DOS will still work with the ancient floppy disk DMA controller.
pkhuong 9 hours ago 1 reply      
The fun thing about membars on x86 is that, unless you're playing with nontemporal stores or non-standard memory types, LOCKed ops are more efficient fences than mfence.
amelius 10 hours ago 3 replies      
> Loads are not reordered with other loads.

Wonder why that guarantee is necessary. Loads have no side-effects (in the memory), after all.
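One way to see why it matters: loads have no side effects in memory, but their results drive decisions, so their relative order is visible whenever another thread is storing concurrently. A sketch (my illustration, not from the article) is the classic message-passing pattern, where the producer writes a payload and then raises a flag, and the consumer polls the flag and then reads the payload. If the consumer's payload load could be hoisted above its flag load, it might observe stale data; x86's load-load guarantee rules that out at the hardware level, and release/acquire expresses the same constraint to the compiler.

```cpp
#include <atomic>
#include <thread>

int payload = 0;                  // plain data; names are hypothetical
std::atomic<bool> ready{false};

// Publish `value` from a producer thread and read it back on this
// thread once the flag is observed. Correctness depends on the two
// consumer loads (flag, then payload) staying in program order.
int produce_and_consume(int value) {
    payload = 0;
    ready.store(false);
    std::thread producer([value] {
        payload = value;                               // 1. write data
        ready.store(true, std::memory_order_release);  // 2. publish flag
    });
    while (!ready.load(std::memory_order_acquire)) {}  // 3. first load: flag
    int seen = payload;                                // 4. second load: data
    producer.join();
    return seen;  // with ordered loads, always equals value
}
```

If loads could pass earlier loads, step 4 could effectively happen before step 3 and return the stale 0 even though the flag reads true.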

m00dy 8 hours ago 0 replies      
You can apply Peterson's lock to more than 2 threads.
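For the curious, the usual N-thread generalization of Peterson's algorithm is the filter lock (as presented by Herlihy and Shavit). This is an illustrative sketch, not code from the thread: there are N-1 levels, each with its own victim slot, and each level filters out at least one contender, so only one thread reaches the top. The default seq_cst atomics supply the ordering the article's fences are about; a real mutex is of course far faster than this spin loop.

```cpp
#include <atomic>
#include <thread>
#include <vector>

struct FilterLock {
    std::vector<std::atomic<int>> level, victim;
    explicit FilterLock(int n) : level(n), victim(n) {
        for (auto& l : level)  l.store(0);
        for (auto& v : victim) v.store(0);
    }
    void lock(int me) {
        int n = static_cast<int>(level.size());
        for (int L = 1; L < n; ++L) {
            level[me].store(L);   // announce my level
            victim[L].store(me);  // volunteer as this level's victim
            // wait while someone else is at my level or higher
            // and I'm still the victim at this level
            for (int k = 0; k < n; ++k)
                while (k != me && level[k].load() >= L &&
                       victim[L].load() == me) {}
        }
    }
    void unlock(int me) { level[me].store(0); }
};

// Drive nthreads through the lock around an unprotected counter;
// if mutual exclusion holds, the result is exactly nthreads * iters.
int run_counter_test(int nthreads, int iters) {
    FilterLock lk(nthreads);
    int counter = 0;
    std::vector<std::thread> ts;
    for (int i = 0; i < nthreads; ++i)
        ts.emplace_back([&, i] {
            for (int j = 0; j < iters; ++j) {
                lk.lock(i);
                ++counter;  // protected by the filter lock
                lk.unlock(i);
            }
        });
    for (auto& t : ts) t.join();
    return counter;
}
```

With n = 2 this reduces to Peterson's two-thread lock (one level, one victim), which is exactly the zeroWants/oneWants/victim pseudocode quoted earlier in the thread.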
       cached 8 November 2014 02:02:02 GMT