hacker news with inline top comments (6 Apr 2014)
Google Now for Chrome google.com
22 points by jaseemabid  1 hour ago   15 comments top 4
jonemo 1 hour ago 4 replies      
Is Google Now a useful utility for others? I recently activated it when I purchased a new phone and am having a hard time understanding how to use it. It's showing estimates for how long it will take me to get home or to work but they are always based on locations where I was a while ago and often outright ridiculous, e.g. 2 hours 30 mins to go from Alcatraz to my home, by bicycle? The other "cards" seem to show up randomly, like the stock quotes that are always up top when I want to see the weather and hidden when I want to see stock quotes. How do others make use of Google Now?

edit for clarification: I have set it up to prefer cycling, but Alcatraz is an island.

ananth99 17 minutes ago 0 replies      
Does it come for Linux Distros too?
verandaguy 1 hour ago 0 replies      
This is a rather poorly-capitalized title. I thought OP meant that Google is, at present, available for the Chrome browser.
abimaelmartell 1 hour ago 2 replies      
"Google now" available for Chrome.
Spiped: symmetric, encrypted, authenticated pipes between sockets tarsnap.com
55 points by malandrew  4 hours ago   27 comments top 9
malandrew 1 minute ago 0 replies      
OP here. A quick technical question for those reading this thread, since I came across spiped while trying to find a solution to a problem that I believe should be solvable without resorting to spiped or socat.

The manpage for sshd_config says that turning off AllowAgentForwarding still doesn't prevent users from setting up their own forwarders. If I have ssh-agent set up to forward to the host I am connecting to, but that host has AllowAgentForwarding turned off, is it possible to set up my own forwarder without having to enable the option and reconnect with a new session? I.e., is there a way to get the SSH_AUTH_SOCK unix domain socket on my localhost forwarded to the host machine I'm currently sshed into using only the tools likely to be installed on a typical Linux box, or is the only way to do it with tools like spiped or socat?
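For what it's worth, the byte-shuttling half of that is small enough to do with Python's stdlib, which is about as likely to be on a typical Linux box as socat. This is only a sketch of the relay mechanics that `socat UNIX-LISTEN:... TCP:...` automates (the socket path and address are hypothetical), not a full replacement for agent forwarding:

```python
# Sketch: accept connections on a Unix socket and shuttle bytes to a
# TCP peer, roughly what `socat UNIX-LISTEN:/tmp/agent.sock TCP:host:port`
# does. Paths and addresses are illustrative; Python 3 stdlib only.
import socket
import threading

def relay(src, dst):
    # Copy bytes from src to dst until EOF, then close dst's write side.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def splice(a, b):
    # Shuttle traffic in both directions between two connected sockets.
    t1 = threading.Thread(target=relay, args=(a, b))
    t2 = threading.Thread(target=relay, args=(b, a))
    t1.start(); t2.start()
    t1.join(); t2.join()

def serve(unix_path, tcp_addr):
    # Listen on a Unix socket; splice each client to a fresh TCP connection.
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(unix_path)
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(tcp_addr)
        threading.Thread(target=splice, args=(client, upstream)).start()
```

Getting that channel across an existing SSH session when AllowAgentForwarding is off is the hard part the manpage warns about; the relay above only covers the local plumbing.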

b6 1 hour ago 2 replies      
I use spiped every day the same way I would use SSH tunnels, except spiped is rock-solid, dealing with flaky networks with no problems. I highly recommend it.
tezza 24 minutes ago 0 replies      
I've been using spiped in production now for a few years, ever since the initial announcement here.

I tunnel Zabbix health monitoring communications to and from the server.

I don't have to worry about setting up users, and each node 'config' is only 1 shared keyfile.

If a node is compromised, I can cut it off with one move (changing the shared key).

aba_sababa 1 hour ago 1 reply      
Wondering - what's wrong with SSH? Is it considered insecure?
yp_master 47 minutes ago 1 reply      
It's such a simple and useful idea.

But being tied to what's in OpenSSL is scary.

I have been doing what spiped/spipe does using curvecpserver/curvecpclient.

stock_toaster 2 hours ago 0 replies      
I have been using spiped for years. Works great!
natch 3 hours ago 3 replies      
To check the SHA-256 checksum on a system with OpenSSL, find the download in a terminal and do:

    openssl dgst -sha256 spiped-1.3.1.tgz
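If openssl isn't installed, the same digest can be computed with the Python standard library; compare the result against the digest published on the download page:

```python
# Equivalent of `openssl dgst -sha256 <file>` using only the stdlib.
import hashlib

def sha256sum(path, chunk_size=65536):
    # Hash the file in chunks so large downloads don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Compare against the published digest before trusting the tarball:
# sha256sum("spiped-1.3.1.tgz") == "<digest from the download page>"
```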

robbles 2 hours ago 1 reply      

  > The simplicity of the code ... makes it unlikely that spiped has any security vulnerabilities.
Famous last words :)

On a serious note - this looks like a really useful addition to the toolbox of "low-level unix flavored tools". I can see this being a simple basis for connecting infrastructure together without exposing potentially sensitive data.

ing33k 1 hour ago 1 reply      
Never heard of this tool, thanks for posting. I installed it but wasn't able to find manpages for it; anyhow, I found instructions in the README.
How I Hacked a Router disconnected.io
193 points by jackpea  9 hours ago   63 comments top 14
ushi 8 hours ago 3 replies      
Interesting read. One thing I do not understand is why software updates/packages are still not cryptographically signed. It's a common thing on Linux. Notepad++ provides checksums[0] for their packages, so (I assume) they are actually aware of the problem.

[0] http://sourceforge.net/p/notepad-plus/discussion/1290588

quackerhacker 1 hour ago 1 reply      
Maybe some netsec folks could answer this, please: what would happen with his update to Notepad++? Would it still update the package?

Even if the target set his computer to auto-update (or something that did not require admin authentication), wouldn't he have some type of notion that something went wrong during his update?

With the target being an InfoSec guy, I would've imagined he would at least be running some type of network monitoring, like Wireshark or Little Snitch, especially on his personal computer. Wouldn't he have to authorize the outgoing packets?

Sorry if I come off analytical about the story... it's a great read... I just want to make sure my networks are locked down. I've even gone as far as dedicated networks for my server and home usage, and preventing internal IP addresses from communicating with each other (sucks for AirPlay).

jlgaddis 7 hours ago 2 replies      
While this is an interesting article and this is certainly feasible, I'm left with the opinion that this is fiction and didn't actually happen.
siliconc0w 7 hours ago 8 replies      
Everything is feasible except the faked LinkedIn email - it wouldn't pass SPF, and so I'm pretty sure Gmail would junk it.
ivan_ah 3 hours ago 1 reply      
Okay so OpenWRT stopped being optional now...

Any hardware recommendations for what I should look for in a router? Is old better than new? Any particular model that is well supported?

refurb 8 hours ago 1 reply      
I'm curious how the email attack worked; don't most web-based email services flag emails that come from one domain but contain a link to another?
svas 3 hours ago 3 replies      
Curious how the author knew to seed the backdoor'ed Notepad++ before Bill clicked the link?

I suppose you could just serve up a fake backdoor program for every *.exe\msi download, and remove the honeypot on the second download? The first download would execute and maybe do nothing (or error) - prompting a second download which led to the real thing.

zurn 1 hour ago 0 replies      
This doesn't sound like a router. Maybe a home wifi ap / NAT box?
pcunite 7 hours ago 2 replies      
Sweet story ... and another vote for MikroTik routers for personal use.
userbinator 2 hours ago 0 replies      
tl;dr: Social engineering won. It was over the moment he got tricked into clicking on a link in an email.
icebraining 6 hours ago 1 reply      
One more reason to use NoScript - it would have made the CSRF significantly harder to pull off. And a reason to use an OS with a proper package manager, of course ;)
yp_master 2 hours ago 0 replies      
How about using Soekris or Alix for a router instead of Netgear?
conchy 5 hours ago 1 reply      
How much harder would this attack have been with a fully patched OSX Mavericks target and an Apple Time Capsule router?
tsmash 5 hours ago 1 reply      
Which one do you think will happen first: This guy goes to jail, or this guy gets a job offer?
Is it too late for Microsoft? wired.com
16 points by gabriel34  1 hour ago   10 comments top 5
gkoberger 9 minutes ago 3 replies      
There's no way Microsoft could have created Office for the iPad in just the time since Satya Nadella took over as CEO; the same goes for a lot of what's mentioned here. I bet he'll be great for the company, but we can't give him credit for everything.
einhverfr 3 minutes ago 0 replies      
I think that the larger problem is not "too late" so much as the trap of success. Let's assume that corporate growth can be modelled as a sigmoid curve, with small companies growing exponentially and large companies growing asymptotically in mature markets. There are a couple of really big problems that large companies necessarily face:

1. Real, disruptive innovation is never worth it. Not only will exponential growth in small units fail to contribute exponential growth to the business as a whole, but if you disrupt your own support structure, you lose overall. So large companies have to be conservative and only acquire disruptive players once those players reach a certain size. However, this compounds the problem, because at that point those players will not continue exponential growth for very long.

2. Large companies are far more reliant on streams of income and partner supports that are subject to disruption than small companies. This again makes it harder to pivot.

3. Large companies have much more inertia than small ones, making pivoting even harder.

I don't know if it is "too late" for Microsoft because I don't know what "too late" means. Too late for what? Too late to do what? The company is large and changing remarkably rapidly these days (though I saw the beginnings of change starting when I worked there a decade ago).

But the future will certainly be painful for Microsoft. Everyone knows this, even everyone who I know that still works at Microsoft. Is it too late for them to survive? Certainly not. But success on the scale of Microsoft in our market is too often a dead end, and getting out of that dead end is never pleasant.
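The sigmoid assumption above is easy to make concrete. A minimal sketch (all parameters invented for illustration): early on the curve is indistinguishable from exponential growth, and late it flattens toward a ceiling.

```python
# Logistic (sigmoid) growth curve: exponential-looking at the start,
# asymptotically flat near the carrying capacity K. Toy parameters.
import math

def logistic(t, K=100.0, r=1.0, t0=0.0):
    # 'Size' at time t; K is the ceiling, r the growth rate,
    # t0 the inflection point where growth stops accelerating.
    return K / (1.0 + math.exp(-r * (t - t0)))
```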

ssully 15 minutes ago 0 replies      
I get that Microsoft has made some missteps lately, but I find it insane that there seems to be this mindset of Microsoft being on some big downward spiral to oblivion.

They are still making boat loads of cash, they are still making things people want and do use, and they are making a lot of exciting changes. No, it is not too late for Microsoft.

mantrax4 26 minutes ago 0 replies      
So this is not exactly news, is it.

It's just the ramblings of a blogger, based on a bunch of hypotheticals and old info (i.e. again "not news").

Which part of this "editorial" is actionable here? Should Microsoft have thought "hey, if it's too late we better just give up"?

Does it mean you should abandon all Microsoft products because it's "too late"? Office is still a good product, and becoming better, and available on more platforms. There are challengers, but Office is keeping up.

What exactly does "it's too late" mean anyway? Too late for what? Is Microsoft about to go bankrupt? No. Is it too late for it to be 1995 again? Who cares?

Kuytu 17 minutes ago 1 reply      
It is hard to imagine anything is too late for a company that made over 20 billion in profit last year. Even if Windows has done worse, aren't Office sales increasing on OS X, for example? Microsoft is a company that sells Office first and foremost.
Spaced Repitition gwern.net
40 points by rfreytag  4 hours ago   21 comments top 5
lvevjo 3 hours ago 2 replies      
Repetition, not repitition.

I have tried Anki, one of the spaced repetition programs he mentions. There are lots of different decks available. Browse some here:


edit: Oh and this is a dupe:



barry-cotter 2 hours ago 0 replies      
SRS is one of those things that make thinking about education depressing because it makes it obvious that merely being a massive improvement over the status quo isn't enough to get widespread adoption.

It is absolutely wonderful. I recommend downloading a shared deck and using it to get into the habit, then building your own. There are better and worse ways of using it but it's been a real help to me in learning Chinese.

mantrax4 1 hour ago 3 replies      
Ok, why on earth is almost every word in this article body a different shade of gray?

I don't like articles where I need to bring up the browser dev tools just so I can read it.

yzzxy 2 hours ago 0 replies      
I've been building a small-scope SRS app for my high school's Mandarin curriculum as a side project in Node. Building a basic Leitner system is amazingly simple, and while I don't support the same level of analytics as Anki, the effect is similar. I'm looking forward to launching later this week!
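For the curious, the core of a Leitner scheduler really is tiny. Here is a sketch in Python rather than Node; the box count and intervals are made up, not Anki's SM-2 parameters:

```python
# Toy Leitner system: a correct answer promotes the card one box up
# (longer review interval), a miss demotes it to box 0. Intervals in days.
INTERVALS = [1, 2, 4, 8, 16]

def review(card, correct):
    # card is a dict holding its current 'box'; returns the number of
    # days until the card should be shown again.
    if correct:
        card["box"] = min(card["box"] + 1, len(INTERVALS) - 1)
    else:
        card["box"] = 0
    return INTERVALS[card["box"]]
```

A card answered correctly a few times in a row settles into the 16-day box; a single miss sends it back to daily review.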
Tossrock 2 hours ago 2 replies      
Only marginally related, but what the heck is going on with the text on gwern's site? Are the per-character varying gray values supposed to convey some kind of information?
Mathematicians find way to put 7 cylinders in contact without using their ends sciencenews.org
194 points by ColinWright  12 hours ago   74 comments top 16
ColinWright 11 hours ago 1 reply      
It has been known for a long time that one can arrange 7 cylinders to be mutually touching. That was written about by Martin Gardner decades ago, and was set as a puzzle.

The result had the end of one cylinder touching the length of another, so the question arose: can one arrange seven cylinders to be mutually touching without using the ends? The easiest way to say this is to ask for seven infinitely long cylinders, mutually touching.

This has only recently been settled, hence this paper. It's believed impossible to arrange eight identical infinitely long cylinders to be mutually touching. I suspect the result is in fact known, but I haven't searched diligently for it.

There is an associated puzzle that uses cylinders that are very short - think coins. How many coins can you arrange to be mutually touching?

Consider that a puzzle. I can do 5. If you can do more, there's a mathematical paper in it for you, should you care.

chubot 10 hours ago 1 reply      
This sort of reminds me of: http://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c

Because it is a 3D object that was found using mathematics. Any other examples?

I think there are lots of new objects discovered in higher dimensions, but I like when there is something you can actually build and see. I also like how it appears to be very asymmetrical.

ColinWright 10 hours ago 0 replies      
Question I've not found the answer to:

    It's possible to have 7 arbitrarily long cylinders
    mutually touching. Currently it's not possible to have
    more than 5 coins (which are short cylinders) mutually
    touching. As the cylinder's aspect ratio decreases,
    where are the thresholds: 7 -> 6 -> 5?

soneca 11 hours ago 2 replies      
Great design for a no-gravity space station.

Easy to go everywhere from everywhere.

huhtenberg 11 hours ago 0 replies      
Here's the original solution - http://www.mathpuzzle.com/7cylinders.gif
josephwegner 11 hours ago 9 replies      
This is going to sound like trolling, but it's not - I'm honestly curious.

Why is this important? Is it just cool, or is there some real world application? Was someone paying for this research for some reason, or was it just a mathematician's hobby?

EDIT: For the record, I don't have any problem with "just cool" research. I do that kind of research often (albeit, not as smart), and totally understand the value in it. Just wondering if this had an application immediately.

analog31 9 hours ago 1 reply      
Ask HN: Is there an explanation somewhere of this "certification" that a layperson (college math and physics major, and self taught programmer) could understand?
josh-wrale 11 hours ago 3 replies      
Manufacturing errors? I can't tell if this is plastic, but if it is, surely a machinist can do better with metal.
DonGateley 4 hours ago 2 replies      
So, given that this involves rounding errors in solutions to equations it is only an approximate solution. A solution with fuzz. Is it possible to prove that each point of "contact" is exactly coincident? Or to prove that exact coincidence is not possible. There seems to be room for deeper work on this problem.
kang 10 hours ago 1 reply      
Links from one of the best riddle websites: http://www.wuriddles.com/cigarettes.shtml

    Cylinder Length | Max cylinders that can touch | Min cylinders that can touch
    Infinite        | 7                            | 5
    Actual          | 9                            | 7
    L = D           | 4                            | 4

thret 9 hours ago 0 replies      
This is just one of many puzzles and mathematical curiosities popularized by Martin Gardner. His books and columns make delightful light reading for anyone with a curious mind.
laxatives 11 hours ago 3 replies      
Never seen this sort of thing before. Does this hold for an arbitrary radius? What if these were just lines in 3 space?
Zitrax 11 hours ago 2 replies      
Something special about 7 cylinders or would this be equally hard/simple for 6 or 8 ?
ebol4 11 hours ago 4 replies      
Don't they just have to not be parallel?
pjbrunet 8 hours ago 0 replies      
Proof the Internet is a series of tubes.
gary4gar 9 hours ago 2 replies      

    they built a wooden model to demonstrate their answer, although Bozki
    notes that the model doesn't verify the result because manufacturing
    errors are much greater than any errors the computer could have made.
What's the point, when it's not practically possible?

Hello JIT World: The Joy of Simple JITs (2012) reverberate.org
27 points by jeffreyrogers  3 hours ago   1 comment top
thedigitalengel 1 hour ago 0 replies      
Related: https://github.com/sanjoy/bfjit (a brainfuck interpreter with a tracing JIT for hot loops).
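For readers who haven't seen one, this is roughly the baseline such a project starts from: a plain dispatch-loop interpreter whose hot `[...]` loops are what tracing then compiles. A minimal sketch of my own in Python, not code from the linked repo:

```python
# Minimal brainfuck interpreter: the plain switch-style dispatch that a
# tracing JIT would replace for hot loops. Sketch only.
def bf(program, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    # Precompute the matching-bracket table for [ and ].
    jump, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return "".join(out)

# bf("++++++++[>++++++++<-]>+.") → "A"
```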
Alan Kay at Demo: The Future Doesn't Have to Be Incremental youtube.com
159 points by corysama  11 hours ago   59 comments top 22
dredmorbius 9 hours ago 4 replies      
The core idea of non-incremental progress: Xerox PARC accomplished what it did in large part by forcing technology 15 years into the future. The Alto, of which PARC built around 2,000 units, mostly for its own staff, cost about $85,000 each in present dollars. What it provided exceeded the general-market personal computing capabilities of the late 1980s. This enabled the "twelve and a half" inventions from PARC which Kay claims have created over $30 trillion in generated wealth, at a cost of around $10-12 million/year.

Kay also distinguishes "invention" (what PARC did) -- as fundamental research, from "innovation" (what Apple did) -- as product development.

Other topics:

Learning curves (people, especially marketers, hate them)

"New" vs. "News". News tells familiar stories in an established context. "New" is invisible, learning, and change.

The majority acts based on group acceptance, not on the merits of an idea. Extroversion vs. introversion.

There are "human universals" -- themes people accept automatically, without marketing, as opposed to non-universals, which have to be taught.

Knowledge dominates IQ. Henry Ford accomplished more than Leonardo da Vinci not because he was smarter, but because humanity's cumulative knowledge had given him tools and inventions Leonardo could only dream of.

Tyranny of the present.

bitwize 9 hours ago 1 reply      
When Tetsuya Mizuguchi left Sega to form Q Entertainment, he and his team started work on the famous puzzle game Lumines. Their stated goal was to create a game that was merely half a step forward, as opposed to their previous game, Rez, which was two steps forward -- and didn't do well at market.

Smalltalk was at least two steps forward, probably much more than that. The critical thing that put it well into the future was the fact that it made the boundary between users and programmers even more porous. I'm sure many of you have heard the stories of teenagers sitting down to an Alto and writing their own circuit design software in Smalltalk. That kind of power -- turning ordinary people into masters of these powerful machines easily and efficiently -- is just the sort of revolution originally desired and promised us by the first microcomputer marketers.

But of course it didn't do well at market at first, so we had to settle for the thing that was merely half a step forward -- the Macintosh.

exratione 7 hours ago 0 replies      
Allow me to put forward a historical analogy: standing in 2014 and arguing a case for gentle future changes in [pick your field here] over the next few decades, based on the past few decades, is something like standing in 1885 or so and arguing that speed and convenience of passenger travel will steadily and gently increase in the decades ahead. The gentleman prognosticator of the mid-1880s could look back at steady progress in the operating speed of railways and similar improvement in steamships throughout the 19th century. He would be aware of the prototyping of various forms of engine that promised to allow carriages to reliably proceed at the pace of trains, and the first frail airships that could manage a fair pace in flight - though by no means the equal of speed by rail.

Like our present era, however, the end of the 19th century was a time of very rapid progress and invention in comparison to the past. In such ages trends are broken and exceeded. Thus within twenty years of the first crudely powered and fragile airships, heavier than air flight launched in earnest: a revolutionary change in travel brought on by the blossoming of a completely new branch of applied technology. By the late 1920s, the aircraft of the first airlines consistently flew four to five times as fast as the operating speed of trains in 1880, and new lines of travel could be set up for a fraction of the cost of a railway. Little in the way of incrementalism there: instead a great and sweeping improvement accomplished across a few decades and through the introduction of a completely new approach to the problem.

semiel 10 hours ago 3 replies      
One of the problems I've been struggling with lately is how to arrange for this sort of work, while still allowing the researchers to make a living. Governments and large corporations seem to have by and large lost interest in funding it, and a small company doesn't have the resources to make it sustainable. How do we solve this?
corysama 11 hours ago 1 reply      
For ideas on how to make non-incremental progress in technology, check out Kay's earlier talk "Programming and Scaling" http://www.tele-task.de/archive/video/flash/14029/
xxcode 5 minutes ago 0 replies      
Hacker News is the epitome of short term thinking, with projects like 'weekend projects' etc.
neel8986 9 hours ago 2 replies      
Though a bit obnoxious, I really liked the talk. Alan talked about 2007. If we look back, that was when the first iPhone was announced. We all knew that within a timespan of seven years the processor would be much faster (it is now almost 20 times faster), connectivity would be faster, and it would have a better display and better sensors. But still, none of the applications that exist today (except games and animations, maybe) take all this improvement into consideration. We are still stuck in old ideas of messaging apps, photo sharing apps, maps, and news aggregators. I believe all those apps could have been conceived back in 2007. No one thought about new use cases that could take advantage of the improved hardware. In fact, some of the novel concepts like Shazam or Word Lens were conceived 4-5 years back. Now we are stuck at a time where the giants of the internet are just struggling to squeeze a few more bytes of information from users for the sake of making more money from ads. It is difficult to believe that seven years after the first smartphone, the most talked-about event this year was a messaging app being acquired for $19 billion! I think hardware engineers push the limits, going to any extent to make Moore's law come true, but we software guys fail to appreciate what is coming in the future.
jal278 10 hours ago 0 replies      
A practical suggestion Kay makes is that one way to brainstorm start-ups is to think of technological amplifiers for human universals [1]

[1] http://en.wikipedia.org/wiki/Human_Universals

leoc 6 hours ago 0 replies      
It's amusing that the same optical illusion has been discussed by Michael Abrash https://www.youtube.com/watch?v=G-2dQoeqVVo#t=453 and Alan Kay https://www.youtube.com/watch?v=gTAghAJcO1o#t=1534 in talks on very different topics recently.

> Thomas Paine said in Common Sense, instead of having the king be the law, why, we can have the law be the king. That was one of the biggest shifts, large scale shifts in history because he realised "hey, we can design a better society than tradition has and we can put it into law; so, we're just going to invert thousands of years of beliefs".

Pfft, tell that to the 13th-century Venetians: http://www.hpl.hp.com/techreports/2007/HPL-2007-28R1.pdf . Constitutionalism isn't that new an idea.

cliveowen 10 hours ago 1 reply      
Thank you for posting this. The best quote so far has been: "Prior to the 18th century virtually everyone on the planet died in the same world they were born into". This is a realization I never had; we take progress for granted, but it's actually a precious thing.
MrQuincle 10 hours ago 0 replies      
Perhaps he's a tad obnoxious, but he says some interesting things.

- think of the future, then reason backwards

- use Moore's law in reverse

- an introvert character can be helpful in coming up with real inventions

- be interested in new ideas for the sake of them being new, not because they are useful now, or accepted, or understandable

- it seems good to sell stuff that can be instantly used; people, however, like many other things. They might, for example, like to learn or get skilled. The bike example is one, but so are the piano and the skateboard.

At least, this is what I tried to grasp from it. :-)

rafeed 8 hours ago 1 reply      
Firstly, I enjoyed his talk. It was pretty insightful into the ways so many businesses and corporations think today, and how we've lost track of building the future. However, one thing really bugged me about his talk. It basically boils down to the fact that you have to take Moore's Law into consideration and pay a hefty sum for technologies that are 10-15 years ahead of their time in order to make any useful invention for the next 30 years. How does one "invent" in his terms today without the equity he refers to?
kev009 8 hours ago 0 replies      
I know this is really trivial, but I found the extended music intro and his unamused reaction quite comical. Over-analyzing, it's a juxtaposition to parts of his talk.
forgotprevpass 7 hours ago 1 reply      
At 15:00, he mentions research on the efficiency of gestures done in the 60's. Does anyone know what he's referring to?
andreyf 6 hours ago 1 reply      
Stephen Wolfram's demo he referred to doesn't appear to be up yet, but this one from a couple weeks back is pretty sweet: https://www.youtube.com/watch?v=_P9HqHVPeik
athst 5 hours ago 0 replies      
This is a great excuse for buying the nicest computer possible - I need to compute in the future!
revorad 11 hours ago 0 replies      
The talk starts at 2:42 - http://youtu.be/gTAghAJcO1o?t=2m42s
kashkhan 9 hours ago 0 replies      
Anyone have a link to the Q&A after the talk?
sAuronas 6 hours ago 0 replies      
Playing Wayne Gretzky:

In 30 years we will (ought to) have cars that repel over the surface by a bioether [sic], possibly emitted from the street, which will have become (been replaced by) linear parks that vehicles float over and never crash. Because of all the new park area, some kids in the suburbs (because they will be park-rich) will invent a new game that stretches over a mile and involves more imagination than football, basketball, and soccer combined.

That was an awesome video. C++ == Guitar Hero

LazerBear 10 hours ago 0 replies      
This is very relevant to something I'm trying to build right now, thank you for sharing!
Zigurd 10 hours ago 2 replies      
A lot of his talk was wasted on irrelevant complaining about lack of capex in R&D. That's only partly correct. Any one of us can afford to rent a crazy amount of computing power and storage on demand. Pfft.

In short, skip the first 20 minutes. He's being a grumpy old man. In the second part, he's a pissed-off genius and revolutionary.

Roritharr 10 hours ago 1 reply      
Wow, he really comes off as obnoxious.

Yes, what was done at Xerox PARC was really amazing and cool, but can you please contain your ego at least a little?

This talk sounds basically like him explaining to everybody in detail how awesome his achievements are.

EDIT: The best point is where he explains with charts that 80% of people are basically sheeple...

HTerm The Graphical Terminal 41j.com
17 points by evolve2k  3 hours ago   discuss
Why UPS Trucks Rarely Turn Left priceonomics.com
151 points by lelf  10 hours ago   101 comments top 24
bunkat 7 hours ago 2 replies      
I have a friend who is a UPS driver, and he always rolls his eyes when somebody mentions this to him. UPS trucks definitely turn left. Sure, the delivery order generally tries to reduce left-hand turns, but on the routes he drove he saw very little difference when comparing them before this 'change' and after it. The biggest cost savings came from reducing the driver and truck counts and paying the remaining drivers overtime to deliver more packages.
kristiandupont 9 hours ago 10 replies      
In Denmark, the term højresvingsulykke (right-turn accident) is common, meaning when a car, typically a truck, turns right and hits a cyclist. Lots of people are killed this way every year. One theory is that it is primarily foreign truck drivers who are not accustomed to the high number of cyclists in Copenhagen.

When I initially saw the title, I was expecting it to be a mistake and actually say "Why UPS Trucks Don't Turn Right". I guess the problem is less severe in the rest of the world.

rdl 5 hours ago 0 replies      
Whenever I get annoyed at labor unions and think they're mostly inefficient rent-seekers, make work, etc. (which they tend to be, in declining industries), the counter-example of UPS comes to mind -- they're unionized, but in a stable or growing industry, and seem to have both fair (not too high, not too low) wages and great performance.
mistermann 8 hours ago 7 replies      
I'd very much like to know why they don't send a text or email prior to a physical delivery so I can confirm whether I will be home to take delivery and potentially offer a reroute address, or simply say "do not attempt", rather than the current approach of making three unsuccessful attempts.
mck- 9 hours ago 0 replies      
If you're running a business, and would like to apply route optimization to manage your fleet, have a look at Routific [1]

Disclaimer: I'm the founder of Routific

[1] https://www.routific.com/developers

PythonicAlpha 10 hours ago 2 replies      
I just wondered whether the parts of the trucks (especially the wheels and steering parts) suffer uneven wear from this practice.

Apart from that, sounds like a good engineering/optimization result.

PS: But there is a solution to that problem, as I see from the other posts around... just bring the trucks with uneven wear to the UK after a while...

guelo 9 hours ago 1 reply      
It's interesting that UPS drivers have to solve the traveling salesman problem every morning with additional real-world constraints like this one.
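As a toy illustration of the kind of constraint involved (not UPS's actual routing system), here is a nearest-neighbor heuristic where candidate stops are penalized if reaching them would require a left turn; the coordinates and penalty value are invented:

```python
# Toy delivery-route heuristic: nearest-neighbor TSP on 2D points, with
# an extra cost whenever the next leg would require a left turn.
# The penalty value is illustrative, not UPS's actual model.
import math

LEFT_TURN_PENALTY = 5.0  # assumed cost of waiting to cross traffic

def is_left_turn(prev, cur, nxt):
    # Cross product of the two legs; positive means a left turn.
    ax, ay = cur[0] - prev[0], cur[1] - prev[1]
    bx, by = nxt[0] - cur[0], nxt[1] - cur[1]
    return ax * by - ay * bx > 0

def route(depot, stops):
    # Greedily visit the cheapest remaining stop, counting the penalty.
    path, todo = [depot], list(stops)
    while todo:
        prev = path[-2] if len(path) > 1 else None
        def cost(s):
            c = math.dist(path[-1], s)
            if prev and is_left_turn(prev, path[-1], s):
                c += LEFT_TURN_PENALTY
            return c
        nxt = min(todo, key=cost)
        path.append(nxt)
        todo.remove(nxt)
    return path
```

With the penalty in place, the greedy route defers a stop that would need a left turn even when it is geometrically closer.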
js2 3 hours ago 0 replies      
This article reminded me of the http://en.wikipedia.org/wiki/Michigan_left
jonalmeida 3 hours ago 1 reply      
For people who don't live in North America: at a stop you can follow a "free right" rule, where you're allowed to turn right even when the traffic light is red, as long as there aren't pedestrians crossing.

I can see the free-right policy being a major factor in this as well, but I don't see it mentioned.

ntoshev 8 hours ago 0 replies      
You really need a data-driven approach to this. Gather statistics on actual vehicles going from point A to point B so that you know the best way to do it in the future.

I'm running a routing optimization service (http://fleetnavi.com if you are curious, but it's only in Bulgarian for now, although Chrome translate works really well on the site). The routes a mapping service like Google Maps will give you are not that great, really; in many cases the data needed to plan a perfect route just aren't available. We aren't using the stats from past routes yet, but it's a next step that would clearly be beneficial.

lutusp 7 hours ago 0 replies      
Apropos left turns, former FBI director J. Edgar Hoover was involved in a left-turn accident early in his career, after which he ordered that his drivers never turn left when he was in the car.


Quote: "A small army of agents spent days planning details of the trip. The supervisor drove the car with Hoover and Tolson. Hoover insisted on riding in the back seat, on the right side. He'd once been injured in a car wreck while sitting in the left rear seat, and he refused to sit there again. Also, the accident occurred during a left turn, and Hoover no longer allowed his drivers to make left turns. This complicated the route to Austin in those days before interstate highways."

michaelfeathers 9 hours ago 0 replies      
In the movie 'Cop Land' Ray Liotta's character (a cop) repeatedly uses the phrase "Don't fight, go right" to refer to a tactic fugitives use: avoiding left-hand turns. Apparently, if you know that, you can chase better.

I think it was meant as a double entendre too - referring to one's moral compass.

tehabe 8 hours ago 1 reply      
I can imagine that in a typical American city, but I wonder how this would work in a typical European city, where you don't usually have such nice, even blocks as in the US.

I could get to my parents w/o a left turn. So I don't think it is impossible to avoid left turns in a European city but I think it is much, much harder to do so.

ableal 9 hours ago 0 replies      
Aside from route optimization, something else that the tracking devices may allow is recording acceleration, braking and instant fuel consumption. This in turn makes possible to train and reward drivers for more efficient driving.

It's a big selling point for systems sold to trucking and public transportation companies.

Link- 8 hours ago 1 reply      
Hasn't this been confirmed on Mythbusters (in SF) as well (Episode 145)?
grannyg00se 9 hours ago 2 replies      
It's hard to imagine that a high number of intersections are so bad for left turns that a three right turn detour is warranted. I've seen some left turn lanes get severely backed up during rush hour on the busiest of streets, but as a general rule I'd expect the left turn would win. I'd be curious to see more data on this.
WalterBright 8 hours ago 1 reply      
It'd be nice if consumer GPS navigators offered an option like "prefer right turns".
kitd 10 hours ago 0 replies      
Ha, wondered why they've never caught on here in the UK.
noisedom 6 hours ago 1 reply      
I'm reminded of the one time I drove through New Jersey. Major arterials are all set up like interstates and left turns aren't an option. Usually you have to go further than your destination and take a right to get on an overpass that spits you out in the opposite direction on the arterial. It's very frustrating if you manage to get lost.
pcurve 9 hours ago 3 replies      
Amount of fuel saved is actually negligible. With a fleet of 100,000 trucks, it comes out to be less than 1 gallon per truck per year in savings.
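The parent's figure is just total savings divided by fleet size; since the article's total isn't quoted here, the arithmetic below uses two hypothetical totals to show how sensitive the per-truck number is (both totals and the fuel price are assumptions):

```python
def per_truck_savings(total_gallons_saved, fleet_size, price_per_gallon=4.0):
    """Per-truck fuel savings; every input is an assumption to plug in."""
    gallons = total_gallons_saved / fleet_size
    return gallons, gallons * price_per_gallon


# Two hypothetical fleet-wide totals give wildly different per-truck pictures:
print(per_truck_savings(10_000_000, 100_000))  # -> (100.0, 400.0)
print(per_truck_savings(50_000, 100_000))      # -> (0.5, 2.0)
```

So whether the savings are "negligible" depends entirely on which fleet-wide total you believe.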
KC8ZKF 9 hours ago 0 replies      
Mythbusters covered this. http://youtu.be/ppCz4f1L9iU
protomyth 10 hours ago 2 replies      
From the article: "While the no left turn rule has an appealingly simple and algorithmic quality to it, you will see UPS drivers take left turns on occasion, especially in residential neighborhoods without much incoming traffic."

So, no, they do turn left. More title mythology.

steven_pack 6 hours ago 0 replies      
I hope they refer to that as the "Zoolander" policy at UPS.


edemay 7 hours ago 3 replies      
Could this inform urban planning? Shouldn't we ban left-hand turns more often for everyone?
Why 'gallons per mile' is better than 'miles per gallon' datagenetics.com
74 points by squeakynick  6 hours ago   65 comments top 13
pash 3 hours ago 2 replies      
This topic came up on HN a couple of years ago [0], and I posted a comment that was well received. So here it is again, edited for the present context and updated with my recent thoughts:

People do seem to misinterpret MPG ratings. A study published in Science in 2008 [1] found that participants consistently overvalued vehicles with high MPG ratings. They assigned values linear in MPG rather than linear in its inverse.

The study's authors told participants to "assume you drive 10,000 miles per year for work, and this total amount cannot be changed." The participants were then asked to come up with values for vehicles of varying fuel-efficiencies. That is just the sort of optimization problem people face when choosing which car to buy, and apparently a fuel-efficiency metric that puts the amount of fuel in the numerator makes the problem easier to solve because expenditure is proportional to the amount of fuel burned, at least when distance driven is taken as given.

But in reading many of the words expended on this topic over the last couple of years, my lasting impression is that this is a lot of hullabaloo about the wrong problem. It's the lack of attention paid to the "miles" part of the equation that most needs fixing. If the goal is to reduce carbon emissions (or any of the other negative externalities of driving), then taking distance driven as fixed frames the problem in a way that obscures the real solution: we should be encouraging people to drive less.

Yes, reordering your daily life to drive fewer miles is more disruptive than simply buying a car that goes farther on a gallon of gas. And, granted, once you've chosen your style of life, minimizing the amount of gas you burn as you go about your daily routine is the thing to do (even if your optimization problem is a purely financial one). All the same, it's ludicrous to ignore the basic inefficiency of the suburban lifestyle that predominates in America while we wait for automotive engineers to come up with clever solutions to pricey gas and to carbon emissions that are twice as high per capita as in similarly wealthy countries.

Elon Musk isn't going to save us all by himself. Surely living closer to where you work, using mass transit, cycling, and walking more must be part of the solution as well. ... So maybe houses and apartments should come with a "miles per day" rating suggesting how far you'd travel getting to and from shops, restaurants, entertainment venues, and your place of work every day you live there. ...

0. https://news.ycombinator.com/item?id=4020885

1. http://www.sciencemag.org/content/320/5883/1593.summary (paywalled); http://nsmn1.uh.edu/dgraur/niv/theMPGIllusion.pdf [PDF]
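The reciprocal effect the study describes is easy to check with quick arithmetic. A sketch: the fixed 10,000-mile assumption is from the study quoted above, while the MPG pairs are purely illustrative:

```python
MILES = 10_000  # annual driving, held fixed as in the study


def gallons_used(mpg, miles=MILES):
    """Fuel burned over a fixed distance: linear in GPM, reciprocal in MPG."""
    return miles / mpg


# Equal-looking MPG jumps save very different amounts of fuel:
print(gallons_used(10) - gallons_used(20))  # 10 -> 20 MPG saves 500 gallons
print(gallons_used(33) - gallons_used(50))  # 33 -> 50 MPG saves only ~103 gallons
```

Valuing cars "linearly in MPG" treats those two upgrades as similar, which is exactly the illusion the study found.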

amluto 3 hours ago 1 reply      
I was hoping that an article about careful use of measurement units wouldn't contain a blatant mathematical error.

The article calculates, presumably correctly, that driving 70mph costs $3.66/hr more than driving 55mph in some reference car. It concludes that driving 70mph is worthwhile if getting to your destination an hour early is worth more than $3.66.

This is completely wrong. Driving 70mph for an hour gets you 15 miles farther than driving 55mph for an hour. Doing that costs you less than $3.66, since you're driving for less time. It also doesn't save you anywhere near an hour.
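Working the correction through with just the speeds (fuel rates omitted, since only the article has those numbers):

```python
def time_saved(distance, fast=70, slow=55):
    """Hours saved covering `distance` miles at `fast` instead of `slow` mph."""
    return distance / slow - distance / fast


# Driving 70 mph for one hour covers 70 miles; covering those same 70 miles
# at 55 mph takes ~1.27 h, so speeding saved ~16 minutes -- not an hour:
print(time_saved(70) * 60)

# To actually arrive a full hour earlier you need a ~257-mile trip:
print(70 * 55 / (70 - 55))
```

So the trip length at which the "one hour earlier" comparison even applies is much longer than the article's per-hour framing suggests.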

dnautics 5 hours ago 1 reply      
> Fuel consumption is a better measure of a vehicle's performance because it is a linear relationship with the fuel used, as opposed to fuel economy, which has an inherent reciprocal distortion.

Fuel economy is a better measure of travel potential because it is a linear relationship with the distance you can go, as opposed to fuel consumption, which has an inherent reciprocal distortion.

danbruc 3 hours ago 2 replies      
Next: Why liters per 100 kilometers is better than gallons per mile.
dredmorbius 5 hours ago 2 replies      
MPG is useful for range estimates: if you've got n gallons, you can travel m miles.

GPM is useful for budget estimates: if you have a trip of m miles, you'll need n gallons of fuel, with some given cost.

Or you can use a tool such as GNU units which handles reciprocal conversions with ease and aplomb.

anigbrowl 5 hours ago 1 reply      
I'll settle for reciprocal distortion if more people would switch to the metric system.
werdnapk 5 hours ago 1 reply      
Canada pretty much uses X litres/100 km nowadays, having recently replaced mpg. The lower the litres, the more efficient the vehicle. It actually was fairly easy to become accustomed to the new metric.
shmerl 31 minutes ago 0 replies      
How about switching to the metric system altogether?
analog31 5 hours ago 0 replies      
I think there are other examples of this, such as price-to-earnings ratio, and focal length of lenses. In those cases, since a denominator can go through zero, the ratio goes through a big discontinuity. And I've seen p/e ratios reported as "n/a" when they should really be represented by a big negative number.
leccine 2 hours ago 0 replies      
I guess this is why l / 100km is used in the EU instead of km / l.
sillysaurus3 5 hours ago 0 replies      
Also time-per-frame instead of frames-per-second.
taeric 4 hours ago 5 replies      
Citations, or you are just promoting your opinion.

I'm also curious about the numbers showing that 55mph is the most efficient. I do not particularly dispute that this is the case for the majority of vehicles. I am curious as to whether or not this is as it has to be, or because that is the way vehicles are built in the US. That is, could you do better with different high speed gearing?

pbreit 4 hours ago 1 reply      
Either I did not understand the post or it is totally uncompelling. MPG makes much more sense to me. Gallons per 100 miles (or whatever) sounds very clumsy.
Low-level is easy yosefk.com
113 points by luu  11 hours ago   34 comments top 20
mbrubeck 10 hours ago 1 reply      
I've found this very true, as I've moved over the years from very "high-level" programming (client/server apps, database-driven web sites) to "mid-level" programming (working on the Firefox browser) and then toward systems programming (working on Rust and Servo). The desire to stop dealing with masses of fragile dependencies keeps driving me lower in the stack. I really need to practice more assembly coding so I can continue in this direction...

You get some similar benefits if you are working on code where you have some control over the levels above or below, or if your project has any form of self-hosting. For example, Firefox, where most of the UI is rendered by Gecko and some of it even runs within a browser tab. If you're a web developer and you run into a rendering engine bug, you report it and then ship a workaround for three years as you wait for enough people to update. If you're a Firefox front-end developer, you can just fix the Gecko bug and ship the fix along with your feature.

com2kid 8 hours ago 1 reply      
Low level is a lot more fun in the very least.

I wouldn't say it is easier by any means.

The first time you open up the debugger and your v-table pointer is null (presuming you know what a v-table pointer is!) things start to be interesting.

Or there was that time I printf'd a 64-bit number and a random stack variable was corrupted. That was a lot of fun.

Memory protection? Haha. No.

For that matter, my team just came across a bug in our own code a couple weeks ago: we were performing a DMA memcpy operation (for which you get to set up your own DMA descriptors yourself, of course) over a bus while at the same time sending a command packet to the same peripheral that was being DMA'd to.


Expected things to be put into order for you? Nope. Not unless you implement your own queuing system. (Which we are doing for that particular component now.)

All in all it is a ton of fun though. I'm loving it. Having seen an entire system be built up from setting up a clock tree to init'ing peripherals to our very own dear while(1). (We actually moved away from while 1, async is where it is at baby! Callback hell in kilobytes of RAM! oooh yaaaah)

georgemcbay 8 hours ago 0 replies      
I get the point of the blog author and I partly agree with it; but there are different types of "hard" and "easy".

To produce much of value at the low end you need a really comprehensive understanding of the technology at the level you are working at, which has significant upfront learning (and likely just plain aptitude) costs that aren't really addressed too much here. Once you have that knowledge, then yes, you get far fewer surprises, but acquiring it in the first place is not at all trivial or "easy" (though it may seem like it if you're a geek who's been banging away with assembler for years... you've just forgotten how much effort you expended at that stage, probably because it felt fun to learn).

At the higher-end you can string together a bunch of frameworks and glue code you cut and pasted off Stack Overflow and get something that pretty much does what you want, most of the time, maybe, while barely understanding the underlying technology.

Which is "easier" or "harder" depends a lot on what you mean by those terms.

Also the assumption that "high-level" means HTML/CSS/JavaScript is not that useful for the overall debate since not all high-level development is as annoyingly unpredictable as HTML/CSS/JavaScript.

mrow84 7 hours ago 0 replies      
In the company I used to work for we covered quite a range of development targets, from embedded micros up to very high level factory control systems, and I always found the most challenging stuff was "in the middle" - stuff at the systems level.

At the bottom, it really felt like programming a machine, and I found it all good fun, much like solving a puzzle (with occasional obscure headaches, mostly compiler related). At the top everything is pretty abstract, and there's more freedom to do conceptually interesting things (for example we were doing quite a bit of adaptive control type stuff, which would have been a nightmare to write with lower-level languages).

In the middle, however, it seemed to me to be largely just a complicated network of interacting conventions. Those systems are neither firmly grounded in "reality", because they are generally trying to abstract away those details, nor are they "theoretically pure", because they need to be efficient (though there is a lot of interesting stuff there). What that means is that you simply have to learn and understand all those human-defined conventions to know what you're doing, and that makes solving problems at that level more difficult, or perhaps rather it requires much more hard-won expertise (which I never got much of - I just bugged other people until they would help me out!).

Obviously, my views are coloured by my personal experience, so make of them what you will.

Scramblejams 9 hours ago 1 reply      
A beautiful elaboration of what I've been feeling for many years. I don't get all this excitement over web programming. The process is a pain in the neck -- why doesn't this color turn red when it's supposed to? Oh, this IE workaround conflicts with this Firefox bug. To me it feels like a house of cards held up with old, desiccated duct tape. Yeah, you can do amazing things with cards and duct tape, but because of the shaky foundation, it ends up being vastly more frustrating and so much more work than it really should be.

I haven't responded by going as far down the stack as this guy has -- although I've done a fair bit of assembly and enjoyed it, I spend most of my time in Python and Erlang -- but coding web apps can be such a ghetto.

Datsundere 23 minutes ago 0 replies      
I'm sure making the assembly instruction set was not an easy task. The instructions are small, but the simplicity and the beauty of keeping it simple is in itself a great accomplishment.

How floating-point numbers are represented in computers is such a neat hack, yet it can be done with a simple formula with pen and paper.

That doesn't mean you could have come up with it.

Everyone knows E = mc^2. It's easy, but you didn't come up with it.

fizx 10 hours ago 2 replies      
Playing the piano is easier than playing a violin. I would argue that a world-class pianist is about as skilled as a world-class violinist.

If you could allocate skill points like an MMO, the violinist is spending 3 points on instrument mastery, and 7 points on musical mastery, while the pianist spends 1 and 9.

I hate spending my limited skill points on "browser mastery," so I mostly do lower level things.

svantana 10 hours ago 1 reply      
The author makes some valid points, but he seems indifferent to the most important question: why program computers at all? He doesn't seem to enjoy it much (although that may be more of a sarcastic tone). I think we need more devs who do it because they want to accomplish something particular, and fewer devs who are doing it 'cause computers are kewl (or to get paid, for that matter). With a utility view, the choice of low-high level gets less emotional:

* high-level is more productive in the short run, but may hinder you in the long run

* mid-level (C-ish level) tend to be more portable and more future proof

* low-level will probably give you better resource usage (speed, memory etc), but not necessarily these days

Personally, I find C++ (plus open source libraries) gives the best trade-off, but that's probably dependent on the task (I do audio/video analysis/synthesis).

kenjackson 10 hours ago 0 replies      
The big difference, and the reason most find low level harder, is that you can do a pretty bad job on front end code and still produce something of value. Whereas low level code tends to need to be more solid.
al2o3cr 9 hours ago 0 replies      
"Memory-mapped devices following a hardware protocol for talking to the bus will actually follow it, damn it, or get off the market."

Says somebody who clearly never had to deal with cheapass webcams in Linux... :)

faddotio 7 hours ago 0 replies      
I agree with this article. The Web is a shambling mess of crap technologies, just a Jenga tower of crap. When is it going to collapse under its own weight? The rise of native apps suggests that the collapse has already begun.
pags 10 hours ago 0 replies      
This seems pretty spot on to me. More moving parts = more failure points...as a web engineer, I find myself bitten more by the trappings of abstract code structure than by things like faulty algorithms. This ties pretty heavily into why I feel the standard technical interview for web engineering talent is severely broken, but I digress.
nly 6 hours ago 0 replies      
This seems like a kind of selection bias to me. For the most part we only build "high level" things on top of "low level" things when we're successful in finding a use for them, by definition. The more successful you are, the easier success seems. Put another way, if they weren't "easy" then there wouldn't be a "high level" above them to make the concept of "low level" meaningful or concrete.

Also, as a caution against asserting low level means "easy", I will take this opportunity to drop one of Murphy's Laws:

    "An easily understood, workable falsehood is more useful than a complex, incomprehensible truth."
sometimes written as

    "All abstractions are wrong, some abstractions are useful."
As an example of a low level abstraction that is both useful and wrong, consider the libc strtod() function, which converts a decimal string to a native floating point representation. If I were to give you the pre-parsed integer significand and integer exponent, base 10, then you'll find there's no mechanism in the C standard library to convert those two integers to the correct double value, despite strtod() having to do the very same thing, at some point after the parsing stage. If all you ever want to do are string to double conversions then this function will always have appeared quite low level, but the reality is that this is only the case because it's always been so damned useful.

Low level things are intrinsically useful and the more useful something is the less wrong it seems.
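The strtod() point above can be mirrored in Python (a sketch, not C): the language hands you a correctly rounded string parser, but composing the conversion yourself from a pre-parsed (significand, exponent) pair naively rounds twice:

```python
def naive_decimal_to_double(sig: int, exp10: int) -> float:
    # Two separate roundings: one computing 10.0**exp10, one in the multiply.
    return sig * 10.0 ** exp10


def parser_decimal_to_double(sig: int, exp10: int) -> float:
    # Round-trips through the string parser, which (like strtod) rounds once.
    return float(f"{sig}e{exp10}")


# The two agree on easy inputs...
print(parser_decimal_to_double(123, 2))  # 12300.0
# ...but the naive version can land an ulp off on hard inputs, which is why a
# correctly rounded (significand, exponent) -> double routine is nontrivial,
# and why it's a shame the libraries keep it locked inside the string parser.
```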

snambi 1 hour ago 0 replies      
The higher you go, the more abstractions, specs, standards, and frameworks there are. It's actually more complex, but once you understand "all" of it, it's not that hard. Lower level is easier to get started with, but the more complex the problem, the more difficult it is to solve.
pedalpete 6 hours ago 0 replies      
I've been thinking about this a bit lately. I'm not a low-level developer in the slightest, but the question I've been pondering is, when you get the low-level stuff, is it a lot more focused on simple I/O? The complexity (as I understand it) comes from understanding the register, address, hexadecimal stuff (which I don't really understand).

If I understand correctly, the OP is basically saying the same thing. The complexity in the higher levels comes in when we try to create an API further up the stack which is responsible for manipulating the more understandable data into something that the device understands at a low level. The difference is like retrieving a raw file stream vs. getting the file in standard UTF-8 format, which anybody can read.

What are your thoughts on this. Have I got it right? Or am I on the right track?

qwerta 9 hours ago 0 replies      
I found this very true, even for other kinds of low-level. I do database engines in low level Java.
Andys 8 hours ago 0 replies      
The conundrum is that the high-level customer-facing stuff wouldn't be possible without the low-level behind-the-scenes work that was done on protocols and kernels.
malisper 5 hours ago 0 replies      
If you look at the author's actual arguments you realize most of them rely on other people. I don't know about others, but I include many more bugs in my code when I write low level code as opposed to high level code. Having to deal with the chance that someone else included a bug below me, I think, is worth it if I don't have to worry about anywhere near as many bugs in my code.

IIRC pg has an essay about this.

MrClean 9 hours ago 0 replies      
As a young web engineer who has had the pleasure of dealing with what seems like 30 different versions of RSS feeds, which also appear to be evolving in random directions like living things, I can confirm this. (I've also messed around in C, and even assembly at one point.)
EpicEng 10 hours ago 1 reply      
This is overly simplistic. Sure, low level programming can be easy if you understand the basics of what you are doing, and the same is true for high level programming.

However, there are aspects of low level programming which are far more technical than anything you are likely to run into programming UI's or the like. Thread scheduling, OS development, compiler development, standard library stuff, etc. tend to be quite challenging from a technical perspective (I've done all but one of them.)

The author is picking one type of low level development and painting the rest with the same brush. Low level development occurs on more complicated architectures as well.

Of course, high level development can be challenging as well, but often in a different way. The challenge here lies in understanding the quirks of your libraries, creating a good user experience (terribly hard at times, but not often technically challenging), working around oddities of your platform, etc.

>And it sucks when you change a variable and then the other processor decides to write back its outdated cache line, overwriting your update.

Well... that's why you use memory fences (volatile if your language supports it) even for writes on types which would be atomic.

Recreating the THX Deep Note earslap.com
142 points by nkurz  13 hours ago   23 comments top 13
flycaliguy 10 hours ago 1 reply      
The song Spaced by Beaver & Krause made this sound before THX, in 1970 on the album Wild Sanctuary

3:10 at http://www.youtube.com/watch?v=2xKO3KAtDZ0

Edit: additional wikipedia searching reveals this unsourced fact

"A variation of the end of their track "Spaced" from the Wild Sanctuary album became the inspiration for dual gliding synthesizer soundtrack for the copied THX Sound Logo in movie theaters, also for which neither Beaver or Krause were compensated."


FatalLogic 13 hours ago 1 reply      
Here's a working link to the THX 'Deep Note' theme sound (the link on the page seems to be dead): http://www.uspto.gov/trademarks/soundmarks/74309951.mp3
TallGuyShort 13 hours ago 1 reply      
I use Deep Note as the sound for my alarm clock. It's perfect.
stuartmemo 7 hours ago 0 replies      
I made a version in JavaScript based on this description. You can hear it here - http://stuartmemo.com/thx-deep-note-in-javascript/
BillyParadise 6 hours ago 0 replies      
curtisullerich 12 hours ago 0 replies      
I appreciate the level of detail the author goes to in describing the creation process. Very informative. I did this myself once, but using a patch I made in Max/MSP for drawing and listening to line segments on a pitch vs time plane. For this particular use case, I generated the input score with a Python script rather than drawing them manually. I found that detuning the sustained tones made the biggest difference in matching the original sound, which the article author mentions. Here's a video of my patch: https://www.youtube.com/watch?v=xl4C4zsy9LY
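The structure described above — many voices gliding from random pitches to a slightly detuned chord — can be sketched in pure Python. All parameters here are guesses, and plain sines stand in for the original's richer waveforms:

```python
import math
import random

SR = 8000      # sample rate (Hz); kept low for a quick toy render
DUR = 3.0      # seconds: first half glides, second half sustains
N = 12         # number of voices

random.seed(0)
# Each voice starts on a random pitch and glides to a note of a chord stack;
# the targets get slight random detuning, as the article recommends.
starts = [random.uniform(200.0, 400.0) for _ in range(N)]
targets = [36.7 * 2 ** (i % 6) * random.uniform(0.998, 1.002) for i in range(N)]

samples = []
phases = [0.0] * N
for n in range(int(SR * DUR)):
    t = n / SR
    g = min(1.0, t / (DUR / 2))  # glide progress: 0 -> 1, then hold
    s = 0.0
    for i in range(N):
        f = starts[i] * (targets[i] / starts[i]) ** g  # exponential pitch glide
        phases[i] += 2 * math.pi * f / SR
        s += math.sin(phases[i])
    samples.append(s / N)  # normalize the mix to [-1, 1]
```

Writing `samples` out as a WAV (e.g. with the stdlib `wave` module) gives a crude but recognizable Deep Note-ish sweep.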
aye 11 hours ago 0 replies      
The ChucK version is worth checking out as well:


Thanks to ahmetkizilay in the comments.

rch 10 hours ago 0 replies      
Anyone ever try this with an analog synth?
pscsbs 8 hours ago 0 replies      
And who can forget The Simpsons' THX introduction?


drippingfist 12 hours ago 1 reply      
It reminds me of the score from There Will Be Blood.
tommydiaz 12 hours ago 0 replies      
Well that was well worth my time. Awesome.
linker3000 10 hours ago 1 reply      
Seriously, why?
braum 10 hours ago 0 replies      
The only part of this "deep note" that I like is the very end; top of the crescendo. The first part has always creeped me out worse than watching Hostel for the first time.
Functional C (1997) utwente.nl
43 points by X4  7 hours ago   6 comments top 3
colmmacc 3 hours ago 1 reply      
There's a common cliché that C programs, as embodied in Unix, are probably the most widely used form of functional programming - or at least CSP. This example is idiomatic:

     grep -c foo * | sort -rn | uniq
It's more than possible to structure C programs this way internally too, elegant even. Use forks and pipes if you want to keep it simple, build co-routines if you're up for it. It's probably the most practical and beneficial way you can re-apply the lessons of FP to C.

But the book doesn't mention either technique - just weird and ugly tricks to emulate tail-recursion, cons et al, probably the least effective parts of FP to try to use in C.
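The forks-and-pipes structuring mentioned above can be sketched compactly. This is a Python stand-in for the C idiom (POSIX-only, and the upcasing stage is just a placeholder for a real pipeline stage):

```python
import os


def through_child(text: str) -> str:
    """Run `text` through a child process that upcases it -- one pipeline
    stage implemented as a separate process, Unix-style."""
    down_r, down_w = os.pipe()  # parent -> child
    up_r, up_w = os.pipe()      # child -> parent

    pid = os.fork()
    if pid == 0:  # child: one stage of the pipeline
        os.close(down_w)
        os.close(up_r)
        data = os.fdopen(down_r).read()          # read until parent closes
        os.write(up_w, data.upper().encode())    # the "work" of this stage
        os._exit(0)

    # parent: feed the stage, then collect its output
    os.close(down_r)
    os.close(up_w)
    os.write(down_w, text.encode())
    os.close(down_w)  # EOF for the child
    result = os.fdopen(up_r).read()
    os.waitpid(pid, 0)
    return result


print(through_child("hello, pipes"))  # HELLO, PIPES
```

Chaining several such stages gives you the shell-pipeline structure inside one program, with the kernel handling buffering and scheduling.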

vitd 3 hours ago 0 replies      
I'm confused by the title of this article. The copyright in the PDF is 1999, not 1997, and it says it's about imperative programming in C, not functional programming. It says it's for people who learned functional programming in SML, but now need or want to learn imperative programming.
gangster_dave 4 hours ago 1 reply      
Automation Alone Isnt Killing Jobs nytimes.com
49 points by evolve2k  4 hours ago   48 comments top 10
increment_i 2 hours ago 1 reply      
Bit of an awkward narrative. Has that undergrad "crank it out the night before it's due" quality to it. Lots of words, none of them really saying anything.
pmorici 2 hours ago 7 replies      
"There is also a special problem for some young men, namely those with especially restless temperaments. They arent always well-suited to the new class of service jobs, like greeting customers or taking care of the aged, which require much discipline or sometimes even a subordination of will."

Did the above line from the article strike anyone else as flagrantly sexist?

sirdogealot 38 minutes ago 1 reply      
I don't get the "back in my grandpappy's day the steam engines left and we still survived" notion that automation is not killing jobs. Most everybody was employable back in the 1820s, simply because there was no automation. Train took your horse-carriage job? Go work in the fields.

If all department stores adopt automated cashiers, the cashier as a retail option as we know it is dead. Killed.

If all farms adopt tractors and automatic harvesting machines, the crop picker option as we know it is dead. Killed.

Etc. Etc.

There will come a time when we can conceive of a new task to be completed, while simultaneously sketching up the automatic machine to complete the task.

In fact, this job-killing-automation already exists in the form of the latest and greatest production lines. They were not built to be filled with workers, only product.

pcurve 1 hour ago 0 replies      
This is a throw-everything-and-see-what-sticks... kind of article. Financial crisis, long term unemployment stigma, business cycle, technology, demographics, all crammed into 1 page, and doesn't really answer its own question. A very frustrating read.
MWil 1 hour ago 0 replies      
Automation has always been an interesting topic to me. I wrote my upper-level paper in law school on automation for a labor law class. Unless something has changed recently, the Supreme Court hasn't really addressed whether intentional "job killing" from automation qualifies as anti-union activity. The last time the Court talked much about it, the new cool technology was cold typesetting.
wyager 1 hour ago 3 replies      
Automation doesn't "kill jobs" at all.

I'm tired of news articles that try to pass off "jobs" as some sort of discrete, easily quantifiable units. "Over a million jobs are being shipped to China", "12,000 jobs created", etc.

That's not how labor works. Or at least, it's a useless and misleading way of thinking about labor.

Sometimes automation makes a certain type of labor irrelevant. Those laborers need to find a new job (which possibly involves retooling) or starve (or, today, live off some sort of welfare or charitable income).

We are not even close to simply having no labor left that needs doing by humans. That's the only way you can really "kill" jobs; replace all human labor altogether. Otherwise, humans will just move to whatever they're still good at. Because of automation, the world will always be able to support those new jobs. The market will force this to be the case.

evolve2k 2 hours ago 1 reply      
We're getting mixed messages as to whether automation is slowly killing more jobs than it creates. I think low-level knowledge-worker "paper-shuffling" jobs are on the way out, but where to from here? What's our role as coders as we disrupt industry after industry?
MWil 1 hour ago 2 replies      
Automation has never killed jobs, only dis[re]placed them.
andkon 1 hour ago 0 replies      
My beef with this piece: automation probably isn't the only thing killing jobs, but it's the only thing that the author provides any evidence of having a causal role in driving job losses. Everything else mentioned (e.g. the financial crisis) is a sort of catalyst for labour market changes or a red herring.

I mean, if a dude gets shot and dies, we tend not to argue too much about whether it was the bullet that killed him, or the corresponding massive blood loss.

orasis 2 hours ago 0 replies      
Boo! lame article.
How many genetic ancestors do I have? gcbias.org
26 points by maxerickson  6 hours ago   1 comment top
babesh 2 hours ago 0 replies      
This doesn't take into account males who inherit the Y chromosome wholly from the paternal side. It also does not factor in mutations that occur along the way.
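If the article is making the argument I think it is, genealogical ancestors double each generation while your genome fragments into a roughly linearly growing number of ancestral blocks, so the number of *genetic* ancestors is capped. A back-of-the-envelope sketch — the ~22 autosomes and ~33 crossovers per transmitted genome per generation are rough assumed figures, and the Y-chromosome and mutation effects mentioned above are ignored:

```python
def expected_blocks(k: int) -> int:
    """Rough expected count of ancestral segments in one transmitted genome
    after k further generations: ~22 chromosomes plus ~33 crossovers/meiosis."""
    return 22 + 33 * k


def genetic_ancestor_cap(k: int) -> int:
    # You carry two transmitted genomes (one per parent), each traced through
    # k-1 further meioses; a genetic ancestor must contribute >= 1 block.
    genealogical = 2 ** k
    blocks = 2 * expected_blocks(k - 1)
    return min(genealogical, blocks)


for k in range(1, 12):
    print(k, 2 ** k, genetic_ancestor_cap(k))
```

Under these assumed figures the block cap starts binding around 10 generations back: beyond that, most genealogical ancestors contribute no DNA at all.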
Taking Pictures with a DRAM Chip translate.google.com
109 points by sizzle  14 hours ago   27 comments top 12
ChuckMcM 12 hours ago 0 replies      
Wow that brings back the memories. DRAM cameras, or "Rameras" were popular with robotics experimenters in the 80's and 90's. From "Android Design" by Martin Bradley Weinstein:

"The ramera was first developed by the Robotics group at Case Western Reserve University in Cleveland Ohio, in 1978."

Apparently they wrote up their work in the Sensor Group Journal. They had used a 4008 which was only 64 x 64 pixels.

Not surprisingly, this work was the basis for early optical mice, which use what is essentially a DRAM circuit as an imager to detect motion of the mouse in two directions without using a rolling ball. This led to a number of roboticists hacking their optical mice into simple cameras (which was much easier than popping the lid on an increasingly hard-to-find DRAM chip). That practice was then replaced by using CMOS imagers, which had become pretty cheap by the early 2000s, and of course these days you can get a camera module for a phone very inexpensively (see the PiCAM for $29, for example)

IvyMike 12 hours ago 0 replies      
Almost all DRAM has some amount of address scrambling--logically adjacent cells are not physically adjacent. It can get quite complicated. Here's a PDF that describes some of the why and how: http://ce-publications.et.tudelft.nl/publications/1162_addre...

I have been told that a quick way to reverse engineer the scrambling is to shine a circular pattern onto the physical dram, then read out the data and test common scramblings until the logical data shows the same circle.
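The circle trick can be sketched in a few lines of Python. Everything below is illustrative: the two candidate mappings are toy examples, not any real chip's scrambling, and the function names are made up.

```python
# Hypothetical sketch: recover a DRAM address scrambling by projecting a
# known (off-center) circle onto the array and testing which candidate
# de-scrambling reproduces it from the readout.

def make_circle(size, radius, cx, cy):
    """Reference image: 1 inside a circle centered at (cx, cy), else 0."""
    return [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 else 0
             for x in range(size)] for y in range(size)]

def apply_scramble(image, mapping):
    """Simulate reading out a chip whose cells are physically permuted."""
    size = len(image)
    out = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            sy, sx = mapping(y, x, size)
            out[sy][sx] = image[y][x]
    return out

def find_descramble(readout, reference, candidates):
    """Return the name of the candidate mapping that recovers the circle."""
    for name, descramble in candidates:
        if apply_scramble(readout, descramble) == reference:
            return name
    return None

# Two toy candidate mappings (real scramblings are far more involved):
candidates = [
    ("identity", lambda y, x, n: (y, x)),
    ("mirror-x", lambda y, x, n: (y, n - 1 - x)),
]

# Off-center circle, so a mirrored readout is distinguishable from it.
reference = make_circle(16, 4, 4, 8)
readout = apply_scramble(reference, lambda y, x, n: (y, n - 1 - x))
print(find_descramble(readout, reference, candidates))  # → mirror-x
```

Mirroring is its own inverse, so applying the "mirror-x" candidate to the mirrored readout recovers the reference exactly; the circle must be off-center or both candidates would match.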

ernok 12 hours ago 3 replies      
I am the original author of this writeup.

Did this hack together with my brother twenty years ago (look at the file dates)!

Strange to see my 1994 hack at #1 on hacker news twenty years later..

It was done with 64kBit DRAMs in a ceramic package. Descrambling the physical chip layout was a pain..

owenversteeg 12 hours ago 1 reply      
From what it looks like, if anyone wants to build one they'll need:

- An IC (the one he used seems to be http://www.ebay.com/itm/5-Rare-Vintage-GOLD-NEC-044-D4164D-4...)

- A small lens to place over the IC (http://www.ebay.com/itm/MTV-6MM-CCTV-IR-Lens-For-Security-IP...)

- A parallel to USB cord if you don't have a computer with a parallel port (http://www.ebay.com/itm/USB-to-PRINTER-DB25-25-Pin-Parallel-...)

Please correct me if I'm wrong about any of these parts - I haven't tried to build one myself.

Here's my rudimentary understanding of the pin connection: (to connect it yourself, use https://upload.wikimedia.org/wikipedia/commons/b/b3/Pin_numb..., https://upload.wikimedia.org/wikipedia/commons/e/e0/Parallel..., and https://upload.wikimedia.org/wikipedia/commons/e/e1/25_Pin_D... as guides)

fhars 13 hours ago 1 reply      
Of course these days there are only a few 4164s with the metal cap remaining, and they can be as expensive as a cheap webcam: http://www.ebay.com/bhp/4164-ram
owenversteeg 12 hours ago 3 replies      
Watch out - the source and binary file (http://www.kurzschluss.com/kuckuck/kuckuck.zip) is a tarbomb.
vii 11 hours ago 0 replies      
The technique described takes a low quality sensor (a DRAM chip with an open window onto it) and uses it to produce reasonable quality grayscale images.

Interesting to look at the source code of the Kuckuck program (written before the widespread adoption of UTF-8!) and how it uses High Dynamic Range tricks to read off the image at different exposures.

With much faster RAM and smaller cells nowadays maybe one could do something cool with very fast photography, but of course the faster/smaller the RAM the more trouble interfacing with it and calibrating the light intake.
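A sketch of that HDR idea in Python (not the original Kuckuck code; the decay model and names here are a made-up toy): each DRAM readout yields only one bit per cell, so summing the binary frames over a ladder of exposure times recovers a coarse grayscale value.

```python
# Toy model: a cell reads 1 once enough light*time has leaked its charge.

def simulate_readout(light, exposure, threshold=1.0):
    """One binary frame: 1 if the cell decayed during this exposure."""
    return [[1 if lx * exposure >= threshold else 0 for lx in row]
            for row in light]

def grayscale_from_exposures(light, exposures):
    """Sum the binary frames across exposures: bright pixels flip even at
    short exposures, so they accumulate a higher count (brighter value)."""
    h, w = len(light), len(light[0])
    acc = [[0] * w for _ in range(h)]
    for t in exposures:
        frame = simulate_readout(light, t)
        for y in range(h):
            for x in range(w):
                acc[y][x] += frame[y][x]
    return acc

# Toy scene: a bright, a medium, and a dark pixel.
light = [[4.0, 1.0, 0.1]]
exposures = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
print(grayscale_from_exposures(light, exposures))  # → [[6, 4, 0]]
```

The count per pixel is effectively a logarithmic light measurement, since the exposure ladder doubles each step.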

pravda 12 hours ago 0 replies      
I was going to comment that Steve Ciarcia did this in the 80s, and then I see he is credited on the page.

Byte September / October 1983 Steve Ciarcia: "Build the Micro D-Cam Solid-State Video Camera"

bananas 12 hours ago 0 replies      
One of my teachers did this in the 1980s with eproms. You could get a reasonable 1bpp silhouette out of a 2716 and a couple of lenses out of a broken pair of binoculars. You had to program it first then use the sun to erase the picture onto it which meant exposure times in the order of 20 mins on a sunny day to flip enough bits plus some figuring out of how the cells were organised. He had one wired to a BBC micro user port with a couple of shift registers and a BASIC program that copied the current state into video memory. I think this was around 1986 and was probably the coolest thing I'd ever seen at the time.

Edit: just remembered - he worked out that if you used a high power camera flash and fired it 20-30 times it had the same effect.

enthdegree 8 hours ago 0 replies      
Looks like its US Patent No. 4441125


TerraHertz 4 hours ago 0 replies      
This was an amusing read. I'd not heard of using DRAMs as image sensors, but I had discovered that EPROMs sort-of worked as image sensors. Back in the 80s I was writing video poker machine games. 6502 running at 700KHz clock rate, very simple boards using interleaved video/CPU memory access, and all graphics done via 8x8pixel sprites, that were programmed in EPROMs. One day I noticed that bright sunlight on a sprite EPROM (no cover label) produced on-screen garbage. With a bit of experimenting, I found I could get a fairly decent image into the screen. Wrote a program to generate an index mapping into the game screen RAM, so the sprite EPROM was mapping to the screen 'about right', then found that by programming all 1 (or 0, I forget now) into the EPROM then partially erasing it, I could get it quite sensitive to light. I only tried hand holding assorted crappy lenses in front of the EPROM window, and there were all sorts of optical and electrical and geometrical artifacts. But it kind of worked. Pity I have no photographs of those experiments. Didn't consider it anything more than a silly amusement. Especially since I already had a nice B&W PAL video camera.
toxicFork 13 hours ago 0 replies      
I'm also impressed by google translator!
America's Young Adults at 27: Results From a Longitudinal Survey Summary bls.gov
52 points by wallflower  10 hours ago   35 comments top 6
lkrubner 9 hours ago 6 replies      

"By 27 years of age, 32 percent of women had received a bachelor's degree, compared with 24 percent of men. "

This is a rather strong antidote to the idea that everyone gets a college degree:

"At 27 years of age, 28 percent of individuals had received their bachelor's degree while 38 percent had attended some college or received an associate's degree."

Also interesting:

"At 27 years of age, 34 percent of young adults were married, 20 percent were unmarried and living with a partner, and 47 percent were single, that is, not married or living with a partner."

noname123 8 hours ago 3 replies      
Can we start a HN Young Adults at mid to late twenties longitudinal survey?

Statistically speaking, I'm 27, over-educated with a BA degree, single and not living with a partner, making a 25th-percentile salary relative to the US average. Saving less than I should of my disposable income, but in the 10th percentile relative to my peers, since I've been working professionally for 5.5 years while most of my other over-educated peers in other fields are still enrolled in post-graduate training. Working in IT and living in an over-priced metropolitan area.

Anecdotally speaking, I'm ambivalent about marriage and career advancement. Personal experiences have informed me that relationships are less about true love and unfailing commitment than about being able to compromise, communicate and have the financial and mental wherewithal to deal with (inter)personal issues. Unlike my younger self, I'm less anxious about "coupling" and more keen on working on myself and working out my bad habits/addictions.

Personal experiences have also informed me that IT careers, esp. if you're not working at a Fortune 500 company, are quite transient. Programmers (maybe other professions as well?) are treated more as sub-contractor entities than as tenured employees - meaning that the company will keep you around if you're contributing efficiently to the bottom line and cut you without sentiment otherwise.

Whereas my younger self aspired to "apply to YC, work at Google," I'm less keen on the promises of potential prestige and fortune and more focused on the longer term: angling for more domain-specific positions (e.g., Computational Biologist vs. Software Engineer on Genomic Data Platform) and more work-life balance (summer hours, regular 9-5 schedule) to pursue my other hobbies outside of coding.

Everyone is very different but I'd love to hear what stage other young adults on HN are (not talking about the wide-eyed CS undergrad still dreaming about The Social Network), both statistically and anecdotally speaking.

mkoryak 4 hours ago 0 replies      
Ok, how many of us reading this are high school dropouts?
colmvp 8 hours ago 1 reply      
Too bad they didn't include Asian Americans in the survey. Much disappointment. Census data from Pew shows a lot of interesting differences between Asians relative to other races in America.

Anyways, interesting to note that women of all races get some college or a Bachelor's degree at a higher rate than their respective male counterparts.

Also interesting is the correlation between highest attained education level and likelihood of having a child by 27. Anecdotally, all friends in my various circles are college educated and only started having kids after the age of 29.

simon_ 8 hours ago 1 reply      
Interesting: approximately half of this (my) cohort's children do not live with married parents.

EDIT: Even more interesting, 41% of single (living alone) women at 27 have a child.

cnaut 8 hours ago 1 reply      
Interesting that the only pattern that was consistent across gender, race, and economic status was this:

"Despite being in the labor force a greater percentage of weeks, individuals held fewer jobs from ages 23 to 26 than they did from ages 18 to 22. While ages 18 to 22, individuals held an average of 4.3 jobs and were out of the labor force 26 percent of weeks. From ages 23 to 26, individuals held 2.7 jobs while being out of the labor force 16 percent of weeks."

Sending and receiving emails over IPv6 linkedin.com
8 points by r4um  3 hours ago   discuss
Amazon Dash amazon.com
558 points by sheri  1 day ago   240 comments top 52
dotBen 1 day ago 18 replies      
HN'ers asking why this isn't a cell phone app take note - this exemplifies why we (geeks) don't make good use cases for consumer tech and we should always be careful looking to our own habits and values when in a Product Development role.

We're rarely the target customer and rarely behave like "average Joe". We're naturally resistant to superfluous redundancy ("My phone can already snap a barcode, I don't need a separate device") when consumers don't even see the duplication, let alone the issue. They don't separate devices (or even apps) as having layers of similarity and just see things for their end functionality.

My mother would see a phone and apps as completely separate functionality to a physical device like this. She probably would have the Amazon Fresh scanner, the (theoretical) Google Shopping Express scanner and the (also theoretical) Whole Foods scanner and wouldn't even consider the duplication, let alone be frustrated by it. She doesn't care about the potential for an "open standard"/"common standard".

She also has an AppleTV and a ChromeCast connected to the same smart-TV that also has native apps within it (she mostly uses the native apps). Again, she sees no issue with that and might even buy an Amazon FireTV if she felt it was more compelling for one use.

Ultimately we shouldn't assume consumers value convergence, especially when it creates ever increasing complexity in user experience (eg opening an app to snap a barcode vs pressing a single button on an Amazon Fresh scanner)

ADDED: If you don't have parents that also work in tech, go visit them and just watch them use technology without prompting. Ask them about their experiences, their frustrations, their decisions behind purchasing specific equipment and downloading particular apps. It's very insightful.

dangrossman 1 day ago 4 replies      
Based on the comments, I'm guessing few people here have ever worked retail and held a barcode scanner.

Break out your phone, load up your barcode scanning app (there's 20 seconds right there even if the phone is in your pocket). Now try to actually scan something with it. You'll spend another 30 seconds lining up the little on-screen window with the code, rotating things, waiting for the camera to focus, and even having to move to another location if you're not in bright lighting. It's a terrible experience and that's why you don't see stores checking people out using the camera of an iPad.

A barcode scanner, on the other hand, just works. You point it in the general vicinity of the barcode, press the button, and it's scanned. You don't have to perfectly align anything, be in specific lighting, or wait for a camera and an app. I'm sure you've seen cashiers run multiple things over a scanner in under a second.

Amazon Dash isn't just a subset of your phone's functionality. It's a dedicated barcode scanner, which is hardware you don't have on your phone.

bluthru 1 day ago 4 replies      
Listening to a young child carefully pronounce words for the narration was a bit distracting and slightly irritating. A sentence or so would be fine, but narrating the whole video was an exercise in patience.

Or do I just have a cold, black heart?

olalonde 1 day ago 1 reply      
My first thought: people are going to bring this to retail shops to get the benefits of brick-and-mortars shopping while benefiting from the low prices / delivery of Amazon.

A lot of people already kind of do this. They go to a shop, find the items they like and look up on the web if they can get a cheaper price by ordering online.

This version of the product might not be so practical for this use case though since it requires a WiFi connection and can probably only scan AmazonFresh barcodes.

wehadfun 1 day ago 4 replies      
For some reason this made me thing of http://en.wikipedia.org/wiki/CueCat
donretag 1 day ago 2 replies      
"Dash ... works directly with your AmazonFresh account"

Which means it is only available in three locations (SoCal/SF/Seattle).

revelation 1 day ago 6 replies      
Why is Amazon gold plating their fresh service when they didn't manage to meaningfully expand it since 2007?
jameswilsterman 1 day ago 2 replies      
So assumedly this will work in a store also? Could I go 'grocery shopping' at Whole Foods and end up having everything shipped to me by Amazon for cheaper?

Can easily see this evolving into an Amazon price comparison tool for mobile use. Maybe I get a flash discount if the GPS has me standing in a Best Buy already.

alaskamiller 1 day ago 5 replies      
Number of steps to scan grocery by phone:

1. Find your phone

2. Unlock

3. Swipe left to home page three or maybe four

4. Visually scan for the AmazonFresh icon and tap

5. Wait for loading

6. Start scanning action

7. Confirm and pay

Number of steps to scan grocery by Dash:

1. Get device from drawer or pantry

2. Press one button and scan

3. Confirm and pay

For the target demo (30+, married, households with children), option 2 wins hands down. Because you will easily be distracted and stop using option 1 and not complete checkout.

Amazon knows CPG and commerce better than you do.

chunkyslink 22 hours ago 2 replies      
Nice try NSA!

Seriously though, it worries me that there are more and more 'listening devices' in my home.

We've seen what has happened recently with the NSA listening to calls. What is to stop the authorities getting a back door into all these devices and just recording everything?

aray 1 day ago 2 replies      
Anyone know the battery life/lifetime of these? If it's months, that's a lot more convenient to keep in the pantry. As a kitchen appliance it makes a lot of sense, but I don't have any muscle memory for "charging" appliances.
mcintyre1994 1 day ago 1 reply      
I've been expecting Tesco (UK) to do this for ages. They have supermarkets literally operating on this sort of device, you scan+bag as you go, and they have a decent national delivery service.
asnyder 1 day ago 2 replies      
I wonder if this will lead to showrooming of groceries, like Amazon's done with books. The only thing preventing this is the wifi requirement, but of course one can already do this with their phone. Though it does make it even easier.
joeld42 1 day ago 2 replies      
I hope that's a bottle opener on the top.
plg 1 day ago 2 replies      
next day what shows up, exactly?

6 granny smith apples?

a 15 pound bag of golden delicious?

3 MacBooks pro?


mfrommil 1 day ago 0 replies      
Amazon needs to scale Fresh in order for it to be more successful. On a micro level, scaling could come in one of 2 ways: (1)increase order frequency or (2)average order size.

(1) Order frequency - Right now, a typical customer likely picks up groceries when they're out and it's convenient. This very well could be on the commute home from work, later at night, etc. With Dash sitting around the kitchen, Amazon has now created a very tangible reminder in the form of the Dash device to order your groceries, rather than waiting until it pops into your mind (and possibly not buying on Amazon).

(2) Average order size - As someone posted above, it takes 1 or 2 button clicks to reorder an item using Dash. Compared to the current way of online grocery shopping, Dash eliminates a lot of possibilities of forgetting to reorder something you intended to, because it is so simple. Compared to on the PC when you may forget to browse the snacks category, for example, and you forget to order chips and cookies. Way less likely to happen with Dash.

This doesn't address price concerns, but in terms of convenience for Amazon Fresh customers & increasing Fresh orders/order size, this seems like a massive win-win for Amazon and their customers.

anigbrowl 1 day ago 2 replies      
I'm surprised this exists to be honest. Not because of smartphones, but because I thought RFID chips would be sufficiently disposable by now that we'd have smart refrigerators and trashcans. I had to buy a new refrigerator last year and I was struck by how many different kinds of ice dispensers there were (a feature in which I have no interest whatsoever) vs smart refrigerators. I found exactly one of the latter - the unfortunately named T9000 from Samsung (Komm vith me if you vant a snack...), which is really just a refrigerator with a tablet stuck on the front, didn't do very much, was completely locked down (understandable) and cost $4000.
pepijndevos 11 hours ago 0 replies      
You could also take it to a store and scan things there. See what you get, pay less.

I know someone who runs a book shop, and he frequently has people browse for books, only to buy them online later for slightly less.

lucb1e 1 day ago 3 replies      
Does anyone know how they connect to WiFi? There's like two buttons and one beep for I/O.

(Half joking: Or is it a Speak Friend And Enter kind of thing, where you have to speak the WiFi credentials.)

justinpaulson 1 day ago 2 replies      
Why would they make a device rather than an app to do this? Seems pretty awesome as a service though...if Amazon Fresh was available here.
elleferrer 1 day ago 0 replies      
I like how it's a separate device - it's too bad it's only available on the west coast though. AmazonFresh reminds me of "Webvan". Webvan failed during the dot com bubble; maybe Amazon is trying to start this kind of business model up again. I think the grocery delivery service is a great idea, especially now with everyone being so connected. Webvan only failed because they expanded rapidly and weren't able to attract customers at their speedy pace, plus back then not everyone was so connected.
binarysolo 1 day ago 1 reply      
How does this differ from people using the mobile app? Mainly for Amazon Fresh integration?
jds375 1 day ago 0 replies      
This is a cool service to integrate with AmazonFresh. A similar promising alternative is https://www.rosieapp.com ... It's a pretty cool startup with similar goals.
ngoel36 1 day ago 9 replies      
I see absolutely no reason why this couldn't have been a mobile app with a bar code scanner and voice recognition...
alanh 1 day ago 0 replies      
The :cue cat lives!
babesh 1 day ago 2 replies      
Wait till a 2 year old gets ahold of it and scans everything 10 times.
blobbers 1 day ago 2 replies      
Has anyone else noticed that this completely rips off Hiku?Or is this repackaging the same product?


mandeepj 1 day ago 0 replies      
I think this is a big deal. When I am working in the kitchen, I see a lot of items needing refills, reordering etc... it is a little cumbersome to stop your work, wash your hands and get the phone to take a note. Not to mention, once you pick up your phone there are a hundred things going on - facebook, mails, texts etc to distract you for a long time. Knowing this distraction, if I just say - "OK, I will just remember it in my brain and will not pick up the phone" - then you know I never remember that task later. I think you can also use this device as your note taker in case you want to buy from somewhere else. Enjoy the convenience ;-)
vuzum 1 day ago 0 replies      
Best idea ever, wow what a great product and functionality! This should be in every house! :-)

Amazon acts like a startup still. Good for them!

LeicaLatte 1 day ago 0 replies      
Now this feels like the future.

I am curious what the upgrade cycles of these products will end up being. Can Amazon charge a subscription and keep giving me a new one?

joshdance 1 day ago 0 replies      
Grocery stores should be worried. Amazon is a predator that will take on any market it thinks it can win. Pretty soon they will not only be competing against the store next door, but against the Amazon grocery warehouse with all the advantages of scale and convenience.
paul9290 1 day ago 0 replies      
So you zap your grocery needs with Dash and a flying robot delivers them within 24 hours....
suyash 1 day ago 0 replies      
If someone has an invite, can you please send me one as well. I don't have the Invite Code needed to participate.
tetrep 1 day ago 2 replies      
Why was a separate physical device needed for this? It seems like a simple smartphone application would work just as well, if not better as I would assume virtually all of the target audience for this service already have a smartphone.
elevenfist 1 day ago 2 replies      
Sounds like this would only really be useful for packaged, processed goods. I'd have trouble trusting the quality of perishable items over the internet...
Dorian-Marie 1 day ago 0 replies      
And when you go to somebody else's place and you like something: "ok, I will add it to my Amazon shopping cart", etc... So many use cases - this is amazing.
lechevalierd3on 1 day ago 4 replies      
Fruits and vegetables do not have a barcode, right? How do you make it simple for those fresh products then?
ngoel36 1 day ago 1 reply      
How do I get a code?
thomasmarriott 1 day ago 0 replies      
When food became mp3's Amazon Dash / Fresh = Apple iPod / iTunes. Well done, Jeff.
beejiu 1 day ago 2 replies      
How does the voice feature work? Is it computer voice recognition, or does your voice get sent to a person to interpret?
dgarrett 1 day ago 0 replies      
It'll be interesting to see what Amazon does to continue to get more information on people's shopping habits.
anandg 1 day ago 1 reply      
If it's real, this is going to revolutionize grocery shopping. Also, open up a new market for such devices.
beamatronic 1 day ago 0 replies      
Didn't see this pointed out so far - The existing Amazon app can already scan bar codes.
EGreg 6 hours ago 0 replies      
Amazon is making service backed devices. As opposed to selling them.
cnaut 1 day ago 0 replies      
This is the CueCat done right! The technology is similar but there is a clear use case.
ahunt09 1 day ago 0 replies      
I can't believe this is not an April Fools' Joke.
redditmigrant 1 day ago 0 replies      
I like the simplified dedicated device for creating a low-friction experience; however, one downside of having this as a separate device rather than an app on your phone is that you are more likely to lose it, forget where you kept it, etc.
turisys 22 hours ago 0 replies      
disruptive innovation at its best...
cpezza85 1 day ago 0 replies      
hook it up with an invite code :)
thebokehwokeh2 1 day ago 0 replies      
What a time to be alive.
jhprks 1 day ago 0 replies      
People! amazon is known to be more sinister than google when it comes to april fools jokes, first it was using quad-rotors for delivery (which was stupid idea by the way) especially when it wasn't even april, now it is a little stick-like device automatically ordering food for you? what's next? a flying car? lol!
jhprks 1 day ago 0 replies      
Nice april fools prank amazon!!! LMAO
Mathematics: Why the brain sees maths as beauty bbc.com
45 points by sytelus  10 hours ago   33 comments top 10
chewxy 8 hours ago 3 replies      
I gave a talk just a couple of days ago on the beauty of Python code[0] - the talk hasn't been transcribed yet. In it I gave an example of how we've been conditioned by evolution to see clever things as beautiful[1][2].

I gave this scenario in the talk: imagine you were a paleolithic craftsman making handaxes. Now, you have a competitor making handaxes too. We'll assume you make a better handaxe - sharper, cleaner lines and more wedge-shaped than the competitor's. This leads to a higher "usefulness" than your competitor's handaxe.

You consistently make better handaxes than your competitor. You get better mates than your competitor. You have a higher probability of spreading your genes. Your "good" handaxes have become a proxy signal for your skill in making handaxes, which itself is a signal of your intelligence.

Over time, if the population uses only the "goodness" of the handaxe as a deciding factor in choosing mating partners, the population needs to be able to discern "good" handaxes from "bad" handaxes. As the lines get smoother, we become more and more tuned towards this standard of "beauty", and we put more and more reverence towards the makers of such items of beauty.

In short, humans see math as beautiful because it is a signal for intelligence, which is a deciding factor of fitness in our evolution. Indeed, our evolution seems to value intelligence over strength anyway.

Now, all the above is personal deduction of course. But I believe if we do go on and figure out the evolution of the medial orbito-frontal cortex of the brain, we'd see it's tied to intelligence.

[0]: https://speakerdeck.com/chewxy/beautiful-python - not very useful slides without the talk. The 4 criteria can also be translated to: usefulness, simplicity, averageness (I used the example: if the man on the street can read your python code and understand it, you did a good job), cleverness

[1]: The Art Instinct by Dennis Dutton

[2]: http://www.ted.com/talks/denis_dutton_a_darwinian_theory_of_...

ezequiel-garzon 8 hours ago 1 reply      
Here [1] is the study, and here [2] the 60 equations that were rated. I feel the brain scans themselves are more interesting than the scores assigned to the equations by the participants. Still, I would have omitted the equation descriptions, as I feel it could create a bias, and would have placed the great contender, Euler's identity, in a position other than 1.

[1] http://journal.frontiersin.org/Journal/10.3389/fnhum.2014.00...

[2] http://journal.frontiersin.org/Article/DownloadFile/390079/o...

tzs 6 hours ago 0 replies      
The article explained that brain scans show that when people say particular math is beautiful and particular math is not beautiful, the same parts of the brain are activating in the same way as when people distinguish between beautiful art and not beautiful art.

That just indicates that people who say something in math is beautiful are having the same feeling as someone who describes some art as beautiful. They are not feeling something different and just using the same word for it.

This is very interesting, but the title of the article led me to believe they were going to tell me WHY this is so. I didn't see that in there.

sytelus 4 hours ago 0 replies      
A lot depends on typography and convention as well. For example, Maxwell's equations in raw form are HUGE. It's only after you compress them by inventing shorthand for div and grad that they can fit on a T-shirt. Even then the integral forms do not feel as elegant. Similarly, the field equations for General Relativity are huge and ugly in "raw" form (involving 4x4 matrices). It's only after you shorten them using tensor notation that they start feeling elegant.
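For reference, the compressed form being described — Maxwell's equations in differential (div/curl) notation, SI units:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Each of these four lines unpacks into several coupled scalar partial differential equations once the vector operators are written out componentwise — which is the "raw" form the comment is contrasting against.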

I'm not sure we have an analog for this in the art world. However, the technique in this article is interesting because you can extend it to other domains. How about finding out which algorithms are "beautiful"? I think the most beautiful algorithm ever invented is binary search.
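That algorithm in a minimal Python sketch, for those who haven't seen why it earns the praise:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.
    Halves the search interval each step: O(log n) comparisons."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # → -1
```

The entire idea fits in a dozen lines, yet it turns a million-element search into about twenty comparisons.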

bane 4 hours ago 1 reply      
I really tried hard to enjoy mathematics...but I never got to the point of ever seeing it as beautiful. I've seen some code or algorithms I thought were beautiful from time to time.

I love art, prose, music...I can frisson, mostly to music, but sometimes to other mediums. I tend to think visually, finding it easier to put a sequence of pictures together to describe what I want to say rather than words. People comment to me that my replies in email are frequently just a link to a relevant picture.

Strangely, I don't enjoy most poetry or music that requires listening to the lyrics (songs with lots of humorous lyrics almost always fall flat with me). I did fine with Mathematics in school - studied hard, good grades - but it never really "sung" to me. I never enjoyed it beyond the natural joy one gets when developing a skill. But I guess I never really got to the point where it was speaking to me so I could reach this kind of enjoyment.

I can sit there and appreciate certain cool things. Euler's identity, given in the article, provokes some fascination...but beyond being a lucky coincidence it doesn't really stir anything in me.

This kind of makes me sad.

ivan_ah 5 hours ago 2 replies      
> this beauty of maths was missing from schools and yet amazing things could be shown with even primary school mathematical ability


Most people think that math is about arithmetic and memorization, which couldn't be further from the truth. But can you blame them since the only math teaching they received as kids was about arithmetic and memorization?

It would be interesting to see how much science would move forward if we had one generation of kids who grew up learning about the more "beautiful" aspects of math and ended up interested in it...

kristopolous 6 hours ago 2 replies      
I'm not trying to be a troll or argumentative, but personally I think it's all ugly and lifeless. Which is interesting because this article says that there's some biological link to people perceiving beauty.

I think all math is some little tool and all code is also some other little tool - some of it hairy, some of it less hairy. But I consider none of it "beautiful".

I think the entire notion is preposterous. I'm totally misanthropic to the whole notion.

Does that make me strange?

gballan 2 hours ago 0 replies      
Some math is so elegant that many understandably think it beautiful. To assess yourself, perhaps consider the area of the triangle [1, page 4], explained at a stroke -- not a formula in sight. If any math is beautiful, that is.

What is less appreciated is that such elegance is not limited to math. Many (all?) fields in science and engineering and technology are striving for a similar moment of clarity.

[1] A Mathematician's Lament by Paul Lockhart, http://www.maa.org/sites/default/files/pdf/devlin/LockhartsL...

NanoWar 6 hours ago 1 reply      
I'll never forget the moment when my math teacher revealed e^(iπ) + 1 = 0 after dealing with complex numbers. It was beautiful.
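For anyone who hasn't seen where the identity comes from: it falls out of Euler's formula evaluated at θ = π, which is why it reads as more than a coincidence:

```latex
e^{i\theta} = \cos\theta + i\sin\theta
\qquad\text{at } \theta = \pi:\qquad
e^{i\pi} = \cos\pi + i\sin\pi = -1 + 0i
\quad\Longrightarrow\quad
e^{i\pi} + 1 = 0
```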
kzahel 8 hours ago 1 reply      
I love analytic/holomorphic functions, they're so smooth and beautiful.
Virtual reality affects men and women differently zephoria.org
96 points by killwhitey  5 hours ago   85 comments top 22
GuiA 3 hours ago 4 replies      
An advisor for a startup where I used to work had worked heavily with VR systems in the 80s/90s. I was having coffee with her a year or so ago, when I had just received my devkit, and she was up in arms about how terrible motion sickness was on the Rift.

"I was telling companies back then that their VR tech was doomed from the start because of nausea, and it hasn't changed at all!"

This is a good tale of why having a more balanced gender ratio in the tech industry is important. If 90% of Oculus designers/prototypers/engineers are male, female voices will naturally get drowned out. The problem is that if your audience is potentially "all humans", the ratio is 50/50. (Although here, it seems like a) there exists prior research in the literature and b) good user testing could highlight that problem. If you're aspiring to do any form of quality R&D, being on top of those 2 things should be a priority.)

As for the title, I initially disliked it, but as I read the article I changed my opinion - I find it perfectly correct and just the right dose of irreverence. The way I interpret it is as follows: it would be correct to label a poison which systematically kills any man who drinks it, but no women, as "sexist". The problem is that our culture tends to bundle intent with sexism, which is not the case - whether a process is sexist or not is completely independent of intent, or even of whether there is a sentient agent behind it.

aaronem 1 hour ago 0 replies      
[Preface: dang, thanks for overriding the flags on this HN posting. The discussion thus far seems to be mostly worthwhile, unless it's gone to hell in the twenty minutes or so I've just spent writing this comment, and I'm glad to see it taking place here.]

boyd's baccalaureate thesis, of which her blog post appears to be a recapitulation for a general audience, dates from 2000 and spends considerable effort talking about how, for example, the lack of normal maps results in a lack of shape-from-shading cues, which makes it difficult for a visual system prioritizing those cues over parallax cues to develop a 3-space representation of a scene.

And that's fair enough! For 2000. Now, though, a decade and a half later, normal maps are ubiquitous in current-gen and next-gen 3D graphics; while it's more computationally expensive to render with them than without them, the Rift's resolution is only 1280x800 overall, and even with the added overhead of parallax calculation, that's still easily within the capabilities of a modern GPU.

This is the sort of thing one might expect to be addressed in boyd's discussion of her earlier research. That said, having once read the thesis and then gone back to review the blog post, it's quite plainly a simple restatement of circa-2000 conclusions, and bears no trace of having been updated in light of the enormous advances in graphical rendering technology which have taken place between then and now.

I don't know whether there is any evidence of women having trouble with Rift-induced simulator sickness at higher rates than men. Going by boyd's blog post, I can't know, because she doesn't bother to mention whether there is or there isn't; she just rehashes her earlier research and hangs "Oculus" and "sexist" off it as search keywords.

This would be disappointing in general from someone reputed as highly as danah boyd; much worse, though, it hamstrings her entire point! Her basic thesis, in this blog post, is "This is a discussion we need to be having." But there's no knowing whether that's true, because in comparison with modern rendering technology, the research on which she bases that statement is hopelessly outdated, and she presents no evidence to suggest that people who rely on shading cues have the same problems with today's VR technology as with that of fifteen years ago.

aamar 2 hours ago 2 replies      
This article's definitions of "motion parallax" and "shape from shading" are quite different from my understanding. Can anyone shed any light on this? Specifically:

"Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn't suddenly grow and concludes that it's just got closer to you."

Whereas I believed "motion parallax" to be moving one's head so as to compare an object's displacement against a more distant background. Size is irrelevant.

"Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you'll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, recalculating the tiny differences in shading, and your brain uses that information to judge how far away the object is."

"Shape from shading" I believed to be simply recreating a 3d structure from the way light falls on it, a depth cue that occurs even without motion. The quoted description seems like it is referring to specularity, which does play a role in shape from shading, but also seems well handled by (many) rendering engines.
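aamar's definition matches the standard one. As a toy illustration (mine, not from the thread or the paper), a pinhole-camera projection shows the parallax cue directly: for the same sideways head movement, a nearby point shifts much farther across the image than a distant one, and the brain reads that relative displacement as depth.

```python
# Toy pinhole camera: image x-coordinate of a point at depth z,
# viewed by an eye at lateral position eye_x.
def project(point_x, z, eye_x, focal=1.0):
    return focal * (point_x - eye_x) / z

# Move the head 0.1 units to the right and compare image shifts
# for a near point (z = 1) and a far point (z = 10).
near_shift = abs(project(0.0, 1.0, 0.1) - project(0.0, 1.0, 0.0))
far_shift = abs(project(0.0, 10.0, 0.1) - project(0.0, 10.0, 0.0))

# The near point shifts ~10x more than the far one; that relative
# displacement against the background is the motion-parallax cue.
print(near_shift, far_shift)
```

Note that object size never enters into it, which is aamar's point: parallax is about differential displacement, not apparent growth.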

devindotcom 2 hours ago 4 replies      
Hmm. I'm at a loss to explain how hormones, in the retina of all places, would affect perceptual cues that have been built up over years and years of use and are in many ways hard-coded into the many layers and blobs (yes, "blobs") of visual cortex. I certainly can't rule it out, but it seems an unlikely mechanism to me. The neural networks governing vision are a very powerful combination of nature (some blobs fire only when there are edges of a certain orientation, for instance) and nurture (motion parallax ties in closely with a physical understanding of distance, proprioception, etc). The differences have to be deeper, if you ask me. Men and women's brains are indeed different, in ways we don't fully understand (this is an understatement), but there's a lot of evidence that men have, if you will, a spatiality speciality.

I wonder if there's a whole proprioceptive feedback center that helps tally your visual input with your movements, that's fed by androgens or otherwise activated by male-dominated chemicals and structures.

However, I'm also curious about controls for things like playing lots of games when younger: we're only just now starting to see female gamers approach males in proportion, and I'm not confident that near-equality is as near when you look at 5, 10, 15 year olds. Especially games like FPSes with lots of 3D movement and reconciling of a virtual space with the real one. Having grown up with the motion of gaming, I think I'm less susceptible to VR sickness. I don't have any data on this, of course, but I would be very interested to see some.

dsugarman 2 minutes ago 0 replies      
CAVEs make me nauseous too, I'm a dude and I used to help build them
shadowmint 3 hours ago 1 reply      
It's a microscopic sample size, but with the OR developer kit I got to play with, probably half the women I know who tried it felt motion sick; the proportion was much, much lower for men (maybe 1/10).

At the time I simply put it down to the guys having played more FPS games and being more accustomed to it, but its interesting to read this.

On the other hand, with the precise head tracking in OR, I wonder if a higher resolution combined with a better lighting model would make this issue go away?

It's basically just tiny head movements, right? As you move you see minor shading differences in the scene, and use that to mentally reconstruct the 3D geometry (as I understand it from the article).

You'd think high precision head tracking with a sufficiently high frame rate would be able to catch that.

(however, the low res / poorly lit OR demos probably don't)

doktrin 2 hours ago 2 replies      
> Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn't suddenly grow and concludes that it's just got closer to you.

> Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you'll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, recalculating the tiny differences in shading, and your brain uses that information to judge how far away the object is.

I need help understanding the mechanics here.

What exactly is the flicker described? What does it have to do with shading being recalculated? If hypothetically our eyes did not flicker, how would that affect our depth perception?

As a follow up - what would be involved in emulating this in a virtual system?

vilhelm_s 2 hours ago 0 replies      
I don't understand what "[computers can't] simulate how that tiny, constant flickering of your eyes affects the shading you perceive" refers to. How can flicking your eyes affect the shading of objects?

In the linked paper[1], the shape-from-shading cue was just a static greyscale gradient, with no eye-flicking. This seems like something that standard computer graphics techniques can emulate easily.

Since the study was done in 1997, I could imagine that the environments they were working with still contained lots of flat polygons. But the modern Oculus-rift runs things like Doom III, where everything is smoothly shaded. So to me it would seem that while the CAVE might have been "sexist", the Oculus isn't anymore? (Of course, there are lots of other depth cues than shading and parallax, and there could be sex differences about how those are prioritized also, but the cited experiment did not study them.)

[1] http://www.danah.org/papers/sexvision.pdf?_ga=1.245737348.10...
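For what it's worth, the "static greyscale gradient" cue vilhelm_s describes is exactly what standard Lambertian diffuse shading produces, and it is cheap to compute. A minimal sketch (my illustration, not code from the paper or any engine):

```python
# Lambertian ("diffuse") shading: brightness depends only on the
# angle between the unit surface normal n and the light direction l.
def lambert(normal, light):
    # dot product, clamped at zero for surfaces facing away
    d = sum(a * b for a, b in zip(normal, light))
    return max(0.0, d)

light = (0.0, 0.0, 1.0)  # light shining straight down the z-axis

# A normal facing the light is fully lit, a perpendicular one is
# dark, and one facing away receives nothing. Sweeping this over
# a curved surface yields the greyscale gradient used as the cue.
print(lambert((0, 0, 1), light))   # facing the light
print(lambert((0, 1, 0), light))   # perpendicular
print(lambert((0, 0, -1), light))  # facing away
```

Every modern renderer does at least this per pixel, which supports vilhelm_s's point that the 1997 CAVE limitation may no longer apply.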

hcarvalhoalves 2 hours ago 0 replies      
> Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn't suddenly grow and concludes that it's just got closer to you.

While parallax does play a part in perception of size and distance, we still perceive the size of objects without stereoscopy (that is, with one eye closed, or in a flat image). When shown an up-close photo of a can, people can figure out it didn't just grow because 1) we have previous knowledge of the shape and size of a can and 2) its image gets distorted as it gets closer [1].

You can trick someone by making an object with an unfamiliar shape to look familiar when viewed with the right angle / FOV [2]. Similarly, making miniatures appear realistically-sized in photos involves careful FOV control to not trigger the signals in our perception [3].

This is a mechanism that, I think, would be interesting to study w.r.t. gender differences, more than rotating 3D objects.

[1] http://en.wikipedia.org/wiki/Perspective_distortion_(photogr...

[2] http://en.wikipedia.org/wiki/Forced_perspective#Forced_persp...

[3] http://petapixel.com/2013/10/14/life-like-miniature-scenes-s...

thefreeman 1 hour ago 0 replies      
Pretty cool article / research. I kind of wish the author hadn't intentionally brought sexism into it because I think the research is interesting enough on its own and it just distracts from the actual information. She does address this a bit at the end though.
Pxl_Buzzard 2 hours ago 2 replies      
Reading this article I'm not convinced this is the fault of the Rift. Motion parallax happens naturally when a game engine uses two cameras to render a scene (the 3D effect). Shape-from-shading is dependent on the lighting system the engine uses, and because developers haven't needed to program for VR they've never implemented it.

There are many changes in both game design and engine features that we'll start to see as virtual reality becomes mainstream. It's very likely we'll see a stall in graphics improvements as game engine programmers begin to enhance other systems like physics and lighting to work more realistically.

pubby 3 hours ago 1 reply      
So would having separate male and female versions of the Oculus be considered more sexist or less sexist? No snark intended.
sliverstorm 2 hours ago 2 replies      
All too often, systems get shipped with discriminatory byproducts and people throw their hands in the air and say, oops, we didn't intend that.

Is it just me, or is this kind of OK?

Intentional discrimination isn't, of course. But if your initial release accidentally isn't accessible to people with blue-yellow colorblindness, is that a tragedy of social justice?

No product serves every person equally, and this is especially true early in the product's (or company's) lifetime. You're a little too busy tackling the core problem to have time for fixing every accessibility problem in rev A.

Which seems OK to me. Rev A's are by nature lacking & incomplete, and I would rather they get something out the door, make a profit, and decide to make a rev B- than miss the boat and/or go bankrupt trying to make the product equally accessible all from day 1.

There are plenty of products like that, that I have been unable to enjoy in their early stages, so it isn't like getting the short end of the stick has never happened to me. One of my favorite outdoor gear companies, for example, has started making technical clothing. They don't offer sizes that fit tall, slender me, so I'm pretty much hosed for now. But that's fine. They'll get to it eventually.

(Obviously in this particular case wrt. Oculus Rift & women, the point has now been raised and women are half the globe, so it would seem to be a high priority to address. I'm just addressing the quoted statement, which was much more general)

peterhajas 3 hours ago 4 replies      
This doesn't sound like the Oculus is sexist, so much as that men are more predisposed to experience the 3D effect biologically. How is that sexist?
argumentum 2 hours ago 1 reply      
Is Nature sexist? To maintain a consistent argument, the author would have to answer yes.

Nature is many things, but fair is not one of them.

jbcurtin2 4 hours ago 3 replies      
Being a man and an Oculus tinkerer, how would I go about testing this?
dang 3 hours ago 2 replies      
This was flagged by users. On the other hand, the article is clearly substantive. As an experiment, I'm going to provisionally override the flags. The thread hasn't degenerated into a flamewar so far; let's see if we can keep it that way.

All: please take extra care to make your comments high in substance and civility, and as low as possible in flammability.

ender7 3 hours ago 2 replies      
Ugh, I wish we didn't need such a rage-baiting title to talk about this. I'm not a huge fan of danah's methods [1], but then again, it's hard to argue with her results.

danah is right that her findings are not at all conclusive. I'm somewhat doubtful that the root cause is what she suggests, but the problem itself seems to be very real. I'm quite excited to see if we can find a solution, as it may have broader-reaching effects. Could this allow people who get motion sick even from 2D presentations to enjoy them sickness-free?

[1] Labeling a company and its engineers as sexist for not being aware of certain, extremely obscure research is unfair to say the least. But, you know, institutional and implicit biases in subconscious power structures etc. etc.

Oculus 3 hours ago 1 reply      
The title of the blog/article almost ruined the whole point trying to be made.

I read the entire article waiting for justification of a clearly provocative title. I got to the end, felt Danah Boyd didn't justify the title, and felt the blog/article was weak. Only when I took a step away and thought about it did I realize the reason I had such a bad taste in my mouth - it was the title. The article itself brings up a good point and one that should be explored by Oculus and others doing VR. I don't think anyone expected that men and women would react differently to VR or the techniques we use to render VR worlds.

dang 3 hours ago 4 replies      
The title is linkbait, which is against the guidelines, but I can't think of a better one. If any of you suggest one that's accurate and neutral, I'll change it.
zem 3 hours ago 0 replies      
lousy title, but fascinating piece of research. please do go back and read it if you skipped it due to the title.
Yaa101 3 hours ago 0 replies      
The words Oculus Rift sound like a sexually transmitted disease, so who knows?
Functionally Solving Problems With Haskell learnyouahaskell.com
81 points by hawkharris  14 hours ago   58 comments top 9
tel 12 hours ago 2 replies      
While LYAH is a lot of fun, if you've gotten your feet wet with Haskell for a while and want to see some really powerful examples of "functionally solving problems" I cannot recommend more highly Richard Bird's "Pearls of Functional Algorithm Design"[0].

All of the "Functional Pearls" papers are worth a read, but Bird's book is such a compressed, convenient, powerful collection of advanced functional problem solving style that I think anyone interested in "thinking" functionally should strive to read it.

It's not exactly for the faint of heart, to be clear. Bird's idea is that we can establish certain mathematical rules which allow us to transform "the simplest thing which could possibly be correct" to an efficient program through a series of steps, each one maintaining correctness as an invariant.

[0] http://www.amazon.com/Pearls-Functional-Algorithm-Design-Ric...

abraxasz 13 hours ago 1 reply      
I read the book a while back and it is absolutely fantastic. What I understand about monads, I learnt from this book (no other exposition of monads ever made sense to me), and I particularly like his progression functors -> applicative functors -> monads. I think it's the clearest way to proceed (it certainly helped me a lot, anyway).
bsamuels 12 hours ago 5 replies      
As a side-note, if you want to get into functional programming but don't want to make the full leap into "PURE EVERYTHING MUST STAY CLEAN DONT BREAK THE RULES" world of Haskell, F# is another excellent choice for a beginner functional lang. You can program in either OOP or functional style, letting you take your time with the transition.

I also toot F#'s horn because it has an amazing resource for learning how to use functional style in your programs: http://fsharpforfunandprofit.com/site-contents/

A few pages on that site did more for me regarding functional programming than the entirety of LYAH did, and it also has a very nice monad tutorial: http://fsharpforfunandprofit.com/posts/computation-expressio...

dbbolton 12 hours ago 0 replies      
If you just want to play around with RPN a bit, I made a simple (regex-based) RPN calculating Perl script a while back:
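(The script itself didn't survive the page extraction. As a stand-in, here is my own minimal sketch of the same idea, in Python with simple token splitting rather than dbbolton's regex-based Perl, so treat it as illustrative only.)

```python
# Minimal reverse-Polish-notation evaluator: push numbers onto a
# stack; when an operator arrives, pop two operands and push the
# result. The last value left on the stack is the answer.
def rpn(expr):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for token in expr.split():
        if token in ops:
            b, a = stack.pop(), stack.pop()  # note the operand order
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()
```

For example, `rpn("3 4 + 2 *")` evaluates (3 + 4) * 2; this is essentially the same algorithm the LYAH chapter linked above develops with a Haskell fold.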


brianmwaters_hn 12 hours ago 1 reply      
Did anybody else notice what is written on the upside-down calculator about 1/8 down the page?
zmanian 13 hours ago 1 reply      
Confused why a random chapter from the wonderful LYAH is on the top of Hacker News....
maxyb 12 hours ago 3 replies      
The link is a chapter from "Learn You a Haskell for Great Good", which I recommend a lot if you want to learn Haskell. However, if you click through to the table of contents, you'll see immediately one of my problems with Haskell as a language: guess in what chapter you finally get to write hello world?
stesch 12 hours ago 0 replies      
Oh, I remember. That's where I stopped reading the book.
ardz 9 hours ago 3 replies      
Another Haskell SPAM.

How can anyone seriously code in a language which introduces new kinds of bugs that cannot be detected by any test until it is running in production on real data, when it is already too late?

360 degree video with 6 GoPro cameras jonasginter.de
57 points by duvok  5 hours ago   19 comments top 13
rallison 1 hour ago 0 replies      
I like this. I'd love to see more ventures in this direction.

Ever since trying out Google's photo sphere feature (yes, they weren't the first ones in the space - Microsoft and others actually beat them to market), I've been quite interested in the concept of 360 by 180 ways to explore places.

I've used the feature to document, especially, my hikes, and a trip to Peru (https://www.google.com/maps/views/profile/111004183840840391...). I've experimented with using my DSLR to do the same (https://www.google.com/maps/views/view/111004183840840391992... and https://www.google.com/maps/views/view/111004183840840391992...).

And, after all that, what would I love to be able to do? Take a video with a setup that is something like in the link and be able to put together a 360 by 180 video that would allow a viewer to follow along with me skiing down a slope in the not so snowy mountains of southern California, or enjoying a zip line through a cloud forest in Costa Rica, or descending on bike down a mountain road. Imagine being able to view such a video but being able to pan side to side, up and down, looking forward or backward - basically, allowing the viewer to pan to any angle at any point in time in the video.

I have to imagine this is not a novel idea, but I can't say I've seen any demos of such an experience. I'd love to see some if they exist.

Anyway, just some thoughts I had while enjoying the video in the link.

adg 48 minutes ago 0 replies      
Surprised no one's mentioned Bublcam: https://www.kickstarter.com/projects/bublcam/bublcam-360o-ca....

Instead of six GoPros, you can just use one Bublcam to capture 360 degree video, and the software to stitch the photos / videos comes with it.

lambda 2 hours ago 1 reply      
I think they mean 4π steradians (the unit of solid angle, covering the surface of a sphere), not 360° (the unit of angle, covering the circumference of a circle).
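(A quick unit check supporting this: solid angle is surface area divided by radius squared, so a full sphere subtends)

```latex
\Omega = \frac{A}{r^2}, \qquad
A_{\text{sphere}} = 4\pi r^2
\;\Longrightarrow\;
\Omega_{\text{sphere}} = \frac{4\pi r^2}{r^2} = 4\pi\ \text{sr} \approx 12.57\ \text{sr}
```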
Gracana 2 hours ago 0 replies      
I really liked the slow cycling and music at the beginning. I think it would be cool to do a music video like this, with an MC rapping as they walk/dance along, and all sorts of interesting people and scenery come and go over the horizon.
IvyMike 2 hours ago 2 replies      
Does he mention what software he used to stitch the views together? (I looked but don't see it, but I'm going through Google Translate)
NickNameNick 2 hours ago 2 replies      
I wonder how you start all the cameras recording at the same time, and how much variation there is in their framerates.

I know the old GoPros used to have a connector for some sort of sync cable to make 3D stereo recording work better. It might be possible to slave many cameras together if someone reverse engineered the connection details.

Otherwise I imagine that aligning all the videos in time might be incredibly annoying, especially if the framerate isn't consistent from camera to camera.

Urgo 4 hours ago 0 replies      
Hah, that's pretty cool. Felt like I was watching real-life Super Mario Galaxy a little.
kenrikm 2 hours ago 0 replies      
Whoever said the earth was not round? Very cool effect. Looks like it would make a trippy video game.
31reasons 2 hours ago 0 replies      
The effect looks cool; however, if the capture is lossless it might be possible to create a shader to flatten it and make it a natural-looking 360-degree video.
cookiecaper 1 hour ago 0 replies      
Lots of information on this type of thing here, including information on stitching softwares and methods, and many sample videos and photos: http://www.360heros.com
zxexz 2 hours ago 0 replies      
I wonder about the feasibility of using this idea to create Google street view equivalent with fluid movement between each point...
jggonz 3 hours ago 0 replies      
Aaah, my brain hurts.
Noxchi 3 hours ago 0 replies      
Woah, trippy.
The SEO Dominance of RetailMeNot priceonomics.com
78 points by zbravo  12 hours ago   35 comments top 17
sharkweek 11 hours ago 3 replies      
SEO-driven businesses are super fun to work on and run -- I have a few sites that rank for informational queries and thus get a decent amount of search traffic and ad revenue because of it. At scale, it can quickly become highly lucrative.

HOWEVER, and perhaps call me a giant wuss, but when you're at the mercy of not only an algorithm but also a savvy business, it's a pretty dangerous bet to stake it all on search traffic.

Coupon codes are one of those things I could easily see appearing at the top of SERPs in no time courtesy of Google.

"Credit card comparisons" and other searches like it are massively profitable keywords to rank for. But take a peek now and you'll see Google threw their hat into the ring, flipped one switch, and magically appears above all organic rankings. They're doing the same thing with travel, weather, you name it. I can't sit here and argue it's unfair; it's their yard, they make the rules.

And truthfully, as a user, I completely applaud Google for providing these services, I love the one stop shop and trustworthy nature of their results.

As a marketer, I'm very, very wary to place any huge bets on any sort of long term business model relying on rankings. Should it be a tactic in your strategy? Absolutely. But diversification here is sure going to save a lot of headaches down the road.

But at the end of the day, as far as RetailMeNot goes, get it while it's good I suppose.

InfinityX0 9 hours ago 0 replies      
I don't buy the "don't put all your eggs in one basket" mantra here. I get that idea if you rely on SEO and you have risky tactics, but in RetailMeNot's case, they have the best user experience as well as up-to-date coupons, and for that reason are the best result most of the time in the vertical.

If this was a publicly traded company that bought links and had a site with a shoddy user experience, I'd get the argument that there's significant risk.

In the history of Google there are very few, if any, case studies of sites that A) did the right thing and B) were the best result, yet ended up losing significant traffic. Most of it is "yeah, we were doing this wrong, but it's still not fair...". In Rap Genius' case, they clearly did something wrong.

The only exception to this rule is if Google moves in on the vertical and steals traffic, which is possible with coupons - but there's a significant difference between completely destroying a vertical and encroaching on it somewhat. Google has moved in on the airline tickets vertical but the businesses there are still doing just fine (Expedia, Priceline), as shown by their considerable growth in the last year+. It seems likely that if Google moved in on this vertical, the impact would be similar.

In this article, there's little to describe what RetailMeNot does wrong. Yes, they are somewhat of a parasite, but in many ways they do these businesses a service by lowering purchase friction by creating an elegant experience for users. If their UX was substandard, it's possible customers would get lost and never complete their purchase - this happens time and time again, which is why CRO professionals can get hundreds of thousands of dollars to optimize a conversion funnel. In many ways, RetailMeNot is a piece of that elegant conversion funnel, so they are rewarded as an extremely profitable affiliate for that reason.

Disclaimer: I haven't taken the time to evaluate their link profile so there's a possibility that they are doing something risky there, but from an on-page perspective, they are the best result.

robryan 2 hours ago 0 replies      
I think businesses that are giving retail me not a commission are largely throwing their money away.

As described, the vast majority of sales being generated are from people who have already started the purchasing process on your website and are looking for a discount coupon. Without having any hard numbers, from my experience in ecommerce I think the majority of these people are going to purchase anyway, regardless of whether they find a coupon. There is a good chance the people actively trying to find a coupon have already checked out the competition and are relatively satisfied with your offering.

Price discrimination can be good but I don't think throwing away 10% to companies like this is the way to do it. I also dislike how they end up ranking really highly on a lot of your brand terms where I'd prefer my business, pages we control and 3rd party reviews to rank. Ideally you could get a coupon page of your own to rank well for the "x coupon" search to get the same effect without having to give up the 10% or whatever the commission level is.

Destitute 3 hours ago 1 reply      
As a consumer, RetailMeNot is great. As another website that sells products for commission, it's terrible. If RetailMeNot can find a coupon that I fail to mention, that's fine.

But RetailMeNot also steals that last click cookie with coupons that simply do not work or are not really coupons (Check out the Amazon Free shipping "coupon" for example). That's nothing but cookie stuffing, and if it were any other smaller company doing something similar they'd probably get banned from the Amazon affiliate program.

Basically, imagine you as a website owner buying a product on Amazon and writing a review about it, reading the review out while piecing together a video montage that shows it off, taking some pictures of it, and compiling it all just to get a little bit of commission if someone reads and watches your review. Then the customer searches for an Amazon coupon before checking out, goes to RetailMeNot, and gets the "Free Shipping on $35+" deal that is not even a deal or coupon, or a coupon that does not even work or exist. This million-dollar company has stolen your effort and will get commission on that sale. It's just rotten from that viewpoint.

alohahacker 3 hours ago 0 replies      
I remember submitting coupons on retailmenot and getting them erased. I actually remember contacting them to post coupons that could eventually help their end users and them informing me that they wouldn't post it because it's not an affiliate code and they wouldn't get paid.

As a user, I wasn't getting paid either; I just wanted to help out, since the coupon I found was superior to anything they had posted for that store at the time.

Is their mission to help users find deals or pad their affiliate revenue? If it was the latter, their reason they gave me would make sense.

xpose2000 10 hours ago 2 replies      
"We take no stance on RetailMeNot's business model. (We like getting coupon codes too!) We also have no idea how the company achieved its SEO dominance. The company has no doubt put a lot of work into conquering Google."

The answer could be simple. As far as I know, RetailMeNot was one of the original coupon aggregators. (The domain was created in 2006.) It also seems to concentrate on natural SEO and does not seem to be cheating. Combine that with good quality content and you have solid SERPs.

rahimnathwani 7 hours ago 1 reply      
This line from RetailMeNot's S-1 is interesting:

"When a consumer executes a purchase on a retailer's website as a result of a performance marketing program, most performance marketing conversion tracking tools credit the most recent link or ad clicked by the consumer prior to that purchase."

I thought that often it was the earliest click (subject to an expiry), and that multi-touch attribution was becoming more common: http://www.clickz.com/clickz/column/2282207/embracing-the-re...

If you're a CJ affiliate, please share any insight.
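The distinction rahimnathwani is drawing can be made concrete. A toy sketch (my illustration of the general models, not CJ's or anyone's actual tracking logic) of last-click versus first-click credit over a customer journey:

```python
# Each touchpoint is (channel, timestamp); credit the conversion
# to one channel depending on the attribution model.
def attribute(touchpoints, model="last"):
    ordered = sorted(touchpoints, key=lambda t: t[1])
    if model == "last":
        return ordered[-1][0]   # most recent click wins
    if model == "first":
        return ordered[0][0]    # earliest click wins
    raise ValueError("unknown model: " + model)

# Hypothetical journey: reads a review, clicks a search ad, then
# hunts for a coupon right before checkout.
journey = [("review-blog", 1), ("search-ad", 2), ("retailmenot", 3)]
print(attribute(journey, "last"))   # the coupon site gets the sale
print(attribute(journey, "first"))  # the reviewer would get it
```

Under the last-click model described in the S-1, the coupon site captures the commission precisely because it is the final stop before purchase; multi-touch models split that credit across the journey instead.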

MitziMoto 4 hours ago 2 replies      
I run an SEO-fueled business and it scares the crap out of me, especially as we hire our first full-time employee. The fact that Google could literally shut down my business at any time keeps me up at night.

My business started as a hobby and has grown significantly (due to SEO ranking improvements) over the last couple years. We're now at a point where we (my wife and I) can no longer handle the volume that's coming in. Our options right now are a) hire an employee to help reduce the load or b) remain small and stagnant because Google could cut the cord any minute.

I'm trying like hell to find alternate traffic streams like AdWords, Facebook, mobile, etc., but I can't quite figure out how to make a reasonable profit with them. So for now I guess I just have to take the risk. No guts, no glory.

AznHisoka 7 hours ago 1 reply      
Opening popups that put the affiliate cookie in your browser for sites such as Amazon???

Am I missing something here? This is cookie stuffing, essentially! There's nothing white-hat about this at all.

rajacombinator 10 hours ago 0 replies      
I have no clue about the SEO dynamics, but I do know RetailMeNot used to be more legit, with organic, user-generated content. At some point they made a change and got co-opted by companies paying to post codes. Maybe that original popularity is what gives them their search result rankings.
davemel37 3 hours ago 0 replies      
I have no doubt that affiliate networks will move into multi-click or other more dependable attribution models.

I would be more afraid of changes in attribution than getting slapped by Google.

Zigurd 9 hours ago 1 reply      
I've been looking at the coupon search business recently and, while I have not done a comprehensive competitive analysis, I can tell you RetailMeNot just works a lot better than Coupons.com.

I was sitting outside Macy's using Coupons.com "local search" and got no Macy's coupons. Oh but they are a "featured coupon!" You have to use that category to find them. Bleah.

All the coupon sites are limited. But RetailMeNot isn't overtly lame. They don't pretend to do things they can't. Local search works. The mobile apps work. They stay away from stuff that would not work on mobile and that burdens the user. They suck less. That's often a winning strategy.

porker 10 hours ago 2 replies      
Can anyone shed light on how they do their SEO? How do they get #1 for so many terms?
BorisMelnik 9 hours ago 0 replies      
Great post, and they are really doing it right. That amount of traffic from search is impressive. On the other hand, this industry is about to die; it's one of the most exploited niches in affiliate marketing right now. RetailMeNot actually is doing it right, but sites trying to mimic their success by generating totally bogus promo codes are rising to the top of the SERPs, dropping their affiliate cookie, and profiting.
hazard786 9 hours ago 0 replies      
The point the article makes about Google Ventures having invested in RetailMeNot speaks volumes. RMN's business model is an absolute cash cow until a mass of advertisers change their commission policies, which they won't, as RMN is among their biggest affiliate partners.

So for Google, this is a great way for them to monetize their SEO SERPs. Invest in a company like this, get them to maintain clean SEO practices, reward them with obscenely dominant rankings against their competitors and then get the return from the IPO.

After IPO, wean off their reliance on Google with a push into mobile, as Google doesn't need to protect their investment any more and could disappear at any moment...

Pretty obvious how/why they have such dominant SEO results across practically all their markets, including arguably the second biggest market, the UK, if you ask me.

adventured 10 hours ago 0 replies      
I'm always astounded at what some SEO positions can generate. Coupons.com is also worth around $1.5 billion. Or take Demand Media for example. They've been less prominent since the first Panda update hit them, specifically eHow took a beating. Their stock has languished, but their sales have continued to climb (almost $400m over the last four quarters).

Does anyone compile an estimate for how much revenue Google is directly responsible for generating for other sites via referrals off their search engine? I'd be curious to know how that compares to their take via AdWords and AdSense.

whoismua 10 hours ago 1 reply      
Google Ventures was an investor in them.

Google wouldn't do that? Not so sure these days.

I also think that Google hand-picks the first few sites for top categories.

How we solved an infamous sliding bug joostdevblog.blogspot.nl
50 points by exch  12 hours ago   18 comments top 6
tetrep 10 hours ago 2 replies      
Problem: Lag + collision detection = characters slide.
Solution: Turn off collision detection.

I was sincerely hoping for an actual technical solution for dealing with peer-to-peer networking and lag. I feel somewhat cheated out of the time it took me to read that article. While it did explain the problem very nicely, the solution was to just disable the feature outright.
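
For what it's worth, the fix as described in the comments here isn't a single global off switch: collision is dropped per player pair after prolonged mutual overlap, using the objectID as a deterministic tie-break so both clients reach the same decision independently. A rough sketch of that idea (invented names and thresholds, not Awesomenauts code):

```python
SLIDE_FRAMES_THRESHOLD = 30  # frames of continuous overlap before giving up

class Player:
    def __init__(self, object_id, x):
        self.object_id = object_id
        self.x = x
        self.overlap_frames = 0
        self.disabled = set()  # object ids we no longer collide against

    def resolve_collision(self, other, overlapping):
        if not overlapping:
            self.overlap_frames = 0
            self.disabled.discard(other.object_id)  # re-enable once separated
            return
        self.overlap_frames += 1
        if self.overlap_frames > SLIDE_FRAMES_THRESHOLD and self.object_id > other.object_id:
            # Persistent overlap means the two simulations disagree about
            # positions. Tie-break on objectID: the higher id stops pushing,
            # so both clients make the same call without exchanging messages.
            self.disabled.add(other.object_id)
        elif other.object_id not in self.disabled:
            # Normal response: step away from where we think the other is.
            self.x += 1 if self.x >= other.x else -1

# Both clients run the same rule, so after ~30 frames of stuck overlap the
# higher-id player stops colliding and the pair can finally separate.
a, b = Player(1, 0.0), Player(2, 0.0)
for _ in range(40):
    a.resolve_collision(b, True)
    b.resolve_collision(a, True)
print(b.disabled)  # {1}: player 2 gave up colliding with player 1
```

The key property is that the tie-break needs no extra network traffic: given the same (dis)agreement, both simulations pick the same loser.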

scottfr 4 hours ago 0 replies      
Could you resolve the issue by introducing an element of stochasticity into the distance players are displaced to avoid collisions?

That way, one player would move right (or left) more than the other player and the collision would still resolve itself.

jbert 8 hours ago 1 reply      
Would it work to have a different response to collision, namely to apply a reverse of your last step? (A full reverse or partial).

There might still be corner cases, but it would seem to resolve the described issue, since irrespective of where you think the other player is, if you both reverse your motion you should disengage.

I can see 3-body problems being more problematic, but that's probably also the case with the original code.

eridius 10 hours ago 1 reply      
How does the player with the lower objectID detect that the sliding bug is happening? The way it was described, only the player with the higher objectID would be able to figure that out. Did they change it so both players send the message? Or does the player with the higher objectID simply send a message when they detect the bug, meaning they turn off their collision first? Or something else entirely?
sigvef 11 hours ago 1 reply      
How do you stop clients from cheating in a peer-to-peer game like Awesomenauts?
kevingadd 10 hours ago 2 replies      
It seems like this solution implicitly acknowledges (and fails to resolve) the fact that their game has frequent, pervasive desyncs. The bug being described is only possible when in a state of persistent desync - that's a little scary in a competitive or semi-competitive game (like most games in the 'MOBA' genre are, including Awesomenauts).

I wonder what their reasons were for going with peer-to-peer instead of anointing a player as the 'server' as many other console multiplayer games do (Awesomenauts started out on console, IIRC)? That would solve a lot of these desync problems because the server would be responsible for resolving all the collisions.

India Launches Its Second Navigation Satellite space.com
73 points by middleclick  15 hours ago   14 comments top 3
ljd 11 hours ago 2 replies      
"Preliminary data showed the rocket placed the spacecraft in an orbit with a high point of 12,807 miles, a low point of 176 miles and an inclination of 19.2 degrees."

Maybe I'm reading this wrong, but this appears to be a very elliptical orbit, with the difference between the high and the low points being more than 70x.

Does someone here know what the purpose of such an orbit is? Obviously it states navigation but I guess that's not enough for me to understand why they would pick such an orbit.
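
Those numbers look like a transfer orbit rather than the final one: the satellite's own motor later raises the low point and circularizes near geosynchronous altitude. A back-of-envelope check with the standard two-body period formula (assumed constants, rough unit conversion):

```python
import math

MU_EARTH = 398600.4418   # km^3 / s^2, Earth's gravitational parameter
R_EARTH = 6371.0         # km, mean Earth radius
MI_TO_KM = 1.609344

# Altitudes quoted in the article, converted to orbital radii.
r_apogee = R_EARTH + 12807 * MI_TO_KM
r_perigee = R_EARTH + 176 * MI_TO_KM

a = (r_apogee + r_perigee) / 2  # semi-major axis of the ellipse
period_h = 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 3600
e = (r_apogee - r_perigee) / (r_apogee + r_perigee)  # eccentricity

print("period ~ %.1f h, eccentricity ~ %.2f" % (period_h, e))
```

A roughly six-hour orbit with eccentricity around 0.6 is characteristic of a geosynchronous-transfer-style injection, which is consistent with the satellite circularizing later into its operational inclined geosynchronous slot rather than staying in this ellipse.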

swatkat 11 hours ago 0 replies      
dalek2point3 12 hours ago 3 replies      
As an Indian citizen, I've never understood how these technologies ever reach me. I know GPS is run by the US government; is anyone commercializing these technologies in India?

"Indian officials say the independent navigation service will aid marine traffic, emergency response officials, vehicle tracking applications, mobile communications, mapping, and civilian drivers."

First Look AWS WorkSpaces virtuwise.com
23 points by aluciani  8 hours ago   11 comments top 7
michaelt 7 hours ago 2 replies      
Does anyone know why these are priced by the month, when so many other services (EC2, S3, RDS...) have much more granular billing?

I don't use Windows or Office enough to need a full copy, but if I could get access for $0.25 an hour on demand that would be pretty useful.

gburt 5 hours ago 1 reply      
I tried to do this with EC2 once and it proved to be far too slow to use over RDP. How are they handling that? Does anyone know? The OP called it "pretty snappy", which I assume referred to response time.
DonGateley 4 hours ago 0 replies      
What is the minimum commitment? Can I sign up for the $35/mo service to test drive it and then cancel before the end of the first month?

A limited time trial configuration would be cool.

nodesocket 2 hours ago 0 replies      
I'd love to be able to create OS X WorkSpaces as well.
BatFastard 4 hours ago 0 replies      
If I could run this on a Linux desktop, it would be a no-brainer. But if I have to buy and configure a Windows or Mac license in addition, it's less compelling as a desktop replacement. Plus I would really like to see some 4- or 8-core machines with 16 GB of RAM. That said, I have been wanting this for 20 years...
benjaminoakes 5 hours ago 1 reply      
I keep wondering if they'll try a Linux-based desktop for certain use cases. It could be offered for a cheaper price, perhaps.

Hard to say if it would happen, but it would fit in with Amazon's previous AWS offerings. Definitely good to start with Windows, however.

BorisMelnik 4 hours ago 0 replies      
Anyone else having issues with IP? I forget which thread it was, but someone stated that their IP showed up as X in one instance (on What Is My IP, I believe) and Y in another.
Microsoft Open Sources C# Compiler codeplex.com
1283 points by keithwarren  2 days ago   440 comments top 69
keithwarren 2 days ago 10 replies      
They also announced as part of this that they are putting a large swath of their .NET source code under Apache 2 and accepting pull requests.

Folks, this is a very big deal for Microsoft. Who would have imagined this 10 years ago?

Here is an image that shows what they are putting into the community: https://pbs.twimg.com/media/BkT9oBcCQAAHIAV.jpg:large

Locke1689 2 days ago 4 replies      
Everyone on Roslyn is really excited about this and we hope that it serves as a signal that big things are happening in .NET to make the entire platform more open and agile!

P.S. We're the Visual Basic compiler too :)

Pxtl 2 days ago 4 replies      
As a C# developer who genuinely likes the C# language (although I loathe vast swaths of the .NET framework libs) I'm actually super-excited about this.

C# is a great language, and I hope to see it flourish outside of the MS walled garden. Miguel de Icaza does what he can with Mono, but it can be so much more.

McGlockenshire 2 days ago 2 replies      
sergiotapia 2 days ago 3 replies      
Microsoft, you're seducing me again. I cheated on you with Ruby and Rails development a couple of years ago, but you're making me consider coming back in full swing.

Competition is great for everybody and Microsoft is making all the right moves!

dangero 2 days ago 1 reply      
Wow, these contributions are a huge deal for Mono. I've spent the last few months making sure C# code works well in Mono, and there are a lot of things that are missing or buggy. WebClient on Mono, for example, is missing a DNS refresh timeout, which means your app will never update a DNS cache entry. If the IP of a server changes, you're pretty much screwed in Mono.
fournm 2 days ago 1 reply      
I'm not sure who this new company is going by the name of Microsoft, but I'm glad they seem to be running things now.
ChuckMcM 2 days ago 0 replies      
This is great. There is a really interesting lesson/insight here. Programmers are expensive.

There are a number of things people are doing, based on Linux, which are basically using Linux as an OS and then layering on some custom drivers or such into a product. Whether it's a web 2.0 company using it as the server OS or an embedded signage company. All of these were "impossible" when you had to have your own OS team to support them, and Microsoft benefited from that. Now the OS "maintains itself" (such as it is), so businesses get everything they got from employing Microsoft tools but at a much lower effective cost. They don't need to pay big license fees, they don't need to hire programmers to maintain a lot of code that isn't central to their product, and they don't have to spend a lot of money and time training people on their own infrastructure. That is a pretty big change.

It's nice to see folks realize it isn't the software that is valuable; it's the expertise to use it that has value. By open sourcing the C# compiler, Microsoft greatly increases the number of people who will develop expertise in using it, and that will most likely result in an increase of use.

ak217 2 days ago 0 replies      
Nice. For some reason I can't clone the repo, though.

    Cloning into 'roslyn'...
    remote: Counting objects: 10525, done.
    remote: Compressing objects: 100% (4382/4382), done.
    remote: Total 10525 (delta 6180), reused 10391 (delta 6091)
    Receiving objects: 100% (10525/10525), 16.94 MiB | 1.69 MiB/s, done.
    error: RPC failed; result=56, HTTP code = 200
Edit: This looks like an incompatibility between GnuTLS and whatever Microsoft is using for TLS. Using git+libcurl linked against OpenSSL works fine.

tdicola 2 days ago 0 replies      
What kind of patents do they have on the compiler tech, and is there any guarantee they won't go after you for using it?

edit: Ah nice, the Apache 2 license explicitly calls out that a patent license is granted for use. I wonder how much cajoling it took to get the lawyers to agree to that!

stcredzero 2 days ago 6 replies      
If Microsoft starts working on its own Unity-clone, with a functional language, advanced concurrency features, and good incremental GC, they could be sure to capture a big chunk of the mindshare of game developers. This could then be parlayed into mindshare of soft-realtime development, which will become ever more important.
iamthepieman 2 days ago 0 replies      
For the first time in a long time I'm excited to be developing with Microsoft technologies.
revelation 2 days ago 1 reply      
Really confused about the Xamarin/MSFT situation. I see Miguel everywhere at Build, and other MS teams casually mentioning collaborations with Xamarin.

Certainly Microsoft wouldn't mind just throwing some millions at them and buying them outright, so are we to deduce that any such offer was rejected?

acqq 2 days ago 1 reply      
Note that the C# compiler being open-sourced now is not the one used in Visual Studio. The open sourced one is called currently the "Roslyn C# compiler."

See Locke1689's comments here, especially:


"the native C# compiler (that's what we call the old C# compiler that everyone's using in VS right now)"

plg 2 days ago 2 replies      
The changes in company behavior have been absolutely stunning since Ballmer left. Hopefully they signal a more modern, forward-looking Microsoft.
quux 2 days ago 3 replies      
Anyone else getting deja vu?

Makes me think of Sun open sourcing Java.

mixmastamyk 2 days ago 0 replies      
Wow, this feels like the end of Return of the Jedi where Darth takes off his helmet, having realized he was on the wrong side.

Is this the return of the original MS?

noelherrick 2 days ago 0 replies      
I wonder if Mono will use this to replace their compiler and just focus on the runtime / VM. They'd be more focused on making it performant, vs. trying to keep up with the latest C# features. They'd have to keep up with .NET, of course, but I'd guess that not having to worry about a C# compiler would be quite the load off their shoulders.
tolmasky 2 days ago 1 reply      
I really hope Unity is able to incorporate this somehow so we can finally get a solid update to C# and .NET.
weavie 1 day ago 2 replies      
Interesting to see that they don't shy away from using goto in their code.


Something I've just learned from looking at the code is you can jump between cases in a switch statement :

    switch (a) {
        case '1':
            ...
        case '2':
            goto case '1';
    }
Never realised you could do that.

lovemenot 2 days ago 0 replies      
Is it in the realm of possibility that XP could go the same way? I can imagine why Microsoft might want to release that albatross, but I have no idea whether or how they could contain the damage due to leakage of their IP.
euske 2 days ago 0 replies      
It's interesting that they made their own lexer/parser for this (cf. Microsoft.CodeAnalysis.CSharp/Parser/LanguageParser.cs). It seems that it has a lot of advanced technologies (e.g. error recovery) here. I'm curious if it's possible to create a more general parser framework out of this.
csulmone 2 days ago 2 replies      
Wow, this is impressive. Does this mean the .Net framework is next?
imarihantnahata 2 days ago 0 replies      
May be in the next few years, Microsoft will be one of the biggest players in OpenSource :)
j_s 2 days ago 2 replies      
Is this the 2014 edition of the Rotor Project[1], where Microsoft dumped a bunch of code to run .NET on XP/OSX/FreeBSD and then almost nothing happened? Hopefully the choice of a standard license this time will give this release a chance.

[1] http://www.microsoft.com/en-us/download/details.aspx?id=1412...

NicoJuicy 2 days ago 0 replies      
Microsoft's decision to open source is not sudden.

It has actively been pushed by some of Microsoft's evangelists (Phil Haack (ex-employee; works at GitHub now, I think) and Scott Hanselman, to name the more popular ones).

I believe they got some room to experiment, and now the community has more and more impact (e.g. NuGet, and software like MyGet, which is based on NuGet (NuGet for the enterprise)).

Also, the CEO isn't Ballmer anymore; that probably helps too.

elwell 2 days ago 0 replies      
Commit history only goes back to Mar 18 [0], presumably to hide code that needed to be cleaned up before the release. It would've been interesting to see the full history, mistakes and all: a full view of their dev process.

[0] - http://roslyn.codeplex.com/SourceControl/list/changesets?pag...

JackMorgan 2 days ago 0 replies      
I hope for C# and F# plugins for IntelliJ IDEA! Please add them, JetBrains!
cwt137 2 days ago 4 replies      
What does this mean for the Mono project?http://www.mono-project.com/Main_Page
marpalmin 2 days ago 0 replies      
I think Microsoft is making a really smart move. Xamarin is on its way to being the main framework for cross-platform mobile development, and Microsoft is positioning itself very well there.
kclay 2 days ago 0 replies      
Man Microsoft is changing, this is great news.
_superposition_ 2 days ago 0 replies      
Nice start, but the real power is in the open platform, not the language, IMHO. This becomes less of an issue as things move to the cloud and PaaS, but we're not there yet. Yes, there's Mono, but it's still the red-headed stepchild.
pekk 2 days ago 0 replies      
I guess they are about done promoting C#, then
aceperry 2 days ago 0 replies      
Wow, you know that the times are a-changing when something like this happens. Ex-CEO Steve Ballmer used to call Linux a cancer, and MS had nothing but disdain for open source software. Open source really has made a difference, and Microsoft is reacting in a big way.
guiomie 1 day ago 0 replies      
Is there any blog post/documentation/diagram to help understand the compiler and how its modules interact with each other? I'm going through the code and it's cryptic to me.

Also, I read a lot of comments saying this is good for Mono... how is that? Wouldn't an open source CLR be more useful?

Tloewald 2 days ago 0 replies      
Is the CLR runtime open-source too? Because open sourcing the C# compiler isn't such a big deal without it.
sytelus 2 days ago 1 reply      
Does this mean one can now take this code and build compiler that targets Mac/Linux platforms? How about forking this to build new variants of C#?
damian2000 2 days ago 0 replies      
Great news. A 180-degree turn since Ballmer compared Linux and open source to communism 14 years ago... http://www.theregister.co.uk/2000/07/31/ms_ballmer_linux_is_...
vanilla 2 days ago 0 replies      
I feel that they are doing it because of a growing threat from Linux as a Windows alternative.

With Valve pushing their Debian fork and more gaming support for Linux lately, Microsoft wants to appeal to the open source community to reduce the "bashing" ... which could actually lose some force behind it. Not that it couldn't actually benefit Linux, with better Mono support etc.

jestinjoy1 2 days ago 0 replies      
Open sourcing everything! Today I read they will be releasing a Windows OS for IoT for free! Looks like open source is the next business model! :)
chj 2 days ago 0 replies      
Is it possible to get it run on Linux/Mac?
novaleaf 2 days ago 0 replies      
Lots of positive-sounding stuff coming from MSFT in the last week.

But, I dunno. I'm extremely skeptical of Microsoft's ability to put long-term momentum behind any of their non-core strategies. All these things are one re-org away from becoming basket cases.

Case in point: XNA

Illniyar 2 days ago 0 replies      
Go Nadella go!
az0xff 2 days ago 1 reply      
I'm stupid, so I must ask:

What does this mean for the future of C# on Linux?

nodivbyzero 2 days ago 1 reply      
Microsoft uses Git. Isn't that cool?

Microsoft, please add a Unix terminal instead of the Start button in Windows 8.

ckaygusu 2 days ago 0 replies      
This is just the beginning. While I hate their business model and corporate mindset, Microsoft has really well-thought-out products that can easily make an impact outside their ecosystem. I'm glad they are realising this, and I'm 100% sure there will be more coming from this direction.
Yuioup 2 days ago 1 reply      
This means that you can compile .NET code on a non-MS platform (like Linux) but you can only deploy it to ... Azure.

Microsoft's endgame is in sight.

GFunc 2 days ago 1 reply      
No more ".Net magic" now that the curtain's dropped.

I think this will help .Net devs make smarter decisions about their code now that they can see what's happening in the background.

ilitirit 1 day ago 0 replies      
Who is going to break the news to Slashdot?
pritambaral 2 days ago 0 replies      
What about the AOT compiler recently announced?
jimmcslim 2 days ago 2 replies      
I wonder what the impact of this is for Resharper, Jetbrains .Net refactoring tool, and their Nitra effort?
sgy 2 days ago 0 replies      
Now with Windows free and much of its code open-sourced, Microsoft is going to try to make money on services and other software that come with Windows. It's a risk, but better than the alternative: watching Android completely take over the planet.

It's not an advertising company like Google. Google makes money when you use the Internet; Microsoft makes money when you pay for its software.

Yuioup 2 days ago 2 replies      
Yes! I hope somebody will create a VB.NET to C# converter with this.
arjn 2 days ago 0 replies      
Wow, big changes at MS.
sagargv 2 days ago 0 replies      
How does Microsoft benefit by open sourcing the C# compiler? How will this drive users/developers towards Windows?
irishjohnnie 2 days ago 0 replies      
I was hoping they would update Rotor to reflect the new CLR
matheusbn 2 days ago 0 replies      
Microsoft should have made this announcement two days ago! :)
copter 1 day ago 1 reply      
I suspect Scott Hanselman has huge impact on this. Thanks for pushing it Scott.
vmmenon 2 days ago 0 replies      
I wish they would post the sources of the initial BASIC that Bill and Paul wrote.
jhprks 2 days ago 1 reply      
I think we're all very lucky to have a corporation as innovative, open-minded, and generous as Microsoft. Microsoft is a company that every company should look up to.
faruzzy 2 days ago 0 replies      
Now we're talking!
duongkai 1 day ago 0 replies      
It's a good sign.
notastartup 2 days ago 0 replies      
What sort of changes can we see from this move? Could we generate ASP code by writing it in PHP first? Can .NET be run on Apache and Linux servers?
duongkai 1 day ago 0 replies      
It's a good sign
leccine 2 days ago 0 replies      
RIP Java! :)
anaphor 2 days ago 0 replies      
Great, now GPL the entire windows kernel :)
paulftw 2 days ago 0 replies      
What about the past statements of MS executives? E.g., "A Microsoft legal representative has said during a hearing in the European Parliament that open source actually presents a higher vulnerability risk."
ndesaulniers 2 days ago 0 replies      
It's great to see M$ embracing open source. If you think Open Source is important, let me know! https://github.com/nickdesaulniers/What-Open-Source-Means-To...
Touche 2 days ago 3 replies      
Step in the right direction. I'm still waiting for

  git clone https://github.com/microsoft/windows.git
to happen

C# 6: First reactions msmvps.com
107 points by yulaow  19 hours ago   54 comments top 12
louthy 15 hours ago 4 replies      
I think Declaration expressions are a mistake, and proper support for tuples (and pattern matching) should have been put in. It feels like an unbalanced expression, with the result on the left and the right:

    var success = int.TryParse(s, out var x);
I'm a bit 'meh' about Primary constructors, Auto-property initializers, and Getter-only auto-properties. It seems a bit messy and incomplete. Personally I don't use properties anymore and just use readonly fields with constructor initialisation (which is enough to capture the property-setting logic). What I would like to have seen in this area is 'readonly' classes, with mechanisms for cloning objects using named parameters for partial updates:

    readonly class ReadOnlyClass(public int X, public int Y, public int Z)
    {
    }

    ReadOnlyClass obj = new ReadOnlyClass(1,2,3);
    ReadOnlyClass newObj = obj.X = 10;
Instead of the current system for creating immutable classes in C#, which becomes quite unwieldy and error-prone as the number of fields grows:

    class ReadOnlyClass
    {
        public readonly int X;
        public readonly int Y;
        public readonly int Z;
        public ReadOnlyClass(int x, int y, int z)
        {
            X = x;
            Y = y;
            Z = z;
        }
        public ReadOnlyClass SetX(int x)
        {
            return new ReadOnlyClass(x, Y, Z);
        }
        public ReadOnlyClass SetY(int y)
        {
            return new ReadOnlyClass(X, y, Z);
        }
        public ReadOnlyClass SetZ(int z)
        {
            return new ReadOnlyClass(X, Y, z);
        }
    }

    ReadOnlyClass obj = new ReadOnlyClass(1,2,3);
    ReadOnlyClass newObj = obj.SetX(10);
I think 'using' static members will be a huge win for creating functions which appear to be part of the language. For example I use the following to do type inference on lambda expressions:

    static class lamb
    {
        public static Func<T> da<T>(Func<T> fn)
        {
            return fn;
        }
    }

    var fn = lamb.da( () => 123 );
I've tried to make it look like part of the language by making it all lowercase, but it's still a little odd. Being able to 'using' a static class would be perfect for this kind of stuff. To be used sparingly, obviously.

Expression bodied members, yes please! So pretty...

    public double Dist => Sqrt(X * X + Y * Y);
Typecase and Guarded Cases would be lovely too. Basically anything that enables expression based coding in C#. It would be nice if the LINQ language functions were extended too: count, take, skip, tolist. It's slightly less pretty when having to wrap the expression in brackets to call .ToList();
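
For comparison, the clone-with-named-updates idea sketched above already exists in other languages, e.g. Python's namedtuple, whose `_replace` does exactly this (a rough analogue, not a C# proposal):

```python
from collections import namedtuple

# An immutable record: fields are read-only after construction.
ReadOnly = namedtuple("ReadOnly", ["x", "y", "z"])

obj = ReadOnly(1, 2, 3)
new_obj = obj._replace(x=10)  # clone with one field changed, rest copied

print(obj)      # ReadOnly(x=1, y=2, z=3) -- original untouched
print(new_obj)  # ReadOnly(x=10, y=2, z=3)
```

No SetX/SetY/SetZ boilerplate, and adding a field can't silently break one of the hand-written setters.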

bruceboughton 17 hours ago 3 replies      
I'm not sure of this direction. It looks like a rather large syntax explosion to cover a bunch of corner cases. C# doesn't want to become Perl. That's my initial gut feeling but perhaps that will change.

Does anyone know what the dollar syntax is? Similarly, the new dictionary initialiser syntax? Finally, what is private protected?

Edit: these are from https://roslyn.codeplex.com/wikipage?title=Language%20Featur...

Pxtl 12 hours ago 1 reply      
I looked at this earlier - on the one hand, I like that they're directly tackling known pain-points in the language. The declaration expression "out var x" thing is, imho, great. The TryParse syntax has always been a problem because you have to declare the variable the line before, which means no type inferencing and whatnot. Also, good for casting as they show.

Using static is great - finally we've liberated the verbs in the kingdom of nouns.

However, some of them have really ugly syntax. I know they wanted to get rid of the boilerplate problem of declaring and setting into a read-only member, but the primary constructor syntax is monstrous - declaring local variables inside of your constructor? Really? I feel like just having a "readonly set" access modifier on properties (both auto and explicit) would have been sufficient.

The new dictionary accessor seems silly.

Exception filters seem just as pointless in C# as they did in VB (nesting an "if/else throw" into a catch clause wasn't too hard).

The crazy null ?. operator I'm unsure about... that one I'll have to use. I've run into the problem enough times to see why they did it, though.
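
One way to evaluate the `?.` operator is to look at what it does: each link in `a?.b?.c` short-circuits to null if its receiver is null. A rough Python rendering of that semantics (hypothetical helper, just for illustration):

```python
def null_propagate(obj, *attrs):
    """Rough analogue of C#'s a?.b?.c: stop at the first null link."""
    for attr in attrs:
        if obj is None:
            return None
        obj = getattr(obj, attr)
    return obj

class Node:
    def __init__(self, child=None, name=None):
        self.child = child
        self.name = name

leaf = Node(name="leaf")
root = Node(child=leaf)

print(null_propagate(root, "child", "name"))  # "leaf"
print(null_propagate(leaf, "child", "name"))  # None: chain stops at the missing child
```

The whole chain collapses to null instead of throwing, which is exactly the nested if/null-check boilerplate the operator is meant to replace.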

Locke1689 10 hours ago 0 replies      
Thanks for the feedback, Jon!

FYI, we have discussion threads on http://roslyn.codeplex.com.

Hominem 15 hours ago 0 replies      
Seems like 6.0 is all about reducing boilerplate. Those default constructors and null checks will probably save me hours of typing, as ctors that do nothing but set fields, and null checks on properties all the way down, are such common code patterns.
jksmith 16 hours ago 0 replies      
Hejlsberg is continuing the tradition from Delphi that "less is less." Golang, stay disciplined and focused, my friend.
zwieback 17 hours ago 0 replies      
Looks good but I don't know if I'll be able to keep these fresh enough in my head to be able to use these things without looking them up. Trying to keep up with the explosion of features in the curly brace languages starts getting really hard. Moving between low level FW (C), desktop apps (C++/C#) and Android (Java) something will have to give, my code will probably continue to just look really 2005-ish.

I really like what the monadic null checking does, but aesthetically it looks pretty ugly right now.

Spearchucker 16 hours ago 1 reply      
Not as familiar with C# as I am with VB.NET, but this (the "before" scenario) strikes me as a little odd (incorrect, or not the whole story, maybe?):

Readonly properties require less syntax.


    private readonly int x;

    public int X { get { return x; } }

The only scenarios where I require readonly properties are where I set or otherwise calculate a private variable at runtime, and then expose it through a read-only public getter.

In the example above, it looks like the private variable is readonly, and must therefore be initialized with a value that cannot be changed. Am I reading this correctly?

AndrewDucker 18 hours ago 2 replies      
Is there a list anywhere of all the new functionality?
platz 16 hours ago 1 reply      
This is perhaps a bit trollish but hopefully someone will get some enjoyment out of it.


greatdox 16 hours ago 1 reply      
Well, it is nice that C# 6 becomes cross-platform. Can someone please explain the license behind it?

Microsoft usually does the license thing that takes away some of your rights. Is it the Apache 2 license or something else? What are the limitations on this project?

Thanks in advance.

django-fanboy 16 hours ago 0 replies      
Whatever new features C# includes, people will still hate it.
       cached 6 April 2014 07:02:01 GMT