hacker news with inline top comments - 17 Apr 2014
The New Linode Cloud: SSDs, Double RAM and much more linode.com
190 points by qmr  1 hour ago   96 comments top 26
kyrra 59 minutes ago 3 replies      
I forgot to benchmark the disk before I upgraded, but here are some simple disk benchmarks on an upgraded Linode (the $20 plan, now with SSD):

  $ dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
  1024+0 records in
  1024+0 records out
  1073741824 bytes (1.1 GB) copied, 1.31593 s, 816 MB/s

  $ hdparm -tT /dev/xvda
  /dev/xvda:
   Timing cached reads:   19872 MB in  1.98 seconds = 10020.63 MB/sec
   Timing buffered disk reads: 2558 MB in  3.00 seconds = 852.57 MB/sec
Upgraded cpuinfo model: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

Old cpuinfo model: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz

CPUs compared: http://ark.intel.com/compare/75277,40201

madsushi 32 minutes ago 1 reply      
Why do I pay Linode $20/month instead of paying DO $5/month(1)?

Because Linode treats their servers like kittens (upgrades, addons/options, support), and DO treats their servers like cattle. There's nothing wrong with the cattle model of managing servers. But I'm not using Chef or Puppet, I just have one server that I use to put stuff up on the internet and host a few services. And Linode treats that one solitary server better than any other VPS host in the world.

(1) I do have one DO box as a simple secondary DNS server, for provider redundancy

orthecreedence 5 minutes ago 1 reply      
Bummer, they're taking away 8 cores for the cheap plans and replacing them with 2. Does anyone know if the new processors will offset this difference? I don't know the specs of the processors.

Linode's announcements usually come in triples... I'm excited for number three. Let's hope it's some kind of cheap storage service.

nivla 1 hour ago 3 replies      
Awesome news. Competition really pushes companies to please their customers. Ever since DigitalOcean became the new hip, Linode has been pushing harder. My experience with them has been mixed. Setting aside their previous mishaps and the feeling that the level of customer service has gone down, they have been decent this past year. I wouldn't mind recommending them.

[Edit: Removed the bit about DigitalOcean Plans. If you have Ghostery running, it apparently takes out the html block listing different plans]

rjknight 1 hour ago 7 replies      
It looks like Linode are still leaving the "incredibly cheap tiny box" market to DO. Linode's cheapest option is $20/month, which makes it slightly less useful for the kind of "so cheap you don't even think about it" boxes that DO provide.
endijs 1 hour ago 2 replies      
The most interesting part of this great upgrade is that they went from an 8-CPU setup to a 2-CPU setup. But yeah - 2x more RAM and SSDs will guarantee that I'm not going to switch anytime soon. Sadly I need to wait a week until this is available in London.
vidyesh 14 minutes ago 0 replies      
So this makes Linode practically on par with DO's $20 plan. Up till now the $20 plan at DO was better; now it's just a choice of brand.

But here is one thing that DO provides and I think Linode should too: you get the choice to spin up a $5 instance anytime in your account for any small project or a test instance, which you cannot do on Linode.

relaxatorium 1 hour ago 2 replies      
This seems pretty fantastic, I am excited to upgrade and think the SSD storage is going to be really helpful for improving the performance of my applications hosted there.

That said, I am not an expert on CPU virtualization but I did notice that the new plans are differently phrased than the old ones here. The old plans all talked about 8 CPU cores with various 1x, 2x priority levels (https://blog.linode.com/2013/04/09/linode-nextgen-ram-upgrad... for examples), while the new plans all talk about 1, 2, etc. core counts.

Could anyone with more expertise here tell me whether this is a sneaky reduction in CPU power for the lower tiered plans, or just a simpler way of saying the same thing as the old plans?

munger 39 minutes ago 1 reply      
Rackspace Cloud customer here. These Linode upgrades are very tempting and may entice me to switch.

I get that I might not be their target market (small business with about $1000/month in IaaS spending) but there are a couple of things preventing me from doing so:

1) A $10/month size suitable for a dev instance.

2) Some kind of scalable file storage solution with CDN integration, like RS CloudFiles/Akamai or AWS S3/CloudFront, or block storage to attach to an individual server.

I guess you get what you pay for: in infrastructure components and flexibility, AWS > RS > Linode > DO, which roughly matches the price points.

giulianob 1 hour ago 0 replies      
Holy crap this is awesome. Good job guys at Linode. I said I would switch if the prices dropped about 25% because RAM was pricey.... So now I have to switch.
davexunit 1 hour ago 2 replies      
Cool news, but their website now has the same lame design as DigitalOcean. I liked the old site layout better.
raverbashing 1 hour ago 0 replies      
Congratulations to Linode.

I stopped being a customer when I migrated to DO, but my needs were really small.

But I think their strategy of keeping the price and increasing capabilities is good. Between $5 and $20 is a "big" difference for one person (still, it's a day's lunch); for a company it's nothing.

However, I would definitely go to Linode for CPU/IO intensive tasks. Amazon sucks at these (more benchmarks between the providers are of course welcome)

mwexler 1 hour ago 1 reply      
There's similar and then there's alike. I guess it makes comparison easy, but imitation certainly must be the sincerest form of flattery:

Compare the look and feel of https://www.linode.com/pricing/ and https://www.digitalocean.com/pricing/

level09 47 minutes ago 0 replies      
I would probably move back from DigitalOcean if they offered a $10/mo plan.

I know that's not a big price difference, but some websites really don't need a lot of resources. They work well on DO's $5 server, and I have a lot of them.

jevinskie 1 hour ago 0 replies      
I resized a 1024 instance to 2048 last night and it looks like it is already running on the new processors (from /proc/cpuinfo): model name: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

Should I upgrade? Do I want 2 x RAM for 1/2 vCPUs? =)

extesy 1 hour ago 2 replies      
So now they match DigitalOcean prices but offer slightly more SSD space for each plan. I wonder what DO's answer to this will be. They haven't changed their pricing for quite a while.
bfrog 1 hour ago 3 replies      
I'm actually a little unhappy; it looks like they reduced the CPU count for my $20/mo instance. At this point there's basically no reason to stay with them.
jaequery 58 minutes ago 1 reply      
DO's biggest problem is their lack of zero-downtime snapshot backups and upgrades. I've not used Linode, but does anyone know if theirs is any different?
dharma1 19 minutes ago 0 replies      
ohhh yesss. DO is good for some locations like Southeast Asia but loving this upgrade for my London and Tokyo Linodes
h4pless 1 hour ago 2 replies      
I notice that Linode talked a good bit about their bandwidth and included outbound bandwidth in their pricing model, which DO does not. I wonder if DO has a similar model or if transfer capacity is the only thing you have control over.
jaequery 1 hour ago 0 replies      
I'm really impressed by their new CPU specs. From experience those aren't cheap, and it's possibly the fastest CPU on the market. Combined with the SSDs, Linode may currently be the fastest of any cloud host.
ff_ 34 minutes ago 0 replies      
Wow, that's beautiful. Currently I'm a DO customer ($10 plan), and if Linode had a $10 plan I'd make the switch instantly.
Justen 1 hour ago 1 reply      
Higher specs sound really nice, but on HN I see people commenting on the ease of DO's admin tools. How does Linode's compare?
zak_mc_kracken 54 minutes ago 1 reply      
Do either Linode or DigitalOcean offer plans without any SSD? I couldn't find any.

I just want to install some personal projects there for which even SSDs are overkill...

notastartup 16 minutes ago 0 replies      
These upgrades are impressive but they are a bit too late to the game. DO still has these advantages besides the cheap monthly price:

- DO has excellent and easy to understand API
- Step by step guides on setting up and running anything
- Minimal and simple

To entice me, it's no longer just a matter of price, DO has extra value added, largely due to their simplicity.

izietto 1 hour ago 0 replies      
Do you know cheaper alternatives? Like DigitalOcean, as @catinsocks suggests
How Americans Die bloomberg.com
188 points by minimax  2 hours ago   84 comments top 25
tokenadult 12 minutes ago 1 reply      
About three or four slides in you get the take-away message, which is often missed in discussions about mortality here on Hacker News: "If you divide the population into separate age cohorts, you can see that improvements in life expectancy have been broad-based and ongoing." And this is a finding that applies not only to the United States, but to the whole developed world. I have an eighty-one-year-old mother (born in the 1930s, of course) and a ninety-four-year-old aunt (born in the 1920s) and have other relatives who are quite old and still healthy. Life expectancy at age 40, at age 60, and at even higher ages is still rising throughout the developed countries of the world.[1] An article in a series on Slate, "Why Are You Not Dead Yet? Life expectancy doubled in past 150 years. Here's why."[2] explains what incremental improvements have led to better health and increased life expectancy at all ages in the United States. The very fascinating data visualizations in the article submitted today highlight the importance of research on preventing suicide, reducing drug abuse, and preventing senile dementia such as Alzheimer's disease, which is where some of the next progress in prolonging healthy life will have to come from.

Professional demographers try to think ahead about these issues, not least so that national governments in various countries can project the funding necessary for publicly funded retirement income programs and national health insurance programs. Demographers have now been following the steady trends long enough to make projections that girls born since 2000 in the developed world are more likely than not to reach the age of 100,[3] with boys likely to enjoy lifespans almost as long. The article "The Biodemography of Human Ageing"[4] by James Vaupel, originally published in the journal Nature in 2010, is a good current reference on the subject. Vaupel is one of the leading scholars on the demography of aging and how to adjust for time trends in life expectancy. His striking finding is "Humans are living longer than ever before. In fact, newborn children in high-income countries can expect to live to more than 100 years. Starting in the mid-1800s, human longevity has increased dramatically and life expectancy is increasing by an average of six hours a day."
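Vaupel's "six hours a day" figure converts to roughly three months of added life expectancy per calendar year; a quick check of the arithmetic:

```javascript
// Convert "life expectancy grows six hours per day" into years per year.
const hoursPerDay = 6;
const hoursGainedPerYear = hoursPerDay * 365;              // 2190 hours
const yearsGainedPerYear = hoursGainedPerYear / (24 * 365); // 6/24 = 0.25

console.log(yearsGainedPerYear); // 0.25, i.e. about three months per year
```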

I was in a local Barnes and Noble bookstore back when I was shopping for an eightieth birthday gift (a book-holder) for my mom, and I discovered that the birthday card section in that store, which is mostly a bookstore, had multiple choices of cards for eightieth birthdays and even for ninetieth birthdays. We will be celebrating more and more and more birthdays of friends and relatives of advanced age in the coming decades.

[1] http://www.nature.com/scientificamerican/journal/v307/n3/box...

[2] http://www.slate.com/articles/health_and_science/science_of_...

[3] http://www.prb.org/Journalists/Webcasts/2010/humanlongevity....

[4] http://www.demographic-challenge.com/files/downloads/2eb51e2...

wtvanhest 2 hours ago 1 reply      
The data is interesting, but somewhat difficult to draw conclusions from without considering how different rates are impacting other rates. What is really noteworthy here is the approach to showing the data. It's effortless to scroll through.

Here are some things I noticed after the fact:

1. I naturally wanted to finish the presentation and was compelled to click to see if there were any amazing insights.

2. After the fact, I have no idea how I even advanced the presentation, all I knew was that I clicked something. It was 100% natural.

It fully pulled me in. I can't remember if there were ads on the sides or more information.

[added] I went back and looked at it again and I think what made it so flawless is that the first page gave me no option but to click the right hand arrow which taught me what to look for. I clicked the right arrow, and then I knew to click it again to advance. The progress dots on the top let me know that I didn't have much time left. Really amazing work here.

joshuak 2 hours ago 3 replies      
So to achieve longevity escape velocity [0]

1. Don't have unprotected sex if you're less than 44 years old.

2. Don't kill yourself, or do drugs, if you're less than 54 years old.

3. Invest heavily in heart disease, cancer, and Alzheimer's research.

[0] http://en.wikipedia.org/wiki/Longevity_escape_velocity

ihodes 1 hour ago 3 replies      
Probably the four most important things you can do to change your odds of making it past 80 are:

    1. Not smoking.
    2. Eating healthily (fiber, vitamins, low sugar; this is a nascent field).
    3. Exercising regularly.
    4. Wearing sunscreen and minimizing sun exposure.
These will collectively reduce your risk of common cancers significantly, as well as protect against heart disease. Additionally, they can help strengthen your immune system and body against other diseases that e.g. the malnourished or obese would be more likely to succumb to.

webwright 1 hour ago 2 replies      
Ugh, the fact that many of these charts show raw # of deaths versus deaths/100k really masks how much things have improved. In 1968, the population was 64% of our current population... So a flat line is actually a pretty massive improvement.
minimax 2 hours ago 0 replies      
If you liked this, you might enjoy some of their previous articles. It's interesting to see them iterating on the technique.

Consumer spending (from last December): http://www.bloomberg.com/dataview/2013-12-20/how-we-spend.ht...

Housing prices (from February): http://www.bloomberg.com/dataview/2014-02-25/bubble-to-bust-...

brudgers 33 minutes ago 1 reply      
"And, how do suicide and drugs compare to other violent deaths across the population? Far greater than firearm related deaths, and on the rise

In 2010, 19,392 of the 38,364 suicides were "by discharge of firearm" [the same term used for classifying 11,078 homicides and 606 accidental deaths]. Seems a bit odd that the report classifies the accidents and homicides as "firearm related deaths" but the suicides as unrelated.

From a public health perspective, a 50% reduction in suicide by firearm would save more lives than the complete elimination of HIV deaths or cervical cancer deaths or uterine cancer deaths.
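The magnitude of that claim is easy to check from the figures quoted above:

```javascript
// 2010 deaths "by discharge of firearm", from the figures quoted above.
const firearmSuicides = 19392;
const totalSuicides = 38364;

// Firearm suicides are roughly half of all suicides...
const share = firearmSuicides / totalSuicides;
console.log(share.toFixed(3)); // "0.505"

// ...so a 50% reduction in firearm suicides would avert ~9,696 deaths/year.
const livesSaved = Math.round(firearmSuicides * 0.5);
console.log(livesSaved); // 9696
```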


imgabe 2 hours ago 1 reply      
So in 1968 all age cohorts had the exact same mortality rate of 100 per 100,000? Why is that?
mberning 2 hours ago 6 replies      
Any info on how they create these visualizations? Are they using any particular libraries or frameworks?
ABNWZ 1 hour ago 3 replies      
"This is particularly striking since cancer and heart disease - the two biggest killers for 45-54 yr olds - have become much less deadly over the years"

Except your graph shows that cancer death rates have increased by almost 20% from 1968-2010... Am I missing something here?

richev 2 hours ago 4 replies      
Very nice graphs and visualisations, but am I alone in finding most of them hard to understand?
Pxtl 1 hour ago 0 replies      
Maybe we should have a war on drugs, then. I'm sure that would work.

Getting guns out of our communities is probably easier than getting drugs out of them, not to mention mental conditions that lead to suicide.

infosample 1 hour ago 3 replies      
Black males die at such a higher rate from AIDS. Are they having that much more unprotected sex, taking that many more drugs from dirty needles, or getting that much inferior treatment than the general population?
bittercynic 26 minutes ago 0 replies      
I couldn't figure out any way to navigate without using the mouse.
dmritard96 1 hour ago 0 replies      
"progress stopped in the mid 1990s"maybe i am missing something but it seems like the mortality rate would be a lagging indicator progress hence progress would have "stopped" earlier?

Not that I necessarily would say it stopped at all...

rpedela 2 hours ago 4 replies      
The part about suicides is pretty interesting and perplexing. Are there any insights into why the rate has increased?
devanti 1 hour ago 0 replies      
Surprised how nice the visualization looks, given how ugly the Bloomberg terminal is
matthewisabel 1 hour ago 1 reply      
I created a visualization on a similar topic that looked at mortality rates state-by-state using the 2010 census data. It was on HN about six months ago.


RobotCaleb 1 hour ago 0 replies      
That's neat, but it's very hard to tell the colors apart.
brokenrhino 53 minutes ago 0 replies      
I wonder: is the drop in car accident deaths caused by:

1) Cash for Clunkers taking old dangerous cars off the road, so the fleet consists of newer, safer cars; or

2) People driving less since the recession and the gas price increases?
0003 2 hours ago 1 reply      
Any reason why the 75-84 group was outliving the 85+ group until recently?
fophillips 2 hours ago 0 replies      
Need some error bars on that data.
cheetahtech 2 hours ago 3 replies      
It's interesting to see that drugs and suicide are among the highest causes of death, well over that of guns. But we seem to be progressing more towards a drug-open world and a gun-closed world. Do you see the irony?
dragontamer 53 minutes ago 0 replies      
<script src="global/js/jquery-1.8.3-min.js" charset="utf-8"></script>

<script src="js/modernizer.2.7.1.js" charset="utf-8"></script>

<script src="js/underscore.1.5.2.js" charset="utf-8"></script>

<script src="global/js/less.js" charset="utf-8"></script>

<script src="global/js/d3.v2.js" charset="utf-8"></script>

<script src="js/jquery.cycle.all.js" charset="utf-8"></script>

It looks like the majority of this visualization was built with the D3.js library. I've been seeing more and more web documents of this style; it must be because of the rise of D3.
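For anyone wondering what D3's contribution actually is: its core pattern is computing visual attributes directly from data. A dependency-free sketch of that idea (the rates below are invented, not from the Bloomberg piece):

```javascript
// Data-driven rendering, the idea at the heart of D3: each datum maps to
// one element, with visual attributes computed from the data.
// The rates here are illustrative, not from the Bloomberg article.
const rates = [820, 610, 430, 250]; // deaths per 100k, hypothetical
const maxRate = Math.max(...rates);

const bars = rates
  .map((r, i) => {
    const width = Math.round((r / maxRate) * 300); // scale to 300px
    return `<rect x="0" y="${i * 22}" width="${width}" height="20"></rect>`;
  })
  .join("\n");

console.log(bars); // four <rect> elements, widths proportional to rate
```

D3 wraps the same map-data-to-attributes step in its selection/data-join API and adds scales, axes, and transitions on top.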

EGreg 47 minutes ago 0 replies      
"That's why total deaths in the 75+ category has stayed constant"

I thought that was a particularly funny statement. Reminded me of The Onion: http://www.theonion.com/articles/world-death-rate-holding-st...

Ubuntu 14.04 released ubuntu.com
50 points by pjvds  32 minutes ago   18 comments top 7
neverminder 25 minutes ago 2 replies      
I don't know about everyone else, but for me "released" means I can download it from the official location - http://www.ubuntu.com/download/desktop - and that is not the case yet.
plg 23 minutes ago 0 replies      
It still shows beta at the link given above.
ziggamon 23 minutes ago 1 reply      
Tried to find some sort of release notes; the best thing I could find was this: https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

If someone has a better link, please share!

floor_ 20 minutes ago 1 reply      
Does Ubuntu still send personal data to Amazon?
ateevchopra 2 minutes ago 0 replies      
It's great to see another LTS. Finally time to upgrade from 12.04.
bsg75 18 minutes ago 1 reply      
At this point this is a link to beta 2 from March.
bluedino 17 minutes ago 1 reply      
Do people actually use jigdo?
NYTimes open-sources PourOver, a library for fast in-browser filtering nytimes.github.io
62 points by jsvine  1 hour ago   8 comments top 7
danso 43 minutes ago 1 reply      
So I visited one of the PourOver examples, this Academy Awards fashion feature published earlier this year:


I opened the dev tools to inspect the traffic and code, and this pops up in the console:

  [ASCII art: the New York Times "T" logo]

  NYTimes.com: All the code that's fit to printf()

  We're hiring: http://nytimes.com/careers
....You sneaky audience-targeting bastards

dmix 1 hour ago 0 replies      
This page needs a giant "demo" button near the top. The examples are all code.
JangoSteve 34 minutes ago 0 replies      
It seems similar to our Dynatable plugin [1], which is basically the functionality of this plugin with some additional table-read/write functions included. The main difference being that this library depends on underscore, while Dynatable depends on jQuery (which is mainly used for its browser compatibility functions).

Given both libraries' emphasis on speed, it looks like I have something to benchmark against!

[1] http://www.dynatable.com
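Both libraries optimize the same underlying operation: filtering a collection in the browser. A minimal sketch of the precomputed-index idea PourOver describes (the item data is made up, and this is not either library's actual API):

```javascript
// Naive client-side filtering re-scans the whole array on every query.
// Libraries like PourOver speed this up by precomputing, per attribute
// value, the set of matching item ids, then combining those sets.
// Items below are illustrative only.
const items = [
  { id: 1, color: "red", size: "M" },
  { id: 2, color: "blue", size: "M" },
  { id: 3, color: "red", size: "L" },
];

// Precompute once: attribute value -> Set of matching ids.
const byColor = new Map();
for (const it of items) {
  if (!byColor.has(it.color)) byColor.set(it.color, new Set());
  byColor.get(it.color).add(it.id);
}

// Query via the index instead of re-filtering the array each time.
const redIds = byColor.get("red");
console.log([...redIds]); // [1, 2] would be wrong; this prints [1, 3]
```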

barkingcat 56 minutes ago 0 replies      
More details at http://open.blogs.nytimes.com/2014/04/16/introducing-pourove...

There are a few links to projects at the NYT that have used these two libraries.

bestest 21 minutes ago 0 replies      
Benchmarks comparing PourOver to Backbone would be nice. Anyone?
paulcnichols 58 minutes ago 0 replies      
Reminds me of crossfilter (http://square.github.io/crossfilter/) by square. It has a killer demo, however.
anarchy8 1 hour ago 0 replies      
Anyone have a demo?
Seznam (Czech search company) 3D maps preview mapy.cz
137 points by rplnt  3 hours ago   58 comments top 23
lubos 1 hour ago 2 replies      
Honestly, I'm surprised to see Seznam on HN. I grew up on the Czech internet in the 90s, and Seznam.cz (or "Directory" in English) was huge for a long time until Google eventually beat them. The vibe I get from the comments here is as if Seznam.cz is some new hot company, while it is really a dying dinosaur like Yahoo.

Maps are not a core competency of this company. They are early internet pioneers who have maintained a huge portfolio of services for almost two decades. Maps is just another service they are working on to keep users from leaving them for Google/YouTube, Facebook, etc.

Btw, I spoke to Seznam.cz founder briefly once at some business event in Slovakia back in 2000 when I was 17

edit: their maps are created by Melown.com, see example https://www.melown.com/maps/

bhouston 3 hours ago 1 reply      
I think that given that Google already has 3D depth coverage from its street view machines [1], it should be possible to combine that data with some medium resolution overhead 3D scans to create something similar, and likely even higher quality at the street level.

I wonder why Google hasn't done it yet. I don't think there are any real technical limitations. It may be that getting it fast is hard and the usefulness from an end user perspective isn't there yet?

[1] http://gizmodo.com/depth-maps-hidden-in-google-street-view-c...

zk00006 3 hours ago 2 replies      
Based on the posts, people think that Seznam.cz is a startup and Google will buy it in 3, 2, 1. This is complete nonsense. Seznam is far from a startup and I am pretty sure their goal is not to get "only" acquired. Its mapping service is superior to Google's as far as the Czech Republic is concerned. Well done guys!
lars 1 hour ago 0 replies      
The Norwegian site Finn.no got 3D maps that looked exactly like this back in 2008. [0]

As the link explains, the technology originates from the Swedish air force, and was meant to guide missiles through urban landscapes. It has since been commercialized for civilian uses by the company C3 Technologies.

This looks like it's exactly the same technology.

[0]: http://labs.finn.no/sesam-3d-map-3d-revolution-the-people/

fractalsea 2 hours ago 0 replies      
I find this very impressive. The fact that you can rotate arbitrarily and see correct textures applied to all surfaces of buildings/foliage is amazing.

Can anyone provide any insight into how this is done? Is there a dataset which specifies the detailed 3D layout of the earth? If so, how is it generated? Is there satellite imagery of all possible angles? Is this all automated, or is there a lot of manual work in doing all of this?

suoloordi 2 hours ago 0 replies      
Is this different from Nokia's 3D maps? This is Stockholm: http://here.com/59.3314885,18.0667682,18.9,344,59,3d.day

edit: I see this covers different regions in the Czech Republic, whereas Nokia covers some well known cities all over the world.
kome 1 hour ago 0 replies      
Far better than Google, Bing and Apple maps. Nice work, Seznam.

Why does Seznam not exist in other European languages?

The Czech Republic is a small market, and if they focus just on the Czech Republic their economies of scale will break down very soon. They need investment to update technology, but if their market is so small it becomes prohibitively expensive very quickly.

robmcm 2 hours ago 0 replies      
I hate the use of the history API.

I don't want the back button to navigate the map!

Piskvorrr 2 hours ago 1 reply      
Why does the error message remind me of "This site is only accessible in IE5. Get it [here]"?

In other words, we seem to be rapidly drifting back into the Bad Old Days, when sites were made for a single browser? Not using Firefox? You're SOL. Not using Chrome? You're SOL elsewhere.

chris-at 3 hours ago 1 reply      
secfirstmd 3 hours ago 2 replies      
Cool, I smell a buyout in 5, 4, 3, 2, 1... :)

I like the idea of bringing back more of the contours into maps once again. The move to flat satellite and Google Maps style stuff has meant the act of being able to navigate based on most efficient effort (e.g. across contours, not just A to B) is rapidly getting lost.

RankingMember 3 hours ago 2 replies      
Very nice. I wonder where the source data (building textures, etc) came from.
antjanus 2 hours ago 0 replies      
In all the time I've been coming here, I never would have thought that Seznam would make it to HN. You should check out their tile search feature!

They experiment a TON, all the time.

helloiamvu 2 hours ago 0 replies      
Seznam is also working on 'Street View'. Check this out: https://scontent-b-lhr.xx.fbcdn.net/hphotos-prn1/l/t1.0-9/10...
_mikz 3 hours ago 1 reply      
Vypadá to skvěle. Looking great.
vb1977 30 minutes ago 0 replies      
The model is calculated from aerial photographs. The software for this was made by Melown Maps, a Czech computer vision company. See their website http://www.melown.com/maps for more models.
dharma1 2 hours ago 0 replies      
Same stuff as Apple Maps and Nokia 3D maps - low-flying planes and lots of photos. Apple bought a Swedish company from Saab to do this.

Nice to see it can be done with a single UAV and camera. Is there any open source software doing this?

ReshNesh 1 hour ago 0 replies      
That's where I run. Very cool
SchizoDuckie 3 hours ago 1 reply      
Sweet holy mamajama.

Have they actually scanned this? Or are they generating it from Google Maps imagery?

evoloution 3 hours ago 5 replies      
Would Google try to buy the startup, hire the developers, or just reinvent the wheel in-house?
Almad 3 hours ago 0 replies      
Thumbs up!
dermatologia 2 hours ago 0 replies      
I like it.
toddkazakov 2 hours ago 0 replies      
A bit of XENIX history seefigure1.com
16 points by luu  39 minutes ago   3 comments top 3
cstross 0 minutes ago 0 replies      
Hmm. I joined SCO in early 1991, and one of my first jobs in techpubs was working on documenting compatibility between SCO-branded Xenix and SCO's release of SVR3.2 UNIX -- which was able to run binaries compiled for SCO Xenix (unsurprisingly) but offered a bunch of extras. AIUI SCO had been doing a lot of development of Xenix from 1986/87 onwards, when Microsoft made the strategic decision to focus on OS/2 and the successor to DOS. Taking on Xenix was what enabled SCO to grow to a $200M/year turnover multinational in about 5 years; and failing to understand the implications of Linux was probably what killed SCO (or rather, when they finally got it, they split the company and sold the UNIX IP to Caldera, who renamed themselves SCO and attempted to sue the universe) -- the rest is history.
ja27 1 minute ago 0 replies      
XENIX was my first *nix. Back around 1985 my high school had a Tandy TRS-80 with the 68000 processor and 6-12 terminals. They used it to replace a Burroughs mini for the COBOL class. I just barely missed punching cards for the Burroughs and instead learned vi and stupid tricks like writing to other users' ttys. I still torment my Microsoftie friends when I remind them that they're the ones that got me started on *nix, long before Linux or OSX came around.
CurtHagenlocher 4 minutes ago 0 replies      
> Xenix should be the 16-bit successor to DOS

Shouldn't this be "32-bit"? MS-DOS itself was always 16-bit.

Pourover.js and Tamper.js Client-side superfast collection management opennews.org
32 points by danso  1 hour ago   2 comments top 2
vjeux 22 minutes ago 0 replies      
If your model is a list of enums where you know all the possible values, you can use SmallHash, which encodes to the smallest possible string.
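For the curious, the general trick behind encoders like this is mixed-radix packing; the sketch below illustrates the idea only and is not SmallHash's actual API:

```javascript
// Mixed-radix packing: if each field takes one of k known values, a whole
// record fits in a single integer, which serializes to a very short string.
// Field definitions here are hypothetical examples.
const fields = [
  ["red", "green", "blue"], // 3 possible values
  ["S", "M", "L", "XL"],    // 4 possible values
];

function encode(values) {
  let n = 0;
  fields.forEach((choices, i) => {
    n = n * choices.length + choices.indexOf(values[i]);
  });
  return n.toString(36); // compact base-36 string form
}

function decode(str) {
  let n = parseInt(str, 36);
  const out = [];
  for (let i = fields.length - 1; i >= 0; i--) {
    const choices = fields[i];
    out.unshift(choices[n % choices.length]);
    n = Math.floor(n / choices.length);
  }
  return out;
}

console.log(encode(["blue", "M"])); // "9" (2 * 4 + 1 = 9)
```

Two enum fields collapse into a one-character string, versus the many bytes a JSON encoding of the same record would take.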


NathanKP 26 minutes ago 0 replies      
This looks really useful. I may take a stab at creating a Node.js encoder later today. If it can integrate nicely with Express and/or Restify and use content negotiation to allow the client to specify when it has support for Tamper then that would be a very useful piece of middleware.
Boring Systems Build Badass Businesses devopsu.com
153 points by samscully  6 hours ago   85 comments top 32
onion2k 6 hours ago 3 replies      
In the stated examples there are no benefits to the additional complexity. No one would argue that complexity for the sake of it is a good idea. That'd be insane. If Alice's restaurant could handle 5,000,000 covers a night with only 1 member of staff while Zola's restaurant could only handle 10,000, then you'd have a more realistic scenario to compare with the SaaS industry. The benefit of "complexity" is that you are able to do more things with less work.

The ideal is to build powerful systems from small, simple processes - if any single stage is trivial then anyone can understand it, fix it, modify it, and so on. With many little processes working together you can do amazing things. A good example in software is a build process - a good system can lint code, test it, uglify it, minify it, push it to version control, watch for changes, reload an output mechanism, clean up a distribution, and push it to a live server if it's working all from a single command. That's very 'complex', but really it's just a set of very simple linked processes.
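That composition idea can be sketched directly; the step bodies below are stubs standing in for a real linter, test runner, and minifier:

```javascript
// A "complex" build as a composition of simple, independently
// understandable steps. Each step takes the artifact and returns it
// transformed; these implementations are illustrative stubs.
const lint = (src) => {
  if (!src.trim()) throw new Error("empty source"); // trivial sanity check
  return src;
};
const test = (src) => src;                       // would run the test suite
const minify = (src) => src.replace(/\s+/g, " "); // crude whitespace squeeze

const steps = [lint, test, minify];
const build = (src) => steps.reduce((acc, step) => step(acc), src);
```

Each step stays trivial to understand and fix on its own, while `build` gives you the single command the comment describes.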

jasonkester 5 hours ago 0 replies      
Well said. I can't tell you how nice it is to have software in production on a boring stack. It gives you freedom to do other things.

I can (and often do) go entire months without touching the codebase of my main rent-paying products. It means I can, among other things, pick up a full-time development gig to sock away some extra runway, take off and go backpacking around the world, or better still, build yet another rent-paying product without having to spend a significant amount of time keeping the old stuff alive.

It seems like on a lot of stacks, keeping the server alive, patched and serving webpages is a part-time job in itself. In my world, that's Windows Update's job. Big New Releases come and go, but they're all 100% backwards compatible, so when you get around to upgrading it's just a few minutes of point and clicking with nothing broken.

I see it as analogous to compound interest, but for productivity. The less effort you need to spend on maintenance, the more pace you can keep going forward.

gordaco 4 hours ago 4 replies      
This is why Java is used widely. It works. It works well. And this is also why Java is great for huge systems (not in terms of users, disk space or bandwidth, but in terms of code size). The same can be said about a lot of "old" technologies, and certainly about almost every industry standard out there.

On the other hand, once in a while the Alice/Albert bet happens to win; be it because the new system is really better (as in: easier to maintain, or really capable of handling higher workloads), for non-technological reasons (Alice/Albert just happen to have a great idea), or just because of luck. Over time their technology may even become the new industry standard. The problem here is that it's the Alices/Alberts of the world who make it progress by trying new things (and failing often), but we're afraid of failure.

So, yes, it's completely natural that corps resort to Java or C#, while startups use Scala or Ruby.

For all of you doing startups in shiny new technologies: this means that even failure has a bright side, since even in that case you've put your grain of sand to make the technology more mature.

bsaul 5 hours ago 6 replies      
I can easily see how this post could be misinterpreted, so I'll add my personal experience:

I had the occasion of building the same system in two different companies: one was a startup, the other a huge company.

For the startup I could choose the tech I wanted, and decided to go with Python + App Engine + Backbone (new at the time). Those techs were "hype" yet not absolutely brand new. I took some risks by choosing them but thought it could be worth it.

For the big company I had to go with Java Spring MVC + Sencha; they didn't want to hear of any new tech that would be different from what they were used to. They deployed it on their own infrastructure.

Now, the startup project took 3 man-months; the big company's took more than 7, and a year total before being deployed. The startup only paid an intern to maintain the software, and almost nothing in infrastructure fees. The big company outsourced maintenance to a software service company that proved unable to do even the most basic troubleshooting.

I designed and coded the two systems, and I wasn't a guru of any specific tech, so it's not an issue with the people. Sometimes, under the right circumstances, new technologies are way better.

noonespecial 5 hours ago 1 reply      
The fun part is that you can be either Alice or Zola with nothing but perl.

The moral of the story might just be "stop trying to be clever and start trying to be done", with all of the usual yaddayadda about preoptimized yak razors.

venomsnake 5 hours ago 0 replies      
A simple rule: you should always remove complexity from a project and never add it. It builds up on its own, so any tech you add must remove some complexity from the current project.

Warning signs for tech that brings more complexity than it is worth: extensive XML configs, hiding of executable code, stack traces more than 240 levels deep.

Current favorite offender: GWT. I just love it when something blows up in the JavaScript and it just tells you: well, signature (object, object, object) is not what I expect in a JavaScript apply. And you have no idea where exactly it was generated.

So it comes down to KISS: the project must be of the least possible complexity that solves the problem.

karterk 5 hours ago 0 replies      
Boring systems themselves do not ALWAYS build badass businesses. It's knowing when to stick to boring systems vs. taking the chance on something new. A lot of systems start off as someone's side project. It's a calculated risk when you pick something that brings different things to the table.
chasing 54 minutes ago 0 replies      
Well, I mean, it's all about understanding both the tools and the needs and selecting the tool that fits the need. Some "restaurants" have exotic needs that good ol' Zip might not be able to satisfy using his system. Or Albert might have ways to do things that are way cheaper -- require fewer resources, less time, etc -- but have the drawback of using newer tools that might become abandoned, have low developer numbers, etc.

But, as a developer, this is why you have a conversation with your client and understand what their needs are. So you can understand these trade-offs and make the best possible recommendation. Neither Alice's way nor Zola's way is the Way Things Should Work 100% of the Time.

nadam 4 hours ago 0 replies      
Of course if you are working on a boring problem it is a mistake to try to make it more interesting by incorporating interesting tools. This is a common problem in web development for a lot of people. On the other hand if the problem you want to solve is interesting and hard then probably you will not go far with the boring standard solutions. (see: Oculus Rift).

Summary: if you don't want to be bored, choose interesting problems, not just interesting tools.

mijustin 6 hours ago 0 replies      
The maintenance aspect is huge. I've been able to observe how fancy, complex systems fare over a long period of time (as opposed to simple systems): in almost every case the "cool" complex system required way more maintenance. There are just more things that can break.

Unfortunately, we don't normally record the "long tail" cost of a feature. We build and deploy, but don't keep an eye on how much time it takes to maintain that feature.

noelwelsh 4 hours ago 0 replies      
The problem is, simplicity is not an objective measure. Take monads, for example. To most developers these are a foreign and possibly scary concept. Once you understand them, however, they seem ridiculously simple. This is one of the problems with monad tutorials -- they are so simple there is almost nothing there. I know I spent a long time trying to find a "deep" concept when learning monads, before I realised there isn't one.

Building a system with monads, if you understand them, is simple. You can wire together components easily, and have concurrency, logging, error handling and more all nicely abstracted away. But is this a simple system? It depends entirely on your background.
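For readers without that background, here is a minimal sketch of the error-handling case in Python (a Maybe/Option, not any particular library's API): `bind` threads the did-this-fail check through a pipeline so the individual steps stay simple.

```python
# Minimal Maybe monad sketch for illustration only; names are made up.

class Maybe:
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    @staticmethod
    def just(value):
        return Maybe(value)

    @staticmethod
    def nothing():
        return Maybe(None, ok=False)

    def bind(self, f):
        # Failure short-circuits: f only runs on a successful value.
        return f(self.value) if self.ok else self

def parse_int(s):
    try:
        return Maybe.just(int(s))
    except ValueError:
        return Maybe.nothing()

def reciprocal(n):
    return Maybe.nothing() if n == 0 else Maybe.just(1 / n)

# The pipeline reads linearly; no explicit error checks between steps.
result = parse_int("4").bind(reciprocal)     # Maybe holding 0.25
failed = parse_int("zero").bind(reciprocal)  # Nothing; reciprocal never runs
```

Each step is trivial on its own; whether the composed system counts as "simple" is exactly the question the parent raises.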

goombastic 59 minutes ago 0 replies      
I think this is the precise reason why every industry and function has a process framework. Working around processes/functions and their value maps while creating a solution is the best way to not just meet customer expectations but also ensure that your products play well with other products a buyer might have.

Processes and sub process maps like Procure 2 Pay, Order 2 Cash, etc, are there for a reason. They tend to make life simpler for buyers making a choice and also help ensure that your product doesn't have process blind spots that will kill it.

The big guys in the ERP space have perfected this approach and it's something a lot of business oriented startups don't seem to consider.

qwerta 5 hours ago 1 reply      
Unproven innovative technologies are not necessarily bad. They can give you an edge over the competition. The real problem is to restrain yourself when applying them, and to have a fallback plan.
jesstaa 4 hours ago 0 replies      
The sad thing about our industry is that the boring systems with the greatest support are also often the most complicated and hardest to deal with.

So the choice ends up being:

* Go with the system you can get some kind of possibly useless support for, but your people will have a hard time dealing with on a daily basis.


* Go with something your people can understand that might not have wide external support.

dsirijus 4 hours ago 0 replies      
"Boring is where the money is". ~ some HN dude.
benjvi 4 hours ago 1 reply      
This post highlights something that can be a problem with the contracting of workers. Namely, that Albert will be more in demand than Zip, despite having built an inferior system. The failure of the business, in real life, is probably not attributable to him - there are many other variables that one could point the finger at (low demand, location, infrastructure, sourcing prices, etc..). And, the manager will often not understand what truly constitutes a "best-practice", maintainable solution. So, by default, he probably ends up being paid more, and is seen to be more important and accomplished as well.

So, where is the incentive for the handyman to act like Albert? And how do you identify these people?

weixiyen 5 hours ago 0 replies      
I wouldn't say that boring solutions are always the best, unless they satisfy the conditions below.

Here's what is REALLY important:

A) How fast can you get your first product in front of customers?

B) How often can you measure and iterate on that and get the new version in front of customers?

You should pick the best solution that optimizes for A & B. Both are really important because they will help you discover the actual thing you need to be building.

dscrd 6 hours ago 3 replies      
Even though it's still hip, this is why Go works in capable hands. As a new programming language / platform, it's academically quite boring. In fact, that's the number one criticism of its detractors.
mathattack 1 hour ago 0 replies      
This is an argument for buy vs build. As others have stated, the question is whether complexity is worth it. My bias is we tend to underestimate the complexity of small additions, and overestimate the benefit of having control over a system. The implication is too much complexity in things we build ourselves. Sometimes the industry standard solutions aren't appropriate, but it all depends on what a company wants to focus on.

And I'm not sure of the reference for Zola's restaurant, but I like the Guthrie inspired complexity of Alice's restaurant.

porker 4 hours ago 0 replies      
I was planning to architect a new system using SOA (or the newly-hyped microservices), then realised the complexity it'd bring.

Each individual system is easy to debug, test etc. Debugging what's wrong when the output isn't what you expect - much harder.

Not sure what's the boring (vs. familiar) option here, but the old adage "All that glitters isn't gold" holds true...

inthewoods 4 hours ago 5 replies      
This post spoke to me because I'm in the middle of an interesting decision: for our public/marketing website, should we go with WordPress (something a lot of people know, etc.) or a static site generator (fill in your favorite: Harp, DocPad, etc.)? The argument for going with the static site is that we'll have a much faster site (it will be static) that will likely be easier to customize (we don't need a lot of what WordPress offers), but the potential downside is that most developers don't know the static tools, so if I hire someone new, I'm likely training them. Now, I don't think training will be that hard if you get someone with a decent background, but you get the idea.

What would you choose? Safe and stable WordPress with more customization effort, or the static site generator idea with a smaller installed base of developers?

robinwarren 5 hours ago 0 replies      
I get the rather bluntly hammered home moral to use safe reliable tech but that misses a lot of subtlety.

I think the moral of the story is to load test before you dump a bunch of customers onto your system. Regardless of the tech you use, you can easily fail in this regard. And secondly, not to value people for the effort they put in but for the results they achieve.

nsfyn55 2 hours ago 1 reply      
The cases presented in this article are contrived.

1) There is room for both boring and cutting-edge technology in any business. Albert didn't drop the ball by choosing cutting-edge tech; Albert exhibited poor risk management skills. Alice wouldn't be complaining if Albert had taken a controlled risk and installed a next-generation flash fryer that gave a clear competitive advantage over Zola in terms of personnel and order-to-delivery time.

2) Good ideas require both Albert and Zip. Zip keeps the lights on and the costs down for all the mundane BS required to run a business. Albert is the disrupter. He is the reason starting the business was a good idea. He is an iconoclast who looks at the state of the world and says "I can do this better".

The title of this article should be "Boring Systems Build Benign Businesses".

frankwiles 5 hours ago 0 replies      
Couldn't agree more. The pre-optimization of cool might be a good name for it.
maximgsaini 3 hours ago 0 replies      
Why add the complexity of a car when you can simply walk?! The car will break down, you will have to waste days taking it to the service station, you will have to get a driver's license, you will get tickets, you may kill someone and get in trouble, you can't drink if you will be driving. Why add so much complexity to your life?

The idea that complexity in itself is bad is flawed. Sometimes innovation does require complexity. Complexity for the sake of complexity is bad.

>"But [some new unproven system] is really cool! Even [some big company] uses it!"

A company I know uses a big, buggy oil pipeline leak detection software package. It is very complex and very buggy. Tech support has to be called in every few months. But they still use it. Why? Because it detects oil leaks much faster, potentially saving them millions if something bad happens. Should we stop innovating because we are scared of 'complexity'? I wouldn't suggest using a system because it is 'really cool' and a 'big company uses it'. But why do they use it, and why is it 'cool'? Can it make you more money? Those are the questions worth asking.

>"Innovate on your core product, not on your plumbing "

Every bit of complexity deployed to make more money is good. Can you tell and prove how it will make money?

Every bit of complexity added because it is 'cool' is flawed! If plumbing can make me more money, then hell yeah, it warrants some investment. Every situation is different.

borplk 3 hours ago 0 replies      
From the title I was expecting this to be about the domain of the businesses, perhaps suggesting solving boring, real problems for real people who pay real money instead of, as is fashionable today, building "businesses" for sharing your crap to another crap and liking and commenting and following this and that whilst being fed advertisements.
_random_ 4 hours ago 1 reply      
And then there is outsourcing to the cheapest bidder overseas...
krisgenre 4 hours ago 0 replies      
"There are many ways to achieve developer happiness, but making your core business products a playground for developers seeking novelty is the path to hell."

Excellent point. This also applies to programmers who'd like to write everything themselves so that they can learn more in the process. My current job involves maintaining an application that has everything written in-house: logging, HTML templating, URL mapping, validation, form bean binding, scheduling and what not! All of this is possible just using slf4j, Freemarker, Spring and a bunch of other lightweight libs. Some of the stuff is good, so it makes me think the only reason would have been to become more proficient in OOP and Java.

timc3 4 hours ago 0 replies      
Simply put: Choose your battles wisely, and the ground you do it on even more so.

Though one can gain serious competitive advantage by using something new to compete against established players or use something well tested in an innovative and new way.

flylib 2 hours ago 0 replies      
What was up with the GitHub and 37signals references? They both use Ruby on Rails, which could be considered a niche technology. 37signals even admits to running the most bleeding-edge version live in production before they even put out a beta to the public, so if anything, referencing them actually hurts the article.
mark_sz 3 hours ago 2 replies      
Boring stack: PHP+MySQL ?
menubar 3 hours ago 0 replies      
Bad biscuits make the baker broke, bro.
Yahoo spends $58 million to fire its chief operating officer washingtonpost.com
89 points by xmpir  4 hours ago   71 comments top 13
bedhead 1 hour ago 4 replies      
It's really sad how wildly distorted executive compensation has gotten. The best phrase I heard was "entrepreneurial reward for managerial duty", and I fear it's become all too common. My eyes popped out of my head recently when I saw that Coca-Cola (yes, that same drink company that's done just fine for over a hundred years and whose organic growth rate might be 1% if they're lucky) was trying to give management $13 BILLION over the next four years. It's insanity. And when it's not simple pay, it's severance packages that give Fuck-You money to people whose performance was provably dreadful. Leo Apotheker made $25 million on his way out the door from HP, after vaporizing over $6 billion by buying a fraudulent company and doing virtually no due diligence. It's madness, pure madness. Executive comp is a bubble, these people aren't worth anything near this much, but I have no idea when it will pop.
pachydermic 51 minutes ago 0 replies      
That video is painful to watch... I know there's clearly a bit of a language barrier, but this guy sounds like a complete dingus. Then again, he's the guy making a cool ~$60 mil a year so what does that suggest?

I wonder what a guy like that actually does on a day-to-day basis. I could see there being a huge amount of pressure and work to do, but maybe they just hand it off to their underlings secure in the thought that they have a fat severance package waiting for them if anything goes wrong.

How can you expect someone to give a damn when they have no skin in the game? So this guy did an apparently horrible job and made millions. How does that make sense? Wouldn't you only want him to make an obscene amount of money if he did a good job? It really is fascinating how massive companies like that work - I guess you can draw some similar conclusions as in politics.

zaidf 1 hour ago 1 reply      
I really feel for Jerry Yang. It seems like every Yahoo CEO and Board has been riding the coattails of his Alibaba investment while his name is left for the footnotes.
malanj 2 hours ago 2 replies      
That has to be one of Marissa's most public mistakes yet. She was the one who pushed very hard for Yahoo to hire him. It's interesting that a few articles I read mention that he's really smart but not good with people. I've read that Marissa has the same characteristics, I wonder if that gave her a blind spot on this one?
smackfu 1 hour ago 0 replies      
If you're the COO, and most of your compensation is in stock or bonuses related to the stock price, and the stock nearly triples during your reign (15.92 to 41.07)... you're going to get a big payout.

I also don't really buy that firing him cost this much, since much of it seems to have been a sunk cost. That stock was going to vest eventually whether he was fired or not, it just vested faster because he was fired.

omegant 4 hours ago 4 replies      
Honest question: could somebody please explain why a big corporation like Yahoo doesn't have some kind of cliff and progressive compensation? Is the article accurate?
gcb0 8 minutes ago 0 replies      
I always assumed that the hiring price included him bringing clients from Google, as that is usually what happens when you hire anyone in sales from your competitor...
sillysaurus3 4 hours ago 5 replies      
I've been wondering: how's Marissa Mayer doing as CEO? I haven't heard much about Yahoo recently except that they acquired some companies in order to get talent in the mobile space. It's been about 1.75 years since she became CEO. Is that enough time for a non-Steve-Jobs CEO to change the trajectory of a company?
benaston 1 hour ago 0 replies      
An excellent case study for the MacLeod Hierarchy. http://gapingvoid.com/2004/06/27/company-hierarchy/
RighteousFervor 54 minutes ago 1 reply      
Anyone else notice that the comments at washingtonpost.com are more insightful and succinct than here at HN?
6d0debc071 3 hours ago 0 replies      
Yahoo seems to have a certain difficulty in making the best use of the people it works with. Flickr springs to mind. If they're firing him just because he gives sucky presentations - which seems to be the only guess the article has - then that's on them.
lifeisstillgood 3 hours ago 1 reply      
I'd have done it for them for half that :-)
stormqloud 1 hour ago 0 replies      
This is a testament to how top management destroys shareholder value and an operating company for their own massive short-term gain.

"I'll hire you for $50 million, then you hire me for $60 million, think how much value we just brought to the company."

Plant Breeders Release First 'Open Source Seeds' npr.org
33 points by ptwobrussell  2 hours ago   13 comments top 5
spodek 2 hours ago 2 replies      
This idea makes sense at first blush, at least to this non-plant-breeder.

At first I wondered how much of a difference it could or would make since while in software anyone can code in their free time, how many people can splice a gene? But if they get universities to join the effort so that work at that university has to result in Free seeds, I could see it catching on and working.

As a planter, I'd certainly prefer to have seeds that minimized risks of legal hassle.

I would also be curious to see what would happen when the reverse of one of Monsanto's legal attacks happened -- if Free seeds made their way into Monsanto's stock, could their legal attack on farmers be used against them? Or de-fanged?

logfromblammo 1 hour ago 1 reply      
People have been sharing heirloom variety seeds with each other for a long time, along with their local sourdough cultures, kefir mother cultures, yeast strains, and other re-propagatable biological source materials.

So it is great that professional horticulturists recognize the value of that enough to contribute their work to the system. Home-hobbyist gardeners/bakers/zymurgists/etc. simply don't have access to the same techniques used for commercial production.

It would also be great if a professional could curate a biological distribution package for food polycultures. A lot of people are familiar with the "three sisters" polyculture of corn, beans, and squash, but there are presumably others that would work just as well. Additionally, we now know that the microbiota of the soil itself can be as important as the genomes in the seeds. What if you could make your potting soil resemble Iowa corn field topsoil by pouring a few mL of open source dirt juice into it?

viggity 2 hours ago 1 reply      
Having worked in R&D (molecular breeding dept) of <insert huge agribusiness> for 4 years, this article has highlighted that I do in fact suffer from Gell-Mann Amnesia.

There are a great many things that this article gets wrong or not quite right, and yet I'll probably read the next NPR story and think "oh, that is interesting". http://www.goodreads.com/quotes/65213-briefly-stated-the-gel...

kseistrup 1 hour ago 0 replies      
Here's a link to OSSI, the Open Source Seed Initiative: http://www.opensourceseedinitiative.org/
theotown 1 hour ago 0 replies      
Monsanto will cross-breed these immediately, right? :-D
Ubuntu 14.04 LTS (Trusty Tahr) Released ubuntu.com
16 points by id  33 minutes ago   8 comments top 4
jaryd 17 minutes ago 1 reply      
Confused--can anyone clarify if this is a stable release or a beta release?

Thanks in advance

Jupiterlyght 14 minutes ago 0 replies      
The beta looked nice, loving that option to put menus in the app window. The final product should be promising.
hsinxh 24 minutes ago 0 replies      
It's here: http://releases.ubuntu.com/14.04/ubuntu-14.04-desktop-amd64....

Update: They have removed the file now.

azurelogic 26 minutes ago 1 reply      
Still showing beta 2.
SEO Through The Years: A Retrospective wayfinder.co
8 points by donhoagie  29 minutes ago   discuss
Kendo UI Core open sourced github.com
70 points by stonys  5 hours ago   14 comments top 4
avenger123 2 hours ago 2 replies      
The previous GPLv3 was the complete package.

This change actually removes features that the GPLv3 had. In particular, the main one and the reason most companies would want to buy a commercial license now is the Grid component. I understand their complete package will have ASP.NET MVC bindings but those aren't necessary.

So, in a way, this is really bad news for open source projects as it effectively takes away a component that is at the heart of why Kendo UI may be used instead of jQuery UI.

They have in effect basically screwed over open source projects completely. For them the only change is that they lose functionality.

But Telerik is a for-profit company, and they've always struggled with the licensing for this as they didn't know how it would fit with their commercial goals. I don't think they've got it right even now.

Personally, I see no reason to use this even now. jQuery UI is not maintained by a commercial company and is just as good. I can be confident that the license for jQuery UI isn't going to be messed around with based on new corporate goals. The Kendo UI versus jQuery UI site does a good job of the comparison, but without the grid component, it's a hard sell.

EDIT: I do want to add that overall this is great news. I'm pointing out the nuances of this decision. For people already using this and don't need the other components or can find substitutes, it makes sense to continue to use it and not have to buy licenses. Also, I hope Telerik somehow addresses this and not ignore it.

angryasian 22 minutes ago 0 replies      
I'm sort of confused on pricing.


Are the pro widgets, like the editor and tree view, available at the $699 price point?

stonys 5 hours ago 2 replies      
Note that not all features are included in Kendo UI Core. The official Telerik press release can be found here: http://www.telerik.com/company/press-releases/2014/04/16/tel...
pingec 3 hours ago 2 replies      
I love their Kendo UI Web widgets, especially the Grid, quite powerful: http://demos.telerik.com/kendo-ui/web/grid/index.html
The Developer is Dead, Long Live the Developer paperplanes.de
41 points by roidrage  3 hours ago   21 comments top 12
lectrick 7 minutes ago 0 replies      
Programming these days seems to be more of an exercise in managing unexpected complexity between various components or pieces of code. Here are a few tips:

1) Unit test. 95% of your test suite should be unit tests. Objects under test should not require the entire codebase to be loaded in order to perform... ideally they depend on nothing, or just 1 or 2 things (which in turn hopefully depend on nothing, or just 1 or 2 things, ad nauseam).

2) Don't mutate passed-in arguments. In fact, mutate as little as possible.

3) Don't use objects with potentially unexpected, surprising or unknown behavior that makes it difficult to reason around the code (QUICK- what happens when you merge a ruby HashWithIndifferentAccess with a regular hash containing both similarly-named symbol and string keys?)

4) Use function objects, that have zero side effects, wherever it makes sense. (Or just use an entirely functional language.)

5) Separate components that talk to each other through I/O should use something like the circuit-breaker pattern http://martinfowler.com/bliki/CircuitBreaker.html

6) Microbenchmark things. Individually, a series of tasks might all "look" fast, but when run 10000 times in a row, might expose unreasonable resource utilization.
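The circuit-breaker pattern from tip 5 can be sketched in a few lines. This is a hedged illustration (class name, thresholds, and API are made up, not any particular library's): after enough consecutive failures the breaker "opens" and calls fail fast instead of hammering a broken downstream service, then after a cooldown it lets one trial call through ("half-open").

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker; see Fowler's CircuitBreaker bliki."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures  # consecutive failures before opening
        self.reset_after = reset_after    # seconds to stay open
        self.clock = clock                # injectable for testing
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Wrapping the I/O boundary like this keeps a flaky dependency from turning every request into a slow timeout.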

hibikir 1 hour ago 0 replies      
In my experience, the teams that run the best are full of generalizing specialists. So you have three people that are quite handy with the DB: One is the best of the lot, and is mostly doing DB work. The others are good at it, but they bring unique strengths in other areas. In the same way, you have a guy that is pretty strong at Unix administration, a language lawyer, some people that are big into UIs and such.

So while people will probably dedicate 80% of a week to a given kind of task, they can, and do, play multiple roles, depending on how much their specific expertise is needed that week.

For instance, I am the number one choice on UIs, but I also play support managing Postgres, because I did a whole lot of that in a past life, and we don't need a full time DBA. I am also leveling up in our scheduling system, all Akka actors.

At my previous job, the team had all the database experience it needed, but domain knowledge was weak, so I spent much of my time working with users, trying to figure out what they needed and why they were unhappy with the product. They hired me for my UI strength, but reality said I would be more useful doing customer-facing activities half the time, so I did.

Shivetya 20 minutes ago 1 reply      
I have never been in a shop where, as a developer, I was isolated from quality assurance or production. Fixing production issues was part of the job, and ensuring the fixes and original code got through QA was there too.

However I have also seen this branded as "soft skills". Something that seems to be yet another way to excuse many people for their lack of skills and reward them for doing other stuff. I have seen it used to keep really bad developers, if not reward them. It also tends to be a favorite term of those who cannot manage their team properly.

venomsnake 2 hours ago 2 replies      
The developers should know how to do ops. And vice versa.

We are moving into weird heterogeneous-bug territory. A lot of the stuff that wasted my time last year was caused by interactions between technology stacks, not by pure logical errors, or by errors localized in even one package.

I had a problem with premature socket close on the response, with just the wrong setup across Nginx, HAProxy, Slim and PHP 5.4... let's see pure ops or pure dev figure that out.

agentultra 2 hours ago 1 reply      
I argue that the shared responsibility mentality is distracting and diminishes the power of an individual. It's hard enough being a good programmer let alone one that also knows how to configure networks, firewalls, and operating systems; one who knows how to manage security policies, LDAP organization charts and ACLs, etc. I still believe that the jack-of-all-trades is a useful individual but I don't believe an entire organization should be formed around their ideals.

As in all things in life -- the truth is somewhere in the middle. If you build significantly sophisticated systems you will need specialists at some point. I don't recommend wasting their talents on getting reamed out for forgetting to configure the firewall properly.

michaelochurch 1 hour ago 0 replies      
Elephants in the room: unfunded mandates, disparate equity, control issues, and deadlines.

Ideally, people should support their own work. If I build something and have complete ownership of it, I'll make sure it runs. If I'm the one who's getting the money and career progress, and I'm picking the deadlines, I'll be the "3:00 am guy". It seems fair that it should be that way.

On the other hand, if I'm asked to build someone else's idea, to their deadlines, making technical compromises that I wouldn't choose, then I'm not going to be the "3:00 am guy" for shots someone else called. Fuck that shit, life's too short. If you want that from me, either pay a consulting fee ($250+ per hour) or give me serious equity (like, founder levels). Otherwise, That's Not My Job.

Salary jobs are for the stability of pay and work (i.e. not being the 3:00am Guy) and for career advancement (resume, respected titles, networking). That's what I'm paying for when I accept a salary 1/10 of what I'm worth to the business. If I'm expected to take on extra duties that don't advance my career, and not getting that stability, then I'm not getting what I paid for.

I'll gladly take the negatives of ownership if I get to partake in the positives (autonomy, self-direction, participation in the reward) but expecting me to take only the first is just unreasonable.

auganov 1 hour ago 0 replies      
I'm a bit skeptical of how different the responsibilities of a 'DevOps' engineer are from those of a traditional 'Developer' in terms of actual complexity. The whole discussion seems to reduce technological change to just a tiny part of this 'transition' to DevOps. How can you even develop a web SaaS application without having a basic knowledge of sysadmin/networking/IT/etc.? In my opinion, technology changed and development methodologies followed. I wish the author proposed an alternative to the status quo [that would resemble the past a bit more]. Can anyone? Of course adopting PaaS and especially BaaS solutions can change it, but again, that would be a technological change rather than just a methodology thing.
prawks 3 hours ago 3 replies      
> Putting developers in charge of not just building an app, but also running it in production, benefits everyone in the company, and it benefits the developer too.

How do companies which do this keep the support workload of developers low enough that they still have time for development? It's a great idea, because as the linked interview with Werner Vogels proposes, it creates a developer that has more frequent contact with customers. I suppose eventually you have to shift resources around to start other projects, but then who supports what they're leaving? Even in large teams, eventually expertise will dwindle until you're adding new people to support an existing system.

devonkim 2 hours ago 0 replies      
This reminds me of going back to a 90s job-title trend: the webmaster. While that title applied mostly to small Geocities sites, there seems to be an awful lot of overlap, and even business alignment toward minimizing operations staff/budget, such that today's so-called devops engineers would really have been the webmasters then; in SaaS shops we're just a team of webmasters with deeper specialization/interest in specific areas.

Regardless, this pretty much only applies for SaaS shops I'd say. There's still software delivered the old way that's viable.

Pinwheeler 2 hours ago 0 replies      
I read both articles and felt they both made good points.

It is in a company's (and the industry's) best interest to pare down well-paid developers and open up more entry-level positions; however, it's in the consumer's best interest for developers to maintain high accountability and "closeness" to the product.

I don't see these approaches as mutually exclusive. I can see how full stack development and silos can both exist in the same environment, like if the dentist in the counterpoint article's analogy takes over the hygienist's role when the hygienist is out on vacation.

roncohen 3 hours ago 0 replies      
Had been looking for the source of that Werner quote for a while. Thanks Mathias!
tianyi-aisin 2 hours ago 0 replies      
> What really matters is the willingness to change, to learn a new stack when necessary.

You've hit the nail on the head there!

Go Performance Tales jmoiron.net
162 points by signa11  10 hours ago   19 comments top 10
jws 1 hour ago 2 replies      
I found the bit about Go using native AES instructions to accelerate map key hashing most interesting. This accounted for a >50% throughput increase when he found AWS c1.xlarge instances that had AES-NI, compared to those that didn't.

This is the kind of detail most developers would not be aware of and, to be fair, even now, knowing it exists, the only reference I can google up at golang.org is the C source code of runtime/alg.c, where you will see

    if(use_aeshash) {
        runtime·aeshash(h, s, a);
        return;
    }
no hint that it might reduce your hosting costs by 33% or account for some huge variation in performance between one test machine and the next, or even individual runs if you are spinning up cloud instances to do your testing.

Does your CPU have the AES, SSE3 and SSE4.1 cpu capability bits all turned on? If so, you will hash mightily! Do you even know where to look to check?

jsnell 7 hours ago 0 replies      
Just a note on the zlib optimization patches. The blog post is linking to an old version, there's a newer one from a month ago. Also, the patch still appears to be a bit buggy (e.g. corrupt output being generated by the new deflate strategy), so don't plan on actually deploying it.
DrJokepu 4 hours ago 3 replies      
I find it interesting how insightful, technical articles like this receive hardly any comments while the usual "softer" articles that tend to dominate the Hacker News frontpage these days receive dozens if not hundreds of comments. I wonder what this says about us, HN readers.
timtadh 1 hour ago 0 replies      
I have also played around trying to achieve a high performance trie in Go.[1] My approach is to use the Ternary Search Trie structure. Unfortunately, I have not yet approached either the performance of the native hash table or a hash table in Go (although Sedgewick tells us you should be able to beat a hash table). My TST does not yet have the Patricia Trie optimization (of collapsing long internal runs). Perhaps with that addition it will get closer to hash table performance.

Also everything he said about channels also holds true in my experience. I haven't tried writing a C library for Go yet but his discovery is pretty interesting for when I dive into that.

[1] : https://github.com/timtadh/data-structures/blob/master/trie/...

SixSigma 4 hours ago 0 replies      
If you want more details on Go profiling, this Go Lang blog post is a great place to look


ihsw 4 hours ago 1 reply      
> Using a map[int]*Metric instead of a map[string]struct{} would give us that integer key we knew would be faster while keeping access to the strings we needed for the indexes. Indeed, it was much faster: the overall throughput doubled.

I'm a little sceptical of this -- type assertions are fast, but it's an extra step on top of initializing a struct. It would have been nice to see tests done comparing map[string]struct{} to map[int]struct{} and comparing map[string]*Metric to map[int]*Metric.

yukichan 1 hour ago 0 replies      
The biggest performance issues I think some people run into with Go involve reflection, which seems to be slow. Something that does a lot of JSON parsing, for example, could be much slower in Go than in Java, Python or JavaScript. I don't have any data, but I've known people who work with Go to complain about it. I wonder if a JIT or AOT compiler might help.
awda 3 hours ago 1 reply      
> All of our existing indexes were based on string data which had associated integer IDs in our backend

You already have a perfect hash function :).

logicchains 4 hours ago 1 reply      
This is probably a stupid question, but I wonder if the author could have used slices instead of maps with integer keys. It would have used more memory, but it would probably also have been significantly faster. A significant proportion of the performance issues I see raised on the Go mailing list seem to involve maps.
sagichmal 2 hours ago 0 replies      
Good article. It closely reflects the experience we've had at SoundCloud in our more heavily-stressed services.

It crushes me to hear that they loved my software, and I'm taking it away sideprojectprofit.com
13 points by cvshane  1 hour ago   4 comments top 4
michh 58 minutes ago 0 replies      
I'm guessing there might be someone willing to take over the project for free (if he's abandoning it anyway) and invest the time in actually getting the word out and improving it? Who knows.

Ask for a small percentage of profits or something. The author can't lose more than he already is by completely abandoning it and letting the domain expire. If only because when someone else is keeping it running, he could still use it himself.

revnja 1 minute ago 0 replies      
This looks like it would serve the needs of my small team pretty well. We really just need a list of everyone on our team and what tasks they have assigned to them, not really any due date management needed. Please consider open sourcing it to run in our Intranet or something similar! We would use TaskShot!
nfoz 1 hour ago 0 replies      
Why don't you just leave it up and running? Open-source it, let people run it on their own servers?
mpnordland 30 minutes ago 0 replies      
Darn, this would have fit my needs exactly. Well, I'm just one person, but the way it's described it would fit my workflow pretty well.

Quantum Entanglement Drives the Arrow of Time simonsfoundation.org
197 points by jonbaer  11 hours ago   94 comments top 27
cromwellian 10 hours ago 4 replies      
This Google Talk https://www.youtube.com/watch?v=dEaecUuEqfc uses entanglement and quantum information theory in a clear and understandable way to explain 'spooky' quantum phenomena, like the quantum eraser, decoherence, the Aspect experiment, and the measurement problem. Even if you don't know any QM, just basic algebra and calculus, it's really approachable.

I used to be a fan of the Many Worlds interpretation, but after seeing this, I'm now a big fan of the Quantum Information Theory explanation. Starting about 43 minutes in, he goes into the QM Information Theory explanation, but I'd recommend watching the entire prezo.

Link to my original post on the subject: https://plus.google.com/110412141990454266397/posts/HC49S9ip...

tim333 9 hours ago 3 replies      
The reasoning sounds a bit iffy as in:

"Finally, we can understand why a cup of coffee equilibrates in a room," said Tony Short, a quantum physicist at Bristol. "Entanglement builds up between the state of the coffee cup and the state of the room."

I think you can understand coffee cooling quite well without any quantum stuff - the atoms in the coffee are moving faster than those in the room. When one collides with an air molecule in the room, there will be a tendency for the air molecule to speed up and the coffee atom to slow down.

Actual quantum entanglement is a strange and interesting thing. It's a shame people tag the term onto things where it's not really relevant, mostly to try to sound impressive.

mbq 5 hours ago 4 replies      
This is nonsense; entropy and the arrow of time are essentially many-body effects and require no quantum effects to occur. The simplest way to see it is to make a small simulation of, say, 1000 gas particles with only classical bouncing, starting in one side of a box partitioned in half by a barrier -- obviously with a time-reversible numerical method. After the removal of the barrier the gas will evenly spread over the box without any entanglement.
dalek_cannes 9 hours ago 1 reply      
Do we need entanglement to explain the Arrow of Time? Even though in classical mechanics, the past and the future are both equally observable, we remember the past and not the future because the future does not contain certain information yet -- the information to be introduced into the universe in the form of quantum fluctuations. One could even argue that all information in the universe was created at some point in time due to one quantum event or other.

I may have misunderstood though (I'm not a physicist). Entanglement does however, explain why systems tend to equilibrium rather than any other type of state as it evolves forward in time.

On a related note, I found this quote interesting. It reminds me of how HN comments about quantum information theory have a tendency to get downvoted:

> The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was "no physics in this paper." Quantum information theory "was profoundly unpopular" at the time, Lloyd said, and questions about time's arrow "were for crackpots and Nobel laureates who have gone soft in the head," he remembers one physicist telling him.

clavalle 28 minutes ago 2 replies      
Does it seem to anyone else that quantum entanglement and decoherence are the universe's way of doing the least amount of computation possible? Like the universe is lazily loaded?
sheerun 4 hours ago 2 replies      
I love the following article: http://www.flownet.com/ron/QM.pdf

It basically shows that observation (measurement) and entanglement are the same things.

Think about it: particles are not magically going out of superposition as we observe (measure) them. We (our atoms) become entangled with those particles; we become part of the superposition. It's just propagation of the entangled state.

Why don't we perceive ourselves as being in superposition? "It turns out that this result generalizes to any number of mutually entangled particles. If we ignore any one particle, the entropy diagram of the remaining particles looks like a system of N-1 particles in a classically correlated state with a non-zero entropy." That means each atom of our bodies perceives the other atoms entangled with it as if they were not in any superposition (though as a whole, the system is still in superposition). We (atoms) are constantly entangled and in superposition with our environment, but we perceive it as a classical state.

In what state does each atom "see" every other? According to probability. That's why in the double-slit experiment we see only one of the most probable outcomes, not a random one.

Time could be the rate of entanglement propagation. Entanglement propagates at the speed of light (the speed of particles), so we seem to live in the same timeline. But if something moves away from us at the speed of light, time for that object goes slower, but only relative to us.

Until two particles interact in any way, they live in totally different timelines. After they "observe" each other (entangle with each other), their time becomes entangled too. That's why after we see a cup being dropped, it becomes part of our reality, and the cup becomes broken in our time.

We live in spacetime. As mentioned in the article: "Spooky action at a distance ought to be no more (and no less) mysterious than the spooky action across time which makes the universe consistent with itself from one moment to the next."

Why the arrow of time? The article says: "Under QIT, a measurement is just the propagation of a mutually entangled state to a large number of particles. To reverse this process we would have to 'disentangle' these quantum states. In principle this is possible. In practice it is not." I think differently though.

Those are my thoughts. Please don't judge :)

elzr 10 hours ago 2 replies      
This was surprisingly beautiful. As a geek in programming/computers/information/mathematics, but only a physics admirer from afar, it is very suggestive, even natural, to explain the deepest physical reality in terms of information:

"It was as though particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature."

"What's really going on is things are becoming more correlated with each other," Lloyd recalls realizing. "The arrow of time is an arrow of increasing correlations."

"The present can be defined by the process of becoming correlated with our surroundings."

jostylr 6 hours ago 0 replies      
From a Bohmian perspective, quantum mechanics consists of a wave function psi(q) that guides all the particles Q. The wave function is distinct from the particles. The particles are in equilibrium, relative to the wave function. It is the wave function that is not in equilibrium in its realm of states.

As it turns out, the usual psi^2 probability distribution of the particles is a reflection that the particles are in quantum equilibrium, that is, psi^2 is the natural measure in quantum mechanics for what equilibrium ought to be since it is the only measure preserved by the dynamics. And so if the particles start that way, they stay that way. And they are likely to start that way using psi^2 as the distribution.

There is actually a lot of subtlety involved in accepting that argument; I recommend http://plato.stanford.edu/entries/qm-bohm/#qr and an actual paper: http://www.ge.infn.it/~zanghi/BMQE.pdf

But what it implies is that the wave function is responsible for the arrow of time. It is a special state that evolves into a less special state. Presumably this is what their research is pointing at.

I would also comment that their description is exactly the classical explanation transferred to the quantum world (which it needs to be since our world is quantum). That is, we start in a special state and it evolves into a less special state because the less special states are more numerous and so more likely to be, all things being equal. And by more likely, we are talking 10^100 kind of more likely.

They still have the problem that the fundamental evolution of the wave function is time reversible. So if that bothered someone (it shouldn't), then their argument does not actually resolve that problem.

So I take from their work that what they are doing is getting the classical thermodynamic explanation (which is about volumes in phase space, not human ignorance) and translating it to the quantum theory. Neither wrong nor revolutionary.

millstone 8 hours ago 1 reply      
> After some time, most of the particles in the coffee are correlated with air particles; the coffee has reached thermal equilibrium.

No doubt this is a somewhat oversimplified explanation, but it still makes no sense.

Say I have hot coffee and lukewarm coffee. The lukewarm coffee will equilibrate faster. Does it interact with the air faster? What if I bring in coffee that's the same temperature as the air, so that it's instantly at equilibrium. Does it interact with the air instantly?

dominotw 1 hour ago 0 replies      
I love this[1] 'arrow of time' documentary if you are looking for something fun to watch. [1] https://www.youtube.com/watch?v=4BjGWLJNPcA
neolefty 6 hours ago 0 replies      
From the article

  One aspect of time's arrow remains unsolved. "There is
  nothing in these works to say why you started at the
  gate," Popescu said, referring to the park analogy. In
  other words, they don't explain why the initial state of
  the universe was far from equilibrium. He said this is a
  question about the nature of the Big Bang.
Could it be that expansion, which proceeded much faster than light, therefore didn't allow entanglement to take place, delaying the heat death of the universe until everything is fully entangled?

If expansion had been slower, would entropy maybe have kept up with it, leaving us as just a single black hole instead of a dispersed, interesting, unentangled, things-are-still-happening universe 13 billion years later?

throwaway7548 9 hours ago 2 replies      
I have a question. I just went to a source of physical (quantum) randomness http://www.randomnumbers.info/ and I'm giving you a random number between 0 and 10,000 which I've just generated there. Here it goes: 6296.

Ok. Now that light cone had finally reached you. And you (neurons in your brain to be precise) are thoughtfully entangled with that random event (outcome), now in your past.

Now imagine the following. A few days pass. And you forget that number. A few years pass. The connections between the neurons which were storing this information are now gone. The molecules and atoms which were part of these neurons are gone from your body. There are no entanglements any more which link you to that event. Is that event in your future now? Again?

TeMPOraL 8 hours ago 1 reply      
Is this really new? IANAP, but I clearly remember being taught about the Arrow of Time as a probabilistic/thermodynamic phenomenon even in high school, and I also read similar explanations that involved causality and probability theory without referring to quantum entanglement. Is the "quantum" bit even needed there for anything?
denom 1 hour ago 0 replies      
In the article the author describes the notion of a "pure state" which is something that has independently evolving probability. Individual 'units' lose their pure state and become part of an entangled ensemble--move to equilibrium.

How is the evolution of biological organisms and technological systems explained in this sense? Played backwards, evolution would fit this and traditional notions of thermodynamic entropy. Is evolution a kind of de-entangling?

thibauts 1 hour ago 0 replies      
So, if I get it right, states become more and more coupled, thus entropy tends to decrease in an open system? I'm confused.
spikels 10 hours ago 0 replies      
Quantum mechanics is where physics became more like mathematics: common sense no longer provides much guidance. It is really cool that it provides the missing explanation for one of the most common sense ideas in classical physics: the arrow of time.
yati 9 hours ago 2 replies      
I've always wanted to study quantum mechanics because of this very "entanglement". Can people please post recommendations on good resources/books on the topic for a person like me with no solid experience with physics (except college-level courses)?
Houshalter 4 hours ago 1 reply      
Just a thought that I've been thinking about. Time has a direction because of causation. State1 causes state2 which causes state3 and so on. You get weird paradoxes if you allow causation to work in both directions. The universe would also have to magically align everything perfectly so that everything is consistent.

Another observation is that even with reversible laws of physics that can work in both directions, if you have a single starting state, all other states will causally propagate from it. In a single dimension of time/causation.

analog31 3 hours ago 0 replies      
Should I be looking for Planck's constant in the equations of thermodynamics?
spcoll 10 hours ago 1 reply      
The question of whether time is in fact directional is far from being closed, at least for quantum physicists. In fact, one of the physicists cited in the article is known for proposing a time-symmetric formulation of Quantum Mechanics [1].

[1] http://www.phy.bris.ac.uk/people/Popescu_S/papers/sandu_othe...

officialjunk 8 hours ago 0 replies      
i recall learning that time "flows" both ways at the quantum scale, but i admit it has been a while since i've attended any lectures. have there been any new discoveries to say otherwise? i think i've read about research on both time reversal violations and time-invariance at the quantum scale.

also, what are people's thoughts on time being an emergent property at the macro scale, and that down at the quantum level everything is described by time-independent equations, like the Wheeler-DeWitt equation? http://en.wikipedia.org/wiki/Wheeler%E2%80%93DeWitt_equation

one-more-minute 5 hours ago 0 replies      
This is an interesting step, but doesn't actually explain why time is asymmetrical. Ok, so things equilibrate as time moves forwards because they entangle as time moves forwards. But this just shifts the question: why is entanglement asymmetrical in time, when the underlying laws are not?

You still have the same problem: if you reverse time, the states become untangled and the coffee heats up.

It's nice to be able to model this from a quantum perspective, but make no mistake: no philosophical issues have been resolved here, and we don't "finally" understand anything we didn't before.

EGreg 9 hours ago 0 replies      
Wow, just today I read this:


and I thought it was all explained quite simply... and now this?

EGreg 9 hours ago 1 reply      
I thought it was the second law of thermodynamics that already explained the arrow of time? Well, that and friction?
softatlas 9 hours ago 1 reply      

    The rate of information increases.
Hence why

    Information wants to be free.
Parasitic on

    Only information explains its own existence.
Which all, I think, intuitively follows from Spinozist/Cartesian "Conatus" principle. That is to say:

    The order and connection of ideas is the same as the order and connection of things.
Some of us rave about this or that: "well, how many folk use X today" or "qualify as X" or "subscribe to X". But these expressions are all within the scope of multiply converging nexuses of increasing correlative potentia. The coffee cup is a simple example so like Wittgenstein's point: "if a lion could speak, we could not understand him". The lion, like the cup, has restricted correlative powers: these laws apply, these others do not.

The laws of information are laws about the dimensions of proportionality, which give the arrow of time an aspect of curvature (needing to exhaust a universe for exponentially narrowing arrows, so the onion-skinning of properties of a thing "come what may" at "frozen" temporal localities; what happens when we "bend" time at certain family-resemblance (physical) properties?).

suprgeek 8 hours ago 0 replies      
At or very near the Big Bang, the Universe was in a state of minimum Entropy i.e. minimum entanglement i.e. maximum order (in some sense).

Post Big Bang, the cosmological arrow of time points in the direction of increasing disorder, i.e. increasing entanglement, i.e. decreasing order.

In a smaller closed system, Before is when the system is more pure, less entangled, more ordered; After is when it has become less ordered, more entangled.

Obvious really...

salimmadjd 9 hours ago 2 replies      
Seeing this article is rather bittersweet. I came to a similar conclusion in my college years but I never pursued it further.

Taking Quantum Physics in college was a life changing experience and it reshaped how I viewed the world. I was always obsessed by time and one afternoon it became clear.

I explained my variation not as a cup of coffee but a handful of dice. Essentially every tick of time is rolling these dice. And the variation of dice from one combination to the next is the arrow of time.

Like one of the authors in this article, I got the most resistance from physics majors. For the most part they had a dogmatic view of anything they had not studied yet. If it wasn't in their books, then it didn't exist.

I also came to the conclusion time travel as depicted in the movies will never happen. It can happen randomly in a smaller body but for anything large the arrow of time is almost impossible to reverse.

Lens Blur in the new Google Camera app googleresearch.blogspot.com
550 points by cleverjake  21 hours ago   229 comments top 54
jawns 21 hours ago 5 replies      
Regarding the technology (achieving shallow depth of field through an algorithm), not Google's specific implementation ...

Up until now, a decently shallow depth of field was pretty much only achievable in DSLR cameras (and compacts with sufficiently large sensor sizes, which typically cost as much as a DSLR). You can simulate it in Photoshop, but generally it takes a lot of work and the results aren't great. The "shallow depth of field" effect was one of the primary reasons why I bought a DSLR. (Yeah, yeah, yeah, quality of the lens and sensor are important too.) Being able to achieve a passable blur effect, even if it's imperfect, on a cellphone camera is really pretty awesome, considering the convenience factor. And if you wanted to be able to change the focus after you take the picture, you had to get a Lytro light field camera -- again, as expensive as a DSLR, but with a more limited feature set.

Regarding Google's specific implementation ...

I've got a Samsung Galaxy S4 Zoom, which hasn't yet gotten the Android 4.4 update, so I can't use the app itself to evaluate the Lens Blur feature, but based on the examples in the blog post, it's pretty good. It's clearly not indistinguishable from optical shallow depth of field, but it's not so bad that it's glaring. That you can adjust the focus after you shoot is icing on the cake, but tremendously delicious icing. The S4 Zoom is a really terrific point-and-shoot that happens to have a phone, so I'm excited to try it out. Even if I can use it in just 50% of the cases where I now lean on my DLSR, it'll save me from having to lug a bulky camera around AND be easier to share over wifi/data.

grecy 21 hours ago 9 replies      
We had an interesting discussion about this a few nights ago at a Photojournalism talk.

In that field, digital edits are seriously banned, to the point multiple very well known photo journalists have been fired for one little use of the clone tool [1] and other minor edits.

It's interesting to think I can throw an f/1.8 lens on my DSLR and take a very shallow depth of field photo, which is OK, even though it's not very representative of what my eyes saw. If I take the photo at f/18 then use an app like the one linked, producing extremely similar results, that's banned. Fascinating what's allowed and what's not.

What I find even more interesting is the allowance of changing color photos to B/W, or of almost anything that "came straight off the camera", no matter how far it strays from what your eyes saw.

[1] http://www.toledoblade.com/frontpage/2007/04/15/A-basic-rule...

DangerousPie 20 hours ago 4 replies      
Isn't this just a copy of Nokia's Refocus?


edit - better link: http://www.engadget.com/2014/03/14/nokia-refocus-camera-app-...

salimmadjd 20 hours ago 3 replies      
Is the app taking more than one photo? It wasn't clear in the blog post. AFAIU, to have any depth perception you need to take more than one photo: calculate the pupil distance (the distance the phone moved), then match image features between the two or more images, and calculate the amount of movement between the matching features to derive the depth.

As described you then map the depth into an alpha transparency and then apply the blurred image with various blur strength over the original image.

Since you're able to apply the blur after the image, it would mean the google camera always takes more than one photo.

Also a Cool feature would be to animate the transition from no blur to DOF blur as a short clip or use the depth perception to apply different effect than just blur, like selective coloring, or other filters.

dperfect 20 hours ago 5 replies      
I believe the algorithm could be improved by applying the blur to certain areas/depths of the image without including pixels from very distant depths, and instead blurring/feathering edges with an alpha channel over those distant (large depth separation) pixels.

For example, if you look at the left example photo by Rachel Been[1], the hair is blurred together with the distant tree details. If instead the algorithm detected the large depth separation there and applied the foreground blur edge against an alpha mask, I believe the results would look a lot more natural.

[1] http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAAR...

nostromo 20 hours ago 6 replies      
I sure wish you could buy a DSLR that just plugs into your iPhone. I don't want any of that terrible DSLR software -- just the hardware.

I think many devices should become BYOD (bring your own device) soon, including big things like cars.

edit: I don't just want my pictures to be saved on my phone. I'd like the phone to have full control of the camera's features -- so I can use apps (like timelapse, hdr, etc.) directly within the camera.

themgt 21 hours ago 5 replies      
Is looking at the examples giving anyone else a headache? It's like the software blur falls into some kind of uncanny valley for reality.
kbrower 20 hours ago 1 reply      
I did a quick comparison of a full-frame SLR vs. a Moto X with this lens blur effect. I tried to match the blur amount, but made no other adjustments. Works really well compared to everything else I have seen! http://onionpants.s3.amazonaws.com/IMG_0455.jpg
fidotron 20 hours ago 0 replies      
Doesn't look totally convincing, but it's good for a first version.

The real problem with things like this is the effect became cool by virtue of the fact it needed dedicated equipment. Take that away and the desire people will have to apply the effect will be greatly diminished.

Spittie 21 hours ago 2 replies      
I find it funny that this was one of the "exclusive features" of the HTC One M8, thanks to the double camera, and days after its release Google is giving the same ability to every Android phone.

I'm sure the HTC implementation works better, but this is still impressive.

nileshtrivedi 18 hours ago 3 replies      
With these algorithms, will it become feasible to make a driverless car that doesn't need a LIDAR and can run with just a few cameras?

Currently, the cost of LIDARs makes it prohibitive to build (or even experiment with) a DIY self-driving car.

angusb 6 hours ago 0 replies      
A couple of other really cool depth-map implementations:

1) The Seene app (iOS app store, free), which creates a depth map and a pseudo-3d model of an environment from a "sweep" of images similar to the image acquisition in the article

2) Google Maps Photo Tours feature (available in areas where lots of touristy photos are taken). This does basically the same as the above but using crowdsourced images from the public.

IMO the latter is the most impressive depth-mapping feat I've seen: the source images are amateur photography from the general public, so they are randomly oriented (and without any gyroscope orientation data!), and uncalibrated for things like exposure, white balance, etc. Seems pretty amazing that Google have managed to make depth maps from that image set.

sytelus 10 hours ago 2 replies      
Wow.. this is missing the entire point of why lens blur occurs. Lens blur in normal photographs is the price you pay for focusing sharply on a subject. The reason photos with blur look "cool" is not the blur itself but that the subject is so sharply focused that its details are an order of magnitude better. If you take a random photo, calculate a depth map somehow, and blur out everything but the subject, then you are taking away information from the photo without adding information to the subject. The photos would look "odd" to trained eyes at best. For casual photographs it may look slightly cool on small screens like phones because of the relatively increased perceived focus on the subject, but it's fooling the casual viewer's eye. If they want to really do it (i.e. add more detail to the subject) then they should use multiple frames to increase the resolution of the photograph. There is a lot of research being done on that. Subtracting details from the background without adding details to the subject is like doing an Instagram. It may be cool to teens, but professional photographers know it's bad taste.
scep12 21 hours ago 2 replies      
Impressive feat. Took a few snaps on my Nexus 4 and it seems to work really well given a decent scene.
anigbrowl 16 hours ago 0 replies      
It's interesting that the DoF is calculated in the app. I wonder if this uses some known coefficients of smartphone cameras to save computation, but in any case I hope this depth mapping becomes available in plugin form for Photoshop and other tools.

As an indie filmmaker, it would save a lot of hassle to be able to shoot at infinity focus all the time and apply bokeh afterwards; of course an algorithmic version would likely never get close to what you can achieve with quality optics, but in many situations where image quality is 'good enough' for artistic purposes (e.g. shooting with a video-capable DSLR), faster is better.

tdicola 11 hours ago 0 replies      
Neat effect--I'm definitely interested in trying this app. Would be cool to see them go further and try to turn highlights in the out-of-focus areas into nice octagons or other shapes caused by the aperture blades in a real camera.
jnevelson 21 hours ago 1 reply      
So Google basically took what Lytro has been using hardware to achieve, and did it entirely in software. Pretty impressive.
marko1985 4 hours ago 0 replies      
Happy about this "invention", but I would wait for smartphones to have laser sensors for depth measurement, so these calculations don't require a sequence of pictures; the main subject could move quickly and deform the final picture or the blur effect. But for static photography or selfies it looks amazing.
frenchman_in_ny 21 hours ago 2 replies      
Does this pretty much blow Lytro out of the water, and mean that you no longer need dedicated hardware to do this?
spot 13 hours ago 0 replies      
I just noticed I have the update and I tried it out. Wow, first try. Amazing: https://plus.google.com/+ScottDraves/posts/W4ozBLTBmKy
jestinjoy1 13 hours ago 1 reply      
This is what I got with the Moto G Google Camera app: http://i.imgur.com/a6AxO4e.jpg
mauricesvay 18 hours ago 0 replies      
The interesting part is not that it can blur a part of the image. The interesting part is that it can generate a depth map automatically from a series of images taken from different points of view, using techniques used in photogrammetry.
gamesurgeon 21 hours ago 2 replies      
One of the greatest features is the ability to change your focus point AFTER you shoot. This is huge.
kingnight 20 hours ago 1 reply      
I'd like to see an example of an evening/night shot using this. I can't imagine the results are anything like the examples here, but would love to be surprised.

Are there more samples somewhere?

goatslacker 19 hours ago 0 replies      
On iOS you can customize your DoF with an app called Big Lens.

Normally apps like Instagram and Fotor let you pick one point in the picture or a vertical/horizontal segment and apply focus there while blurring the background. Big Lens is more advanced since it lets you draw with your finger what you'd like to be in focus.

They also include various apertures you can set (as low as f/1.8) as well as some filters -- although I personally find the filters overdone, others might find them tasteful.

bckrasnow 19 hours ago 1 reply      
Well, the Lytro guys are screwed now. They're selling a $400 camera with this feature as the main selling point.
Lutin 16 hours ago 0 replies      
This app is now on the Play Store and works with most phones and tablets running Android 4.4 KitKat. Unfortunately it seems to crash on my S3 running CM 11, but your experience may vary.


Splendor 16 hours ago 0 replies      
Isn't the real story here that Google is continuing to break off core pieces of AOSP and offer them directly via the Play Store?
zmmmmm 15 hours ago 0 replies      
If nothing else, these improvements make HTC's gimmick of adding the extra lens while giving up OIS seem all the more silly.
defdac 20 hours ago 0 replies      
Is this related to the point cloud generation feature modern compositing programs use, like Nuke? Example/tutorial video: http://vimeo.com/61463556 skip to 10:27 for magic
jheriko 13 hours ago 0 replies      
This sounds clever but also massively complex for what it does. I don't have anything finished but I can think of a few approaches to this without needing to reconstruct 3d things with clever algorithms... still very neat visually if technically underwhelming
insickness 20 hours ago 1 reply      
> First, we pick out visual features in the scene and track them over time, across the series of images.

Does this mean it needs to take multiple shots for this to work?

thomasfl 5 hours ago 0 replies      
I wish Google Camera would get ported to iOS. The best alternative for iOS seems to be the "Big Lens" app, where you have to manually create a mask to specify the focused area.
mcescalante 20 hours ago 3 replies      
I may be wrong because I don't know much about image based algorithms, but this seems to be a pretty successful new approach to achieving this effect. Are there any other existing "lens blur" or depth of field tricks that phone makers or apps are using?

I'd love to see their code open sourced.

techaddict009 20 hours ago 0 replies      
Just installed it. Frankly speaking I loved the new app!
ohwp 9 hours ago 0 replies      
Nice! Since they got a depth map, 3D-scanning can be a next step.
anoncow 18 hours ago 1 reply      
How is Nokia Refocus similar to or different from this? It allows refocusing part of the image, which blurs out the rest. (Not a pro.) https://refocus.nokia.com/
guardian5x 19 hours ago 1 reply      
I guess that is exactly the same as Nokia's Refocus, which has been on the Lumia phones for quite some time: https://refocus.nokia.com/
coin 18 hours ago 0 replies      
Shallow depth of field is so overused these days. I much prefer having the entire frame in focus, letting me decide what to focus on. I understand the photographer is trying to emphasize certain parts of the photo, but in the end it feels too limiting. It's analogous to mobile "optimized" websites - just give me all the content and I'll choose what I want to look at.
CSDude 19 hours ago 0 replies      
I wonder what is the exact reason that my country is not included. It is just a fricking camera app.
the_cat_kittles 21 hours ago 1 reply      
Isn't it interesting how, by diminishing the overall information content of the image by blurring it, it actually communicates more (in some ways, particularly depth) to the viewer?
sivanmz 12 hours ago 0 replies      
It's a cool gimmick that would be useful for Instagram photos of food. But selfies will still be distorted when taken up close with a wide angle lens.

It would be interesting to pair this with Nokia's high megapixel crop-zoom.

benmorris 17 hours ago 0 replies      
The app is fast on my Nexus 5. The lens blur feature is really neat. I've taken some pictures this evening and they have turned out great. Overall a nice improvement.
dharma1 17 hours ago 0 replies      
The accurate depth map creation from two photos on a mobile device is impressive. The rest has been done many times before.

This is cool, but I am waiting more for RAW images to be exposed in the Android camera API. It will be awesome to do some cutting-edge tonemapping on the 12 bits of dynamic range that the sensor gives, which is currently lost.

bitJericho 21 hours ago 0 replies      
If you couple this with instagram does it break the cosmological fabric?
spyder 19 hours ago 0 replies      
But it can only be used on static subjects, because it needs a series of frames for depth.
servowire 20 hours ago 3 replies      
I'm no photographer, but I was taught this was called bokeh, not blur. Blur is more about motion during an open shutter.
matthiasb 16 hours ago 0 replies      
I don't see this mode. I have a Note 3 from Verizon. Do you?
avaku 20 hours ago 0 replies      
So glad I did the Coursera course on Probabilistic Graphical Models, so I totally have an understanding of how this is done when they mention Markov Random Field...
DanielBMarkham 20 hours ago 0 replies      
Lately I've been watching various TV shows that are using green screen/composite effects. At times, I felt there was some kind of weird DOF thing going on that just didn't look right.

Now I know what that is. Computational DOF. Interesting.

Along these lines, wasn't there a camera technology that came out last year that allowed total focus/DOF changes post-image-capture? It looked awesome, but IIRC, the tech was still several years from release.

ADD: Here it is. Would love to see this in stereo 4K: http://en.wikipedia.org/wiki/Lytro The nice thing about this tech is that in stereo, you should be able to eliminate the eyeball-focus strain that drives users crazy.

apunic 11 hours ago 0 replies      
Game changer
alexnewman 16 hours ago 0 replies      
Got me beat
seba_dos1 20 hours ago 0 replies      
Looks exactly like the "shallow" mode of the BlessN900 app for the Nokia N900 from a few years ago.

It's funny to see how most of the "innovations" in the mobile world presented today by either Apple or Google were already implemented on open or semi-open platforms like Openmoko or Maemo a few years before. Most of them only as experiments, granted, but it still shows what the community is capable of on its own when unnecessary restrictions aren't put on it.

sib 20 hours ago 2 replies      
If only they had not confused shallow depth of field with Bokeh (which is not the shallowness of the depth of field, but, rather, how out-of-focus areas are rendered), this writeup would have been much better.


Cool technology, though.

Animatron: HTML5 Animation Editor Inspired by Middle-School Homework Assignment jetbrains.com
13 points by rdemmer  2 hours ago   3 comments top 2
robmcm 16 minutes ago 0 replies      
Funny that there is a link in that post back to an original earlier post on Hacker News: https://news.ycombinator.com/item?id=7360296

Interesting the team used GWT. I would like to see a post about how they found it, and if they would choose it again for a new project.

timdorr 1 hour ago 1 reply      
Why isn't the first thing I see on the Animatron website an animation made in their own app? http://animatron.com/

Webflow did this with yesterday's post: http://interactions.webflow.com/

Uber hit with preliminary injunction to stop service in Berlin [german] zeit.de
26 points by Xylakant  5 hours ago   33 comments top 5
Genmutant 3 hours ago 3 replies      
It's strange that this took so long, and only in Berlin. If you want to drive a taxi in Germany, you need a special driver's license. If you want to open a taxi company you need to prove that you are reliable and secure. Additionally, you can't refuse a passenger (if there is nothing wrong with them) and can only charge the tariff the city sets (+ additional charges, e.g. on Sundays or at night).
merrua 3 hours ago 2 replies      
Pretty fair that they blocked it. It's basically an unregulated taxi service. Also, the name is terrible for Germany.
Xylakant 5 hours ago 4 replies      
Sorry, I couldn't find an English source. Google Translate link: http://translate.google.com/translate?sl=de&tl=en&js=y&prev=...

It's interesting to note that "Uber" is translated as "About" (über).


Thanks to everyone explaining why Uber translates to "about" - I'm a German native. Not knowing that makes the Google translation a little hard to read, so I thought I'd add it for all who don't speak German.

peterjancelis 2 hours ago 0 replies      
In Brussels Uber (more specifically UberPOP) got banned as well.
calibwam 4 hours ago 1 reply      
It is always better to attack competition than to become better yourself. - AT&T, Comcast, etc.
Ask HN: What source code is worth studying?
219 points by SatyajitSarangi  9 hours ago   106 comments top 52
sillysaurus3 8 hours ago 4 replies      
== Vim or Emacs ==

Just pick one and force yourself to use it to the exclusion of other editors. Future you will thank you later, because you'll still be using it 20 years from now. "We are typists first, programmers second" comes to mind. You need to be able to move chunks of code around, substitute things with regexes, use marks, use editor macros, etc.

== 6.824: Distributed Systems ==

http://pdos.csail.mit.edu/6.824-2013/ Do each lab. Read the discussion and rtm's course notes.

== Tarsnap ==

https://www.tarsnap.com/download.html How to write C. Study the "meta," that is, the choice of how the codebase is structured and the ruthless attention to detail. Pay attention to how functions are commented, both in the body of the function and in the prototypes. Use doxygen to help you navigate the codebase. Bonus: that'll teach you how to use doxygen to navigate a codebase.

== xv6 ==




Read the book. Force yourself to read it in its entirety. Use the source code PDF to study how to turn theory into practice.

== Arc ==


You're not studying Arc to learn Arc. You're studying Arc to learn how to implement Arc. You'll learn the power of anaphoric macros. You'll learn the innards of Racket.

Questions to ask yourself: Why did Racket as a platform make it easier to implement Arc than, say, C/Golang/Ruby/Python? Now pick one of those and ask yourself: what would be required in order to implement Arc on that platform? For example, if you say "C," a partial answer would be "I'd have to write my own garbage collector," whereas for Golang or Lua that wouldn't be the case.

The enlightenment experience you want out of this self-study is realizing that it's very difficult to express the ideas embodied in the Arc codebase any more succinctly without sacrificing its power and flexibility.

Now implement the four 6.824 labs in Arc. No, I'm not kidding. I've done it. It won't take you very long at this point. You'll need to read the RPC section of Golang's standard library and understand how it works, then port those ideas to Arc. Don't worry about making it nice; just make it work. Port the lab's unit tests to Arc, then ensure your Arc version passes those tests. The performance is actually not too bad: the Arc version runs only a few times slower than the Golang version if I remember correctly.

== Matasano crypto challenges ==

http://www.matasano.com/articles/crypto-challenges/ Just trust me on this one. They're cool and fun and funny. If you've ever wanted to figure out how to steal encrypted song lyrics from the 70's, look no further.

== Misc ==

(This isn't programming, just useful or interesting.)

Statistics Done Wrong http://www.statisticsdonewrong.com/

A Mathematician's Apology http://www.math.ualberta.ca/mss/misc/A%20Mathematician's%20A...

Surely You're Joking, Mr. Feynman http://web.archive.org/web/20050830091901/http://www.gorgora...

Zen and the Art of Motorcycle Maintenance http://www.arvindguptatoys.com/arvindgupta/zen-motorcycle.pd...

== Above All ==

Don't fall in love with studying theory. Practice. Do what you want; do what interests you. Find new things that interest you. Push yourself. Do not identify yourself as "an X programmer," or as anything else. Don't get caught up in debates about what's better; instead explore what's possible.

stiff 8 hours ago 1 reply      
I think you get more benefit from reading code if you study something very close to what you are working on yourself, something in the same domain, in the same framework perhaps, or at least in the same programming language, at best something you are deeply involved in currently.

I never seem to get enough motivation to read deeply into random "grand" code bases like Lua or SQLite, but some months ago I got into the habit of always studying a bunch of projects that use a given technology before I use that technology, and it greatly decreased the amount of time it takes me to reach an "idiomatic" coding style. So instead of diving in at random, I would recommend making researching existing code bases related to what you are currently doing an integral part of your workflow.

willvarfar 8 hours ago 1 reply      
Fabien Sanglard http://fabiensanglard.net has some excellent code reviews on his website, particularly of games.

You could read some of the code-bases he reviews, and then read his review. You'll be able to compare and contrast your opinions with his, and if there's interesting variation you can blog about it ;)

fotcorn 8 hours ago 0 replies      
The Architecture of Open Source Applications book[0] gives a high level overview on many open source projects. It's a good starting point to dive into the code of these projects.

[0] http://aosabook.org/en/index.html

robin2 5 hours ago 0 replies      
Slightly off topic, but Peter Seibel's take on the idea of code reading groups, and the idea of code as literature, is interesting: http://www.gigamonkeys.com/code-reading/

"Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found."

The reason this is off topic is that it sounds like you were after interesting specimens anyway. I don't have any code examples as such, although if algorithms count I'm particularly fond of Tarjan's algorithm for finding strongly connected components in a directed graph, and the Burrows-Wheeler transform (as used in bzip).
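The Burrows-Wheeler transform mentioned above is compact enough to sketch. The naive version below (sort all rotations of the string plus a unique sentinel, take the last column) is purely illustrative; real compressors like bzip2 build it via suffix arrays rather than materializing every rotation:

```python
def bwt(s, sentinel="\0"):
    """Naive Burrows-Wheeler transform: sort all rotations of s (with a
    unique end sentinel appended) and take the last column."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def ibwt(last, sentinel="\0"):
    """Invert the transform by repeatedly prepending the last column to a
    sorted table, then picking the row that ends with the sentinel."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith(sentinel))[:-1]
```

The payoff for compression is that the output clusters identical characters together (e.g. the "nn" and "aa" runs in the transform of "banana"), which run-length and move-to-front coders then exploit.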

oneeyedpigeon 6 hours ago 1 reply      
To mix things up a bit, I'm going to give two very small examples of code that can be understood quickly, but studied diligently. Both are in JavaScript, which I notice you mention specifically in another comment:

[1] Douglas Crockford's JSON parser. Worth a look because it is excellently commented and is easily understandable https://github.com/douglascrockford/JSON-js/blob/master/json...

[2] Bouncing Beholder. A game written in 1K of highly obfuscated code, which the author expands upon here. Worth it because it teaches some crazy optimisation techniques that are applicable to all programming, but also includes plenty of javascript-specific trickery. http://marijnhaverbeke.nl/js1k/

dailo10 3 hours ago 1 reply      
Python Sudoku Solver by Peter Norvig -- an elegant solution in one page of code. When I read this, I felt like code is art.


davidw 7 hours ago 0 replies      
I'm partial to the Tcl C code:


It's very nicely commented and has a nice, easy to read style throughout (except for the regexp files).

raverbashing 4 hours ago 0 replies      
The Linux Kernel

Very clean (mostly) and heavily reviewed C code, following a strict coding convention

(Of course it's kernel code, so some things don't apply to userspace, still)

pcx 8 hours ago 0 replies      
I've heard lots of people sing praises for the Redis source - https://github.com/antirez/redis. A cursory look into the source shows a very well documented code base. It's one of the top items in my to-read-some-day list. Salvatore is an excellent C programmer and takes great pains in writing good documentation, despite his not-so-great English skills. A shout-out to him: thanks for setting an example.
SixSigma 3 hours ago 0 replies      
The plan9 operating system

* The lack of ifdefs, which makes cross-compiling a breeze

* It is easy to understand, compared to reading the Linux kernel


spacemanmatt 1 hour ago 0 replies      
Please enjoy the source code of PostgreSQL (any version, but latest is generally recommended) core. It is very well factored, and typically also very well commented. This community cares a great deal about code quality, because they are so clear on the relation between readability, diagnosability, and execution correctness.
agentultra 2 hours ago 0 replies      
Anything you find interesting or find yourself using frequently.

A less glib answer: try Brogue: https://sites.google.com/site/broguegame/

A very interesting roguelike with interesting constraint-based features.

rch 2 hours ago 0 replies      
Take a look at Redis sometime. You might want to actually work on it a bit to help internalize what you're reading. Here are a couple of articles that might help get you started:



pavlov 7 hours ago 0 replies      
I learned a lot from the Cocotron source:


It's a free cross-platform implementation of Apple's Cocoa, so there's a lot of stuff there. But the project is well organized, and almost everything is written in a minimalist oldschool Objective-C style.

I've looked at some other cross-platform frameworks, and they are often hard to understand because they have been developed by a large group of developers and include lots of complex optimizations and platform-specific code paths. Cocotron is not as finely tuned as Apple's CoreFoundation (for example), but much more readable.

AhtiK 4 hours ago 1 reply      
Python => SQLAlchemy

Very clean, feature-rich yet pragmatic and well documented. https://github.com/zzzeek/sqlalchemy

chris_wot 1 hour ago 0 replies      
It's not great code (though I'm working to make it so), and perhaps not the intent of this question - but if you want to look at a 25+ year old codebase that's being refactored, check out LibreOffice, especially the VCL component:


lamby 5 hours ago 0 replies      
"Beautiful Code" is worth a read-through, particularly for the commentary.

(One thing that I still remember years on is the "drop of sewage" example.)

projectileboy 3 hours ago 0 replies      
I'd echo the advice to read the Arc source, and I'd add the various versions of Quake (C, C++). I learned a lot reading John Carmack's code.
riffraff 4 hours ago 0 replies      
Not a specific codebase, but I went through "Code Reading"[0] many years ago, I found it interesting. Most reviews are not very positive though, so maybe it was just at the right point for me.

[0] http://www.amazon.com/Code-Reading-Open-Source-Perspective/d...

DalekBaldwin 4 hours ago 1 reply      
Honestly, aside from learning to express a few extremely specific patterns in your language of choice concisely and elegantly and reminding yourself of the existence of certain libraries and utility functions so you don't accidentally waste time reinventing them, I think reading source code is a pretty useless exercise unless you also have a detailed record of how that source code came to exist in its present form. Until there is some revolutionary new tool for generating a human-understandable narrated history of large-scale design decisions from a source control history, your time will almost certainly be better spent reading textbooks that incrementally develop a piece of software over several chapters. Even that is cheating -- the authors know exactly where they want to end up and they won't include all the missteps they made when they first started writing similar programs. But it's still loads better than the alternative. Just as sitting in a law school library absorbing an encyclopedic knowledge of the law won't really train you to make arguments that will fly in front of a judge, reading a code base as a dead, unchanging document won't teach you what it is to live in that code.
betterunix 3 hours ago 0 replies      
SBCL or CMUCL -- Lisp compilers written in Lisp.
j_s 2 hours ago 0 replies      
In the .NET world, shanselman has a series of Weekly Source Code blog posts and most recently posted a list of seven 'interesting books about source and source code'.


hiisi 6 hours ago 0 replies      
C -> Redis

I haven't written any C for years, but really enjoyed skimming through Redis codebase, it's so clean, easily understandable and extensible.

nicholassmith 7 hours ago 0 replies      
I had a read through the PCSX2 emulator recently, that was quite interesting: https://github.com/PCSX2/pcsx2 it's a complex project in what was surprisingly readable C++ code.
fit2rule 7 hours ago 2 replies      
The sources to Lua are pretty darn great:


patrickg 3 hours ago 0 replies      
I suggest the source code of TeX. Not new, but still very interesting to read.

source that needs some postprocessing (tangle/weave):


PDF from the source (including hyperlinks)


kjs3 2 hours ago 0 replies      
I learned a huge amount about how real operating systems are put together, and the compromises that get made, by reading the V6 Unix source via John Lions' Commentary (yes... I had a photocopied copy). It made exploring the BSD 4.2 and 4.3 source trees (another worthwhile exercise) much easier. I suppose if I was starting out today and not in 1985 I'd look at xv6 or Minix.
twunde 2 hours ago 1 reply      
For PHP, I've been very impressed by Phabricator's code (and the related phutil library). It's worth looking at the git commits as well, to see just how clean and structured commits can be. I'm much more impressed by it than by any PHP framework code I've read (and I've read Zend, Symfony2, li3, CodeIgniter, as well as custom frameworks).
olalonde 6 hours ago 0 replies      
JavaScript/Node.js: pretty much anything written by https://github.com/visionmedia (his less popular libraries are not very well commented though), and https://github.com/jashkenas/underscore

Scheme (and functional programming in general): examples/exercises from the SICP book

twelvechairs 7 hours ago 0 replies      
The most interesting things to read are those where a programmer has done something cleverly, but this only needs to happen when your language or libraries make it hard for you to begin with. Aside from low-level performance intensive functions, the best code is not interesting to read - it just reads like statements of fact.
collyw 6 hours ago 1 reply      
Slight tangent to your question, but one thing I have noticed recently is that having to deal with really crap code inspires me to do my own better.

I inherited a colleagues work after she left, and it was horrible. But I thought about why it was horrible, and how to make it better. What would it look like if it was done well?

Even with my own code, if I look at something I did 6 months ago and it doesn't make sense straight away, then it can usually be improved.

laichzeit0 7 hours ago 0 replies      
Eric S. Raymond wrote a book The Art of Unix Programming [1] that has many "case studies" as well as recommendations of which software/RFCs are particularly worthy of study.

[1] http://www.faqs.org/docs/artu/

jacquesm 8 hours ago 5 replies      

  C -> Varnish
  PHP -> Yii
  Ruby -> Merb
  Scheme -> Arc
  Clojure -> Core
  JavaScript -> Multeor
Any languages in particular that you're interested in not covered above?

raju 3 hours ago 1 reply      
Any suggestions for Clojure projects?

[Update: Oops. I missed the "Clojure -> Core" by jacquesm]

redox_ 5 hours ago 0 replies      
For all low-level I/O details (fflush/fsync/fdatasync on files/directories after creation/renaming), I used to read the MySQL routines; pretty simple to understand: https://github.com/twitter/mysql/tree/31d6582606ddf4db17ad77...
villek 5 hours ago 2 replies      
I found the annotated source code of the underscore.js to be very educational: http://underscorejs.org/docs/underscore.html
diegoloop 7 hours ago 0 replies      
I made this tool: http://codingstyleguide.com to improve the way I code in different languages without getting lost in too much programming information, and it's helping me a lot.
vishnugupta 7 hours ago 0 replies      
I'm fascinated by concurrent programming. I find that reading classes from Java's java.util.concurrent package gives me very good practical insights as to what goes into building a concurrent class. My all time favorite is ConcurrentHashMap :
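One of the practical insights ConcurrentHashMap offers is lock striping: guarding buckets with several independent locks instead of one global lock, so threads touching unrelated keys don't contend. A toy sketch of that idea (not the JDK implementation, and in Python rather than Java, purely for illustration):

```python
import threading

class StripedMap:
    """Toy lock-striped hash map: each key hashes to one of N stripes,
    each with its own lock, so unrelated keys can be updated concurrently."""

    def __init__(self, num_stripes=16):
        self._locks = [threading.Lock() for _ in range(num_stripes)]
        self._buckets = [dict() for _ in range(num_stripes)]

    def _stripe(self, key):
        # Map the key to a stripe index; all access to that bucket
        # goes through the matching lock.
        return hash(key) % len(self._locks)

    def put(self, key, value):
        i = self._stripe(key)
        with self._locks[i]:
            self._buckets[i][key] = value

    def get(self, key, default=None):
        i = self._stripe(key)
        with self._locks[i]:
            return self._buckets[i].get(key, default)
```

The real class adds much more (lock-free reads via volatile/CAS, resizing, size estimation), which is exactly why its source rewards study.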
davedx 6 hours ago 1 reply      
* BackboneJS

* UnderscoreJS

agumonkey 8 hours ago 0 replies      
I really enjoyed skimming through Ian Piumarta's Maru, a Lisp in C, very pretty code, very concise. (I already mentioned it in other topics)


entelect 6 hours ago 0 replies      
dfkf 6 hours ago 0 replies      
dschiptsov 4 hours ago 0 replies      
db48x 6 hours ago 0 replies      
TeX the Book is good, even if it is in Pascal.
borntyping 6 hours ago 0 replies      
Python: Flask (and related projects)
s_dev 6 hours ago 0 replies      
I've heard that reading the Git source code is very beneficial but haven't done it myself yet.
Hydraulix989 8 hours ago 1 reply      
C -> nginx

C++ -> Chrome
willvarfar 7 hours ago 1 reply      
(You say the 'naive' way; how can it be compressed better?)
marincounty 2 hours ago 0 replies      
Get to know the command line before you start any language.
plicense 6 hours ago 0 replies      
Everything at Google.
Google's Street View computer vision can beat reCAPTCHA with 99% accuracy googleonlinesecurity.blogspot.com
302 points by apawloski  20 hours ago   142 comments top 44
zwegner 18 hours ago 12 replies      
This particular issue (AI performance on captchas) is really quite fascinating. It's an arms race, but the problem is, only one side can win. Google is claiming they have improved their system in some (understandably) unspecified way, but there's only so far this can go. Captchas need to detect whether someone is human, but they have to work for everyone, ideally even those with disabilities. Any simple task a human can do will eventually be automated. Tasks that aren't currently feasible to automate, say some natural language processing tasks, have another problem: scalability. To prevent simple databases of problems -> solutions, the problems need to be generated en masse, and for cheap, which means a computer needs to generate the solutions in the first place. And of course, paying people to just do captchas all day already happens.

The street address/book scan approach that Google uses is interesting in that the exact solution is not known, so they presumably have to be somewhat forgiving in accepting answers (as their machine learning might have gotten it wrong). Perhaps this is what their "risk analysis" refers to--whether their response seems "human" enough according to their data, not necessarily whether it's correct.

I don't see a way around this problem for free services that still preserves privacy (so directly using some government-issued ID is off the table). Maybe some Persona-like digital signature system, where a person can go to a physical location with a government ID, and get a signature that says "Trusted authority X affirms that person Y is in fact a person". Obviously this still has problems, as you need to trust X, and it's a big pain in the ass.

There are parallels to the realm of passwords, which are also becoming obsolete (not that there's a good replacement...). Anything that a human can feasibly remember for a bunch of sites is becoming easier and easier for computers to guess.

So basically, computers are taking over the world, and we can't do anything to stop it. God help us all.

josho 20 hours ago 2 replies      
Interestingly I activated a new gmail account today and during the signup process I experienced the obligatory captcha. It was in two parts, the first looked strikingly like a street view picture of a house number, while the second looked like a traditional captcha.

I suspect that google has been using techniques like this to validate their computer vision conclusions. Which makes their 99% assertion even more interesting, because it's likely 99% confirmed by a very large crowd sourced data set, not simply a staff member going through several hundred samples to come up with the success rate.

jrochkind1 19 hours ago 2 replies      
From that caption "CAPTCHA images correctly solved by the algorithm", there are at least two of them that I'm not sure _I_ can correctly solve on the first try.

Which is generally my experience with captchas these days; I only have about a 50% success rate.

CAPTCHA is a failed strategy, time to give it up.

adyus 20 hours ago 4 replies      
In effect, Google computer vision got so good that they made their own system obsolete. This is a good thing.

I still think the only reliable way to confirm identity (or humanity) online is an email or SMS verification. Recently, receiving a 2-factor SMS code took less time than the page refresh prompting me to enter it.

frik 11 hours ago 0 replies      
Google's reCAPTCHA showed street numbers as one of the two captcha "words" for more than two years.

For me it was quite annoying to type in other people's street numbers. It's a privacy issue; it was like helping the NSA spy, and one feels bad entering Google's captcha.

What is even more astounding is that Google does not even mention all the crowd-sourced "volunteers" that trained their OCR database. As Google uses open OCR software (a former HP OCR app from '95), it would be a good choice to publish their data back to the community.

I removed Google's captcha on my own sites and implemented my own traditional captcha (on first sight of this, about two years ago).

zobzu 19 hours ago 1 reply      
The program solves captchas that I, as a human, cannot solve. Pretty sure that means captchas of that type are definitely dead.
jere 20 hours ago 3 replies      
>In this paper, we show that this system is able to accurately detect and read difficult numbers in Street View with 90% accuracy.

> Turns out that this new algorithm can also be used to read CAPTCHA puzzles; we found that it can decipher the hardest distorted text puzzles from reCAPTCHA with over 99% accuracy.

Am I missing something or could we improve CAPTCHAs by mimicking street numbers?

pacofvf 18 hours ago 0 replies      
Well, there are a lot of "solve CAPTCHA as a service" sites, like http://www.9kw.eu/
ilitirit 13 hours ago 0 replies      
To be honest, I can't even solve those reCAPTCHAs on that page (that's one of my biggest gripes about reCAPTCHA). I think we're nearing a point in time where if some(thing) can solve a particularly hard CAPTCHA, we can safely assume that it's not human.
dnlbyl 20 hours ago 3 replies      
99% is probably better than my success rate with reCAPTCHA...
msvan 6 hours ago 1 reply      
Here's a captcha idea: make people write a 100-word essay on a specific topic. If it's good, you're accepted and you won't have to do it again. If it's bad, you're either a computer or cheap Nigerian labor. When we get to the point where we can't distinguish a computer from a human, we'll just let them be a part of the community.
shultays 3 hours ago 0 replies      
My accuracy is way below 99%, good job Google!

Seriously though, I hope this does not mean there will be harder captchas, current ones are already stupidly hard

rasz_pl 6 hours ago 0 replies      
Does Google aggregate & correlate data in its vision algorithm?

For example, for street numbers they not only have a picture of a number, they also have knowledge of all the other numbers on that street and guesses for those other numbers. It's easy to guesstimate the order of a number by checking the neighbouring ones.

Same for book words: they have an n-gram database. http://storage.googleapis.com/books/ngrams/books/datasetsv2....

That's a lot of useful MAP/ML data.

But the examples they give for the new captchas all look like random crap, "mhhfereeeem" and the like. It's like they are not interested in structure, just the pure geometry of letters/numbers.
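The "check the neighbouring ones" idea above can be made concrete: if the OCR is unsure about one house number, the numbers read on either side of it along the street constrain the guess, since house numbers usually run monotonically. A toy sketch (the interpolation rule and data are made up for illustration):

```python
def interpolate_missing(numbers):
    """numbers: house numbers in street order, None for an unreadable one.
    Fill an interior gap with the midpoint of its readable neighbours."""
    out = list(numbers)
    for i, n in enumerate(out):
        if n is None and 0 < i < len(out) - 1:
            left, right = out[i - 1], out[i + 1]
            if left is not None and right is not None:
                out[i] = (left + right) // 2
    return out

print(interpolate_missing([12, None, 16]))  # [12, 14, 16]
```

A real system would treat this as a prior to combine with the per-image classifier's scores rather than a hard rule.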

dlsym 20 hours ago 0 replies      
"CAPTCHA images correctly solved by the algorithm" - Ok. Now I have to consider the possibility of being a machine.
infinity0 7 hours ago 0 replies      
Ironic how the HTTPS version force-redirects you to HTTP. (Amazon.co.uk started doing this a few days ago and it's pissing me off no end.)
spullara 9 hours ago 0 replies      
Reminds me of a hack day at Yahoo where one team made a captcha where you had to match a photo with its tags, and another team made an algorithm that would assign tags to a photo. Both being based on Flickr humans meant that the captcha was easily solvable by the algorithm.
tsenkov 4 hours ago 0 replies      
It's fascinating how the captcha, arguably simple software now, will inevitably become more and more complex as AI develops.
spullara 9 hours ago 0 replies      
So, now if you get the captcha right you're a computer, otherwise you are a human?
zatkin 19 hours ago 1 reply      
But can it beat CRAPCHA? http://crapcha.com/
rasz_pl 7 hours ago 0 replies      
>CAPTCHA images correctly solved by the algorithm

Well, isn't that great? Because I, a HUMAN, can maybe solve _one_ of those (the lower right one).

I frickin' HATE Google captchas and simply close the page if it wants me to solve one; they are too hard for me.

aviraldg 11 hours ago 0 replies      
Isn't this expected (a natural consequence of the fact that it's trained on huge volumes of reCAPTCHA data)?
plg 17 hours ago 1 reply      
Why isn't google releasing the full algorithm?
aljungberg 7 hours ago 0 replies      
Google software could use their 99% successful algorithm to filter potential captchas. Then show the 1% they can't crack to humans.

Now the race becomes who can write the better captcha solver, Google or the spammers? As spammers learn to identify things in the 1%, Google will hopefully improve faster and continue to narrow the "hard to solve" band.

mrsaint 19 hours ago 2 replies      
Captchas were meant to keep spammers at bay. Unfortunately, that's no longer the case. Thanks to "cloud technology" like DeathByCaptcha - that is, people in countries where labor is cheap solving captchas all day - spammers have no problem getting through reCaptcha-protected sites and forums to do their mischief.

As a result, reCaptcha & co tend to be more of an annoyance to honest visitors than to spammers.

drawkbox 17 hours ago 0 replies      
99% is better than most humans' captcha accuracy. Back in my day humans could still beat computers at chess, but nowadays computers can beat humans at Jeopardy and drive. Interesting to see when it fully crosses over.
daffodil2 20 hours ago 1 reply      
Wait, it's not clear to me from the blog post. Did they make a system that obsoletes reCAPTCHA? If so, it's just a matter of time before the spam systems catch up, correct? If so, what's the successor to CAPTCHA? Or is the web just going to be full of spam in the future?
varunrau 17 hours ago 0 replies      
I've always felt that it would be only a matter of time before computer vision would be able to solve the (re)CAPTCHA problem. Especially since digit classifiers are able to match the performance of humans.

One approach that I enjoyed seeing was the use of reverse captchas. Here you pose a problem that a computer can easily solve, but a human cannot. For instance, if you ask a simple question (1+1=?), but you place the question box off the screen so the user can't see it. A computer would be able to easily answer the question, but a human user would have no way of doing so.
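The reverse-captcha idea in the comment above reduces to a simple server-side check: the form carries a question (e.g. "1+1=?") positioned off-screen with CSS, so a human never sees it, while a script that parses and answers the form reveals itself by filling it in. A minimal sketch; the field name and answer are hypothetical:

```python
# Answer to the off-screen "1+1=?" question; a human never sees the field.
HIDDEN_QUESTION_ANSWER = "2"

def looks_like_bot(form: dict) -> bool:
    """Flag submissions that answered the invisible question."""
    value = form.get("quiz_answer", "").strip()
    # A human leaves the invisible field empty; any non-empty value
    # means something read the form markup and responded to it.
    return value != ""

# A human submission leaves the hidden field blank:
print(looks_like_bot({"email": "a@b.c", "quiz_answer": ""}))   # False
# An automated submission answers the invisible question:
print(looks_like_bot({"email": "a@b.c", "quiz_answer": "2"}))  # True
```

This is the same mechanism as the classic honeypot field; its weakness is that a bot author only has to notice the field is visually hidden to start skipping it.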

pavelrub 18 hours ago 0 replies      
This is essentially the technology that was discussed here 3 months ago [1], and it links to the exact same article on arxiv, titled: "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks". [2]

The new addition to the article is that now they have tested the same type of NN on reCAPTCHA, and (perhaps unsurprisingly) it works.

[1] - https://news.ycombinator.com/item?id=7015602

[2] - http://arxiv.org/abs/1312.6082v4

blueskin_ 8 hours ago 0 replies      
Great... now they are going to get even harder to actually do.
northisup 16 hours ago 0 replies      
Yet it says I'm a robot a good two out of three times.
aaronbrethorst 20 hours ago 0 replies      
I'm impressed that their address identification algorithm can solve those CAPTCHAs. I can't make heads or tails of them.
exadeci 8 hours ago 0 replies      
You're welcome, Google (we are the lab rats that taught their system how to read).
stuaxo 17 hours ago 0 replies      
I'm sorry, but as a human I have had to fill in these street-view-style captchas all the time for Google, so this is hardly completely artificial intelligence; humans have done it many, many times. In fact, I'm sure some of the pictures in the article have come up.
peterbotond 13 hours ago 0 replies      
What if someone has bad eyes or some rare eye problem and cannot solve captcha problems at all? In other words, fails captchas 90% of the time.
vfclists 5 hours ago 0 replies      
Google are getting too creepy for any sensible person's liking. Addresses which are off the street in apartment complexes are now getting recognized as well.

Whenever I see these kind of captchas I switch to audio captchas. It is rather unethical for Google to use recaptchas in this way.

EGreg 13 hours ago 0 replies      
Basically consider why we want to eliminate computers from accessing the sites -- because we want to make account creation expensive, to prevent sybil attacks and giving away scarce resources.

What is expensive? Reputation. That's where credit money's value comes from.

I wrote a more comprehensive piece here, in case anyone's interested: https://news.ycombinator.com/item?id=7601690

Keyframe 19 hours ago 2 replies      
Now that programs are better and better at solving CAPTCHAs, correct CAPTCHA input will mean the opposite of what it means now. Since programs are better at solving CAPTCHAs than humans, correct input (3/3, for example) will mean it's a robot. Thus, CAPTCHA becomes relevant again.
knodi 11 hours ago 1 reply      
I just came here to say fuck reCAPTCHA! I hate it, I can't read it with my human eyes.
conectorx 15 hours ago 0 replies      
This can also be done with Tesseract or the Encog framework... I don't know what's new about this.
spcoll 10 hours ago 0 replies      
It's a new success for deep learning. It seems to actually be 99.8% accuracy according to their paper: http://arxiv.org/abs/1312.6082

That's a five-fold reduction in the error rate.

maccard 19 hours ago 0 replies      
Damn, that's better than me!
techaddict009 20 hours ago 0 replies      
This is really Great. AI is getting really smarter and smarter day by day!
Why The Clock is Ticking for MongoDB rhaas.blogspot.com.br
70 points by turrini  2 hours ago   97 comments top 17
overgard 1 hour ago 5 replies      
Having used mongo in a professional context, I'm sort of amused by how much vitriol it gets. It has its flaws, but it's not that bad. I think it's been a bit misbranded as the thing you use to "scale", which ticks people off. To me, when I use mongo, I mostly use it because it's the most convenient option. It's very easy to develop against, and awesome for prototyping, and a lot of times it's good enough that you never need to replace it after the prototype phase.

Relational databases are great, but they're almost an optimization -- they're way more useful after the problem set has been well defined and you have a much better sense of data access patterns and how you should lay out your tables and so on. But a lot of times that information isn't obvious upfront, and mongo is great in that context.

pilif 2 hours ago 9 replies      
I really don't see how a fixed schema is seen as such a bad thing by many NoSQL advocates. In most databases, altering a schema is an operation that's over quickly and in many databases it's easily reversible by just rolling back a transaction.

The advantages of a fixed schema are similar to the advantages of a static type system in programming: You get a lot of error checking for free, so you can be sure that whenever you read the data, it'll be in a valid and immediately useful format.

Just because your database is bad at altering the schema (for example by taking an exclusive lock on a table) or validating input data (like turning invalid dates into '0000-00-00' and then later returning that) doesn't mean that this is something you need to abolish the schema to solve.

Just pick a better database.
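The point about altering a schema being quick and reversible can be demonstrated even with SQLite from the Python standard library, since its DDL (like PostgreSQL's) runs inside transactions. A small sketch, not specific to any production database:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so we can
# control transactions explicitly with BEGIN/ROLLBACK.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def columns():
    # PRAGMA table_info rows are (cid, name, type, notnull, default, pk).
    return [row[1] for row in conn.execute("PRAGMA table_info(users)")]

conn.execute("BEGIN")
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
assert "email" in columns()          # visible inside the transaction
conn.execute("ROLLBACK")
assert "email" not in columns()      # the schema change was reversed
```

Databases whose DDL is not transactional (older MySQL, for instance) can't offer this, which is part of the comment's point: that's an argument against those databases, not against schemas.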

gdulli 2 hours ago 3 replies      
"This is not to deny that MongoDB offers some compelling advantages. Many users have found that they can get up and running on MongoDB very quickly, an area where PostgreSQL and other relational databases have traditionally struggled."

A few things about this I've never understood.

1. Someone's going to make a technology decision with priority given to a one-time cost over the strengths and weaknesses of running competing products in perpetuity? The ability to avoid having to learn something is a pro in this decision?

2. If you don't have the skill set to install or maintain MySQL or Postgres, you should not be in charge of installing or maintaining production systems, period. You will hit your ceiling the first time you have to do anything non-trivial with a system that happens to have been easier to "get started with."

yawz 2 minutes ago 0 replies      
Dramatic titles get the readers attention, therefore I get the choice of words. However, our industry is so big that there will never be a single solution. MongoDB may never become #1 but as it's described in many comments, it's a pretty good choice in various situations. So, as they say in Ireland "Stall da beans der bi!" I don't hear a ticking clock.
m_mueller 1 hour ago 1 reply      
I can't really speak about Mongo, but since the post seems to be talking about relational vs. document based DBs in general, here's my perspective coming from CouchDB:

- Schemaless DBs make sense, when handled correctly within the application framework, for what I'd call information systems with regularly changing requirements. I'm currently building a rapid development platform for these kinds of systems, where users can define an arbitrary relational data model from a Web-UI and get the application with forms and views all pre-made. The user design is changeable at any point without breaking anything and even the underlying static data structures can be changed without any need for an update or data migration process - it's all handled when opening or saving a document with a previous version.

- CouchDB's map/reduce view definitions are interesting when designing a DB system, since they IMO restrict the developer in exactly the right way: One is forced to write the right kind of index views instead of being able to just join willy nilly. Making something slow usually means writing complex views and chaining many lookups together - one has to work for it and, conversely, being lazy and reducing everything to the simplest solution usually results in a fast solution as well. The result usually scales up to a high number of documents in terms of performance.

- Being able to replicate without any precondition, including to mobile devices with TouchDB, is a big plus - and in fact a requirement in our case. Offline access is still important, especially in countries where people spend a lot of time in trains or for systems that manager types want to be accessing in flight.
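The view discipline described above can be shown in miniature. Real CouchDB views are JavaScript map/reduce functions stored in design documents; this plain-Python toy (with made-up documents) only mirrors the shape: a map function emits (key, value) pairs per document, a reduce folds the values per key, and the result is a precomputed index consulted at query time instead of an ad-hoc join.

```python
docs = [
    {"type": "sale", "region": "eu", "amount": 10},
    {"type": "sale", "region": "us", "amount": 7},
    {"type": "sale", "region": "eu", "amount": 5},
]

def map_fn(doc):
    # Emit one (key, value) pair per matching document.
    if doc["type"] == "sale":
        yield doc["region"], doc["amount"]

def reduce_fn(values):
    return sum(values)

# Build the view index, then reduce per key.
view = {}
for doc in docs:
    for key, value in map_fn(doc):
        view.setdefault(key, []).append(value)
view = {key: reduce_fn(vals) for key, vals in view.items()}

print(view)  # {'eu': 15, 'us': 7}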

dkhenry 2 hours ago 1 reply      
This is clearly a flamebait title. The article doesn't say anything about why MongoDB is running out of time other than "PostgreSQL is making real progress as a document store". I think the author has unwittingly identified why MongoDB is not running out of time. It still has a huge lead on RDBMS in a very common and useful workload. If they can continue to make progress while other engines catch up on some of the document-oriented features (as Postgres has done), then they will still have compelling features to offer. If they do nothing while other engines make progress, then of course they will fail.

If anything, all this points out is that document stores, like MongoDB, have a real market where they excel, and other engines are playing catch-up.

danford 1 hour ago 0 replies      
Often when I read articles like this I take them with a grain of salt. A lot of hate for MEAN technologies seems to stem from people who don't know how to properly use them. You're not supposed to use MongoDB like *SQL and if you try you're gonna have a bad time.

Let's say I have a big pick-up truck I use to haul xWidgets. My friend gets a little motorized scooter to drive around town hauling his yWidgets. Is it proper for me to believe that my friend's scooter sucks because it doesn't haul xWidgets like my truck? I don't know much about scooters, but my friend says it doesn't have a steering wheel and it only has two wheels. How the heck can it steer without a wheel? He says it has "handle bars" for steering. Seems kind of dumb but I guess it works for him. He touts the fact that he gets 80mpg on gas, but it doesn't matter how efficient it is if he can't haul xWidgets. Scooters suck because they don't work the way my truck does.

justinsb 2 hours ago 2 replies      
My personal "big picture" critique of MongoDB is that I see it evolving into a SQL database with a different syntax. It is a strongly-consistent, distributed-through-replication-and-sharding system. Additions to 'traditional' databases, like Postgres' HStore or MySQL's HandlerSocket show that many of the MongoDB differences are not fundamental.

Much more interesting to me are systems that do something fundamentally different. e.g. explore different parts of the CAP trade-off, like Cassandra. Or systems that are built around the idea that data no longer lives exclusively on a server, like CouchDB.

craigching 1 hour ago 0 replies      
As I always say when these sorts of articles come out (and I admit this is my use case for MongoDB and that doesn't necessarily match everyone else's use case), where is the easy-to-set-up HA and sharding for PostgreSQL? I know it's coming, but right now it's not there.

For someone who redistributes a product that relies on end-users setting up HA, this for me is MongoDB's killer feature, easy to configure replication and sharding. I love PostgreSQL, but this is the one big thing that keeps me from using it right now.

ThePhysicist 1 hour ago 2 replies      
I agree that document-oriented databases will probably not replace relational databases in the near future. In my opinion though, the schemaless design of MongoDB paired with its ease of use and its native support of JSON data makes it a perfect choice for prototyping and (in some cases) a viable option for use in production.

What you also have to consider when comparing document-oriented to relational databases is that the former is still a very young technology: MongoDB was founded in 2007, whereas Postgres has been around since 1986! So given what the MongoDB team has achieved in such a short time span, I expect to see some huge improvements in this technology over the next decade, especially given the large amount of funding that 10gen received.

In addition, the root cause for most complaints ("it doesn't scale!", "it loses my data!", "it's not consistent!") is that people try to apply design patterns from relational databases to MongoDB, which often is just a horrible idea. Document databases and relational databases are very different beasts and need to be handled very differently: Most design patterns for relational databases (data normalization, using joins to group data, using flat data structures, scaling vertically instead of horizontally) are actually anti-patterns in the non-relational world. If you take this into account I think MongoDB can be an awesome tool.
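The normalization-as-anti-pattern point above is easiest to see side by side. With plain dicts standing in for rows and documents (the field names are made up, and this is not the MongoDB API, just the data shapes):

```python
# Normalized, relational-style layout: reading an order needs a join
# across three collections.
customers = {1: {"name": "Ada"}}
orders = [{"id": 10, "customer_id": 1}]
order_lines = [{"order_id": 10, "sku": "pencil", "qty": 3}]

# Document-style layout: everything needed to render the order is
# embedded in one document, so a single lookup replaces the join.
order_doc = {
    "_id": 10,
    "customer": {"name": "Ada"},
    "lines": [{"sku": "pencil", "qty": 3}],
}

# The relational read stitches the pieces back together by hand.
o = orders[0]
joined = {
    "customer": customers[o["customer_id"]],
    "lines": [l for l in order_lines if l["order_id"] == o["id"]],
}
assert joined["customer"]["name"] == order_doc["customer"]["name"]
```

The trade-off runs the other way on writes: updating Ada's name touches one row in the normalized layout but every embedded copy in the document layout, which is why neither shape is an anti-pattern in its own world.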

AdrianRossouw 1 hour ago 0 replies      
I've been asking a lot of people when MongoDB is actually the right tool for the job.


I'm starting to form this idea of what constitutes an ideal use case for mongo in my head, and i'm trying to prove the model.

If I were to imagine some kind of "realtime" multiplayer game, like quake or something.

1. You have to have the state be shared between all the parties in a reasonable time.

2. The clients only need the data that is directly relating to the round they are in, so you have the concept of cold and hot data.

3. The data is all kind of ephemeral too, so that you don't specifically care about who was on what bouncy pad when, but you do want to know what the kill score/ratio is afterwards.

4. You have a couple of entities that have some kind of lightweight relationship to each other, which makes it just more complex than a key-value store like redis is really suitable for.

5. These entities are sort of a shared state, and thus get updated more often than new unrelated documents get added, and couchdb's ref-counting and append-only nature makes it really unsuited for constant updates of an existing record.

any feedback would be appreciated.

hartator 2 hours ago 1 reply      
It's funny to see this kind of post now and then, predicting the imminent end of MongoDB... and it's been going on for several years now!

MongoDB is here to stay. It's opinionated; PostgreSQL isn't. It's faster out of the box. The client drivers are pretty good. (Don't forget that SQL databases still send raw text as requests and get raw text in return!) It fits the bill for a lot of quick-and-dirty web apps and delivers early performance and arguably scalable performance. Don't get me wrong, I still literally love Postgres.

lucisferre 1 hour ago 0 replies      
The author is conflating (or just ignoring) the very significant difference between application databases and reporting databases. Not surprising since most of us do this as well when we are building applications. However no comparison of the relative value of database schema styles can responsibly ignore this difference.
mathattack 1 hour ago 0 replies      
I think many folks confuse normalization as a strategy with the underlying database technologies. Oracle and other RDBMS technologies can create normalized databases too. In the end it's a design judgment. There is a lot of room between fully normalized and one-big-table. Even firms that logically map things out fully normalized frequently decide that for some things that doesn't make sense.

Taking a step back, there are still reasons to abandon Oracle. It may not scale up, or be good for certain time series calculations, but that's another story entirely.

Yuioup 1 hour ago 0 replies      
In short, I don't expect MongoDB, or any similar product, to spell the end of the relational database.

The author makes it sound like that was a possibility. SQL and NoSQL are two different tools in the toolbox and should be considered as such.

jchrisa 1 hour ago 1 reply      
Maybe the clock is ticking because their large production deployments are migrating to other tech? At least we are seeing plenty of folks who realize that a query API on top of mmap isn't really a database. :)

One high profile migration: http://www.couchbase.com/viber

loftsy 1 hour ago 0 replies      
On document store indexes the article says:

> If all order data is lumped together, the user will be forced to retrieve the entirety of each order that contains a relevant order line - or perhaps even to scan the entire database and examine every order to see whether it contains a relevant order line.

Both of these are untrue. The author needs to read up on secondary indexes.
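A secondary index over embedded order lines is just a map from indexed value to document ids, so a query touches only matching orders rather than scanning them all. (MongoDB itself builds such multikey indexes, e.g. `createIndex` on a field inside an array; the toy below, with made-up orders and SKUs, only illustrates the mechanism in plain Python.)

```python
orders = {
    1: {"lines": [{"sku": "pencil"}, {"sku": "eraser"}]},
    2: {"lines": [{"sku": "pen"}]},
    3: {"lines": [{"sku": "pencil"}]},
}

# Build the secondary index: sku -> set of order ids containing it.
index = {}
for oid, doc in orders.items():
    for line in doc["lines"]:
        index.setdefault(line["sku"], set()).add(oid)

def orders_containing(sku):
    """Look up matching orders via the index; no full scan."""
    return sorted(index.get(sku, set()))

print(orders_containing("pencil"))  # [1, 3]
```

The index must be maintained on every write, which is the same cost a relational database pays for its secondary indexes.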

I, Pencil (1958) econlib.org
61 points by nkurz  9 hours ago   18 comments top 7
haakon 8 hours ago 0 replies      
http://www.fee.org/library/detail/i-pencil-audio-pdf-and-htm... is the 50th anniversary edition with foreword and afterword, and audio and PDF versions.
nateabele 5 hours ago 1 reply      
Here's the tl;dr version, animated, and with a fun soundtrack!


mckoss 2 hours ago 0 replies      
Henry Petroski's excellent book The Pencil[1] uses the same subject to explore the history of the pencil's development into the common artifact we know today.


Houshalter 4 hours ago 0 replies      
Political arguments aside, I think it's a beautiful description of how complex and interdependent our economy is.
torrent-of-ions 6 hours ago 5 replies      
A nice read but I don't agree completely with the final paragraph. The invisible hand uses a greedy algorithm. It does take a mastermind to get close to any kind of optimal solution in many cases. We can't trust the invisible hand to provide healthcare to everyone, nor can we trust it to take into account external costs like pollution etc.
hxa7241 2 hours ago 2 replies      
This is, well, shallow propagandising guff. It promotes an agenda but diverts criticism with mythologising illusion.

Markets fail in various ways. Should we just let that happen, and just have 'faith' in the wonder of 'freedom' etc.? As Stiglitz says, a common reason why 'the invisible hand' is invisible is that it is not actually there.

Piston X86-64 Assembler working in web browser and Node.js pis.to
51 points by Sami_Lehtinen  9 hours ago   15 comments top 10
k4st 3 hours ago 1 reply      
Would be really cool if this gave more explanation for the encodings. For example, showing the opcode, mod/reg/rm, and displacement components is really cool. What would be even cooler is to say why some bit or combination of bits makes this opcode use, for example, the rax register. This would be more an effort of exposing some tables, referencing manuals, etc. but I think it would give people more of an intuition for the encoding. Another thing to consider would be breaking the encoding down into octal digits instead of hex or binary (or maybe mix when appropriate) to give the most clear presentation of the encoding format. I could see this as a great way to lazily learn.
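The octal-digit idea above works because the ModRM byte splits exactly into octal digits: two bits of "mod", three of "reg", three of "rm", so each octal digit is one field. A small sketch (the register table covers only the eight base 64-bit GPRs; REX-extended registers are ignored for simplicity):

```python
# Encoding order of the low eight x86-64 GPRs (the 3-bit reg/rm values).
REGS64 = ["rax", "rcx", "rdx", "rbx", "rsp", "rbp", "rsi", "rdi"]

def decode_modrm(byte):
    """Split a ModRM byte into (mod, reg, rm) fields."""
    mod = byte >> 6           # addressing mode; 0b11 = register direct
    reg = (byte >> 3) & 0b111
    rm = byte & 0b111
    return mod, REGS64[reg], REGS64[rm]

# 0xFB is 0o373: the octal digits 3, 7, 3 are directly
# mod=3 (register direct), reg=7 (rdi), rm=3 (rbx).
print(oct(0xFB), decode_modrm(0xFB))  # 0o373 (3, 'rdi', 'rbx')
```

Reading encodings in octal makes the field boundaries visible in a way hex never does, which is presumably why old x86 references sometimes used it.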
bglazer 1 hour ago 0 replies      
No snark, just curious. What's the point of this? Is it for teaching? It's not a performance thing like asm.js, right?
kyberias 5 hours ago 0 replies      
Instead of writing "working in web browser", I would write "written in CoffeeScript (or JavaScript)".

The demo with live opcode display in the editor is pretty cool!

xhrpost 2 hours ago 0 replies      
Pretty cool. Now, how long until I can take the assembled machine code and run it in an emulated PC that outputs to a canvas screen? :)
acqq 4 hours ago 0 replies      
The demo is in 8086 16-bit code, though. It would be nice to be able to see 32-bit and 64-bit code in the demo too. At least 64-bit is the most interesting to me, as the older ones are the most covered on the internet.
rplnt 7 hours ago 2 replies      
Am I missing a joke here?
kosinus 6 hours ago 0 replies      
Now wire this up with Native Client. :-o
vivek_st 4 hours ago 0 replies      
Awesome! This can come in handy while teaching my Shellcoding class: http://www.pentesteracademy.com/course?id=7
mrsaint 4 hours ago 0 replies      
Neat! So, who is going to write Softice for Node.js next? ;)
pconner 3 hours ago 0 replies      
Does Atwood's Law apply to Node.js?
Node.js cluster versus JXcore multithread stackoverflow.com
18 points by nodefan  5 hours ago   5 comments top 3
Xdes 1 hour ago 0 replies      
JXcore isn't on my radar until it is open sourced with a permissive license.
lunarcave 3 hours ago 1 reply      
You can partially mitigate the risks of single-threaded Node by using clustering (although it's still tagged as an experimental feature) [1].

Also, these worker threads can be made to respawn on a shutdown caused by, let's say, an unhandled exception, as in [2].

[1] http://nodejs.org/api/cluster.html

[2] https://nadeesha.silvrback.com/setting-up-multiple-worker-th...

stephenr 3 hours ago 1 reply      
I'm curious how either of these compare to say Passenger?
CSS Vocabulary pumpula.net
118 points by pasiaj  15 hours ago   32 comments top 13
pamelafox 7 hours ago 0 replies      
Awesome! Just tweeted this out for our GirlDevelopIt students, this is a great review resource.

For those of you looking for alternative ways to learn CSS (or teach it), here are our materials:

* CSS Basics/Layout: http://www.teaching-materials.org/htmlcss-1day/ (Scroll down)

* CSS3: https://dl.dropboxusercontent.com/u/10998095/css3-workshop/i...

...I find that teaching the CSS3 workshop is always a great reminder of the crazy ass selectors you can use in CSS now, and I end up using them way more in the weeks after. And then forgetting again. :-)

bbx 8 hours ago 1 reply      
Very neat and straight to the point. I'm writing a book about CSS, and this comes in very handy considering the vocabulary in CSS is usually misused and lacks precision, especially on Stack Overflow (including by me).
reshambabble 40 minutes ago 0 replies      
This is a really great visual way to learn CSS vocabulary. Love the simple and clear UX/UI. Have you thought about adding pop-ups that explain the exact purpose of each term when you click on it? It could be an interesting way to expand this into a 5-minute self-taught tutorial.
zatkin 13 hours ago 2 replies      
This is really neat. I've been using CSS for several years and have never stumbled upon something that educates in this unique way. Normally you'd only grasp CSS this thoroughly by sifting through a lot of (dull) specification text.

Also, this brings a lot of attention to the fact that CSS relies upon a formal grammar and vocabulary (as the page is titled), which is something you won't often see pointed out.

voltagex_ 14 hours ago 1 reply      
This is a neat and simple idea. I wonder how difficult it would be to add other languages.
rmmw 11 hours ago 0 replies      
This is great! To anyone else delving into CSS, I also found this tutorial on CSS selectors helpful: http://flukeout.github.io/
SquareWheel 10 hours ago 0 replies      
Very neat and intuitive way to learn. I wasn't aware of the difference between "pseudo classes" and "pseudo elements" before.
runarberg 5 hours ago 1 reply      
Really nice. Why did you skip the media query though?

    @media only screen and (min-width: 35em) {
        /* responsive styles */
    }
    @media print {
        /* printed styles */
    }

JazCE 7 hours ago 0 replies      
This is really nice, I might fork it myself and add to it as I'd like to see some more explanation on certain parts, though it would then become something different.
nnq 10 hours ago 2 replies      
Why do they use `::before` instead of `:before`? Is that a typo?
hackaflocka 11 hours ago 0 replies      
Great resource.

Anyone know of something like this for Javascript or PHP?

bowlofpetunias 9 hours ago 0 replies      
For someone with only superficial knowledge of CSS but who regularly has to communicate with front-end devs, this is very very helpful.

A shared vocabulary is so important in any collaboration.

conectorx 13 hours ago 1 reply      
well css is dead so, no, thank you.
Software process and tools for non-tech product owners Part 1 codemancers.com
4 points by emilsoman  1 hour ago   discuss
Improve developer habits by showing time cost of DB queries danbirken.com
128 points by birken  16 hours ago   60 comments top 33
holman 12 hours ago 2 replies      
Love things like this.

Here's a screenshot of an expanded version of our staff bar on GitHub.com:


Most of those numbers are clickable. The graphs button on the left links to a flame graph (https://www.google.com/search?q=flame+graph) of the Ruby calls on the page. The microscope button is a sorted listing, by file, of the CPU time and idle time that went into the page's render. The template number links to a timing breakdown of all the partials that went into the view. The SQL timing links to a breakdown of MySQL queries for that page, calling out N+1 queries, slow queries, and otherwise horrible database decisions. Depending on the page, we'll also have numbers for duration and queries spent in redis, elasticsearch, and gitrpc.

Our main github.com stuff is pretty tied into our stack, but one of our employees, Garrett Bjerkhoel, extracted a lot of this into peek, a very similar implementation of what we have on GitHub. We use peek in a ton of our smaller apps around the company. Here's the org: https://github.com/peek

thejosh 13 hours ago 1 reply      
As much as people like to make fun of PHP, symfony2 has the best development toolbar I've ever seen - the previous winner being symfony1's.
nilkn 13 hours ago 1 reply      
I recently developed a profiler for my company's web application. It uses dynamic introspection to profile (nearly) all function calls on the server side. It automatically picks up any SQL queries and profiles them as well. It's all packaged up in a web interface which can be invoked on any page on the site. You can see the exact queries, sort functions by time or number of calls, etc. It also shows a full reconstruction of the call tree, with individual branches expandable and collapsible.

It was a lot of fun to write and has been just as fun to use. We've found a number of simple changes that led to big performance gains on some of the pages.

Spooky23 11 hours ago 1 reply      
If you do significant work with databases, you should have a DBA.

When I was in that role, I combined education, public humiliation, cajoling and various administrative means to discourage bad database behaviors or optimize databases for necessary workloads. The median developer can barely spell SQL... Adult supervision helps.

Whatever I was making then, I probably recovered 3-5x my income by avoiding needless infrastructure and licensing investments.

kamens 14 hours ago 0 replies      
If you use App Engine you can use https://github.com/kamens/gae_mini_profiler ==> modeled after the Stack Overflow folks' miniprofiler.
baddox 16 hours ago 4 replies      
For Django, there's https://github.com/django-debug-toolbar/django-debug-toolbar .

For Rails, there's https://github.com/josevalim/rails-footnotes.

Forgive/correct me if there are newer or better alternatives.

easy_rider 9 hours ago 1 reply      
What sometimes bothers me is that I have no idea whether a query is slow or not. This has been even more true since I've been developing on Rails with Postgres. With MySQL, bottleneck queries seem to degrade progressively worse - almost exponentially - as the number of records in a set grows. With Postgres this seems to be a lot more linear.

I've just done some crafty work to make use of hstore datatypes in Rails. It stores an array of hashes, whose keys I have not found a way to index yet.

Is this slow or fast, for example? I'm doing a full-text search over all unnested (hstore) array values of a given column in a table with 35k records.

    development=# SELECT "leads".* FROM "leads"
        WHERE ( exists (
            select * from (SELECT svals(unnest("address"))) x(item)
            where x.item LIKE '%mst%'
        ) );
Time: 57.257 ms

I'm used to MySQL, and this kind of query over unindexed records seems fast, but it also seems this might be slow by Postgres standards? :/ Bear in mind: no indices.

lukencode 15 hours ago 2 replies      
The .net world has http://miniprofiler.com/ and http://getglimpse.com/ which work really well.

They will also show you the SQL (useful when it is generated by an ORM) and break down each query's time cost.

mjibson 11 hours ago 0 replies      
MiniProfiler supports this and more. It exists for:

ruby: https://github.com/MiniProfiler/rack-mini-profiler

.net: https://github.com/MiniProfiler/dotnet

go: https://github.com/MiniProfiler/go (w/ support for app engine, revel, martini, and others)

If there is a python dev who wants to port MiniProfiler to python, I would love to help (I'm a MiniProfiler maintainer and did the go port). The UI is all done, you just spit out some JSON and include the js file. Kamens has a good port, but it's not based on this new UI library.

mikzael 10 hours ago 1 reply      
I have the following on localhost at the bottom of every page:


    head in <?php echo number_format($head_microttime, 4); ?> s
    body in <?php echo number_format(microtime(true) - $starttime, 4); ?> s
    <?php echo count($database->run_queries); ?> queries
    <?php if (DEBUG) print_r($database->run_queries); ?>

Example output:


    head in 0.3452 s
    body in 0.7256 s
    32 queries
    Array(
        [/Volumes/data/Sites/najnehnutelnosti/framework/initialize.php25] => SELECT name,value FROM settings
        [/Volumes/data/Sites/najnehnutelnosti/framework/initialize.php45] => SELECT * FROM mod_captcha_control LIMIT 1
        [/Volumes/data/Sites/najnehnutelnosti/framework/class.frontend.php123] => SELECT * FROM pages WHERE page_id = '1'
        [/Volumes/data/Sites/najnehnutelnosti/framework/class.wb.php96] => SELECT publ_start,publ_end FROM sections WHERE page_id = '1'
        [/Volumes/data/Sites/najnehnutelnosti/framework/frontend.functions.php28] => SELECT directory FROM addons WHERE type = 'module' AND function = 'snippet'
    ...etc

The array keys are the file the query originates from, plus the line number, and the value is the query. I made it to track down duplicate queries. The query method in my class.database.php for the above output (alongside `public $run_queries = array();`):

    function query($statement) {
        $mysql = new mysql();
        $mysql->query($statement);
        $backtrace = debug_backtrace();
        $this->run_queries[$backtrace[0]['file'].$backtrace[0]['line']] = $statement;
        $this->set_error($mysql->error());
        if ($mysql->error()) {
            return null;
        } else {
            return $mysql;
        }
    }

AlisdairO 6 hours ago 0 replies      
If you're rolling an SQL timer of your own, be a bit careful. On most relational databases it's quite possible for your first result(s) to come back extremely quickly, but additional processing to be required to retrieve subsequent results - effectively a lazy evaluation. If you measure time to first result you may get an inaccurate picture of the cost of certain classes of query.
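This pitfall is easy to demonstrate with Python's stdlib sqlite3 (the table and row counts here are made-up illustration): measuring only time-to-first-row can badly undercount a query whose cost is in draining the result set.

```python
import sqlite3
import time

# In-memory table big enough that fetching every row takes measurable work.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    ((i, "x" * 100) for i in range(200_000)),
)

start = time.perf_counter()
cur = conn.execute("SELECT id, payload FROM items")
first_row = cur.fetchone()            # first result arrives almost immediately
t_first = time.perf_counter() - start

rows = cur.fetchall()                 # draining the cursor is where the time goes
t_total = time.perf_counter() - start

print("first row after %.2f ms" % (t_first * 1000))
print("all %d rows after %.2f ms" % (len(rows) + 1, t_total * 1000))
```

A timer that stops at the first `fetchone()` would report only `t_first`, so make sure the clock runs until the last row is consumed.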
MBCook 13 hours ago 0 replies      
We've been using New Relic at my work for a few months now, and this capability is incredibly helpful. When a page is slow you can see which of the numerous queries may be at fault (or whether the time is being spent in code, in a bad loop, etc.).
systematical 10 hours ago 1 reply      
This is okay, but what about showing the actual queries? What about showing how long each query took? I wrote some code that logs all this information directly to Chrome's console. Pretty awesome: http://bakery.cakephp.org/articles/systematical/2013/05/08/h...
mchail 16 hours ago 0 replies      
Great tip! For rails devs, I strongly recommend rack-mini-profiler [0]. It gives you an in-browser view of rendering time (which you can expand to break out by partial) with query time listed for each query. Each query also includes a stacktrace to make it easy to fix offensive queries. Great for identifying N+1 query bugs as well. There's a railscast for it, too [1].

[0] https://github.com/MiniProfiler/rack-mini-profiler

[1] http://railscasts.com/episodes/368-miniprofiler

thrownaway2424 13 hours ago 0 replies      
Ironically the destination site is now down. I'll just leave this here: http://research.google.com/pubs/pub36356.html
inconshreveable 14 hours ago 0 replies      
I wrote sqltap not too long ago to solve this problem for users of SQLAlchemy, complete with pretty web UI for visualizing not just the queries, but also where they originated in the application:

It also lets you dynamically enable/disable the introspection so you could run it in production.


edwinnathaniel 11 hours ago 0 replies      
I've been fortunate enough to work for a company that develops performance monitoring, so I always dogfood the latest cutting-edge stuff that we do.

Best part of the performance monitoring is that if the Request spans multiple Servers (load balancer/proxy, web-app, microservices, DB, etc), I can see it as a single transaction so that I can easily track down which Microservice that caused the performance slowness. For example, we have a web-app written in Python and a Microservice written in Java. A single request can end up at the Microservice level and our tool can see truly end-to-end => load balancer to web-app to microservice to database.

On top of that, we also have an app that performs synthetic monitoring where I can set to automate certain User workflow and set it to run every 5 minutes. Combining both performance monitoring and synthetic monitoring give us a leg up on automating performance monitoring!

Disclaimer: I work for AppNeta.

ozh 8 hours ago 1 reply      
For what it's worth, this is easily done if you're coding something with PHP on top of WordPress:

    - add `define( 'SAVEQUERIES', true );` to your wp-config.php
    - add `global $wpdb; var_dump( $wpdb->queries );` at the bottom of your script

X-Istence 14 hours ago 0 replies      
In the debug toolbar, queries made through SQLAlchemy are picked up and logged, and it shows each query as well as how long it took.


The cool thing is that you can also click on EXPLAIN and get back information from the DB directly on what indexes it is using/maybe not using.

Twirrim 11 hours ago 0 replies      
That's a fine and good idea, but don't exclude the front-end time.

SQL servers can be faster places to carry out certain types of operations on data, and it may be quicker to write the right kind of query for the server than to choose a very fast query and then spend time on the front end processing that data.

The ultimate goal is time to eyeballs, not the fastest query time. The best way to achieve that is to measure every stage, not just queries, and to spend time profiling and experimenting with various operations.
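That kind of every-stage measurement can be sketched in a few lines of Python (the stage names and workloads below are made-up stand-ins, not any framework's API), so query time is seen in proportion to everything else:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record wall-clock seconds spent in one named stage of a request."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Hypothetical request: the goal is time-to-eyeballs, so every stage counts.
with stage("query"):
    rows = [(i, i * i) for i in range(1000)]   # stand-in for a DB call
with stage("processing"):
    total = sum(v for _, v in rows)
with stage("render"):
    page = "<p>total: %d</p>" % total

for name, elapsed in timings.items():
    print("%s: %.3f ms" % (name, elapsed * 1000))
```

With a breakdown like this, a "slow" query that turns out to be 5% of total page time stops looking like the first thing to optimize.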

workhere-io 9 hours ago 0 replies      
With Flask + SQLAlchemy you can either set app.config['SQLALCHEMY_ECHO'] to True (which will output SQLAlchemy's queries to the console) or you can use http://flask-debugtoolbar.readthedocs.org/en/latest/
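The same query-timing idea also works without framework support. Here's a sketch (the `TimedConnection` wrapper is a hypothetical name, not a Flask or SQLAlchemy API) that records each statement's elapsed time on a plain DB-API connection, using stdlib sqlite3:

```python
import sqlite3
import time

class TimedConnection:
    """Wrap a DB-API connection and record each statement's elapsed time."""

    def __init__(self, conn):
        self._conn = conn
        self.queries = []  # list of (sql, seconds) pairs

    def execute(self, sql, params=()):
        start = time.perf_counter()
        cur = self._conn.execute(sql, params)
        self.queries.append((sql, time.perf_counter() - start))
        return cur

    def report(self):
        total = sum(seconds for _, seconds in self.queries)
        return "%d queries in %.2f ms" % (len(self.queries), total * 1000)

db = TimedConnection(sqlite3.connect(":memory:"))
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES (?)", ("alice",))
cur = db.execute("SELECT name FROM users")
print(db.report())
```

Printing `db.report()` at the bottom of every page gives essentially the "N queries in X ms" footer the article argues for.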
josegonzalez 14 hours ago 0 replies      
CakePHP has the DebugKit Toolbar[1] which I've ported internally to our symfony application at work (it's quite a bit better than the symfony debugbar).

Example from google: http://blog.japanesetesting.com/wp-content/uploads/2009/11/d...

[1] https://github.com/cakephp/debug_kit/

VintageCool 13 hours ago 0 replies      
New Relic's application monitoring and transaction traces were invaluable for us. For particularly slow transactions it provides a detailed view of the time spent in each function call and each SQL query.
matclayton 15 hours ago 0 replies      
We built Speedbar for Django, which does this as part of its summary information and also provides full page profiles in production.


sergiotapia 16 hours ago 0 replies      
For Rails I use Bullet to warn me about slow N+1 queries.


JoeAcchino 6 hours ago 0 replies      
In addition, I would use a web analytics tool to track the number of queries and the time they took for each page, so I could easily have them as metrics in the pages report.

Next, sort pages by query time and optimize them first.

sfaruque 13 hours ago 1 reply      
Any suggestions for something similar with CodeIgniter or even native PHP?
Aqua_Geek 15 hours ago 0 replies      
For Rails apps I use Peek: https://github.com/peek/peek
NickSharp 7 hours ago 0 replies      
Maatkit is great for profiling your MySQL queries in production. http://www.maatkit.org/
joshribakoff 15 hours ago 1 reply      
My tip is to watch the queries "live": just enable the "general query log" and `tail -f` the log file.
ClifReeder 12 hours ago 1 reply      
More often than not, the slowness end users experience comes from problems in the front end - not the database. Think uncompressed assets, cache misses, or unused javascript and stylesheets. It's very likely most pages in your application are (or should be) served entirely from cache, so end users will benefit much more from front-end optimizations.
Sindrome 15 hours ago 0 replies      
I use mini-profiler from the guys @ Stack Overflow
patrickxb 12 hours ago 0 replies      
How about starting with a blog that doesn't need a database, so it can survive the HN front page? Jekyll + S3 works.
       cached 17 April 2014 16:02:01 GMT