hacker news with inline top comments    20 Nov 2014
Senator Al Franken's Letter to Uber [pdf]
181 points by kyledrake  4 hours ago   79 comments top 16
NamTaf 2 hours ago 2 replies      
I note that many of the questions are around the processes, training, rules, etc. that govern the use of technology developed by Uber. This is important.

Systems Engineering teaches the concept of POSTED: People, Organisation, Support, Training, Equipment and Doctrine. When a system is developed, it must give consideration to all of these aspects. Failing to do so means you design an incomplete system.

In this case, Uber has developed a piece of Equipment, their God Mode view. Franken's asking about the other pieces of the system, such as the training, support, doctrine and people. These are equally important to design, document and implement. Failing to give due consideration to these aspects of the system is no different from having developed an incomplete equipment solution. I'm interested to see whether Uber gave due consideration to these aspects of the system.

There's something to be said about startups moving fast to develop technology but not necessarily the other aspects of a complete system. Mature systems engineering / software development firms do this day in and day out. Yes, it can lead to slower iteration on the core technology and capabilities, but it is critically important to consider. I suspect it's often a pinch point when start-ups try to scale, for example when a piece of technology then needs to consider user access rights, etc.

DigitalSea 1 hour ago 1 reply      
I actually agree with the letter for the most part. Yes, Uber are not the only company out there with troves of data that are most likely being abused without anyone noticing, but it is just unfortunate that the spotlight is currently on Uber because of the silly words of a man who, given his position, should know better than to say such stupid things in public (on or off the record, it doesn't matter). He worked as an advisor to the White House; he should know the importance of holding your tongue.

Not to mention all of the lobbyist pressure Uber is experiencing on the business side of things: this is the kind of stuff taxi driver unions and companies/entities threatened by Uber's business model can only dream of getting. They are not doing themselves any favours here.

It seems that Uber have well and truly put their foot in it this time, more than with any of the other controversies and scandals that have involved the company. And yet, after all of this, Emil Michael gets to keep his job? Seems to me the only way Uber can start to make amends and repair their broken image here is to make some effort and fire Emil.

A_COMPUTER 2 hours ago 5 replies      
So, I completely agree with this letter, but I felt a sense of unease reading it. I realized why: other companies are as bad or worse about privacy, so where are all those letters? I wish there were a lot more of these, and I wonder if the only reason this one exists is that the blowup with Uber was so visible and so it can be used for political posturing.
mikeyouse 3 hours ago 0 replies      
Note that Franken wrote this as the Chairman of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The committee's mandate / scope:

    Jurisdiction: (1) Oversight of laws and policies governing the
    collection, protection, use and dissemination of commercial
    information by the private sector, including online behavioral
    advertising, privacy within social networking websites and other
    online privacy issues; (2) Enforcement and implementation of
    commercial information privacy laws and policies; (3) Use of
    technology by the private sector to protect privacy, enhance
    transparency and encourage innovation; (4) Privacy standards for
    the collection, retention, use and dissemination of personally
    identifiable commercial information; and (5) Privacy implications
    of new or emerging technologies.

leephillips 2 hours ago 2 replies      
Franken supported the NSA as the surveillance scandal broke: "I can assure you, this is not about spying on the American people." [http://www.nationaljournal.com/congress/the-nsa-has-at-least...]. I thought it was relevant.
debacle 2 hours ago 2 replies      
It seems like a lot of things are coming home to roost right now for Uber. As with many things, I think that Uber is the first player in an arena where the second player tends to survive (because they learn from all the first player's mistakes for free, the groundwork is laid, etc).

The barrier to entry for an Uber competitor is quite low and trust is the only thing that keeps Uber afloat. If they don't realize that and act accordingly, they could die relatively quickly.

tomcam 3 hours ago 3 replies      
I dislike Al Franken's politics greatly, but I think it's every bit his right to use the bully pulpit this way (I mean this sincerely). If Uber did track a journalist it is reprehensible.

But how could Uber benefit from answering this fishing expedition? If I were Uber I'd simply stonewall. They are under no legal obligation to answer, and virtually no answer would help them.

softdev12 19 minutes ago 1 reply      
Wow. I'm interested in seeing how Uber responds to this. I had no idea that Al Franken was now the chairman of the privacy subcommittee. For those not in the United States, Al Franken used to be a popular comedian on the show Saturday Night Live. His biggest hit was this character Stuart Smalley: https://www.youtube.com/watch?v=uYPc-dPVbow
Aeolun 1 hour ago 2 replies      
Why is everyone so surprised that Uber is tracking information about their users, or that employees have access to that information?

I just think it's incredibly naive to think that they wouldn't use it in any way possible.

mxpxrocks10 1 hour ago 0 replies      
best part of the letter - two words I thought I'd never hear together: "BuzzFeed reported"
AndrewKemendo 2 hours ago 3 replies      
What are the rules here for Uber? Are they compelled to respond or can they just ignore this letter and move on?
closetnerd 6 minutes ago 0 replies      
He's a goddamn hypocritical moron.
downandout 1 hour ago 3 replies      
This is officially the most blown-out-of-proportion story in the history of the Internet. When was the last time the US Senate got involved after you went on a rant at a party? I'm certain that Sarah Lacy is enjoying both the attention and the money from pageviews, but this is getting ridiculous.
marincounty 1 hour ago 1 reply      
Yes--the Boober made some asinine statements. Let him dig his way out of his utter foolishness. Now to Uber. I was shocked at what they expect you to buy and drive in order to become an Uber driver. Nothing less than a 2008 vehicle? And the list of acceptable cars? After insurance it just doesn't add up--unless you're sleeping in your car, or you're in a brand spanking new market.

In all reality, maybe he said these things just to get the free advertising? It's too bad it's come to this. That said, is there any freeware Uber/Lyft type code floating around? Just curious.

dreamdu5t 48 minutes ago 0 replies      
Oh, this bread is so tasty and the clowns are so funny! Thanks Al Franken!
omgitstom 2 hours ago 1 reply      
Publicity stunt. When has he ever been worried about privacy in relation to his stance for the NSA?


Spooky Alignment of Quasars Across Billions of Light-years
66 points by Zomad  2 hours ago   14 comments top 6
hyperion2010 38 minutes ago 0 replies      
If we wind the clock back far enough, couldn't we explain this if the original matter that went on to form the black holes originated from blobs of matter that were affected by the same local forces? Then we just wait long enough, and things that were next to each other in the distant past now reside along the dark matter filaments. Given the angular momentum of these suckers, I'd guess that it is pretty hard to significantly change their axis of rotation even over a couple billion years.
incision 1 hour ago 0 replies      
>"The new VLT results indicate that the rotation axes of the quasars tend to be parallel to the large-scale structures in which they find themselves. So, if the quasars are in a long filament then the spins of the central black holes will point along the filament."

Though I have no real idea what I'm talking about...

This feels intuitive to my mental picture of the universe.

The description of this large scale structure and the expansion of the universe has always put me in mind of watching the patterns form and reform from drips in a soapy sink or an elastic fabric being pulled apart.

In both cases, you end up with these big expanses bordered by dense stringy areas. That the motion of the stuff that snaps / shears / collapses or whatever into these strings and knots would be aligned seems perfectly logical.

anigbrowl 2 hours ago 0 replies      
"The first odd thing we noticed was that some of the quasars' rotation axes were aligned with each other despite the fact that these quasars are separated by billions of light-years," said Hutsemékers.

This seems like it might be a breakthrough result.

deckar01 1 hour ago 0 replies      
Out of "93 quasars", "19 of them found a significantly polarized signal." "Results indicate that the rotation axes of the quasars tend to be parallel to the large-scale structures in which they find themselves."

Do quasars that aren't parallel to their large-scale structures not have a significantly polarized signal? Maybe interference from the structure or a weaker signal because of their alignment?

mrfusion 44 minutes ago 2 replies      
Dumb question alert. Could an advanced civilization have done this? Perhaps to collect power?
carsongross 1 hour ago 2 replies      
Another severe blow to the cosmological principle.
Node.js in Flame Graphs
667 points by stoey  12 hours ago   210 comments top 40
ChuckMcM 9 hours ago 10 replies      
The money quote:

"We made incorrect assumptions about the Express.js API without digging further into its code base. As a result, our misuse of the Express.js API was the ultimate root cause of our performance issue."

This situation is my biggest challenge with software these days. The advice to "just use FooMumbleAPI!" is rampant and yet the quality of the implemented APIs and the amount of review they have had varies all over the map. Consequently any decision to use such an API seems to require one first read and review the entire implementation of the API, otherwise you get the experience that NetFlix had. That is made worse by good APIs where you spend all that time reviewing them only to note they are well written, but each version which could have not so clued in people committing changes might need another review. So you can't just leave it there. And when you find the 'bad' ones, you can send a note to the project (which can respond anywhere from "great, thanks for the review!" to "if you don't like it why not send us a pull request with what you think is a better version.")

What this means in practice is that companies that use open source extensively in their operations become slower and slower to innovate, as they are carrying the weight of a thousand different systems of checks on code quality and robustness, while people using closed source will start delivering faster and faster, as they effectively partition the review/quality question off to the person selling them the software and focus on their product innovation.

There was an interesting, if unwitting, simulation of this going on inside Google when I left, where people could check in changes to the code base that would have huge impacts across the company, causing other projects to slow to a halt (in terms of their own goals) while they ported to the new way of doing things. In this future world, changes like the recently hotly debated systemd change will incur costs while the users of the systems stop to re-implement in the new context, and there isn't anything to prevent them from paying this cost again and again. A particularly Machiavellian proprietary source vendor might fund programmers to create disruptive changes to expressly inflict such costs on their non-customers.

I know, too tin hat, but it is what I see coming.

thedufer 11 hours ago 7 replies      
> It's unclear why Express.js chose not to use a constant time data structure like a map to store its handlers.

It's actually quite clear - most routes are defined by a regex rather than a string, so there is no built-in structure (if there's a way at all) to do O(1) lookups in the routing table. A router that only allowed string route definitions would be faster but far less useful.

I can't explain away the recursion, though. That seems wholly unnecessary.

Edit: Actually, I figured that out, too. You can put middleware in a router so it only runs on certain URL patterns. The only difference between a normal route handler and a middleware function is that a middleware function uses the third argument (an optional callback) and calls it when done to allow the route matcher to continue through the routes array. This can be asynchronous (thus the callback), so the router has to recurse through the routes array instead of looping.
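The mechanism described above fits in a few lines. This is a minimal, hypothetical sketch of array-based dispatch with an async-capable `next()` callback, illustrative only and not Express's actual source:

```javascript
// Minimal sketch of array-based route dispatch with an async-capable
// next() callback, in the style described above (hypothetical, not the
// real Express implementation).
function dispatch(layers, path, done) {
  let i = 0;
  function next() {
    if (i >= layers.length) return done(new Error('no route matched'));
    const layer = layers[i++];
    if (!layer.pattern.test(path)) return next(); // skip non-matching layers
    // A middleware layer may call next() later, even asynchronously,
    // which is why the matcher must recurse rather than loop.
    layer.handle(path, next, done);
  }
  next();
}

// Usage: a middleware that defers asynchronously, then a route handler.
const layers = [
  { pattern: /^\//, handle: (path, next) => setImmediate(next) },
  { pattern: /^\/users$/, handle: (path, next, done) => done(null, 'users page') },
];
dispatch(layers, '/users', (err, result) => {
  console.log(result); // 'users page'
});
```

Because the first layer completes via `setImmediate`, a plain `for` loop over the array could not wait for it; the recursion through `next()` is what accommodates asynchronous middleware.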

clebio 16 minutes ago 0 replies      
> I can't imagine how we would have solved this problem without being able to sample Node.js stacks and visualize them with flame graphs.

This has me scratching my head. The diagrams are pretty, maybe, but I can't read the process calls from them (the words are truncated because the graphs are too narrow). And I can't see, visually, which calls are repeated. They're stacked, not grouped, and the color palette is quite narrow (color brewer might help here?).

At least, I _can_ imagine how you could characterize this problem without novel eye-candy. Use histograms. Count repeated calls to each method and sort descending. Sampling is only necessary if you've got -- really, truly, got -- big data (which Netflix probably does), but I don't think the author means 'sample' in a statistical sense. It sounds more like 'instrumentation', decorating the function calls to produce additional debugging information. Either way, once you have that, there are various common ways to isolate performance bottlenecks. Few of which probably require visual graphs.

There's also various lesser inefficiencies in the flame graphs: is it useful (non-obvious) that every call is a child of `node`, `node::Start`, `uv_run`, etc.? Vertical real-estate might be put to better use with a log-scale? Etcetera, etc.
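The histogram alternative is simple to sketch. Assuming sampled stacks arrive as arrays of frame names (an invented format, purely for illustration), counting and sorting looks like this:

```javascript
// Sketch of the histogram alternative to flame graphs: count how often
// each function appears across sampled stacks, then sort descending.
// The sample format here is invented for illustration.
function frameHistogram(samples) {
  const counts = new Map();
  for (const stack of samples) {
    for (const frame of stack) {
      counts.set(frame, (counts.get(frame) || 0) + 1);
    }
  }
  // Sort [frame, count] pairs by count, descending.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

const samples = [
  ['node::Start', 'uv_run', 'router.handle', 'trim_prefix'],
  ['node::Start', 'uv_run', 'router.handle', 'trim_prefix'],
  ['node::Start', 'uv_run', 'gc'],
];
console.log(frameHistogram(samples));
```

What this loses relative to a flame graph is the parent/child context: the histogram says `trim_prefix` is hot but not which call path made it hot, which is the case for keeping the stacks.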

rwaldin 10 hours ago 0 replies      
I'm surprised nobody has mentioned that Express has a built-in mechanism for sublinear matching against the entire list of application routes. All you have to do is nest Routers (http://expressjs.com/4x/api.html#router) based on URL path segments and you will reduce the overall complexity of matching a particular route from O(n) to near O(log n).
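The effect of nesting can be shown without Express itself. A plain-JS sketch of the idea (not Express's Router code): partition routes by their first path segment so a lookup scans only one bucket instead of the whole list.

```javascript
// Sketch: partitioning routes by first path segment. A flat router scans
// every route; a partitioned one scans only the matching bucket, which is
// the effect of nesting Routers by path prefix.
function buildPartitioned(routes) {
  const buckets = new Map();
  for (const route of routes) {
    const seg = route.split('/')[1]; // first path segment
    if (!buckets.has(seg)) buckets.set(seg, []);
    buckets.get(seg).push(route);
  }
  return buckets;
}

function lookup(buckets, path) {
  const bucket = buckets.get(path.split('/')[1]) || [];
  let comparisons = 0;
  for (const route of bucket) {
    comparisons++;
    if (route === path) return { route, comparisons };
  }
  return { route: null, comparisons };
}

const routes = ['/users/list', '/users/detail', '/projects/list', '/projects/detail'];
const buckets = buildPartitioned(routes);
console.log(lookup(buckets, '/projects/detail'));
// scans only the 2 '/projects' routes, not all 4
```

With real Express Routers the buckets also carry their own middleware and regex matching, but the scan-reduction principle is the same.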
elwell 11 hours ago 2 replies      
TIL: SVGs can display labels on element hover: http://cdn.nflximg.com/ffe/siteui/blog/yunong/200mins.svg

Nice, contained way to show data like this.

remon 11 hours ago 4 replies      
I wonder what the thought process was behind moving their web service stack (partially?) to node.js in the first place. For a company with the scale and resources of Netflix it's not exactly an obvious choice.
tjholowaychuk 2 hours ago 0 replies      
Sounds like a documentation issue, or lack of a staging environment. I've written and maintained countless large Express applications and routing was never even remotely a bottleneck, thus the simple & flexible linear lookup. I believe we had an issue or two open for quite a while in case anyone wanted to report real use-cases that performed poorly.

Possibly worth mentioning, but there's really nothing stopping people from adding dtrace support to Express, it could easily be done with middleware. Switching frameworks seems a little heavy-handed for something that could have been a 20 minute npm module.

vkjv 11 hours ago 3 replies      
> ...as well as increasing the Node.js heap size to 32Gb.

> ...also saw that the process's heap size stayed fairly constant at around 1.2 Gb.

This is because 1.2 GB is the max allowed heap size in v8. Increasing beyond this value has no effect.

> ...It's unclear why Express.js chose not to use a constant time data structure like a map to store its handlers.

It is non-trivial (not possible?) to do this in O(1) for routes that use matching / wildcards, etc. This optimization would only be possible for simple routes.

wpietri 10 hours ago 2 replies      
From the article:

> What did we learn from this harrowing experience? First, we need to fully understand our dependencies before putting them into production.

Is that the lesson to learn? That scares me, because a) it's impossible, and b) it lengthens the feedback loop, decreasing systemic ability to learn.

The lesson I'd learn from that would be something like "Roll new code out gradually and heavily monitor changes in the performance envelope."

Basically, I think the approach of trying to reduce mean time between failure is self-limiting, because failure is how you learn. I think the right way forward for software is to focus on reducing incident impact and mean time to recovery.

TheLoneWolfling 11 hours ago 1 reply      
> benchmarking revealed merely iterating through each of these handler instances cost about 1 ms of CPU time

1ms / entry? What is it doing that it's spending 3 million cycles on a single path check?

jaytaylor 1 hour ago 0 replies      
I am upset that the title has been changed from "Node.js in Flames", which is not only the real title of the article but also a reasonable description of what they've been facing with Node.


drderidder 11 hours ago 0 replies      

    > our misuse of the Express.js API was the ultimate root cause of our performance issue
That's unfortunate. Restify is a nice framework too, but mistakes can be made with any of them. StrongLoop has a post comparing Express, Restify, hapi and LoopBack for building REST APIs, for anyone interested. http://strongloop.com/strongblog/compare-express-restify-hap...

Fishrock123 4 hours ago 0 replies      
I would like to mention that Netflix could have consulted the express maintainers (us) but didn't.

Source: myself - https://github.com/strongloop/express/pull/2237#issuecomment...

ecaron 11 hours ago 2 replies      
My biggest takeaway from this article is that Netflix is moving from Express to Restify, and I look forward to watching the massive uptick this has on https://github.com/mcavage/node-restify/graphs/contributors
_Marak_ 7 hours ago 0 replies      
I read:

"This turned out be caused by a periodic (10/hour) function in our code. The main purpose of this was to refresh our route handlers from an external source. This was implemented by deleting old handlers and adding new ones to the array"

"refresh our route handlers from an external source"

This is not something that should be done in a live process. If you are updating the state of the node, you should be creating a new node and killing the old one.

Aside from hitting a somewhat obvious misbehavior by messing with the state of Express in a running process, once you have introduced the idea of programmatically putting state into your running node you have seriously impeded the ability to create a stateless, fault-tolerant distributed system.

augustl 10 hours ago 1 reply      
A surprising amount of path recognizers are O(n). Paths/routes are a great fit for radix trees, since there's typically repetitions, like /projects, /projects/1, and /projects/1/todos. The performance is O(log n).

I built one for Java: https://github.com/augustl/path-travel-agent
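A segment-keyed trie is only a few lines in JavaScript. This is a rough sketch of the structure described (the linked project is Java; the names below are made up), showing how shared prefixes like /projects and /projects/1/todos are stored once:

```javascript
// Sketch of a segment trie for route lookup: lookup walks one node per
// path segment instead of scanning every registered route.
class PathTrie {
  constructor() {
    this.children = new Map();
    this.handler = null;
  }
  insert(path, handler) {
    let node = this;
    for (const seg of path.split('/').filter(Boolean)) {
      if (!node.children.has(seg)) node.children.set(seg, new PathTrie());
      node = node.children.get(seg);
    }
    node.handler = handler;
  }
  find(path) {
    let node = this;
    for (const seg of path.split('/').filter(Boolean)) {
      node = node.children.get(seg);
      if (!node) return null; // no route under this prefix
    }
    return node.handler;
  }
}

const trie = new PathTrie();
trie.insert('/projects', 'listProjects');
trie.insert('/projects/1/todos', 'listTodos');
console.log(trie.find('/projects/1/todos')); // 'listTodos'
console.log(trie.find('/missing')); // null
```

Strictly, lookup cost here is proportional to the number of path segments rather than log n, but either way it avoids the O(n) scan of a flat route array. Parameterized segments (`:id`) would need a wildcard child per node, omitted here for brevity.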

forrestthewoods 11 hours ago 2 replies      
If I had to pick one line to highlight (not to criticize, but was a wise lesson worth sharing) it would be this one:

"First, we need to fully understand our dependencies before putting them into production."

degobah 8 hours ago 0 replies      

* Netflix had a bug in their code.

* But Express.js should throw an error when multiple route handlers are given identical paths.

* Also, Express.js should use a different data structure to store route handlers. EDIT: HN commenters disagree.

* node.js CPU Flame Graphs (http://www.brendangregg.com/blog/2014-09-17/node-flame-graph...) are awesome!

codelucas 10 hours ago 3 replies      
> This turned out be caused by a periodic (10/hour) function in our code. The main purpose of this was to refresh our route handlers from an external source. This was implemented by deleting old handlers and adding new ones to the array. Unfortunately, it was also inadvertently adding a static route handler with the same path each time it ran.

I don't understand the need to refresh route handlers. Could someone explain why they needed to do this, and also why from an external source?
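Whatever the reason for the refresh, the bug itself is easy to reproduce in miniature. A hypothetical refresh function (invented here to mirror the article's description) rebuilds the dynamic routes but unconditionally re-appends the static handler, so the array grows by one entry per run:

```javascript
// Miniature, hypothetical reproduction of the bug the article describes:
// each periodic refresh deletes and re-adds the dynamic routes but also
// appends a fresh copy of the static handler every time.
const routes = [];

function refreshRoutes(dynamicRoutes) {
  // Remove old dynamic handlers...
  for (let i = routes.length - 1; i >= 0; i--) {
    if (routes[i].dynamic) routes.splice(i, 1);
  }
  // ...re-add the current ones...
  for (const path of dynamicRoutes) routes.push({ path, dynamic: true });
  // BUG: the static handler is appended unconditionally, so one duplicate
  // accumulates per refresh (10/hour in the article's account).
  routes.push({ path: '/static/*', dynamic: false });
}

for (let i = 0; i < 24 * 10; i++) refreshRoutes(['/a', '/b']); // one day of refreshes
console.log(routes.length); // 2 dynamic + 240 duplicate static entries = 242
```

Every request then walks this ever-growing array, which is how a slow leak of handlers turns into steadily rising per-request latency.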

bcoates 7 hours ago 0 replies      
It's not just the extra lookups -- static in express is deceptively dog-slow. For every request it processes, it stats every filename that might satisfy the URL. This results in an enormous amount of useless syscall/IO overhead. This bit me pretty hard on a high-throughput webservice endpoint with an unnoticed extra static middleware. I wound up catching it with the excellent NodeTime service.

Now that I look at it, there's a TOCTOU bug on the fstat/open callback, too: https://github.com/tj/send/blob/master/index.js#L570-L605

This should be doing open-then-fstat, not stat-then-open.

ajsharma 11 hours ago 1 reply      
This is the first I've heard of restify, but it seems like a useful framework for the main focus of most Node developers I know, which is to replace an API rather than a web application.
hardwaresofton 8 hours ago 0 replies      
Responses are already firing in: https://news.ycombinator.com/item?id=8632220
revelation 5 hours ago 1 reply      
Crazy talk. In 1ms, I can perspective transform a moderately big image. Node.js can't iterate through a list.

We really need a 60 fps equivalent for web stuff. You have 16ms, that's it.

drinchev 7 hours ago 0 replies      
The Node.js project already has a similar issue open about recursive route matching.


pm90 10 hours ago 0 replies      
I love these kinds of investigations into problems in production. I mean, you really have to admire their determination in getting to the root of the problem.

In some ways, these engineers are not that different from academic researchers, in that they are devising experiments, verifying techniques, all in the pursuit of the question: why?

sysk 9 hours ago 0 replies      
> We also saw that the process's heap size stayed fairly constant at around 1.2 Gb.

> Something was adding the same Express.js provided static route handler 10 times an hour.

Why didn't it increase the heap size? Maybe it was too small to be noticeable?

hit8run 10 hours ago 0 replies      
I would have written my APIs in Go and not Node.js. Go is way faster in my experience, and it feels leaner to create something because a web service can be productively built out of the box. Node apps tend to depend on thousands of third-party dependencies, which makes the whole thing feel fragile to me.
bentcorner 11 hours ago 0 replies      
Interesting article. I have a lot of experience dealing with ETLs in WPA on the Windows side - it's an awesome tool that gives you similar insights. I haven't used it for looking at javascript stacks before though, so I don't know if it'll do that.
BradRuderman 11 hours ago 1 reply      
Why are they loading in routes from an external source? Is that normal? I have never seen that before.
pcl 10 hours ago 0 replies      
> Second, given a performance problem, observability is of the utmost importance

I couldn't agree with this more. Understanding where time is being spent and where pools etc. are being consumed is critical in these sorts of exercises.

dmitrygr 10 hours ago 3 replies      
So the lesson is to actually know the code you deploy to prod? Is that not obvious?
debacle 11 hours ago 1 reply      
Doesn't this seem like a bug in the express router? All of the additional routes in the array are dead (can't be routed to).
Pharohbot 11 hours ago 0 replies      
I wonder how Netflix would perform with using Dart with the DartVM. I reckon it would be faster than Node based on benchmarks I've seen. Chrome DartVM support is right around the corner ;)
coldcode 10 hours ago 0 replies      
I must admit I could enjoy just doing this type of analysis all day long. Yet I hate non-computing puzzles.
exratione 9 hours ago 0 replies      
The express router array is pretty easy to abuse, it's true. For example, as something you probably shouldn't ever do:


I guess the Netflix situation is one of those that doesn't occur in most common usage; certainly dynamically updating the routes in live processes versus just redeploying the process containers hadn't occurred to me as a way to go.

qodeninja 11 hours ago 0 replies      
wow. I love that Netflix is using Node and am even more curious that they would use Express.
notastartup 7 hours ago 0 replies      
this is why you stick to tried and true methods folks. this is such a typical node.js fanboy mentality. "reinventing the wheels is justified because asynchronous". or "i want this trendy way to do things just because everyone else is jumping on the bandwagon".

Give me flask + uwsgi + nginx any day.

talkingtab 10 hours ago 1 reply      
An unfortunate title. Ha ha "flames", ha ha "Node.js", but the article is really about Express. Not so "ha ha".
general_failure 11 hours ago 0 replies      
A very good reason to go with Express is TJ. He was the initial author of Express and he is quite brilliant when it comes to code quality. Of course, TJ is no longer part of the community, but his legacy lives on :-)
gadders 12 hours ago 3 replies      
OFFTOPIC: "Today, I want to share some recent learnings from performance tuning this new application stack."

The word you want is "lessons".

Understanding the iPhone 6 Plus Screen
41 points by elo  4 hours ago   7 comments top 3
userbinator 1 hour ago 1 reply      
Does anyone wonder what's with the odd 414x736 resolution? I can't say I've ever heard of that resolution before; 360x640 at 3x would fit the 1080x1920 native resolution perfectly, as would 540x960 at 2x. Apple's "Think Different" mentality at play here?
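For what it's worth, the arithmetic: the 6 Plus renders UIKit points at 3x into a 1242x2208 buffer and then downsamples to the 1080x1920 panel, so the scale factor is uniform in both dimensions (the downsampling pipeline is documented Apple behavior; the numbers below are just the multiplication):

```javascript
// iPhone 6 Plus: 414x736 points rendered at 3x, downsampled to the panel.
const points = { w: 414, h: 736 };
const rendered = { w: points.w * 3, h: points.h * 3 }; // 1242 x 2208
const panel = { w: 1080, h: 1920 };
console.log(rendered); // { w: 1242, h: 2208 }
// Both axes shrink by the same ratio (1080/1242 === 1920/2208 === 20/23),
// so the downsample introduces no aspect distortion.
console.log(panel.w / rendered.w);
```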
0x0 1 hour ago 1 reply      
Would it take a jailbreak to trick uikit into rendering at native resolution?

What would be the downside? Larger UI elements and less available space on the screen? Could a @2x instead of @3x mode work or would that result in super tiny "bad hidpi" UI?

TillE 1 hour ago 1 reply      
The scaling makes sense for legacy apps, but I can't understand why they don't present a 1920x1080 screen for everything else.

Hasn't iOS had tools for building resolution-independent apps ever since the iPhone 5 was released?

Before Snowden, a debate inside NSA
37 points by cgtyoder  5 hours ago   4 comments top 3
lylebarrere 1 hour ago 0 replies      
This only shows that Snowden was correct that internal channels were not effective at providing real oversight.
contingencies 37 minutes ago 0 replies      
We know from previous whistleblowers the most likely response to raising concerns is being sidelined or punished.
larakerns 1 hour ago 1 reply      
NSA employees are so siloed from each other that it limits dissent and self-auditing.
Surviving the Series A Crunch
95 points by sethbannon  4 hours ago   32 comments top 11
AndrewKemendo 1 hour ago 0 replies      
> any entrepreneur able to build a prototype can get an idea funded

Except this is not true, and I don't know where people are getting this idea. Maybe it's just SV or YC that has money, but angels and VCs in D.C. and NY, at least, are looking for solid traction before even bringing up the words "term sheet".

At a recent Cooley pitch event a friend of mine who is already revenue positive came up from Ohio to pitch his startup. He told me not a single investor had followed up with him about a term sheet despite multiple discussions after the pitch. Another friend in CO is in the same boat, being revenue positive but with no interest from angels or anyone else.

I think stating that anyone can get money is a dangerous thing to say because it gives the wrong impression about availability of dollars. That post about things being frothy is so insanely different than the reality here on the east coast that it's staggering and totally unbelievable (not saying that I don't believe it by the way).

peterjancelis 3 hours ago 5 replies      
If the founder can keep the $90K cost constant and keep the revenue growing at 9% per month, there is only a $10K shortage by month 7 (and cash runs out in month 6 only). This can easily be solved with some annual prepayments.

Source: https://docs.google.com/spreadsheets/d/1RSHx9pwrSKfOlUr2jyqK...
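The spreadsheet's arithmetic can be rechecked in a few lines, assuming $90K/month costs held constant and $50K starting revenue growing 9% monthly (this is my own recomputation under those assumptions, not the linked sheet):

```javascript
// Recomputing the runway arithmetic: $90K/month costs, $50K revenue
// in month 1, revenue growing 9% month over month.
const costs = 90;   // $K per month, held constant
let revenue = 50;   // $K in month 1
let month = 1;
while (revenue < costs) {
  revenue *= 1.09;
  month++;
}
console.log(month);   // revenue first covers costs in month 8
console.log(revenue); // ~91.4
```

At that growth rate the monthly gap shrinks from $40K to roughly break-even over seven months, which is why pulling some annual prepayments forward can plausibly cover the cumulative shortfall.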

debacle 2 hours ago 0 replies      
So the plan of action when you can't raise more money is "Try pretty much everything and hope it works?"

I realize you have to provide advice that might apply to all startups, but in this specific case cutting the burn to 60k (maybe losing a bit of equity to keep employees happy) and trying really hard to hit 60k in revenue and bam, you're suddenly at breakeven and your runway can start to grow, you can breathe, etc.

Raising money is always good, but it's hard, and if you've received a lot of nos, it's not going to get easier. Planning for the acquihire is pretty much admitting defeat.

asanwal 3 hours ago 1 reply      
Great advice and solid post.

Some data on how long companies typically wait between Seed and Series A rounds. The median is 349 days so raising for 18 months is smart.


A couple of other notes:

- Might want to look at revenue-based financing. More of a debt instrument but if you can't raise or are getting bent over by equity investors, it is an option.

- I wish the "funding is required to grow quickly" meme would die. Our company is revenue-funded and growing at a very good clip. If you can make your customers your de facto "investors", life can be very good.

nodesocket 3 hours ago 1 reply      
While $50,000 a month is nice revenue, a growth rate of 9% month over month does seem quite low. Was the majority of the $50,000 front-loaded? I'd really be curious to hear what market they are in? Seems hard to believe given their pedigree (YC), monthly revenue 50K, and lean team (8 people), they can't find a VC to bite.

If they are charging monthly, what about blasting all paying customers with an upgrade to yearly promotion (20% off). That would bring in a lump sum of cash upfront which should provide additional runway.

3pt14159 3 hours ago 0 replies      
Haha, I know so many people in this boat. I even know of one startup that was walking the line between B2B and B2C, and just temporarily made a slight turn to B2C to see some easy (non-paying) user growth just to raise the A, only to go back to the B2B route that they are confident will succeed in the long run. I'm sure this story is not going to be popular amongst the investor crowd, but it's much harder to raise an A once there are actual margins involved.
michaelochurch 2 hours ago 2 replies      
The real problem is that this game is controlled by ADD children who can't differentiate between 9% monthly growth in something of quality and 15-30% "viral" monthly growth in Snapchat-for-cats (the original Snapchat was idiotic, and give cats some credit because they have way better taste than sexting tweens) bullshit.
yesimahuman 3 hours ago 0 replies      
Really great post, and not just for founders but for early stage investors as well. This is definitely why we are seeing so many more bridge rounds, but I'm not sure if those are really as bad as they've been made out to be, or just a new reality (for example, I know startups that have done a "damaging" bridge round only to then raise a huge A).

Either way, even more reason for founders to bootstrap for longer if they haven't hit that huge growth curve yet, or just bootstrap forever!

dchuk 3 hours ago 1 reply      
If anyone from 42Floors is reading this: your link to your own site with the anchor text "office space" is a relative link and therefore broken:

<a href="42floors.com">office space</a>

You need to make that http://42floors.com or it's not going to work for users (and search engines ;) )

cykho 4 hours ago 0 replies      
This is a real wakeup call - raise bigger seed rounds to last until you're a breakout success.
notastartup 3 hours ago 0 replies      
I don't get why you have 8 engineers when you can't afford it. Does raising money give you a false sense of optimism?

I've seen a startup that just broke even year after year for the past 6 years. They kept increasing revenue for the sake of a higher valuation, but what ended up happening was that it created a toxic working environment and the highest attrition rate around (because they simply fire people and replace them). Eventually a year came around when they started bleeding, and eventually the founding members were fired. Now the company is getting outdone by the competition, A-list clients have jumped ship, and business is dwindling.

Brands Are Wasting Money on Facebook and Twitter, Forrester Says
41 points by selmnoo  2 hours ago   10 comments top 7
_almosnow 3 minutes ago 0 replies      
I found this out very early, a few years ago when their ad platform was barely launching for everyone. I had a digital magazine, and overall it seemed like a good deal: send me people interested in music from these countries at very low CPCs. Nice. In total, I may have spent around $1200 USD over a month for a little more than 100 subscribers; needless to say, I felt scammed. When trying to figure out what happened (with Google Analytics, my own event-tracking code and even a few Apache logs) I found out some very interesting stuff, like that 99% of those clicks were people who didn't stay for more than a second on my webpage, as in: they didn't even wait for the page to load completely... weird.

Just for the record, my ads were not clickbaity and were targeted fine, and my magazine didn't suck (IMO at least, heh), so if someone clicked on my ad I would pretty much expect them to stay on the page and at least have a look at the cover of the current issue.

Why did I burn $1200 if the ads weren't working from the start? I wish I had figured that out earlier and spent that money on a fancy chair or whatever instead... The thing is that I was only looking at my daily visitors and believed that everything was fine; it wasn't until the end of the month (when I always did some kind of audit to see if I was growing or not) that I noticed the number of new subscribers had remained the same even though FB ads were active the whole time.

Since then, the only advice I give to friends and clients is "stay away from Facebook ads, they're not what you think they are". And on a small side note: I tried a lot of advertisers at that time, and the best experience I had was with StumbleUpon. Their referrals converted to subscribers at an incredible ratio (like >50%!!!) and on top of that they drove some extra organic growth even a few months after the campaign was finished. Respect for them.

walterbell 5 minutes ago 0 replies      

"These ad units are largely purchased by free-to-play game publishers such as King (maker of Candy Crush Saga) and Big Fish Games, which leverage Facebook's incredible demographic data to target the small percentage of players who will spend hundreds of dollars on in-app purchases.

.. So to recount, Facebook is going gangbusters because of ads for free-to-play games, developers are excited about the chance to cash in via Facebook ads, Google and Twitter are trying to mimic Facebook's success, and Google and especially Apple are hanging their app store hats on the amount of revenue generated by in-app purchases.

In other words, billions of dollars in cold hard cash, and 20x that in valuations, are ultimately dependent on a relatively small number of people who just can't stop playing Candy Crush Saga."

Many of whom are women, their purchases leveraged up into Valley value. How is that for irony?

kumarm 35 minutes ago 0 replies      
Facebook mobile ads are pure crap. Here is proof, if anyone needs it: http://forums.makingmoneywithandroid.com/income-reports/1635...

Most installs from Facebook mobile ads never even open the app once (yes, even once).

If you read the post from the same guy, you will notice he actually targeted people who play word games on mobile.

michaelbuckbee 1 hour ago 1 reply      
An important distinction: the waste in this case is on "organic" non-paid interactions with users on the social networks.

For Facebook, this means getting "Likes" on your dedicated Facebook Page (aka https://facebook.com/your_brand_name); for Twitter, I'm presuming it's followers of @brandname.

This is separate from the actual paid advertising that the platforms offer - which is likely far more effective, targeted and profitable than say a magazine or newspaper ad.

Running a nail salon and want to reach women 18-45 who live in your city? Facebook lets you target that exactly and clicks to the ads just go to your website.

PublicEnemy111 57 minutes ago 1 reply      
Facebook blew away earnings this summer citing "mobile ads" as the reason for growth. Instagram had its first ad ever a few weeks prior to the earnings release, which had hundreds of thousands of comments. I can't help but feel Facebook was being misleading when they cited "mobile ads" for the revenue spike. Merchants had renewed faith in the platform, resulting in a positive feedback cycle (merchants come back/sign up -> more revenue -> revenue jumps again -> repeat). I think a breakdown of their mobile ad revenue would tell a much different story than the one they were trying to promote.
unclebucknasty 2 hours ago 0 replies      
Man, this is intuitively and empirically so true. I'm not sure who would be surprised by this study, though some may point to specific exceptions.

But, there are a number of reasons this is obvious, not the least of which is that people go to Facebook to socialize, not befriend companies.

So, it always felt like a ruse that brands should encourage their customers to engage on Facebook's turf, as it always seemed to accrue much more value to Facebook than to the brands. Why do I want someone else owning that relationship with my customers, or even inserted between us?

Yet companies (regrettably, including my own), would even hold contests, etc., effectively paying to get more likes from customers they already had! Then, Facebook pulled their master-stroke of ratcheting down the reach to all of those fans unless the company paid for it. It really was like some kind of racket or "minor extortion".

Remarkably, they also really began to push the idea of paying them even more to get more fans.

So, send you my current customers by converting them to fans, advertise with you to get more fans, then pay you again to advertise to reach all of them? No thanks. I'd rather get a good old-fashioned email address.

I look at their ad numbers and I just don't get it.

snowwrestler 1 hour ago 0 replies      
The idea of a "social relationship" with a brand was always pretty silly, I think.

But Twitter is a good tool for PR (because every blogger and reporter uses it), and Facebook can be an efficient content distribution channel.

I think brands have a hard time on social media when they have nothing interesting to say. "Like my page" or "download my app" or "tell us how much you love your toilet paper" are not interesting, and Facebook is doing the right thing to hide that crap from more feeds.

But if you can create good content, you can spend money very efficiently on social media. You just need to boost it a bit over the noise floor, and then folks will share and comment to spread it farther.

Neural Networks That Describe Images
226 points by vpanyam  9 hours ago   22 comments top 12
kolbe 5 hours ago 2 replies      
Presentations like these make me realize how close we are to developing law enforcement (/police state) technology that will be very effective. I figure that when the kinks are smoothed out, we could run this on a video feed and have crimes prevented right as they're about to happen. It's almost scary. Imagine 20 years from now: some guy pulls a gun on you, a video feed identifies his action, and a tranquilizer is immediately shot straight into his jugular with perfect aim.
arjie 7 hours ago 2 replies      
Fascinating. Also interesting to see the failure modes. Any human would quickly realize that the "boy doing backflip on wakeboard" is actually playing on a trampoline. Or the "two young girls playing with legos toy". Great stuff!
jcr 8 hours ago 0 replies      
At the Bay Area Vision Meeting in 2013 [1], Fei-Fei Li and Olga Russakovsky gave a related talk on "Analysis of Large-Scale Visual Recognition" [2,3].

[1] http://bavm2013.splashthat.com/

[2] video: http://www.youtube.com/watch?v=DK6KfUsVN8w

[3] slides: http://bavm2013.splashthat.com/img/events/46439/assets/a10b....

ed 2 hours ago 0 replies      
This isn't really a breakthrough in object identification, as much as it is a clever pairing of identification with (mostly existing) language systems, is that right?

Wondering whether there's any merit to the sibling comments speculating that this is the future of, e.g., surveillance.

tiler 5 hours ago 1 reply      
I really appreciate that the Stanford group is always willing to post mistakes/mislabels. Kudos.
zvanness 4 hours ago 0 replies      
This is one of the coolest things I've seen in a while. I'd guess this is super similar to what the folks on the DeepMind team at Google are working on now, with the overall vision being to classify images that have no metadata and add them to a dynamically learning knowledge graph:

xanderjanz 6 hours ago 0 replies      
Sharing pre-built models is so cool, and definitely important to advancing machine learning science. Especially when you consider how mixing weight layers allows you to do things like understand Portuguese text better through English text.
yuncun 6 hours ago 1 reply      
Idk if this is a daft question, but in the Visual-Semantic Alignment section, are those objects in the colored boxes actually being directly recognized by the software? Or are they inputted in some other way?
thomasahle 8 hours ago 1 reply      
It would be great to have this for image search!
hugozap 4 hours ago 0 replies      
They forgot to put the link to the npm module ;)
tintor 5 hours ago 0 replies      
It is interesting how the neural network labeled the woman in the lower-right photo with a rectangle that includes only the body, without the head.
notastartup 7 hours ago 0 replies      
This is unsettling and amazing. We're not too far from having robots that will be aware of what's going on around them.
Why Can't I Take an Orange Through Customs?
26 points by gordon_freeman  4 hours ago   7 comments top 3
WildUtah 1 hour ago 0 replies      
A customs dog sniffed me out and his handler searched my bag in 2010. I had some chocolates infused with orange oil in my pack.

She was satisfied but apparently you can't bring oranges into Mexico from the USA or vice versa.

Since then I always pack breakfast for early morning flights across the border, but I'm careful to eat it before landing or leave it for the aeromozas (flight attendants) to clean up.

flashman 28 minutes ago 0 replies      
Bill Bailey on Australian Customs: http://www.youtube.com/watch?v=6s5AF4ahrOk
leoc 1 hour ago 3 replies      
Try taking a set of bagpipes into the US.
Light Table 0.7.0
158 points by one-more-minute  8 hours ago   42 comments top 11
james33 6 hours ago 0 replies      
Well this is a pleasant surprise. I was rather disappointed by the last announcement as I felt Light Table was a nice start and was heading in a great direction. As one of the KS backers, I'm glad to see the effort was put in to make it easy for the community to keep it going.
logn 3 hours ago 0 replies      
If you're going to do an MIT license, why not just Apache 2.0? That offers some protection against patent trolling. And the Apache license can still require an attribution notice like the MIT license.
pnathan 5 hours ago 7 replies      
What issue could possibly exist for using a GPL editor? It's not being shipped from a company...
ohfunkyeah 7 hours ago 1 reply      
Anyone know of a good update to the discussion that happened here? https://news.ycombinator.com/item?id=3874324 I'm curious how the concept of light table has stacked up against reality in the past year.
nickik 6 hours ago 0 replies      
Very nice. I hope this project keeps going. I use it for all my Clojure dev and sometimes just as an editor. I rarely use the REPL anymore; I just build up functions inside the editor. The only thing that hinders this is that the function output should be pretty-printed, but there is some technical problem with that. If that happened, I would not use the console much anymore.

The paredit is nice enough to make working with Clojure nicer than working with other languages' syntaxes.

Overall I really like it, because I feel like it's the way an IDE should be. Even if it's lacking features, the architecture is nice.

johnsmith32 59 minutes ago 0 replies      
I really dislike how so many editors bind the evaluation/compile/menu key to Ctrl+Space or Ctrl+Shift. With the default multilingual key settings on Windows, it is especially annoying when you want to run the code but instead your typing language changes to Chinese.
bachback 6 hours ago 1 reply      
Good news! I'm going to switch from Emacs some of the time, and hopefully 100% eventually. The thing that bugged me was how unevenly the editor would react in some cases. For me it didn't have the reliability I need. I was interested in the fundamentals and found that underneath is CodeMirror. I thought eventually all code editing should move into a browser, but it isn't quite there yet. Hopefully this will move to 1.0 and be a full Emacs replacement.
tiffanyh 2 hours ago 2 replies      
So how are Chris (et al.) planning to make Light Table a sustainable business (i.e. "pay the bills")?

It's great they have the Kickstarter money, but I haven't seen any announcements about making this a product for sale. If anything, it appears they are doing the exact OPPOSITE and distancing themselves from the project altogether.

wldcordeiro 6 hours ago 2 replies      
I've considered using Light Table for a while but the fact that there isn't a simple way to install it on Linux (either via apt or another package manager or a packaged version like a deb/rpm) keeps me from bothering.
JBiserkov 5 hours ago 1 reply      
Aaand the menus are gone?!

I'm on Win8.1 x64

btreecat 5 hours ago 1 reply      
I am not really sure what Light Table is and the announcement does not provide any information as such.
Questions with Vims Creator, Bram Moolenaar
86 points by ProfDreamer  8 hours ago   18 comments top 5
Reebles 4 hours ago 2 replies      
Not sure why Bram doesn't believe that removing support for obsolete OSes such as MS-DOS, Amiga, and BeOS helps with Neovim's goal of simplifying the codebase and maintenance. Are there even any actual users of these systems who are disappointed that they won't be able to use Neovim? If they're satisfied on those OSes, surely they'd be satisfied with legacy Vim.
andyl 2 hours ago 0 replies      
It's unfortunate that Bram isn't more enthusiastic about NeoVim; it looks to me like they have made great progress and are gathering momentum. NeoVim's support for Lua, Ruby and Python plugins, plus an embeddable core, would be a big step forward.
pherocity_ 3 hours ago 0 replies      
I can't help but feel, since many of the same arguments against Neovim could be made about the rewrite from vi to Vim, that this is the natural order of things and the point of open source.
adambenayoun 3 hours ago 0 replies      
If you liked this interview, make sure to check out the other podcasts we recorded:


jvm 3 hours ago 2 replies      
> 8: How can the community ensure that the Vim project succeeds for the foreseeable future?
>
> Keep me alive. :-)

Odd to express narcissism in response to this question.

I have no idea whether NeoVim is going in the right direction but the feeling you get from this response is that Bram is focused on guarding his turf rather than thinking constructively about how to move Vim forward.

Bitmarkets: Private decentralized marketplaces based on two party escrow
28 points by stevedekorte  5 hours ago   3 comments top 3
zaroth 1 hour ago 0 replies      
Some background I wrote last year on two-party escrow: http://opine.me/future-of-bitcoin-escrow/

I think it's a great approach. The reputation system, and efficient private search that doesn't require copying the entire database, are the hard parts!

voisine 5 hours ago 0 replies      
Very cool, Steve, congratulations. I think this is the first two-party escrow market to come out. Exciting stuff.
jdfellow 1 hour ago 0 replies      
What are the chances that this could be built for Windows or Linux using GNUStep?
The Startup That Built Googles First Self-Driving Car
121 points by spectruman  8 hours ago   39 comments top 12
Animats 4 hours ago 0 replies      
Well, Levandowski heads Google's self-driving car project now. Thrun is long gone, off trying to make Udacity fly.

He's an impressive guy. I met him when he was going to Berkeley and doing the self-balancing motorcycle for the 2005 Grand Challenge. He'd already done a startup that made a specialized giant laptop for construction sites, for people who needed to see blueprints.

The LIDAR Google uses is from Velodyne, which was "Team DAD" in the 2005 Grand Challenge. The first version of their LIDAR fell off their vehicle, but they improved the mechanics and produced that cone-shaped thing Google now uses. That's really a research tool; a different approach is needed for production vehicles. (I still like the Advanced Scientific Concepts flash LIDAR; it's expensive, but that's because it has custom silicon. If you had to get the price down, that's where to start. No moving parts, all electronics.)

I'm kind of disappointed with Google's self-driving effort on the hardware side. I'd expected flash LIDARs, terahertz phased array radars, and other advanced sensors by now. You need to be able to see in all directions, but the requirements to the sides and back are less than for looking ahead. The CMU/Cadillac effort is ahead on the hardware side; their self-driving car has all its sensors integrated into the vehicle so you don't notice them.

(I had an entry in the 2005 Grand Challenge: Team Overbot. Ours was too slow, and we worried about off-road capability too much.)

sam-mueller 5 hours ago 3 replies      
I recently learned that Elon Musk wasn't actually the founder of Tesla, but rather an early investor. This article borders on the same theme; there's more than meets the eye to the faces of today's most innovative technologies. It is alarming to see how often proper credit is misattributed or stolen in our industry.
ohsnap 6 hours ago 2 replies      
Seems like the journalist is trying a little too hard to create a controversy. 510 was never 'unknown', and they always had a close relationship with Google. Consider this story from '08: http://www.cnet.com/news/robotic-prius-takes-itself-for-a-sp...
mturmon 4 hours ago 0 replies      
The nut appears to be this:

"From then on, we started doing a lot of work with Google," says Majusiak. "We did almost all of their hardware integration. They were just doing software. We'd get the cars and develop the controllers, and they'd take it from there."

Anybody who has done robotics knows there is a lot of integration involved (hardware and software), and that doing integration is hard and tends to be thankless. It's nice to see some in-depth reporting in a major publication on the full depth of the engineering team.

amjaeger 7 hours ago 4 replies      
Maybe I'm missing something, but isn't the lidar on the Google car made by Velodyne? I know the Radiohead video used Velodyne's lidar. And the picture of the lidar that they show is a picture of the Velodyne HDL-64. Can anyone explain?
Schwolop 5 hours ago 2 replies      
Replying to ekm2, who is hell-banned:

  So, who is Suzanna Musick?
Good question! I'm not going to post details here, but she appears to have plenty of public info easily Google-able, including a Linkedin profile and the name of her current startup. That said, I couldn't find anything about this company at all, and don't want to delve too much deeper as it's starting to make me feel creepy.

ilyaeck 3 hours ago 0 replies      
The article presents a one-sided story at best. I actually interviewed at Google for the "project that shall not be named" in June 2009, with Sebastian Thrun and Chris Urmson (the current head of the project). I could tell the project was already well underway. Although none of them would say anything about it, it was pretty clear what they were building. So, while '510' may have played some part, the writer is grossly exaggerating their significance.
threeseed 6 hours ago 1 reply      
Someone really needs to explain to me what the thought process behind this acquisition was.

Self-driving car technologies have been actively developed by almost every car company for years now. Many of the beginnings of this work have already made it to market, e.g. parallel park assist, auto emergency braking, lane-merge detection, adaptive cruise control. And companies like Volvo are already testing their self-driving cars in real-world, difficult conditions in Sweden. And because there are only a few car conglomerates, they will simply share technology within each group.

So what is their end game ?

trhway 5 hours ago 0 replies      
As with the Internet, the self-driving car revolution is firmly rooted in DARPA's Grand/Urban Challenges of 2004/2005/2007. In fact, I have a hard time finding any principal progress in Google's cars over the winners/top cars of 7 years ago. Though obviously there is a lot of evolutionary, product-development-style improvement, especially related to the increased processing power available for sensor/image data processing.
wololo 5 hours ago 0 replies      
Early Street View history: http://thetrendythings.com/read/7424
ekm2 6 hours ago 0 replies      
So, who is Suzanna Musick?
general_failure 7 hours ago 0 replies      
Wow, this is seriously good journalism. Thanks for this!
Patch to Log SSH Passwords One Year Results
52 points by w8rbt  5 hours ago   22 comments top 9
ryan-c 1 hour ago 1 reply      
So, some stats:

There's 350,032 unique passwords in there.

* 122,094 (~35%) are in the rockyou dump (which has 14,344,391 unique entries)

* 2,898 are in my list of cracked LinkedIn passwords, excluding those in the rockyou dump (2,002,484 unique entries)

* 27,639 are in the phpbb dump I have (184,344 unique entries)
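The overlap counts above are just set intersections; a minimal sketch (with toy sets standing in for the real rockyou/LinkedIn/phpbb dumps):

```python
# Toy version of the overlap stats: how many observed SSH passwords
# appear in a known dump. Real inputs would be one-password-per-line files.

def load_dump(path: str) -> set:
    """Load a password dump, one entry per line, deduplicated."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return {line.rstrip("\n") for line in f}

def overlap(observed: set, dump: set):
    """Return (count, fraction) of observed passwords found in the dump."""
    hits = len(observed & dump)
    return hits, hits / len(observed)

observed = {"123456", "hunter2", "password", "correcthorse"}  # stand-in data
rockyou = {"123456", "password", "letmein"}                   # stand-in data
print(overlap(observed, rockyou))  # (2, 0.5)
```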

machrider 1 hour ago 1 reply      
If you're running an ssh server that allows password authentication, make sure you're also running fail2ban.[1] Too many failed login attempts will block the IP (at an iptables level) for a configurable time period.

[1]: http://www.fail2ban.org/wiki/index.php/Main_Page
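For reference, a minimal jail might look like the following (a sketch only; the jail name and defaults vary by fail2ban version and distro, so check the values against your installed jail.conf):

```ini
# /etc/fail2ban/jail.local -- minimal sshd jail (sketch; verify option
# names against your installed version's jail.conf)
[sshd]
enabled = true
port    = ssh
# ban after 5 failures within a 10-minute window, for one hour
maxretry = 5
findtime = 600
bantime  = 3600
```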

dsl 2 hours ago 1 reply      
I'll save you the time of downloading the full results.

'hunter2' is in the list.

ryan-c 3 hours ago 2 replies      
I wonder what's up with all the super long entries in there. Bugs in the bots?
peteretep 1 hour ago 0 replies      
Really no passwords with spaces in them, or a data-preparation error?
gburt 1 hour ago 2 replies      
Wait, who is logging SSH passwords? Is this an intentional attack on OpenSSH Portable or is it a honeypot?
jamiesonbecker 2 hours ago 4 replies      
I just don't know why people still use passwords with SSH anyway! (ie userify and stuff)
jijji 2 hours ago 0 replies      
use ssh keys or use iptables whitelisting on all your boxes
tedunangst 2 hours ago 0 replies      
Damn! I was certain nobody would guess my password of eight commas.
Queues Don't Fix Overload
136 points by craigkerstiens  9 hours ago   41 comments top 17
jasode 8 hours ago 2 replies      
First, Little's Law [1] can be mentioned as a formalized version of the "red arrow" in the blog post. It's true that you cannot increase subsystem capacity (peak throughput) by adding queues.

However, queues can "fix" an overload in one sense, by making an engineering tradeoff of increased latency and additional complexity (e.g. a SPOF [2]). Peak capacity handling didn't increase, but overall server utilization can be maximized, because jobs waiting in a queue will eventually soak up resources running at less than 100% capacity.

If response time (or job completion time) remains a fixed engineering constraint, queues will not magically solve that.
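As a quick illustration of both points (Little's Law, and backlog growth past saturation), a sketch with made-up rates:

```python
# Little's Law: average items in the system L = arrival rate * avg time in system.
# A queue can't raise peak throughput; past saturation the backlog just grows.

def items_in_system(arrival_rate: float, avg_time_in_system: float) -> float:
    """L = lambda * W (steady state)."""
    return arrival_rate * avg_time_in_system

def backlog_after(arrival_rate: float, service_rate: float, seconds: float) -> float:
    """Queue growth under overload; zero when the system keeps up."""
    return max(0.0, (arrival_rate - service_rate) * seconds)

print(items_in_system(100.0, 0.25))       # 25.0 jobs in flight at steady state
print(backlog_after(120.0, 100.0, 60.0))  # 1200.0 jobs queued after a minute of overload
```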



FlyingAvatar 7 hours ago 2 replies      
This article in one sentence:

Using ONLY a queue to "fix" a bottleneck can serve to buffer your input, but will still fail when your sustained input is greater than your sustained output.


I feel like this is pretty common sense to most people: the only way to fix a bottleneck is to widen the neck. If people are running into this situation in the real world, they either don't understand their core problem or are blocked from actually fixing or measuring it.

The problem stated by the author doesn't have really anything to do with queues. A queue is a tool that someone, quite sensibly, might use as part of a solution to widen a bottleneck, but obviously it can't be the entire solution.

resonator 8 hours ago 2 replies      
As far as I know, queues are used for decoupling two systems so that you can scale individual components more easily. In the example used in the article, adding more pipes out of the wall.

A queue has the perk of smoothing out peaks, which may give the illusion of fixing overload, but really you haven't added any capacity, only latency. Also, latency is sometimes a reasonable thing to add if it allows higher server utilisation rates.

euphemize 7 hours ago 0 replies      
Great article; a lot of these points really hit home with me, as we've been using queues to process a lot of our data in the last few years. I think if you consider your queue fine while it has items backed up under normal conditions, you're going to have serious problems to deal with as soon as you hit 1) more input, 2) a nasty bug in your system, or 3) often a combination of both.

As an aside, I really enjoy working with AWS' SQS queues, as they allow you to define a maximum number of reads plus a redrive policy. So you can, for example, throw items into a "dead messages" queue if they were processed unsuccessfully twice. We use this to replay data, stick it almost immediately into our unit tests, and improve the software incrementally this way.

istvan__ 4 hours ago 0 replies      
Continuing the sentence from the title, "but they allow you to deal with it better".

The question is what the business requirement and guarantee are regarding message delivery. My experience with logging systems is that losing a very small percentage of your data when there is an outage in the storage layer is tolerable.

That allows us to write code that sends or receives messages using a channel (Go, Clojure) and has sane timeouts to deal gracefully with overload: saving 200 (or so) messages in a local buffer and going back to reading the queue after a read timeout.

With this concurrency model, queues are a very powerful tool for separating concerns in your code. Having thread-safe parts that can be executed as a thread or as a goroutine lets you use a 1:1 mapping to OS threads, or N:M.

Back to the original point: queues don't fix overload, but I still prefer to deal with overload using queues over other solutions (locking or whatever).
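A minimal sketch of that bounded-buffer-with-shedding pattern, using Python's stdlib queue rather than Go channels (buffer size and counts are arbitrary):

```python
import queue

# Bounded buffer + shed on overload: producers never block indefinitely;
# when the buffer is full the message is dropped and counted, mirroring the
# "losing a small percentage is tolerable" tradeoff for logging pipelines.

buf = queue.Queue(maxsize=200)  # local buffer of ~200 messages
dropped = 0

def offer(msg) -> bool:
    """Try to enqueue without blocking; shed the message on overload."""
    global dropped
    try:
        buf.put_nowait(msg)
        return True
    except queue.Full:
        dropped += 1
        return False

for i in range(250):
    offer(i)
print(buf.qsize(), dropped)  # 200 50
```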

stephenwilcock 8 hours ago 0 replies      
Nice article.

Anyone reading this may be interested in the related concept of the 'circuit breaker' technique employed by things such as Hystrix (Netflix), to avoid getting into this state.

Not a solution per se, but the simple philosophy is to set yourself up to fail fast: when back pressure reaches some threshold the 'circuit opens' and your servers immediately respond to new requests with an error, instead of ingesting huge numbers of requests which flood your internal queues or overload your infrastructure.
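A toy version of that open/fail-fast logic (thresholds and reset timing are arbitrary; real implementations like Hystrix add rolling metrics windows and proper half-open probing):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: opens after `threshold` consecutive failures,
    fails fast while open, and lets one probe through after `reset_after`."""

    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one probe request
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

After `threshold` consecutive backend failures, new requests get an immediate error instead of piling up in queues behind a dead dependency.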


encoderer 8 hours ago 0 replies      
But they smooth the load graph, with the expectation of improved user experience during peak load.
general_failure 8 hours ago 2 replies      
Queues help you scale horizontally. Basically, you can put a lot of kitchen sinks on the right side of the diagrams. Of course, this is done with careful design of the application and not just adding a queue alone.

OT: The hardest part I have found with queue design is task cancellation. How does one 'cancel' tasks that are already in the queue or being processed? I haven't come across a good framework that solves this cleanly. For example, say I queue up a task X, and the task now needs to be 'cancelled'. How can I ensure this? It looks like I need some sort of messaging bus?
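One common workaround, rather than a framework feature: keep a shared set of cancelled task ids (e.g. a Redis set, for a Resque-style setup) and have workers skip tombstoned tasks at dequeue time. A minimal in-memory sketch, with all names hypothetical:

```python
import queue

tasks = queue.Queue()
cancelled = set()   # would be shared state (e.g. a Redis set) in production
done = []

def cancel(task_id):
    """Cheap tombstone: the task stays queued but will be skipped."""
    cancelled.add(task_id)

def drain():
    while not tasks.empty():
        task_id, payload = tasks.get()
        if task_id in cancelled:
            continue  # skip cancelled work instead of removing it from the queue
        done.append((task_id, payload))

for tid, p in [("a", 1), ("b", 2), ("c", 3)]:
    tasks.put((tid, p))
cancel("b")
drain()
print(done)  # [('a', 1), ('c', 3)]
```

Tasks already being processed still need to poll the same set inside their loop; truly preemptive cancellation does end up needing out-of-band messaging.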

cordite 1 hour ago 0 replies      
Where can I learn more about back pressure? It isn't a set of search terms that Google seems to differentiate from spines.
bhz 8 hours ago 1 reply      
FTR: We use (Resque/Jesque) redis-backed queues extensively in our applications/APIs for interprocess communication, off-loading messages that don't have to be processed immediately, and for sharing load between servers.

This article could have been titled "X Doesn't Fix Overload". If you're not looking for, and fixing, the actual bottleneck, then any improperly implemented solution could fill that title.

With that being said, there is good information in there. I just didn't agree with the strong bias against queues.

wslh 9 hours ago 1 reply      
The last time I checked (~4 years ago), RabbitMQ and others didn't have good contention management. I am not sure we can blame users all the time. I remember being involved in this thread: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-Ma... and this proposal: http://lists.w3.org/Archives/Public/www-ws-arch/2002Oct/0401...
huyegn 7 hours ago 0 replies      
Absolutely agree with this.

Another nice illustration of how, when the arrival rate is greater than the departure rate, we get overflowing queues:


korzun 6 hours ago 0 replies      
Queues don't fix overload? True. They let you control it and prevent longer operations from impacting general end-user performance.

TL;DR: Don't build queues for non-queue-friendly processing.

venomsnake 6 hours ago 0 replies      
One name - Warlords of Draenor. The launch could be used as case study for everything in that article.
danielrhodes 8 hours ago 1 reply      
The whole article is a straw man.

Yes, if things further down your stack are not capable of handling the load in decent time your queue is going to overflow (assuming a fixed capacity). No, it does not make your entire stack faster -- it just defers processing in a way you can manage and tame it. What can become faster is things like front-end requests, which are no longer held up by blocking operations or a starved CPU. Either way, it buys you some time to actually re-engineer your stack to work faster and at greater scale.

fleitz 5 hours ago 1 reply      
To fix overload, swap queues for stacks, set a timeout, and clear the stack when you estimate, based on throughput, that the time will expire.

E.g. on a webserver where people get refresh-happy, I'd set the timeout at about 5 seconds. If requests are taking on average 2 seconds and a request is already 3 seconds old, return a 500, drop the connection, etc. Then clear the rest of the stack.

Answer requests in a LIFO manner as at least that way in an overload condition some requests are answered and the 'queue' is cleared quickly.

It's like when you have 3 papers due and only time to do 2, you elect to not do one so you can do the other two. You should ideally kill the one that is due first.

Sacrifice a few for the good of the many.
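A hypothetical sketch of that strategy (the class, names, and numbers are my own, not the commenter's actual code): serve newest-first, and shed any request whose age plus the average service time would already blow the timeout.

```python
import time
from collections import deque

TIMEOUT = 5.0  # drop requests older than this budget (seconds)

class LifoShedder:
    """LIFO request stack with deadline-based load shedding."""
    def __init__(self, avg_service_time):
        self.avg_service_time = avg_service_time
        self.stack = deque()  # entries are (arrival_time, request)

    def push(self, request, now=None):
        self.stack.append((now if now is not None else time.monotonic(), request))

    def pop(self, now=None):
        """Return the next request to serve, shedding hopeless ones."""
        now = now if now is not None else time.monotonic()
        while self.stack:
            arrived, request = self.stack.pop()  # LIFO: newest first
            if (now - arrived) + self.avg_service_time <= TIMEOUT:
                return request
            # Too old to finish before the deadline: shed it
            # (in a real server, return a 500 / drop the connection here).
        return None

q = LifoShedder(avg_service_time=2.0)
q.push("old", now=0.0)
q.push("new", now=2.5)
print(q.pop(now=4.0))  # "new" is served first (LIFO)
print(q.pop(now=4.0))  # "old" is 4s in: 4.0 + 2.0 > 5.0, so it is shed -> None
```

Under overload this answers at least the freshest requests in time instead of timing out on all of them.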

notastartup 7 hours ago 2 replies      
Can the problem be solved by using fair queueing?

Chunk the overloaded queue into small groups, then have your workers chew a bit of each at a time. The last-in folks won't have to wait ages for the first-in folks to finish.
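That round-robin idea could be sketched roughly like this (the function name and `batch` parameter are my own invention):

```python
from collections import deque

def fair_drain(groups, batch=1):
    """Round-robin sketch: visit each group's queue in turn, taking a
    small batch from each, so late arrivals in one group aren't starved
    by a huge backlog in another."""
    order = []
    queues = {name: deque(items) for name, items in groups.items()}
    while any(queues.values()):
        for name, q in queues.items():
            for _ in range(min(batch, len(q))):
                order.append(q.popleft())
    return order

# "a" has a big backlog; "b" arrived last but gets served interleaved.
print(fair_drain({"a": ["a1", "a2", "a3"], "b": ["b1"]}))
# -> ['a1', 'b1', 'a2', 'a3']
```

Real fair queueing (e.g. deficit round robin) also accounts for unequal job sizes, which this toy version ignores.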

Off topic, but Parisian French sounds way nicer than Québécois, the language of 18th-century blue-collar workers; I had to unlearn the horrible Québécois after high school.

Many Older Brains Have Plasticity, but in a Different Place
72 points by npalli  8 hours ago   3 comments top 2
npalli 4 hours ago 0 replies      
It is fascinating to note that the changes occurred in the white matter portion of the brain. The general thought (challenged recently) was that only the grey matter neurons 'mattered': white matter was just conduction between the grey matter neurons (which had all the intelligence). The final quantity of these grey neurons was fixed shortly after birth, and people kept losing them as they aged.

If it turns out the body can recruit white matter in learning, then suddenly we have 10-50 times more cells (the white matter/grey matter ratio) that can participate in intelligence. I suspect the way intelligence is organized would also differ between the white and grey regions, not to mention how they interact with each other! It calls into question a lot of the assumptions computational scientists make in estimating the complexity of a simulated brain. We might be at the start of understanding how truly complex the brain is.

A good overview of this understudied portion of the brain is the book "The Other Brain" by Douglas Fields.


MilnerRoute 4 hours ago 1 reply      
I drew a lot of inspiration from a similar article by a science writer at the New York Times (summarized in "The Secret Life of the Grown-Up Brain".)


The basic theory is that young brains soak up experience, while older brains consolidate it. So older brains can make bigger leaps of logic -- the old cliche about "wiser" old people actually does have a biological basis. I wonder if this new study is just another piece of the same phenomenon. (New memory is stored in a different part of the brain because the old plastic/learning-storage centers have already been optimized and compressed...)

The robotic worm
16 points by monort  6 hours ago   1 comment top
fpgaminer 1 hour ago 0 replies      
Really exciting research!

I've been trying to replicate this work, but so far haven't succeeded. The code for the system described in the article isn't available; only a "newer" version is. Either way, both implementations behave differently depending on how quickly they execute, so reproducing the described results is proving tricky. Also, the motor output program isn't available, nor well defined.

When I next get time, I plan to post to the Google Group for this project and see if they're willing to enlighten me. I re-implemented the code as a single C program and re-architected it to use a synchronous tick, moving from one state to the next. So my program has reproducible results regardless of speed. It just doesn't appear to behave correctly yet. More tinkering...
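The project's actual code isn't available, but the synchronous-tick idea the commenter describes might look something like this toy sketch (in Python rather than C, with made-up weights): compute every unit's next state from a frozen snapshot of the current state, then swap all at once, so the result never depends on evaluation order or execution speed.

```python
def tick(weights, state, threshold=1.0):
    """One synchronous update: every unit's next state is computed from
    the *current* snapshot, then all states are replaced together."""
    next_state = []
    for i in range(len(state)):
        total = sum(weights[i][j] * state[j] for j in range(len(state)))
        next_state.append(1 if total >= threshold else 0)
    return next_state

# Two units passing a pulse back and forth: the pattern alternates
# deterministically every tick, no matter how fast the loop runs.
w = [[0, 1],
     [1, 0]]
s = [1, 0]
for _ in range(3):
    s = tick(w, s)
print(s)  # -> [0, 1]
```

An asynchronous, timing-dependent update of the same network could give different results from run to run, which is exactly the reproducibility problem being described.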

EIZO Announces Monitor with 1920x1920 Resolution
115 points by ingve  9 hours ago   78 comments top 17
zachrose 5 hours ago 1 reply      
FWIW, the Soviet film director Sergei Eisenstein once gave a lecture outlining why square screens would be superior for cinema.

"Eisenstein playfully hyped the virtues of the "dynamic square," a screen that was exactly as high as it was wide. He did so in part because to him the square was modern, charged with productive machine force. This more purely cinematic screen was, according to Eisenstein, necessary for properly showcasing the energies, conflicts, and collisions germane to the moving image arts. It would also, at least in theory, be the most accommodating frame, capable of hosting images composed for planes that were either horizontal or vertical. Eisenstein proclaimed that previous industry standards (4:3), as well as contemporaneous calls for wider screens, were nostalgic, calling forth a dated viewing regime dictated by traditional art forms."


userbinator 7 hours ago 6 replies      
As someone who mainly reads text (developer) I think widescreens are advantageous only for multimedia. As for the arguments about how human eyes are horizontal so our field of view is wide, that's true but only for peripheral vision - where everything is out-of-focus and not actually "visible" for e.g. reading something. Otherwise it's implying that humans can independently use one eye for the left side and one eye for the right, a skill that I don't know of anyone having (it's possible though, just not something that would be common.) I have a dual-monitor setup equivalent to a 5:2 aspect ratio and I still need to rotate my eyes or head horizontally to focus on the right part of the screens.

I don't know if a completely square monitor would be as well received as something at least slightly rectangular - 1920x1440 (4:3) might be a good compromise.

Gravityloss 8 hours ago 2 replies      
Not so much of a stretch. We use Dells with a rotatable screen at our work place and about half of the people keep one vertical and one horizontal monitor.

Reading logs or browsing code is quite nice with the vertical screen, as is reading some vertically oriented PDF material. Cat videos are usually watched on the horizontal screen. :)

Stratoscope 6 hours ago 2 replies      
1920x1920 on a 26.5" panel is only 102 pixels per inch. That is a low pixel density for a modern monitor. It's the same density as a 1920x1080 21.5" panel - certainly usable, but you won't get the crisp text you'd have on a higher density display.

Of course, many monitors are worse. A 1920x1080 27" monitor is only 82 pixels per inch!

The monitor I'm buying next is probably the Dell UP2414Q. With 3840x2160 resolution on a 23.8" panel, it has 185 pixels per inch. It's expensive and you need a machine that can drive it properly, but that is a nice pixel density.
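The pixel-density figures above follow from a simple diagonal calculation; a quick sketch to verify them:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal resolution over diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

print(round(ppi(1920, 1920, 26.5)))  # -> 102, the EIZO panel
print(round(ppi(1920, 1080, 27)))    # -> 82, a 27" 1080p monitor
print(round(ppi(3840, 2160, 23.8)))  # -> 185, the Dell UP2414Q
```

All three match the parent comment's numbers.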

dismal2 8 hours ago 1 reply      
At that size something like 2880x2880 would be more "modern"
devindotcom 8 hours ago 3 replies      
More weird panels, please! I still want a bare-bones monochrome laptop.
friism 8 hours ago 0 replies      
Perfect for Instagram.
Hoffmannnn 3 hours ago 0 replies      
Many monitors these days can be rotated on their side (check your stand mount!), and windows natively supports portrait mode.

My dual monitor setup has one landscape, one portrait, and I'm never going back. Viewing documents & code is just so much more convenient.

rsync 8 hours ago 2 replies      
This isn't that groundbreaking - there are plenty of 1:1 ratio screens that are made and sold for the air traffic control industry:


However, what would be interesting is if this was priced at normal monitor prices ... the ATC monitors are incredibly expensive.

sciurus 9 hours ago 5 replies      
I'm curious which resolution people would rather have: 1920x1920 or 2560x1440?
zokier 8 hours ago 0 replies      
I suppose it is highly use-case dependent if this is better than dual vertical 1920x1080 monitors. You essentially are choosing between having a bezel in the middle or losing 240 horizontal pixels. For having two things side-by-side the dual monitor setup is obvious win, so for coding etc I'd lean towards that
CompuHacker 7 hours ago 0 replies      
I've been looking for a 1:1 monitor for years before this; they're only made custom for military HUDs and such. I was so disappointed, especially in the people who called 5:4 and 4:3 "square" in response to forum posts asking about it.

Hoping for a power of 2 square model.

myrandomcomment 6 hours ago 1 reply      
I have an LG 34UM95-P which is a 34" 21:9 UltraWide with a resolution of 3440x1440. Once you get used to how wide the monitor is it works quite well. This replaced 2 x 1920x1440 29". It is on a mount on a standing desk.
mark212 7 hours ago 0 replies      
FWIW -- square monitors have long been popular on financial trading floors. I think there's a preset on the Bloomberg terminal for this ratio, but it's been a while.
rdl 7 hours ago 1 reply      
I'm pretty happy with a $250 4k 39" seiki and a dell u2410 1200x1920 (portrait) at work, but a square display would be great on a tabletop or otherwise in nonstandard positions.
Animats 5 hours ago 0 replies      
OK, a square monitor. Maybe Apple will announce a round monitor, with a round GUI to go with it. Apple fans would be camping out in front of Apple retail outlets for the thing.
deciplex 2 hours ago 0 replies      
Why 1920? I would rather have 1440 (or 2160) as it would allow me to put it to the side of a 2K or 4K monitor and keep the same vertical resolution. Unless you're planning to put them in an over-under configuration (pretty rare in the home, at least) I don't see the point of this resolution.
Game of Life: Total War An Analysis
13 points by nkurz  6 hours ago   discuss
New Search Strategy for Firefox
286 points by Osmose  7 hours ago   259 comments top 36
DevX101 6 hours ago 8 replies      
This is why Google built Chrome. Google's strategy has been to remove the layers between the user's intent to search and Google's own server. Every intermediate layer that Google does not control is a risk to their business.

When viewed through this lens, many of the seemingly ancillary Google business units start to make strategic sense: Android (control the device), Chrome (control the browser), Fiber (control the tubes).

Each of these channels is an opportunity for disruption by some competitor search engine and Google wants to make sure they don't get blindsided. Or one of the gateways could demand a massive tribute for Google to pass through (cable companies are pushing for this via the war against net neutrality).

If Google didn't have Chrome and Firefox was the leading browser, they'd be in big trouble with this news. Lucky for them they thought about this a long time ago and built a browser which now accounts for 50% of market share.

Yahoo NEEDS this deal. For Google, it's a nice to have.

nnethercote 6 hours ago 5 replies      
Some more details:

* This is a new, more flexible partnership strategy.

* Continuing the existing relationship with Google was an option, but Mozilla chose to end the Google relationship.

* All the options Mozilla considered had strong, improved economic terms (but the concrete numbers are not public). Because all the options had improved economics, that allowed Mozilla to really consider the strategic outlook.

* The Yahoo agreement in the US is for five years.

* Yahoo will be rolling out a new, improved search tool soon.

* Mozilla has agreements with Yandex and Baidu for Russia and China.

* Google will remain an included option in Firefox and Mozilla will continue to support its use.

sroerick 6 hours ago 4 replies      
Mozilla is making some major moves these days. They've ditched Google as their main revenue source, partnering with Tor and Yahoo.

Yahoo is angling to be a digital magazine, which I like as a business model much more than Google's.

Firefox is making a strong case for itself as the privacy centric browser.

I still remember when Firefox started gaining market share. Even non-tech-savvy people were getting firefox, because IE was so bad for security, and so hard to maintain.

Excited to see what the next few months brings.

kibwen 6 hours ago 4 replies      
It's always seemed as though Google's and Mozilla's relationship would weaken over the years, though honestly I expected Bing rather than Yahoo to be the one to step up and fill Google's shoes. Also surprising is that apparently Mozilla was the one to initiate the switch.

As a Firefox user, all in all I'm rather pleased. I've just tried a few of my typical searches on Yahoo and though the expected links aren't the top results (seeing links for Rust-the-game instead of Rust-the-language...), they're on the first page. Let's see if that improves with time as I use it more. And I'm happy to support some more diversity in this space.

_pius 6 hours ago 1 reply      
Note that Mozilla is making Yahoo reinstate support for Do-Not-Track, which it had dropped earlier this year.


throwawaymoz 6 hours ago 12 replies      
(throwaway account)

While I believe the party line is probably true ("Mozilla decided not to go with Google"), it's also disingenuous.

Mozilla chose not to go with Google because Google wasn't willing to pay what they were before. They straight up told Mozilla this 3 years ago when they signed the billion dollar contract; Mozilla had 3 years to become profitable. That's why they switched focus to FirefoxOS; they thought that by now they'd be profitable via selling phones and the app store. (At the time, Bing was bidding against Google, however Mozilla went with the smaller check from Google because they knew using Bing would seem like selling out.)

For the record, Google made billions off being Firefox's default search engine. They paid Firefox $300mil a year for three years, but that was only a small fraction of how much Google profited from Firefox searches. Not sure if it's still true, but three years ago they made more from Firefox than they did from Chrome.

So, yes, Mozilla could have gone with Google still. It's not like Google said "nope, you can't use us as the default!". However, they went with Yahoo! because Google wasn't willing to pay what Mozilla needed. The whole "Mozilla picked Yahoo! to enable choice" has been tweeted by every Mozillian I know (and said multiple times in this thread), but it's a meaningless statement. If they really meant that, you'd be prompted when you opened Firefox the first time to pick a search engine.

andreastt 7 hours ago 1 reply      
The correct link is https://blog.mozilla.org/blog/2014/11/19/promoting-choice-an.... Seems the link was changed.
opinali 6 hours ago 1 reply      
Yahoo! is OK, but... Yandex and Baidu are not exactly recommended company for an organization that claims to hold the Open Web's moral high ground. They practice large-scale censorship; they are instruments of repressive regimes. How does that make any sense?
s4sharpie 7 hours ago 5 replies      
Looks like a smart move by Yahoo to be the default search for a major browser: google/chrome, bing/ie. This also means that Firefox will likely get some serious support from Yahoo in $$s to build back market share. Question is can they catch Google?
andrewl-hn 7 hours ago 1 reply      
Note that this affects only U.S.-based users of Firefox. Good move for both parties, and I hope they can strike deals with regional search engines in other locations as well. Diversification is good for everybody.
dec0dedab0de 7 hours ago 5 replies      
The real question is did Google stop paying? did Yahoo offer way more? Or did Mozilla want to break out from Google's shadow?

Edit: I mean to say did Google decide to end it, or did Mozilla, and why?

ihuman 6 hours ago 1 reply      
Haven't they done something like this before? I remember Yahoo promoting a Yahoo-branded Firefox. Its default search and homepage were Yahoo, and every titlebar had "Firefox and Yahoo" at the end instead of just "Firefox."
mrschwabe 5 hours ago 0 replies      
This seems like a positive step forward for Mozilla (away from Google) but there's no denying how much more awesome this announcement would have been if the new default search provider was DuckDuckGo :)
drewda 6 hours ago 1 reply      
"Google will also continue to power the Safe Browsing and Geolocation features of Firefox."

It's disappointing to hear that Firefox isn't yet using Mozilla's location services project [1].

Background: These are the services that will, say, take the SSID of your current WiFi access point and map it to a latitude/longitude. My understanding is that almost all commercial users subscribe to Skyhook Wireless's database [2], other than Google, which has built its own WiFi AP maps using its Street View trucks.

I think Mozilla's "open" service, contributed by individual users, is a welcome alternative, since it means you no longer have to send your location to a large corporation on every look-up.

[1] https://location.services.mozilla.com/

[2] http://en.wikipedia.org/wiki/Skyhook_Wireless

jlebar 6 hours ago 0 replies      
sp332 7 hours ago 2 replies      
I'd swear this happened already... I updated Firefox on an old Android tablet yesterday and the default search was already switched to Yahoo. Edit: ah, now that you mention it, it was a Beta version.
math0ne 6 hours ago 0 replies      
For me at least, moving away from Google is a huge selling point. I think if they marketed themselves as a Google-less web experience, it could be really good for them.
digitalnalogika 6 hours ago 0 replies      
What will be the default engine in countries not listed in the post? It is not exactly clear, apart from saying that Google will be pre-installed (but not default?).
AshleysBrain 7 hours ago 1 reply      
Does this mean Yahoo outbid Google for being the new default search? How much is the deal worth? What about outside the US?
pwnna 7 hours ago 4 replies      
One question: wasn't Yahoo supposed to be a part of Bing at some point?
cwyers 5 hours ago 1 reply      
I can't get at the post, so maybe they answer this, but... in this thread, I'm seeing that it's Yahoo! in the U.S., Yandex in Russia, Baidu in China. That seems to leave... a lot of the globe unspoken for. What's the default search for those places?
mnemonik 6 hours ago 0 replies      
Related blog post: https://blog.mozilla.org/blog/2014/11/19/promoting-choice-an...

More details, Yandex in Russia and Baidu in China, etc.

ngokevin 6 hours ago 0 replies      
FWIW, Bing has looked similar to Google for a very long time.
briholt 7 hours ago 5 replies      
Anyone have any insights into this? From the outside it looks like Mozilla is parting ways with Google because of Chrome. Interesting they went with Yahoo and not Bing.
panabee 6 hours ago 0 replies      
google has been apple's greatest competitor for a while. apple has > $160B in cash. mobile bing is comparable to mobile google. the strategic value of weakening google seems to far outweigh the financial value of defaulting to google search (assuming google is willing to outbid microsoft). so why hasn't apple defaulted to bing yet, and when will it?
kevincox 6 hours ago 0 replies      
I'm really curious about the financial angle of this. I'm wondering how much was gained/lost based on the decision they made versus making a deal with Google for a similar system as in the past.
Osmose 6 hours ago 0 replies      
FYI blog.mozilla.org is down, IT people are working on it.
jimmaswell 6 hours ago 2 replies      
Is this why Firefox mobile started using Yahoo as the default and I can't change it? I tried changing it and it just kept using Yahoo.
victor27 7 hours ago 1 reply      
Curious - do people here view the Yahoo! Search Experience to be better or worse than Google Search?
mcintyre1994 4 hours ago 0 replies      
To be blunt, where's the innovation in Yahoo! Search? They're using Bing data and the result page images in the article look like a clone of Google's.
tn13 5 hours ago 1 reply      
Honestly, I don't see how this is a "choice". In fact, this sounds like exactly the opposite of choice. In this particular case Firefox has made the choice on our behalf that we are better off using Yahoo's crappy search results instead of market leader Google.

If you are making a browser which is focused on giving freedom to users, you are supposed to:

1. Either let the users choose the search engine as an onboarding step, or
2. Offer the industry best/leader as the default.

In this particular case Firefox has made a suboptimal choice on our behalf in the name of "choice".

How exactly is this different from:

1. Comcast taking more money from Netflix to give them better bandwidth?

Now my grandmother will end up seeing 0 organic search results above the fold and will have to learn to either change the search settings or simply use that icon with red, green and yellow around a blue dot (Chrome).

tn13 6 hours ago 1 reply      
It is hard to figure out what a "strategic partnership" really means at the moment, but:

- I hope this strategic partnership does not mean 0 organic search results above the fold. That is what Yahoo is doing at the moment.
- I hope FF does not come up with any Yahoo spyware/toolbars etc.

Search is a weird thing on internet.

huhtenberg 7 hours ago 4 replies      
Here's the option to suppress the change - http://i.imgur.com/HwHqQU9.png
agapos 7 hours ago 1 reply      
Doesn't that mean that Google giving up on one of the biggest markets means they no longer need Mozilla and can ditch its support altogether? Will Yahoo be able to support Moz as Google did? Will this become a precedent for other countries too (aside from exceptions like Russia's Yandex)?
Osmose 7 hours ago 2 replies      
You're assuming that Yahoo is paying more, assuming that money was the motivation for making the deal, and assuming that Mozilla employees believe that partnering with Google is better for the open web than partnering with Yahoo.
rdl 7 hours ago 4 replies      
This makes me less likely to use Mozilla, but I was already a Chrome or Safari user (despite supporting the Mozilla mission).

Seems like a bad decision for Mozilla.

Primary Data Emerges from Stealth with Woz as Chief Scientist
28 points by coloneltcb  4 hours ago   7 comments top 6
blergh123 15 minutes ago 0 replies      
I thought that Woz had moved to Australia in a partnership with UTS (http://www.smh.com.au/technology/technology-news/apple-cofou...).
incision 2 hours ago 0 replies      
As the article points out, Woz was Chief Scientist at Fusion-IO [1] as well. That company struggled [2] before being acquired by SanDisk for less than the IPO price.

It was a different story in 2011 when Woz went on CNBC and said that the company "has grown as fast as Apple so far" [3] pushing the stock to the highest point it would ever reach just a month before the share lock-up expiration [4].

1: http://en.wikipedia.org/wiki/Fusion-io

2: http://www.theregister.co.uk/2013/10/25/fusionio_flasher_fla...

3: http://m.cnbc.com/us_news/45074931

4: http://www.forbes.com/sites/ericsavitz/2011/12/06/fusion-io-...

wdewind 1 hour ago 1 reply      
> "We think this is a fairly sizable market we are addressing," Smith said, estimating it is in the billions of dollars.

That's not a very big market.

joshu 3 hours ago 0 replies      
I was a judge for a different session at DEMO. First time there, pretty interesting stuff.
capkutay 3 hours ago 0 replies      
Does anyone know how this compares with Delphix?
ultimape 3 hours ago 0 replies      
I hope its like spacemonkey for enterprise.
Rules for Creating Gorgeous UI
248 points by mparramon  16 hours ago   40 comments top 12
bane 8 hours ago 2 replies      
Here's some rules that almost every designer I know ignores:

1) Map out your interface and interaction trees first

1-click - most common actions

2-clicks - second most common actions

3-clicks - power user level stuff

Put the most commonly used stuff at 1-click or interaction. If you don't know what goes at 2 and 3 clicks in, you don't understand how the application is used, because you don't understand what the most common interactions are. If you've run out of room for the 1-click stuff in your UI, then your UI concept is poorly designed. Keep iterating and collecting information until you can fulfill this.

Don't put anything at more than 3 clicks in.

2) Double the number of interaction points in the UI. Assume the application will grow and add features. If you optimize your design for the number of features you have today, you'll have nowhere to put all the stuff you're going to get over the application's lifetime, and it'll all just end up getting buried in menus. I've seen lots of gorgeous, carefully designed applications die a year in because of this.

Double everything and see if that number of interaction points still fits within your concept, that way the interface has room to grow without getting messy.

3) Don't make your users interpret, make them understand.

If you're concerned about how universally an icon is interpreted across cultures, you're doing it wrong. Interpretation is an additional step your users have to go through to use your UI; it's like putting everything at 2, 3 and 4 clicks in, because they now have to not only look and scan the UI for what they want, they also need to figure out what each interface item means before they can interact with it.

Even worse, as they grow accustomed to your UI, they're going to end up memorizing the location and placement of options because the interface widgets take too long to interpret. Get 2 revisions down the road, move a button, and wham: your tech support calls jump 50%, because the users never bothered to remember what the symbol for their action looked like, just where it was on the screen.

4) Everything must be discoverable. This is why the world moved to GUIs from CLIs. Don't make your users play a 1990's era adventure game where they have to click every pixel on the screen to see if they can advance their usage. The Flat UI trend is notorious for this.

5) Consistency rules. Also see #3.

6) Eliminate steps. Map out how many steps certain actions take, then cut them down to as few as possible. I remember going through a file import process with a tool: by the time the file was imported, the user had navigated 27 different steps! Almost every step required minimal or no user input. Nobody had ever bothered to map out the interaction patterns in the tool before, but users were constantly complaining about how difficult it was to use.

We reworked the workflow and got it down to 3 steps and user-engagement jumped triple digits.

7) After you've addressed 1-6, make it look nice.
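Rule 1's click-depth audit can be mechanized. Here's a hypothetical sketch (the app map is invented) that walks a navigation tree and reports how deep each action is buried, so anything past 3 clicks stands out:

```python
from collections import deque

def click_depths(tree, root="home"):
    """Breadth-first walk of a UI navigation tree, returning each
    action's click depth from the root screen."""
    depths = {root: 0}
    frontier = deque([root])
    while frontier:
        node = frontier.popleft()
        for child in tree.get(node, []):
            if child not in depths:
                depths[child] = depths[node] + 1
                frontier.append(child)
    return depths

# Hypothetical app map: "export" sits 3 clicks in, "import" only 1.
ui = {"home": ["import", "settings"],
      "settings": ["advanced"],
      "advanced": ["export"]}
d = click_depths(ui)
print(sorted((v, k) for k, v in d.items() if v > 0))
# -> [(1, 'import'), (1, 'settings'), (2, 'advanced'), (3, 'export')]
```

If your most common actions don't show up at depth 1, the map is telling you the layout doesn't match real usage.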

mhd 10 hours ago 2 replies      
Not surprisingly, most screenshots don't present a lot of data, which is where things usually get problematic. Programmers often get criticized for their fruit salad GUIs, but usually that's the result of having to present plenty of options and data.

Sure, the first thing you'll hear to that issue is "Do you really need to have that many options/paths/data?". And granted, quite often this is applicable, although not always in the same way (hiding rarely used options vs. eliminating them, i.e. "advanced options" vs. "only one friggin mouse button").

But often enough, presenting lots of data and hierarchies is the whole point of an app, especially when it gets more about enterprise systems than "what pancakes do my friends like" web 2.0 frippery. And that's where the ideas coming from ad design and typography kinda fail.

Which is why people like Tufte are so respected, as they go beyond this. If I recall correctly, in the initial review of the iPhone Tufte recommended against even the minimal margins of the photo gallery, removing white space for a better experience. And yes, knowing the rules before you break them might be a part of that...

If you don't do this as your full-time job, I'd very much recommend going for "usable" instead of "gorgeous". The latter is very much an 80/20 deal, where you spend insane amounts of time, asking co-workers and running A/B tests just to get that final ratio or pixel size right. Whereas most of your customers still have Napa Valley as a background picture behind their copy of IE9...

I don't really miss under construction signs and rotating skulls, but I do have the slight feeling that a lot of what designers are doing will be like early 20th century typography in a few years, where even some of its major proponents aren't quite sure about it anymore...

Animats 10 hours ago 2 replies      
Gorgeous, or usable?

There's an annoying trend in UI design to make the simple stuff look really good, while making more difficult operations harder. If you think your UI concept is great, try mocking up something like Photoshop or a 3D drawing program. Those have really hard UI problems to solve. The mania for "clean design" has resulted in such things as invisible close buttons that only appear when you mouse over them. (Facebook ads work like that.)

Bob Lutz, who used to be head of General Motors, ran into this. His designers had built a concept dashboard which looked like something from Bang and Olufsen, with the black-on-black design popular in the 1980s. It looked really great. Nobody could operate the controls reliably without training or a manual.

There was a brief period when creative user interfaces on web pages got completely out of hand. Check out


for an over the top example from a French fashion design house. They went bankrupt a year after putting up that page.

GuiA 11 hours ago 4 replies      
> Good, generous whitespace can make some of the messiest interfaces look easy to use.

And they make them completely unusable to anyone with less than a 27" screen, but all the hip designers apparently don't care about users who have anything less than a 2560*1440 display just like them (1024x768 is still the norm for a lot of people outside of the Silicon Valley bubble).

I've seen so many web products that are less usable than 20 year old command line interfaces, notably because of this "you can never have too much whitespace!" mentality, it's appalling.

The first goal of an interface is to be used.

drderidder 11 hours ago 0 replies      
"I love clean and simple as much as the next guy, but I don't think [flat design] is a long-term trend here. The subtle simulation of 3-D in our interfaces seems far too natural to give up entirely."

Agreed, the "flat design" trend seems like a pushback against overdone skeuomorphism, one that went a little too far in removing all the visual depth cues.

serve_yay 8 hours ago 1 reply      
> I majored in engineering - it's almost a badge of pride to build something that looks awful.

I dislike this attitude; for me it is very reminiscent of the way people just shut down when math comes up. "I hate math," and that's it. "I'm not a math person, my brain doesn't work that way." On and on. There is even a perverse sort of pride in it. Why not do the work, why not try to get better, why not try to expand into things we're not good at yet?

keyle 6 hours ago 0 replies      
Being a designer doing UI/UX for a living and building interfaces, I think this is a very good starter guide.

If devs only started by lining things up and thinking in terms of visual hierarchy, they'd already be 90% there.

toolz 4 hours ago 0 replies      
It took me a few refreshes and multiple clicks just to realize I had to scroll down. I don't understand why someone would think unusable could ever be gorgeous. That page has one of the worst layouts I've ever come across and ironically it's supposed to be about enhancing users experience.
organsnyder 11 hours ago 1 reply      
Rule 1: Stop talking about making "gorgeous" user interfaces. Create usable interfaces first; then you can worry about the subjective parts.
achr2 7 hours ago 2 replies      
Does anyone have some good resources for UI/UX design for data- and option-heavy (enterprise) applications? There are fundamental differences between most 'apps' and the kind of dense applications I work on/with.
sjcsjc 9 hours ago 0 replies      
I really like this line: "Start thinking of whitespace as the default: everything starts as whitespace, until you take it away by adding a page element." - speaking as someone whose design sense and skills are dreadful.
Terr_ 10 hours ago 0 replies      
> I had my excuses. I don't know crap about aesthetics. I majored in engineering; it's almost a badge of pride to build something that looks awful. I majored in engineeringits almost a badge ofpride to build something that looks awful.

I guess whoever did the pull-quotes was also an engineering-major...

Twitter Now Lets You Search for Any Tweet Ever Sent
14 points by tacon  6 hours ago   1 comment top
flashman 12 minutes ago 0 replies      
A bit of a shame you can't change the sort order. I think seeing oldest tweets first is going to be regularly useful, for instance to see who broke a certain piece of news.
RelativeWave Acquired by Google, Giving Its App Design Tool Away for Free
74 points by tonteldoos  7 hours ago   4 comments top 3
paulftw 2 hours ago 0 replies      
Is there a way to quantify the usage of tools like this? I mean, how many people actually use RelativeWave on a daily basis?

NoFlo, Origami, and RelativeWave all offer this kind of visual programming, but every example I've seen became too complicated to follow before it became useful.

ggamecrazy 3 hours ago 1 reply      
Awesome! Looks very similar to Origami -> https://facebook.github.io/origami/

I have never used either, but both look so much better than working off static PSD comps. I would be curious if a person who has used both can chime in on their impressions.

christensen_emc 2 hours ago 0 replies      
AWS Innovation at Scale - James Hamilton [video]
16 points by ook  6 hours ago   1 comment top
mad44 23 minutes ago 0 replies      
In summary, AWS rides the benefits of economies of scale. (http://en.wikipedia.org/wiki/Economies_of_scale)

They design/build their networking gear, full hw/sw stack. This is cheaper and more reliable (their code is simple/customized to their datacenter use case.)

They also have single-root I/O virtualization (SR-IOV) at each server: each guest VM gets its own hardware-virtualized link, which is great for reducing the "giant tail at scale" problem (google for Jeff Dean's description of the problem.)
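The "giant tail at scale" problem is easy to quantify: when a request fans out to many backends and must wait for the slowest one, even rare slow responses come to dominate. A quick sketch with illustrative numbers (not from the talk):

```python
# Probability that a fan-out request is slow, when each backend
# is independently slow (e.g. above its own 99th-percentile
# latency) with probability p.
def prob_any_slow(p: float, fanout: int) -> float:
    """P(at least one of `fanout` independent backends is slow)."""
    return 1.0 - (1.0 - p) ** fanout

# With 1%-slow backends, a single call is rarely slow...
print(round(prob_any_slow(0.01, 1), 4))    # 0.01
# ...but a request touching 100 backends almost always hits one.
print(round(prob_any_slow(0.01, 100), 4))  # 0.634
```

This is why per-VM hardware links and other tail-latency reductions matter far more at scale than average-case latency does.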

Their relational DB service RDS is getting popular: 40% of customers use it. So they compete with Oracle by offering a similarly highly-available service at a much lower price. They keep adding new data services: Aurora, Redshift, EBS.

They design/build their power infrastructure. Faster.

They are very customer-oriented; they make things simple/painless for customer use cases. They are obsessed with metrics, measuring everything, with tight feedback loops to improve things weekly. They rolled out 449 new services and major features in 2014 alone.

Show HN: C# 6.0 Functional Language Extensions
135 points by louthy  11 hours ago   54 comments top 12
louthy 7 hours ago 0 replies      
Due to popular demand, there's now a NuGet package. My first attempt at a NuGet package, so let me know if I've messed something up:


saosebastiao 10 hours ago 2 replies      
Beautiful. I've been trying to pick up C# simply due to Xamarin, and this definitely looks like it will make me feel more at home. Has anyone had success with 6.0 on Xamarin? (Is it even out yet? I saw announcements, but they looked like previews).
MichaelGG 3 hours ago 1 reply      
Coercing Some(null) to None seems like a bug. Null is a valid value in the type and should be allowed (F# allows it, too).

Otherwise, a function might accidentally return None and cause a bug. For instance, consider a function that looks up a user and returns the name. The name might have null as a valid value. Some(null) indicates a user was found with no name; None indicates the user was not found.

tommyd 9 hours ago 1 reply      
Impressive-looking stuff. I'm currently learning Scala and getting deeper into functional programming, and a lot of the concepts I come across (e.g. Option types) make a lot of sense, so it's cool to see them being implemented in other languages. Hopefully, with the recent MS announcement, I'll be able to get back into C# development in the future, as it's a pretty great language all in all.

As an aside, I recognised your name in the example code on there - I used to occasionally hang around on 4four :)

Strilanc 2 hours ago 1 reply      
A comment on the option type: I think transforming Some(null) into None is wrong.

- There is an x for which Some(x) is not a Some. This violates useful invariants.

- Some((int?)1) is a Some, but Some((object)(int?)1) is a None.

- optionValue.Cast<object>().Cast<T>() is not the identity function.

- I can't use your option type in existing code without doing careful null analysis.

- As a rule of thumb, generic code should treat null as just another instance of T with no special treatment (beyond the acrobatics to make the code not throw an exception). That way both users enforcing a non-null constraint and users allowing nulls can use your type.
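To make the invariant argument concrete, here is a minimal Option sketch in Python (not the C# library under discussion), contrasting "user not found" with a legitimate Some wrapping a null-like value:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Some(Generic[T]):
    """A present value -- even if that value happens to be None."""
    def __init__(self, value: T):
        self.value = value

class Nothing:
    """Absence of a value."""

def lookup_name(users: dict, user_id: int):
    """Nothing means 'user not found'; Some(None) means
    'user found, but has no name' -- two distinct facts."""
    if user_id not in users:
        return Nothing()
    return Some(users[user_id])  # the name may legitimately be None

users = {1: "alice", 2: None}
assert isinstance(lookup_name(users, 1), Some)
assert isinstance(lookup_name(users, 2), Some)    # found, nameless
assert isinstance(lookup_name(users, 3), Nothing)  # not found
```

Coercing Some(None) into Nothing would collapse the last two cases into one, which is exactly the bug the comments above describe.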

steego 10 hours ago 2 replies      
Funny I come across this today. Only yesterday did I start turning some of my helper functions and extension methods into a little NuGet library. It's amazing how much overlap there is between your library and mine, though I'm probably going to steal some of your tricks. :)

While I do have an Option type with all the relevant LINQ operators defined, I've resorted to using Nullable<T> for all string parsing functions that return primitives, as Nullable is a type that's actually supported by databases. For any nullable, I can use nullableVar.DefaultTo(defaultValue) to convert it to a non-nullable primitive type. Personally, I don't find Option types in C# all that compelling because the pattern matching mechanism doesn't compel you to handle all cases like F# does.

All in all, it looks like an interesting library.

Thanks for posting!

dstone16321 9 hours ago 1 reply      
Feel free to pilfer some ideas from my own take on a similar lib for C#. https://github.com/danstone/lambit - includes some basic pattern matching support for example.

This sort of library is kind of necessary I feel for things like poor tuple support at least.

I'm not entirely sure about the casing conventions. I mostly code in Clojure/Haskell/F#, but 'when in Rome'. It's likely the sort of thing that will turn off a lot of stubborn developers.

danbruc 8 hours ago 6 replies      
I hope this does not come across as too negative (it's definitely a nice project), but is this really useful in practice? If you want F# or Haskell syntax, why not just use F# or Haskell? Why bend C# until it looks like something that already exists? If you work together with (pure) C# developers, you will just confuse them. If you work together with Haskell or F# developers, you can just use those languages.

If this is just a fun project, scratch what I said, I love building useless stuff myself, but if it is intended to be used seriously I don't get the point.

bbcbasic 8 hours ago 1 reply      
I wonder what the impact is on performance of routinely wrapping your types in Option<T>?

I am assuming it is minimal, because the struct would remain on the stack, and the object on the heap. There is just an unwrapping and wrapping cost but no more than Nullable? However this may be a naive view.

I like the syntax though. Where I work we use Code Contracts. This reduces bugs due to nulls but sometimes 25% of the code is Contract.XXX(...) which is annoying to read. And more typing too.

rukugu 7 hours ago 0 replies      
I was thinking of writing something like this myself. This looks great!
bcbrown 10 hours ago 1 reply      
Cool stuff. Isn't there still a problem with Option, where when a method takes Option<T> as a parameter, invocations can pass null for that Option<T>? So there's still the possibility of null exceptions.
martijn_himself 9 hours ago 2 replies      
This is a really interesting experiment, although personally I'd find using:

var ab = tuple("a","b");

awful. It's very hard to read what is going on here: someone else reading your code would be utterly confused. The method name starts with a lowercase character, and I'm not sure I like the possibility of omitting class names for static classes (granted, this is not your invention but a new C# feature, right?); to me it seems like it is missing the 'new' keyword.

I'd rather use Tuple.Create<T1,T2>(T1 first, T2 second) until they add proper support for:

var ab = ("a","b");

Electrical brain stimulation beats caffeine and the effect lasts longer
99 points by Libertatea  12 hours ago   51 comments top 14
Xcelerate 8 hours ago 5 replies      
Something about electrical brain stimulation freaks me out. That, along with ECT (electroconvulsive therapy). I suppose it's hypocritical of me to not fear substances such as caffeine, alcohol, or antidepressants, but the idea of sending current through an organ as sensitive as the brain makes me wary.
amckenna 6 hours ago 2 replies      
I wish they would release the placement they used for the testing. I would be interested in trying this out at home with my tDCS device.

My guess is they are using one of the following placements:

Accelerated Learning (DARPA) F10/Left Arm - http://tdcsplacements.com/placements/accelerated-learning/

"Savant Learning" (Chi & Snider (2011)) T4/T3 - https://www.reddit.com/r/tDCS/comments/2e7idx/simple_montage...

hansjorg 4 hours ago 0 replies      
NPR Radiolab had a story on this recently [1] where amongst other things, a reporter visits a US military lab and tries sniper training with and without electrical brain stimulation. Pretty interesting.

1: http://www.radiolab.org/story/9-volt-nirvana/

the_cat_kittles 8 hours ago 1 reply      
"This type of image analysis task is not well suited to automation. There's no computer algorithm that can go in and autoselect targets for you, it's a human endeavour. If we can help people pay attention for long periods of time, that's really important."

At the very least, they should save the images and targets to use as training data, since that's being generated manually already. Then they could see how predictive a model could be, instead of just guessing that it would be bad.

dreamweapon 7 hours ago 6 replies      
"Researchers in the US have used electrical brain stimulation to boost the vigilance of sleep-deprived military personnel working on an air force base."

Knowing the U.S. military, rather than addressing the root cause of the issue (namely: the totally senseless cult of sleep deprivation in the armed forces -- despite the ample research showing the mental and physical damage it causes), they'll start offering, what shall we call them? -- special "performance-enhancing" helmets. First on an optional basis, but then on a not-so-optional basis -- to administer optimally measured voltage, at optimally timed occasions.

From there it's a short hop to having these helmets (by then no longer optional at all) administer other kinds of signals, directly to the soldiers' brains: to relay orders, identify targets... and to tell them when to pull the trigger.

devindotcom 8 hours ago 2 replies      
Doesn't like 30 seconds of exercise beat caffeine as well? Seems like if you just want a jolt of energy and cognitive capacity, coffee is a bad choice. It is, on the other hand, delicious.
oblique63 7 hours ago 1 reply      
I've been wondering, has there been any research directly comparing the effects of tDCS with modafinil and/or any other nootropics like piracetam?
Jonovono 4 hours ago 0 replies      
Has anyone tried the Focus device? http://www.foc.us/ Curious to hear any feedback.
debacle 8 hours ago 1 reply      
I've been really interested in tDCS for a long time, simply for curiosity's sake, but the tDCS subreddit is private and there aren't many good resources for people who aren't sure they want to commit to the heavier stuff.
nlh 6 hours ago 1 reply      
"Theres no computer algorithm that can..."

"Yet", my friends, "yet".

I love reading quotes like that because it speaks to pure opportunity. Someone will eventually figure out an algorithmic solution to X, and that should remind us all how wrong the "all the good ideas have been done" line of thinking really is.

ericcumbee 7 hours ago 0 replies      
I wonder if this might have future applications as a treatment for Attention Deficit Spectrum Disorders. I found this line particularly interesting

"I think the reason we're getting these long-term effects is they are making some longer-lasting changes to the neural connections."

astrodust 8 hours ago 1 reply      
Interestingly, science fiction author Larry Niven called it decades ago.


kleer001 8 hours ago 0 replies      
maw 7 hours ago 0 replies      
A shocking result. Hook me up!
Cache is the new RAM
331 points by aristus  9 hours ago   68 comments top 12
temuze 9 hours ago 15 replies      
The database I want still doesn't exist.

Here's what I want:

- Easy sharding, a la Elasticsearch. I want virtual shards that can be moved node to node and an easy to understand primary/replica shard system for write/reads. I want my DB nodes to find each other with an easy discovery system with plugins for AWS/Azure/Digital Ocean etc.

- Fucking SQL. I don't want to learn your stupid DSL. I want to give coworkers a SQL client and say "go! You already know how to use this!" If I want a new feature, then dammit, build on top of SQL the way PostgreSQL has. Odds are, regardless of whether it's some JSON API or SQL, my language will have a client for it that is superior to writing raw queries anyway.

- Easily pluggable data management systems. For example, if I do a lot of SUMs and I know I'm not doing writes very often, I want to use CStore. If I'm storing a bunch of strings, I want to be able to index them any way I please - maybe one index with Analyzer/Tokenizer X and another with Analyzer/Tokenizer Y - all in a nice inverted index. Good, I can make an autocomplete now. Oh, and sometimes I want a good ol' RDBMS.

- Reactive programming! It works well in the front end and it'd be amazing in the backend. For example, I want to make a materialized view that's the result of a query, but that gets updated as new rows get inserted or as the rows it uses gets updated. Let's call it a continuous view or something. Eventual consistency is fine. Clever continuous views can solve a lot of performance issues.

- I want to be able to choose if a table/db is always in memory or not. I don't care about individual rows - that sounds like someone else's problem.

- Easy pipelining - these continuous views mean that an insert can span a lot of jobs because one continuous view can be dependent on another. I want my database to manage all of this for me and I want to forget that Hadoop ever existed. I want to be able to give my database a bunch of nodes that are just for working jobs if need be. Maybe allow custom throttling for the updates of these "continuous views" so the queries don't get re-run every update if they're too frequent.

- While I'm at it, I want a pony, too. But I'd settle for this being open source instead.

There are a lot of possible directions for the DB world in the next decade. Me, I think the line between DBs and MapReduce/ETL/pipelining is going to be blurred.
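The "continuous view" wished for above (a materialized query result kept up to date incrementally as rows arrive, rather than re-run) can be sketched in toy form. This is purely illustrative, not any existing database:

```python
class ContinuousSumView:
    """Maintains SUM(amount) GROUP BY key incrementally:
    each insert/update adjusts the view instead of re-running
    the full aggregate query."""
    def __init__(self):
        self.totals = {}

    def on_insert(self, key, amount):
        self.totals[key] = self.totals.get(key, 0) + amount

    def on_update(self, key, old_amount, new_amount):
        # Apply only the delta -- the essence of incremental view
        # maintenance (eventual consistency is fine here).
        self.totals[key] += new_amount - old_amount

view = ContinuousSumView()
view.on_insert("us", 10)
view.on_insert("us", 5)
view.on_insert("eu", 7)
view.on_update("us", 5, 8)  # a row changed from 5 to 8
assert view.totals == {"us": 18, "eu": 7}
```

Chaining such views (one view consuming another's deltas) is what makes the pipelining point above plausible without a separate Hadoop-style batch layer.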

jandrewrogers 8 hours ago 2 replies      
A couple points I would make with respect to the article:

- In-memory databases offer few advantages over a disk-backed database with a properly designed I/O scheduler. In-memory databases are generally only faster if the disk-backed database uses mmap() for cache replacement or similarly terrible I/O scheduling. The big advantage of in-memory databases is that you avoid the enormously complicated implementation task of writing a good I/O scheduler and disk cache. For the user, there is little performance difference for a given workload on a given piece of server hardware.

- Data structures and algorithms have long existed for supercomputing applications that are very effective at exploiting cache and RAM locality. Most supercomputing applications are actually bottlenecked by memory bandwidth (not compute). Few databases do things this way -- it is a bit outside the evolutionary history of database internals -- because few database designers have experience optimizing for memory bandwidth. This is one of the reasons that some disk-backed databases like SpaceCurve have much higher throughput than in-memory databases: excellent I/O scheduling (no I/O bottlenecks) and memory-bandwidth-optimized internals (higher throughput of what is in cache).

The trend in database engines is highly pipelined execution paths within a single thread with almost no coordination or interactions between threads. If you look at codes that are designed to optimize memory bandwidth, this is the way they are designed. No context switching and virtually no shared data structures. Properly implemented, you can easily saturate both sides of a 10GbE NIC on a modest server simultaneously for many database workloads.
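The single-threaded, pipelined execution style described here can be sketched with generators: each operator pulls rows from the one below it, with no buffering, locks, or cross-thread coordination. A conceptual illustration only, not how any particular engine is written:

```python
def scan(rows):
    """Leaf operator: yields raw rows one at a time."""
    for row in rows:
        yield row

def filter_op(pred, source):
    """Pipelined filter: rows flow through without materializing."""
    for row in source:
        if pred(row):
            yield row

def project(fn, source):
    """Pipelined projection: transforms each row as it passes."""
    for row in source:
        yield fn(row)

rows = [{"id": i, "val": i * 10} for i in range(5)]
# The whole query plan is one lazy, single-threaded pipeline.
plan = project(lambda r: r["val"],
               filter_op(lambda r: r["id"] % 2 == 0, scan(rows)))
assert list(plan) == [0, 20, 40]
```

Real engines compile such pipelines to tight loops precisely to keep the working set in cache and avoid the shared-state coordination the comment warns against.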

nemo44x 7 hours ago 0 replies      
This article is full of so much logical fallacy I'm surprised it made it here. And it's an advertisement, no less.

Creates a red herring by stating he's been doing this a long time and has seen it all.

Creates straw man after straw man in the trashing of memory caches (avoids their use cases), Dynamo (there's a good reason tons of people use various NoSQL Databases) and Hadoop (C'mon, now).

He also creates more logical fallacy in calling various concepts silver bullets that ended up having problems. I don't think anyone serious about technology thinks replication, sharding, or load balancing "solves everything". Nothing is a silver bullet, and anyone who says something is one is selling you something...

And then he fails to really address that MemSQL uses replication and sharding (in a limited sense, since the core SQL concept of a JOIN is wrecked here, and they have a big warning on their troubleshooting page about an error users must see often).

SQL is great but I have plenty of great reasons to use other data stores. SQL isn't a silver bullet for data.

Point is, he is calling MemSQL a silver bullet and is obviously trying to sell something while ripping plenty of great ideas and concepts by picking the worst implementations of them and largest misunderstandings of them.

brendangregg 9 hours ago 0 replies      
Yes. Or as I've said: memory is the new disk. This is why PMCs (performance monitoring counters) are more important than ever, to provide observability for cache and memory analysis. (I'd like some PMCs made available in EC2. :)
xacaxulu 6 hours ago 0 replies      
Laughing so hard at this line:

"Bringing you yesterday's insights, TOMORROW"

maerF0x0 9 hours ago 2 replies      
Amazon doesn't expose many of these statistics (how fast of RAM do I get with an m3.large or a c3.medium, etc.). Does this mean real performance is reserved for those who own their servers?
hcarvalhoalves 8 hours ago 2 replies      
> It's been 65 years since the invention of the integrated circuit, but we still have billions of these guys around, whirring and clicking and breaking. It's only now that we are on the cusp of the switch to fully solid-state computing.

Am I missing something, or should it read "hard disk" rather than "integrated circuit" here?

Roboprog 8 hours ago 0 replies      
I have been saying this since the late 90s.


Small code & data fit in cache, and run full speed. Fortunately, I can get at the GB that used to be (mainly) on my hard drive faster, now.

mmphosis 8 hours ago 0 replies      
It means that caching is often more trouble than it's worth.
farresito 9 hours ago 1 reply      
I've always found it very unfortunate that MemSQL is not open source. It looks very interesting. VoltDB seems to fill a similar niche. Has anyone tried both?
contingencies 3 hours ago 0 replies      
Database vendor frames history of computing in database evolution, makes snide remarks about competing technologies, admits it has no idea where the world is going while invoking the 'history repeats itself' notion. Well, duh.

OTOH, databases are only one component of modern architectures, which the article correctly asserts are largely limited in terms of scalability by throughput and latency. However, scalability is often secondary to functionality. And in terms of functionality, the long list of database types trawled out through the article only serves to highlight the real chokepoint: cognitive overhead.

Perhaps what we really need are tools that enable us to more easily stop and think about the problem. Ideally, tools to test, profile, compare, and switch between storage or other subsystem architectures without having to delve into the infinitesimal intricacies of each.

"Success really depends on the conception of the problem, the design of the system, not in the details of how it's coded." - Leslie Lamport

aesede 8 hours ago 0 replies      
How come nobody has noticed qwantz.com's T-Rex yet!
Show HN: Picky Pint - Scan beer lists with a photo
116 points by mp_mn  11 hours ago   42 comments top 27
dfan 9 hours ago 1 reply      
I'd love to see ABV on the main list (instead of having to go to that beer's individual page). When I'm scanning through a menu, that's one of the primary things I look at.
jcr 10 hours ago 1 reply      
Bravo! The site is beautiful and unusually complete. You've put in a lot of thought and work, and it shows. The press release for today was a nice touch. It's great to see the choice of iOS and Android as well as free and paid versions. I have no idea how many "beer ratings" apps are out there with a menu snapshot feature, but it's a wonderful idea.

Suggestion #1: Describe what we get for the paid "Pro" version. At present, your site only says there's a "Free" version and a "Pro" version, but does not differentiate between the two.

(Note To Self: Do I really want the "Professional" version of a drinking app on my phone? -- Hmmm.... Decisions, Decisions, ;-)

Suggestion #2: Give a bit more information about the ratings and reviews, like where they come from.

Suggestion #3: I know there's a craft brewers association of some sort in the US (I saw it in a documentary I watched a while ago). It might be a useful source of data, particularly for the more esoteric, seasonal, and limited-run brews. I think the following is the group site:


Good Luck!

joshyeager 10 hours ago 1 reply      
Pretty cool idea. I don't have a beer menu here to test with, but the search is very fast.

One critique: your website doesn't explain the difference between Free and Pro. I had to go to the App Store to find that info, which took a lot longer.

One suggestion: add other dimensions for sorting besides bitterness. There are a lot of things other than bitterness that distinguish different styles.

One feature request: Let me track my own ratings and view them later. This is the first beer app that feels fast enough to use for tracking my own beer ratings. I love tracking books I've read in Goodreads because it makes it easy to find them again. I want to do the same thing for beer, but the apps I've tried (Pintly and BeerAdvocate) have been painfully slow and hard to use.

gentlebend 9 hours ago 1 reply      
Great, now bars will start implementing MITM attacks on their wifi routers to steer you to yesterday's flat keg.
josephjrobison 8 hours ago 0 replies      
You've got to be kidding me - 5 days ago I was looking at a long beer list from random brewers at Porter Ale House in Austin and thought of this exact same idea. The rise of hipster craft-brew places with constantly rotating lists that rarely list ABV makes this absolutely necessary.

Very excited that I can stop dreaming of it existing and use your version!

organsnyder 9 hours ago 2 replies      
I don't have a menu handy, so I tried taking a picture of the example on your website on my monitor. The first time, it said the picture wasn't clear enough, but the second time, it's stayed hung on "Scanning now..." for over five minutes so far. Nexus 5 running Android 5.0 stock, connected to wifi.

As I was about to submit the comment, it finally finished processing, returning a list of beers that weren't in the original menu image. I'm not surprised, given the poor quality inherent in taking a picture of a low-res picture on a low-density LCD screen, so the only issue I see here is how long it took to process the image.

Great idea, looking forward to trying it at the pub tonight (hopefully I'll have better success there).
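One plausible way (a guess at the approach, not the app's actual code) for a scanner like this to cope with noisy OCR output is to fuzzy-match each scanned line against a known beer database. A standard-library sketch, with an illustrative beer list:

```python
from difflib import get_close_matches

# Hypothetical beer database -- a real app would have thousands.
KNOWN_BEERS = [
    "Sierra Nevada Pale Ale",
    "Sam Adams Winter Lager",
    "Dogfish Head 60 Minute IPA",
    "Guinness Draught",
]

def match_ocr_line(line, cutoff=0.6):
    """Map a noisy OCR'd menu line to the closest known beer,
    or None if nothing is close enough."""
    hits = get_close_matches(line, KNOWN_BEERS, n=1, cutoff=cutoff)
    return hits[0] if hits else None

# OCR dropped a letter -- still matches.
assert match_ocr_line("Sierra Nevda Pale Ale") == "Sierra Nevada Pale Ale"
assert match_ocr_line("Guiness Draught") == "Guinness Draught"
# Garbage stays unmatched rather than producing a false positive.
assert match_ocr_line("zzzz qqqq") is None
```

Tuning the cutoff trades recall against false matches, which may explain why some menu lines are skipped entirely rather than mis-identified.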

Jonovono 4 hours ago 0 replies      
Awesome! Can't wait to try this out. Something that I wouldn't mind added would be to keep like a history of all the beers you have seen and the places you saw them at so if I am craving my favorite beer I can quickly see what spots near me i've been to have it, but then you could crowdsource it and search any restaurants anyone has been to to see if they have certain beers. Just a thought.

Oh, and if they are bottles the normal ml in the bottle and the ABV and a score saying best buy for your buck :p

Thanks for making this!

* heads off to the bar

jsm386 8 hours ago 0 replies      
Very cool! This is like the beer version of WineGlass (http://wineglassapp.com/)! They managed to get ratings from CellarTracker via private access to their API (like Beer Advocate, they don't offer anything publicly), so from the start they had access to the largest(?) library of user-generated wine ratings on the web.

Are you blending user ratings with the ratebeer ratings in screenshots, or keeping them separate?

Wineglass has a feature I thought was pretty neat -- letting you know whether or not the price was 'fair for a restaurant' given typical industry markups. Perhaps not as applicable to beer, but could be a cool feature. And then you could surface 'bars with the best deals on beers you'll love.'

I'm in online wine media, so not as familiar with the beer space but it seems like lots of areas for collaboration (eg Nextglass, Untapped, BeerMenus.com as a fallback for OCR fails)

P.S. I understand the need to monetize, but having used heavily/played with dozens of apps in the wine/beer/liquor space, both free and paid, its rare to see random iAds (so far Target.com, some casino game install ad, and another casino game install ad. Perhaps the revenue is worth it but feels like there are much more interesting ways to monetize (native ads in terms of featured beers/all sorts of brewery/bar partnerships) than that sort of junky ad...

yarri 9 hours ago 0 replies      
Nice job & congratulations on shipping! Some brief feedback before I start field testing :-) this...

- Would love the ability to save a collection of favorites, maybe add folders or tagging?

- I often get asked to create list of recommended craft beers and so would like the ability to share these lists easily

- Your data will be sparse to start with, but I'd be interested in knowing how many other users put beers into their favorites (I find the BeerAdvocate listings a bit tedious to wade through...)

- Not sure you want to, but the professional brewers I speak to are interested in a lower-cost alternative to Untappd [0] and you might be able to build a business here?

- Maybe this is an East Coast US thing, but there's a growing trend to pair beers with cheese [1]; maybe that's too specific a request, but allowing users to add notes to the beers (metadata beyond BeerAdvocate) might be useful

Good luck!

[0] https://untappd.com/business

[1] http://www.huffingtonpost.com/2014/02/25/beer-cheese-pairing...

steakejjs 4 hours ago 1 reply      
So this looked really great and I'm sure it will be. I realize that there are a lot of barcodes, but I happened to see this thread while at Costco. I downloaded the app while here and it only recognized 2 beers from the entire beer aisle.

Some were tricky, but some were things that you definitely should have (like Sam Adams Winter). This will be really great when it's more complete, but there is still some work to do.

dreyfiz 6 hours ago 1 reply      
So, is this a RubyMotion app? (Asking because I'm curious, they recently released Android support in addition to iOS/OS X).

Regardless of whether or not you're using RubyMotion, would you like to share any comments or experiences about developing and releasing the app for both iOS and Android at the same time? I think it's remarkable; it seems like people usually pick either iOS or Android to launch. A lot of small iOS shops don't even build their Android versions in-house; they contract the Android version out to an Android firm.

Great work on this app, I love it! Will try it out in the real world later today.

deltaecho1338 8 hours ago 0 replies      
I was not able to find your privacy policy. Are you tracking what beers I'm looking up? Where I am? How long I spend drinking? Are you serving ads generally or are they somehow targeted to me based on my behavior? Do I get more privacy with the pro version?

You will be collecting data that will be useful to other people besides me. I don't care if statistics on all app users are sold but if you want to sell things based on my specific behavior I won't use your app.

All that said I'm excited to give it a try.

brettkc 10 hours ago 2 replies      
Do you integrate with Beer Advocate or Untappd for your ratings?

Great idea, good luck!

t413 10 hours ago 0 replies      
Very similar to WhatWine App (http://www.whatwineapp.com): ocr menu parsing and reviews / recommendations. Cool! (Not my app, just saw it at a hackathon)
jsumrall 8 hours ago 0 replies      
Awesome! I just finished a project doing beer recommendations, and I wanted to use the RateBeer data but they put up a notice that they were not giving out API access anymore. We went with BreweryDB and were trying to add some ratings ourselves, which was sufficient for our project.

How did you get the use of RateBeer's API?

twic 4 hours ago 0 replies      
Fantastic stuff! Now all you need to do is train your OCR to decipher the scrawl on the blackboards at the Euston Tap ...
colinbartlett 5 hours ago 0 replies      
This is really nice work. Good job.

Do you have plans to utilize the data for anything else? That could scare me and excite me at the same time.

chuckcode 8 hours ago 0 replies      
Very refreshing idea! I can't tell from the site whether you let users upload their own ratings after trying a pint, but I think that would be a great feature and also a great independent source of reviews for the app.
strick 9 hours ago 0 replies      
Cool! It would be nice if I could select a photo I have already taken from inside the app. http://www.sipsnapp.com/ has this feature and I use it.
PeterWhittaker 10 hours ago 0 replies      
I shall download forthwith! I look forward to being able to sort by IBU and screen out anything below whatever my current threshold happens to be (runs between 40 and 80, depending on the day).
codereflection 6 hours ago 0 replies      
This is a fantastic idea! Is there a specific requirement for Android 4.2+?
_nickwhite 9 hours ago 0 replies      
What are your thoughts on rolling this out to Windows Phone? A lot of us are stuck (or choose to be) on the platform.
macleodan 8 hours ago 0 replies      
It would be good if you could flag which ones are vegan, or only show vegan results.
amit_m 6 hours ago 0 replies      
What library/API did you use for OCR?
adamio 5 hours ago 0 replies      
What are you using for OCR?
egonschiele 7 hours ago 0 replies      
Downloaded. Can't wait to try it out!
pierreski 8 hours ago 0 replies      
Nice! This even sorts by bitterness and rating. It can even filter based on personal bitterness preferences.
PHP Cross-Platform Desktop GUI Framework
129 points by psykovsky  12 hours ago   63 comments top 22
TamDenholm 10 hours ago 2 replies      
Can I just say that clearly everyone knows it's not the best tool for an actual serious project, but there's nothing wrong with whimsy. It's fun to do things because you can; that doesn't mean it's going to become a serious tool. We just had a story about "Stupid Projects From The Stupid Hackathon"[1]. Just because it's not a serious tool doesn't mean we should be throwing hate; let's just take it for what it is - a bit of fun and a project you do because you can.

[1] https://news.ycombinator.com/item?id=8621886

gtCameron 11 hours ago 1 reply      
This is really cool. We have a few internal automation scripts built with PHP, but it's a pain to train non-technical users on using the command line. This would make it very easy to create a simple UI and make those tools more accessible, not to mention easier to bundle and distribute.
fredsted 11 hours ago 4 replies      
When it said "GUI framework", I kind of expected a way to make cross-platform apps with their native UI controls, not a web view with an embedded PHP executable.
jakejake 10 hours ago 1 reply      
I can see this being pretty useful for utility type apps on the desktop. If you already have a lot invested in PHP then it seems like it would be really comfortable. Since it's basically just running a local server, it's going to be just like writing a web app except (I assume) with access to the local file system and system calls.

There are a few comments here about using native GUI widgets and such, but I think that's a tricky approach. Dealing with the long-running state of a native app seems very un-PHP. I consider myself fairly experienced with PHP, but I'm not familiar with creating background threads in PHP, which you'd need for a responsive GUI.

My greatest fear with all of these various wrapper frameworks is that they become abandoned. With a node.js/Javascript one at least there are several out there and the code is likely to be portable from one to the next. With this PHP one, you'd be committed to Nighttrain.
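The "basically just a local server" architecture this comment describes can be sketched in a few lines. This is an illustrative Python stand-in, not Nighttrain's actual code (which pairs a PHP server with a wxPython webview): the GUI is just a browser window pointed at a localhost URL, and the server, unlike a public web app, has full access to the local machine.

```python
# Sketch of the "embedded local server as app backend" pattern: start an
# HTTP server on a background thread, then point a webview at it.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real handler could freely read local files or make system calls.
        body = b"<html><body><h1>Desktop app UI</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

server = HTTPServer(("127.0.0.1", 0), AppHandler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A desktop wrapper would now open a webview at this URL; here we just fetch it.
html = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(html.decode())
server.shutdown()
```

Binding to 127.0.0.1 (rather than 0.0.0.0) matters here: it keeps the "desktop app" from accidentally serving its UI, and its file-system access, to the rest of the network.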

jarnix 10 hours ago 1 reply      
I don't understand why some of the commenters here think this isn't a good idea. It can be useful to develop something for the desktop without learning C++ or C# (even if C# is rather easy), and Adobe AIR is not a solution, of course. Yes, it's a webview, but how many developers know how to create a web app? More than know C#/C++/Java/...
aikah 7 hours ago 1 reply      
First, it's not a GUI framework; it's a webview that uses a Python GUI framework with a PHP server. I bet one could create the exact same tool on top of node-webkit, since you'll basically be running a server.

Hell, you could do the exact same thing with next to no "server-side" JavaScript by launching a PHP server yourself in node-webkit, with the advantage of using a better webview, because I'm pretty sure the one used in wxPython is pretty old.
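The pattern this comment proposes, spawn the server as a child process and wait until it answers before showing the webview, looks roughly like this. In a real node-webkit app you would spawn something like `php -S 127.0.0.1:8000` from JavaScript; here Python and its built-in `http.server` stand in for both sides purely as an illustration.

```python
# Spawn a server subprocess, then poll until it accepts connections
# (the moment a wrapper could safely open its webview).
import subprocess
import sys
import time
import urllib.request

PORT = 8791  # arbitrary port chosen for the example

proc = subprocess.Popen(
    [sys.executable, "-m", "http.server", str(PORT), "--bind", "127.0.0.1"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
ready = False
try:
    # Poll for up to ~5 seconds; the child needs a moment to bind the port.
    for _ in range(50):
        try:
            urllib.request.urlopen(f"http://127.0.0.1:{PORT}/", timeout=0.5)
            ready = True
            print("server is up")
            break
        except OSError:
            time.sleep(0.1)
finally:
    proc.terminate()  # the wrapper would do this when the window closes
    proc.wait()
```

The polling step is the part wrappers most often get wrong: opening the webview immediately after `Popen` races the server's startup and shows a connection error on slow machines.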


lepunk 11 hours ago 1 reply      
It is nice and everything, but I don't really see the point. By the look of it, it's just a webview + a server running on localhost:8000.

I'd much rather use node-webkit for this kind of desktop app (note: PHP is my go-to language).

RobAley 9 hours ago 0 replies      
As people are noting, there are other GUI toolkits for PHP. In fact, there is growing interest in using PHP for more than just web apps, and I think this is only going to increase as PHP develops further as a general-purpose language and more and more non-web libraries mature. Then again, I may be biased [1].

[1] http://www.phpbeyondtheweb.com

Tehnix 11 hours ago 0 replies      
I wish there were something like this for Haskell :(... Currently I'm aware of threepenny-gui[0], but that launches in the actual browser (correct me if I'm wrong), which isn't exactly what you'd want.

[0] https://hackage.haskell.org/package/threepenny-gui

psykovsky 11 hours ago 0 replies      
I should maybe add that the main reason I posted this project to HN was that I noticed they need a push, i.e. contributors.

Oh, yeah, about the submission title, I couldn't find anything that described it better, even after a lot of head scratching. No, I'm not eloquent, I know... ;)

denysonique 10 hours ago 0 replies      
This is a good project, it opens the desktop platform to PHP developers.

I am no fan of PHP myself, and for this kind of requirement I would use node-webkit, which provides seamless integration of the webview and host JS (Node.js).

But the most important thing here is that it opens the door to desktop programming for PHP devs, and therefore we may see more useful desktop apps being developed. Previously, some PHP devs may have had interesting concepts for the desktop but weren't able to make them happen due to a lack of skills or time to learn GUI programming.

aruggirello 10 hours ago 1 reply      
I think the title is a bit misleading; the proper GUI frameworks currently available for PHP are wxPHP (working, and you can export PHP code from wxFormBuilder), Qt (experimental, I think), and GTK (probably abandoned).
kyriakos 11 hours ago 0 replies      
I can't think of many use cases for this but it does look like a cool project.
bigtunacan 8 hours ago 0 replies      
I shuddered when I saw this. It was like looking at the inverse of Java applets. While it's interesting as a flight of fancy for the developer, I would hope a real GUI toolkit would be used for a serious application.
leeoniya 11 hours ago 0 replies      
Anyone else remember WinBinder [1] or wxPHP [2]?

[1] http://winbinder.org/ (inactive since '10)

[2] http://wxphp.org/ (active)

emehrkay 11 hours ago 0 replies      
Seems to work by using Python's wx module for HTML display. I can't find info on what kind of HTML5/CSS3/JS support it has; anyone know?
girishso 9 hours ago 1 reply      
Looks like a clone of node-webkit. What are the serious frameworks for cross-platform GUI these days?
zkhalique 7 hours ago 0 replies      
How is this different than MacGap etc.?
tmmm 11 hours ago 0 replies      
Looks awesome! Any examples though?
10098 9 hours ago 0 replies      
Worst idea ever.
huhtenberg 11 hours ago 3 replies      
> Create your next OS X, Windows or Linux desktop application in pure PHP

No, thank you.

More specifically, I spent last Saturday setting up PocketMine (a Minecraft server) for my kids. This remarkable piece of engineering is a console app written completely in PHP, and one of the first things it logged was "Can't keep up! Is the server overloaded?". That's on a completely idle server on a beefy box. And all I could think was how regrettable it was that a clearly capable programmer voluntarily painted himself into a corner by picking a language that wasn't fit for the job. Same thing with GUI in PHP: yes, it's doable, and yes, there's probably demand for it, but that demand is misplaced and misguided, and it's not worth endorsing. It's like handing devs a heavier weight to sink deeper into the tar pit instead of a rope to climb out. Desktop apps should never be written in PHP unless it's some sort of quick, disposable hack, which is likely not what this project has in mind.
