hacker news with inline top comments - 8 Jun 2016
1
Mastering Programming: An Outline facebook.com
232 points by KentBeck  6 hours ago   42 comments top 15
1
jondubois 1 hour ago 1 reply      
This article should be called 'Mastering large-scale team programming'. In reality there is no single correct approach to programming. All programmers/engineers/developers have different specializations.

Some developers are really good at getting an MVP out the door quickly but their code may not quite work at scale. Others are good at working in large teams on large projects, others work better alone or in small teams. These different types will produce different types of code - the utility value of various programming habits changes based on team size, project size and urgency requirements.

There could be some 'Master MVP programmers' and 'Master team-player programmers', 'Master large-project programmers'... You can rarely put them all under a single label - As developers we tend to get stuck with particular styles depending on which kinds of companies we have worked for.

It is not quite correct to assume that because a company is financially successful and handles many millions of users, that its methodologies are the only correct way to do things.

Programming is an adaptive skill and should change based on economic/scale requirements.

2
0mbre 1 hour ago 1 reply      
Something that is helping me a lot recently is trying to know all there is to be known about the tools/concepts that I am using and the problem that I am solving. Too often have I used tools I half understood to solve problems that I didn't define clearly enough.
3
keyle 3 hours ago 2 replies      
Sadly just an outline but I didn't mind that. Good read.

I'd add a few things I've noticed over the years. Great developers like to pair program on tricky stuff, because they always learn something. They will try 2 different implementations if they're not sure which is best, and then the right one will be obvious. They back their arguments with real world proofs. They try most fringe/new technology, even if it's not right for the current project. They hold large amounts of domain knowledge in their heads. They admit when something is twisting their brains, draw it on paper, talk about it and then bang it out. They fantasize about what-if, in-a-perfect-world scenarios. And they love to share knowledge.

4
sleepychu 11 minutes ago 0 replies      
My top piece of advice: programs behave predictably; when something impossible is happening, it's because one of your assumptions is wrong. When that happens you'll find the bug the moment you start testing your full set of assumptions.

For some reason, even though this is invariably true, my friends at school didn't appreciate "I can't understand why I'm seeing this weird behaviour", "One of your assumptions is wrong!" xD

5
bcbrown 6 hours ago 3 replies      
> Call your shot. Before you run code, predict out loud exactly what will happen.

That's probably my favorite bit of advice. It really helps with understanding how much your assumptions diverge from reality.

6
elliotec 3 hours ago 0 replies      
I'd like to see this fleshed out more with examples, because I don't really know what some of these mean.
7
sigill 1 hour ago 2 replies      
The article makes a number of good points. The first three points in the "Learning" section resonated very well with me.

Then there's stuff I just don't understand. For example:

> Multiple scales. Move between scales freely. Maybe this is a design problem, not a testing problem. Maybe it is a people problem, not a technology problem [cheating, this is always true].

What does he mean by scales?

8
KentBeck 2 hours ago 2 replies      
I updated the description of 80/15/5, a career risk management strategy.
9
shadesof 3 hours ago 1 reply      
> When faced with a hard change, first make it easy (warning, this may be hard), then make the easy change.

This is my favorite bit. Katrina Owen mentions this in her talk on refactoring. "Make the change easy; then make the easy change."

https://www.youtube.com/watch?v=59YClXmkCVM

10
kevindeasis 2 hours ago 0 replies      
Does anyone else know other paths/checklists from beginner programmer to expert/senior programmer in different domains (front-end, back-end, dev-ops/sysadmin, android, ios, system programming, gaming, 3d, image, video)?
11
quincyla 2 hours ago 1 reply      
There is wisdom behind these bullet points. This wisdom could be better communicated through a series of fleshed-out articles with real life examples.

Otherwise, these points are difficult to contextualize, retain, and apply.

12
hcarvalhoalves 3 hours ago 0 replies      
This is a great summary. One could change the title to "Mastering Problem Solving" and it would be just as true.
13
riazrizvi 1 hour ago 1 reply      
How does this drivel get upvotes?
14
makuchaku 2 hours ago 0 replies      
I just noticed, this note is being served by www.prod.facebook.com? Note : ".prod.facebook"
15
partycoder 2 hours ago 1 reply      
While the author is known (technical coach at Facebook, creator of XP software methodology), I sort of disagree.

You can follow this guide and still be a low value programmer. This guide won't take you to mastery level.

And, there is also a sense of irresponsibility around one item: "easy changes". Easy changes as in, duct tape programming? That's pretty much turning your project into a Jenga tower... you add your "easy change" that incurs technical debt, fix a problem... but lower productivity for the changes that follow. It also sets a bad example for other people to follow.

2
25-year-old lived for more than a year without a heart sciencealert.com
88 points by rocketpastsix  4 hours ago   29 comments top 7
1
kerryfalk 2 hours ago 2 replies      
Admittedly I didn't read the whole story, the click-bait title got me.

I had a girlfriend with a similar device attached to her (an LVAD). At first it was a little unnerving seeing her unplug the batteries that were keeping her alive and charge them every night. It quickly became normal for me. Life with it seemed relatively normal except she had to have a purse with her at all times (it carried the batteries).

The interesting part was when she received a transplant and went cordless. After being attached to it for two years and not having a heartbeat, the sound of her heart beating kept her up at night for a couple of weeks. I hadn't even considered the beating of a heart to be relevant to our daily lives until she mentioned it once she heard hers beating again (an LVAD is just a constant velocity pump. The blood is always flowing).

Not sure how much it really adds to the conversation other than an interesting anecdote.

2
jamesblonde 8 minutes ago 0 replies      
Carmat have a competing device that's been implanted in 5 patients so far. It is much more technically advanced, designed by Alain Carpentier, I think, to prevent blood clots. However, it's a pump-based device and the jury is still out on how long pump-based devices can last. They expect the Carmat heart to last 5 years.
3
chillacy 1 hour ago 0 replies      
The pump is shown in this video: https://www.youtube.com/watch?v=i9WUHSJrhm4

It's surprisingly loud

4
mentos 3 hours ago 4 replies      
Separate thought: if you were to replace your lungs with a machine that oxygenated your blood, could you calmly sit without breathing? Is the anxiety/impulse to breathe due to a lack of oxygen?
5
andrewclunn 3 hours ago 3 replies      
I wonder if it affected his mood at all. I mean is the heart pumping of anxiety or anger a symptom or part of a feedback loop?
6
Ma8ee 2 hours ago 2 replies      
That surprises me. I thought that we had ample proof that Cheney lacked a heart.
7
darawk 3 hours ago 4 replies      
This is not new or particularly interesting. Dick Cheney had one of these for a long time. I knew someone who had one too. I'm not sure why this is being written about as if it's new.
3
Mentoring in Gaza's first hackathon dopeboy.github.io
217 points by dopeboy  7 hours ago   47 comments top 16
1
iamcreasy 1 hour ago 0 replies      
"I met a Gaza Sky Geeks staff member who told me about his story. His parents had to flee their ancestral home from Jaffa in 1948. He lost his childhood home in the war of 2014. He also lost friends and neighbors in the war.

Despite all of this, you would never be able to guess of any of his past after talking to him. In fact, the only reason I knew to talk to him is because I overhead someone else mention his past. Hes the most upbeat, jovial guy at the space who was on his way to the U.K. in a couple days with an eye on a seed round for a startup hes working on. One might expect atleast some chip on the shoulder, some bitterness, maybe even a little anger. I havent seen that from him or any of other Gazans Ive met and thatmore than anything elsehas been the biggest surprise for me on this trip."

This is amazing.

2
jalami 42 minutes ago 1 reply      
My father was born in Gaza, but got out as soon as he could and went to college in the states. I'm glad this is happening! There's certainly talent there like everywhere else. A big problem I see with Gaza digging itself out of these problems is the destruction of infrastructure, inability to get supplies and the effect this has on the job market. Software development is one of those few professions that, as you touched on, allow Palestinians to find a somewhat stable flow of work.

Thanks for mentoring and doing what you can there. I'd like to go and see some extended family sometime myself.

Edit: How available are things like computers? I remember when my dad brought electronics over as gifts it was usually something more-or-less unattainable, but this was years ago: Super Nintendos and things like that.

3
aidos 4 hours ago 1 reply      
What was the process like in terms of applying for the Israeli military permit? Did you experience any problems with border control moving in and out of Israel/Gaza? Did you take your own laptop? What's the internet like? I'm assuming you don't get mobile coverage?

There are ground rules every mentor had to agree to before going. We could not go anywhere unaccompanied - Whose rule is this?

So. Many. Questions!

4
this2shallPass 55 minutes ago 1 reply      
>>6 The gulf (UAE, Saudi, Qatar, etc) is the regional tech hub.

I would imagine Israel is the regional tech hub, even if there are few to no interactions with tech people in Gaza. Would it be more accurate to say the gulf is the regional tech hub for these programmers' day to day lives?

5
bbcbasic 4 hours ago 1 reply      
This is turning into a very interesting AMA. Thanks dopeboy.

I wanted to ask if you felt in any danger at any time. Either in Gaza or en route through the checkpoints?

What are the ambitions of the participants? Do they want to work / set up companies in Gaza or work abroad?

6
Myrmornis 5 hours ago 2 replies      
Excellent write-up and well done for doing this!

Were there any female instructors? If there had been some could they have led a female-only session in the evening?

7
tuna-piano 6 hours ago 1 reply      
Great writeup.

You touched on it - but was there much political talk at all? Or was it just the normal kind of conversations?

How was their English?

8
bjourne 4 hours ago 0 replies      
Wouldn't it be cool if there was street view in Gaza? If you search for Khan Yunis you find lots of pictures of dead bodies and smoking buildings, but absolutely nothing about what the city actually looks like.
9
Myrmornis 4 hours ago 1 reply      
Are there any relevant internet restrictions?

Is there any program for remote google-hangout type help, or remote buddy system, or something?

10
faizmokhtar 6 hours ago 0 replies      
Wow, this is an awesome write-up. Kudos to you.
11
Myrmornis 5 hours ago 1 reply      
What was the educational background/level of the participants? Where did they get programming skills from?
12
astronautjones 3 hours ago 2 replies      
this is awesome. do you think you or anyone you worked with might be affected by that terrible anti-BDS law Cuomo passed (if you were in its jurisdiction?)
13
blisterpeanuts 5 hours ago 2 replies      
Really interesting. Shame that they can't deal directly with Israelis, with all of their start-ups and like-minded technology types.
14
wprapido 4 hours ago 0 replies      
amazing story, dude!
15
partycoder 3 hours ago 1 reply      
The Gaza strip is an enemy state of Israel so, if you are Israeli, you immediately lose your citizenship by entering the Gaza strip (or any of the other enemy states).

https://en.wikipedia.org/wiki/Israeli_nationality_law#Cancel...

16
dannypgh 6 hours ago 2 replies      
Political talk is abnormal? The number of conversations I've had about Trump recently must make me an outlier, then.
4
When Haskell is Faster than C (2013) paulspontifications.blogspot.com
47 points by mightybyte  3 hours ago   39 comments top 11
1
tikhonj 2 hours ago 6 replies      
The title is a bit click-baity, but the core point is sound: C is not fast, it's optimizable[1].

If you just write readable, friendly C code it won't be all that much faster than normal code in a high-level language like Haskell; it might even be slower. I know, I've done that myself. But you never see that in benchmarks, do you? That's not what benchmarks are about.

Here's another illustration: we can compile high-level languages like Haskell and Scheme to C. Does this make them "as fast as C"? Yes, they're literally running as C. But also no. Hand-optimized C code is going to beat those compilers any day. It's not even close.

C is not magical performance sauce you can sprinkle over your code to make it fast. It's just a language that doesn't have much mandatory overhead (no GC, minimal runtime... etc) and gives you access to certain low-level knobs that you can twiddle to optimize your code. I mean, that's important and useful, but unless you're going to apply that level of effort to your whole codebase, most of your C code won't be all that fast. Some applications need this level of optimization; most don't.

[1]: I actually wrote a little article about this myself too: http://www.forbes.com/sites/quora/2014/01/09/can-a-high-leve...

2
jlg23 2 hours ago 1 reply      
That's a great short read on the idiocy of over-optimization. I get a lot of similar questions when people learn that I write Common Lisp for a living - in most cases they go silent when I provide a sufficiently efficient solution to a problem that was just given to me in the meeting in which we were supposed to agree on a schedule for the implementation... (which btw also happened a lot when I was mostly hacking in Perl 15 years ago - if you haven't read "Beating the Averages"[1], read it now!).

[1] http://paulgraham.com/avg.html

3
theseoafs 4 minutes ago 0 replies      
A ridiculous article. Yes, Haskell will outperform C if you write a crazily inefficient C program.
4
im3w1l 1 hour ago 1 reply      
> I wrote the inner loop of the C to operate on a linked list of blocks

Linked lists are slow...

5
vvanders 2 hours ago 1 reply      
If you're not using C/C++ to take advantage of locality of reference, low memory footprint and the things that make C/C++ fast then why the hell are you using it?

Seems like this should just be filed under "right tool for the job".

6
chadaustin 1 hour ago 1 reply      
If you find yourself setting out to write Haskell that can outperform C, you will probably be disappointed. I love Haskell, but the myth that it just takes a little bit of elbow grease to make it as fast as C really needs to die. See for example: https://chadaustin.me/2015/02/buffer-builder/

THAT SAID, I have a relevant story. IMVU used this file format called "CFL" for its 3D content. It was basically a kind of zip file except it used LZMA for asset compression. The original CFL library was written in C++, and while it worked fine, it was getting annoying to maintain and compile it, as well as inefficient to pass data from Python file buffers into C++ and back out. So one day I decided to see if I could replace it with a bit of Python code and pylzma. After I made this change, parsing our content files was _twice_ as fast. Having the code in a couple hundred lines of simple Python allowed me to optimize the data flows, minimizing copying from the file buffers into the LZMA decoder and then to the consumer of the data.

Sometimes the best way to make something fast is to write it in a safe, expressive language that lets you directly express your intent. :)

8
rtpg 1 hour ago 0 replies      
A similar paper on Haskell's stream fusion has a pretty similar PDF filename (http://research.microsoft.com/en-us/um/people/simonpj/papers...). With benchmarks showing the "naive" implementation in Haskell of a simple stream problem beating hand-tuned C.

Haskell is still pretty well-defined in terms of execution, but freedom from having to manage memory is in itself a major liberator for the optimisers.

9
bipvanwinkle 1 hour ago 0 replies      
I think it's worth pointing out though that idiomatic C is probably going to be more consistently performant. It seems common to run into situations in Haskell where one change can cause a 10X speedup, but I don't see that nearly as often with C code. I don't have a lot of evidence on hand to support this, just what I've observed personally. Does this seem fair? Relevant?
10
divkakwani 37 minutes ago 1 reply      
What tools did the OP use for profiling his C and Haskell code?
11
tacos 2 hours ago 0 replies      
"...you don't need C. Haskell will give you the same performance, or better, and cut your development and maintenance costs by 75 to 90 percent as well."

I want both the compiler from the year 2030 and the drugs this guy has.

5
Electric vehicle battery costs rapidly declining, Tesla cited as leading evannex.com
310 points by mgdo  11 hours ago   199 comments top 16
1
narrator 10 hours ago 13 replies      
I find it amusing to see so many people questioning why Tesla didn't release a cheap Tesla sooner. The answer is: They don't have the batteries to do that. That's why they are building the GigaFactory.

People don't get that, on a large industrial scale, you actually have to think about supply. They think you just spend money and supply shows up because they are used to thinking on a household scale where supply is generally infinite for anything they could possibly consume.

I think one pervasive economic fallacy is that any amount of money can fix anything. We just need to find someone to write a big enough check, either the government or wall street and all those batteries will materialize instantly out of some technology horn of plenty. This has driven the focus on "stimulus" and "aggregate demand" in various forms of economic dialogue. We could use more supply side thinking about very large problems that the economy faces, especially in the public sector. Health Care, for example, is one of those things where, if you're not thinking about supply, spending more money just makes everything more expensive.

2
pmarreck 5 hours ago 1 reply      
Tesla is going to be a gigantic company one day.

Disclaimer: I own one (Model S P85+, about 2 years old). And everyone I let drive it (yes, I am crazy enough to do that... As a fan who pushes the bleeding edge, it's important to share the experience in order to change minds... Anyway, my insurance covers me even when I'm a passenger!) cannot stop talking about it afterwards. I actually think Tesla's market is limited right now not only by the people who can't afford one, but by the people who haven't even had the opportunity to drive one yet and have thus not yet had their eyes opened...

Driving is believing. If you have any opportunity at all to experience driving one... Do not hesitate. It makes everything else feel like a noisy clunker.

The older guys I let drive it, like the real car guys of old... they have the best reactions. The look of shock and disbelief on their faces... The stories they start telling about their first muscle cars... The whooping and "OH MY GOD"'s and whatnot... It's totally awesome

3
OliverJones 7 hours ago 1 reply      
US$145 / kWh -- GM's late 2015 price for Volt / Bolt power -- is OK.

But the differential in price, in New England where I live, between peak and offpeak electricity, is US$0.027 (2.7 cents) per kWh. I worked it out: my investment in a battery setup at those rates would take about 17 years to pay off: far too long.
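(Rough check of that 17-year figure - my numbers, not the parent's: take GM's $145/kWh cell cost from above, assume one full charge/discharge cycle per day and roughly 85% round-trip efficiency.)

    $145 per kWh / ($0.027 saved per kWh-cycle x 0.85) ~= 6,300 cycles
    6,300 cycles / 365 cycles per year ~= 17 years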

If we want this to work, we need a smart grid: a grid that can announce pricing based on current costs. Then we need baseload electricity costs (hydro, nuclear, gas, coal) to be significantly lower than peakload (fuel oil, Storm-King style pumped gravity storage) costs.

The smarts for a household energy storage system wouldn't be hard to work out IF the grid were smart enough to advertise present costs, and meters were smart enough to bill for present costs. My Power Wall could charge with cheap power and run my lights during a nasty summer brownout.

I understand they're experimenting in Europe with announcing prices using the FM radio sub channels now used to display song names. That's interesting.

4
sremani 10 hours ago 3 replies      
Caution:
1. A Tesla accessory site.
2. The report is full of projections and it's hard to understand what is wishful and what is not.
3. GM: $145/kWh in Oct 2015 vs. Tesla: $100/kWh in 2020 - how is Tesla killing it?
5
ChuckMcM 3 hours ago 0 replies      
I would love to see $100/kWh batteries. Worst case, my house uses 30kWh a day in electricity so 90kWh would be a 3 day+ UPS for the house. Update the solar on my roof and be off grid for an additional $9K in batteries? That would be totally doable for me.

I think it would be hilarious if houses went back to just having a gas hookup like it was prior to the spread of electricity.

6
jpm_sd 9 hours ago 1 reply      
"since 2008 [...] battery energy density had a fivefold increase"

Citation needed! In that time period, the highest energy density 18650 cells on the market have gone from ~200 Wh/kg to ~250 Wh/kg.

7
olivermarks 10 hours ago 1 reply      
Look to China for mass production of cost effective batteries, not Tesla... firms like EV West help me demystify the true state of maturity of this market http://evwest.com/catalog/
8
macspoofing 8 hours ago 1 reply      
Can Lithium battery supply even meet the demand if a significant portion of the auto market switches to electric?
9
Aelinsaar 11 hours ago 1 reply      
Between this and the projected increase in PV solar efficiency, drop in cost and toxicity of manufacture... I don't think I've felt this kind of cautious optimism for a renewable energy infrastructure before.
10
mtgx 11 hours ago 2 replies      
4x drop in battery prices, (or) 5x increase in density in only 8 years. Pretty damn good for batteries that "aren't following Moore's Law", something most battery-related articles are quick to remind us. Hopefully, this continues for at least another decade, which should make EVs more than competitive with ICE cars.
11
disposeofnick9 9 hours ago 1 reply      
Can't wait to buy the plastic battery holder grids for Tesla-sized cells. My first 18650 pack is made from 4 laptop batteries and is capable of about 63 A at 0.2C.
12
Animats 10 hours ago 3 replies      
From the article: "Since 2008, estimates of battery costs were cut by a factor four and battery energy density had a fivefold increase."

We're not seeing that kind of improvement in mobile devices.

13
Shivetya 9 hours ago 1 reply      
So we're only talking about numbers they are hoping for, not currently in production? GM's numbers are current; what are Tesla's current battery costs? Just because they want $100/kWh by 2020 doesn't mean they are leading the charge, simply leading the wish list.

Still waiting on density improvements, because frankly 400kg for 200-odd miles of range is not good. Of course, higher density means better charging, and hopefully standards are ready for it.

14
agumonkey 9 hours ago 2 replies      
What about recycling btw ?
15
tn13 9 hours ago 1 reply      
I don't see how cars will have any impact on battery prices. How many cars do we sell each year? Even if we assume 10% of cars are electric, it is nothing but a drop in the ocean in terms of the number of batteries being sold.

I am unable to see how the Tesla car could have any impact on the battery industry in terms of economy of scale. Any battery-based solution for homes etc. could possibly bring economy of scale into the picture.

16
bunkydoo 5 hours ago 3 replies      
The big problem still lies in the fact that most electricity is generated from coal and nuclear. Every Tesla on the road today is effectively a coal-powered car with the potential to be converted to green energy, solar panels likely being the equilibrium. But the decline in battery cost is good; it means it won't be long before they stop losing $1k on every car sold.
6
Planned GPS outages in southern California avweb.com
319 points by eajecov  13 hours ago   180 comments top 23
1
lb1lf 12 hours ago 5 replies      
From the article: "Operators of Embraer Phenom 300 business jets are being urged to avoid the area entirely. 'Due to GPS Interference impacts potentially affecting Embraer 300 aircraft flight stability controls, FAA recommends EMB Phenom pilots avoid the testing area and closely monitor flight control systems,' the Notam reads."

That is beyond scary; how anyone can defend having critical aircraft control systems rely on an input which may be turned off at will is beyond me.

Let us at least hope the system fails gracefully and notifies the pilot that something odd just happened and you will have to do your own flying from this point on, rather than just going titsup and be done with it.

2
matt_wulfeck 12 hours ago 7 replies      
Testing against GPS is an interesting challenge. It's technically a critical public service, so any disruption of it should also be broadcast. Any weapon therefore must be tested out in the open.

The development of these weapons probably has some influence on the Navy's decision to bring back celestial navigation[1].

Come to think of it, I don't think I even own a compass.

[1] http://www.npr.org/2016/02/22/467210492/u-s-navy-brings-back...

3
cryptoz 12 hours ago 0 replies      
4
supernova87a 9 hours ago 2 replies      
So, here is just a little bit of amateur desk research into some things we might be able to gather from the information:

The FAA flight advisory provides the coordinates and the nature of the GPS signal disruption, which is centered near China Lake, and has expanding rings of area, each of which rises in altitude. For the pilots out there, imagine the classic upside-down wedding cake shape. Or cone with its point at the ground.

This would seem to indicate some kind of broadcast or interference from a source that is located at the ground, propagating line of sight with larger radii with altitude. Rather than something to do with the satellite itself.

The center of the coordinates are 360822N, 1173846W, which is in a big empty desert area, just south (SSW of Darwin, California), see here: https://www.google.com/maps/place/36%C2%B008'24.0%22N+117%C2...

It could of course be some kind of antenna, or even a flight that is producing this signal. But there's also an interesting long V-shaped two-legged testing(?) facility just to the east of these coordinates, which you can see in the Google Earth image. I might be mistaken about what that facility is, because aeronautical sectional charts also show a mine in that area, but this doesn't look like a mine site. Also there are a bunch of vehicles that look like Humvees on the pad nearby. And there are three antenna looking structures at the north end of the paved line.

Anyway, it's interesting to speculate about.

5
alpb 11 hours ago 2 replies      
From Reddit comments at r/aviation (https://www.reddit.com/r/aviation/comments/4msmh7/gps_interf...) it appears this could affect civilian GPS usage such as geolocation apps. I wonder if Google Maps or any other GPS apps should be showing a warning, because those apps could just behave weirdly?

As a foursquare/swarm user myself I would be quite pissed off by my OCD if I cannot check in to places I go haha.

6
guelo 12 hours ago 6 replies      
Interesting that the interference occurs 50' Above Ground Level, not sea level. I can't even imagine what technology that is that can somehow jam along the contours of the earth.
7
Artlav 12 hours ago 3 replies      
I wonder how many of the plane "GPS receivers" can pick up GLONASS as well?

Most consumer units can see both networks these days, however aviation tech is known to lag a lot in such matters.

8
tjohns 11 hours ago 1 reply      
Interstingly, it looks like this is a semi-regular thing in different parts of the US:

https://www.google.com/search?q=FLIGHT+ADVISORY+GPS+Interfer...

Last one looks to have been May 22-23 in Louisiana, with another one from June 1-30 in New Mexico.

9
lutorm 11 hours ago 3 replies      
How does the FAA think this is going to work after 2020, when air traffic control will run off of ADS-B positional telemetry from aircraft? It seems a GPS shutdown like this would basically shut down IFR flying and airport terminal control since ATC has no other way of knowing the position of airplanes.
10
disposeofnick9 10 hours ago 2 replies      
Why on earth does the military-industrial complex need to spend money on a duplicate, irrelevant technology? I worked at Trimble Nav in the radio group, and the POTUS has the ability to increase selective availability (SA) to an enormous number (which can be defeated by differential/kinematic corrections, but was set to 0 by executive order under Clinton) or disable the unencrypted channel entirely for a particular region or the entire planet (which includes space). WTF!
11
brc 12 hours ago 2 replies      
I know someone who was at sea when their GPS stopped working. They found out later a nuclear sub had come into port around the same time. Seems like it wasn't a coincidence as gps outages are rare.
13
jackgavigan 6 hours ago 0 replies      
GPS jamming is old hat.

A lot of weapons testing takes place at China Lake (where the disruption will originate from), including missiles and guided bombs that use combination GPS and inertial guidance systems.

They're probably testing various weapons systems' ability to continue to function in the face of GPS jamming.

14
sandworm101 5 hours ago 0 replies      
Um .. why do they need a weapon to "disrupt" GPS? It's their birds. They can turn it on/off selectively whenever and wherever they deem necessary. Or is this meant as a test of something to disrupt the Russian/Chinese systems?

And why southern california? Alaska, the pacific... northern Canada ... there are lots of lower-traffic areas. We aren't getting the full story.

15
KateBone 12 hours ago 1 reply      
I'm definitely crossing off the Embraer Phenom 300 from my shopping list of personal jets !

Seriously though, who thought this was a good, or safe, idea?

16
nameless912 12 hours ago 4 replies      
I, for one, cannot believe that GPS doesn't have a pre-prod environment. I guess they just don't grok dev ops like us young hip developers.

No but seriously, the fact that we don't have a backup for when GPS inevitably shits the bed sometime in the future is a fundamental existential threat to mankind. We should probably do something about that.

17
chockablock 6 hours ago 0 replies      
According to the NOTAM [1], this extends to SF Bay area as well, even near ground level.

Tests may be repeated on June 7, 9, 21, 23, 28, and 30, between 9:30a-3:30p.

[1] https://www.faasafety.gov/files/notices/2016/Jun/CHLK_16-08_...

18
gene-h 9 hours ago 0 replies      
I have a hunch that some people are going to have difficulty withdrawing cash from ATMs[0] tomorrow. Although, perhaps they are being overly cautious here.

[0] https://www.newscientist.com/article/dn20202-gps-chaos-how-a...

19
Animats 10 hours ago 0 replies      
I wonder what this will do to ground-based GPS users. Aviation doesn't really need GPS; aircraft have multiple other systems. But phones, cell towers, and other devices have no other position input. Car navigation systems may become lost. We now get to see which GPS units have enough smarts to detect inconsistent data.
20
emblem21 4 hours ago 0 replies      
Satellite warfare adaptability simulation?
21
Raphmedia 11 hours ago 1 reply      
Something like that could wreak havoc in a world where all cars are self driving...!
22
awqrre 8 hours ago 1 reply      
Does that have anything to do with voter suppression for today's election?
23
shapiro44 12 hours ago 3 replies      
Coincidence? Is this meant to disrupt the California primary? Voters getting lost on way to voting stations without reliable GPS. There are a lot of first time voters.
7
Program your next server in Go golang.org
424 points by rjammala  14 hours ago   274 comments top 36
1
fpgaminer 12 hours ago 13 replies      
All of the server backends at my company are written in Go. This was a result of me writing a couple servers in Python a few years back, ending up with lots of problems related to hanging connections, timeouts, etc. I tried a couple different server libraries on Python but they all seemed to struggle with even tiny loads. Not sure what was up with that, but ultimately I gave Go a swing, having heard that it was good for server applications, and I haven't looked back. It has been bullet proof from day one and I am overall happy with the development experience.

That was the good. The bad? Garbage collection, dependency management, and lack of first-tier support in various libraries. Garbage collection makes the otherwise lightweight and speedy language a memory hog under heavy loads. Not too bad, but I have to kick the memory up on my servers. Dependency management is a nightmare; honestly the worst part about it. The lack of first-tier support in various libraries is a close second. AWS's API libraries had relentless, undocumented breaking changes when we were using them, all on the master branch of their one repo (breaking Golang's guidelines for dependencies). Google itself doesn't actually have any real API libraries for their cloud services. They autogenerate all API libraries for golang, which means they're not idiomatic, are convoluted to use, and the documentation is a jungle.

We continue to use Go because of its strengths, but it just really surprises me how little Google seems to care about the language and ecosystem.

2
oconnor663 2 hours ago 1 reply      
> When writing code, it should be clear how to make the program do what you want. Sometimes this means writing out a loop instead of invoking an obscure function.

For example instead of the obscure function

 a.reverse()
you can use the clear for loop

  for i := len(a)/2-1; i >= 0; i-- {
      opp := len(a)-1-i
      a[i], a[opp] = a[opp], a[i]
  }
:(

3
matthewmacleod 12 hours ago 1 reply      
Go has been great for me at providing things like simple microservices, network plumbing, CLI tools and that kind of thing. The C integration is also super simple and makes it easy to wrap up third-party libraries.

It's also a bit tedious to write in practice. It's dogmatic, and that's obviously a benefit in some ways, but it comes with the cost that, in my experience, quite a lot of time is wasted fiddling around with program structure to beat it into the way Go wants it to work. Dependency management is better with Glide but still not perfect. The type system is quite annoying, and although it's a cliché, the lack of generics is a real pain. Lots of silly casting to and from interface{} or copy-and-pasting code gets old quickly.

Still, it's a great tool for its niches and I really think everyone should pick it up and use it - the idea of simplicity it promotes is actually kind of interesting, in contrast to the "showy" features one might expect of a modern language.

4
avitzurel 13 hours ago 4 replies      
I love Go.

It has become the default Go-To (pun intended) language for me for almost anything that needs to be small and portable.

However, I don't see myself writing a full server with it, I would still prefer a dynamic language like Ruby/Python for that and use Go for micro-services CLIs and the rest.

For example:

Our main application is Rails; it communicates with SOLR as the search index. In between the application and SOLR there's a proxy server that backs up the documents onto S3 and also does round-robin between slaves.

One other thing is that we use Go to communicate with all external APIs of 3rd parties, the application code is rails and it communicates transparently with a Go server that fetches the data from 3rd parties and responds to the main application.

5
tokenizerrr 13 hours ago 5 replies      
What about debugging? This is the major pain point for me. I've tried using GDB, but...

> GDB does not understand Go programs well. The stack management, threading, and runtime contain aspects that differ enough from the execution model GDB expects that they can confuse the debugger, even when the program is compiled with gccgo. As a consequence, although GDB can be useful in some situations, it is not a reliable debugger for Go programs, particularly heavily concurrent ones. Moreover, it is not a priority for the Go project to address these issues, which are difficult. In short, the instructions below should be taken only as a guide to how to use GDB when it works, not as a guarantee of success.

https://golang.org/doc/gdb

6
nimmer 13 hours ago 5 replies      
I'd love to see Nim on this diagram: https://talks.golang.org/2016/applicative.slide#13 - it could be close to the top right corner.
7
voltagex_ 44 minutes ago 0 replies      
Can anyone convince me to use Rust over Go, or the other way around?

My target machines range from i7s with massive amounts of RAM to Raspberry Pi with slightly-less-massive amounts of RAM.

8
robohamburger 12 hours ago 0 replies      
I will have to try Go again. It seemed really awesome at first then quickly seemed like a regression in a lot of PL design things (which is good in some cases). I personally like rust but maybe I am a glutton for type based punishment.

Solution: design the language for large code bases

This seems crazy but whatever works. I would assume that would only buy you some wiggle room inside whatever order of magnitude of committers you have. It seems like eventually you would need to split up the code base if you are having contention issues.

9
shurcooL 11 hours ago 2 replies      
I like slide 41 [0].

  What just happened?

  In just a few simple transformations we used Go's concurrency primitives to convert a

  - slow
  - sequential
  - failure-sensitive

  program into one that is

  - fast
  - concurrent
  - replicated
  - robust.

  No locks. No condition variables. No futures. No callbacks.
It's the ability to make these kinds of transformations effortlessly at any level, whenever I need to, that makes me appreciate choosing Go when solving many tasks.

[0] https://talks.golang.org/2016/applicative.slide#41
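(For readers who haven't opened the slides: the pattern being described is roughly the classic "replicated search with timeout" example. A minimal sketch - my reconstruction with made-up fakeSearch backends, not the slide's exact code - looks like this:)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // fakeSearch stands in for a replicated backend with variable latency.
    func fakeSearch(kind string) func(string) string {
        return func(query string) string {
            time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
            return fmt.Sprintf("%s result for %q", kind, query)
        }
    }

    // first queries every replica concurrently and returns whichever answers
    // soonest, or an error if none replies before the timeout.
    func first(query string, replicas ...func(string) string) (string, error) {
        c := make(chan string, len(replicas)) // buffered: slow replicas don't leak blocked goroutines
        for _, search := range replicas {
            go func(search func(string) string) { c <- search(query) }(search)
        }
        select {
        case result := <-c:
            return result, nil
        case <-time.After(80 * time.Millisecond):
            return "", errors.New("timed out")
        }
    }

    func main() {
        result, err := first("golang", fakeSearch("replica1"), fakeSearch("replica2"))
        fmt.Println(result, err)
    }

(Swapping in real backends or adding more replicas doesn't change the structure, which is the point the slide is making.)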

10
capote 9 hours ago 0 replies      
How do the bullet points in "Why does Go leave out those features?" address why Go leaves out the features on the preceding slide?

All it talks about is clarity (important but not the only important thing) and I just don't see how any of the left-out things are inherently unclear. I think you can write clear and unclear code alike with all of those left-out features.

11
yanilkr 12 hours ago 8 replies      
I once tried to convince an enterprise java developer to give golang a try. The guy passionately hated it and the reasons were very very petty. The other younger engineers who did not have prior bias loved golang and they were productive so fast.

The person truly had a Java supremacy attitude that was very difficult to deal with. Golang demands a kind of shift in thinking: you have to first unlearn your existing ways of doing things, and then you will have a place for it. Some people are not willing to take that leap of faith, unfortunately.

12
zZorgz 9 hours ago 0 replies      
I had a PHP program that processed HTTP requests and stored some data onto a local database, and decided I needed to rewrite it for various reasons so I decided to choose Go. Some points I recall:

* Static typing is good.

* As I expected, the standard library and other packages available had the http & routing stuff I needed, which is all good.

* I like that errors are specified in function signatures, unlike exceptions in languages like ruby/python.

* I don't like errors being easily ignored, and return values being assigned default or arbitrary values. I once may have also accidentally used the wrong equality operator against nil.

* Defer is nice, but would be better if it was based on current {} scope.

* Append (on slices) has very bizarre semantics, sometimes mutating in place and sometimes returning a different reference (see the sketch after this comment).

* Initially I ran into trouble reasoning about how to use some sql package, and hit "invalid memory" dereference issues or some such when passing a reference. Thus, I'm skeptical about "memory safety."

This was only a simple program though and turned out to be worthwhile for me in the end.
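(To illustrate the append point above - my sketch, not the parent's code - the two behaviours come down to whether the slice still has spare capacity:)

    package main

    import "fmt"

    func main() {
        a := make([]int, 3, 4) // len 3, cap 4
        b := append(a, 10)     // fits in a's spare capacity: b shares a's backing array
        b[0] = 99
        fmt.Println(a[0]) // 99 - the "mutating" case

        c := append(b, 20) // capacity exhausted: append allocates a fresh backing array
        c[0] = 7
        fmt.Println(b[0]) // still 99 - the "different reference" case
    }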

13
thom 12 hours ago 2 replies      
Holding up Perl and JavaScript as examples of languages that are 'fun for humans' makes it pretty clear I'm not the target market.
14
PeCaN 13 hours ago 5 replies      
What are some cases where I would choose to write a server in Go instead of in Erlang?
15
fauigerzigerk 13 hours ago 1 reply      
">50% of code base changes every month"

I wonder what unit is being counted here. I don't think it's possible to actually review and rethink 50% of what has been created before. That's just not sustainable.

16
BooneJS 12 hours ago 0 replies      
I use Go exclusively for command-line applications, previously using Perl (ducks). It's a fairly simple language, you can pick it up quickly, and gofmt/godoc/etc are useful utilities in reducing friction.
17
squiguy7 13 hours ago 1 reply      
> Clarity is critical.

When you need to write high performance code this is a great maxim. I enjoy the simplicity of Go and the guarantees it provides. Being able to reason about code and not having to guess is a win for any development team.

18
iagooar 12 hours ago 2 replies      
One important niche I see that Go serves very well is in distributed, fault-tolerant deploy platforms (aka schedulers), like Kubernetes or Mesos. If you look at the amount of tooling that uses Go, you almost feel there just is no other choice out there.

I would not venture to say state-of-the-art schedulers would not have been possible without Go, but for sure Go fits the requirements pretty well.

19
dicroce 7 hours ago 0 replies      
"Sometimes this means writing out a loop instead of invoking an obscure function."

I can't help but think this is specifically a dig in C++'s direction. Since C++11 lambdas I've been using <algorithm> a lot more and I don't think you could get me to go back at this point... Yes, I had to learn exactly what a few methods do, but now I have beautiful straight-line code...

20
chuhnk 7 hours ago 0 replies      
Go is a phenomenal systems programming language and becoming quite useful as a general programming language too. It's clear from the projects that are now coming into existence that Go lends itself well to the world of distributed systems and from the language design you can see that it was created with network programming in mind. The fact that concurrency is built into the language and errors are treated as values that should be dealt with just highlights those facts.

We used Go at Hailo for our microservices platform and it served us incredibly well. I've gone on to create an open source project called Micro https://github.com/micro/micro that builds on those past experiences. It's just a joy to write micro services in Go.

21
bfrog 2 hours ago 0 replies      
Go is a fine language and a very good run time, though having written a large program with it I've learned its warts well enough to not want to use it again personally.
22
justinsaccount 12 hours ago 1 reply      
> Lingo: Logs analysis in Go, migrated from Sawzall

Would love to play around with this

23
inglor 12 hours ago 2 replies      
The slides are awesome and I really am fond of Go, but the examples using channels are all considerably more code to write than I'd write in C# or JavaScript with async/await, and not any more robust or safe.

Go is great for actor based systems where you model things using channels and goroutines for what they stand conceptually - not when you use it to simulate Task.WhenAll/Promise.all with a timeout.

I think _that's_ what they should be selling - that your server's architecture should typically be different.

24
satysin 13 hours ago 1 reply      
The only place I would want to use Go is for a server tbh. It isn't all that great for anything else imho.
25
Matthias247 9 hours ago 0 replies      
The presentation focuses a lot on [web] servers and google scale, but I found that Go also works quite well for applications/services on embedded linux systems.

Main pros for me there are:

 - Easy to cross compile and deploy
 - Daemons often need to do a lot of communication (some also for providing web APIs) and need to embrace concurrency. Both are covered very well by Go's ecosystem.
 - Compile-to-binary eases distribution concerns in cases where you want to avoid publishing all source code (and thereby know-how), compared to VM languages or scripting languages.

26
yvsong 12 hours ago 3 replies      
Any comment on Swift vs Go, potentially for server programming?
27
insulanian 4 hours ago 0 replies      
What kinds of applications does one write in Go? Asking this from the perspective of a developer working mostly on business apps with an Angular frontend and a .NET (C#/F#) backend.
28
poorman 6 hours ago 0 replies      
server is a pretty loose term. Most servers* these days require some sort of full stack, with frontend, ORM, etc... Go adds a lot of development time if you need all of that. ...On the other hand, one off tiny microservices, it's absolutely great!
29
sly010 12 hours ago 0 replies      
Go is my secret and I wish fewer people used it, so I had an advantage over them ;) Edit: typos
30
hobo_mark 10 hours ago 1 reply      
I might have strange requirements for a server, but I need rdma, verbs, libfabric... Is there any way to use them in an idiomatic way in go?
31
callumjones 12 hours ago 0 replies      
I think "Program your next server in Go" is a little too broad, the specific language features that Go explicitly leaves out makes it hard to build an extensive backend server. Go is best suited to use cases listed in these slides: simple services that do very focused things.

I love Go and used it to build some very useful web hook and CLI tools. It just doesn't lend itself to something where you expect to have a vast set of APIs under one Go project.

32
aj7 12 hours ago 0 replies      
Can't read on iOS
33
codedokode 1 hour ago 1 reply      
Here are the problems I had when I tried to write a simple CLI utility (a tool to run any program in a seccomp-bpf based sandbox) in Go:

- using the case of an identifier's first letter as a public/private flag. You end up with half the names starting with a lowercase letter and half with an uppercase one (the code looks inconsistent), and you forget how to spell them. And you have to rename a function everywhere when you decide to change it from private to public.

- no official package manager. Unclear how to add external libraries to your project and how to set specific version you need. I ended up adding necessary files into a separate folder in my project.

- the Go manual suggests you have a single directory for all projects and libraries. That was inconvenient because I develop on Windows and use Linux only to test and run code in the /tmp directory; I do not keep the code there. And why would I want to keep unrelated projects inside the same directory anyway?

- no rules for how to split constants, types and functions into files and folders. For example, in PHP there are certain rules: each class goes in its own file and you always know that class Some\Name is stored at src/Some/Name.php. Easy to remember. In Go you never know what goes where. Large projects probably look like a mess of functions scattered around randomly.

- no default values for struct members, no constructors

- no proper OOP with classes

- standard library is poor

- open source libraries you can find on github are not always good. I looked for library to handle config files and command line arguments and didn't like any.

- standard testing library doesn't have asserts

- easy to forget that you need to pass structures by pointer (in OOP objects are passed by reference by default). And generally use of pointers makes the code harder to read and to write.

- weird syntax for structure methods. They are declared separately from the structure.

- go has 2 assignment operators (= and :=) and it is easy to use the wrong one

- having to check and pass error values through function calls instead of using an exception. So most of the functions in your code will have two return values - a result and an error.

- no collections library

- simple things like reading a file line by line are not so simple to implement without mistakes (see the sketch after this list)

- static typing is good but sometimes you cannot use it. For example, I wanted to have the options in a configuration file mapped to the fields of a structure. I had to use reflection, and every mistake led to a runtime panic. And you cannot use complex types like "pointer to any structure" or "pointer to a reflect.Value containing a structure" or "list of anything" or "bool, string or int".
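(For the line-reading point above, a minimal sketch using bufio.Scanner - my example, not the parent's code, with a hypothetical input.txt - along with the gotchas that are easy to miss:)

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("input.txt") // hypothetical file name
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fmt.Println(scanner.Text()) // one line at a time, newline stripped
        }
        // Easy to forget: read errors (and lines longer than the default
        // 64KB buffer) only surface here, not inside the loop.
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }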

Of course Go also has many good parts that might outweigh its disadvantages, but I am not writing about them. For example, I have not used goroutines but they look like a simple solution for processing async tasks or writing servers.

I think Go is not ready yet for writing large applications. It might be ok if you write a small utility but I cannot imagine ORM like Hibernate or web application written in Go.

Also I took a look at the code in the presentation. I wouldn't want to write such code. For example, here https://talks.golang.org/2016/applicative.slide#20 they use static methods (http.HandleFunc(), log.Fatal()) instead of instance methods. So you cannot have two logs or two servers. Using static methods everywhere is bad especially in large applications. Google itself uses Go only for small utilities like simple proxy servers.

34
moron4hire 11 hours ago 0 replies      
I've been experimenting with this concept with C# recently [0], where I have a small backend written in C#, exposing a simple, RESTful HTTP server, that automatically finds itself a local port to run on and opens the default browser to a default page.

It's actually kind of nice. Until I did this the first time, I hadn't realized just how much bullshit I had previously put up with, with setting up local web servers, trying to get configurations down, etc., etc. At some point, I think most web framework's configuration options just got too complex to be considered configuration options and became weird, poorly defined scripting languages for defining web servers. Having a real programming language to do that instead is just a wonderfully smooth experience.

Some things I plan on implementing with it:

* local file system access, to ultimately implement an FSN [1] clone in my WebVR project.

* my own Leap Motion WebSocket service, because the default one doesn't use the latest Orion beta and its associate JS library is complete garbage.

* A similar dude for MS Kinect data.

* Ultimately, get the previous two to run over WebRTC instead (not easy, there is no WebRTC library for Windows outside of major browser implementations) to be able to stream their respective camera data.

* Live raytracing of model textures for baked lighting in scenes in the WebVR session.

Right now, it's just a source file I drop into a standard C# console project. I'm thinking about making it a full-on library, though at this point there isn't much need.

[0] https://github.com/capnmidnight/HereTTP

[1] https://en.wikipedia.org/wiki/Fsn

35
naivepiano 12 hours ago 2 replies      
I'm afraid HN has a serious problem with downvoters. Why - in heaven's name - is the above a question that deserves downvoting? UPDATE: hooray - I got downvoted too. Gee man. Just not worth it. Bye and thanks for the fish.
36
13 hours ago   3 replies      
The instructions are literally at the bottom of the first slide.
8
Microsoft Finds Cancer Clues in Search Queries nytimes.com
89 points by hvo  7 hours ago   29 comments top 13
1
vmarsy 6 hours ago 3 replies      
For those curious about what kind of queries the researchers were interested in: "it typically produces a series of subtle symptoms, like itchy skin, weight loss, light-colored stools, patterns of back pain and a slight yellowing of the eyes and skin that often don't prompt a patient to seek medical attention." [1]

The article name is: J. Paparrizos, R.W. White, E. Horvitz. Screening for Pancreatic Adenocarcinoma using Signals from Web Search Logs: Feasibility Study and Results, Journal of Oncology Practice, June 2016. [2]

[1]https://blogs.microsoft.com/next/2016/06/07/how-web-search-d...

[2] http://jop.ascopubs.org/content/early/2016/06/02/JOP.2015.01...

2
huuu 8 minutes ago 0 replies      
Isn't the real story here that Bing is keeping track of users' search queries for months?

Google Flu works differently. They try to predict a flu epidemic by counting related search queries.

But Microsoft is predicting the health of a single person based on his search history.

Edit: Thinking about it: of course Google, Facebook and others could do the same because they also gather user data.

3
rathish_g 5 minutes ago 0 replies      
Wait ... they used Bing for such a long time and lived to tell the tale? :)
4
panic 48 minutes ago 1 reply      
To translate the false-positive rate into concrete numbers:

According to the American Cancer Society (http://www.cancer.org/cancer/pancreaticcancer/detailedguide/...), about 53,070 people will be diagnosed with pancreatic cancer this year. The abstract says this method detects 5% to 15% of cases: that's about 2,700 to 8,000 correct detections. Assuming there are 100 million people using Bing (https://www.quantcast.com/bing.com), between 1,000 and 10,000 cases will be wrongly detected (0.00001 to 0.0001 false positive rate).

5
seizethecheese 5 hours ago 2 replies      
Methods: We identified searchers in logs of online search activity who issued special queries that are suggestive of a recent diagnosis of pancreatic adenocarcinoma. We then went back many months before these landmark queries were made, to examine patterns of symptoms, which were expressed as searches about concerning symptoms. We built statistical classifiers that predicted the future appearance of the landmark queries based on patterns of signals seen in search logs.

Results: We found that signals about patterns of queries in search logs can predict the future appearance of queries that are highly suggestive of a diagnosis of pancreatic adenocarcinoma. We showed specifically that we can identify 5% to 15% of cases, while preserving extremely low false-positive rates (0.00001 to 0.0001).

6
avbor 4 hours ago 0 replies      
Target actually did something similar, though their intent was to eventually find better ads for families who were expecting. In their case, fully deanonymizing and being straightforward - straight out advertising baby products - turned out to be a nightmare, and they eventually turned to more subtle advertising by inserting baby products into the weeklies.

Could we see a case where, when someone searches for one thing, instead of seeing results that pertain to that immediate query we see results that match common future searches?

http://www.nytimes.com/2012/02/19/magazine/shopping-habits.h...

7
kalleboo 5 hours ago 1 reply      
If people are searching for medical symptoms on a search engine, aren't they already ending up at WebMD or whatever and finding possible diagnoses?

This would have been a lot more interesting if the keywords were a lot more subtle - like a change in behavior marked by a sudden craving for salty foods or whatever.

8
jlg23 3 hours ago 0 replies      
I'm not sure that this is a good way to demonstrate data mining skills. The survival rate for pancreatic cancer is abysmal:

> While five-year survival rates for pancreatic cancer are extremely low, early detection of the disease can prolong life in a very small percentage of cases. The study suggests that early screening can increase the five-year survival rate of pancreatic patients to 5 to 7 percent, from just 3 percent.

WP claims 20%[1] though a glance at the referenced source suggests that the WP summary is bogus.

So the only ones who benefit from this data mining would be health insurers, who could get rid of people who'll incur very high treatment costs with a low expectation of success.

[1] https://en.wikipedia.org/wiki/Pancreatic_cancer

9
jerryhuang100 4 hours ago 0 replies      
> " We showed specifically that we can identify 5% to 15% of cases, while preserving extremely low false-positive rates (0.00001 to 0.0001)."

Several years ago, Google Flu Trends also claimed to have 97% accuracy compared to CDC data, but later it was found to be way off from the real data. Did the authors compare their study to Google Flu Trends?

Also it's not clear how they reached the conclusion of such a low FP rate. Did they randomize their sample pool and run their prediction model for several rounds?

10
damianknz 4 hours ago 0 replies      
Can't one already run ad campaigns based on a user's search history? What if a cancer foundation or similar organisation ran a targeted ad campaign?
11
fideloper 6 hours ago 0 replies      
No mention of what type of queries they believe are associated :/
12
mooneater 3 hours ago 0 replies      
Insurance companies would just love this data...
13
uptownfunk 5 hours ago 0 replies      
Any links to the non-paywalled technical paper? I'm curious as to the learning models they built to run the actual predictions.
9
MongoDB queries dont always return all matching documents engineering.meteor.com
313 points by dan_ahmadi  10 hours ago   222 comments top 37
1
im_down_w_otp 9 hours ago 6 replies      
Said it before, will say it again... "MongoDB is the core piece of architectural rot in every single teetering and broken data platform I've worked with."

The fundamental problem is that MongoDB provides almost no stable semantics to build something deterministic and reliable on top of it.

That said. It is really, really easy to use.

2
hardwaresofton 6 hours ago 1 reply      
If you're currently using MongoDB in your stack and are finding yourselves outgrowing it or worried that an issue like this might pop up, you owe it to yourself to check out RethinkDB:

https://rethinkdb.com/

It's quite possibly the best document store out right now. Many others in this thread have said good things about it, but give it a try and you'll see.

Here's a technical comparison of RethinkDB and Mongo: https://rethinkdb.com/docs/comparison-tables/

Here's the aphyr review of RethinkDB (based on 2.2.3): https://aphyr.com/posts/330-jepsen-rethinkdb-2-2-3-reconfigu...

3
lossolo 9 hours ago 7 replies      
I've just migrated one project from Mongo to PostgreSQL, and I advise you to do the same. It was my mistake to use Mongo; I found a memory leak in cursors on the first day I used the db, which I reported and they fixed. That was in 2015. If you have a lot of relations in your data, don't use Mongo; it's just hype. You will end up with collections without relations and then do joins in your code instead of having the db do it for you.
4
ahachete 8 hours ago 0 replies      
Strongly biased comment here, but I hope it's useful.

Have you tried ToroDB (https://github.com/torodb/torodb)? It still has a lot of room for improvement, but it basically gives you what MongoDB does (even the same API at the wire level) while transforming data into a relational form. Completely automatically, no need to design the schema. It uses Postgres, but it is far better than JSONB alone, as it maps data to relational tables and offers a MongoDB-compatible API.

Needless to say, queries and cursors run under REPEATABLE READ isolation mode, which means that the problem stated by OP will never happen here. Problem solved.

Please give it a try and contribute to its development, even just with providing feedback.

P.S. ToroDB developer here :)

5
lath 7 hours ago 1 reply      
A lot of MongoDB bashing going on here. We use it and I love it. Of course we have a dataset suited perfectly for Mongo - large documents with little relational data. We paid $0 and quickly and easily configured a 3 node HA cluster that is easy to maintain and performs great.

Remember, not all software needs to scale to millions of users so something affordable and easy to install, use, and maintain makes a lot of sense. Long story short, use the best tool for the job.

6
cachemiss 5 hours ago 0 replies      
My general feeling is that MongoDb was designed by people who hadn't designed a database before, and marketed to people who didn't know how to use one.

Its marketing was pretty silly about all the various things it would do, when it didn't even have a reliable storage engine.

Its defaults at launch would consider a write stored when it was buffered for sending on the client, which is nuts. There are lots of ways to solve the problems that people use MongoDB for, without all of the issues it brings.

7
jsemrau 9 hours ago 0 replies      
Weird to see that Mongo is still around. We started to use it on a project ~4 years ago. Easy install, but that's where the problems started. Overall a terrible experience: low performance, messy syntax, unreadable documentation.

They seem to still have this outstanding marketing team.

8
shruubi 8 hours ago 1 reply      
Seriously, who looks at MongoDB and thinks "this is a sane way of doing things"?

To be fair, I've never been much of a fan of the whole NoSQL solution, so I may be biased, but what real benefits do you gain from using NoSQL over anything else?

9
fiatjaf 7 hours ago 2 replies      
CouchDB is simple and reliable. You can understand it from day one. I can't imagine why it isn't being used.
10
rgo 6 hours ago 2 replies      
Every time I hear arguments for going back to relational databases, I remember all the scalability problems I lived through for 15 years in relational hell before switching to Mongo.

The thing about relational databases is that they do everything for you. You just lay the schema out (with ancient E-R tools maybe), load your relational data, write the queries and indexes, and that's it.

The problem was scalability, or any tough performance situation really. That's when you realized RDBMSs were huge lock-ins, in the sense that they would require an enormous amount of time to figure out how to optimize queries and db parameters so that they could do that magic outer join for you. I remember queries that would take 10x more time to finish just by changing the order of tables in a FROM. I recall spending days trying different Oracle hints just to see if that would make any difference. And the SQL way, with PK constraints and things like triggers, just made matters worse by claiming the database was actually responsible for maintaining data consistency. SQL, with its natural-ish language syntax, was designed so that businessmen could query the database directly about their business, but somehow that became a programming interface, and finally things like ORMs were invented that actually translated code into English so that a query compiler could translate that back into code. Insane!

Mongo, like most NoSQL, forces you to denormalize and do data consistency in your code, moving data logic into solid models that are tested and versioned from day one. That's the way it's supposed to be done; it sorta screams "take control over your data, goddammit". So, yes, there's a long way to go with Mongo or any generalist NoSQL database really, but an RDBMS seems like a step back even if your data is purely relational.

11
xenadu02 7 hours ago 0 replies      
Use of MongoDB at PlanGrid is probably the single worst technical decision the company ever made.

We've migrated our largest collections to Postgres tables and our happiness with that decision increases by the day.

12
ruw1090 9 hours ago 6 replies      
While I love to hate on MongoDB as much as the next guy, this behavior is consistent with read-committed isolation. You'd have to be using Serializable isolation in an RDBMS to avoid this anomaly.
13
tinix 4 hours ago 0 replies      
Y'all know other storage engines exist, right?

I searched the comments for "percona" and found nothing...

Figures.

Meanwhile, https://github.com/percona/percona-server-mongodb/pull/17

14
Animats 9 hours ago 0 replies      
Not when they're changing rapidly, anyway. Well, that's relaxed consistency for you.

Does this guy have so many containers running that the status info can't be kept in RAM? I have a status table in MySQL that's kept by the MEMORY engine; it's thus in RAM. It doesn't have to survive reboots.

15
twunde 3 hours ago 0 replies      
The real problem with Mongo is that it's so enjoyable to start a project with that it's easy to look for ways to continue using it even when Mongo's problems start surfacing. I'll never forget how many problems my team ended up facing with Mongo. Missing inserts, slow queries with only a few hundred records, document size limits. All while Mongo was paraded as web scale in talks.
16
doubleorseven 10 hours ago 3 replies      
Mongo, in one word: sucks. Couchbase does not.
17
jtchang 9 hours ago 0 replies      
This single issue would make me not want to use MongoDB. I'm sure there are design considerations around it, but I'd rather use something that has sane semantics around these edge cases.
18
spullara 3 hours ago 0 replies      
It literally returns wrong answers for queries. I can't believe anyone in this thread is defending it.
19
rjurney 7 hours ago 0 replies      
Mongo is hilarious. Ease of use is so important, we just don't much give a shit that it has all these gaping holes and flaws in it.
20
avital 8 hours ago 1 reply      
I believe this is solved by Mongo's "snapshot" method on cursors: https://docs.mongodb.com/v3.0/faq/developers/#faq-developers...
21
wzy 9 hours ago 1 reply      
Does Meteor support a proper database system yet, a la MySQL or Postgres?
22
d3ckard 7 hours ago 0 replies      
I worked with MongoDB quite a lot in the context of Rails applications. While it has performance issues and can generally become a pain because of the lack of relational features, it also allows for really fast prototyping (and I believe that Mongoid is much nicer to work with than Active Record).

When you're developing MVPs and working with ever-changing designs and features, the ability to cut out this whole migration part comes in really handy. I would however recommend to anybody to keep a migration plan for the moment the product stabilizes. If you don't, you end up in a world of pain.

23
hendzen 6 hours ago 0 replies      
Actually, if this lack of index update isolation is correct, you can get the matching document zero, one or multiple times!
24
jitix 9 hours ago 1 reply      
What storage engine are you using? I wonder if the same issue occurs with the WiredTiger MVCC engine.
25
Osiris 7 hours ago 1 reply      
I hear a lot about MongoDB's reliability issues. How do CouchDB or other document store databases compare in terms of reliability and consistency?
26
wvenable 4 hours ago 0 replies      
I wonder how much data they are storing and in what pattern that they actually need a NoSQL database. I'm curious why someone would make that choice.
27
xchaotic 6 hours ago 0 replies      
Unless you want to code every RDBMS and enterprise feature in the application layer, don't use Mongo; use Postgres or MarkLogic. The latter is 'NoSQL', but it is ACID compliant and uses MVCC, so what the queries return is predictable.
28
paradox95 7 hours ago 1 reply      
Should an infrastructure company be advertising the fact that it didn't research the technology it chose to use to build its own infrastructure?

All these people saying Mongo is garbage are likely neckbeard sysadmins. Unless you're hiring database admins and sysadmins, Postgres (unless managed - then you have a different set of scaling problems) or any other traditional SQL store is not a viable alternative. This author uses Bigtable as a point of comparison. Stay tuned for his next blog post comparing IIS to Cloudflare.

Almost every blog post titled "why we're moving from Mongo to X" or "Top 10 reasons to avoid Mongo" could have been prevented with a little bit of research. People have spent their entire lives working in the SQL world, so throw something new at them and they reject it like the plague. Postgres is only good now because they had to add some of these features in order to compete with Mongo. Postgres has been around since 1996 and you're only now using it? Tell me more about how awesome it is.

29
vs2370 5 hours ago 0 replies      
I am pretty excited about CockroachDB. It's still in beta, so it's not suggested for production use yet, but it's being designed pretty carefully and by a great team. Check them out: cockroachlabs.com
30
mouzogu 6 hours ago 1 reply      
Is MongoDB really that bad?

I am someone just getting into Meteor.js and it seems like moving away from MongoDB would make Meteor trickier to learn.

Is it difficult to switch to an alternative? Thanks

31
geoPointInSpace 5 hours ago 0 replies      
I'm prototyping in meteor using MongoDB and Compute Engine.

I have two VM instances in Google Cloud Platform. One is a web app and the other is a MongoDB instance. They are in the same network. The connection I use is their internal IP.

Can other people eavesdrop between my two instances?

32
vegabook 9 hours ago 1 reply      
I have moved from Mongo to Cassandra in a financial time series context, and it's what I should have done straight from the get-go. I don't see Cassandra as that much more difficult to set up than Mongo, certainly no harder than Postgres IMHO, even in a cluster, and what you get leaves everything else in the dust if you can wrap your mind around its key-key-value store engine. It brings enormous benefits to a huge class of queries that are common in timeseries, logs, chats etc, and with it, no-single-point-of-failure robustness, and real-deal scalability. I literally saw a 20x performance improvement on range queries. Cannot recommend it more (and no, I have no affiliation with Datastax).
33
partycoder 8 hours ago 1 reply      
This use-case is not something that you would use MongoDB for. Try Zookeeper.

This being said, I would feel embarrassed to post this on behalf of the engineering department of a company.

This post is just a very illustrated way of saying "we have no idea about what we are doing and our services are completely unreliable".

This is so bad that it is more of an HR problem than an engineering problem.

34
acarrera 6 hours ago 0 replies      
If you were inserting status changes you'd have much better data and would never run into such issues.
35
apeace 9 hours ago 1 reply      
TL;DR During updates, Mongo moves a record from one position in the index to another position. It does this in-place without acquiring a lock. Thus during a read query, the index scan can miss the record being updated, even if the record matched the query before the update began.
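To make the mechanism concrete, here is a toy Python simulation (not MongoDB code, just an illustration of the scan-miss described above): the "index" is a list of document ids in index order, a scan walks it left to right, and a concurrent in-place update moves a matching document from a not-yet-scanned slot to an already-scanned one.

  index = ["a", "b", "c", "d"]   # "d" matches our query
  returned = []
  for pos in range(len(index)):
      if pos == 1:
          # concurrent update: "d" moves to position 0, which the scan already passed
          index.remove("d")
          index.insert(0, "d")
      returned.append(index[pos])
  print(returned)  # ['a', 'a', 'b', 'c'] -- "d" is never returned, and "a" shows up twice

That also matches the point made elsewhere in the thread that a matching document can come back zero, one, or multiple times.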
36
wizardhat 7 hours ago 2 replies      
TLDR: He was reading the database while another process was writing to it.

Why all the Mongo hate? I'm sure this would happen with other databases.

37
TimPrice 9 hours ago 1 reply      
The article is interesting, but the title is FUD. Besides, all this is not unexpected:

> How does MongoDB ensure consistency?

> Applications can optionally read from secondary replicas, where data is eventually consistent by default. Reads from secondaries can be useful in scenarios where it is acceptable for data to be slightly out of date, such as some reporting applications.

https://www.mongodb.com/faq

10
Kniterate is a 3D printer for clothes [video] arduino.cc
83 points by kevcampb  10 hours ago   25 comments top 13
1
paulhart 9 hours ago 2 replies      
Most knitting machine manufacturers that targeted the prosumer market have gone out of business (or left the business of making knitting machines). However, there are large communities of owners who still keep the flame burning.

One of the most interesting hacks I've seen is from a German hackerspace, who have taken two Passap E6000 machines and merged them into a fully computer-controlled "Frankenpassap" (only one bed on a normal machine is dynamically controlled).

https://www.hackerspace-bamberg.de/Passap_pfaff_e6000

I have a Passap E6000 at home and will soon start working on reverse engineering the firmware in the computer that comes with the device so that we can start the process of migrating to a more modern toolchain.

2
jrk 4 hours ago 0 replies      
The paper isn't up yet, but Disney has a paper at SIGGRAPH this year on a DSL and compiler for controlling knitting machines:

http://s2016.siggraph.org/technical-papers/sessions/cloth

Unlike most additive manufacturing, knitting is an area where industrial-standard machine technology used for much of the fabric you already wear is very advanced, but computational technology for driving it in nontrivial ways seems to be the main limiting factor in realizing this potential.

3
bitwize 8 hours ago 0 replies      
Wow, even better than the Nintendo Knitting System!

(Nintendo should bring that idea back and make patterns featuring Mario, Goombas, etc. available in the eShop. They'd make a killing off the hipster market.)

4
grizzles 6 hours ago 0 replies      
There is a good link here about the tech already in this industry: https://www.youtube.com/watch?v=s2S3eLrdqk4

In my opinion end to end automation in the textile industry is only a few rethink robotics style robots away from feasibility. It's only a question of investment.

It will be pretty cool when a company puts it all together, because they will be able to deliver a tailor made product and slaughter the competition on costs & overall quality.

5
vessenes 9 hours ago 1 reply      
There are tons of industrial knitting machines out there; check Alibaba. A home knitting machine that was the equivalent of a CNC machine for knits would be pretty rad. I'd use it all the time.

The reality, last I looked, is that the gap is pretty large -- the industrial machines are very feature-specific ("25 sizes of socks in up to 10 yarn weights!") and the home knitting machines are for hobbyists, full stop.

6
aaron695 5 hours ago 0 replies      
DARPA is looking at something similar

http://www.economist.com/news/technology-quarterly/21651925-...

It will really fk up a lot of low income earners.

But like a lot of these techs I'm hoping with freeish food, clothing and housing we'll pop out the other end with everyone better off.

7
mdorazio 9 hours ago 0 replies      
Thought I had seen something similar before, and it turns out I was thinking of OpenKnit (at least two years old), which this is based on (as referenced in the article). Good to see that they're evolving the tech still.
8
MichailP 9 hours ago 0 replies      
How could this be modified to make something more complex like knitted gloves? Although there are machine-knitted gloves, they really don't look handmade. In my opinion a handmade look increases value.
9
foota 4 hours ago 0 replies      
I wonder if you could operate one of these at home for a profit.
10
imaginenore 9 hours ago 0 replies      
Computerized knitting machine from 2010:

https://www.youtube.com/watch?v=3Q4tPYavChI

Knitic: open hardware, open source knitting machine (though you still have to push it by hand, the actual pattern knitting is computerized)

http://makezine.com/2015/01/07/circular-knitic-an-open-hardw...

11
thedogeye 5 hours ago 0 replies      
Cool name
12
vegabook 9 hours ago 2 replies      
Nice machine, but what exactly is "3d" about this other than the hype-factor? Knitwear is 2d last time I looked, unless this thing does pom poms too.
13
powera 9 hours ago 2 replies      
How is this used any differently from a sewing machine?
11
FarmBot Open-Source CNC Farming [video] farmbot.io
175 points by ctingom  11 hours ago   83 comments top 22
1
nostromo 10 hours ago 4 replies      
This is cool but seems a tad over engineered.

I have a garden. It uses a drip line that's buried in the ground. It's $10 worth of tubing from Lowes with a few holes where plants go. It's on a $15 timer that waters automatically once a day. I do, however, have to take a few minutes to put seeds in the ground.

That said, I would be throwing money at the monitor right now if this thing was smart enough to identify weeds and remove them. (Maybe that's in the plans? They have a part for weeding shown.)

But I love all these new ideas around farming. The most interesting is hydroponics, considering how much more efficient it is with water (sometimes using 90% less water per equal amount of harvest).

Edit: for those asking, all I used was half inch black tubing (the kind they use for automated sprinkler systems), drilled small holes every 12 inches, buried it, hooked it up to a spigot with a timer, and that's it.

2
justsaysmthng 9 hours ago 4 replies      
This is pretty much how I see the future of agriculture. Give the FarmBot a bunch of wheels / robotic feet to move around and it could theoretically handle huge fields.

The key thing here is the possibility to monitor each individual plant and react to changes in its development or environment (in contrast to modern industrial agriculture, where things are done with huge monster tractors).

I'm sure it can/will be improved to serve more functions - like removing weeds and collecting pests without the need for herbicides or pesticides and so on.

Eventually, these will all become software problems which the global programmer community will be more than glad to tackle.

The most important thing about FarmBot and similar tech, though, is the potential to de-centralize agriculture again and make small-scale, local agriculture possible, without needing to employ human labor. Not only would this create a new market for high-tech agricultural tools and software and make growing your own food easy (even in the city!), it is a very welcome solution to the many environmental problems that large-scale industrial agriculture generates today.

So I'm very optimistic and happy about this tech and I wish you guys all the luck.

3
gtvwill 7 hours ago 1 reply      
Holy crap, is this thing for real? Sorry, but this is straight-up inefficiency at its finest. Honestly the amount of material needed to produce one FarmBot far outweighs the amount of produce it can produce. Like, I hate monoculture farming, but a 20-ton tractor can service like 10 thousand acres of dry-area/irrigated cropping country a piece of piss, do it automated and by GPS, and return hundreds of tons of produce.

TBH if someone would just build me a robot that has 20 km range, can deal with crawling up hills and can identify coloured shapes and "pick" them (pneumatic suction would probably do it), we could put a few tens of thousands of blueberry/coffee pickers out of business.

FarmBot will not put anyone out of business. The Japanese Aeroponic farms have a better chance of being the future of production.

4
Loughla 10 hours ago 3 replies      
I can't get your site to load, so I checked out the hackaday page.

Trying not to be overly cynical here, but how is this worth the cost? It appears that it simply plants, waters, and detects/removes weeds. In a 1,250sq.ft. garden, we invest less than 2 hours per week on these tasks.

How would this be scalable? How do you spell scalable?

How would this justify its cost?

How does it withstand being outside all year, year round?

How does it not just destroy your crops when they grow tall?

How could this possibly improve on current farming methods (outside of removing chemical weed-killers maybe)?

I understand it's in its infancy, but I'm genuinely having a hard time with this.

5
dustinmoorenet 10 hours ago 0 replies      
I like that the design is open-source but it is just putting seeds in the ground and watering them. That is not a complex task. I believe this http://openag.media.mit.edu/ is a better system for food growth.
6
Animats 7 hours ago 0 replies      
This thing might be cost-effective if it were scaled up to round farm size. A standard center-pivot irrigator is 400 meters long. An arm that could travel along a track on the irrigator, do precision planting, and look at the plants might be useful. The amount of mechanism would be modest for the area covered.
7
cconcepts 8 hours ago 1 reply      
This is clever, and I applaud CNC principles being applied in such a unique way. But once the planting and prep is done (which takes a few hours max for me to do by hand) I now have a several thousand dollar watering can.....

Make no mistake, this is where agriculture is heading and the people behind this are obviously clever and innovative. I just don't think this is a very compelling product (outside of the "A robot planted my veges" kudos)......yet.

8
Jack000 10 hours ago 1 reply      
site seems to be down? I'm guessing it's this project: https://hackaday.io/project/2552-farmbot-open-source-cnc-far...
9
joshpadnick 10 hours ago 1 reply      
What a cool concept! If any members of the FarmBot team are watching this thread, could you comment on why you decided to make everything open source? Clearly, it's an awesome benefit for the community, but how does it also serve FarmBot the company?

I'm asking b/c I'm curious about business models that build heavily on open source.

10
cellularmitosis 5 hours ago 0 replies      
Using roller-skate wheels and having the gantry ride directly on top of the side-boards of the raised-bed garden would be a great way to reduce parts count here. https://youtu.be/3vgjJikt9B4?t=41s
11
carapace 10 hours ago 0 replies      
I like this but I think a crane rather than track and gantry makes more sense, eh?
12
exar0815 10 hours ago 2 replies      
Really nice concept. Especially with greenhouses, you could build large systems of these in them.

But, and don't vote me down, considering current events, how long until we see a "Weedbot"?

13
new_accnt 3 hours ago 0 replies      
Instead of using this CNC-like machine with tracks and stuff like that, wouldn't it be cheaper and simpler to use a radial design like many large crop fields already use? It might remove the weed-pulling feature in early iterations, but it seems much easier for just watering/nutrition.
14
IvanK_net 9 hours ago 0 replies      
I think it is the "remake" of a harvester. It is still a machine that moves above a field on wheels and performs some useful activity at the place where it is located before moving to the next place.
15
machbio 9 hours ago 0 replies      
Memories of Farmville - this is amazing, but it won't work at large scale; the model seems too slow in its workflow to care for a larger area.
16
jannyfer 8 hours ago 0 replies      
Interesting that people have a very different reaction when they see a working video and not just some ideas.

Last time this was posted on Hacker News three years ago: https://news.ycombinator.com/item?id=6451350

17
coenhyde 10 hours ago 0 replies      
Awesome! I think I'll build this on my balcony! I've actually been planning something similar in my head for years. But never actually did anything about it.
18
rhgraysonii 10 hours ago 0 replies      
I've interacted with Rick a bit in the past and he's an awesome developer. Really glad to see progress continuing with this.
19
imaginenore 9 hours ago 0 replies      
I bet installing that thing, programming, and feeding it is at least an order of magnitude more time consuming than the actual farming by hand.
20
ccallebs 11 hours ago 3 replies      
Wow. Just wow.

I don't know that I've been this excited about a piece of machinery before. This product has the potential to completely change the way we obtain and consume food. I understand it's very niche at the moment. And the price tag will likely be huge for the first run. But this is a great first step and I'll try to pre-order a kit.

21
ElijahLynn 9 hours ago 0 replies      
Holy fucking amazing!
22
VLM 10 hours ago 0 replies      
In my climate, I get a little extreme fighting the cold. A fairly obvious interchangeable tooling suggestion is some manner of hook or electromagnet or "whatever" to manipulate a cold frame door. Just a helpful suggestion.
12
Takatas Air Bag Crisis bloomberg.com
48 points by usaphp  6 hours ago   24 comments top 11
1
unexistance 1 hour ago 0 replies      
1998 article on Paresh Khandhadia, the Takata engineer

http://www.autonews.com/article/19980223/ANA/802230779/slow-...

2
metaphor 3 hours ago 1 reply      
> [Shigeshisa Takada] didn't mention that Takata had tried to fix the problem by changing the propellant formula in 2008.

My Acura TL is affected by this airbag recall, and it's a fairly recent 2012 model too. Whatever changed circa 2008 apparently didn't work.

> NHTSA says those companies are making 70 percent of the replacement inflators.

When I took it to the dealership for servicing about a week ago, I also inquired about the airbag issue. According to the service rep, it's a quick 30-minute fix assuming parts are on hand... the problem is the 30-45 day lead time for parts, which blew my mind considering the nature of the issue in a state well known for its warm weather and humidity. Considering the logistic predicament, I wonder how quality will be affected given the number of 3rd-party players manufacturing a complex replacement component for a proprietary airbag system.

3
riffraff 9 minutes ago 0 replies      
The article mentions 13 deaths, of which 10 were in the US. This seems like a very odd distribution; can someone explain it?
4
mc32 4 hours ago 1 reply      
There is little doubt these bags are dangerous and pose a known threat to passenger safety. But to call these car bombs is irresponsible. They are not purposely armed with the intent to inflict damage to people. Yes, they have design flaws which have resulted in deadly injuries. But while technically these are explosives to aid in deploying the safety mechanism, they are not bombs as most people understand things.

This is as unjust as calling people who get arrested "disappeared" with all the connotations that word has (i.e. secret summary executions). This has the same approach, a bombastic approach to headline writing. For shame!

5
danso 3 hours ago 0 replies      
A great longform article, one of the best in-depth overviews of a company's culture and current controversies that I've read in a while... and a pretty egregious controversy at that.
6
WallWextra 3 hours ago 2 replies      
The article mentions Honda's efforts to track down owners of cars with defective airbags. Why can't state motor vehicle registries cooperate in recalls like this?
7
LeifCarrotson 4 hours ago 3 replies      
> Takada has only $520 million on hand and is worth about $340 million

Does this mean they have $520m in cash, but the stock value is less than their bank account?

8
Animats 2 hours ago 0 replies      
Check your car by VIN here: https://vinrcl.safercar.gov/vin/
9
BooneJS 2 hours ago 1 reply      
My 2007 CR-V is affected. Since I live in the northern part of the US, my airbag isn't a priority as they're doing southern states first.
10
King-Aaron 1 hour ago 0 replies      
Very interesting read, and it takes me back to being a kid and setting off airbags in the back yard. You could immediately tell the difference between the older propellant and the newer ones, though I wasn't aware of how poisonous it was for us.
11
CamperBob2 3 hours ago 2 replies      
One of the early criticisms of air bags in passenger cars was that they were said to protect only people who aren't belted in, with nothing but potentially-injurious effects for anyone who is.

Does anyone know if that is still (or was ever) true?

13
Wells Fargo's Bid to Vanquish Screen Scraping americanbanker.com
115 points by octavien7  10 hours ago   49 comments top 11
1
curun1r 9 hours ago 4 replies      
This is good to hear, but also rings a bit hollow given the industry's history with trying to keep data locked up. There are very simple things the industry could have done long ago to make their users more secure, and they've done none of them. For instance, banks could have let users create read-only credentials to give to aggregators, greatly reducing the potential for fraud. They could've created application-specific passwords, similar to what Google has done, that would allow more intelligent application of MFA and such. Every time I log into Mint, I get a text message from Vanguard telling me that an unrecognized device is attempting to log in, requiring me to fix the account in Mint. Instead, they've basically adopted a "let's make it as difficult to scrape as possible" mentality which has contributed to the insecure and buggy situation we have today.

APIs are good, as are OAuth-style permissions requests where users get to at least know what data a service is asking for. But they shouldn't be used as a way to kill off screen scraping. They should be a better option that allows screen scraping to die off naturally. The aggregation industry that scrapes hates it even more than the banks do. It costs them a ton of man power to keep it working and each integration needs to be done as a one-off. If the banks provide a better solution, it will get used. Better yet, if they can come up with a single standard API that will work with most/all banks, that would be even better. But if the banks also take measures to prevent scraping, it is going to cause problems and not be a good thing for account holders.

2
spangry 9 hours ago 1 reply      
A more informative title for the article probably would have been "Wells Fargo to publish API". It's about damn time too. Government, take note.
3
renownedmedia 8 hours ago 0 replies      
What took these assholes so long?

We've been typing usernames and passwords for our very important _banking_ accounts into third parties like Mint (instead of using OAuth) for several years now.

4
sjtgraham 7 hours ago 2 replies      
Retail banking is a classic case of diametrically opposed incentives. Banks rely on the opacity of their products, apathy and the fear that the majority of people have of simply opening their bank statement, to inflict punitive charges on their customers. You want to keep your money, banks want to take it away from you.

Banks also depend on cast-iron control of the channel to cross-sell other products and services. The thing about 1st party bank APIs is they completely undermine all of this and that is why they haven't happened.

The end-of-days scenario for retail banking is a 3rd party coming along to build a superior banking experience atop of their APIs. The 3rd party starting from a market share of 0 has no choice but to align their incentives with the user in order to grow. This will manifest in apps that proactively warn users before their account incurs charges, notifies users when they do, and present products and services that compete with the banks but are better value for the user. A 3rd party will de facto end up owning the most important banking channel and this will ultimately devastate the bank's revenues. All of this is terrible for the bank but great for the user.

When you decompose things into underlying incentives it becomes clear why things have or have not happened and will or will not happen.

There are various initiatives to compel banks to provide open APIs, e.g. PSDII in Europe. However considering the aforementioned incentives it seems obvious that banks will not act in good faith and will find any excuse (vague hand-waving about security, fraud, etc.) to subvert the UX of the API such that any service built on top of it is awful to use. A concrete example of this is the gestating RBS API: they require a 2FA SMS code before moving money over £30. This is something they do not do and will never do in their own private APIs that power their own mobile apps, because users will not stand for it, but they can do this with a public API that has no users to speak of very easily.

Considering the current incentives, 1st-party banking APIs (at least the ones we would wish to see) will not happen. The only way that can change is through market forces, i.e. one bank has to provide the APIs that cause material customer churn at other banks. Given this it's clear screen-scraping is going nowhere anytime soon; in fact it will evolve, by directly hooking into the private APIs that power the banks' own apps, into more robust, fully transactional APIs, i.e. payments and transfers.

Disclaimer: I have started a company that does this - https://teller.io/

5
klinskyc 9 hours ago 0 replies      
Thought that the article was going to go in a totally different direction before reading it. Instead of solely trying to block screenscrapers, Wells Fargo is actually providing a better alternative. If only everything worked that way
6
smockman36 9 hours ago 5 replies      
Do most banks not have an API? If you use software (e.g. something from Intuit) that accesses your banking info, is it likely screen-scraping?
7
RexM 8 hours ago 0 replies      
While reading this article, I remembered an article posted to HN about the introduction of TAuth from teller.io that might be relevant to this discussion.

https://news.ycombinator.com/item?id=11636847

8
byoogle 3 hours ago 0 replies      
Perhaps Wells Fargo should finish implementing their website first. They're missing basic services like letting you make a wire transfer online. Our company is in the middle of switching banks because dealing with them is such a hassle.
9
hannasm 8 hours ago 1 reply      
Forget about existing banking interchange formats, which (from personal experience) Wells Fargo either doesn't support (OFX) or implements poorly (QFX); they should definitely define a new API and be a leading stakeholder.
10
twblalock 8 hours ago 1 reply      
It would be nice if an industry standard developed around OAuth for financial data. It would be far easier for data aggregators to use, and far safer for customers as well.
11
mschuster91 8 hours ago 0 replies      
Oh, how much do I like the German HBCI standard... nice to see that at least some non-German players decide to follow the API trend.

However, it is disappointing that this is just a single bank and not a group of banks developing this - and especially, that a battle-tested standard was not adopted.

edit: in Germany, actually, for commercial use there's the DTA standard (https://de.wikipedia.org/wiki/Datentr%C3%A4geraustauschverfa...), in place since 1976 (!), which has only recently been replaced by SEPA/ISO 20022. Meanwhile, US banks decide to follow xkcd #927 (https://xkcd.com/927/)...

14
Oracle whistleblower suit raises questions over cloud accounting reuters.com
45 points by danielconde  6 hours ago   27 comments top 3
1
suprgeek 4 hours ago 2 replies      
Slightly relevant - about Oracle and its single-minded pursuit of money over all else:https://www.youtube.com/watch?v=-zRN7XLCRhc&t=33m
2
firebones 2 hours ago 1 reply      
Can anyone with a finance/CFO background give a primer on a) how to detect such aggressive accounting in a cloud company's earnings statements and/or b) what "aggressive" means in this context?

It would be nice to have a field guide to how to spot those without bathing suits when the tide goes out.

3
tyler_larson 2 hours ago 0 replies      
Oracle whistleblower suit raises questions over Oracle accounting.

FTFY.

15
ParaText: CSV parsing at 2.5 GB per second wise.io
57 points by flashman  6 hours ago   10 comments top 4
1
justinsaccount 4 hours ago 2 replies      
This is impressive, but...

"A fast reader exploits the capabilities of the storage system"...

the graphs show that their storage system is doing 4.00 GB/sec

I wonder what processor this is running on and what their storage system is... multiple PCIe SSDs?

I tried running a quick test but only succeeded in OOMing my 8G laptop.

Even just doing

  import paratext
  it = paratext.load_csv_as_iterator("/dev/shm/tmp/c.log", expand=True, forget=True)
  x = it.next()
Starts eating up all my ram after about a minute of spinning the cpu... so I think they have a slightly different definition of an iterator than everyone else.

Compared to

 cut -d , -f 5 < c.log > /dev/null
which runs in a few seconds, or a slightly more domain specific and optimized version of 'cut'[1] that runs even faster (300-500MB/s on a single core depending on which fields you want)

  $ du -hs c.log; wc -l c.log
  2.1G    c.log
  16197412 c.log
I also wonder if that is 2.5 GB/s per core.

https://github.com/BurntSushi/rust-csv does 241 MB/s in raw mode, so I find it a little hard to believe that this is 10x faster... unless that is while maxing out multiple cores.

[1] https://github.com/bro/bro-aux/blob/master/bro-cut/bro-cut.c

2
ben_jones 28 minutes ago 0 replies      
Not MIT licensed
3
dantiberian 4 hours ago 0 replies      
When looking at the graphs, remember they are in log scale. The results are a lot more impressive than they look at first glance.
4
falaki 5 hours ago 1 reply      
I wish there were more details on the benchmark. For example, I am interested to know whether schema inference was turned on for spark-csv.
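For context, this is roughly how schema inference is toggled with the Databricks spark-csv package in a pyspark shell (written from memory, so treat the option names and casing as an assumption; "data.csv" is just a placeholder path). Whether it is on matters for a benchmark, since inference requires an extra pass over the data:

  # assumes a pyspark shell where sqlContext is predefined and spark-csv is on the classpath
  df = sqlContext.read.format("com.databricks.spark.csv") \
      .option("header", "true") \
      .option("inferSchema", "true") \
      .load("data.csv")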
16
The effects of living in a poor neighborhood vox.com
109 points by micaeloliveira  10 hours ago   80 comments top 12
1
ausjke 2 hours ago 1 reply      
Why are blacks not doing well?

Because they came from Africa? But new immigrants from Africa who come to the US earn a pretty decent life more quickly than those whose families have been here for centuries.

Because they're a minority? But other minorities such as Jews and Asians are working hard and earning a decent life here.

Because of less chance to get an education? But with affirmative action you can get into a decent college at a huge discount as far as scores go. There is also a lot of financial support for economically disadvantaged families.

Because of an unfair tax system, election system...? Those don't seem like the key reason to me either.

I might be missing something else.

I think it's probably more of a culture thing: a culture that favors education, hard work, loyalty to family, etc. Without that, civil rights law or various diversity initiatives can only help so much. The black community needs to address that gradually, and it's the only way to fix the "poor neighborhood" for good.

Where I live, the elected Democratic officials decided to bridge the poor community with the "rich" community by building government-sponsored condos right next to or in the middle of upper-middle-class communities. The result is that people move out and house prices drop, which does not look like the right fix to me.

2
lacker 5 hours ago 5 replies      
Hmm, wait a moment.

59% of white kids moved up.

43% of white kids moved down.

Even accounting for rounding, there's no way to get those numbers. It's even worse in other places:

25% of black kids had parents who grew up in the bottom two levels and moved up at least one.

78% of black kids had parents who grew up in the top three levels and moved down at least one.

Those two groups should sum to less than 100%, certainly not 103%.

Also, except for the "Moving to Opportunity" experiment, most of this seems to ignore the distinction between correlation and causation.

This reminds me of the recent finding that 50% of papers in reputable psych journals reported mathematically impossible data. https://medium.com/@jamesheathers/the-grim-test-a-method-for... The more people start to dig in, the more the "social sciences" look unscientific.

3
ashwinaj 6 hours ago 4 replies      
One of the comments I get from people who visit India (I'm Indian) is: how is there a slum next to a swanky high-rise or classy residential/commercial area (at least in most big cities)? I don't really have an explanation for this (lack of space, cheap labor, etc.), but the more I read about economic segregation in the US, the more I'm convinced that the inadvertently mixed-income neighborhoods in India could be a solution.

If you don't live around people who are less fortunate than you, how can you understand their plight and their point of view? Handing out a few bucks to a homeless guy or cutting a check to a non-profit doesn't create any substantial understanding of the issue.

Obviously there will be opposition to this in the US, since people want to live with other people in the same socio-economic strata as themselves. Other reasons: 1) I don't want my property price to depreciate 2) I don't want to live next to "da hood" etc. Until people have a drastic change in attitude this will be ongoing. The article touches on this a bit (section 8 vouchers), but why can't we have neighborhoods with multiple levels of housing options, encouraging people with mixed incomes to be around each other?

4
mjevans 6 hours ago 1 reply      
While the data might show these correlations, I disagree with the premise that is reached as a result.

It is not just economic stratification, but cultural differences.

Those who have great wealth, and are not celebrities, tend to respect education, or at least attending 'upper-tier academic institutions' for their high-society connections. Among this group of people knowledge /is/ power, and it is respected.

In economically down-trodden situations it is as if the population has been conditioned that they are beyond redemption, that there is no hope in enlightenment bringing them a better future. Maybe they are correct in that respect as greed is a major driving factor in politics.

I suggest that the culture must change to enshrine knowledge and intellect as well as society rewarding normal productive labor for a real solution to these issues.

5
mc32 8 hours ago 1 reply      
I can dig it.

Living in a bad neighborhood exposes you to a non-ideal living atmosphere. The people you are around influence you in many ways, and one of those ways is the wrong way -- not because people in the neighborhood want bad things for their brethren, but because their way of life and expectations impinge on the growth of the new young people in the neighborhood with respect to the wider society.

Also, it's worth noting that widening inequality is a global phenomenon, not only between people but among nations. The inequality between, let's say, Mexico and Finland in 1910 was different from what it is now. The inequality between poor and rich in the US was also smaller back in the early 1900s.

6
randyrand 6 hours ago 1 reply      
This just in - being surrounded by good influences is good for you. And vice versa.

This isn't rocket science, of course. We've known about the importance of cultural influence for a long time. There was a reason my parents cared about where I grew up, and who I hung out with.

If you grow up in a culture where people don't care about academics, where welfare and child support are acceptable forms of provision, where machismo and crime are glorified, where you are told that discrimination will prevent you from otherwise succeeding, and that your failures can be blamed on racism - no, I don't think that culture will have a good influence on you.

7
aab0 7 hours ago 2 replies      
A highly one-sided presentation. Let me point out some issues: it makes major hay of Moving to Opportunity, but MTO is a notorious failure - it increased crime rates in targeted areas, and showed no improvements on anything until the cited Chetty paper managed to torture out of the data, just recently, improvements in only 1 subgroup on only a handful of metrics (which, aside from Vox misstating the Chetty results by implying it was all MTO participants, rather than one post hoc subgroup, contradicts the later experiment: the claimed effects do not remotely overlap); non-randomized moves are heavily confounded by upward mobility and human capital; many other studies like Swedish lottery studies and the land lottery natural experiment show that exogenous shocks of wealth do not produce meaningful improvements in things like health anywhere remotely close to the observed correlations of wealth/health; genetically informed family designs which account for heritability by looking at siblings differentially exposed or by looking at relatives tend to find that most 'poor neighborhood' effects are driven by genetic confounds; specific versions of the 'poor neighborhood' hypothesis tend to go down in flames when rigorously tested (most recently, the claim that schizophrenia is caused by bad neighborhoods, which everyone was sure about; polygenic scores for schizophrenia show that it is the other way around - schizophrenics and those vulnerable to schizophrenia drift to poor neighborhoods); in genetics studies, the correlation between wealthy parents and one's own SES turns out to be entirely genetically mediated; the parent/child IQ thing is differential regression to the mean; and so on. Many of the claims made in OP are naively causal and are guaranteed to be overestimated based simply on not including genetic relatedness. These are just the recent contrary studies I happen to remember offhand.

When you look at especially the recent genetic studies using polygenic scores from the GWASes to test environmental claims (eg Mendelian randomization), it's becoming increasingly clear: the slate is not blank and the sociology emperor has no clothes. I don't even know... what can be done about sociology? It is 2016 and every time I see a newspaper article or editorial invoking sociological results, it's totally wrong, and the sociologists quoted seem 100% committed to ignoring all the contrary evidence that not all differences are 100% due to environments. I mean... can you imagine seeing Vox cite a recent paper like "The Genetics of Success: How Single-Nucleotide Polymorphisms Associated With Educational Attainment Relate to Life-Course Development" http://pss.sagepub.com/content/early/2016/05/26/095679761664... or a newspaper article on it (which didn't quote some scientists arguing that genetics research should be defunded or that the research is irrelevant)? That would be nice - society needs to come to grips with the confirmation of behavioral genetics. But it doesn't seem like it will happen anytime soon. It's all terribly frustrating.

Anyway, I didn't mean to rant or try to write an in depth carefully cited rebuttal, but I just wanted to say: it is not as simple as 'poor people are poor and unhealthy because of racism and poverty', and there is lots of high-quality evidence of this which tends to go undiscussed in liberal media outlets, so pieces like this are closer to advocacy than research popularization.

9
dang 9 hours ago 4 replies      
This looks like a pretty substantive article with an unfortunately infantilized presentation. Let's try to stick to the important content if we can.
10
cb18 5 hours ago 1 reply      
Doesn't all this seem somewhat backward?

There's this victimization mentality that acts like poverty is this absolute thing that is meted out like a cursed rock, outside the control and agency of whom it's been placed upon.

Why don't we have more discussion about the causes that lead to the effect of poverty?

It's really more productive to see poverty as being an effect, or outcome of other causes or actions. Change the input and get a different outcome. And at a certain point it becomes a personal responsibility. If you want to help someone else, great, but the two of you have to be pushing in the same direction.

And likewise, external conditions do have effects, if it comes to light that someone appears to be needlessly and negatively imposed upon there's clearly a place in society for discussing how best we can optimize everyone's external conditions. Everyone faces obstacles, but placing blame wholly outside oneself is always going to be a losing battle.

>On top of it all, if a murder occurred in a child's neighborhood in an area of about six to 10 square blocks their score fell by 7 to 8 points.

It's ridiculous that people can make a pronouncement like that, thinking it's a reasonable analysis, or indeed the only analysis.

We know that intelligence is a highly stable trait. Meaning for intelligence to be significantly diminished[0] requires something pretty drastic, like a serious brain injury or some kind of long term deprivation. There is no reason to think someone being killed within a particular distance of someone would have any kind of effect. Do all soldiers come home as dunces?

Something seems to be missing from the kind of people that write these articles. Hard to say what it is exactly, a more holistic view, better analytical capabilities, less pre-conceived notions, intelligence? something...

If they had whatever missing ingredient, then they might understand that an analysis that stands up to rigor is something more like,

People tend to self-segregate along a whole host of traits[1], intelligence is a trait. The trait of intelligence is linked to a propensity for certain types of crimes. Murder among them, therefore it follows that the people that commit murder are statistically likely to have lower intelligence, as are the people around them.

>Oh, another thing: Living in these poor neighborhoods makes you significantly less happy, less hopeful, and less healthy

Oh, another thing: Any analytical mind of any worth can clearly see that the inverse has just as much going for it, if not more.

It seems to clearly follow that people who are less happy, less hopeful, and less healthy would make their neighborhoods poorer. How about a focus on improving happiness, hope, and health? Telling people their predicaments are caused by factors outside of their control is the last way to improve any of those things.

[0]7 to 8 points is half a standard deviation, i.e. significant.

[1]See Schelling's Macro Micro, for some of the mechanisms involved.

11
dudul 10 hours ago 4 replies      
"Another popular left-wing idea is a universal basic income" UBI is not only supported by left-wingers.
12
PoorBloke123 9 hours ago 5 replies      
This article is the biggest bunch of rubbish that I have read in months. It takes bias to whole new levels. People are starting to realize that "studies" are almost always biased in the first place and should not be trusted. Secondly, look at some of the examples used:

A comparison is made between:

> 25% of black kids had parents who grew up in the bottom two levels* and moved up at least one.

and

> 59% of white kids moved up

Notice any difference here? The whole article is like this. Comparing completely separate things. This is laughable. I feel embarrassed by the bias and sorry for those who read this article with a straight face.

17
Splice Machine open-sources its dual engine RDBMS splicemachine.com
24 points by dougdonohoe  6 hours ago   1 comment top
1
Someone 23 minutes ago 0 replies      
Doesn't look open source to me yet. The only things I can find are this announcement and a Surveymonkey page where would-be contributors can indicate what they think they could contribute to the project. I couldn't find any licensing information.

The URL also mentions "announces move to open source offers early access to developers".

From what I can read, this could mean a not too well executed 'real open source' release, a "let's try and find free developers by letting them work with the source, but not deploy it commercially", or anything in-between.

18
Bullet journal: A simple productivity system that just uses pen and paper qz.com
86 points by smalera  10 hours ago   36 comments top 19
1
bellebethcooper 5 hours ago 0 replies      
Hello, I'm the author of this article, and just wanted to add something here: for anyone who finds the Bullet Journal method overwhelming, there's another system called Strikethru that might appeal more. It's still based on pen and paper, but is designed to be a bit less complex and doesn't use an index at all.

Here's the official Strikethru site: http://striketh.ru/#welcome

And here are a couple of blog posts I've written about my own process that might be useful. I generally use a mixture of ideas from Bullet Journal and Strikethru.

http://blog.bellebethcooper.com/strikethru-notebook.html

https://exist.io/blog/strikethru/

And finally, I wrote another article recently that looked at both these systems, as well as some other less-developed analogue methods that you might find useful: https://foundrmag.com/analog-methods-for-getting-things-done...

2
luxpir 9 hours ago 0 replies      
Great concept. Tempting, but the digital version I set up saves a bit of drudgework with the pen and could be a little more private. [0]

I like the bullet signifiers. I split events and tasks into calendar and todo list, but the bullets could work in a long, timestamped single file. Perhaps using # to comment out lines that are complete. Might make more sense than the archiving I do by moving to the bottom of the todo list. Actually, my calendar events comment out the done lines, so perhaps the todo really should too. Anyway, good share.

[0] https://github.com/luxpir/plaintext-productivity

3
edtechdev 7 hours ago 0 replies      
Here are some critiques of this technique:

http://blog.sandglaz.com/demystifying-the-bullet-journal-why...

http://joshmedeski.com/bullet-journal-didnt-work/

I personally just use Google Tasks for deadlines, Google Keep for shopping lists, etc., and a private Google Doc for notes (like a private blog/journal). And of course there are kanban board tools like Trello for collaborative productivity/projects.

4
jonmc12 4 hours ago 0 replies      
From a workflow perspective, this is basically a practical tickler system (but it also integrates notes/collections). Ticklers can be very helpful for removing anxiety about things falling through the cracks, but they require staying in the habit or they become useless. Another once-popular system was 43 Folders.

imo, the major innovation of GTD and related systems is the inclusion of context beyond do/due dates - specifically the notion of resource availability and alignment towards explicit long-term goals. Done right, this can bring clarity to 'next action' in a way that decreases the incremental cognitive load to select a task to complete - hypothetically making it easier to act in a state of flow and stay in the habit.

Everyone's different, and if any system works for a while that's a win. If you talk to a productivity coach / trainer they will probably share the experience that GTD-type systems have the best impact on busy professionals long-term. When someone figures out how to remove the cognitive overhead of both organization and task selection in a contextual todo list, I believe it's going to change the way many people work.

5
jxy 8 hours ago 3 replies      
The charm of pen and paper is simplicity and restriction. It is simple to implement this /bullet journal/ in a plain text file, but then... you want to add a timestamp; you want to add more markup; you want to add links; you want to add tags; you want to add code snippets; you start to move around your file aimlessly for an hour;... Your whole text-file life scheme blows up in front of you and you don't even know what hit you.

Think of why people like typewriters: simplicity and restriction! You are forced to focus on writing down what's on your mind.

Is there a solution on our overpowered computers? Yes. Use ed, the text editor!

6
hexagonc 5 hours ago 0 replies      
I tried to hack together my own personal Stack Overflow or Q&A system using a directory of text files and Sublime Text (any other text editor that has regular expression searching would also work). Basically, each fact or note that I wanted to be able to retrieve would be stored as semi-structured text that is easy to search using regular expressions. For example, when I was first learning web development, I saved a note like this in my note file:

  { [how do you create a simple image button in html and css] ->
    style:
      .simple-button {
        background-image: url(...),
        background-repeat: no-repeat,
        background-position: center,
        background-size: 100% 100%,
        width: 100px,
        height: 100px,
        display: inline-flex
      }
    <div class = 'simple-button'></div>
  }
I had a directory called "knowledgebase" which had numerous files with similarly structured notes. In order to find the note above using keywords, I would just do a search of the knowledgebase directory in Sublime Text using the regular expression:

 ^\{\s*\[.+button.+css.*\]
At first I was pretty excited about the method because it actually saved me a few times when I needed to find some boilerplate Linux bash scripts and vim keyboard shortcuts. Unfortunately, the small bit of initial success left me wanting to add more features, to the extent that I was never satisfied with the notation for structuring questions and answers. First I used the system for simple notes and Q&A. Then the notation had to support hierarchical notes, like a tree. Eventually, I decided I was going to write a program to parse these Q&A files and expose them as a mobile application that I could search via voice. This seemed like a pretty good idea until I decided that what I really needed was to not only be able to search notes by voice but also create them. The whole project kept growing until eventually I dropped the whole note-taking thing and turned my attention to the general problem of NLP for question answering.

If I had simply been satisfied with the original, simple process and stuck to it these last couple of years, I'd have a pretty comprehensive and useful knowledgebase by now. I realize that I'm too lazy to keep such a system up to date but hopefully I can adopt something similar as a commenting convention in my source files to make code fragments searchable.
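
As a minimal sketch of the same lookup done outside an editor: the script below walks a notes directory and applies the same style of question-header regex. The directory name, file handling, and helper function are illustrative assumptions, not part of the original setup.

  import os
  import re

  def search_notes(root, *keywords):
      # Build a pattern like ^\{\s*\[.+button.+css.*\] that matches the
      # [question] header of each semi-structured note.
      pattern = re.compile(
          r"^\{\s*\[" + ".+".join(re.escape(k) for k in keywords) + r".*\]",
          re.MULTILINE | re.IGNORECASE)
      for dirpath, _, filenames in os.walk(root):
          for name in filenames:
              path = os.path.join(dirpath, name)
              with open(path, encoding="utf-8", errors="ignore") as f:
                  text = f.read()
              for match in pattern.finditer(text):
                  # Report the file and the line where the matching question starts.
                  line_no = text.count("\n", 0, match.start()) + 1
                  print("{}:{}: {}".format(path, line_no, match.group(0)))

  # Example: search_notes("knowledgebase", "button", "css")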

7
rydercarroll 7 hours ago 2 replies      
Howdy, Ryder here - the creator of the Bullet Journal. Let me know if you have any questions.
8
shanecleveland 8 hours ago 0 replies      
I really wish I could start and stick to something like this. I once worked in a newsroom with a reporter who had the best/simplest list for story ideas. He just used a legal pad and wrote new ideas on each line. He then had a system of symbols to signify importance, status, etc. I've strived for something like that since then, but I have not been successful at sticking to it.
9
ivan_ah 5 hours ago 0 replies      
This reminds me of the dash-plus system for note taking: http://patrickrhone.com/dashplus/
10
21echoes 3 hours ago 0 replies      
I don't quite follow. The system is: write down what's happening that day (todos, events, meeting notes), and then if it's something you want to reference later, add it to the index? That's it?
11
chevas 7 hours ago 0 replies      
I did a comparison of Bullet Journal and Consol: https://www.consol.io/5-reasons-why-youll-like-consol-if-you...
12
agrafix 9 hours ago 2 replies      
You could probably do this with emacs org-mode pretty well too
13
kybernetyk 5 hours ago 0 replies      
Looks interesting. If you can decipher your own handwriting that is. (Which I can't).
14
chillacy 8 hours ago 1 reply      
So every time you want to note something important you have to add it to the index? Denormalization at its finest, I suppose. The index reminds me of a page table.
15
gravypod 7 hours ago 1 reply      
I'd keep my pen and paper if my handwriting was legible.

I wish there was a way to clean up my handwriting.

16
imron 6 hours ago 1 reply      
Someone should make an app for that ;-)
17
Ezhik 9 hours ago 0 replies      
Oooh, I tried this before. Never quite got far with it, but it sure went far since then, with the Kickstarter and all.
18
kinai 4 hours ago 0 replies      
Not sure about the rest of humanity but I got a memory for that...never understood why people waste their time on productivity apps and other nonsense
19
Karunamon 8 hours ago 2 replies      
I always find myself hoping that someone will come up with an e-ink notepad of some kind with multiple pages. If you've ever used a BoogieBoard, you get what I'm looking for. The killer for me, and the reason I never use a paper notepad, is that it can't be cleanly erased.
19
How a 30K-member Facebook group filled the void left by Uber and Lyft in Austin techcrunch.com
136 points by abhi3  10 hours ago   126 comments top 18
1
jdrock 9 hours ago 7 replies      
The title and content of this article couldn't be farther from the truth.

1. Uber and Lyft's absence has created a huge void that remains to this day. There are crazy long lines at the airport.

2. There are multiple ridesharing companies that have sprung up in the meantime. Arcade City is just one of them.

3. I'm pretty sure Arcade City will disappear once either (a) U/L return or (b) one of the newer companies (Fare, Fasten, etc.) gets more drivers.

Transportation in Austin is terrible right now. Arcade City hasn't changed that fact.

2
frakkingcylons 8 hours ago 2 replies      
I live in Austin, and at this time there is still a void, but it's being filled faster than I expected. Fasten launched last week and I've used it four times already and there's not much to say other than it works just like Uber and Lyft.

The price is $1-2 more per ride and I have to wait a few more minutes, but it does the job. The drivers I've talked to also prefer the compensation structure to Uber's and Lyft's. Fasten's cut is fixed at $1 per ride (vs 20-28%), or $12 for the day. And like Uber, they get paid every Wednesday.

It's not available at the airport, but there's a city bus that drives from the airport to downtown (where getting a rideshare is easy) in 30 minutes for $1.75.

EDIT: Clarified Fasten's commission.

3
rockarage 6 hours ago 2 replies      
Few journalists know that Uber & Lyft are operating under more restrictive regulations in NYC. These regulations include fingerprinting, a drug test, a medical test and a couple of classes. The cost to comply is at least $600 per driver to start. There are other costs like special license plates.

Essentially you have to be a professional to drive an Uber/Lyft in NYC, legally. They comply in NYC so I'm not sure what Uber & Lyft are complaining about. At least one of them should have stayed, especially Lyft; this is a missed opportunity for Lyft.

4
xur17 8 hours ago 3 replies      
I live in Austin, and it's really the other apps like Fare, Fasten, and Get Me that are filling the void. Fasten ends up being the best price (they also have the best app coincidentally), coming out to a few dollars more than Uber or Lyft.

I ended up building a web app this past weekend that keeps track of which apps work where (for example, Fasten currently doesn't work from the airport), along with the relative pricing:

https://ridefinder.io

5
yohoho22 8 hours ago 0 replies      
Here's a Texas Tribune article from today detailing other Austin ridesharing developments in the wake of the Uber/Lyft withdrawal: https://www.texastribune.org/2016/06/07/austin-post-prop-1/
6
jalami 8 hours ago 2 replies      
I've always wondered how necessary something like Uber or Lyft really was. Centralized networks lead to critical mass and that's what I always saw as the appeal. No one wants to hunt for something manually.

I feel like something like Craigslist, but with an eBay-like "95% driver approval rating", would do the job well enough. Sure, there are people who won't use it for security reasons and lack of standards, but if that's the case an alternative should pop up that ensures these things with standards for its drivers. One would be cheaper as all funds go between driver and driven, the other would provide a middle man that spends some of these funds to filter bad actors.

In my opinion, the mandatory regulation isn't really necessary if competition is healthy. If there's enough demand for finger-printed drivers, a service should crop up to provide it. Facebook though sounds like a terrible medium for this kind of thing. People will use what's familiar I guess.

7
Animats 9 hours ago 1 reply      
I'm amazed that anybody can organize anything via Facebook groups. It's like pounding a screw.
8
cperciva 9 hours ago 1 reply      
So, to steal a line... capitalism interprets regulations as damage and routes around them?
9
mpeg 9 hours ago 2 replies      
I love how the only effective way to market an Ethereum dapp seems to be to create a Facebook group and let people pay with good old cash.

Maybe I'm being a bit too harsh, but the decentralised model just presents way too many risks for both riders and drivers, and the lack of fees means AC is not incentivised to protect either.

10
zitterbewegung 9 hours ago 0 replies      
Looking at this post, if Arcade City can provide a background check for people who use the service, and also insurance, then the people who drive for the service would actually be closer to contractors, instead of the faux contractors that Uber/Lyft try to pull off.
11
Spooky23 9 hours ago 3 replies      
This brings the empire building nature of Uber into question.

If a group of people on a Facebook group can replace its functions for a significant group of people in a few days, why is this company valued in the billions again?

12
aembleton 9 hours ago 0 replies      
I heard this the other day on the Radio Motherboard podcast. You can listen to it here: https://soundcloud.com/motherboard/when-uber-left-austin
13
guelo 7 hours ago 1 reply      
FB normally starts choking off group notifications after a group reaches a certain size. I wonder if Arcade City is paying to keep the group up and running.
14
ars 9 hours ago 3 replies      
Oh that's too funny.

So instead of having fingerprints of all drivers, now they don't even have verified names, but simply completely anonymous people.

Great going lawmakers!

Some things you just can't legislate.

15
caseysoftware 7 hours ago 1 reply      
There's some dirtiness and shenanigans around the whole thing too, including the Mayor holding a secret meeting with the Uber/Lyft competitors and the City Council investing taxpayer money in them. I've dug up some of the details but more to come:

https://medium.com/@CaseySoftware/mayor-steve-adler-is-scamm...

16
abhi3 8 hours ago 1 reply      
Can someone explain why Uber had to shut down? Couldn't they have just complied with the regulation by fingerprinting drivers?
17
TheHolyLancer 7 hours ago 0 replies      
When you enact prohibition, people don't stop drinking, you get moonshine and stills blowing up.
18
8 hours ago 3 replies      
michael_storm is right, but I want to add that accusing other users of shillage and astroturfing without evidence is particularly disallowed here. Someone's holding an opposing view doesn't count as evidence.

We detached this subthread from https://news.ycombinator.com/item?id=11857854 and marked it off-topic.

20
Scalability, but at what cost? (2015) frankmcsherry.org
150 points by r-u-serious  14 hours ago   65 comments top 15
1
3pt14159 13 hours ago 2 replies      
I've been doing data analysis and machine learning client work for quite some time now and for companies as small as a 3 person startup to advising a department of the Canadian government.

Almost always, numpy matrix math + Cython or C or Java on a single machine is enough. Not always-always; but you can usually get there if you relax requirements slightly, say by accepting a 45-minute lag from new data impacting the total model, or by caching the results of the top 10k most likely queries, or by putting more effort into stripping out the garbage parts of the data, or, sometimes, by just throwing a $10k-a-month server or a mathematician at the problem (sure is cheaper than a bunch of cheap servers + a larger infrastructure team).

The times you need real scalability you know you need it. You'd laugh at how silly someone would be for trying to put it onto one machine. You're solving the travelling salesman problem for UPS (although I can think of some hacks here - I probably can't get it down to a single machine), or you're detecting logos in every Youtube video ever made, or you're working for the NSA.

Even if you know for sure you're going to need scalability. I don't think it hurts to just do it on a single box at first. Iterating quickly on the product is more important and once you have something proven you can get money from the market or from VCs to distribute it.

2
ChuckMcM 13 hours ago 6 replies      
I like the analysis, basically it says "hey you don't have big data" :-) but that requires a bit more explanation.

The only advantage of clustered systems like Spark, Hadoop, and others is aggregate bandwidth to disk and memory. We know that because Amdahl's law tells us that parallelizing something invariably adds overhead. So from a systems perspective that overhead has to be "paid for" by some other improvement, and we'll see that it is access to data.

If your task is to process a 2TB data set on a single machine using a 6Gb/s SATA channel and 2TB of FLASH SSDs, you can read that dataset into memory in 3333 seconds (at 600MB/sec, which is very optimistic for our system), process it, and let's say you write out a 200GB reduced data set for another 333 seconds. So, conveniently, an hour of I/O time.

Now you take that same dataset and distribute it evenly across a thousand nodes. Each one then has 2GB of the data on it. Each node can read in its portion of the data set in 3 seconds, process it and write out its reduction in .3 seconds.

You have "paid" for the overhead of parallelization by trading an I/O cost of an hour for an I/O cost of about 4 seconds.

That is when parallel data reduction architectures are better for a problem than a single-threaded architecture. And that "betterness" is purely artificial in the sense that you would be better off with a single system that had 1,000 times the I/O bandwidth (cough mainframe cough) than 1,000 systems with the more limited bandwidth. However, 1,000 machines with one SSD each are still cheaper to buy than one mainframe of similar capability. So if, and it's a big if, your algorithm can be expressed as a data map / reduce problem, and your data is large enough to push the cost of getting it into memory to have a look at significantly beyond the cost of executing the program, then you can benefit positively by running it on a cluster rather than running it on a local machine.
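
A quick back-of-the-envelope check of those figures, using the same optimistic 600MB/sec channel rate assumed above:

  # I/O time for the 2TB-in / 200GB-out example, single machine vs 1,000 nodes.
  dataset = 2e12   # 2 TB input
  reduced = 2e11   # 200 GB output
  rate = 600e6     # 600 MB/s per SATA/SSD channel (optimistic)

  single_node = (dataset + reduced) / rate              # one machine, one channel
  per_node = (dataset / 1000 + reduced / 1000) / rate   # 1,000 nodes, 2 GB each

  print(single_node)  # ~3666 s, roughly an hour of I/O
  print(per_node)     # ~3.7 s of I/O per node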

3
eternalban 7 hours ago 0 replies      
I've [had] this conversation with clients, CTO level, mostly in context of microservices. A few observations:

- Peter Principle: most decision makers are/feel technically insecure in the blog driven tech age, and cave in to direction from below. Of course, young developers want to play with shiny new things (given the general drudgery of the work involved).

- Emergence of DevOps: Engineers are being commoditized. There is an undeniable deskilling that goes hand in hand with having to wear all the technical hats. (A side glance here to the pattern of deskilling of pilots in the age of fly-by-wire.) Sure, you will need to learn new 'tools' as 'operators', but what's the vote, HN: what percent of these engineers could actually build one of these distributed systems? (To say nothing of being able to rein in the asynchronous distributed monster when it starts hitting its pain points.)

- You're not Google: I'm rather blunt when a team points to "Google does it". Google and the like have made a virtue out of necessity. Google/Facebook/Netflix/etc. had to resort to the pattern of lots of disposable commodity boxes. They also have the chops in house to field SREs that are simply not going to play machine room operator for enterprise IT.

The overwhelming majority of systems out there can run on a deployment scheme that 1:1 matches the logical diagram (x2 for fail over). And yes, it is amazing what one can do on a single laptop these days.

4
Eugr 8 hours ago 0 replies      
I agree with author that [in most cases] you don't need distributed processing for your algorithms. But sometimes you do, and when you do need it you have to understand that there is no silver bullet.

Creating a distributed system is very difficult, even when using platforms like Spark. Not all algorithms can be scaled easily or scaled at all, and not all algorithms in Spark MLLib or GraphX are actually designed to be truly distributed or work equally well for all use cases/data.

We tried to implement one of our algorithms (written in Java) that was taking hours on a single machine (even when using all the cores) using methods from Spark MLLib, just to find that the Spark job was constantly crashing. It turned out that some of the functions just fetch all the data to the "driver" instance and calculate the result there.

My guess is this is what happens with the author's use case - yes, he ran it on Spark, but only one node ended up crunching all the numbers. And/or network overhead, of course.

After we found out that MLLib can't give us what we need, we reimplemented it from scratch in Scala, making sure we balance the load equally and don't have too much network (shuffle) traffic between the nodes.

As a result, we went from 2.5 hours on a single machine, to under 2 minutes on a cluster of 25 instances (same Ivy Bridge processor, just more cores per node). The algorithm scaled almost linearly, but it required carefully designing it with Spark specifics in mind.
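
A toy PySpark sketch of that collect-to-driver pitfall (nothing to do with our actual algorithm or the specific MLLib functions; the dataset and aggregation below are purely illustrative):

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("driver-pitfall").getOrCreate()
  rdd = spark.sparkContext.parallelize(range(1000000))

  # Anti-pattern: collect() ships the whole partitioned dataset to the
  # driver, so a single process does all the work and holds all the data.
  total_on_driver = sum(rdd.collect())

  # Keeping the aggregation on the executors only sends the small final
  # result back to the driver.
  total_distributed = rdd.sum()

  print(total_on_driver, total_distributed)
  spark.stop()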

5
dikaiosune 12 hours ago 0 replies      
The author recently gave a talk at a Rust meetup about similar things:

https://air.mozilla.org/bay-area-rust-meetup-may-2016/

6
btilly 11 hours ago 1 reply      
I have long offered the following advice.

If you have code that can no longer run acceptably in a scripting language, and it is not embarrassingly parallel, you have two choices.

1. Move to something like C++, and optimize the heck out of it. You will gain something like 1-2 orders of magnitude in performance and then hit a wall.

2. Move to a distributed architecture. You immediately lose 1-2 orders of magnitude in performance, but then can scale essentially forever.

If you expect your distributed system to need less than 100 machines, you should seriously consider option #1.

7
grayrest 13 hours ago 0 replies      
If you're interested in reading more, Frank moved his blog to a github repo:

https://github.com/frankmcsherry/blog

8
wueiued 13 hours ago 1 reply      
I think graph operations are not a fair comparison. They are notoriously difficult to scale.

On the other side, AWS now offers a 2TB RAM machine. And a single huge machine has a smaller per-GB cost than several smaller machines. I think clustered computing as we know it will soon be gone. The only reason for multiple machines will be availability.

9
dang 13 hours ago 0 replies      
10
kibwen 12 hours ago 0 replies      
Note that this is from January 2015 and thus predates the stable Rust 1.0 release in May 2015, so it's possible that the code examples do not compile on post-1.0 Rust.
11
theanomaly 13 hours ago 0 replies      
Thanks for the analysis -- it is good that people have this context in their heads when designing systems. The missing conversation from this article is that some people conflate scalability with performance. They are different, and you absolutely trade one for the other. At large scale you end up getting performance simply from being able to throw more hardware at it, but it takes you quite a while to catch up to where you would have been on a single machine.

This is true not just for computing algorithms, but for developer time/brain space as well. Single-threaded applications are far simpler to understand.

The takeaway shouldn't be "test it on a single laptop first", but rather "will the volume/velocity of data now/in the future absolutely preclude doing this on a single laptop". At my work, we process probably a hundred TB in a few-hour batch processing window at night, Terabytes of which remain in memory for fast access. There is no choice there but to pay the overhead.

12
dzdt 13 hours ago 1 reply      
This reminds me of the "your data fits in ram" website which was on HN last year. Basically that site asked you your data size, then answered "yes" for any size up to a few TB.

The website is down, but the HN discussion is still there : https://news.ycombinator.com/item?id=9581862.

In fact the top comment there links to the original post here.

13
psiclops 13 hours ago 0 replies      
I was at mozilla for your talk!! Very interesting stuff
14
eva1984 12 hours ago 1 reply      
I bet the author didn't count the amount of time spent downloading the data to a single box. Scalability, sometimes, is not a choice.
15
ap22213 4 hours ago 0 replies      
I think the missed point is that Spark is very easy. I can get an average Java or Python developer trained up on it in less than a day. The python shell is very simple to use out of the box. And, it's incredibly convenient to be able to either run locally or on a huge cluster. I can use the same code to easily process batch jobs from 1 MiB to 100 TiB. In my mind, it's just a cost savings. Developer time is expensive, and it's hard to find great developers. Hardware is cheap.

No way am I a scalability expert, and I really don't have time to be one. I started using Spark when I had to sort 10 TiB on disk, and it scored the highest on sorting performance. I struggled with implementing a fast disk sort quickly, and I gave Spark a whirl, and it fixed my problem, fast. Since then, I've found it useful in a lot of other ways.

21
Do Evil with ESP8266: Slow Down the WiFi yoursunny.com
91 points by asimuvPR  12 hours ago   28 comments top 9
1
mutagen 10 hours ago 1 reply      
This reminds me of an old practical joke gone wrong.

Back around 1990 we had a college lab full of PC XT clones on an early Novell network on 'thinnet' (shared coax cable instead of star topology twisted pair to a switch). Some of the games floating around relied on the original PC's 4.77 MHz clock and ran too quickly on our 'turbo' 10 MHz machines so there was a slowdown utility that generated regular interrupts looping over a NOP to allow the games to run.

I thought it would be a great prank to put this slowdown utility on the boot floppies and AUTOEXEC.BAT of half the machines in the room and watch my classmates scratch their heads over some slow computers. Instead, it messed with the CSMA/CD (collision handling) on the network cards and essentially brought that portion of the network down. I showed up to a very angry network admin who had spent hours troubleshooting network cards and cable before finally finding one of the affected boot floppies. Fortunately (for me) he blamed another group and I escaped the full brunt of his wrath.

Shared medium, wired or wireless, always offers some extra fun.

2
willidiots 11 hours ago 1 reply      
Welcome to unlicensed frequencies. You can also buy any number of legitimate narrowband continuous transmitters (e.g. an analog video transmitter) and effectively destroy one of the available channels.
3
daveloyall 10 hours ago 1 reply      
I completely believe the author's test results.

However...

> One class of network packets is broadcast packet [...] a major difference from regular unicast packets [...] the access point is no longer able to choose a speed according to the channel quality between the access point and the single recipient

What's that last part? Why not?

WiFi standards mandate the access point to transmit the broadcast packet at the lowest speed

Oh, I see. Wait, no I don't. The author implies that 802.11 is IP aware. Dubious to me, but ok, I'll google that... Aaaand, sorry, none of the titles of the top hits convinced me.

I think that 802.11 is analogous to ethernet. That is, it's link-level and TCP just happens to work over it. I don't think the AP changes transmit speed to all connected clients just because somebody sends UDP packets to 255.255.255.255... Especially since not every client has to have an IP address to be connected to the wifi!

If the author had some C code which contained 'FF:FF:FF:FF:FF:FF', I would have accepted the whole thing without a second thought. :)

> Since each channel refers to a single wireless frequency, this would affect all WiFi networks on the same channel, even if the stations are connected to other access points.

Probably the author meant "have some performance impact" instead of "affect" there. Affect makes it sound like the same broadcast-related phenomenon (still dubious to me) is occurring between other APs.

In short, I believe that the author has caused the little device to slow down "the wifi", but I'm gonna have to see some packet dumps to believe the explanation of 'why'.

(Probably that tight loop generates X packets per second, enough to saturate the channel, and the slowdown is just due to an ordinary flood.)

4
milge 8 hours ago 0 replies      
Ugh. If you guys keep posting about nefarious uses with the ESP8266, they're gonna take our toys away!
5
deutronium 11 hours ago 2 replies      
Could you use the ESP8266 to generate deauth frames?
6
rxaxm 9 hours ago 0 replies      
> [T]he impact of adding a second evil ESP8266 is much greater than the first one. One possible cause is the exponentially increased probability of having the channel jammed due to simultaneous transmissions on the same channel.

The reason is that TCP connections increase their speed linearly (i.e. the transfer rate keeps ramping up by a roughly constant amount per round trip) until they experience packet loss. If many packets are lost, or if the network is congested, the speed will not increase.
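
A toy sketch of that additive-increase/multiplicative-decrease behaviour; the window sizes and loss probability are made-up numbers, just to show the sawtooth that a jammed channel keeps resetting:

  import random

  # Additive increase until a loss, then a multiplicative cut.
  cwnd = 1.0
  for rtt in range(50):
      loss = random.random() < 0.1   # pretend 10% of round trips see a loss
      cwnd = cwnd / 2 if loss else cwnd + 1
      print("rtt {:2d}  cwnd {:5.1f}".format(rtt, cwnd))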

7
yrro 12 hours ago 1 reply      
How illegal is this?
8
Fry-kun 9 hours ago 1 reply      
Wait, I thought wifi had frequency hopping built-in..?
9
9 hours ago 1 reply      
Please don't post unsubstantive comments.
22
Investigatory Powers Bill passes through Commons after Labour backs Tory spy law arstechnica.co.uk
56 points by iamben  10 hours ago   6 comments top 4
1
pascalmemories 8 hours ago 0 replies      
As the article mentions, UK agencies have always conducted intense snooping under the 1984 Telecommunications act (spying suggests some target or objective in mind when what's happening is really just hoovering up data on people, 'just in case it's needed'). The 1984 act was a useful fig leaf to cover what had been a long-standing activity (as witnessed by the preposterous Wilson Doctrine to supposedly protect MPs from the snooping that everyone else was subject to).

The UK Government loves making supposed legal rules for what is essentially a no-holds-barred snoop-fest. Any legislation which would limit what are, in effect, unrestrained powers, is neutered. E.g. the Data Protection Act has blanket exclusions for "prevention and detection of crime" (handy for snooping employers too!) and the Protection From Harassment Act specifically permits law enforcement bodies to harass people without the ability of people to seek redress [that bill itself was initially a device created for Huntingdon Life Sciences to have a way to deal with animal rights protesters (which was a real problem - no matter where you sit on that issue); it's since been usefully pivoted by those being harassed by debt collectors to turn the tables and gain compensation, so it's not all bad [1]].

This new law will do nothing to protect UK residents, nor anyone unfortunate enough to have any data transiting UK routing nodes, where their data is recorded by bulk surveillance.

The pretense that something is being improved, or that balances and safeguards of people's liberty are somehow being created, is insulting.

[1] http://www.lawgazette.co.uk/law/torts/49567.fullarticle

2
YeGoblynQueenne 8 hours ago 1 reply      
>> "a historic commitment that trade union activities cannot be considered sufficient reason for investigatory powers to be used."

Which is roughly equivalent to giving a terminal patient aspirin and bragging you cured their headache (the patient being the unions).

3
torpilla 9 hours ago 0 replies      
For those who thought there was any real difference between Labour and the Tories ... this is your wake-up moment.
4
eggy 5 hours ago 1 reply      
I am not familiar enough with the UK process, but could Scotland be a large enough dissenting body within the Westminster Parliament anyway to affect a vote like this?

Could be another reason for Scotland to secede from the UK, or at least another item to add to the 'Pros' column.

24
An unofficial WWDC app for OS X github.com
102 points by insidegui  12 hours ago   38 comments top 10
1
Longhanks 11 hours ago 3 replies      
So refreshing to see a nice desktop app built without Electron.

A well defined interface, designed specifically for the platform the author targeted. Major props to the author!

2
egwynn 11 hours ago 1 reply      
I mis-parsed the headline as WWDC for (OS X 4.0) instead of (WWDC for OS X) 4.0. I was confused until I clicked.
3
Philipp__ 11 hours ago 0 replies      
Amazing! So nice... When I saw the GitHub page, and that Swift yellow color in the language details line, a smirk was drawn on my face! <3 Native desktop apps
4
FireBeyond 11 hours ago 0 replies      
I thought this was an official app for a moment. Saw this:

"The app has a powerful search feature. When you first launch the app, It indexes the videos database and downloads transcripts from ASCIIWWDC, so when you search, not only will you get search results from session titles and descriptions, but also from what the presenter said in the sessions."

And was all pissed off.

"Damnit, Apple, how about you do something similar for the App Store?!?"

Then I realized...

5
SakiWatanabe 3 hours ago 0 replies      
Electron-based apps take way too long to start up. Yes, I like Visual Studio Code etc. But I hate its startup time, so for quick changes I just use Sublime Text or vim.
6
czk 11 hours ago 0 replies      
This is pretty convenient for watching past videos. The inclusion of subtitles from ASCIIWWDC is a nice touch.
7
xufi 5 hours ago 0 replies      
Pretty cool. I've been wanting something like this for a bit, since I've been wanting to make an app and this is coming up, so I can keep in touch with the new technologies being announced. Thanks!
8
nicky0 11 hours ago 0 replies      
This is a superbly designed app and really well implemented. Thanks for posting it here.

Genuinely useful both for past WWDC videos and (presumably) for keeping up to date next week. Love the ability to search text within the talks.

9
archagon 10 hours ago 1 reply      
I wish there was an easy way to support projects like this. Click a button? Put a quarter in the tip jar.
10
dceddia 9 hours ago 1 reply      
Anyone else notice the string "macOS" in the screenshot? Is Apple moving away from the OS X naming scheme?
25
HTTPie: a CLI, cURL-like tool for humans github.com
92 points by celadevra_  3 hours ago   31 comments top 12
1
alayne 2 hours ago 2 replies      
I've been using HTTPie a lot more recently. It really takes the tedium out of using curl and I can produce color coded output for people. However, I am still finding myself in situations where I can't figure out how to induce the correct request. In other words, the user friendliness of being able to do things like construct JSON from parameters is great until it isn't.

Is there anything similar for GUI users? The standalone application form of Postman is popular with some coworkers for general HTTP work as is Fiddler on Windows.

2
paulannesley 36 minutes ago 0 replies      
I like using HTTPie for many things, however the current release does a bad job of rendering XML, e.g. it'll display `<sitemapindex xmlns="http://">` as `<ns0:sitemapindex xmlns:ns0="http://">`. But I just checked and found the not-yet-released v1.0.0 fixes this by removing the XML formatter completely as discussed in https://github.com/jkbrzt/httpie/issues/443 so my gripe is sorted.

I still tend to go back to cURL when I want to see exactly what's been received, and use httpie for when I know the response headers and body serialization are fine and I want to see the data therein.

3
theaustinseven 3 hours ago 1 reply      
I really like this because by default it gives all of the http headers and makes everything look really nice. Curl still has its place, and I wouldn't dream of replacing it, but I would definitely use this as a sort of command line shortcut. Cool project.
4
the_common_man 2 hours ago 1 reply      
Almost 20k stars. That's quite incredible for a project especially since I have never heard of it before now.
6
rdtsc 2 hours ago 0 replies      
I keep switching between curl + jq vs httpie. Lately I've been using mostly httpie. It is a great tool. One of my favorite things is it builds json objects (say for 'put' and 'post' for example) using command line arguments. So can have:

 $ http put url key1=val1 key2=val2
If one of the fields is a larger nested object can use :=

 $ http put url key1=simpleval1 key2:='{literaljson...}'
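
For comparison, a rough Python requests equivalent of what those two invocations send (the URL and field values below are placeholders): plain key=value pairs become JSON string fields, while key:= embeds literal JSON, so nested objects and non-string types survive.

  import requests

  # http put url key1=val1 key2=val2
  requests.put("http://example.com/api",
               json={"key1": "val1", "key2": "val2"})

  # http put url key1=simpleval1 key2:='{"nested": true, "n": 3}'
  requests.put("http://example.com/api",
               json={"key1": "simpleval1", "key2": {"nested": True, "n": 3}})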

7
gamedna 2 hours ago 0 replies      
First off, I am really surprised how often this gets re-posted to HN. https://hn.algolia.com/?query=httpie&sort=byPopularity&prefi...

I discovered HTTPie a few years back and it has completely replaced curl/wget during our API development and testing.

8
gamedna 2 hours ago 0 replies      
Forgot to mention that when dealing with JSON APIs, httpie + jq is a killer combo.

https://stedolan.github.io/jq/

9
steveax 2 hours ago 0 replies      
There's also a nice auto-complete utility built on top of HTTPie:

https://github.com/eliangcs/http-prompt

10
orliesaurus 2 hours ago 0 replies      
One of the first tools I install every time I buy a new machine
11
homero 2 hours ago 0 replies      
Incredible
12
partycoder 3 hours ago 3 replies      
Right, let's take the human denomination away from people that use cURL...
26
The world of mushroom growing medium.com
64 points by sergioisidoro  9 hours ago   20 comments top 8
1
roel_v 5 minutes ago 0 replies      
But then what about mushroom farms? If everything needs to be so sterile, how do the commercial mushroom farms I've visited work? I know of one that is basically in a few stables next to other stables where cows are kept, with the sterility you'd expect from the average farm; and another one inside old limestone quarries, because the temperature there is low, constant and it's moist. Not the most sterile place I've seen either - they do daily tours even! Is the sterile thing only for some species?
2
pluteoid 5 hours ago 2 replies      
>All the techniques that I learned can be applied to cultivate any kind of mushroom...

If only this were true. Sterile culture techniques only work for the subset of species that aren't obligately mycorrhizal (forming mutualisms with plants), parasitic, or that have other complex ecological requirements. Thus there are all kinds of delicious and interesting species we can't grow so easily, or at all.

But I have a lot of respect for home cultivators like this guy, who go beyond the grow kit stage. It's straightforward to culture and fruit many mushroom species in a properly equipped microbiology lab. But when you're in your kitchen, making do with "gloveboxes"[1] instead of HEPA laminar flow hoods, stovetop pressure cookers instead of autoclaves, and fridges and terrariums instead of programmable incubator units, things can get really challenging.

[1] http://www.instructables.com/id/Glove-bag-for-Mushroom-Growi...

3
finnh 7 hours ago 0 replies      
The article references a mushroom body as the largest on earth [0], which is true by area. But the largest by mass is a stand of aspen trees [1]. Because aspen clones can regenerate vegetatively from their underground roots, they in some ways can be thought of as "a fungus with tree-like appendages" (I forgot where I read that, sadly).

[0] http://www.bbc.com/earth/story/20141114-the-biggest-organism...

[1] https://en.wikipedia.org/wiki/Pando_(tree)

4
mikereedell 3 hours ago 0 replies      
Reminds me of when I lived in southeast PA by Kennett Square, a town with a lot of mushroom farms. I was curious as to why they were in that area. Turns out you need hay and horse urine to grow mushrooms. That area has a lot of hay farms and a lot of horse farms.

Riding a bicycle by the farms when they were changing over a grow house on a humid summer morning is an olfactory experience I won't forget.

6
Alex3917 5 hours ago 2 replies      
For what it's worth, false morels are only considered edible in certain areas. E.g. in New England they're considered deadly poisonous, but they're considered a delicacy in Cincinnati. There are a lot of different species, and also possibly gene transfer across species, so it's not really clear what's going on.
7
andreapaiola 1 hour ago 0 replies      
I live near the best site in the world for porcini (boletus)...
8
mkoryak 5 hours ago 0 replies      
Someone pointed me to a site that sells mushroom pellets that you stick into logs to grow. I'll try it soon.

something like this:

http://www.shii-take.de/irw_lang.454e47.list.4b41543333.html

or this

https://www.mushroomadventures.com/

27
Toward a URL for every function sourcegraph.com
121 points by joeyespo  14 hours ago   61 comments top 20
1
sqs 12 hours ago 2 replies      
Sourcegraph founder here. We built this to make it much easier to grok code. It saves us hours every day. Would love to hear your feedback!

The README has some good links to try Sourcegraph at https://github.com/sourcegraph/sourcegraph/blob/master/READM...:

https://sourcegraph.com/github.com/square/okhttp/-/def/JavaA... (semantic code browsing for Java)

https://sourcegraph.com/github.com/golang/go/-/info/GoPackag... (http.NewRequest used in 8801 repositories)

Sourcegraph supports Go and Java right now. If you want to get access to the upcoming beta of JavaScript, Python, or other languages, send us a note at support@sourcegraph.com or https://twitter.com/srcgraph.

2
foota 3 minutes ago 0 replies      
Don't we already have this in the form of NPM?
3
majewsky 14 hours ago 6 replies      
When I read the title, I was imagining something more theoretical, e.g. a URL encoding of lambda calculus terms.
4
alpyne 12 hours ago 1 reply      
Sourcegraph folk, are you aware of Rich Hickey's codeq [0][1] for clojure:

codeq allows you to track change at the program unit level (e.g. function and method definitions) and query your programs and libraries declaratively, with the same cognitive units and names you use while programming

[0] http://blog.datomic.com/2012/10/codeq.html

[1] https://github.com/Datomic/codeq#codeq

5
z3t4 1 hour ago 0 replies      
Most JS programmers seem to use modules (require/import) as masqueraded globals, like importing complex functions instead of just standalone modules. And in that case it's better to just declare all dependencies in the root (html file). You would probably want to use a package manager though, to keep track of name conflicts and manage the script tags (dependencies of dependencies).

As for central hosting of packages I think it will work. But we will probably need to be able to have many src attributes in script-tags for redundancy and optimal caching.

7
skybrian 13 hours ago 1 reply      
For Go in particular, a possible alternative is godoc.org:

https://godoc.org/flag#Arg

8
heynk 10 hours ago 0 replies      
This should be a great long-tail SEO boost. It's just like one of (Rap) Genius's best early SEO advantages, which was that they had a URL for every line.

https://moz.com/blog/how-i-would-do-seo-for-rap-genius

9
Shendare 14 hours ago 1 reply      
A URL for every function on GitHub, at least. Cool idea.
10
zeveb 10 hours ago 1 reply      
Neat idea, although I'm not sold on the style of the URLs themselves. It'd be cool to introduce a new URL scheme:

 code://github.com/edicl/hunchentoot/master/log.lisp?macro=with-log-stream
That would handle multiple definition namespaces. One could use

 code://github.com/edicl/hunchentoot/master/log.lisp?macro=with-log-stream&commit=0951a0df8fe93d99e6f2aa3f9612a2d6e581e84f
to refer to a particular commit. No idea what the equivalent would look like for other VCSes though.
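
Out of curiosity, a small sketch of how such a URL would pull apart with Python's standard urlsplit/parse_qs (the code:// scheme here is just the hypothetical one above, nothing registered):

  from urllib.parse import urlsplit, parse_qs

  url = ("code://github.com/edicl/hunchentoot/master/log.lisp"
         "?macro=with-log-stream&commit=0951a0df8fe93d99e6f2aa3f9612a2d6e581e84f")

  parts = urlsplit(url)
  params = parse_qs(parts.query)

  host = parts.netloc                        # "github.com"
  repo_path = parts.path                     # "/edicl/hunchentoot/master/log.lisp"
  macro = params["macro"][0]                 # name in the "macro" namespace
  commit = params.get("commit", [None])[0]   # optional pin to an exact commit

  print(host, repo_path, macro, commit)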

11
alberto_balsalm 12 hours ago 0 replies      
Some of you may find Unison interesting: http://unisonweb.org/2015-05-07/about.html#post-start
12
ed 10 hours ago 1 reply      
It'd be cool if you could create a permalink from a github URL (with a line no. param).

Then the interface could look like tinyurl (anonymously paste a github link, get a sourcegraph link in return).

Bonus points if it simply redirects you to the new line number on GitHub's master.

13
jesalg 13 hours ago 1 reply      
Curious why they decided not to work on adding Ruby support especially when underlying srclib which they use has support for it.
14
danvoell 12 hours ago 1 reply      
I'm not sure if I understand this correctly, but my first thought is, what if the function that I am using needs to change? For instance, using css, if I later discover that the design was incorrect, I would rather just change the design code instead of updating each linking instance.
15
sandebert 13 hours ago 1 reply      
...for an extremely narrow definition of "world". (Github
16
kazinator 13 hours ago 0 replies      
Or, just embed the entire function in the URL. :
17
waxjar 10 hours ago 1 reply      
What happens if a function is renamed?
18
Roritharr 12 hours ago 1 reply      
Hi, this looks very interesting, but maybe more on the side of a feature I wish GitLab, Bitbucket, etc. would have than something that warrants a dedicated service.
19
nojvek 13 hours ago 0 replies      
Would make a lot of node modules as functions redundant
20
partycoder 3 hours ago 0 replies      
You mean like RPC? If so take a look at Thrift, Protocol buffers, Avro, or whatever.

If the idea is to just expose reusable functions, you can take a look at https://algorithmia.com/

28
Facebook's AI Research Labs fastcompany.com
78 points by Osiris30  14 hours ago   16 comments top 4
1
mturmon 12 hours ago 1 reply      
Clearly the examples of AT&T Research and MSR are close to LeCun and Candela's minds.
2
projectramo 12 hours ago 1 reply      
I appreciate that they took the time to set up two different labs to differentiate AI from ML.

To my mind, the former is largely (though not exclusively) based on logical reasoning (as in formal logic) and the latter is largely (though not exclusively) based on statistical reasoning.

I hope one of these articles will take some time to bring us up to date on the recent developments in contrast to the other.

3
joe_the_user 12 hours ago 4 replies      
Honest question: Is making advertising effective where the battle for the best AI is really going to be fought?

I feel like it's important to somehow measure the level of progress being made by the current explosion of deep learning processes. I'm personally not that impressed by translation applications or Google search innovations - the translations I see still seem barely functional, noticeably better than purely literal translation but not very much more useful than it.

Alphago was definite progress. Are there that many problems that could be approached in a similar way?

Clearly, making ads work is important to a company's bottom line. But it seems like there are going to be hard limits to just extrapolating patterns - I know YouTube's recommendation engine has gotten worse for me over time, and it seems like even the smartest entity in the world can only figure out so much future buying from past online surfing and past purchases combined. And even more, there's only so much ads in particular are going to change this.

4
meeper16 12 hours ago 1 reply      
Google's is 100x Facebook's.
29
17,000 islands of imagination: discovering Indonesian literature theguardian.com
28 points by lermontov  10 hours ago   1 comment top
1
contingencies 3 hours ago 0 replies      
Scary. Not only have I personally met the organizers of the festival that was muzzled last year, but I have also filmed in Indonesia which could have resulted in a nominal 'visa violation'. Of course, the latter example was clearly politically motivated: it's widely held that the Indonesian military authorities profit handsomely from and even help to organize piracy. On the other hand, in my experience Indonesia is pretty decentralized, somewhat like the 1990s internet. People in many areas hold much more affinity for their island or regional identity than that of the nation, and significant animosity is held against the politically dominant Javanese. In fact, people on some islands even said "we like you guys, you can stay here as long as you like, fear not central government visa issues". Back on the political side, I once attended a Wikimedia event at the National University of Jakarta, where it seemed that the main bureaucratic function was the decision on whether or not to approve additional 'minority' language Wikipedias. A young man from the Minangkabau region of Sumatra had endured a ridiculous amount of bureaucracy to reach this point, and I argued heavily in favor of adding the language since in my view it cost so little to maintain and Wikipedia has no business politicizing language availability and should equally support all linguistic communities who adhere to the general format, regardless of size. Unfortunately, the Minangkabau Wikipedia seems to have entirely stagnated in growth over the last three years, which while sad does not devalue its >220,000 articles: https://min.wikipedia.org/wiki/Laman_Utamo
30
Bold: Make Your Words Stand Out bold.io
35 points by GuiA  8 hours ago   23 comments top 12
1
guywithabike 7 hours ago 2 replies      
This basic post with two images (four counting the logo and author avatar) clocks in at 30 requests and a plump 6.10 MB. Bold!
2
asimuvPR 6 hours ago 2 replies      
Is this a new blogging engine? I can't seem to get a proper mental picture for some reason.
3
ytjohn 5 hours ago 0 replies      
After reading over their landing page a few times, I think that this is a writing assistant "service". You start writing out your proposal and some automated "assistant, not a bot" is supposed to analyze your writing and provide suggestions to make it more memorable and easier to understand (group this into 3 phrases instead of sentences; remove this adverb; change this sentence from passive to active). I assume once you write it up, it gives you the ability to share your masterpiece as a link, possibly export to Word/PDF/stone tablet.

Much like Microsoft's Clippy, the idea is pretty sound, but a bad implementation will make it more of an annoyance than a feature. Given the confused meandering of their landing page, I don't have much hope for their product.

4
jbob2000 7 hours ago 0 replies      
What problem is this solving? I don't really need ambiance music in a word processor.
5
wcarss 6 hours ago 0 replies      
The content is served in a span in a span in 11 nested divs in a span in a section in 2 more nested divs -- at least it looks nice.

The "discuss on slack" feature is pretty neat. The thought of being able to hop into a discussion with people on a topic rather than making static posts would be cool.

6
chasing 3 hours ago 0 replies      
As an aside: I do not like the idea behind the Hemingway app. Editing prose is not the same as debugging code. Removing adverbs will not make you a good writer. And the whole idea of having some bot making automated comments on my text as I write it sounds distracting at best.

If you want to write better, write more. And let people read your writing. Hear what they have to say. Style handbooks like "Writing with Style" or "The Elements of Style" are great, but you should attempt to understand the reasons behind their recommendations, not just use them as a mindless checklist.

Craft your own voice.

7
needcaffeine 6 hours ago 1 reply      
I truly honestly have no idea what this is. Is it a blogging platform? A CMS?
8
acafourek 4 hours ago 0 replies      
I can definitely see how this could be useful for teams that create content collaboratively. When our team works on release notes, blog posts, support articles, etc we use a combination of Slack and Google Docs.

After editing, we post to Tumblr (product updates), Medium (blog/marketing) or any one of half a dozen other places where we put stuff. Bold feels to me like Medium with bonus collaboration features + integrations. Tools like http://www.hemingwayapp.com/ built in sound awesome. Add in the ability to create your own assistants (import brand assets, pull up GitHub issues, insert content from your YouTube channel, find the right gif for this paragraph) and it adds up to a much more centralized writing experience for modern work-related content creation.

9
bcherny 7 hours ago 0 replies      
Looks like a Medium clone?
10
King-Aaron 2 hours ago 0 replies      
How many times does the user need to read the page before they discover what bold.io even is? I'm at about five now and just want to know before I move on to my sixth.
11
undoware 6 hours ago 1 reply      
"Hi! It looks like you're trying to recycle an idea from the late 90's into yet another SaaS product. Would you like to (a) post to HN a bloated landing page with almost no details, (b) collect email addresses, or (c) both?"
12
libeclipse 6 hours ago 1 reply      
What's the source for the first image?
       cached 8 June 2016 07:02:01 GMT