hacker news with inline top comments - 24 Aug 2016
1
Planet Found in Habitable Zone Around Nearest Star eso.org
581 points by Thorondor  5 ago   261 comments top 26
1
mjhoy 3 ago 7 replies      
The fun stuff is buried in footnote [4]:

> The actual suitability of this kind of planet to support water and Earth-like life is a matter of intense but mostly theoretical debate. Major concerns that count against the presence of life are related to the closeness of the star. For example gravitational forces probably lock the same side of the planet in perpetual daylight, while the other side is in perpetual night. The planet's atmosphere might also slowly be evaporating or have more complex chemistry than Earth's due to stronger ultraviolet and X-ray radiation, especially during the first billion years of the star's life. However, none of the arguments has been proven conclusively and they are unlikely to be settled without direct observational evidence and characterisation of the planet's atmosphere. Similar factors apply to the planets recently found around TRAPPIST-1.

2
taliesinb 4 ago 2 replies      
Wow, amazing result. And talk about synchronicity - just last night I watched an interesting 2015 talk about the search for planets around Alpha Centauri using the radial velocity technique: https://www.youtube.com/watch?v=eieBXGpNYyE

The speaker even mentioned the previous incorrect HARPS announcement, which was later found to be an artefact due to the windowing function they used - a pretty embarrassing mistake. This new finding involves a completely different period: 11.2 days instead of the previous 3.24 day signal.

Also, link to the Nature paper for the lazy: http://www.eso.org/public/archives/releases/sciencepapers/es...

3
kjell 4 ago 2 replies      
Just in time for the third book in Liu Cixin's space opera ("Remembrance of Earth's Past") to be released in English next month: https://www.goodreads.com/book/show/25451264-death-s-end

Previously on HN:
https://duckduckgo.com/?q=site%3Anews.ycombinator.com+cixin+...
https://hn.algolia.com/?query=Cixin%20Liu&sort=byPopularity&...

4
owenversteeg 7 ago 0 replies      
Although we can't image it with current technology (JWST and Hubble both have resolution of 100 milliarcseconds) we might be able to within a few years.

IR interferometers will be able to give us some data in just a few years, and the E-ELT/TMT will also let us "image" it. The "image" won't be anything you can really look at (the E-ELT has a resolution of one milliarcsecond) but it'll give us important data.
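
As a rough back-of-the-envelope check (assuming Proxima b orbits at about 0.05 AU, Proxima Centauri is about 4.24 light-years or 1.3 pc away, and the planet is roughly Earth-sized):

  \theta_{\mathrm{sep}}  \approx \frac{0.05\,\mathrm{AU}}{1.3\,\mathrm{pc}} \approx 0.04'' \approx 40\,\mathrm{mas}
  \theta_{\mathrm{disk}} \approx \frac{1.3\times10^{4}\,\mathrm{km}}{4.0\times10^{13}\,\mathrm{km}} \approx 3\times10^{-10}\,\mathrm{rad} \approx 0.07\,\mathrm{mas}

So a ~1 mas instrument could in principle separate the planet from its star (the ~100 mas of Hubble/JWST cannot), but the planet's own disk, at well under a milliarcsecond, stays far out of reach, hence "data" rather than a picture.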

5
afreak 4 ago 17 replies      
Keep in mind that at best it would take maybe 1,000 years with current technology to get there with a probe or human-supporting ship. It would be highly unpopular however as it involves exploding nuclear bombs behind the craft to get it there that fast--that and it would probably cost trillions to build the thing.
6
thatha7777 26 ago 0 replies      
Sorry for the , but if you took a Space Shuttle to Proxima Centauri it'd take 160,865 years.

http://www.wolframalpha.com/input/?i=distance+to+Proxima+Cen...
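
The figure roughly checks out (assuming the Shuttle's ~7.8 km/s orbital speed and 4.24 light-years to Proxima):

  t \approx \frac{4.24\,\mathrm{ly} \times 9.46\times10^{12}\,\mathrm{km/ly}}{7.8\,\mathrm{km/s}} \approx 5.1\times10^{12}\,\mathrm{s} \approx 1.6\times10^{5}\,\mathrm{yr}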

7
hoodoof 26 ago 1 reply      
Is any article ever published on an exoplanet without speculating that it might harbor life?
8
derflatulator 7 ago 0 replies      
But is everything on a cob?
9
thedangler 4 ago 1 reply      
So I guess we start sending light patterns to that planet and wait 8 years for a response?
10
natch 1 ago 0 replies      
What's super confusing to me is: If the planet is so much closer to its star, and the star is so much larger than ours, why does the artist's conception show the star as being so "small" (perceived size, not actual size) as viewed from the planet? Was the artist just not thinking straight that day, or am I missing something? Yes I understand it's an "artist's conception" but the question remains.
12
ngoldbaum 4 ago 0 replies      
And here's the paper describing the discovery: http://www.eso.org/public/archives/releases/sciencepapers/es...
13
sakopov 2 ago 1 reply      
Posted this story when it came out a week ago but it got no traction. [1] This is quite exciting but as far as I understand we are not quite there in terms of technology to reach it within my lifetime.

[1] https://news.ycombinator.com/item?id=12302489

14
shmerl 2 ago 1 reply      
Is it feasible to send deep space probes to such a planet? Let's say the probe is accelerated to high sub-light speeds with ion thrusters. Can it reach it in some sensible time then?
15
bcjordan 4 ago 1 reply      
Would it remain in a habitable state longer than Earth?
16
Diederich 3 ago 3 replies      
Does anyone here have an idea of which kind of resolution https://en.wikipedia.org/wiki/James_Webb_Space_Telescope will provide? I'm assuming that this little rock might not even occupy a single pixel, but I'd love to be wrong.
17
Cortez 4 ago 1 reply      
There are too many factors to say whether the zone may be habitable for life.
18
sangd 4 ago 0 replies      
That would take New Horizons 73,796 years to fly by.
19
jomamaxx 37 ago 0 replies      
"Major concerns that count against the presence of life are related to the closeness of the star."

I think the 'major concerns' are that we don't exactly know what 'life' is, and that since we have no information about any other 'biological entities' such as ourselves anywhere else, we can't entirely assume that it's a common thing.

I suggest that if we find life out there, it will be very common. But it's not entirely plausible that this is the case.

It's an interesting statistical game, made very difficult by the fact we don't fully grasp how 'we' came to be in the first place. I mean, we have the gist of it, but there's so much that remains unknown.

20
withinrafael 4 ago 0 replies      
Melnorme are known to hang out around that system.
21
ralusek 3 ago 0 replies      
Funny that there have been exoplanets found around so many stars, but our closest neighbor can still surprise us.
22
stormbrew 4 ago 0 replies      
Was there an error in an early version of this article? There are two comments in here saying 500ly away. Proxima is only ~4ly away.
23
lutusp 4 ago 0 replies      
This discovery will greatly increase interest in gigantic telescopes, to allow a closer look at the planet and its atmosphere.
24
api 4 ago 7 replies      
How big of a space telescope would we need to see this planet in any actual detail?

One of my sci-fi fantasies is to take a photo of an extrasolar planet and see someone else's city lights. :) Of course if we could see that we could also probably detect their radio emissions, but seeing someone else's lights would somehow be cooler.

25
bordercases 2 ago 0 replies      
Pod Recovered
26
ommunist 1 ago 0 replies      
If there is oil on it, centaurian bloody dictatorship cannot be tolerated by progressive democratic forces.
2
Show HN: Carbide - A New Programming Environment trycarbide.com
69 points by antimatter15  1 ago   22 comments top 14
1
nostrademons 5 ago 1 reply      
I think that something like this will probably be the future of programming, but Carbide itself needs to dial it back and focus on which data visualizations give the biggest bang for the least obtrusiveness.

Apple's been moving in a similar direction with Swift playgrounds, and recent Java IDEs (IntelliJ, and I think Eclipse) will display the values of variables next to the line of code when you pause in the debugger. These are both useful features. They get cluttered really quickly, though, and in the playground case take you out of your normal development flow.

If you want this to be impactful, focus on delivering information at your fingertips without delivering information overload. The core idea of being able to inspect & manipulate run-time values alongside the code that generates them is sound. The implementation - with lots of fancy gadgets that overshadow the code itself - needs some design love.

2
antimatter15 1 ago 0 replies      
Hey HN!

I'm one of the creators of Carbide, and I'm really excited to share it with you all.

We're thinking of releasing Carbide as open source in the coming weeks if there's a community interested in building stuff on top of it.

One of the areas we'd appreciate help with is adding support for different languages: Python, Scala, Haskell, Rust, etc.

Other than that, general feedback / questions welcome.

We're scrambling to turn off jet-engine mode on the website :)

3
monkmartinez 32 ago 1 reply      
Cool looking site/project... However, the web page kicked the fans on my laptop into jet mode.

Is this like Jupyter or built with Jupyter tech?

4
pmontra 3 ago 0 replies      
The most interesting feature is how it computes the inverse of the program to yield the inputs that produce a given output. I'm not sure if it's useful but it's cool.

But saving to public gists, no thanks.

5
wodenokoto 22 ago 2 replies      
Looks interesting, but the website is so resource intensive that I practically can't scroll down on it. I gave up reading what it is.
6
jc4p 18 ago 0 replies      
This looks like it could be cool. I'm mostly a plaintext editor kind of programmer but an IDE that helps me get my job done better would obviously be the better solution.

There's... a lot happening here though. What does:

> Comments live in a Rich Text sidebar #LiterateProgramming

mean? I played around with some samples and it seems that there's a method for displaying text or something?

I was hoping it'd be an inline `// yeah I know doing +1 looks wrong but it's because` --> automatic transcribing over to a sidebar but I don't know what's actually happening.

What does:

> Imports modules automatically from NPM or GitHub

mean? Does it mean "import" in the sense that you don't have to write the import statement, or in the `npm install` sense? What happens if I misspell a package name and there is a malicious package under that name? Will it auto install and auto run the post install scripts??

7
elliotec 8 ago 0 replies      
But, why?

Like what's the purpose of using this as opposed to vim and a browser?

And what do they mean that it requires no installation or setup? Is it not a native program?

How does one use this to give it a try?

None of this was clear after a few reads through that page.

8
hibbelig 30 ago 0 replies      
How does this compare to LightTable?
9
outworlder 9 ago 0 replies      
The terminology is interesting. "Notebooks", "Kernels"...

So, Jupyter-like?

10
_prototype_ 6 ago 0 replies      
Their neural network example is broken
11
scottmf 31 ago 1 reply      
I must be missing something. Link to download or Github?
12
fake-name 9 ago 1 reply      
"A new programming environment" for javascript.

Welp, there goes my interest.

13
kasajian 12 ago 0 replies      
dafuq!
14
antar 20 ago 0 replies      
Looks limiting.
3
Text Summarization with TensorFlow googleblog.com
124 points by runesoerensen  3 ago   15 comments top 4
1
8ig8 2 ago 1 reply      
There's SUMMRY (http://smmry.com). I don't recall it being very smart -- that is, it doesn't rewrite sentences. It extracts the most important/relevant ones to basically shorten an article. It's definitely useful.
2
Cozumel 1 ago 2 replies      
I wonder what it'd be like on a novel, say something like 'pride and prejudice', would it be able to essentially summarise the plot or would it end up like 'movie plots explained badly'

Either way, this is great research with a ton of real world applications!

3
ProxCoques 2 ago 1 reply      
That reminds me. Whatever happened to http://summly.com?
4
hyperbovine 2 ago 2 replies      
The table is nice, but I'd like to see examples where it performs poorly as well.
4
How AI and Machine Learning Work at Apple backchannel.com
154 points by firloop  4 ago   77 comments top 12
1
throwanem 2 ago 5 replies      
I like how Apple can't win here.

If they publish, they aren't doing anything new and they haven't innovated since Steve died, and they should really just give up because there's obviously no point to anything they do and hasn't been since 1997.

If they don't publish, they're evil secretive bastards who don't contribute to the ML community and probably drown puppies or something because who knows what goes on behind closed doors?

I don't really have a dog in this fight, except inasmuch as I'm a generally satisfied iPhone owner. I just think it would be really neat if people would settle on one narrative or the other, instead of keeping on with both at once.

2
Smerity 3 ago 4 replies      
None of this really answers the overlying question that Jerry Kaplan and Oren Etzioni raised. The question raised by most in the field isn't whether Apple use AI/ML internally, the real question is why they avoid the research community so strongly.

For me, the greatest thing about the ML/AI community is how open it is and how strong a sense of camaraderie there is between people across the entire field, regardless of whether they're from industry or academia.

Employees from competing companies will meet at a conference and actually discuss methods.
Papers are released to disseminate new ideas and as a way of attracting top tier talent.
Code is released as a way of pretraining students in the company's stack before they ever step through the company's doors.
Papers are published on arXiv when the authors feel they're ready - entirely free to access - without waiting for a conference for their ideas to be spread.

This entire push of camaraderie has accelerated the speed at which research and implementation have progressed for AI/ML.

... but Apple are not part of that. They publish little and more broadly don't have a good track record. On acquiring FoundationDB, they nixed it, with little respect to the existing customers. Fascinating pieces of technology lost. If they aren't using the exact thing internally, why not open source it? I fear the same is likely to happen to Turi, especially sad given the number of customers they had and the previous contributions that many of Turi's researchers made to the community via their published papers.

Apple may change in the future - they may become part of the community - but a vague article of self congratulation isn't going to sway me either direction.

"We have the biggest and baddest GPU farm cranking all the time" ... Really? ¯\_(ツ)_/¯

3
mikesf888 2 ago 0 replies      
IMO the most profound quote in the interview: "Our practices tend to reinforce a natural selection bias - those who are interested in working as a team to deliver a great product versus those whose primary motivation is publishing," says Federighi.
4
theinternetman 1 ago 0 replies      
I find these heavily curated advertorials Apple has been pushing out (this and the recent Wired advertorials come to mind) a bit of a sign that not everything is sunny at One Infinite Loop.
5
devy 35 ago 0 replies      
There are a lot of criticisms here about Apple's secretive AI/ML development practices. But it's not unusual given Apple's long cultural heritage of secrecy.

From a consumer's perspective, I applaud their firm belief in customer privacy as well as their pioneering of consumer products based on AI/ML development AND with differential privacy in mind.[1][2]

[1] http://highscalability.com/blog/2016/6/20/the-technology-beh...

[2] http://www.imore.com/our-full-transcript-talk-show-wwdc-2016...

6
KKKKkkkk1 1 ago 1 reply      
Why should Apple researchers contribute back to the AI community on their shareholders' dime? What makes AI special as opposed to any other field of computer science?
7
LeanderK 3 ago 1 reply      
i am really disappointed by apple. I respect Apple's wish to develop products in secrecy and i understand that you can't just open source your secret sauce. I also really like their products.

But not publishing your advancements harms the community greatly. It's like building your product entirely with open-source software (the published work of other researchers) and not contributing back.

8
tahoeskibum 21 ago 0 replies      
Nice article, but in my perception, Apple is way behind in AI vs. Google on mobile. Siri's current speech recognition is still far behind Google on my iPhone. Half of the time Siri doesn't recognize something but Google recognizes it right away. Apple Maps continues to lag in the following features: biking and public transit and even things like which lane to take on a big interchange. As a result I end up using Google (Now) and Google Maps by default.
9
emehrkay 1 ago 1 reply      
Off topic a bit, but this article makes me wonder where does one start with Deep/Machine Learning/AI? I've seen a few posts the past few days talking about the topic (Deep Learning with Python, etc.), but what are the core requirements regarding math, statistics, programming, etc? Where should a web developer start?
10
msoad 53 ago 0 replies      
Apple sends their employees to ML/AI conferences with fake company names on their badges to avoid leaking a single bit of their knowledge. I don't know how any AI researcher resists working at Apple!
11
vonnik 2 ago 3 replies      
I have a big problem with articles like this.

Apple's PR is notorious for cracking the whip, which means that the "inside story", if they give it to you, comes with a warning to the journalist to behave and be nice. Levy's piece is generous with flattery and cautious with criticism. He quotes Kaplan and Etzioni high and briefly in the piece, and spends the rest of it refuting them. Apple will give him another inside story down the road.

Apple has a big question to resolve for itself about the tools it's going to use to develop this. It can't go with TensorFlow, because TF is from Google. It's kind of at another turning point, like the one in the early 90s when it needed its own operating system and Jobs convinced them to buy NeXT and use what would become OS X.[0]

The most pointed question to ask is: What are they doing that's new? The use cases in the Levy story are neat, and I'm sure Apple is executing well, but they don't take my breath away. None of those applications make me think Apple is actually on the cutting edge. There's no mention of reinforcement learning, for example; there is no AlphaGo moment so far where the discipline leaps 10 years ahead. And the deeper question is: Is Apple's AI campaign impelled by the same vision that clearly drives Demis Hassabis and Larry Page?

We see what's new at Google by reading DeepMind and Google Brain papers. Everyone else is letting their AI people publish, which is a huge recruiting draw and leads to stronger teams. Who, among the top researchers, has joined Apple? Did they do it secretly? (This is plausible, and if someone knows the answer, please say...) The Turi team is strong, yes, but can they match DeepMind? If Apple hasn't built that team yet, what are they doing to change their approach?

Another key distinction between Apple and Google, which Levy points out, is their approach to data. Google crowdsources the gathering of data and sells it to advertisers; Apple is so strict about privacy that it doesn't even let itself see your data, let alone anyone else. I support Apple's stance, but I worry that this will have repercussions on the size and accuracy of the models it is able to build.

> "We keep some of the most sensitive things where the ML is occurring entirely local to the device," Federighi says.

Apple says it's keeping the important data, and therefore the processing of that data, on the phone. Great, but you need many GPUs to train a large model in a reasonable amount of time, and you simply can't do that on a phone. Not yet. It's done in the cloud and on proprietary racks. So when he says they're keeping it on the phone, does he mean that some other encrypted form of it is shared on the cloud using differential privacy? Curious...

> "How big is this brain, the dynamic cache that enables machine learning on the iPhone? Somewhat to my surprise when I asked Apple, it provided the information: about 200 megabytes.."

Google's building models with billions of parameters that require much more than 200MB, and that are really, really good at scoring data. I have to believe either that a) Apple is not telling us everything, or b) they haven't figured out a way to bring their customers the most powerful AI yet. (And the answer could very well be c) that I don't understand what's going on...)

[0] If they have a JVM stack, they should consider ours: http://deeplearning4j.org/

12
dstaten 2 ago 3 replies      
Siri has gotten worse over time, or at least that's what friends and I have noticed.
5
Does a compiler use all x86 instructions? (2010) pepijndevos.nl
187 points by pepijndevos  6 ago   119 comments top 20
1
ajenner 5 ago 6 replies      
It doesn't, because there are lots of special-purpose x86 instructions that would be more trouble than they're worth to teach a compiler about. For example, instructions for accessing particular CPU features that the C and C++ languages have no concept of (cryptographic acceleration and instructions used for OS kernel code spring to mind). Some of these the compiler might know about via intrinsic functions, but won't generate for pure C/C++ code.

Regarding the large number of LEA instructions in x86 code - this is actually a very useful instruction for doing several mathematical operations very compactly. You can multiply the value in a register by 1, 2, 4 or 8, add the value in another register (which can be the same one, yielding multiplication by 3, 5 or 9), add a constant value and place the result in a third register.
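
As a concrete illustration, here's a small, hypothetical C example; the assembly in the comments is the kind of output an optimizing x86-64 compiler commonly produces for it (illustrative, not taken from any particular compiler):

  /* scale-and-add arithmetic of the kind a single LEA can encode */
  long index_calc(long base, long i) {
      /* base + i*8 + 3: a multiply by 8, a register add, and a constant add */
      return base + i * 8 + 3;
      /* typical optimized output:
       *   lea rax, [rdi + rsi*8 + 3]
       *   ret
       * three operations packed into one instruction, without touching flags */
  }

  /* multiply by 9 via the "same register twice" trick mentioned above */
  long times9(long x) {
      return x * 9;   /* often emitted as: lea rax, [rdi + rdi*8] */
  }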

2
jcranmer 2 ago 1 reply      
In general:

* x87 floating point is generally unused (if you have SSE2, which is guaranteed for x86-64)

* BCD/ASCII instructions

* BTC/BTS/related instructions. These are basically a & (1 << b) operations, but because of redundant uses, it's generally faster to do the regular operations

* MMX instructions are obsoleted by SSE

* There's some legacy cruft (e.g., segment management) that's generally unused by anyone not in 16-bit mode.

* There are a few odd instructions that are basically no-ops (LFENCE, branch predictor hints)

* Several instructions are used in hand-written assembly, but won't be emitted by a compiler except perhaps by intrinsics. The AES/SHA1 instructions, system-level instructions, and several vector instructions fall into this category.

* Compilers usually target relatively old instruction sets, so while they can emit vector instructions for AVX or AVX2, most shipped binaries won't by default. When you see people list minimum processor versions, what they're really listing is which minimum instruction set is being targeted (largely boiling down to "do we require SSE, SSE2, SSE3, SSSE3, SSE4.1, or SSE4.2?").

As for how many x86 instructions, there are 981 unique mnemonics and 3,684 variants (per https://stefanheule.com/papers/pldi16-strata.pdf). Note that some mnemonics mask several instructions--mov is particularly bad about that. I don't know if those counts are considered only up to AVX-2 or if they extend to the AVX-512 instruction set as well.

3
fizixer 3 ago 7 replies      
And therein lies the rub.

What is the minimum number of instructions a compiler could make use of to get everything done that it needs?

I came across an article that says 'mov is turing complete' [1]. But they had to do some convoluted tricks to use mov for all purposes.

I think it's safe to say that about 5-7 instructions are all that's needed to perform all computation tasks.

But then:

- Why do compilers not strive to simplify their code-gen phase, or enable themselves to do advanced instruction-level program analysis, or both?

- Why do microprocessors not strive for simplicity, implement only a handful of instructions in an optimized way, with a very small chip footprint, to be followed by proliferation of cores (think 256-core, 512-core, 1024-core).

Besides the completely valid reason that humans tend to overly-complicate their solutions, and then brag about it, the main reason is historical baggage and the need for backwards compatibility.

Intel started with a bad architecture design, and only made it worse decades after decades, by piling one bunch of instructions over another, and what we now have is a complete mess.

On the compiler front, the LLVM white-knights come along and tell people 'you guys are wimps for using C to do compilers. Real men use monsters like C++, with dragons like design-patterns. No one said compiler programming is supposed to be as simple as possible.'

To those lamenting javascript and the web being broken, wait till you lift the rug and get a peek at the innards of your computing platform and infrastructure!

[1] https://www.cl.cam.ac.uk/~sd601/papers/mov.pdf

4
barrkel 4 ago 0 replies      
I extended the Borland debugger's disassembler (as used by Delphi and C++ Builder IDEs) to x64, so I had professional reason to inspect the encodings. There are whole categories of instructions not used by most compilers, relating to virtualization, multiple versions of MMX and SSE (most are rarely output by compilers), security like DRM instructions (SMX feature aka Safer Mode), diagnostics, etc.

On LEA: LEA is often used to pack integer arithmetic into the ModRM and SIB prefix bytes of the address encoding, rather than needing separate instructions to express a calculation. Using these, you can specify some multiplication factors, a couple of registers and a constant all in a single bit-packed encoding scheme. Whether or not it uses different integer units in the CPU is independent of the fact that it saves code cache size.

5
JoeAltmaier 5 ago 0 replies      
Intel's own optimizing C++ compiler uses more, or, well, different ones anyway. It's really amazing what it can do. Uses instructions I never heard of.
6
nixos 4 ago 8 replies      
My question is if compilers use "new" x86 instructions, as then the program won't work at all on old systems.

For example, if Intel decided today that CPUs need a new "fast" hashing opcode (I don't know if they actually do), a compiler can't compile to it, as programs won't work on older computers.

Is it like the API cruft in Android, where "new" Lollipop APIs are introduced for 10 years from now, when no one uses any phones from before 2014?

7
shanemhansen 5 ago 4 replies      
There are instructions that would almost never be useful. See Linus's rant on cmov http://yarchive.net/comp/linux/cmov.html

The tl;dr is that it would only be useful if you are trying to optimize the size of a binary.

8
rwmj 5 ago 1 reply      
Of course what really matters is which instructions are dynamically used the most. Can Intel performance counters collect that data? You could modify QEMU TCG mode to collect it fairly easily.
9
Const-me 1 ago 0 replies      
> It would be interesting to break it down further in normal instructions, SIMD instructions

There's a free disassembler library with that functionality: http://www.capstone-engine.org/

Here's that breakdown, in its .NET wrapper library: https://github.com/9ee1/Capstone.NET/blob/master/Gee.Externa...

10
35bge57dtjku 2 ago 1 reply      
> Note that the x86 was originally designed as a Pascal machine, which is why there are instructions to support nested functions (enter, leave), the pascal calling convention in which the callee pops a known number of arguments from the stack (ret K), bounds checking (bound), and so on. Many of these operations are now obsolete.

http://stackoverflow.com/questions/26323215/do-any-languages...

11
chkras 2 ago 0 replies      
"It would be interesting to break it down further in normal instructions, SIMD instructions, other optimizations, and special purpose instructions if anyone can be bothered categorizing 600+ instructions."

sandpile.org

Also, nop == xchg acc,acc

12
rdtsc 5 ago 6 replies      
> but I have no clue why there are so many lea everywhere.

Pointer arithmetic? Which is used for well, ... many things.

13
sklivvz1971 4 ago 2 replies      
The article assumes that no software in bin is written natively in asm or has asm blocks or linked objects... which seems a bit out there.
14
filereaper 5 ago 0 replies      
The point of a compiler, specifically the code generator, is to use the most effective instruction where applicable to get the job done. It's not necessary to have full coverage over the entire instruction set.

Sometimes new and fancy instructions can end up being slower than using more, but more standard, "older" instructions to get the job done.

15
JosephRedfern 1 ago 0 replies      
Can we not look at the compiler source-code itself, rather than binaries generated by the compiler?
16
Hydraulix989 5 ago 2 replies      
mov is Turing complete
17
WalterBright 3 ago 1 reply      
I don't know of any that use the BCD instructions like AAA. Other instructions have been essentially obsoleted like STOSB, etc.
18
epx 4 ago 3 replies      
Certainly the BCD instructions are not used?
19
kazinator 4 ago 2 replies      
Likely counterexample: GCC probably doesn't use AAA (ASCII Adjust after Addition). Or does it?
20
0xdeadbeefbabe 4 ago 2 replies      
What's the name for a piano song that touches all the keys on the keyboard?
6
Ways Your Wi-Fi Router Can Spy on You theatlantic.com
117 points by secfirstmd  4 ago   31 comments top 15
1
bgentry 2 ago 2 replies      
A system called WiKey presented at a conference last year could tell what keys a user was pressing on a keyboard by monitoring minute finger movements. Once trained, WiKey could recognize a sentence as it was typed with 93.5 percent accuracy - all using nothing but a commercially available router and some custom code created by the researchers.

Ok that's pretty cool. But incredibly concerning.

The actual abstract of that paper mentions much higher figures for individual keys:

WiKey achieves more than 97.5% detection rate for detecting the keystroke and 96.4% recognition accuracy for classifying single keys. In real-world experiments, WiKey can recognize keystrokes in a continuously typed sentence with an accuracy of 93.5%.

http://dl.acm.org/citation.cfm?id=2790109

2
xg15 3 ago 0 replies      
A rare example of a misleading title that is actually too underwhelming: No, they are not talking about spying on your internet traffic but using the router to observe your real-world movements...
3
Vexs 35 ago 0 replies      
Chainfire of SuperSU/android fame has an app that does something to combat this called pry-fi[1] that randomizes a bunch of data, and can appear to be many devices at once. Requires root of course.

https://play.google.com/store/apps/details?id=eu.chainfire.p...

4
StillBored 19 ago 0 replies      
Hello radar... Is it any surprise that MIMO technology, designed to work around problems caused by different signal return paths, can make a really nice 3D radar?

Google for "SDR passive radar" for some really cool projects.

5
redblacktree 1 ago 1 reply      
Is there anything on github that I can use with DD-WRT to play with any of these effects? Maybe visualize the people my router can see?
6
Cieplak 12 ago 0 replies      
This is a great business opportunity for Wi-Fi-cancelling devices. No doubt we'll see increased use of wired-only and line-of-sight networks.

Edit: s/WIFI/Wi-Fi/

7
cannonpr 2 ago 1 reply      
There are a lot of interesting things you can do with wifi signals; human bodies show up in them with a lot of accuracy, such as breathing rate detection and heart rate.

https://staticfloat.github.io/papers/WiBreathe_PerCom2015.pd...

8
krick 1 ago 1 reply      
This is pretty frightening all by itself, but I'm thinking: if this is possible with a router, shouldn't a simple smartphone be capable of the same, since it can be used as a hot-spot, and covers even more signal ranges? A device owned by everyone, everywhere, generally less secure and more carelessly used, with loads of proprietary software pretty much always installed? Does it mean the whole city can be almost realistically observable by someone, even when no video camera is there?
9
pmlnr 3 ago 2 replies      
Many years ago I've seen this topic, and it suddenly vanished for a decade.

Now it's here again, and I'm even more frightened.

10
redbeard0x0a 1 ago 1 reply      
Does anybody know if they are talking about the 2.4GHz or 5GHz wifi spectrum? I would imagine that the 2.4GHz spectrum would be the easiest to use for detection since it penetrates walls, etc. better.
11
nickysielicki 2 ago 0 replies      
People who are capable of writing side-channel attacks like this have an intuition for radio and signal processing that must be completely overwhelming during their day-to-day life.
12
libeclipse 2 ago 1 reply      
I feel this technique could have some substantial military applications.
13
guelo 1 ago 3 replies      
I don't get it. A much more accurate sensor for detecting human movement is a camera.
14
nyqstna 2 ago 0 replies      
So this concerns Wi-Fi, but why wouldn't it work with other terrestrial broadcasts?
15
cagey_vet 2 ago 0 replies      
if one is a lone 'researcher' good luck weaponizing this.
7
Some of the most important deep learning papers adeshpande3.github.io
120 points by adeshpande  5 ago   9 comments top 7
1
fizixer 1 ago 1 reply      
With this field advancing so fast, I guess if we could do something like this, that would be great:

Maintain a running list of:

- 3-5 most important papers in the last 3 months

- 3-5 in the last 1 year (not all in the 3 month list would make into this list)

- 3-5 in the last 5 years.

I guess it's difficult for a small number of people to rank the papers. Maybe a hackernews or reddit style upvote/downvote system can be used, with a list that essentially scrapes arxiv for papers.

2
vonnik 1 ago 0 replies      
We've compiled a fairly extensive list of deep learning papers here: http://deeplearning4j.org/deeplearningpapers

We also Tweet out new ones as they're published here: https://twitter.com/deeplearning4j

3
sja 2 ago 0 replies      
Good list! I think it's important to note that this article is (intentionally) focused on modern CNN architectures, and not "deep learning" in general.

I'd also add in the following "technique" articles: Geoff Hinton et al.'s dropout paper[0] and Ioffe and Szegedy's Batch Normalization paper[1]. I don't think there's been enough time for the dust to settle, but I'm excited about the possibilities Stochastic Depth[2] could offer, too.

[0]: http://arxiv.org/abs/1207.0580
[1]: http://arxiv.org/abs/1502.03167
[2]: http://arxiv.org/abs/1603.09382

4
epberry 2 ago 0 replies      
Glad to see R-CNN and its follow-on work on the list. We've been using R-CNN for a few weeks now and have seen great results on object detection and localization. A few papers this year have played around with substituting different convnets and different classification schemes and improving the network in various ways. I'm excited to see where this specific architecture goes in the next few years.
5
andrewtbham 1 ago 0 replies      
Here is the same list but including another important paper, Inception-v4, which beat MS resnets on top-5 error.

https://github.com/andrewt3000/MachineLearning/blob/master/c...

6
currywurst 2 ago 0 replies      
I really enjoyed this write up .. would be so great if research papers in general were commented on like this (what is interesting, what is the significance of the result etc)
7
haddr 2 ago 1 reply      
Only applied to image recognition.
8
Baidu Takes FPGA Approach to Accelerating SQL at Scale nextplatform.com
90 points by okket  4 ago   38 comments top 13
1
sixdimensional 1 ago 2 replies      
This is kind of amazing. I have often really wondered what would happen if you basically created a directed acyclic graph/dataflow for data processing (for example, how Apache Spark distributes processing/operations), and then accelerated the operations using physical implementations in FPGAs. After all, a SQL query, when optimized, is essentially a graph of operators that data flows through.

You do have to pass your data through the accelerator to get the processing... which potentially means huge volumes of data moving into this physical processing layer (probably can be done in parallel over a network at high speed) - I would assume this is why shared memory bandwidth was a problem.

This would provide some really interesting options though - imagine feeding data from two disparate databases (say, Oracle and SQL server) in a data flow into this thing - now you have accelerated cross database joins (as long as you can handle the bandwidth and processing on the way in).
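
To make the "query plan as a dataflow of operators" idea concrete, here is a minimal, hypothetical sketch in C of a pull-based (Volcano-style) pipeline, scan -> filter -> count, where each stage only pulls rows from the stage below it; an FPGA-accelerated version would replace software stages with hardware ones (a sketch of the general idea, not Baidu's actual design):

  #include <stdio.h>

  /* A pull-based ("Volcano"-style) operator: each node returns the next row
   * from its input, or 0 when exhausted. A SQL plan is a graph of these. */
  typedef struct Operator Operator;
  struct Operator {
      int (*next)(Operator *self, int *row_out);   /* 1 = produced a row, 0 = done */
      Operator *child;                              /* upstream operator, if any   */
      const int *table; int pos, len;               /* scan state   */
      int threshold;                                /* filter state */
  };

  /* Scan: emit rows from an in-memory "table" */
  static int scan_next(Operator *self, int *row_out) {
      if (self->pos >= self->len) return 0;
      *row_out = self->table[self->pos++];
      return 1;
  }

  /* Filter: pull from child, pass through rows matching the predicate */
  static int filter_next(Operator *self, int *row_out) {
      int row;
      while (self->child->next(self->child, &row))
          if (row > self->threshold) { *row_out = row; return 1; }
      return 0;
  }

  int main(void) {
      int data[] = {5, 42, 17, 99, 3, 64};
      Operator scan   = { scan_next,   NULL,  data, 0, 6, 0 };
      Operator filter = { filter_next, &scan, NULL, 0, 0, 20 };

      /* Aggregate (COUNT) at the top of the plan: drain the pipeline.
       * Equivalent to: SELECT COUNT(*) FROM data WHERE value > 20 */
      int row, count = 0;
      while (filter.next(&filter, &row)) count++;
      printf("count = %d\n", count);   /* prints: count = 3 */
      return 0;
  }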

There was this post before on HN previously lamenting the state of tools for working with FPGAs, and my related comment wondering if what Baidu has done here was possible:

- https://news.ycombinator.com/item?id=9408881

- https://news.ycombinator.com/item?id=9410160

2
smilliken 3 ago 0 replies      
For GPUs, not FPGAs, but PostgreSQL has PGStrom[1] for offloading large table scans. Still a work in progress.

[1] https://wiki.postgresql.org/wiki/PGStrom

3
trhway 1 ago 1 reply      
Such FPGAs seem to be on the edge of what is export controlled; recently even regular Intel CPUs got blocked for delivery to China:

http://www.pcworld.com/article/2908692/us-blocks-intel-from-...

and around a year ago some Russian guys got busted for selling FPGAs to Russia (and those FPGAs were less powerful):

http://kron4.com/2015/03/21/sf-business-owner-arrested-for-a...

4
koolba 4 ago 2 replies      
This looks pretty cool. Any idea of which database they're doing this on?

I don't see it listed out in the article anywhere. I would guess it's a FOSS database (i.e. either Postgres or MySQL) but also wouldn't be surprised if at their scale they've created something entirely in house.

5
DigitalJack 54 ago 0 replies      
I'm an FPGA/ASIC engineer that does some software as a hobby (mostly clojure, some python. C and Pascal if you go back far enough).

Although I focus more on verification these days, I've done ASIC design for about 16 years.

I'd be very interested in working with anyone on figuring out if we can make something better by leveraging hardware.

6
tkyjonathan 1 ago 1 reply      
Finally! In 2011, when I was working on optimizing 100GB databases on dedicated hardware for small businesses, everyone in the database world was talking about SQL chips and fast PCI-E storage. Fast forward to today, with cloud and virtualisation, and now I'm working on "big data" 400MB RDS instances. It's time we go "retro" and #gophysical for some things.
7
tmostak 1 ago 0 replies      
Very cool work, but since it's easy to get memory-bandwidth bound with SQL, I wonder if it would make more sense to use GPUs with 10X the bandwidth. I know FPGAs will be getting HBM as well so this might help.

Disclaimer: I work at MapD, a GPU database company. (http://www.mapd.com)

8
astrodust 3 ago 0 replies      
It's no wonder Intel made a big FPGA acquisition lately. This sort of acceleration has enormous upside potential, especially if this sort of thing comes part and parcel with your CPU.
9
CodeSheikh 2 ago 0 replies      
Online stock brokers and finance companies have been using FPGAs for quite some time now because of latency as a microsecond can literally cost them millions. It is good to see these beasts making their way into crunching "Big Data" in non-finance domain.
10
srigan 2 ago 0 replies      
isn't this already implemented in Datawarehouse appliances like Netezza (now part of IBM), which targeted solving the problem of analytical queries on huge sets of data??
11
kiril-me 4 ago 4 replies      
They are crazy! Very interesting case. How do they support such a DB? As far as I know, FPGAs have limits on updates.

What tools do they use to create the relational algebra?

12
malkia 3 ago 1 reply      
Naive question, but wouldn't this work only with ECC-enabled graphics cards like NVidia Quadros? Possibly that's the only reasonable choice for servers, but the price is much higher.
13
srigan 2 ago 1 reply      
isn't this already implemented in datawarehouse appliances like Netezza (now part of IBM)?
9
Chrome Dinosaur and Common Lisp sdf.org
138 points by VitoVan  5 ago   32 comments top 9
1
FroshKiller 3 ago 0 replies      
If you like this kind of content, consider joining the SDF. It's a free public access computing community with lots of artists, hackers, and grognards of all stripes. Your support is appreciated: https://sdf.org/
2
gkya 3 ago 0 replies      
This is really cool. So is http://sdf.org. You can get a lifetime Unix (NetBSD) shell account from them for as little as $5, with mail, http/gopher hosting, etc. included, and more.
3
Sidnicious 3 ago 1 reply      
To see this error page without disconnecting from the internet, visit chrome://network-error/-106.
4
justinhj 3 ago 1 reply      
This is one of the most fun and informative programming posts I've seen on Hacker News recently. I also killed my wifi immediately after reading the article to play the game.
5
paulrpotts 4 ago 2 replies      
My god, how have I seen that error page a million times and never realized it was actually a video game?
6
PuercoPop 2 ago 1 reply      
Regarding the code, it's ok although there are some parts that are unnecessarily messy. Here[0], and in other parts, you use labels in the middle of the function. It is best to do that at the outer level of the form. I like the rlabels of misc-extensions[1] when using local functions but I understand most people would prefer not to add 'util' dependencies. Another thing in that function: the (let ((data (or ...)))) is unnecessary. You can place the form in the default value of the keyword argument, so it would end up looking like:

 (defun x-find-color (rgba &key (x 0) (y 0)
                                (width default-width) (height default-height)
                                (test #'equal)
                                (snap-data (x-snapshot :x x :y y
                                                       :width width :height height)))
   ...)
Also, if not using asdf, the quickload form should be wrapped in an eval-when. It will fail if I try to compile the file (C-c C-k) in a fresh session.

These are nitpicks; the code is understandable and easy to read. Nice writeup and cool idea!

[0]: https://github.com/VitoVan/cl-dino/blob/master/cl-autogui.li...
[1]: https://github.com/slburson/misc-extensions

7
behnamoh 3 ago 8 replies      
Apart from Clojure (which is not LISP, really), I haven't seen end-user applications written in LISP dialects (I don't think emacs is end-user software). Scheme (and its PLT dialect Racket) was in academia for so long that it literally got "friend-zoned" and has little (read "no") use in the industry.

Then again, I expected to see more Common LISP around. Paul Graham really put it right, but I guess any leverage/advantage LISP had over other languages waned in the 90's (maybe Viaweb was one of the last projects done in LISP, and even that was later translated to other langs...)

I don't think s-expressions are _that_ problematic. The main issue with LISP's nowadays is lack of _momentum_. Java gained its momentum by Sun, after that it just keeps rolling in the industry and apparently, there's no stopping that. Python also has its momentum from the academia, what with all the great libraries and all. But one would expect to see some _major_ projects done entirely in LISP (and not a Trojan Horse LISP like Clojure), after that, it's only a matter of time for developers to become LISPers.

Of course, one could also claim that LISP actually _did_ have like half a century to prove itself, or gain momentum. After all, Golang and ruby are relatively new, but they eat LISP's lunch at launch of new apps.

I really appreciate things like this project. It's not enough, but what is?

8
junke 3 ago 1 reply      
Quite inspiring. That's how you beat an endless game.
9
Iispm 3 ago 0 replies      
How have I never heard of this "common" lisp?
10
Linux is 25 today groups.google.com
48 points by SpaceInvader  46 ago   7 comments top 7
1
astrodust 25 ago 0 replies      
Being able to install a UNIX-type operating system on my personal computer, one that was free, was a life-changing experience for me.

At the time there were prohibitively expensive UNIX operating systems on the market, many of which required even more expensive proprietary hardware to run on, and then you'd have to fork out even more money for a compiler.

An enormous thanks to Linus and the GNU team for changing all of that and making this accessible to pretty much anyone crazy enough to try and install it on their computer.

2
stonogo 23 ago 0 replies      
Ah, google groups, the only reliable way to download more than a megabyte of data to render 1kb of text.

Somewhere, I still have a copy of this message, from when it appeared in my newsreader. It would be an interesting exercise in archaeology to see if modern linux has the tools to mount that old filesystem... guess the rest of today's productivity will have to take a back seat.

3
tokenizerrr 1 ago 0 replies      
> I don't want to be on the machine when someone is spawning >64 processes, though.

Ha.

4
eloy 19 ago 0 replies      
NO! It is GNU/Lin.. oh wait...

Congrats Torvalds, thanks for changing the world!

5
elliotec 17 ago 0 replies      
I love all the parentheses (including nested parentheses!) in that introduction post from Linus, and how he's careful to make it clear that it's just a hobby and not professional.
6
jff 19 ago 0 replies      
> Most of these seem possible (the tty structure already has stubs for window size), except maybe for the user-mode filesystems

And thus Linus dug a hole out of which Linux has only recently begun to clamber.

7
TheLarch 23 ago 0 replies      
I'm not usually nostalgic but my circa 1994 Slackware CD distro is special.

Incidentally I can't overemphasize how far basic sysadmin skills will get you.

11
Detecting voting rings using HyperLogLog counters (2013) opensourceconnections.com
18 points by bemmu  2 ago   1 comment top
1
placeybordeaux 56 ago 0 replies      
Nice idea, but it doesn't matter if this doesn't work in practice.
12
A few HTML tips mozilla.org
416 points by nachtigall  12 ago   155 comments top 16
1
adregan 9 ago 4 replies      
One thing I remembered while reading the section on telephone inputswhat happened to all the HTML5 date and time input types?

http://caniuse.com/#search=Date%20and%20time%20input%20types

All of the other cool new input types appear to be on the way in the modern browsers (http://caniuse.com/#search=input%20type), but Safari and Firefox don't appear to be budging at all.

I would prefer to use the native date pickers across devices, but I feel like I have been waiting on those elements to land for years now. Did I miss something?

2
jonespen 2 ago 0 replies      
A note regarding alt attributes on img: they should almost always be empty. Please don't just fill it with the related title or heading, as it just creates a lot of unnecessary text for screen readers. More: http://www.456bereastreet.com/archive/200811/writing_good_al...
3
rimantas 2 ago 0 replies      
Anyone remember Dan Cederholm (http://www.simplebits.com/) and his excellent books "Web Standards Solutions" and "Bulletproof Web Design"? He even had a series of exercises like this on his blog. Oh, the gone days of the web, which was a web and not some misfit wannabe app engine :(
4
mattmanser 10 ago 7 replies      
Is it common these days to nest inputs in labels? As a comparison, bootstrap doesn't, http://getbootstrap.com/css/#forms.

To me that's conceptually as well as semantically broken; the two are separate things, so why is the input inside the label? He's saying make everything semantic, and then semantically breaking forms.

Also, if you have any inputs that don't have a label for whatever reason, you either have to wrap them in one anyway to get your CSS to work, or make sure you never rely on the input being nested in the label in your CSS.

And having them apart also makes it easier to design responsively if you want a side-by-side design for desktop vs a stacked design for mobile.

Saves having to add a for tag I guess.

I guess a lot of that article is opinion though, so you're always going to get people objecting to one point or another, though a lot of it seems sensible advice.

5
wtbob 10 ago 5 replies      
Web Fonts for images are a true anti-pattern. It's actually almost funny how many pages are full of unrendered icon-font glyphs and so forth, because they assume that folks are using a graphical browser running JavaScript.
6
ravenstine 6 ago 2 replies      
Funny how they answered a question about paragraph tags that I was wondering yesterday. Most answers to the "p vs br" question seem to be "do whatever you want", but the recommendation in the article seems most correct to me, since p tags are for the parser (in case it needs to know what the paragraphs of the body content are) while breaks are presentational.
7
msl09 9 ago 5 replies      
I can see the value of almost all of those recommendations but I don't understand the value of using the correct header sequence or tagging subheaders. I don't see anything in the linked W3C article that explains why I should use one idiom over another.
8
sirtastic 6 ago 3 replies      
Question: Can anyone comment on how important HTML taxonomy is in terms of SEO (disregarding accessibility)? Will the use of <header> and <article> make any difference in how a page is indexed and ranked, directly or indirectly?

One indirect way I can see SEO benefiting is via sharing. Facebook crawling will have an easy time grabbing the <h1>Title</h1> along with whatever comes first in <p>for an excerpt</p>.

Are there any no-brainer reasons or supporting evidence that proper taxonomy helps rankings?

9
pjungwir 4 ago 2 replies      
I'm curious if there is any good solution to use sprite sheets with background-image, possibly with a hover effect, and still get the accessibility benefits of alt tags? For instance:

 <a class="fb sprite" href="..."> </a>

 .sprite {
   background-image: url("/sprites.png");
   background-repeat: no-repeat;
 }
 .sprite.fb {
   background-position: -4px -4px;
   width: 42px;
   height: 42px;
 }
 .sprite.fb:hover {
   background-position: -4px -54px;
 }
What would you do to make that accessible? I did some Googling about this a few weeks ago, and it seemed to be a topic of "ongoing research." None of the solutions I could find were very appealing. So, any suggestions?

10
perlwle 3 ago 0 replies      
I wish Safari supported a currency keypad with decimals in HTML5. It's working for Android but not for iOS ;-(
11
nailer 6 ago 1 reply      
Slightly off-topic, but:

> Couldn't be much more from the hearth<br>

Couldn't be much more from the heart.

12
donjh 4 ago 0 replies      
I found the section on SVG sprites particularly interesting. Here's a Webpack plugin to generate SVG spritesheets from individual assets: https://github.com/mrsum/webpack-svgstore-plugin.
13
lllorddino 7 ago 3 replies      
That feeling when the first paragraph of an article points out something you've been doing wrong all along :D. I use <br> in between paragraphs and it looks horrible, what should I be using instead?
14
intrasight 6 ago 2 replies      
> There are some CSS limitations though: when using SVG this way, with <use> linking to a <symbol>, the image gets injected in Shadow DOM and we lose some CSS capabilities.

Which is why SVG "symbol" and "use" are best avoided. Your client will inevitably ask you to style an icon and you won't be able to. Avoid the shadow DOM and you'll have full CSS capabilities.

15
DanielBMarkham 9 ago 8 replies      
Can anybody tell me how legit the placeholders recommendation is? I understand it's not accessible, but on a limited viewport, having the text tell the user what you want seems like a no-brainer. And I see it a lot. Isn't there some other way to provide accessibility?
16
56k 9 ago 3 replies      
Everyone knows these things. What's the big deal?
13
Tesla Motors cut its employee handbook to 4 pages tehabo.co
19 points by tehabo  45 ago   1 comment top
1
andreasklinger 7 ago 0 replies      
Couldn't find a link to the handbook in the post - does anyone have the link to share?
17
Taking stock of the new French-German encryption proposal politico.com
33 points by taylorbuley  3 ago   11 comments top 3
1
spdy 1 ago 0 replies      
Elections are coming and everyone has to play tough. None of this is targeted at terrorists, as it does not even remotely make sense; it's all about controlling the population.

Let's see when they propose to ban math.

Mandatory John Oliver on Encryption :) https://www.youtube.com/watch?v=zsjZ2r9Ygzw

2
Freak_NL 1 ago 1 reply      
Ignoring the ample ethical and political objections to the whole notion of government (which government, incidentally?) backdoors, how does one even start to implement this kind of legislation? You might be able to coerce Facebook or Google to cooperate, but you can't exactly outlaw any alternative that is out of reach of the EU, or any jurisdiction for that matter. These tools exist now, and won't just disappear. Any terrorist worth his salt will use whatever works, regardless of legality.
3
wyager 1 ago 3 replies      
18
Long-Range (200m) BLE Beacons with 1Mb EEPROM estimote.com
12 points by jimiasty  1 ago   2 comments top 2
1
Animats 17 ago 0 replies      
These should be great for attackers - a long-range Bluetooth device with years of battery life. Reprogram these for the MouseJack attack.[1] Get a bag of 50 and spread them around. Profit! If you want to attack a specific company, spread them around that company's building and nearby restaurants. If you repaint them, who's going to notice an extra rock in the landscaping?

If the "remote updating for fleet management" can be attacked, you don't even need to buy them; you can take over other people's.

[1] http://www.computerworld.com/article/3037377/security/mousej...

2
jimiasty 56 ago 0 replies      
Hi HN, this is Jakub, founder of Estimote, Inc. (YC S13).

We just released a new revision of our Location Beacons. We implemented the new, low-power Nordic nRF52 chip and extended the range to 200 metres (+10dB).

Beacons can simultaneously advertise both iBeacon and Eddystone packets as well as telemetry & sensor data, plus they have a built-in GPIO slot.

There is also 1Mb EEPROM, so you can read/write data directly in the beacon.

You can read more on our blog: http://blog.estimote.com/post/149362004575/updated-location-...

I will be more than happy to answer any questions here.

19
MyLG Network Diagnostic Tool github.com
19 points by snehesht  2 ago   2 comments top
1
samat 56 ago 1 reply      
hping from a remote node would be hilarious, but it looks like it silently fails :(
21
Sharing Research about Adverse Childhood Experiences nytimes.com
15 points by Jasamba  2 ago   1 comment top
1
panic 8 ago 0 replies      
Here's the infographic version of the research that's linked in the article: http://vetoviolence.cdc.gov/apps/phl/resource_center_infogra.... Even one adverse experience increases the chance of drug use, depression, alcoholism and suicide attempts by over 50%. Almost two thirds of participants (drawn from a cross-section of middle-class Americans) reported at least one such experience.
22
Reactors: Foundational framework for distributed computing reactors.io
32 points by acjohnson55  4 ago   14 comments top 3
1
paulddraper 2 ago 1 reply      
Haven't used this, but hopefully it is better than the nightmare that is Akka (more reasons than I want to list here...maybe a blog post).
2
vegabook 2 ago 4 replies      
jvm jvm jvm. Flink. Storm. Spark. Beam. projectreactor. Reactors. Samza. How spoiled are Java/Scala programmers but is there anybody doing anything similar in distributed computing frameworks that targets something other than the Java Virtual Machine? Go? Erlang? C? Ocaml? Anyone? Why always the JVM? Sure I understand that business loves java, but every single one?
3
carapace 2 ago 0 replies      
(Another site useless without JS enabled. Boo.)
23
How a Case of Pure Alexia Confirmed the Role of Brain's Visual Word Form Area scientificamerican.com
30 points by jessaustin  4 ago   3 comments top 2
1
noobermin 36 ago 0 replies      
Interesting science for a non-expert. An even better story of the power of the human spirit to overcome obstacles like a damaged visual word form area.
2
trhway 2 ago 1 reply      
>This made it hard to believe in a brain structure expressly devoted to reading.

Recently on NPR it was mentioned that, just as reading changed the brain very recently and very quickly, we can soon start to see/expect changes related to interaction with computing devices, and those changes can be expected to be at least as profound, if not more so.

>A decade before Mike's stroke, Turkeltaub had shown that a child's brain shifts where and how it processes text as he or she learns to read.

I still remember the time, before and during the first couple of years of elementary school, when I was reading whole paragraphs at once, visually, silently, i.e. without in-head pronunciation.

25
Our picks of promising companies from Y Combinator S16 Demo Day 2 techcrunch.com
9 points by smb06  1 ago   5 comments top 4
1
etrautmann 29 ago 0 replies      
Sort of a dreary view of the future - companies that provide anti-drone defense, gunshot localization as a service, and big-brotherish monitoring of worker efficiency (in sales).
2
DelaneyM 3 ago 0 replies      
Does Techcrunch have any history of successfully picking winners from demo days?

This just feels like an excuse to make an ad-rich slideshow.

3
outworlder 4 ago 0 replies      
Is it just me, or do some of these resemble more "traditional" companies? In the startup == growth sense.
4
danvoell 20 ago 1 reply      
Does this type of slide show do anything besides give TechCrunch more clicks/page views? Do people like this style? I do not.
26
Uber launches flat fares in San Francisco through subscription uber.com
70 points by nikunjk  2 ago   85 comments top 15
1
nicwolff 28 ago 3 replies      
Uber has taken $12B of capital, broken the regulatory regimes and opened up the drive-for-hire markets, marketed the benefits of drive-for-hire and ride-sharing to drivers and consumers, and helped drivers finance vehicles, but now there seems to be no barrier to entry for any competing service except writing an app. I'm happy to let VCs pay for my comfortable daily ride to work, but I don't see how this is sustainable...
2
mmanfrin 2 ago 6 replies      
I used Pool last night in the city. Took me from the Fidi to Inner Richmond. $2.37. It was almost cheaper than Muni. I can't imagine the drivers are making much at all.
3
rayiner 25 ago 2 replies      
I pay $3 to commute to my office in Uber Pool or Lyft--cheaper than Metro. Usually no other rider. I don't know how this is sustainable.
4
bpodgursky 2 ago 3 replies      
> Flat fares are designed to cover full fares for almost every trip, with a $20 limit for uberPOOL and $25 limit for uberX. You'll be responsible for paying portions of the fare that exceed those maximums on each trip.

The pricing felt convoluted even before I saw this part. I can't see this being worth the effort, if you can't even guarantee the $7 prices.
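(One reading of the quoted rule, just to make the arithmetic concrete: the rider seems to pay the flat fare plus any portion of the metered fare above the cap. The $2/$7 flat fares and $20/$25 caps used below are taken from the comments in this thread, not from Uber's documentation.)

    # Sketch of how the quoted cap rule appears to read; fare and cap
    # numbers come from the comments here, not from Uber.
    def rider_cost(metered_fare, flat_fare, cap):
        overage = max(0.0, metered_fare - cap)
        return flat_fare + overage

    print(rider_cost(14.00, flat_fare=2.00, cap=20.00))  # typical uberPOOL trip -> 2.0
    print(rider_cost(31.50, flat_fare=7.00, cap=25.00))  # long uberX trip -> 13.5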

5
zaidf 2 ago 3 replies      
The copy on this page is super unclear. I just realized (after signing up) that the subscription is to get the special $2/ride pricing, not the actual ride. Put another way: after you have the $30/month subscription, you can get UberPools for $2/ride.
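(Back-of-the-envelope on when the $30 pass pays for itself; the roughly $7 unsubscribed uberPOOL fare below is a guess, not a published number.)

    # Rough break-even: rides per month before the pass beats paying the
    # ordinary fare. 'usual_fare' is an assumption, not an Uber figure.
    import math

    def break_even_rides(pass_price=30.00, pass_fare=2.00, usual_fare=7.00):
        savings_per_ride = usual_fare - pass_fare
        return math.ceil(pass_price / savings_per_ride)

    print(break_even_rides())  # -> 6 rides/month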
6
foota 4 ago 0 replies      
This is being offered in Seattle as well.
7
Jake232 1 ago 2 replies      
They launched this yesterday in San Diego also. I signed up.

https://www.uber.com/info/plus/sandiego/

8
manojlds 1 ago 0 replies      
Interestingly, Ola in India already did something like this very recently.

https://www.olacabs.com/olaSelect

9
adamseabrook 33 ago 0 replies      
They also have flat fares in Chicago for $3.12 with uberPool, Monday to Friday 6am to 10pm. All trips must begin and end south of Irving Park, north of 71st Street, and east of Western Avenue. https://www.uber.com/info/chi-uberpool-312/
10
huac 2 ago 0 replies      
Makes sense as a way to lock riders into Uber rather than Lyft. Wonder how drivers will react.
11
fludlight 2 ago 0 replies      
How does this affect driver compensation?
12
davidf18 1 ago 0 replies      
Hey Uber, it is very nice to offer this in SF, SD, Boston, but what about NYC? :-(
13
alaskamiller 51 ago 0 replies      
Back in September of 2009 JetBlue offered an all you can jet pass because September is traditionally the worst slump of the year. Summer's over, people start schools and jobs, travel drops overall.

This is the same trick to get you out there more. It's also inevitably the end game for Uber. Deliver value for less than your monthly car payment.

Someone is crunching the numbers right now, figuring out exactly how much LTV they can extract from you in exchange for your handing $500 over to Uber every month.

Then when Uber wins that game by the end of next year many other CPG and big box retailers will follow suit, riding along the subscription box model.

By 2020 it will be normal for you to receive income from the government just so you can hand it over every month to lifestyle providers of all stripes, to live the life you've personalized.

Latest fashion every month from WalMart! Semi-prepped meals every day from Safeway! 30 day supply of customized toothpaste from Target! Just insert token.

14
chrisper 1 ago 3 replies      
I do not really like Uber. The company does shady stuff quite often, but their drivers also seem to be bad. They do not care at all about traffic laws: illegal U-turns, stopping wherever they want, etc.

Surely I can't be the only one who has experienced this?

15
cwilkes 1 ago 0 replies      
Clicking on the link launched the Uber app for me. I then rage-quit it before reading anything more.

This also happens in "not so" private mode on the iPhone. Apple should really fix that, as it leaks information.

27
Artist Peter Doig victorious as court agrees '$10m' painting is not his work theguardian.com
59 points by lnguyen  3 ago   46 comments top 12
1
biztos 1 ago 2 replies      
First, let me say that this looks like mistaken identity or worse; and that I myself am an artist, and like most part-time artists I'm very sympathetic to the interests of full-time artists.

However, this sentiment is unrealistically simple:

"I feel a living artist should be the one who gets to say yea or nay and not be taken to task and forced to go back 40 years in time."

The way the art market works, attribution (and provenance) is a big part of valuation. If you're a famous artist and I bought one of your paintings without having you sign a bunch of paperwork -- and who does that with young artists and amateur collectors? -- then you have the power to give or withhold a lot of value.

I think the artist was in the right here. But you have to consider what his statement would mean if he had in fact painted that picture. By denying its authenticity he would be trying to destroy its value. You can choose what you acknowledge, but as an artist you don't get to choose your own history retroactively. Like anybody else you can tell your own story, but yours might not be the only version.

Of course, this is why real collectors document provenance, sometimes extensively; and why a lot of artists will by default include a "certificate of authenticity" of some kind with every sale.

2
yubiox 1 ago 1 reply      
What if he really had painted it? What duty does he have to authenticate it?
3
jeandejean 2 ago 2 replies      
And when will a court punish people for wrongly suing an artist whose denial keeps them from making money off counterfeits? Even worse is the greediness of these art dealers, who just want to make money whatever the cost to artists. The art market is sick.
4
nathan_f77 1 ago 2 replies      
I wonder if the painting will actually be worth something now that it's been all over the media.
5
kevinastone 2 ago 1 reply      
The facts of the case (as summarized in the article) make this seem like an obvious case of mistaken identity. Not sure what's noteworthy here.
6
jaynos 2 ago 2 replies      
I wonder what the auction price will be now that the notoriety of the trial is attached to the work.
7
pfarnsworth 2 ago 1 reply      
The idea that this had to be argued in a court of law is absolutely ridiculous.
8
noobermin 1 ago 0 replies      
Could such a case be made where the work is a piece of code? I'm not sure why someone would sue someone for claiming they are not the author. Maybe to force a coder to take responsibility for bad code?

At times like this I am happy to have a rare (non-English) last name.

9
zentiggr 2 ago 0 replies      
I can't imagine what grounds the plaintiffs would have to appeal... if he didn't start painting until after the claimed meeting, Q.E.D., suit dismissed, money grubbing plaintiffs pay the defense's costs.

(I know it doesn't work that way, Doig is out of pocket to defend himself and that's the shitty way it turns out.)

10
Ericson2314 1 ago 0 replies      
I kinda want that Pete Doige painting now...
11
samirillian 1 ago 0 replies      
Art's a lot like real estate: an investment with a view. Sadly, the integrity of the view cannot be tolerated to outweigh the integrity of the market.

The really strange thing to me was how little resemblance the painting bore to Doig's work. The former looks like a bad Dali.

12
JustSomeNobody 1 ago 0 replies      
Why these things have to take 3 years is sad. Nobody should have to go through 3 years of this crap.
28
Credo: Static code analysis for Elixir with a focus on consistency and teaching github.com
101 points by rrrene  7 ago   4 comments top 4
1
corysama 55 ago 0 replies      
I don't know if it's coincidence, but over in https://www.reddit.com/r/elixir/ today, someone happened to post

"Automated Elixir code review with Github, Credo and Travis CI" https://medium.com/@fazibear/automated-elixir-code-review-wi...

2
killercup 6 ago 0 replies      
Heard a talk about this at the last Elixir meetup in Cologne, looks pretty amazing especially because of the focus on great explanations.

(Should copy more lints from https://github.com/Manishearth/rust-clippy though! ;))

3
sotojuan 5 ago 0 replies      
I just ran this on some old code and it told me a function was too complex (it is!) and could use refactoring; that's some smart linting :-)
4
svetob 2 ago 0 replies      
We have been using this as a build step for our services; it's possible to configure exactly what it will fail you on. It has been very helpful: not only has it helped us ensure good code, it has taught us some good Elixir conventions as well!
30
IBM's 24-core Power9 chip nextplatform.com
143 points by ajdlinux  10 ago   119 comments top 15
1
renox 8 ago 6 replies      
The article says the Power9 chip will have "hardware assisted garbage collection" and that's all; that's a bit short for such an important topic. Does anyone have more information about it?
2
apaprocki 8 ago 3 replies      
ISA 3.0 adds a new instruction 'darn' -- Deliver a Random Number. That's a pretty good asm mnemonic :) I wonder if anyone has dug into the details of how that works yet?
3
kbenson 6 ago 0 replies      
So, this is the chip that Google, Facebook and Rackspace used in their design for the Open Compute platform[1][2]. Maybe the entry-level price will drop when companies like those are buying them (or their components) at scale.

1: http://www.nextplatform.com/2016/04/06/inside-future-google-...

2: http://www.pcworld.com/article/3053092/ibms-power-chips-hit-...

4
jalfresi 8 ago 4 replies      
I'm quite naive regarding Power processors; does anyone know where I can see any motherboards with prices for power processors, or even if any of the BSDs run on them? It seems like it would be quite a nice platform/OS combination that side-steps the traditional Apple/OSX vs Intel/Linux workstation options
5
theandrewbailey 8 ago 5 replies      
Am I the only one who thinks that the Power9 die shot looks pretty? Reminds me of a stained glass window.

http://www.nextplatform.com/wp-content/uploads/2016/08/ibm-h...

6
redtuesday 10 ago 3 replies      
I don't know why, but I've always had a thing for Power CPUs. Really curious how Power9 compares to Power8 and Intel Broadwell-EP/EX when it comes to power efficiency (and of course performance-wise).
7
walki 7 ago 5 replies      
When POWER8 came out in 2014 I ran a couple of numerical benchmarks against Intel Xeon CPUs. In my benchmarks the POWER8 CPUs were slightly faster than the Intel Xeon CPUs when the data fit into the CPU's cache; so far so good (the speed advantage of the POWER8 CPU can easily be explained by its much higher clocking). But once I started running heavy benchmarks involving gigabytes of data, the POWER8 CPUs were at least twice as slow. Now 3 years later POWER9 CPUs will come out that are about 1.5 times faster, and in my opinion this is not enough to compete against Intel. Why would anybody want to get locked into a rare and more expensive CPU architecture if there is no speed advantage?
8
rwmj 9 ago 1 reply      
Now if only they'd make development boards available for less than ~$4000.
9
EdSharkey 2 ago 1 reply      
Do these IBM systems have anything like Intel ME that might be used by Big Bro for remote, covert rooting or subversion of the server? If yes on the standard model, can we order a version of these systems that lacks that feature?

I'd love to see some market forces nudging Intel to offer CPUs and motherboards without ME.

10
Unklejoe 9 ago 3 replies      
I'm not too familiar with the PowerPC world, so maybe someone could help me out. I know I should just search, but I'm feeling lazy.

Is this a new ISA or just the latest revision in their existing ISA?

Also, what's the relationship with Freescale (NXP) here, who sell pretty decent "PowerPC" chips for server-ish platforms, such as the T2080?

Do they license the ISA from IBM sort of like how ARM works?

11
IgorPartola 8 ago 6 replies      
So realistically, does processor architecture matter today from the customer's point of view? I mean performance and all that matters, but presumably that will be offset by market forces. But for example I rent a couple of ARM servers from Scaleway as well as a whole bunch of x86-64 servers from various providers, and they can all do the same stuff. So why is it that x86 is the chip data centers prefer when they could have a variety of chips and let the customers and the market choose?
12
crudbug 8 ago 0 replies      
Interesting: IBM wants to become the ARM of server computing - "Open to IP changes"
13
crudbug 5 ago 0 replies      
24 cores seems odd here; wouldn't 32 cores be a more symmetric design? What is the main reason for this? Cost?
14
reacharavindh 6 ago 1 reply      
Why wouldn't IBM just have a simple product page for such an innovation?

Like an Apple.com/iphone page for Power systems :-)

Searching for Power9 on IBM.com yields practically nothing.

http://www.ibm.com/search/esas/search?q=power9&v=17&sn=23&us...

Perhaps the website is all maintained by the folks busy in "the cloud".

15
mtgx 9 ago 2 replies      
Between Power9, AMD's Zen-based Naples, and Qualcomm's ARM-based Centriq, things are looking pretty interesting for the future of servers.
       cached 24 August 2016 22:02:02 GMT