hacker news with inline top comments - 18 Jul 2017
1
Sketching mountains and coastlines to guide procedural terrain generation redblobgames.com
56 points by dougb5  2 hours ago   3 comments top 2
1
Flux159 1 hour ago 1 reply      
I see blog posts from redblobgames posted sometimes and I really like how in-depth the explanations are. I remember reading about 2d height maps and map generation for a side project I was working on a while back and the explanations he gave for why he was using a particular algorithm were fantastic (in addition to the interactive demos):

http://www.redblobgames.com/maps/terrain-from-noise/

http://www-cs-students.stanford.edu/~amitp/game-programming/...
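
(For context, the heart of that terrain-from-noise article is just a few lines. Here is a rough Python paraphrase; the crude value-noise stand-in and all the constants are illustrative, not the article's actual code:)

    import random

    random.seed(42)
    GRID = [[random.random() for _ in range(256)] for _ in range(256)]

    def value_noise(x, y):
        # Bilinearly interpolate random lattice values -- a crude stand-in
        # for Perlin/simplex noise, just to show the structure.
        x0, y0 = int(x) % 255, int(y) % 255
        fx, fy = x - int(x), y - int(y)
        def lerp(a, b, t): return a + (b - a) * t
        top = lerp(GRID[y0][x0], GRID[y0][x0 + 1], fx)
        bot = lerp(GRID[y0 + 1][x0], GRID[y0 + 1][x0 + 1], fx)
        return lerp(top, bot, fy)

    def elevation(nx, ny):
        # Sum octaves: each octave doubles the frequency and halves the
        # amplitude; raising the sum to a power flattens the valleys.
        e = (1.0 * value_noise(1 * nx, 1 * ny) +
             0.5 * value_noise(2 * nx, 2 * ny) +
             0.25 * value_noise(4 * nx, 4 * ny)) / 1.75
        return e ** 2.2

    heightmap = [[elevation(4 * x / 64, 4 * y / 64) for x in range(64)]
                 for y in range(64)]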

2
theandrewbailey 30 minutes ago 0 replies      
Unless it's modeling islands, I find most terrain generators unnatural, at least from above. I wonder how feasible (and realistic looking) it would be to build a terrain generator using a bunch of fluid simulation to model plate tectonics, weather, and erosion.
2
The Limitations of Deep Learning keras.io
556 points by olivercameron  11 hours ago   190 comments top 34
1
therajiv 10 hours ago 12 replies      
As someone primarily interested in interpretation of deep models, I strongly resonate with this warning against anthropomorphization of neural networks. Deep learning isn't special; deep models tend to be more accurate than other methods, but fundamentally they aren't much closer to working like the human brain than e.g. gradient boosting models.

I think a lot of the issue stems from layman explanations of neural networks. Pretty much every time DL is covered by media, there has to be some contrived comparison to human brains; these descriptions frequently extend to DL tutorials as well. It's important for that idea to be dispelled when people actually start applying deep models. The model's intuition doesn't work like a human's, and that can often lead to unsatisfying conclusions (e.g. the panda --> gibbon example that Francois presents).
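
(The panda -> gibbon example referenced here is generated with the fast gradient sign method of Goodfellow et al. A minimal PyTorch sketch, assuming a trained classifier `model`, an input batch `x` scaled to [0, 1], and an integer `label` tensor; the epsilon value is illustrative:)

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps=0.007):
        # Perturb the input in the direction that most increases the loss:
        # x_adv = x + eps * sign(grad_x loss). Imperceptible to a human,
        # but it can flip the model's prediction (panda -> gibbon).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0, 1).detach()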

Unrelatedly, if people were more cautious about anthropomorphization, we'd probably have to deal a lot less with the irresponsible AI fearmongering that seems to dominate public opinion of the field. (I'm not trying to undermine the danger of AI models here, I just take issue with how most of the populace views the field.)

2
toisanji 10 hours ago 4 replies      
There is some good information in there and I agree with the limitations he states, but his conclusion is completely made up.

"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction."

There are tens of thousands of scientists and researchers who are studying the brain at every level, and we are making tiny dents in understanding it. We have no idea what the key ingredient is, nor whether it is one or many ingredients that will take us to the next level. Look at deep learning: we have had the techniques for it since the '70s, yet it is only now that we can start to exploit it. Some people think the next thing is the connectome, time, forgetting neurons, oscillations, number counting, embodied cognition, emotions, etc. No one really knows, and it is very hard to test; the only "smart beings" we know of are ourselves, and we can't really do experiments on humans for legal and ethical reasons. Computer scientists like many of us here like to theorize on how AI could work, but very little of it is tested out. I wish we had a faster way to test out more competing theories and models.

3
Houshalter 31 minutes ago 0 replies      
This article is a bit misleading. I believe NNs are a lot like the human brain. But just the lowest level of our brain. What psychologists might call "procedural knowledge".

Example: learning to ride a bike. You have no idea how you do it. You can't explain it in words. It requires tons of trial and error. You can give a bike to a physicist who has a perfect, deep understanding of the laws of physics, and they won't be any better at riding than a kid.

And after you learn to ride, change the bike. Take one where the handlebar is inverted, so turning it right turns the wheel left. No matter how good you are at riding a normal bike, no matter how easy it seems it should be, it's very hard. It requires relearning how to ride basically from scratch. And when you are done, you will even have trouble going back to a normal bike. This sounds familiar to the problems of deep reinforcement learning, right?

If you use only the parts of the brain you use to ride a bike, would you be able to do any of the tasks described in the article? E.g. learn to guide spacecraft trajectories with little training, through purely analog controls and muscle memory? Can you even sort a list in your head without the use of pencil and paper?

Similarly, recognizing a toothbrush as a baseball bat isn't as bizarre as you think. Most NNs get one pass over an image. Imagine you were flashed that image for just a millisecond and given no time to process it. No time to even scan it with your eyes! Are you certain you wouldn't make any mistakes?

But we can augment NNs with attention, with feedback to lower layers from higher layers, and other tricks that might make them more like human vision. It's just very expensive.

And that's another limitation. Our largest networks are incredibly tiny compared to the human brain. It's amazing they can do anything at all. It's unrealistic to expect them to be flawless.

4
siliconc0w 8 hours ago 0 replies      
A neat technique to help 'explain' models is LIME: https://www.oreilly.com/learning/introduction-to-local-inter...

There is a video here https://www.youtube.com/watch?v=hUnRCxnydCc

I think this has some better examples than the Panda vs. Gibbon example in the OP if you want to 'see' why a model may classify a tree-frog as a tree-frog vs. a billiard (for example). IMO this suggests some level of anthropomorphizing is useful for understanding and building models, as the pixels the model picks up aren't really too dissimilar to what I imagine a naive, simple mind might use (i.e. the tree-frog's goofy face). We like to look at faces for lots of reasons, but one of them probably is that they're usually more distinct, which is roughly the same reason why the model likes the face. This is interesting (to me at least) even if it's just matrix multiplication (or uncrumpling high-dimensional manifolds) under the hood.
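
(The core trick in LIME is simpler than it looks: perturb the input, query the black box, and fit a weighted linear surrogate nearby. A toy sketch on tabular data with a made-up stand-in for the black box, rather than the real lime package:)

    import numpy as np
    from sklearn.linear_model import Ridge

    def black_box(X):
        # Stand-in for any opaque model's probability output.
        return 1 / (1 + np.exp(-(2 * X[:, 0] - 3 * X[:, 1])))

    def explain_locally(x, n_samples=500, width=0.5):
        # Sample perturbations around x, weight them by proximity, and fit
        # a weighted linear surrogate; its coefficients are the "explanation".
        X = x + np.random.normal(scale=width, size=(n_samples, x.size))
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / width ** 2)
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(X, black_box(X), sample_weight=w)
        return surrogate.coef_  # local feature importances

    print(explain_locally(np.array([1.0, 0.5])))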

5
CountSessine 9 hours ago 2 replies      
Surely we shouldn't rush to anthropomorphize neural networks, but we'd be ignoring the obvious if we didn't at least note that neural networks do seem to share some structural similarities with our own brains, at least at a very low level, and that they seem to do well with a lot of pattern-recognition problems that we've traditionally considered to be coincident with brains rather than logical systems.

The article notes, "Machine learning models have no access to such experiences and thus cannot "understand" their inputs in any human-relatable way". But this ignores a lot of the subtlety in psychological models of human consciousness. In particular, I'm thinking of Dual Process Theory as typified by Kahneman's "System 1" and "System 2". System 1 is described as a tireless but largely unconscious and heavily biased pattern recognizer - subject to strange fallacies and working on heuristics and cribs, it reacts to its environment when it believes that it recognizes stimuli, and notifies the more conscious "System 2" when it doesn't.

At the very least it seems like neural networks have a lot in common with Kahneman's "System 1".

6
fnl 1 hour ago 0 replies      
Put a lot more simply: even DL is still only very complex statistical pattern matching.

While pattern matching can be applied to model the process of cognition, DL cannot really model abstractive intelligence on its own (unless we phrase it as a pattern learning problem, viz. transfer learning, on a very specific abstraction task), and much less can it model consciousness.

7
meh2frdf 6 hours ago 2 replies      
Correct me if I'm wrong, but I don't see that with 'deep learning' we have answered or solved any of the philosophical problems of AI that existed 25 years ago (I stopped paying attention about then).

Yes we have engineered better NN implementations and have more compute power, and thus can solve a broader set of engineering problems with this tool, but is that it?

8
deepGem 46 minutes ago 0 replies      
> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another.

Per my understanding - Each vector space represents the full state of that layer. Which is probably why the transformations work for such vector spaces.

A sorting algorithm unfortunately cannot be modeled as a set of vector spaces each representing the full state. For instance, an intermediary state of a quicksort algorithm does not represent the full state. Even if a human were to look at that intermediary step in isolation, they would have no clue as to what that state represents. On the contrary, if you observe the visualized activations of an intermediate layer in VGG, you can understand that the layer represents some elements of an image.
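
(This is easy to see concretely: print a quicksort's array after each partition step and the snapshots are meaningless in isolation, unlike a convnet's feature maps. A small illustration:)

    def quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        print(a)  # intermediate state: partially partitioned, opaque in isolation
        quicksort(a, lo, i - 1)
        quicksort(a, i + 1, hi)

    quicksort([5, 2, 8, 1, 9, 3])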

9
cm2187 9 hours ago 2 replies      
I think the requirement for a large amount of data is the biggest objection to the reflex "AI will replace [insert your profession here] soon" that many techies, in particular on HN, have.

There are many professions where there is very little data available to learn from. In some cases (self-driving), companies will invest large amounts of money to build this data, by running lots of test self-driving cars or by paying people to create the data, and it is viable given the size of the market behind it. But the typical high-value intellectual profession is often a niche market with a handful of specialists in the world. Think of a trader of financial institutions' bonds, or a lawyer specialized in cross-border mining acquisitions, a physician specializing in a rare disease, or a salesperson for aviation parts. What data are you going to train your algorithm with?

The second objection, probably equally important, also applies to "software will replace [insert your boring repetitive mindless profession here]", even after 30 years of broad adoption of computers. If you decide to automate some repetitive mundane tasks, you can spare the salary of the guys who did these tasks, but now you need to pay the salary of a full team of AI specialists / software developers. Now for many tasks (CAD, accounting, mailings, etc), the market is big enough to justify a software company making this investment. But there is a huge number of professions where you are never going to break even, and where humans are still paid to do stupid tasks that a software could easily do today (even in VBA), and will keep doing so until the cost of developing and maintaining software or AI has dropped to zero.

I don't see that happening in my lifetime. In fact I am not even sure we are training that many more computer science specialists than 10 years ago. Again, it didn't happen with software for very basic things; why would it happen with AI for more complicated things?

10
ilaksh 4 hours ago 0 replies      
Actually there are quite a few researchers working on applying newer NN research to systems that incorporate sensorimotor input, experience, etc. and more generally, some of them are combining an AGI approach with those new NN techniques. And there has been research coming out with different types of NNs and ways to address problems like overfitting or slow learning/requiring huge datasets, etc. When he says something about abstraction and reasoning, yes that is important but it seems like something NNish may be a necessary part of that because the logical/symbolic approaches to things like reasoning have previously mainly been proven inadequate for real-world complexity and generally the expectations we have for these systems.

Search for things like "Towards Deep Developmental Learning" or "Overcoming catastrophic forgetting in neural networks" or "Feynman Universal Dynamical" or "Wang Emotional NARS". No one seems to have put together everything or totally solved all of the problems but there are lots of exciting developments in the direction of animal/human-like intelligence, with advanced NNs seeming to be an important part (although not necessarily in their most common form, or the only possible approach).

11
kowdermeister 7 hours ago 3 replies      
> In short, deep learning models do not have any understanding of their input, at least not in any human sense. Our own understanding of images, sounds, and language is grounded in our sensorimotor experience as humans, as embodied earthly creatures.

Well, maybe we should train systems with all our sensory inputs first, like newborns learn about the world. Then make these models available open source, like we release operating systems, so others can build on top of them.

For example we have ImageNet, but we don't have WalkNet, TasteNet, TouchNet, SmellNet, HearNet... or other extremely detailed sensory data recorded for an extended time. And these should be connected to match the experiences. At least I have no idea they are out there :)

12
debbiedowner 10 hours ago 0 replies      
People doing empirical experiments cannot claim to know the limits of their experimental apparatus.

While the design process of deep networks remains founded in trial and error, and there are no convergence theorems and approximation guarantees, no one can be sure what deep learning can do, and what it could never do.

13
pc2g4d 9 hours ago 1 reply      
Programmers contemplating the automation of programming:

"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like."

14
eanzenberg 10 hours ago 2 replies      
This point is very well made: 'local generalization vs. extreme generalization.' Advanced NNs today can locally generalize quite well and there's a lot of research spent to inch their generalization further out. This will probably be done by increasing NN size or increasing the complexity of the NN building blocks.
15
lordnacho 10 hours ago 3 replies      
I'm excited to hear about how we bring about abstraction.

I was wondering how a NN would go about discovering F = ma and the laws of motion. As far as I can tell, it has a lot of similarities to how humans would do it. You'd roll balls down slopes like in high school and get a lot of data. And from that you'd find there's a straight line model in there if you do some simple transformations.

But how would you come to hypothesise about what factors matter, and what factors don't? And what about new models of behaviour that weren't in your original set? How would the experimental setup come about in the first place? It doesn't seem likely that people reason simply by jumbling up some models (it's a line / it's inverse distance squared / only mass matters / it matters what color it is / etc), but that may just be education getting in my way.

A machine could of course test these hypotheses, but they'd have to be generated from somewhere, and I suspect there's at least a hint of something aesthetic about it. For instance you have some friction in your ball/slope experiment. The machine finds the model that contains the friction, so it's right in some sense. But the lesson we were trying to learn was a much simpler behaviour, where deviation was something that could be ignored until further study focussed on it.
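
(The "simple transformations" step is worth making concrete: simulate noisy ball-rolling data, regress distance on time squared, and the slope recovers the acceleration. But note that the model family d = (1/2)at^2 was chosen by the experimenter, not discovered by the fit, which is exactly the hypothesis-generation gap described above. All values here are invented:)

    import numpy as np

    a_true = 2.5                      # acceleration down the slope (m/s^2)
    t = np.linspace(0.1, 3.0, 50)     # measurement times
    d = 0.5 * a_true * t**2 + np.random.normal(scale=0.05, size=t.size)

    # The "simple transformation": regress d on t^2, so a straight line emerges.
    slope = np.polyfit(t**2, d, 1)[0]
    print("recovered acceleration:", 2 * slope)  # ~2.5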

16
andreyk 9 hours ago 3 replies      
"Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data."

This statement has a few problems - there is no real reason to interpret the transforms as geometric (they are fundamentally just processing a bunch of numbers into other numbers, in what sense is this geometric), and the focus on human-annotated data is not quite right (Deep RL and other things such as representation learning have also achieved impressive results in Deep Learning). More importantly, saying " a deep learning model is "just" a chain of simple, continuous geometric transformations " is pretty misleading; things like the Neural Turing Machine have shown that enough composed simple functions can do pretty surprisingly complex stuff. It's good to point out that most of deep learning is just fancy input->output mappings, but I feel like this post somewhat overstates the limitations.

17
eli_gottlieb 9 hours ago 0 replies      
>But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like.

I'm really looking forward to this. If it comes out looking like something faster and more usable than Bayesian program induction, RNNs, neural Turing Machines, or Solomonoff Induction, we'll have something really revolutionary on our hands!

18
danielam 5 hours ago 1 reply      
"This ability [...] to perform abstraction and reasoning, is arguably the defining characteristic of human cognition."

He's on the right track. Of course, the general thrust goes beyond deep learning. The projection of intelligence onto computers is first and foremost wrong because computers are not able, not even in principle, to engage in abstraction, and claims to the contrary make for notoriously bad, reductionistic philosophy. Ultimately, such claims underestimate what it takes to understand and apprehend reality and overestimate what a desiccated, reductionistic account of mind and the broader world could actually accommodate vis-a-vis the apprehension and intelligibility of the world.

Take your apprehension of the concept "horse". The concept is not a concrete thing in the world. We have concrete instances of things in the world that "embody" the concept, but "horse" is not itself concrete. It is abstract and irreducible. Furthermore, because it is a concept, it has meaning. Computers are devoid of semantics. They are, as Searle has said ad nauseam, purely syntactic machines. Indeed, I'd take that further and say that actual, physical computers (as opposed to abstract, formal constructions like Turing machines) aren't even syntactic machines. They do not even truly compute. They simulate computation.

That being said, computers are a magnificent invention. The ability to simulate computation over formalisms -- which themselves are products of human beings who first formed abstract concepts on which those formalisms are based -- is fantastic. But it is pure science fiction to project intelligence onto them. If deep learning and AI broadly prove anything, it is that in the narrow applications where AI performs spectacularly, it is possible to substitute what amounts to a mechanical process for human intelligence.

19
cs702 7 hours ago 0 replies      
Yes.

Here's how I've been explaining this to non-technical people lately:

"We do not have intelligent machines that can reason. They don't exist yet. What we have today is machines that can learn to recognize patterns at higher levels of abstraction. For example, for imagine recognition, we have machines that can learn to recognize patterns at the level of pixels as well as at the level of textures, shapes, and objects."

If anyone has a better way of explaining deep learning to non-technical people in a few short sentences, I'd love to see it. Post it here!

20
gallerdude 9 hours ago 1 reply      
I'm sorry, but I don't understand why wider & deeper networks won't do the job. If it took "sufficiently large" networks and "sufficiently many" examples, I don't understand why it wouldn't just take another order of magnitude of "sufficiency."

If you look at the example with the blue dots on the bottom, would it not just take many more blue dots to fill in what the neural network doesn't know? I understand that adding more blue dots isn't easy - we'll need a huge amount of training data, and huge amounts of compute to follow; but if increasing the scale is what got these to work in the first place, I don't see why we shouldn't try to scale it up even more.

21
latently 8 hours ago 1 reply      
The brain is a dynamic system and (some) neural networks are also dynamic systems, and a three layer neural network can learn to approximate any function. Thus, a neural network can approximate brain function arbitrarily well given time and space. Whether that simulation is conscious is another story.
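
(That is the universal approximation theorem. You can watch it in miniature with scikit-learn, fitting a one-hidden-layer net to sin(x), though "given time and space" is doing a lot of work in the claim above:)

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
    y = np.sin(X).ravel()

    # One hidden layer of 50 units approximates the target function.
    net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, tol=1e-7)
    net.fit(X, y)
    print("max error:", np.abs(net.predict(X) - y).max())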

The Computational Cognitive Neuroscience Lab has been studying this topic for decades and has an online textbook here:

http://grey.colorado.edu/CompCogNeuro

The "emergent" deep learning simulator is focused on using these kinds of models to model the brain:

http://grey.colorado.edu/emergent

22
thanatropism 5 hours ago 0 replies      
This is evergreen:

https://en.m.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_...

See also, if you can, the film "Being in the world", which features Dreyfus.

23
denfromufa 9 hours ago 2 replies      
If the deep learning network has enough layers, then can't it start incorporating "abstract" ideas common to any learning task? E.g. could we re-use some layers for image/speech recognition & NLP?
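
(Reusing layers across tasks is already standard practice under the name transfer learning. A common PyTorch-style sketch, assuming torchvision is available; the 10-class head is illustrative:)

    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(pretrained=True)  # layers trained on ImageNet
    for param in model.parameters():
        param.requires_grad = False           # reuse, don't retrain

    # Replace only the final layer for a new 10-class task; the earlier
    # layers' "abstract" features (edges, textures, parts) carry over.
    model.fc = nn.Linear(model.fc.in_features, 10)
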
24
zfrenchee 8 hours ago 2 replies      
My qualm with this article is that it's disappointingly poorly backed up. The author makes claims, but does not justify those claims well enough to convince anyone but people who already agree with him. In that sense, this piece is an opinion piece, masquerading as science.

> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models [why?]for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task [why?], or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex [???], or there may not be appropriate data available to learn it [like what?].

> Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues [why?]. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold. [really? why?]

I tend to disagree with these opinions, but I think the author's opinions aren't unreasonable; I just wish he would explain them rather than re-iterating them.

25
LeanderK 6 hours ago 0 replies      
The author raises some valid points, but I don't like the style it is written in. He just makes some elaborate claims about the limitations of Deep Learning, but never conveys why they are limitations. I don't disagree with the fact that there are limits to Deep Learning and that many may be impossible to overcome without completely new approaches. I would like to see more emphasis on why things that are theoretically possible, like generating code from descriptions, are absolutely out of reach today, and not give the impression that the task itself is impossible (like the halting problem).
26
msoad 10 hours ago 1 reply      
Then people are assuming Deep Learning can be applied to a Self Driving Car System end-to-end! Can you imagine the outcome?!
27
ezioamf 9 hours ago 1 reply      
This is why I don't know if it will be possible (given current limitations) to let insect-like brains fully drive our cars. It may never be good enough.
28
nimish 10 hours ago 1 reply      
This is basically the Chinese Room argument though?
29
graycat 8 hours ago 1 reply      
On the limitations of machine learning as in the OP, the OP is correct.

So, right, current approaches to "machine learning" as in the OP have some serious "limitations". But this point is a small, tiny special case of something else much larger and more important: Current approaches to "machine learning" as in the OP are essentially some applied math, and applied math is commonly much more powerful than machine learning as in the OP and has much less severe limitations.

Really, "machine learning" as in the OP is not learning in any significantly meaningful sense at all. Really, apparently, the whole field of "machine learning" is heavily just hype from the deceptive label "machine learning". That hype is deceptive, apparently deliberately so, and unprofessional.

Broadly machine learning as in the OP is a case of old empirical curve fitting where there is a long history with a lot of approaches quite different from what is in the OP. Some of the approaches are under some circumstances much more powerful than what is in the OP.

The attention to machine learning is omitting a huge body of highly polished knowledge usually much more powerful. In a cooking analogy, you are being sold a state fair corn dog, which can be good, instead of everything in Escoffier,

Prosper Montagné, Larousse Gastronomique: The Encyclopedia of Food, Wine, and Cookery, ISBN 0-517-503336, Crown Publishers, New York, 1961.

Essentially, for machine learning as in the OP, if (A) have a LOT of training data, (B) a lot of testing data, (C) by gradient descent or whatever build a model of some kind that fits the training data, and (D) the model also predicts well on the testing data, then (E) may have found something of value.

But the test in (D) is about the only assurance of any value. And the value in (D) needs an assumption: Applications of the model will in some suitable sense, rarely made clear, be close to the training data.

Such fitting goes back at least to

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

not nearly new. This work is commonly called CART, and there has long been corresponding software.

And CART goes back to versions of regression analysis that go back maybe 100 years.

So, sure, in regression analysis, we are given points on an X-Y coordinate system and want to fit a straight line so that as a function of points on the X axis the line does well approximating the points on the X-Y plot. Being more specific could use some mathematical notation awkward for simple typing and, really, likely not needed here.

Well, to generalize, the X axis can have several dimensions, that is, accommodate several variables. The result is multiple linear regression.
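
(In code, that generalization is one line of linear algebra. A small numpy illustration with invented data:)

    import numpy as np

    n = 200
    X = np.column_stack([np.ones(n),          # intercept
                         np.random.rand(n),   # variable 1
                         np.random.rand(n)])  # variable 2
    y = X @ np.array([1.0, 2.0, -3.0]) + np.random.normal(scale=0.1, size=n)

    # Ordinary least squares; classical theory gives error estimates too.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)  # ~[1, 2, -3]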

For more, there is a lot with a lot of guarantees. Can find those in short and easy form in

Alexander M. Mood, Franklin A. Graybill, and Duane C. Boas, Introduction to the Theory of Statistics, Third Edition, McGraw-Hill, New York, 1974.

with more detail but still easy form in

N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and Sons, New York, 1968.

with much more detail and carefully done in

C. Radhakrishna Rao, Linear Statistical Inference and Its Applications: Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

Right, this stuff is not nearly new.

So, with some assumptions, get lots of guarantees on the accuracy of the fitted model.

This is all old stuff.

The work in machine learning has added some details to the old issue of over fitting, but, really, the math in old regression takes that into consideration -- a case of over fitting will usually show up in larger estimates for errors.

There is also spline fitting, fitting from Fourier analysis, autoregressive integrated moving average processes,

David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7, Holden-Day, San Francisco, 1981.

and much more.

But, let's see some examples of applied math that totally knocks the socks off model fitting:

(1) Early in civilization, people noticed the stars and the ones that moved in complicated paths, the planets. Well, Ptolemy built some empirical models based on epi-cycles that seemed to fit the data well and have good predictive value.

But much better work was from Kepler, who discovered that, really, if you assume that the sun stays still and the earth moves around the sun, then the paths of planets are just ellipses.

Next Newton invented the second law of motion, the law of gravity, and calculus and used them to explain the ellipses.

So, what Kepler and Newton did was far ahead of what Ptolemy did.

Or, all Ptolemy did was just some empirical fitting, and Kepler and Newton explained what was really going on and, in particular, came up with much better predictive models.

Empirical fitting lost out badly.

Note that once Kepler assumed that the sun stands still and the earth moves around the sun, actually he didn't need much data to determine the ellipses. And Newton needed nearly no data at all except to check his results.

Or, Kepler and Newton had some good ideas, and Ptolemy had only empirical fitting.

(2) The history of physical science is just awash in models derived from scientific principles that are, then, verified by fits to data.

E.g., some first principles derivations show what the acoustic power spectrum of the 3 K background radiation should be, and the fit to the actual data from WMAP, etc. was astoundingly close.

News Flash: Commonly some real science or even just real engineering principles totally knocks the socks off empirical fitting, for much less data.

(3) E.g., here is a fun example I worked up while in a part time job in grad school: I got some useful predictions for an enormously complicated situation out of a little applied math and nearly no data at all.

I was asked to predict what the survivability of the US SSBN fleet would be under a special scenario of global nuclear war limited to sea.

Well, there was a WWII analysis by B. Koopman that showed that in search, say, of a submarine for a surface ship, an airplane for a submarine, etc., the encounter rates were approximately a Poisson process.

So, for all the forces in that war at sea, for the number of forces surviving, with some simplifying assumptions, we have a continuous time, discrete state space Markov process subordinated to a Poisson process. The details of the Markov process are from a little data about detection radii and the probabilities at a detection: one dies, the other dies, both die, or neither dies.

That's all there was to the set up of the problem, the model.

Then to evaluate the model, just use Monte Carlo to run off, say, 500 sample paths, average those, appeal to the strong law of large numbers, and presto, bingo, done. Also can easily put up some confidence intervals.
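
(A stripped-down sketch of that computation in Python: exponential waiting times for the Poisson encounters, a simple outcome distribution at each encounter, 500 sample paths, and a normal-approximation confidence interval. Every rate and probability here is invented, and a fuller model would, among other things, scale the encounter rate with the surviving force sizes:)

    import random, statistics

    def sample_path(blue=10, red=10, rate=0.5, horizon=30.0):
        # Continuous-time Markov chain: encounter times are exponential
        # (Poisson process); at each encounter one side, both, or neither dies.
        t = 0.0
        while blue and red:
            t += random.expovariate(rate)
            if t > horizon:
                break
            u = random.random()
            if u < 0.4:   red -= 1
            elif u < 0.8: blue -= 1
            elif u < 0.9: blue -= 1; red -= 1
        return blue  # survivors on the side we care about

    paths = [sample_path() for _ in range(500)]
    mean = statistics.mean(paths)
    half = 1.96 * statistics.stdev(paths) / 500 ** 0.5
    print(f"expected survivors: {mean:.2f} +/- {half:.2f} (95% CI)")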

The customers were happy.

Try to do that analysis with big data and machine learning and you will be in deep, bubbling, smelly, reeking, flaming, black and orange, toxic sticky stuff.

So, a little applied math, some first principles of physical science, or some solid engineering data commonly totally knocks the socks off machine learning as in the OP.

30
erikb 7 hours ago 1 reply      
I don't get it. If reasoning is not an option, how does deep learning beat the board game Go?
31
reader5000 10 hours ago 1 reply      
Recurrent models do not simply map from one vector space to another and could very much be interpreted as reasoning about their environment. Of course they are significantly more difficult to train and backprop through time seems a bit of a hack.
32
beachbum8029 8 hours ago 1 reply      
Pretty interesting that he says reasoning and long term planning are impossible tasks for a neural net, when those tasks are done by billions of neural nets every day. :^)
33
deepnotderp 8 hours ago 1 reply      
I'd like to offer a somewhat contrasting viewpoint (although this might not sit well with people): deep nets aren't AGI, but they're pretty damn good. There's mounting evidence that they learn similarly to how we do, at least in vision: https://arxiv.org/abs/1706.08606 and https://www.nature.com/articles/srep27755

There's quite a few others but these were the most readily available papers.

Are deep nets AGI? No, but they're a lot better than Mr. Chollet gives them credit for.

34
AndrewKemendo 9 hours ago 0 replies      
> the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data.

Yes, but that's what humans do too, only much, much better from the generalized perspective.

I think that fundamentally this IS the paradigm for AGI, but we are in the pre-infant days of optimization across the board (data, efficiency, tagging etc...).

So I wholeheartedly agree with the post, that we shouldn't cheer yet, but we should also recognize that we are on the right track.

I say all this because prior to getting into DL and more specifically Reinforcement Learning (which is WAY under studied IMO), I was working with Bayesian Expert Systems as a path to AI/AGI. RL totally transformed how I saw the problem and in my mind offers a concrete pathway to AGI.

3
Robust Adversarial Examples openai.com
27 points by eroo  3 hours ago   3 comments top
1
therajiv 33 minutes ago 2 replies      
It's not clear to me how malicious actors could use this observation to confuse self-driving cars. That said, I don't think this discredits the point of the article; it's important to note how easily deep learning models can be fooled if you understand the math behind them. I just think the example of tricking self-driving cars is difficult to relate to / understand.
4
A decentralized Bitcoin exchange github.com
157 points by ColanR  7 hours ago   50 comments top 12
1
olegkikin 4 hours ago 2 replies      
2
cocktailpeanuts 6 hours ago 5 replies      
Just finished watching their video and got nothing other than "Read our white paper if you're interested", plus a bunch of buzzwords.

Why make the video at all? If I were them I would scrap all the bullshit and just spend the two minutes in the video explaining the basics of WHY and HOW it works. People visiting that site already know what a "decentralized bitcoin exchange" is.

3
contingencies 2 hours ago 0 replies      
Fiat transfer systems are typically unreliable. They can be intercepted, halted, delayed, reversed, and generally cannot be considered objectively predictable, with a wide variety of unique and nontrivial failure modes - not all of which are recoverable - and no objective SLA / service description.

The problem with assuming good faith and using actor reputation (even third-party arbitrated) is that, in becoming a trusted actor, the amount of money available for cut-and-run scenarios increases exponentially (both for arbitrator and actor), until it ultimately makes sense and happens (e.g. numerous scam darknet markets)... often the claim is "sorry, we got hacked!"

Using real-world user identities as insurance has the issue that using one's fiat bank account to perform automated or semi-automated trades on behalf of others is probably against terms of service, or at a minimum vaguely arguably so when politically expedient. Therefore, revealing the real-world user identity of an accused bad actor (i.e. fiat account holder) as insurance against bad behavior is likely to expose them to an undue scale of legal hassle and/or asset seizure, which is not something wise to trust a third party with, no matter how trustworthy the arbitrators are supposed to be.

My gut feeling is that such systems work only at small scale, with a veneer of trust that can be established in different ways: deposit is placed with counterparty, reputation within some shared community, mafia boss will murder you if you rip off the system, etc. Between absolute strangers, it is exceptionally difficult to reliably scale, even if you can establish it.

Finally, an important point is that frequent <1 BTC transfer activity to random destinations on conventional fiat accounts is likely to trigger bank anti-money-laundering (AML) heuristics.

4
Jaepa 6 hours ago 0 replies      
This is interesting, but I could see some possible issues.

It would be fun to ask the Devs some questions.

eg: if peers are able to select their arbitrators, how do you prevent a peer and an arbitrator from gaming the system? There is a secondary arbitrator, but from the docs it looks like after the initial arbitration the funds are released.

Is there a way to protect against root DHT node hijack? The only reference I see to this is a TODO: See how btc does this.

5
Uptrenda 3 hours ago 1 reply      
It may have changed since the last time I used this, but here's how it works:

 1. There are two sides to a trade, Alice and Bob.
 2. Alice has USD and Bob has Bitcoins.
 3. Both sides wish to trade money but they don't trust each other.
 4. To do this, they deposit collateral in the form of Bitcoins into an escrow account (multiple mediators need to sign to give back the collateral to their owners). This is a bond separate from the money they are already trading.
 5. Alice sends her USD to Bob.
 6. Bob sends his Bitcoins to Alice.
 7. If either side cheated, the mediators won't sign the "check" to release funds from the escrow account. Therefore, so long as the value of the collateral is worth more than their potential profit from scamming -- there is no incentive to scam.
In BitSquare, step 4 is, I think, done with third-party mediators, and the mediators make decisions based on evidence. So first, how do you prove that a user sent Bitcoin? Easy: it's on the blockchain. Second, how do you prove that a user sent USD? Well, I believe BitSquare uses something called "TLS notaries" -- this allows a person to cryptographically prove that an SSL website was in their browser, potentially enabling a person to prove that they sent funds.
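
(The incentive logic of step 7 fits in a few lines; a toy model with invented amounts:)

    def rational_to_scam(trade_value, collateral):
        # Cheating forfeits your collateral (the mediators won't sign the
        # refund), so it only "pays" if the stolen trade exceeds the loss.
        return trade_value - collateral > 0

    print(rational_to_scam(trade_value=1.0, collateral=1.5))  # False: honest
    print(rational_to_scam(trade_value=2.0, collateral=1.5))  # True: unsafe trade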

As you can see this scheme has a few problems:

 1. Users are required to have Bitcoins for collateral. So if you don't already have Bitcoins you can't buy any (strange scenario).
 2. It relies on collateral, period, so you can never buy and sell the full amount of funds that you have.
 3. Liquidity is poor. BitSquare could be improved if they had more investors and structured the exchange to provide liquidity themselves at a premium.
 4. It's unclear how secure the notaries are and whether or not they can be cheated.
 5. Reputation isn't that secure and the model doesn't account for attackers, though I think BitSquare solves this with multiple mediators.
Another option to solve the same problem is to use micro-payment channels. A service would have credit that represented a USD balance and micro-amounts of this balance would be sent to the recipient as the sender receives micro-amounts of Bitcoin. This is a better model, IMO, but still can potentially be reversed.

It's good to see that BitSquare is still around though. Decentralized exchanges haven't had much adoption so far and I haven't seen anyone who nailed every usability problem that these exchanges have. Even assets on Ethereum where you can literally write simple code that says "a transfer occurs if two users agree to it" are traded on "decentralized exchanges" with multiple vulnerabilities and bad UX for traders.

6
benjaminmbrown 6 hours ago 1 reply      
Etherdelta has been doing this for ERC20 tokens for some time: https://etherdelta.github.io/
7
dharma1 4 hours ago 1 reply      
Could you have a decentralised p2p fiat <-> ETH exchange with a smart contract and PayPal, where the sold ETH is held in the smart contract until the PayPal transfer from buyer to seller is verified by the smart contract?

Like localbitcoin but no need to meet up

8
mattbeckman 4 hours ago 1 reply      
How does this differ from BitShares that's been around a long time? https://bitshares.org/
9
equalunique 5 hours ago 1 reply      
Interesting. Previously, the only incarnation of this idea was, to my knowledge, NVO: https://nvo.io/
10
kinnth 4 hours ago 0 replies      
I get it. I like it, but I wouldn't use it just yet. I think the fees charged by most exchanges are low enough for me not to worry. Also, there is a large amount of legal bureaucracy now happening around real-money exchanges; how might that affect it?
11
gragas 3 hours ago 1 reply      
This is going to get so destroyed by HFT strategies. A million already come to mind.
12
brian_herman 7 hours ago 1 reply      
Who enforces it?
5
Two days in an underwater cave running out of oxygen bbc.com
312 points by Luc  10 hours ago   210 comments top 12
1
j9461701 7 hours ago 21 replies      
I might be speaking out of line, but taking on these kinds of risks with young children at home seems kind of selfish. The fact that he went back into the same cave that nearly killed him only a month later...almost as if to say:

"I would rather my kids grow up without a Dad than live without my adrenaline fix"

I am neither a father nor a cave diver though, so I might be missing a piece of the puzzle. Would either group of people care to comment?

2
benzofuran 8 hours ago 4 replies      
When you're learning to cave dive, one of the first things that you learn is that you may very well die in there.

Most of the training focuses on systems, skills repetition, and understanding and using redundant systems - folks getting into cave diving typically are already extremely experienced divers who if anything need only some minor skill tweaks - most cave instructors will not take on students who don't already have significant open water technical diving experience (multiple tanks, mixed gas, rebreathers, decompression, wreck, etc).

A running joke is that the lost line drill (where you're placed intentionally off of the guide line and have to find it without a mask/light/visibility) is the most punctual cave task you'll ever do - you have the rest of your life to get it right.

Here's a few good books on it (non-affiliate links):

Caverns Measureless to Man by Sheck Exley (the father of cave diving): https://www.amazon.com/Caverns-Measureless-Man-Sheck-Exley/d...

The Darkness Beckons by Martyn Farr: https://www.amazon.com/Darkness-Beckons-History-Development-...

Beyond the Deep by Bill Stone (the Tony Stark of cave diving): https://www.amazon.com/Beyond-Deep-Deadly-Descent-Treacherou...

The Cenotes of the Riviera Maya by Steve Gerrard (patron saint / mapper of Yucatan caves): https://www.amazon.com/Cenotes-Riviera-Maya-2016/dp/16821340... This is more of a map and explanatory notes but gives great insight into the complexity of it. Currently there are 2 systems that almost all cenotes are part of in the Yucatan, and there's some really interesting work going on trying to link the two. Current work is going on at about 180m depth through a number of rooms at the back of "The Pit", and there are multi-day expeditions going on trying to find the linkage.

3
curtis 8 hours ago 3 replies      
Cave diving is one of those things that I am happy to only experience vicariously through the stories of others.
4
biggc 8 hours ago 4 replies      
> He realised the water at the surface of the lake was drinkable

Can someone explain this phenomenon? How can the water in a sea cave become potable?

5
gregorymichael 8 hours ago 1 reply      
Did a sensory deprivation tank for the first time a few weeks ago. An hour was tough. Hard to imagine 60, with the added doubt of "you may never get out of here."
6
acdanger 7 hours ago 0 replies      
See also this story if you want to make sure you don't really want to go diving in a subterranean cave: http://www.bbc.com/news/magazine-36097300
7
ajarmst 6 hours ago 2 replies      
Reminded me of the very sad story of Peter Verhulsel: http://www.upi.com/Archives/1984/11/12/Scuba-diver-lost-in-c...
8
A_Person 1 hour ago 3 replies      
I'd like to address the false assumption in this thread that cave diving is more dangerous than driving a car!

I cave dive on a regular basis with two other guys. We've dived together as a team for nearly 10 years. I'm late 60s and single, the second guy is 50s and has a partner but no children, and the third is early 40s with a six-year-old, whom he has every intention of seeing grow up into adulthood.

We often dive in a system comprising a complex maze of 8kms of underwater tunnels. Some are large, and would fit several divers across, but some are small, and you can barely squeeze through. The only entry to and exit from this system is a small pond, about 6 feet across and 4 feet deep, just big enough for one person to get in at a time. Then you scrunch yourself up, and drop down through a slot to enter the system.

We'd generally go about 700m into this system, making up to 13 separate navigational decisions (left? right? straight ahead?) which we have to reverse precisely to get back out at the end. This is all completely underwater; there's no air anywhere except for two air pockets hundreds of meters apart. As I like to say, in cave diving there is no UP, there is only OUT!

It all sounds pretty dangerous, right? Wrong.

NAVIGATION. The whole system is set up with fixed lines, each of which has a numbered marker every 50m or so. Before each dive, we consult the map, and plan exactly where we're going to go. I commit that plan to memory, write it down on a wrist slate, and also in a notebook which I take underwater. All three of us do this independently. Underwater, when we come to a junction, each of us checks the direction to go, then marks the exit direction with a personal marker. If anyone makes a mistake, for example, turns in the wrong direction, or forgets to leave a personal marker, the other two pick that up immediately. On the way back, when we get to each junction, each of us checks that it's the junction we expected, and we can see our personal markers. Each individual's markers can be distinguished by feel alone, so we could get the whole way back, separately, in total darkness, if we had to. So the odds of us getting lost in the system are very low.

LIGHT. These caves are absolutely pitch black, so naturally you need a torch. In fact, nine torches! Each of us individually has a multi-thousand-lumen canister battery light, plus 2 backup torches, each of which would last the whole dive. I could also navigate by the light of my dive computer screen, and I'm considering carrying a cyalume chemical lightstick as well. So then I personally would have five different sources of light, and we'd need 11 sources of light to fail before the team would be left in the dark. The risk of this happening is basically zero.

GAS. Each of us has two tanks in a fully redundant setup. If one side fails, we just go to the other and call the dive. In fact, our gas planning allows one diver's entire gas supply to fail, at the point of maximum penetration, and either one of the other two divers could get that guy back, plus himself, without relying on the third diver at all. However, gas is certainly a limited resource underwater, so it's always on our minds, and all three of us will turn the dive as soon as anyone hits their safety limit.

There's lots more equipment involved, but let's leave it there for the moment, and turn our attention to...

DRIVING! Each of us lives >400 km away from that system. So there and back is a five hour drive. During that drive, you could fall asleep and run off the road; have local fauna run out in front of your car; get head-on crashed by drunken drivers, and so on. Several of those are external risks that are not under our control.

So the simple fact of the matter is this. Our cave dives are almost certainly SIGNIFICANTLY SAFER than driving to and from the dive site! The cave dives carry significant potential risks, but most of those are mitigated with proper training and equipment. Whereas there's not much I can do to stop a drunken driver running head-on into me.

Certainly there are risks like tunnels collapsing and blocking the exit. But statistically, I'm sure that those are orders of magnitude less likely than having a heart attack, or falling over and breaking your neck.

Hope that helps :-)

9
fit2rule 7 hours ago 1 reply      
I'm quite surprised at the detail that the rescuers attempted to drill into the cave from above in order to provide supplies .. is anyone familiar with the depth of the cave pocket? This seems like a surprising choice to make given the logistics - but I guess a safer one, in the end .. assuming one has a drill system available and the depth is not too great.
10
belovedeagle 8 hours ago 4 replies      
I wonder - did they take steps to replenish the cave's oxygen? If not, it's useless for the next person...

I guess this is kind of silly and naive, but it's what I would do.

11
surgeryres 4 hours ago 1 reply      
No one has mentioned the risk imposed on the rescue team that came to get him. So there's that.
12
tysonrdm 8 hours ago 4 replies      
There should be a law against bringing in and leaving nylon ropes in the cave. If this continues, all the caves are going to be filled with nylon ropes left by previous divers. Do we want these caves, too, to eventually become a garbage dumping ground?
6
Machine Learning Crash Course: The Bias-Variance Dilemma berkeley.edu
432 points by Yossi_Frenkel  14 hours ago   45 comments top 10
1
taeric 12 hours ago 4 replies      
This seems to ultimately come down to an idea that folks have a hard time shaking. It is entirely possible that you cannot recover the original signal using machine learning. This is, fundamentally, what separates this field from digital sampling.

And this is not unique to machine learning, per se. https://fivethirtyeight.com/features/trump-noncitizen-voters... has a great widget that shows that as you get more data, you do not necessarily decrease inherent noise. In fact, it stays very constant. (Granted, this is in large part because machine learning has most of its roots in statistics.)

More explicitly, with ML, you are building probabilistic models. This is contrasted with most models folks are used to, which are analytic models. That is, you run the calculations for an object moving across the field, and you get something within the measurement bounds that you expected. With a probabilistic model, you get something that is within the bounds of being in line with previous data you have collected.

(None of this is to say this is a bad article. Just a bias to keep in mind as you are reading it. Hopefully, it helps you challenge it.)

2
rdudekul 11 hours ago 0 replies      
Here are parts 1, 2 & 3:

Introduction, Regression/Classification, Cost Functions, and Gradient Descent:

https://ml.berkeley.edu/blog/2016/11/06/tutorial-1/

Perceptrons, Logistic Regression, and SVMs:

https://ml.berkeley.edu/blog/2016/12/24/tutorial-2/

Neural networks & Backpropagation:

https://ml.berkeley.edu/blog/2017/02/04/tutorial-3/

3
amelius 12 hours ago 5 replies      
The whole problem of overfitting or underfitting exists because you're not trying to understand the underlying model, but you're trying to "cheat" by inventing some formula that happens to work in most cases.
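
(Both failure modes are easy to demonstrate: underfit a curve with a degree-1 polynomial, overfit the noise with degree 15, and compare errors on held-out points. Synthetic data throughout:)

    import numpy as np

    rng = np.random.RandomState(0)
    x = np.sort(rng.uniform(0, 1, 30))
    x_test = np.linspace(0, 1, 100)
    f = lambda x: np.sin(2 * np.pi * x)
    y = f(x) + rng.normal(scale=0.2, size=x.size)

    for degree in (1, 4, 15):   # underfit, about right, overfit
        coeffs = np.polyfit(x, y, degree)
        test_err = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
        print(degree, round(test_err, 3))
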
4
therajiv 12 hours ago 3 replies      
Wow, the discussion on the Fukushima civil engineering decision was pretty interesting. However, I find it surprising that the engineers simply overlooked the linearity of the law and used a nonlinear model. I wonder if there were any economic / other incentives at play, and the model shown was just used to justify the decision?

Regardless, that post was a great read.

5
eggie5 10 hours ago 0 replies      
I've always liked this visualization of the Bias-Variance tradeoff: http://www.eggie5.com/110-bias-variance-tradeoff
6
gpawl 2 hours ago 0 replies      
Statistics is the science of making decisions under uncertainty.

It is far too frequently misunderstood as the science of making certainty from uncertainty.

7
CuriouslyC 10 hours ago 0 replies      
One good way to solve the bias-variance problem is to use Gaussian processes (GPs). With GPs you build a probabilistic model of the covariance structure of your data. Locally complex, high variance models produce poor objective scores, so hyperparameter optimization favors "simpler" models.

Even better, you can put priors on the parameters of your model and give it the full Bayesian treatment via MCMC. This avoids overfitting, and gives you information about how strongly your data specifies the model.
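
(A scikit-learn sketch of the first half of this: the kernel hyperparameters are chosen by maximizing the marginal likelihood during fit, which penalizes needlessly wiggly models. Data here is synthetic:)

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    X = np.random.uniform(0, 5, 40).reshape(-1, 1)
    y = np.sin(X).ravel() + np.random.normal(scale=0.1, size=40)

    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
    print(gp.kernel_)  # optimized hyperparameters
    mean, std = gp.predict([[2.5]], return_std=True)  # uncertainty for free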

8
plg 11 hours ago 0 replies      
like many things in science and engineering, (and life in general) it comes down to this: what is signal, what is noise?

most of the time there is no a priori way of determining this

you come to the problem with your own assumptions (or you inherit them) and that guides you (or misguides you)

9
known 12 hours ago 0 replies      
Brilliant post; thank you.
10
Pogba666 13 hours ago 0 replies      
Wow, nice. Then I have things to do on my flight now.
7
Red Programming Language 0.6.3 red-lang.org
82 points by ZenoArrow  3 hours ago   9 comments top 5
1
throwaway7645 2 hours ago 0 replies      
I'm always excited for a new Red release. I'm glad they're taking time to get the GUI DSL right for 10 platforms, but I hope it doesn't come at the detriment of enhancements to Red/System, Multi-Core, Concurrency, and all the other promised goodies. The thing badly needed at this point, I think, is better documentation. I know it is ~95% compatible with REBOL, but honestly the REBOL documentation is very basic. It shows you how to do a lot of neat things with the built-in DSLs, but nowhere, and I mean nowhere, does it show you in any detail how to implement your own DSL, even though you frequently hear about how powerful the dialects are. I'm still super pumped about this language and am thrilled for each new release.
2
_mhr_ 13 minutes ago 0 replies      
Could I use Red in a manner similar to Awk for record processing?
3
axaxs 2 hours ago 3 replies      
Am I reading this right? A single binary, less than 1mb in size, that handles GC and cross compiles? Color me impressed. Looking at platforms, does it not do 64 bit?
4
Buttons840 2 hours ago 1 reply      
Red looks like a good language. They seem to be supporting Linux last though, which has kept me from trying it personally.
5
tgb 2 hours ago 0 replies      
Non-mobile link which has more links to other content: http://www.red-lang.org/2017/07/063-macos-gui-backend.html
8
Show HN: tttfi Middleware for IFTTT github.com
74 points by kamikat  7 hours ago   37 comments top 7
1
rockostrich 6 hours ago 1 reply      
I'm not sure middleware is the right term for a server that responds to webhooks.

Either way, I think this is a good introduction for someone who is looking to do very simple things with an IFTTT integration. I don't think a node.js server running a python script inside of a docker container is the best way to go about it. Anyone who is trying to learn how to write integrations to services (such as IFTTT) will get the wrong impression if they try to dissect this code.
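
(For scale, the minimum viable "server that responds to webhooks" really is tiny; a Flask sketch, with the endpoint path and payload handling invented:)

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        # IFTTT's Maker/Webhooks service POSTs whatever JSON body you configure.
        payload = request.get_json(force=True)
        print("event received:", payload)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)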

2
zrail 3 hours ago 1 reply      
Zapier has this built in. Granted you can only really write scripts that run directly on their platform in JavaScript, but within that scope you can do basically anything you want.

You can also write your own private apps that execute on your own infrastructure in whatever language you want.

https://zapier.com/developer/documentation/v2/scripting/

3
_Marak_ 6 hours ago 0 replies      
If you are interested in doing this locally ( without IFTTT dependency ), I'd suggest checking out: https://github.com/stackvana/microcule
4
jld 6 hours ago 1 reply      
I wish IFTTT were faster. Many recipes only seem to be run a few times a day.

I've been concocting a bunch of things in AWS Lamdba lately which should be in a service like IFTTT.

5
hengheng 7 hours ago 6 replies      
Slightly OT: I signed up for the IFTTT newsletter a long time ago, in hopes of finding out what people use the thing for. Turns out it's impossible to unsubscribe from that newsletter, which disqualified the whole service in my eyes.
6
sdoering 7 hours ago 4 replies      
Legacy Python nowadays? I really cannot understand why one would use legacy Python for a new project.

Sorry but I could understand if you had to maintain a legacy code base.

7
dmerrick 7 hours ago 1 reply      
Wonderful idea. So wonderful, in fact, it makes one wonder what is taking IFTTT so long to offer up similar functionality.
9
Redesigning GitLab's navigation gitlab.com
71 points by theoretick  6 hours ago   31 comments top 12
1
wolfgang42 2 hours ago 0 replies      
My first thought on seeing this headline was "what, again?"

Don't get me wrong, the new design looks great, but it feels like every six months they switch between having side navigation and putting everything on top. Every time they change there's a definite improvement, but you'd think they'd be able to come up with a design that works for at least a few years.

2
zanny 3 hours ago 3 replies      
Since you are redesigning UI features, are there any plans for a site-native dark theme? Seeing as there are already multiple code themes available, it seems within the scope of the configurable-UI part of the goal. There have been Stylish themes for GitLab in the past, but they break a lot because GitLab is a very large website to try restyling from the outside.

Solarized-dark with a white frame can be quite grating on the eyes.

3
vogt 2 hours ago 0 replies      
GitLab are one of the only companies I really look up to as a UXer. You guys just Do It Right. Thanks for being so transparent about your process and even sharing InVision prototypes. More companies should do this.
4
btown 1 hour ago 1 reply      
Is it really accepted best practice for UX testing to use a sample size of 12? That sounds remarkably low to draw product conclusions.
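
For intuition on why twelve participants is thin for quantitative conclusions (though often enough to surface the biggest usability problems), here is a rough sketch of how wide a confidence interval is at that sample size; the 9-of-12 success figure is an invented example:

  # 95% confidence interval for a task-success rate observed with n = 12.
  # Uses the Wilson score interval; the 9/12 figure is illustrative only.
  import math

  def wilson_interval(successes, n, z=1.96):
      p = successes / n
      denom = 1 + z * z / n
      center = (p + z * z / (2 * n)) / denom
      half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
      return center - half, center + half

  lo, hi = wilson_interval(9, 12)  # 9 of 12 users complete the task
  print(f"observed 75%, 95% CI roughly {lo:.0%}..{hi:.0%}")
  # -> roughly 47%..91%: far too wide to rank design variants quantitatively.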
5
allan_s 4 hours ago 1 reply      
Something I find missing is a shortcut to easily go to the page of "merge requests assigned to me" (the 'merge requests' link in the global menu), as it's the single page I visit the most every day at work, to see, across all projects, the merge requests that I need to check.
6
pksadiq 1 hour ago 0 replies      
I hope the design will be good. The issue I have faced so far is that the navigation (and several other features of the GitLab site) doesn't work well when JavaScript is turned off. The equivalent features in GitHub work without JavaScript with hardly any issues.

Hope this will be fixed.

7
faizmokhtar 3 hours ago 1 reply      
Props to the GitLab team for the release. I've had it enabled for a few days now and I really love it. I feel like it is much less confusing now.
8
luord 1 hour ago 0 replies      
This is awesome. Probably the only gripe I have about GitLab is the navigation and this is a great improvement.
9
leipert 4 hours ago 1 reply      
This looks nice, but hopefully this is a "stable" release for a while; in the last few versions so much stuff moved around, navigation-wise. I especially like that the three vertical navigations (http://imgur.com/UN1TDkw) in a repo are gone.

Thumbs up for creating such an awesome product and involving the community in your thought process!

10
prh8 5 hours ago 1 reply      
Heads up for the Gitlab people here-- The user preferences link under try it yourself 404s for me, both when logged in and out. Excited to try out these changes though!

Edit: Appears to be the same path but with `about` subdomain

11
bau5 5 hours ago 2 replies      
You can tell someone is copying someone else's work when they copy the bad stuff. The search box on GitHub and now GitLab wastes space with "This repository" spelled out. It's also odd that you get more space to type when you're not searching inside of a repo, and that backspace is how you switch to a more general search. Pity. I thought that GitLab had learned their lesson and stopped copying GitHub.

Aside from that, it sounds like a well thought out feature, and it's good that they're redoing it instead of just changing it bit by bit.

12
Karunamon 5 hours ago 3 replies      
Bloody hell, this looks like Bitbucket, but purple.

Compare:

Gitlab final prototype: https://about.gitlab.com/images/blogimages/redesigning-gitla...

Bitbucket Server: https://confluence.atlassian.com/bitbucket/files/304578655/5...

The resemblance is uncanny.

10
How Google Wants to Rewire the Internet nextplatform.com
167 points by Katydid  9 hours ago   57 comments top 6
1
komali2 5 hours ago 6 replies      
I really wanted to understand this article. I tried wikipediaing and googling some of the things I didn't really get (Jupiter is a thing... ok and Andromeda.. riiiight). Then I got to the chart "Conceptually, here is how Espresso plugs into the Google networking stack:", which was totally unparseable by me. All the green things look the same, but one of them is the thing this article is about (Espresso, right?), and Google somehow is represented by a vague dark-grey blob... I just don't get it.

Can anybody help? Am I simply not technically competent enough to consume this article yet?

2
emersonrsantos 6 hours ago 1 reply      
> But running a fast, efficient, hyperscale network for internal datacenters is not sufficient for a good user experience

It will never be sufficient. A good backbone infrastructure doesn't compensate for the fact that the majority of users don't have ISP choices, especially for high-speed fixed/mobile networks.

3
deegles 8 hours ago 5 replies      
"one out of four bytes that are delivered to end users across the Internet originate from Google"

Such a mind blowing statement. Wonder when (if) they'll hit one-in-three bytes.

4
eru 6 hours ago 0 replies      
Somewhat related: Google's efforts to speed up TCP.

"BBR: Congestion-Based Congestion Control" http://queue.acm.org/detail.cfm?id=3022184

5
konpikwastaken 8 hours ago 2 replies      
Can someone ELI5 the difference between this and https://azure.microsoft.com/en-us/services/expressroute/? Is the technology principle the same?
6
zzzcpan 7 hours ago 3 replies      
I don't know; it feels like a massive waste of resources, as if Google is doing it simply because it can. It's probably much cheaper for everyone else to handle latency/throughput problems on the client side and at the application level, sticking to all the traditional networking but not relying on it for quality. Even in the web browser we can already send all kinds of asynchronous requests to multiple servers in multiple datacenters, choosing the fastest response and making all kinds of decisions about where to send requests dynamically in real time (see the sketch after this comment).

And while I agree about overcomplicated routers and box-centric thinking in computer networks, it's pretty much impossible to change things because of the monopolistic nature of the ISP industry. They are very far from competing on the levels of quality where SDN could matter.
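
A minimal sketch of that client-side "race the replicas" idea in Python; the endpoint URLs are placeholders, and a real client would also cancel or deduplicate the slower requests:

  # Issue the same request to several endpoints and keep whichever
  # answers first. The endpoints are hypothetical placeholders.
  from concurrent.futures import ThreadPoolExecutor, as_completed
  import urllib.request

  REPLICAS = [
      "https://us-east.example.com/api",
      "https://eu-west.example.com/api",
      "https://ap-south.example.com/api",
  ]

  def fetch(url, timeout=2.0):
      with urllib.request.urlopen(url, timeout=timeout) as resp:
          return url, resp.read()

  with ThreadPoolExecutor(max_workers=len(REPLICAS)) as pool:
      futures = [pool.submit(fetch, url) for url in REPLICAS]
      for future in as_completed(futures):
          try:
              url, body = future.result()
              print("fastest response came from", url)
              break  # ignore the slower replicas
          except Exception:
              continue  # that replica failed; wait for the next one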

11
Myspace lets you hijack any account just by knowing the persons birthday leigh-annegalloway.com
10 points by happy-go-lucky  1 hour ago   3 comments top 2
1
thinkfurther 8 minutes ago 0 replies      
I remember a time when you could embed JS and CSS in the forums. I never went further than seeing if I could steal my own login cookie (being new to JS, I was sure I just had to have overlooked something) and change posts of a user without that user seeing the change, haha (the test user also being myself, in some god-forsaken part of the forum nobody used). Then I backed off that stuff for fear of being banned and made little "utilities" like expanding text boxes, and pretty stylesheets of course. There was just nooooobody paying attention, I can absolutely vouch for that.
2
rosariotech 1 hour ago 1 reply      
Does MySpace still exist?
12
Strange Signals from the Nearby Red Dwarf Star Ross 128 upr.edu
92 points by r721  9 hours ago   19 comments top 6
1
pavement 2 hours ago 1 reply      

 V* RY Sex
How this particular red dwarf came to obtain such a name gives me an inkling that whoever did it must have known the effect it would have on search engines.

http://simbad.u-strasbg.fr/simbad/sim-id?Ident=V*+RY+Sex

3
somedangedname 6 hours ago 1 reply      
It's worth pointing out that this is a different star from the dimming / 'Dyson sphere' star KIC 8462852, which was covered in the press earlier in the year.

All very exciting!

4
noahdesu 3 hours ago 0 replies      
It seems like there is a more recent update [0] on the topic than the linked-to page, but it is really technical and perhaps just a data update.

[0]: http://phl.upr.edu/library/notes/barnard

5
SubiculumCode 6 hours ago 0 replies      
I'm no astronomer, but interstellar mysteries are cool...so are stellar ones.
6
c3534l 48 minutes ago 0 replies      
It's not aliens.
13
Facets: An Open Source Visualization Tool for Machine Learning Training Data googleblog.com
116 points by stablemap  9 hours ago   4 comments top 2
1
jxramos 5 hours ago 0 replies      
Very impressed to see the confusion matrix consist of the actual images in that deep zoom style rendering. We've implemented something similar in spirit in some image processing machine learning application but instead I have a traditional confusion matrix with counts that are "<a>" anchor links to a webpage that displays all the constituent images. Nice work Facets team.

I particularly like this language here: "Dive is a tool for interactively exploring up to tens of thousands of multidimensional data points, allowing users to seamlessly switch between a high-level overview and low-level details. ...Dive makes it easy to spot patterns and outliers in complex data sets." https://github.com/pair-code/facets#facets-dive

That's key functionality to drill into our data with powerful navigable dashboards and visualization tools. We're creating this seamless transition with some Python and Flask and Bokeh tooling, but nothing as impressive as Facets. We've cued in all the domain-specific things of interest, but it's nice to see a general-purpose feature set on display with Facets.

2
canada_dry 3 hours ago 2 replies      
This looks amazing.

And... I keep waiting for MS to provide an add-in to Excel that will allow ML analysis and similar visualization.

Even better, someone beat MS to it and do one for Libre Calc.

14
Formal verification of the WireGuard protocol wireguard.com
133 points by zx2c4  10 hours ago   16 comments top 5
1
zokier 6 hours ago 1 reply      
First of all congratulations, I do believe that this is good step forwards.

I took a look at the tamarin model, and at least for me it looks pretty much impenetrable (no surprise there). Is it realistic to think that (any?) implementors can use the proven model as the primary reference when implementing the protocol? Especially if you'd strip away the comments, which are not proven to be correct and as such might be misleading?

2
shock 9 hours ago 3 replies      
I'm not extremely experienced in formal verification methods, but isn't it usually an implementation that's formally verified? I understand that verifying the protocol is a big deal, but if the protocol is not correctly implemented you can't count on any of the promises the protocol makes.
3
mzs 8 hours ago 0 replies      
A bit off-topic, but anyone have info about how the move to BSD-compat license went?
4
nh2 9 hours ago 1 reply      
Could you have chosen other provers than Tamarin (which is 100% Haskell for people interested in that) for this job?

If yes, what made you prefer it?

5
KenanSulayman 8 hours ago 2 replies      
Talking about WireGuard... I was just trying to build it a few hours ago, but can't get it to build against a 4.2.0 kernel :/

Looking at the primitives, isn't WireGuard effectively using the Noise protocol?

15
Alienation 101: On Chinese Students in the American Midwest 1843magazine.com
15 points by sohkamyung  2 hours ago   5 comments top 4
1
aphextron 11 minutes ago 0 replies      
This problem is even worse in the UC system. All STEM-related classes are easily >50% Chinese international students. They are extremely insular and hang out/study exclusively with other Chinese students. It wouldn't be so bad if it didn't mean effectively cutting in half the fellow students you can turn to for support in a class. They should really force the point of integration, rather than using these kids as paychecks to the detriment of local students.
2
goobynight 4 minutes ago 0 replies      
It's not alienation; it's isolation. Trust me. I even asked a few.
3
analyst74 26 minutes ago 0 replies      
The rich Chinese kids probably want to make some American friends, but they also expect a high social status where people cater to their needs and conveniences.

But the American kids only know how to deal with foreigners who admire the American way and are looking to assimilate into American culture; plus, catering to foreigners is not really in the dictionary of the mightiest country.

Talk about expectation mismatch.

4
aphextron 23 minutes ago 1 reply      
>SORRY, YOU NEED TO ENABLE JAVASCRIPT TO VISIT THIS WEBSITE.

Sorry, no I don't.

16
Time Is Contagious nautil.us
71 points by dnetesn  7 hours ago   8 comments top 4
1
loudin 3 hours ago 0 replies      
This study supports the notion that we are all truly interconnected in ways that are beyond our conscious comprehension.

It's an important reminder we do not exist in a vacuum and our actions have an indirect impact on a large number of people.

2
kornish 5 hours ago 1 reply      
> "The effectiveness of social interaction is determined by our capacity to synchronize our activity with that of the individual with whom we are dealing," Droit-Volet writes. "In other words, individuals adopt other people's rhythms and incorporate other people's time."

Reminds me of all the other mimicry humans will unconsciously engage in, like mirroring body language or even subtly mirroring an accent in conversation.

3
wonderous 4 hours ago 0 replies      
The best example to me is how population density impacts the use of time:

https://www.citylab.com/life/2012/03/why-people-cities-walk-...

4
DiabloD3 4 hours ago 2 replies      
How many people on HN have decided to pay the Nautilus subscription?
17
Death of a Pig (1948) theatlantic.com
75 points by samclemens  7 hours ago   22 comments top 8
1
davis 5 hours ago 0 replies      
Wow. I was just listening to Conversations with Tyler with Jill Lepore[1] this morning. She mentioned this article at the end:

COWEN: Final question. The world of social media, we all know it's not going away. Maybe it has some problems, but if you were to give a student or a person some piece of advice or intellectual ammunition to carry with them through this world (some book, some essay, some thought) so as to make it marginally better rather than marginally worse, what would that be?

LEPORE: Read this E. B. White essay called "Death of a Pig."

COWEN: And what does he tell us in "Death of a Pig"?

LEPORE: A pig dies on his . . . He was in Maine. He's trying to understand what it means when something dies when you didn't expect it to die and you couldn't save it, and I just find it a very beautiful essay. But I think something is dying, and we can't save it, and that's a good place to start, to figure out how to feel about that.

[1]: https://medium.com/conversations-with-tyler/tyler-cowen-jill...

2
pmoriarty 3 hours ago 0 replies      
This reminds me of how recently an activist was charged with criminal mischief for giving water to thirsty pigs on the way to a slaughterhouse.[1] (She was found not guilty.[2])

[1] - http://www.cbc.ca/news/canada/hamilton/news/animal-activist-...

[2] - https://www.washingtonpost.com/news/animalia/wp/2017/05/05/j...

3
jeffwass 4 hours ago 0 replies      
For those that may not click on title alone, this essay is by E. B. White, author of Charlotte's Web, Stuart Little, and coauthor of Strunk & White's The Elements of Style.
4
aptwebapps 38 minutes ago 0 replies      
Compare and contrast with Mona, by James Taylor. Superficially similar subject, but very different effect. E. B. White certainly could make every word tell.
5
abetusk 4 hours ago 6 replies      
I have to say I really don't understand this story. For those of you who don't want to read it: it's a 1948 story about a farmer (presumably E.B. White?) who tries to save his pig, which has fallen sick before its scheduled slaughter, and the pig dies (from erysipelas, presumably).

The narrator seems to have a lot of investment in the pig's health:

 The pig's lot and mine were inextricably bound now, as though the rubber tube were the silver cord. From then until the time of his death I held the pig steadily in the bowl of my mind; the task of trying to deliver him from his misery became a strong obsession. His suffering soon became the embodiment of all earthly wretchedness.
The final few sentences conclude with

 I have written this account in penitence and in grief, as a man who failed to raise his pig, and to explain my deviation from the classic course of so many raised pigs. The grave in the woods is unmarked, but Fred can direct the mourner to it unerringly and with immense good will, and I know he and I shall often revisit it, singly and together, in seasons of reflection and despair, on flagless memorial days of our own choosing.
On a superficial reading, there might be some confusion about the narrator's empathy, but the narrator clearly understands what the conclusion for a healthy pig is:

 I had assumed that there could be nothing much wrong with a pig during the months it was being groomed for murder; my confidence in the essential health and endurance of pigs had been strong and deep, particularly in the health of pigs that belonged to me and that were part of my proud scheme.
So it's understood that in the best of cases, the pig is being cared for so that it can be murdered for food.

Is the sense of loss one of not conforming to a rigid script that society sets out? Is it because the narrator has a genuine sense of empathy but just ignores the fact that they'll slaughter and eat the thing they have empathy for? Or is it a statement that we are all eventually bound for a soulless premeditated murder and the only thing we can hope for is a comfortable prison before the time comes?

Or am I just expecting too much introspection from someone who hasn't examined their own motives and actions?

6
tptacek 5 hours ago 0 replies      
That's a hell of a last sentence.
7
bonzi_buddy 57 minutes ago 1 reply      
I loved the emotional paradox here and all, but why can't you eat a pig that died from some natural cause?
8
vool 5 hours ago 1 reply      
some pig
18
Cybersecurity Humble Book Bundle humblebundle.com
314 points by ranit  9 hours ago   72 comments top 12
1
dsacco 8 hours ago 8 replies      
So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.

_________________

Do Read:

1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.

2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.

3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.

4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.

5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.

_________________

You Can Skip:

1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of this book by reading online resources and by honestly having common sense. A lot of this book is infosec porn, i.e. "Wow I can't believe that happened." It's not a bad book, per se, it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.

2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.

3. The Art of Deception - See above for Social Engineering.

4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.

_________________

What's Not Listed That You Should Consider:

1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.

2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.

3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.

4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.

5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.

6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.

2
EnFinlay 8 hours ago 5 replies      
Is there a legal / not crazy expensive way to buy humble bundle books and get them printed on standard 8.5x11, bound in a series of binders / duotangs / twine? I'm going to buy the bundle, but greatly prefer physical pages to reading on a screen.
3
twoquestions 7 hours ago 0 replies      
Great, now there's another collection of books I'll want to read, then feel bad about missing the deal for, then kick myself for never actually reading in depth.

I think I've bought 50 books from Humble Bundle (spending about $1/book), but I've only cracked open a few of them.

Also thank you dsacco for the recommendations!

4
mr_overalls 8 hours ago 2 replies      
Schneier's "Applied Cryptography" by itself justifies the $15 bundle, IMHO. This is a great deal.
5
Tepix 5 hours ago 2 replies      
I use 2FA on Humble Bundle. In order to log in, I have to solve several captchas.I then have to solve more to buy stuff.

All in all I have to solve the captcha 5 times or so, each time involves marking multiple images.

What sense does this make?

Either they trust the captchas (then they only need one), or they don't (then they should remove them). I've complained about this to them in the past but they haven't changed it.

6
kirian 7 hours ago 0 replies      
I find this offering ironic - "Bitcoin payments have been disabled for the Humble Book Bundle"
7
dronemallone 2 hours ago 0 replies      
Security Engineering is free on the author's website :) http://www.cl.cam.ac.uk/~rja14/book.html
8
znpy 7 hours ago 0 replies      
Remember to choose a charity entity for your donation!

ProTip: entities like the FSF, the EFF, Wikimedia and many others can be helped via the humble bundle!!

9
nonamechicken 8 hours ago 5 replies      
I am interested in learning more about securing web servers (nginx, Node.js). Is there a book in this bundle that could help me? If you know any good books, please recommend one.
10
komali2 7 hours ago 0 replies      
Fantastic, glad to have more reading to prep for defcon!
11
SadWebDeveloper 8 hours ago 1 reply      
CEH v9 at the 15 USD bundle level is quite a joke; IMHO it should go to the 1 USD level. But anyway, as someone said, Applied Cryptography might be the selling point here.

Personally speaking, the only valuable books in this bundle are "Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation" and "Applied Cryptography: Protocols, Algorithms and Source Code in C, 20th Anniversary Edition"; the others are either quite outdated, too oversimplified, or script-kiddie-level stuff.

12
gergles 9 hours ago 1 reply      
"Pay what you want^"

^As long as it's at least $15.

It bothers me that Humble Bundle has so heavily embraced this type of marketing.

19
Maryam Mirzakhanis Pioneering Mathematical Legacy newyorker.com
75 points by anthotny  4 hours ago   20 comments top 5
1
therajiv 42 minutes ago 0 replies      
I knew Maryam was incredible the first time I heard her give a talk at Stanford. To my fellow cancer researchers out there: this is about as good motivation as we'll ever have to keep plugging along.
2
adrianratnapala 34 minutes ago 1 reply      
So I am trying to understand what her work was about. It seems the Fields Medal was for "the dynamics and geometry of Riemann surfaces and their moduli spaces" (my emphasis).

What is dynamics to a geometer? Clearly it is important -- because it is important in physics. But I want to understand what mathematicians are getting excited about.

We can talk about a family of geometrical objects foo(t) where the parameter t is a single real value, which stands in for time. But -- other than its relation to the physical world -- why is that $t$ parametrisation important? Why not complex parameters, or multiple parameters, or something completely different?

3
c517402 1 hour ago 1 reply      
The article does a very poor job of describing her work. Does anyone have links to something understandable?
4
wfunction 3 hours ago 5 replies      
Still bothers me that we didn't get a black bar. Can someone explain what the criterion is?
5
racl101 38 minutes ago 2 replies      
"this is about as good motivation as we'll ever have to keep plugging along."

So, other people dying of cancer not as good motivation?

21
Get eyes in the sky with your Raspberry Pi alexellis.io
124 points by alexellisuk  11 hours ago   21 comments top 5
1
pdelbarba 8 hours ago 2 replies      
I use one of these (in a way) whenever I fly. I have an RPi running Stratux with a WAAS GPS receiver and two modified RTL-SDR dongles, one for each ADS-B band. It then rebroadcasts all this info over a wifi network that my iPad picks up and displays with ForeFlight. The whole thing sans iPad was ~$100, is super reliable, and fits cleanly in a small case. The commercial version (Stratus 2s) is $900 and has reliability issues.

On a side note, these only pick up aircraft that are broadcasting ADS-B, which is most commercial aircraft and a minority of commuter/personal planes. This will change at the beginning of 2020, when the FAA will mandate it for all aircraft (or at least for anyone planning to ever operate in a Mode C ring / near a major city)

2
jjwiseman 6 hours ago 0 replies      
Once you've gone to all the work of setting this up, you need to upgrade from the rudimentary dump1090 web interface. The best open source web UI I know of is Virtual Radar Server, which I run on my Raspberry PI using Mono: http://www.virtualradarserver.co.uk/

VRS displays much more information about flights and about your receiver performance, and is more customizable.
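
For anyone who wants more than a web UI, here is a small sketch of consuming the feed directly. dump1090 conventionally serves a plain-text BaseStation (SBS) stream on TCP port 30003, and the CSV field positions below follow the commonly documented SBS layout, so verify them against your build:

  # Tail dump1090's BaseStation feed and print position reports.
  import socket

  HOST, PORT = "localhost", 30003  # dump1090's conventional SBS port

  with socket.create_connection((HOST, PORT)) as sock:
      buf = b""
      while True:
          data = sock.recv(4096)
          if not data:
              break  # feed closed
          buf += data
          while b"\n" in buf:
              line, buf = buf.split(b"\n", 1)
              fields = line.decode("ascii", "replace").split(",")
              # SBS layout (per common docs): hex ident at index 4,
              # callsign at 10, latitude at 14, longitude at 15.
              if len(fields) > 16 and fields[0] == "MSG":
                  icao, callsign = fields[4], fields[10].strip()
                  lat, lon = fields[14], fields[15]
                  if lat and lon:
                      print(icao, callsign or "?", lat, lon)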

3
rb808 9 hours ago 1 reply      
SDR aside - wow, an RPi DVB-T tuner for 12 quid!? Is there an ATSC tuner for the US at a price like that?
4
qubex 9 hours ago 6 replies      
I'm fascinated by the idea and practice of SDR but I find the focus on tracking airliners to be rather boring (for most) and uninspired. Surely we can come up with something more interesting and potentially discovery-orientated, or at least a little more awesome, such as perhaps radio astronomy or somesuch?
5
zeep 2 hours ago 0 replies      
many flights are not required to broadcast this signal....
22
What is Eventual Consistency? concurrencyfreaks.blogspot.com
75 points by ingve  11 hours ago   10 comments top 4
1
Terr_ 8 hours ago 0 replies      
Perhaps due to the title (which is evocative in Seattle) I often think of "Starbucks Does Not Use Two-Phase Commit" [0]

[0] http://www.enterpriseintegrationpatterns.com/ramblings/18_st...

P.S.: I feel like I may have thrown this one out without enough context, so I'm just gonna bloviate a bit in this edit.

The "object" referred to in the OP link -- or what I like to vaguely call the "unit of consistency" -- is a single Starbucks employee. They hopefully have an internally-consistent history of what they believe to be true about the universe. (And stay sane during coffee-rush times.)

The phrase "eventual consistency" describes the relationship between multiple employees. Individuals can drastically disagree with one-another, but there's a framework for detecting disagreements and resolving the discrepancy in some way, even if that just means agreeing to ignore it and logging it for management to fix.

A lot of eventually-consistent systems involve allowing certain kinds of discrepancies to occur, while simultaneously promoting those errors into real business concepts in the domain. Banking and accounting systems are particularly great demonstrations of this, because they started doing it centuries ago, when nodes were in cities connected by ink-and-parchment packets on horse-ridden routes.

2
snarfy 4 hours ago 2 replies      
It's how banks work.

You write a check and buy something. The banks reconcile and determine you don't have enough money, and your account goes negative.

If it were atomic, your account could never go negative.
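
A toy sketch of that difference: two replicas each accept a withdrawal without coordinating, and reconciliation later surfaces the negative balance that an atomic check would have rejected. (The numbers are invented.)

  # Two bank "replicas" accept withdrawals independently, then reconcile.
  class Replica:
      def __init__(self, balance):
          self.balance = balance
          self.log = []

      def withdraw(self, amount):
          # No coordination with the other replica: accept locally.
          self.log.append(-amount)
          self.balance -= amount

  a = Replica(100)
  b = Replica(100)
  a.withdraw(80)  # a check cashed at one branch
  b.withdraw(80)  # another check cashed elsewhere, concurrently

  # Reconciliation: merge both logs into the true account history.
  print(100 + sum(a.log) + sum(b.log))  # -60: the account went negative
  # An atomic (strongly consistent) system would have rejected the
  # second withdrawal instead of letting the balance go negative.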

3
quizotic 5 hours ago 3 replies      
Jim Starkey (founder of NuoDB) is rumoured to have said that any adjective before the word "consistency" is equivalent to the prefix "in" as in "eventual consistency is inconsistency". A bit harsh, and not completely true, but a good warning to a world that increasingly believes eventual consistency is the best way to handle the CAP theorem.
4
kellros 8 hours ago 0 replies      
I'll tell you later :)
23
Bitcoin Is Having a Civil War as It Enters a Critical Month bloomberg.com
334 points by discombobulate  13 hours ago   281 comments top 27
1
buttershakes 11 hours ago 12 replies      
This is a fight for control of Bitcoin. It is business interests on both sides fighting for a position of authority. SegWit2x is an attempt to remove control from the core dev team, which, while technically strong, is full of zealots with questionable motives and terrible management skills. Bitcoin ABC and Unlimited have their own parts to play as factions. It's getting tense, but it's been years in the making. Groups unwilling to compromise on the most basic points. I suspect that SegWit2x will end up taking over the network, but I'd rather see a pure large-block faction like Bitcoin ABC. Either way the core developers are going to lose control of a 40 billion dollar network, possibly one of the biggest fails in modern technology. They will be left on a minority chain which will have little relevance going forward. I've said it before, but anyone who put money into Blockstream has to seriously be wondering what the hell they are doing; their CEO should have been kicked out a long time ago, as he has no relevant experience in actually running an organization and has royally messed it up.
2
shp0ngle 11 hours ago 3 replies      
The basic question is how to scale; off-chain or on-chain. The rest is just theatrics and typical nerdy hyperbole.

One side of the fight (Core / blockstream) wants to scale off-chain, pushing transactions to side-chains and/or lighting networks, and want to profit from off-chain solutions.

The other side of the fight (segwit2x / miners) wants to scale on-chain, making the blocks bigger, and profit from block fees.

Both sides have pros and cons.

Pros of off-chain solutions - more scalable, don't need expensive confirmations for each transaction, more long-term. Cons: the solutions don't exist yet and might be vaporware; segwit etc are just stepping stones.

Pros of on-chain solutions - making the blocks larger can be done now, no need to wait for new software and new networks. Cons - makes the blocks larger, which makes running bitcoin nodes harder. Also cannot scale this way infinitely (you need to keep all the transactions on a disk forever).

The discussion about segwit is in reality just discussion about how to scale, and who profits.

As for me, I don't really care; Bitcoin is inefficient either way.

3
apeace 11 hours ago 2 replies      
The days are counting down to the "Segwit2X" rollout, the idea supported in the "New York Agreement" (NYA)[0].

There is a contingency plan in place should the Core-supported User Activated Soft Fork become activated.[1]

Segwit2X has working code, has been tested in beta, and is now in RC.[2]

Without commenting on the merits of the different approaches, the current situation is thrilling to watch as a spectator. To call it a "Civil War" is not an exaggeration.

[0] https://medium.com/@DCGco/bitcoin-scaling-agreement-at-conse...

[1] https://blog.bitmain.com/en/uahf-contingency-plan-uasf-bip14...

[2] https://lists.linuxfoundation.org/pipermail/bitcoin-segwit2x...

4
arcaster 11 hours ago 2 replies      
After working in the space for about a year, and having been a developer and enthusiast around crypto since the early days of BTC, I can say this "bickering among core devs" is nothing new.

Any press or "talks" that say otherwise are either being influenced with serious bias or are simply reporting false information.

I like DLT tech, however, if bitcoin has shown us anything it's that once you solve the double-spend problem you're still left with an even more grotesque problem of governance.

People poke fun at ETH since it has a "single leader", but Vitalik is more of a back-seat conductor than a "grand leader". Also, most arguments of "Bitcoin being a truly decentralized platform because our devs are decentralized" can easily be defused by vaguely looking into how Blockstream operates...

The political shit-storm being paraded by BTC needs to end soon, we really don't need another 2-3 years of douchey BTC core devs arguing on the internet and bad-mouthing any project that isn't BTC.

5
xutopia 9 hours ago 2 replies      
To me this whole process shows how great cryptocurrencies really are. The process is live, it is public and it is messy.

Compare with how our usual currencies are handled. Behind closed doors with powerful banks or private companies deciding for our governments.

6
rwmj 10 hours ago 3 replies      
If Bitcoin splits, what do you predict would be the effect on holders of bitcoins?

- They have twice as much money (yay!)

- They have twice as much money but the value is split, so it's worth approximately the same.

- One of the branches wins or mostly wins.

- The split does so much damage that some (all?) value of coins is lost.

7
BenoitP 9 hours ago 2 replies      
I'm starting to see a bit clearer on how a fork would pan out:

Miners: Hashing power has little influence. As long as there are miners on two chains that reject each other's transactions, transactions will be processed on both. At first, transaction processing might take a while, but difficulty will adapt. This will create two legitimate currencies. Now everybody in possession of 1 BTC would have 1 BTCa + 1 BTCb.

Exchanges: Little power. They will trade both BTCa and BTCb, and accept commissions.

Trader of goods, in embedded devices: They might have to modify their client to accept both currencies, but they would have to follow the market rates. Otherwise they would have to suffer income loss from people using them to profit from arbitrating the markets.

BTC-rich individuals: They now have 1 BTCa + 1 BTCb. But there is transaction replayability. If they spend 1 BTCa, their BTCb can also get spent the same way, and they lose their BTCb. Chains have a strategic advantage in replaying transactions arriving from the other one because: 1) they get to keep the commission, 2) they ascertain themselves as more encompassing economically (not sure on this one; maybe they want to stay neutral).

Now, BTC-holders can do a wallet-emptying double-spend to 2 different addresses they control on the 2 chains. And, compared to the ones who got their transaction replayed, they have kept both their BTCa and BTCb.

TL;DR: IMHO, come the technical fork, some BTC-holders will be tumbling until they irrevocably acquire their BTCa + BTCb, and use them to make runs on the markets, effectively materializing the economic fork.

----

I'd love the opinion of someone who lived through the ETH-ETC split, especially about the transaction replayability part.

8
Animats 9 hours ago 0 replies      
The scary thing is that the developers want to go from initial release of new code to wide deployment in a few days. This on something where any security flaw can be attacked anonymously and profitably. What could possibly go wrong?
9
rihegher 12 hours ago 0 replies      
Already discussed a few days ago on HN https://news.ycombinator.com/item?id=14758587
10
nfriedly 9 hours ago 3 replies      
What is Bitcoin and friends good for right now besides speculating with and trading for other currencies?

A while back there was a BTC marketplace where, among other things, I spent 1 BTC on a Steam key for the game Portal (a poor trade in hindsight).

But they shut down and the only other place that I can think of that accepts BTC is humblebundle.com - and presumably they convert it to USD right away.

Actually:

> Bitcoin payments have been disabled for the Humble Capcom Rising Bundle.

So, yea, who accepts BTC right now?

11
placeybordeaux 10 hours ago 2 replies      
I just moved ~20% of my crypto holdings from BTC to LTC. The rest I'll likely keep close to 40% of my cryptoholdings in BTC, but move it onto my own wallet. If a fork actually happens, I'd prefer to be in control of the private keys.

It's kind of odd that there is still so much FUD about segwit, as it has already activated on LTC. It hasn't appeared to open any security holes.

12
ihuman 12 hours ago 2 replies      
How is this different from what happened with Bitcoin XT and Bitcoin Classic?
13
badloginagain 11 hours ago 5 replies      
Sorry if this is a stupid question, but why not both? It doesn't appear that the two strategies are mutually exclusive. Is it just that SegWit2x is considered too rushed? Is it just that miners have a vested interest in maintaining influence?

Personally it seems like smart contracts and other similar services beget an ecosystem that could swell the market cap by a significant amount, I assume miners would have a long term goal of doing just that.

As a disclaimer, I own Bitcoin, but I'm definitely a layman and I don't really have a horse in the race. What I'm most concerned about is what these changes will have accomplished when looking back 10 years from now. I'm in BTC for the long term, and this whole thing stinks of petty bias and tribal power plays.

14
hellbanner 6 hours ago 0 replies      
I see a lot of mention of 51% attack; the selfish miner attack could be done with closer to 33%(!):

https://arxiv.org/pdf/1311.0243v4.pdf

https://bitcoin.stackexchange.com/questions/38273/have-bitco...

15
JohnJamesRambo 10 hours ago 1 reply      
Can someone show some math on how much more expensive it would be to run a node if block size is allowed to increase from 1 MB? It sounds like a silly made-up excuse.
17
jgord 1 hour ago 0 replies      
I'm not sure why SegWit is put forward as a "scaling solution". It does make some room by moving signature data out of the main block, which may allow 2.5x as many transactions - but that's a one-time improvement, AFAICT.

The real problem is simply that the blocksize is way too small. At peak daily loads we are trying to put 20MB of transactions into a single 1MB block. Of course the unprocessed transactions pool up in the 'mempool' waiting for the next block, and are eventually cleared later in the day in off-peak times.

The reason they don't just pool up indefinitely, and crash the server, is due to economics - people pay higher transaction fees to get their important transactions into the next block. Miners earn part of their income from those fees, so they put the best paying transactions into the block first.

Most people who might like to use Bitcoin to pay for actual things will balk at paying $3 to send $500, which means fewer people use the system, or they only use it for important big trades - thus, an equilibrium is set up where transaction volume is kept low.

Keep in mind bitcoin blocks occur on average every 10 minutes. A global rate of 3 trans/sec is clearly not a large number for a system used by millions, all across the globe.
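
The back-of-the-envelope arithmetic behind that figure, assuming an average transaction size of roughly 500 bytes (an assumed round number, not a measured one):

  # Throughput of a 1 MB block mined every ~10 minutes.
  block_size_bytes = 1_000_000
  avg_tx_bytes = 500        # assumed average transaction size
  block_interval_s = 600

  tx_per_block = block_size_bytes // avg_tx_bytes   # 2000
  print(tx_per_block / block_interval_s)            # ~3.3 tx/s globally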

Litecoin has the same architecture, but doesn't have this bottleneck problem - they have 3/4 the blocksize, process 4x as frequently and handle less that 1/10th the volume of transactions. So there is no mempool, fees are low etc.

The max blocksize is set to 1MB in code [ think #define or static const ], so increasing it means releasing new software - old versions will not be able to process large blocks, so this means a "hard fork".

I would argue that a blocksize increase is urgently needed and justifies a prudent hardfork - because it is currently preventing Bitcoin from growing. Not only do we need a 2MB block yesterday [ some say 8MB ], but we need a clear block size upgrade schedule for the next few years so Bitcoin can handle steady growth, without the need for many future hardforks.

Blocksize increase over the next few years could yield a 20x to 200x increase in throughput using the current architecture ... this releases the stranglehold on transaction flow and user growth, and buys time to build out all the other nice new technologies that can augment, or scale beyond, the linear architecture of the blockchain.

This issue has been delayed and debated for 2.5 years, so now it really is urgent and people on both sides are pretty angry. Sadly, it's metastasized into an ugly political civil war .. but I think at heart it is a fairly normal engineering issue that could have been resolved routinely. Maybe having a ton of cash riding on your code makes easy choices hard.

18
lamontcg 10 hours ago 0 replies      
From a technical analysis perspective (yeah, I also do palm reading and seances), Bitcoin's chart looks like a perfect triangle continuation pattern.
19
narrator 11 hours ago 2 replies      
This is why I'm bullish on Litecoin. Already has segregated witness, lightning network, low transaction fees, and a low drama community.
20
deletia 8 hours ago 0 replies      
This just in: centralized authorities, worried about the deflation of our fiat bubble and cryptocurrencies' position as the next generation of digital commodities, capitalize with FUD propaganda two weeks before a software patch (proposed last year) is rolled out exactly* in the way it was intended.
21
bleair 10 hours ago 0 replies      
It appears the emotional power of "money" coincides with zealotry. It must be exciting to have your money tied into something that could split. If the split does happen and both forks keep running, won't the world economy of bitcoins simply double? I assume someone will provide an exchange to move to/from bitcoin-zeal vs. segwit2
22
someSven 12 hours ago 0 replies      
I was a bit surprised by the crash; I would have sold and bought back cheaper. But I thought the information was already in the price. I could imagine someone tried to make it crash hard by selling a lot and creating a panic, and failed.
23
xiaoma 12 hours ago 0 replies      
Standard. This is what bitcoin leaders do.
24
faragon 12 hours ago 2 replies      
TL;DR: Bitcoin is popping, and Ethereum is going to be the new bubble.
25
m777z 11 hours ago 3 replies      
With this much uncertainty, paying ~$2000 for 1 bitcoin seems insane to me. But that's why I don't invest in cryptocurrencies; too much volatility for my taste.
26
pteredactyl 11 hours ago 6 replies      
Bitcoin continues to evolve. With that comes growing pains. And internal struggles as it is mostly an open source project.

But for Bloomberg to use a 'civil war' hyperbole signals fear from the establishment. Established capital more specifically. And really, that is bitcoin's biggest threat.

Disclosure: I own bitcoin.

27
gremlinsinc 12 hours ago 1 reply      
If I wanted to grab an altcoin like Nxt/Ardor -- would it be safer to buy bitcoin now, or wait till SegWit, when it could become cheaper? Nxt/Ardor and all cryptos are in a downward spiral now -- because of the coming bitcoin split -- so I feel now's a buyer's paradise.. and Nxt/Ardor chains look very promising from a tech standpoint.
24
A basic Lisp interpreter in R github.com
42 points by juliuste  8 hours ago   13 comments top 6
1
clircle 7 hours ago 0 replies      
Ihaka (co-creator of R) may still be working on a new statistical programming language based on Common Lisp.

https://www.stat.auckland.ac.nz/%7Eihaka/downloads/Compstat-...

2
jordigh 4 hours ago 2 replies      
If you enjoy this sport, there's also Make-a-Lisp (MAL), an ongoing project to write lisp in every programming language:

https://github.com/kanaka/mal

In particular, here's their R version:

https://github.com/kanaka/mal/tree/master/r

3
huac 8 hours ago 0 replies      
From one of the linked issues, a "Lisp-like R": https://github.com/chanshunli/jim-emacs-fun-r-lisp
4
peatmoss 5 hours ago 1 reply      
This is brilliant. It kind of reminds me of hy (https://github.com/hylang/hy) in addition to Norvig's lis.py that the author cites as inspiration.

I feel like lots of folks in the R community secretly or not-so-secretly pine for lisp. My own "someday project" is to implement some portion of R in Racket: "Arket". Of course all the native libraries that have been wrapped in R are the tricky bit.

5
pvaldes 7 hours ago 1 reply      
You can also open a Lisp interpreter and start an R session from there. Probably more efficient.

If you want to open a lisp repl inside R you can just open an R session and write:

system("sbcl", intern=FALSE)

That's all... use intern=TRUE if you want to save the output to an R object

foo <-system("clisp", intern=TRUE)

6
lottin 4 hours ago 0 replies      
R with Lisp syntax. Cool stuff.
25
Nationwide 5G to cover all 24 square miles of San Marino rcrwireless.com
35 points by urahara  8 hours ago   9 comments top 6
1
tgragnato 5 hours ago 0 replies      
Telecom Italia is well known for running noteworthy pilot projects. But the company often fails to make these successes the norm.

For the mobile network:

- the coverage of 4G is ... scarce

> In Italy, almost all of the population can benefit from mobile Internet connectivity services over the 2G network, namely Global System for Mobile Communications (GSM), GPRS (General Packet Radio Service) and EDGE (Enhanced Data rates For GSM Evolution). Next to the full coverage is HSPA (High Speed Packet Access) technology, while the implementation of HSPA + (HSPA Evolution) and 4G LTE (Long Term Evolution) solutions is still to be completed. [https://www.sostariffe.it/news/copertura-rete-mobile-in-ital...]

- the practice of trying to force users into accepting bundles that include online streaming of music and/or movies, subscriptions to the fixed network, newspapers, movie tickets, various amenities is hated by many. [https://www.tim.it]

- not having an internet-only monthly plan is a shortcoming, especially given the exaggerated costs in exchange for a low data cap

For the fixed network:

- having a guaranteed minimum bandwidth is a utopia, and the real upload bandwidth is around 20% of the declared rate

- the depeering has been dictated by anti-competitive strategies rather than technical reasons [http://blog.bofh.it/id_432]

2
DiabloD3 3 hours ago 0 replies      
This is neat and all, but no LTE standard has been tapped for 5G. LTE Advanced Pro (the third generation of the LTE spec) is the closest to having a spec that can work for 5G, but has not yet been chosen for that.

In addition, it is implied that 5G will require usage of 28, 37, and 39 GHz, which LTE Advanced Pro does not currently have profiles for.

What San Marino is doing is building a current-gen 4G network (i.e., 4.9G or whatever they're calling it), allowing as many features of LTE Advanced as possible (including complex MIMO), so in a few years most cell phones will have caught up to make effective use of it.

Also, as a side note, LTE Advanced Pro was introduced in 3GPP release 13 (early 2016), and anything that meets the requirements for 5G will not arrive until release 15 (most likely next year).

3
zitterbewegung 6 hours ago 2 replies      
This looks like a really cool testbed for 5G. No tall buildings to fill with 5G repeaters / femtocells. No bureaucratic hoops for you to jump through. Small enough that deployment wouldn't be that big of a problem. Large enough to actually do real-world tests with phones. Is Nokia (Alcatel-Lucent) doing the deployment? I wouldn't be surprised if they were.
4
blue1 5 hours ago 0 replies      
That's great. Even 3G/4G in San Marino has been rather flimsy so far.
5
Theodores 3 hours ago 1 reply      
Meanwhile, in the Tesla/SpaceX corner, we have space-based broadband just around the corner:

https://hackernoon.com/will-spacex-become-the-worlds-biggest...

With nearby Tesla cars and SolarCity roofs giving your phone 5g with 25ms latency. It could happen. Soon you will be able to buy a VPS in space to halve that.

One day this space based network is going to come online and it will supercede many terrestrial upgrades, making things like the TelCo 5G roll out not needed after all.

6
TokenDiversity 3 hours ago 0 replies      
This is true 5G, not the fake AT&T 5G. I saw neowin.net taking a stance against fakes by clearly labelling AT&T 5G as "fake 5G" in their title. I hope we all do the same until AT&T & co. stop misleading their customers, which leads to eventual confusion for everyone.
26
DCompute: GPGPU with Native D for OpenCL and CUDA dlang.org
98 points by ingve  13 hours ago   27 comments top 4
1
axaxs 7 hours ago 0 replies      
D has always interested me a lot. Sure, it's been around a while, but the community seems rather small, comparatively. That said, that small community produces some really nice stuff - they are up to, what, 3 'official' compilers now? I hope to see its adoption rise now that the reference compiler is open source (IIRC) - the community seems second to none as far as signal-to-noise ratio goes.
2
ptrott2017 8 hours ago 2 replies      
Nicholas Wilson has done an awesome job with DCompute - especially the ability to use D's lambdas and templates when writing compute kernels. It's going to be fun seeing this evolve.
3
gravypod 11 hours ago 6 replies      
I can't wait until compilers can start auto-generating GPU kernels. That will be when GPGPU really takes off for most people whose applications aren't critical enough to spend hours writing these by hand but would benefit from the significant speedup.
4
p0nce 5 hours ago 0 replies      
Interesting. CUDA kernels are plagued by an explosion of entry points and OpenCL C kernels by the lack of meta-programming.
27
Show HN: An etcd backed DHCP server github.com
50 points by lclarkmichalek  6 hours ago   6 comments top 2
1
lclarkmichalek 6 hours ago 4 replies      
I wanted to set up some home automation based on my phone connecting to wifi, but parsing dhcpd's lease file didn't seem cloud native enough, so I implemented this. Surprisingly, it actually works, is concurrency safe (you can legitimately run two of them on the same network, and you'll not corrupt the etcd state. I mean, I pity the clients that have to deal with that network, but eh), and um, is almost useful.

I would not recommend anyone use this for anything other than entertainment.
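
For the curious, a sketch of the home-automation idea, assuming leases land under an etcd key prefix; the key layout, prefix, and MAC address below are invented for illustration, not this project's actual schema:

  # Watch etcd for lease writes and react when a known MAC appears.
  # Uses the python-etcd3 client; prefix and MAC are hypothetical.
  import etcd3

  PHONE_MAC = "aa:bb:cc:dd:ee:ff"
  client = etcd3.client(host="localhost", port=2379)

  events, cancel = client.watch_prefix("/dhcp/leases/")
  for event in events:
      if PHONE_MAC in event.key.decode():
          print("phone joined the network -> trigger automation")
          cancel()
          break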

2
ubercow 3 hours ago 0 replies      
Any reason you picked this over using something like Kea [1]. Because it's fun is an acceptable answer :-)

[1]: https://www.isc.org/kea/

28
Processing Emotions willmeekphd.com
49 points by blaze33  10 hours ago   11 comments top 4
1
unabst 3 hours ago 4 replies      
I'm not a fan of this discipline of managing and processing emotions as if they were opinions or the consequence of our thoughts.

Emotions are facts. This must be the premise. If you've felt them, they've happened already. The question is not "what are you feeling?" but rather, "what are you going to do about what you've felt?"

So as long as you're not confusing what you can change, you're good. You can prepare yourself and educate yourself for when you could feel something next time. But this is also already highly automated. Emotions tend to educate themselves. So I find a lot of this is just trusting yourself, and not overreaching into thinking you can change your past or twist the facts.

If you're sad you're sad. If you're happy you're happy.

It's not so much managing your emotions. It's more about managing the moment, and what you wish to do with yourself. It's about managing the things that make you feel things.

If you hate your job, quit your job. Don't manage the hate.

If you love someone, go for it. Don't manage your love.

And so on.

2
woodandsteel 7 hours ago 1 reply      
This is a good article. I would just add that attending to an emotion and explaining to someone what you are feeling and why often results in a spontaneous process that produces positive changes such as lessening of intensity and change in perspective. This is much of how psychotherapy works, or for that matter just talking things over with a friend.

A good book on attending to feelings is Focusing, by Eugene Gendlin.

3
woliveirajr 8 hours ago 0 replies      
4
johnksawers 6 hours ago 0 replies      
That's a good article - a nice overview. I have a conference talk I give on the same topic called Your Emotional API: https://johnksawers.wistia.com/medias/wnlr918xe7
29
The Stars Are a Comforting Constant nautil.us
54 points by dnetesn  11 hours ago   2 comments top 2
1
paulmd 6 hours ago 0 replies      
The stars are not a constant though. In virtually any suburban area you have substantial amounts of light pollution. We are seeing a pale imitation of what every human living before us saw.

You really have to go quite far off the beaten path to reach a true dark-sky area. Unless you live a 20-minute drive from the nearest town of 250 and an hour away from the nearest town of 100k+, you have some degree of light pollution.

https://www.lightpollutionmap.info

2
grasshopperpurp 7 hours ago 0 replies      
Some good moments throughout, and 'How to search for aliens' is my favorite of the group. The comparison POVs between her and her mom and between the claustrophobic vigil and the vastness of space worked well for me. She projects more than I prefer with lines like, '. . . Four point five billion years / since genesis and the sky still hovers / like a veil between us and space, / wanting to be lifted before the unintelligible . . .' But that's a matter of preference. And, with the 4.5 B years, she brings in another fun comparison - the shortness of human life relative to the big-picture timeline - without being explicit or trite. Thanks for sharing!
30
How Uber's Hard-Charging Corporate Culture Left Employees Drained buzzfeed.com
58 points by smb06  4 hours ago   32 comments top 10
1
splitrocket 12 minutes ago 0 replies      
As one former employee said, explaining why he joined the company, it "seemed like a libertarian playground where the best would rise to the top." But, he said, "I quickly realized that environment also means work becomes a blood sport."

A libertarian is someone who is hell bent on discovering exactly why and how societies choose to govern themselves, the hard way.

2
lloydde 3 minutes ago 0 replies      
> In college I took several business classes, and one was about Southeast Asian business. The professor said ... there's this spectrum of stress level. You want workers to be as stressed as possible, but not over the line...

Anyone have links/references that name and describe this business style?

3
cbanek 2 hours ago 2 replies      
While Uber seems to be the current whipping boy of Silicon Valley, I wish I could say that this article would sound different if I replaced Uber with any other big tech company.

Getting yelled at constantly, usually with profane language? On call for weeks at a time with no help? Put in a double bind by management? Put on an impossible task, or a task made impossible? Stack ranked that you don't drink enough? Staying around late trying to look productive?

I wish I could say any of those weren't ubiquitous in SV.

4
blizkreeg 57 minutes ago 0 replies      
I've heard that the hard-charging culture at many investment banks and law firms is similar. People work ultra-long hours, because if you don't, there's always someone willing and ready to replace you. Associates at law firms routinely have billables of 2000+ hours a year. Bankers are often in at 7am and leave at midnight. Long hours seem to be par for the course. So then, why are we any different from law firms and banks like GS?

I strongly believe that this kind of toxic culture has no place in any organization. So I'm in no way condoning the culture at Uber or similar SV companies. I for one, never want to work at a place like that. I am curious though, what makes us different?

5
psidium 36 minutes ago 0 replies      
Holy crap. I've seen people get hard looks for even suggesting the project's team should stay after the 8-hour shift. Only once did a manager ask me to stay, and he dreaded it, saying he was really sorry he had to ask me that at all. And they paid me double for the time I stayed, and my performance review skyrocketed because I came in when I didn't need to.

But I make about 15k USD (if converting currencies) so there's that haha.

6
sigsergv 5 minutes ago 0 replies      
I think Fincher should make a movie about Uber.
7
thisrod 2 hours ago 4 replies      
> In May 2015, an on-call engineer failed ... At the time, Uber had recently reached a valuation of $50 billion.

I find that astonishing. Outside Silicon Valley, I think that it would be quite unusual to leave $50 billion of plant running overnight, but pinch pennies by not paying anyone to stay up and check the oil levels during the dog watches.

8
continuations 1 hour ago 2 replies      
> But at a company with more than 15,000 people

Uber has over 15,000 employees? That seems a lot. That's almost the same headcount as Facebook. Why does Uber have so many people?

9
linkregister 2 hours ago 0 replies      
This seems fairly accurate based off of the accounts of friends and colleagues who have worked at Uber.
10
dingo_bat 58 minutes ago 0 replies      
Awww... how sad! Highly paid employees had to work hard!