hacker news with inline top comments    .. more ..    19 Oct 2016 News
TDD has little or no impact on development time or code quality neverworkintheory.org
257 points by joatmon-snoo  4 hours ago   175 comments top 46
jdlshore 2 hours ago 11 replies      
This study, like most software development studies I've seen, is seriously flawed. It doesn't justify the sensational title here on HN.

* The sample size was tiny. (20 students)

* The participants were selected by convenience. (They were students in the researcher's class.)

* The majority of participants had no professional experience. (Six students had prior professional experience. Only three had more than two years' experience.)

* The programming problems were trivial. (The Bowling Kata and an 'equivalent complexity' Mars Rover API problem.)

Maybe, maybe you could use this to draw conclusions about how TDD affects novices working on simple algorithmic problems. Given the tiny sample size and sampling by convenience, I'm not sure you can even draw that much of a conclusion.

But it won't tell you anything about whether or not TDD impacts development time or code quality in real-world development.

geerlingguy 3 hours ago 7 replies      
Could it be that TDD vs. tests-after-code is a highly personal thing? I personally find it easier to write good tests after I've coded something functional. Beforehand, I have one or two fuzzy ideas of what I want to accomplish, but I can't list out the concrete, real-world test scenarios until after I've coded something, poked and prodded it, etc.

But I know some people are wired differently; they'll think a lot more about scenarios first, then code after they have everything accounted for. For them, TDD as a philosophy seems more fitting.

I think the chasm exists between _untested_ code and code that has tests. I've never understood the seemingly-religious zealotry behind TDD as an XP practice. Just like pair programming... if it works for you and your coding style, awesome. But don't force it down my throat or act like it's the One True Path to clean code.

tspike 3 hours ago 4 replies      
My experience has been that TDD is worthwhile when working with notoriously slippery whack-a-mole functions like handling time or money. The time saved by catching regressions vastly outweighs the time taken to implement the tests.
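A sketch of the kind of regression test this buys you for money code (a toy example; `split_evenly` and its behavior are hypothetical, not from the study):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy example of "slippery" money logic worth pinning down with tests:
// amounts are integer cents (no floating-point drift), and split_evenly
// divides a total across n payees without losing a cent; the remainder
// is handed to the first payees one cent at a time.
std::vector<std::int64_t> split_evenly(std::int64_t total_cents, int n) {
    std::vector<std::int64_t> shares(n, total_cents / n);
    for (std::int64_t i = 0; i < total_cents % n; ++i)
        shares[i] += 1;  // spread the leftover cents
    return shares;
}
```

Once `assert(split_evenly(100, 3) == std::vector<std::int64_t>({34, 33, 33}))` is in the suite, the rounding behavior can't silently regress.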

In contrast, TDD has been a waste of time for me for UI-based work, as the effort needed to properly expose the functionality under test is too great and the requirements and design change too quickly to be worth it.

In the latter case, writing some deterministic UI tests against mock data after the requirements and implementation have settled has proven much more effective in preventing regressions.

EdSharkey 2 hours ago 3 replies      

Management fads like "fire your QA team," "the dev team is the level 2 production support," and "get to continuous integration nirvana" have been sweeping through my Scrum enterprise for the last 18 months.

Teams that aren't testing constantly, well, they've got tons of escape defects on every release. Those devs are constantly in fire-fighting mode; it's miserable for them. And I see that leading to compressed schedules for them and more reckless behavior, like asking to push their releases during the holidays, when there could be severe financial consequences to bugs.

As far as I'm concerned, in an environment like mine, where developers can no longer hide their incompetence behind bureaucracy like a QA team, it is official insanity to not spend inordinate amounts of development time writing automated tests. You should be spending 70% of your dev time writing tests and doing devops and 30% writing features.

I read in these comments a lot of bellyaching about how much time it takes to write tests. First, TDD is a skill that you can get good at, and it won't take as much time as you think once you get good. Second, I just don't think you have a choice to not test comprehensively when escape defects become a mark of shame in the organization.

donw 3 hours ago 2 replies      
Am I correct in reading that they performed this experiment only for two days, and entirely with graduate students?

If so, they have missed the point of TDD.

In the short term, TDD probably doesn't make a difference, one way or another.

But software as a business is not a short-term game.

I would love to see a study where the participants are, over a period of six months, given the same series of features (including both incremental improvements, as well as major changes in direction).

In my experience, teams that don't test at all quickly get buried in technical debt.

Untested code is nigh impossible to refactor, so nobody ever does, and the end result is usually piles of hacks upon piles of hacks.

As far as testing after development goes, there are three problems that I see regularly:

One, tests just don't get written. I have never seen a TLD (Test Later Development) team that had comprehensive code coverage. If a push to production on Friday at 6pm sounds scary, then your tests (and/or infrastructure) aren't good enough.

Two, tests written after code tend to reflect what was implemented, not necessarily what was requested. This might work for open-source projects, where the developers are also the users, but not so much when building, say, software to automate small-scale farm management.

Three, you lose the benefit of tests as a design tool. Code that is hard to test is probably not well-factored, and it is much easier to fix that when writing tests than it is to change the code afterwards.

haalcion3 3 hours ago 2 replies      
This is a misleading title and conclusion. The study showed a huge benefit of TDD over Waterfall, and it is only when compared to ITL that it was found to not be better.

But moreover, I think it's important to understand why Beck pushed for TDD.

TDD is like saying "I'm going to floss before I brush every time, no matter what."

But, when people don't do TDD they typically aren't all saying "I'm going to brush and floss afterwards every time, no matter what."

Instead, most say "I'll floss regularly at some point, but I don't have time now, and it takes too much effort. I'll floss here and there periodically, maybe before my monthly meeting or big date night."

Another reason Beck pushed for TDD was method and solution complexity reduction, which results in lower time and cost for maintenance because the code is simpler to read and understand. Again, with ITL, you're still writing tests for everything, so you'll see those benefits. However, if you fail to write some or most tests, some developers will write overengineered solutions and overly long, difficult-to-follow methods that will make maintenance consume more resources.

If you want to go beyond this study, though, Beck, Fowler, and DHH had a critical discussion about TDD in 2014 that's worth checking out:


johan_larson 33 minutes ago 0 replies      
The questions worth asking about techniques like TDD are "What problems does it fix?" and "What problems does it introduce?"

I would expect a determined attempt at TDD to solve the "no tests" problem, because it is so utterly insistent on tests. It should also solve the "don't know how to start" problem, because it de-emphasizes planning and design in favor of just jumping in; you write the tests, and then you do the bare minimum to make them pass.
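That jumping-in loop can be sketched as a red/green pair (a toy illustration; `fizz` is a made-up example, not from the article):

```cpp
#include <cassert>
#include <string>

// Hypothetical red/green sketch: the assertions in test_fizz() were
// (notionally) written first and failed until fizz() existed; fizz() is
// then the bare minimum needed to make them pass.
std::string fizz(int n) {
    if (n % 3 == 0) return "fizz";
    return std::to_string(n);
}

void test_fizz() {
    assert(fizz(3) == "fizz");
    assert(fizz(4) == "4");
}
```

The tests exist before the function compiles; the function then does only what the assertions demand.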

That said, I would expect a TDD-based project to have the "bad architecture" problem: messy interfaces and sort of ad-hoc separation of concerns, because it makes no time for up-front analysis and design. It's always focused on the current feature and doing whatever it takes to make it work now.

In fairness, it does include a refactoring step, which is supposed to clean up the mess after the fact. Color me skeptical. Refactoring is hard, and people tend to do it on a large scale only when they have to.

ckastner 23 minutes ago 0 replies      
I think the top-most comment on that page states a counter-argument that nails it:

Here's my hypothesis, based on personal experience: the benefits of TDD begin to manifest when they are applied at scale. During design and development, if a single developer can plausibly understand an entire system in their head, the benefits of TDD (and, in fact, unit testing) are negligible. However, there's a non-linear benefit as systems become larger, particularly in the diagnosis of large and complex system failures.

inglor 3 hours ago 2 replies      
There is a problem with all these studies - they all use a very small number of programmers (21 in this case) with no experience (all graduate students in this case) and presumably no significant experience with TDD or TLD.

I'm not making a stand about TDD here - I just think we need to have much better computer engineering science studies if we want to have significant results.

defenestration 3 hours ago 0 replies      
The title suggests that TDD has little or no impact on dev time or code quality at all.

The research shows no significant difference between TDD and iterative test-last (ITL) development.

Could the title be updated to show that it is a comparison of TDD vs. ITL/TLD?

iUsedToCode 16 minutes ago 0 replies      
The research seems low quality. Whenever I try creating something more complex than just a CRUD webapp, I'm always relieved after getting significant code coverage.

It may be because I'm a mediocre programmer (I mostly do hobby projects), but getting assurance that my 'small change here' didn't mess up anything major in a distant part of the system is quite relaxing.

Obviously I only test logic, and I usually write the tests after coding. It still helps with my flow.

shade23 2 hours ago 1 reply      
I have spent almost 3 years now writing code (with very few or no tests), and my current organization stresses agile practices a lot. I encountered TDD here, so I would like to chip in too.

TDD solved a major problem for me which I have seen a lot of people suffer with: _Where do I start?_ The thing is, TDD and refactoring go hand in hand; I cannot imagine doing TDD without an IDE like IntelliJ or something. When you start by writing the code first (typical TLD), you need to have a plan beforehand, and this plan cannot change much, because you really do not get feedback until you complete major segments of the code. TDD ensures you keep getting nibble-sized pieces of feedback which assure you that what you are writing works. This, to me, is the single most beneficial point of the system.

TDD or TLD would allow maintainable code too, and often while doing TLD you can still strictly follow TDD. It might not have an impact on code quality for seasoned developers (coding for years on the same codebase), but it does help the others, and it also reduces my inertia considerably. So while it might not have an impact on development time or code quality, I tend to sleep well, without large UML diagrams floating in my head, knowing that each unit of my code works independently.

lowbloodsugar 3 hours ago 0 replies      
"TDD has little or no impact on development time or code quality when compared to the equivalent number of tests implemented afterwards using TLD."

FTA: In this paper we reported a replication of an experiment in which TDD was compared to a test-last approach.

Very different title.

supersan 3 hours ago 1 reply      
Up until now it has mostly been opinions and biases, and even though many popular programmers[1] have been saying this for a very long time, it's great to see a controlled study done about it.

This makes it a fact and a great counter-argument to help a lot of programmers who are being forced to practice TDD because of the generally accepted productivity and code quality claims associated with it.

[1] http://david.heinemeierhansson.com/2014/tdd-is-dead-long-liv...

sayrer 2 hours ago 0 replies      
In "Realizing quality improvement through test driven development: results and experiences of four industrial teams", an MSR researcher found that TDD did reduce defects in his study, but also came at a large cost in time-to-ship.


This finding contradicts the headline. TDD impacted both development time and code quality in that study.

sebringj 2 hours ago 0 replies      
I'm not commenting on the TDD studies in terms of its effectiveness but I do know that a project that takes longer brings more programming hours which results in larger budgets. If you were a company selling your services, you would be a bit more motivated to include things that take longer especially if this tugged at the emotional sense of assurance in your clients. You would also preach it to your programmers as a core practice and they would happily be converts. This goes for all the structure surrounding your project as well. I tend to see more structure in outsourcers these days and a smugness along with it. I wonder how much of it is bloatware though.
NumberSix 3 hours ago 0 replies      
Software development varies enormously. Flight avionics software differs from video game software differs from a spreadsheet differs from an order-entry system differs from laboratory analysis software differs from a web browser and so on. Flight avionics differs from a commercial jet liner to a fighter plane to a model airplane. Some projects have huge budgets and others have shoestring budgets. Some projects require extremely high reliability and quality; cost is not an issue. Other projects can be quite buggy, low quality but still useful -- cost effective.

Developers vary as well. Some temperamentally find something like TDD useful. Others do not.

There is no one software development methodology to rule them all.

jjp 2 hours ago 0 replies      
This is an editorialised title. The blog posting has the boring title "Test Driven Development". The blog posting and the paper it fronts conclude that there is no significant difference between TDD and iterative test-last (ITL) development, which is quite a bit different from "TDD has little or no impact on development time or code quality".
Rapzid 2 hours ago 0 replies      
My default is to not write many tests at all during the experimental, build-out phase. I'm not looking for exact or bug-free software; I'm trying out different APIs, aggregates, and architecture in general. Needing to refactor tests every time I want to make a drastic change is... Well, you know. As somebody else pointed out, this architectural stuff is probably actually much harder to nail down than just writing code that works. This is not limited to the very initial build-out but could apply to big refactors as well.

After and during the experimental phase, it depends. Both before and after, I may write tests first or "test with" for gnarly logic or algorithm-y stuff. Otherwise, and in addition, I do copious amounts of manual testing. Manual testing is a must for much of what I do, so I augment or substitute automated testing as appropriate. Automated testing is great, but sometimes the overhead is too expensive.

rhizome31 49 minutes ago 0 replies      
As a TDD advocate, and assuming this study has any scientific validity, this is actually good news! There's a very common claim that TDD makes you less productive. It's good to have some study to oppose this claim.
namuol 2 hours ago 0 replies      
- Population: A classroom of students, most without professional experience

- Sample size: 21 students

- Study duration: 2 days

- Team size: Individual

Tests are most useful when refactoring someone else's long-forgotten code; the sort of thing that happens frequently in long-running projects consisting of large teams. In other words, the "real world".

Show me that study.

PaulKeeble 3 hours ago 0 replies      
I never really viewed TDD as better at reducing bugs for a short-term project; it's going to have marginally better chances of getting additional test cases.

I view it more as important for breaking the growth of testing effort in an iterative project. With each release, the scope of what should be tested to fully test a project climbs, and unless a team wishes to linearly increase the size of its test team, it's all but certain tests will be skipped.

TDD gives us the ability to always run a full regression test, as it's just machine time. It's a safety factor in knowing nothing is broken, which in turn gives us confidence that we can refactor.

gaius 53 minutes ago 0 replies      
It always both amuses and saddens me how people will eagerly write more tests than actual code, but refuse to use a strongly typed language. The compiler is my test harness.
arcticbull 2 hours ago 0 replies      
I've tried to TDD numerous times in my professional career; I'm confident it works for many. I prefer to use white-box as my second pass through at my algorithm. It allows me to identify potential weaknesses, write test cases around them and correct them in one step. I never feel quite as secure with TDD as I do with post-hoc testing. I'm also not going to tell other people that's the one-true-path. Unit tests? Critical. Before vs. after? Personal.

With respect to this study, I think at best we can say that equal quality tests yield equal results. I don't think -- based on reviewing the methodology -- that the headline can clearly be drawn from the study.

jmadsen 3 hours ago 1 reply      
The problem that I have with this article is how people will interpret the results. The test is comparing (presumably) Comp. Sci. graduate students who already know good design patterns, best practices, etc at a relatively high level to see if they are faster and more accurate by testing before vs. after writing the main code. (TDD vs. TLD)

That's all well and fine, and possibly completely accurate. However, many people's takeaway is going to be the out-of-context & incorrect title of this post. (It does not say TDD is worthless - it says it is essentially the same as TLD)

I've always looked at TDD as a tool to help push less experienced, less "educated" developers into 1) even using tests at any point of the development cycle, 2) creating tighter, cleaner and MORE TESTABLE code by the time they've reached the end of the cycle.

So, if your team is and always will be well-educated, experienced programmers who already understand how to always do everything correctly from the beginning, feel free to use either method.

Otherwise, I'd urge you to consider TDD.

xrd 1 hour ago 0 replies      
Does this study assess the long-term cost of software? It may be true that TDD has little benefit when writing code from scratch, and my experience is that TDD definitely takes longer when writing code than not doing it. But how does it evaluate the claim that 90% of the cost of code comes in the maintenance, not the initial creation?
eva1984 1 hour ago 0 replies      
Not surprised. Religiously following a certain principle in the belief that it can help you bypass the complexity of the problem itself almost always won't stand the test of time.
refulgentis 3 hours ago 1 reply      
My anecdata matches the author's - I feel more productive doing TDD.

Perhaps because it's less stressful. You think about system design as you code, instead of only when you hit a wall and have to rewrite everything, or when you have to clean up for code review.

Either way, if it has little to no impact on dev time or code quality, I bet the positive impact TDD has on team morale would make it worthwhile.

jaunkst 3 hours ago 0 replies      
TDD is king when refactoring or proving an algorithm. You have tests to confirm the output, and near-realtime feedback that your assumptions are correct. The rest is obvious. Mission-critical component? TDD. Complicated refactor? TDD. Algorithm you need to validate? TDD. Anything else, write the code and get a peer review.
stevehiehn 3 hours ago 0 replies      
If you were to always write tests immediately after you write a few classes I don't think it would make a difference. However from my own experience I never write nearly as many tests after the fact.
greesil 3 hours ago 1 reply      
This is the kind of stuff where, in the aggregate, you can't show a relationship, but I bet if you controlled for type of project you would see some interesting results. Anecdotally, I know some firmware engineers who shit out the buggiest code I have ever seen, and test-driven development would definitely have improved the customer experience. When engineers have literally no tests other than trying stuff out with a printf on the target embedded device, any amount of unit testing will wind up helping.
goalieca 3 hours ago 0 replies      
Actual studies were never needed to convince managers to switch processes. Bonus points for blaming old problems on old process while blaming new problems on "not doing agile right".
mathattack 3 hours ago 0 replies      
Replicated with 21 grad students? And then they quote statistics?

Painful to watch people generalize from such small sample sizes.

BurningFrog 3 hours ago 0 replies      
TDD, like most of the agile practices, is a learned skill.

Doing it at an expert level is very different from an untrained novice winging it.

gedy 3 hours ago 0 replies      
Tests written before/at/near development time really help your code design - I've seen how ensuring code is unit-testable simplifies and enforces layering, etc. I really disagree that this does not help code quality.
avodonosov 2 hours ago 0 replies      
It's great such studies exist, but there might be many reasons why they are incorrect: they test on students; probably the students don't understand how to apply TDD, or the other way around, they are so good that their coding approach provides all the benefits without TDD; the numeric metrics used in the study might not adequately reflect the interesting characteristics of the code base; the payback of TDD might show up in later stages of the product's life, when we refactor or extend it; etc.

Probably TDD can speed up people who otherwise aren't used to an iterative, bottom-up approach - TDD will encourage a short "change - run and see how it works" loop. Especially in non-interactive languages like C or Java.

Also, if we write tests after the functionality is implemented, how do we know why our test passes: is it because the functionality is correctly implemented, or because the test doesn't catch errors? To ensure a test catches errors, we need to run it on a buggy version of the code. Implement the functionality, write the test, introduce errors into the functionality to ensure the test catches them - that's 3 steps. Run the test in the absence of correct code and then implement the code - 2 steps. That's where "test first" might be efficient.

But often that might be achieved another way. Suppose I'm writing a function to merge two lists. I will just do in the REPL (merge '(a b c) '(1 2 3)) and see with my own eyes that it returns (a 1 b 2 c 3). I will then just wrap it into an assert: (assert (equal `(a 1 b 2 c 3) (merge '(a b c) '(1 2 3)))). Run this and see it passes - that's all, I'm sure it's an OK test.
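The same check translates to other languages too; a hedged C++ sketch (with a hypothetical `interleave` standing in for his `merge`):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Rough C++ equivalent of the Lisp snippet above: eyeball the result once,
// then freeze the observed output into an assertion so it becomes a
// regression test for free.
std::vector<std::string> interleave(const std::vector<std::string>& a,
                                    const std::vector<std::string>& b) {
    std::vector<std::string> out;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        out.push_back(a[i]);
        out.push_back(b[i]);
    }
    return out;
}
```

e.g. `assert(interleave({"a", "b", "c"}, {"1", "2", "3"}) == std::vector<std::string>({"a", "1", "b", "2", "c", "3"}));`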

In short, I think there is a certain truth in TDD, but it shouldn't be taken with fanaticism. And it can even be applied with negative effect (as any idea).

Suppose I want to develop a class (defclass user () (name password)).

I personally will never write tests for make-instance, (slot-value ... 'name), (slot-value ... 'password) before creating the class, then see how the tests fail, then create the class and see how the tests pass.

Tests take time and effort to write, and then to maintain and rewrite when you refactor code. If a test captures an error, then the test provides some "return on investment". Otherwise, writing that test was a waste.

The tests in the above example will never capture anything.

I tend to create automated tests for fragile logic which is relatively easy to test, so that the effort spent is justified by the expected payback.

But all my code is verified. Write several lines, run and see what doesn't work, fix that.

copperx 3 hours ago 0 replies      
For a split second I thought they were measuring TDD against no tests at all and I felt a panic-induced adrenaline rush.
Confusion 42 minutes ago 0 replies      
The comments to that story are pretty good.

An interesting question is: why does TDD fail in such experiments (it does so unexpectedly consistently), even when many developers feel it has benefits when they practice it?

There is no silver bullet, so there must be circumstances in which TDD does not work. And conversely, the central question is: under what circumstances does TDD work? What are the preconditions?

bigodines 2 hours ago 0 replies      
Clickbait. TDD !== tests, article compares TDD with TLD.
micahbright 3 hours ago 1 reply      
Somehow, as a Software Engineer, I'm not really surprised.
ericls 1 hour ago 0 replies      
If the claim in the paper is true:

TDD = Same time + same quality + feel better.

zkhalique 3 hours ago 0 replies      
TDD no, regression testing yes.
known 3 hours ago 0 replies      
Experience is the name everyone gives to their mistakes --Oscar Wilde
blakecallens 3 hours ago 1 reply      
It may not boost productivity upfront, but it saves a lot of time down the line by alerting you when something is out of place.
isuckatcoding 3 hours ago 2 replies      
As someone who uses unit tests to find bugs in my code, that I would never otherwise find, this is surprising.
ben_jones 3 hours ago 1 reply      
...when implemented poorly
C++ lambdas negatively impact novices and don't benefit professionals acm.org
31 points by uaaa  1 hour ago   32 comments top 9
GeneralMayhem 17 minutes ago 1 reply      
This is ridiculous. C++ lambdas (and std::function) don't replace iterators, except for the most fervent disciples of the Church of Haskell. They replace single-function interfaces in cases where you would have had to put together a custom struct that did exactly the same thing as a lambda with capture, but in about 15 more lines.
porges 1 hour ago 2 replies      
> After instructions, participants were given printouts of sample code they could refer to while solving tasks. Group Lambda got code of a C++ program using lambda expressions and group Iterator received code of the same program written using iterators. They then had time to study the samples before starting the tasks and could refer to these samples later.

These samples do not appear in the paper, so we don't know what they saw.

The iterators discussed are Java-/C#-style iterators, not C++ ones (as I expected reading the abstract).

In a C++ context I would have expected lambdas vs iterators to be something like:

 // lambda
 float retVal = 0;
 std::for_each(mb.cbegin(), mb.cend(), [&](item x) { retVal += x.price; });
 return retVal;

 // pure iterator
 float retVal = 0;
 for (auto it = mb.cbegin(); it != mb.cend(); ++it) { retVal += it->price; }
 return retVal;
... and the first would be better off as:

 return std::accumulate(mb.cbegin(), mb.cend(), 0.0f, [](float acc, item x) { return acc + x.price; });
I think the need to use ref-capture (since you only get a side-effecting `std::function` to play with in their sample) would be the thing most likely to throw people off, as it's something that should be avoided in most code anyway ;)

emcq 1 hour ago 1 reply      
The paper seems to mainly compare iterators vs lambdas. This seems like a bit of a strawman; the best use of lambdas is beyond iterators.

For example, consider callback heavy asynchronous code. A promise library with lambdas is much easier to write and read than the equivalent state machine.

I would go as far as to say that any mechanism where function chaining is useful (such as the nice data-to-mark/SVG abstraction used in D3, and also in promise libraries) has advantages with lambdas. Not only do you avoid having to write extra classes or methods, but the code is more succinctly and logically grouped together, requiring fewer indirections to get to the transformations occurring.
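For illustration, the chaining style described here can be sketched in a few lines (`Chain` is a hypothetical stand-in for a promise/continuation type, not a real library API):

```cpp
#include <cassert>
#include <string>

// Minimal sketch of function chaining with lambdas: each then() applies a
// lambda and wraps the result, so transformations read top-to-bottom
// instead of being scattered across named callback methods. Hypothetical
// type, not a real promise library.
template <typename T>
struct Chain {
    T value;
    template <typename F>
    auto then(F f) -> Chain<decltype(f(value))> {
        return {f(value)};
    }
};
```

Something like `Chain<int>{21}.then([](int x) { return x * 2; }).then([](int x) { return std::to_string(x); }).value` reads as one pipeline instead of a set of separately declared callbacks.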

byuu 1 hour ago 2 replies      
Context always matters. I use lambdas sparingly in my applications, except for one major area: user interfaces.

I can't begin to stress what a huge timesaver it is being able to bind a button's callback to a quick lambda instead of having to bind a callback to an std::function, add the function to the class header, and then put the actual button-click code somewhere else in the project in a separate function ... and then repeat that process for a large UI with hundreds of widgets.

It's not even the initial extra typing, it's having all the code right where it's most relevant instead of scattered around everywhere and having to name all of these callback functions.
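What that looks like in miniature (a hypothetical minimal `Button` type, just to show the shape; any real UI toolkit's API differs):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Hypothetical widget sketch: the click handler is a lambda bound right
// where the button is declared, instead of a named member function
// declared in a header and defined elsewhere in the project.
struct Button {
    std::string label;
    std::function<void()> on_click;
    void click() { if (on_click) on_click(); }
};
```

The handler then sits next to the widget it belongs to, e.g. `Button save{"Save", [&] { /* save code here */ }};`, rather than being one of hundreds of scattered, individually named callbacks.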

ryporter 9 minutes ago 0 replies      
It's important to note that the negative impact documented in this study is in development time and whether the task is completed. It makes no claims about impacts on maintainability that show up later in the development process. It also does not measure the longer term impact on development time if a team starts using lambdas and gains experience over time.

I say this not to dismiss the study, which appears to be fairly well done and to provide interesting results. I'm simply saying that its results are not inconsistent with lambdas providing a net, long-term benefit if introduced to a development team at work.

nice_byte 49 minutes ago 1 reply      
I'm a professional and you can take the lambdas out of my cold, dead hands.

Also, their examples are removed from reality. I've never seen people use lambdas like that. Most of the use cases I've seen are callbacks that get triggered on certain events (i.e. "display notification when background loading thread completes") and predicates (i.e. find_if). I see neither in their examples.

NotThe1Pct 9 minutes ago 0 replies      
I have used lambdas heavily in the past and then stopped. Getting away from the rabbit hole that are lambdas + SFINAE + variadics increases productivity immensely.
gohrt 1 hour ago 0 replies      
Submitter's editorialized title mischaracterizes the OP

> All the tasks focused on iteration on a collection using a C++ vector object.

That's comparing "functional style" collections to iteration.

It is readily apparent that C++ syntax is so troublesome that "functional collections" won't be a win for small iteration blocks.

We had the same debate for functional Java collection methods (although the Java 8 lambdas tilt it more closely in functional Java's favor)

The main use case for C++ lambdas is for declaring callbacks.

EGreg 1 hour ago 1 reply      
I used to really like C++ ... back in 2000. That is before they added the kitchen sink into the language.

I prefer my computer languages to be like Chess - a few rules but efficient and expressive. Like C.

Why? Because everyone can read the code others on the team wrote, without being a language lawyer and knowing tons of esoteric features and magic.

That has implications for maintainability and team productivity, and the bottom line for a business.

Really, people, having basic coding style conventions instead of tons of language features wasn't so bad:

 int Foo_bar(struct Foo* this) {
     // even this is readable
 }

Astronauts enter China's space station bbc.com
46 points by ehxor  2 hours ago   4 comments top
jcoffland 1 hour ago 1 reply      
China has now done 6 crewed missions to space in the last 3 years. In that time, Russia has had 13 manned missions, many of which NASA astronauts have hitched a ride on. NASA has been on the ground since discontinuing the shuttle program in 2011.
Donald Knuth Was the First Erlang Programmer videlalvaro.github.io
43 points by old_sound  2 hours ago   15 comments top 5
pmontra 49 minutes ago 1 reply      
To the eye of a developer used to more modern languages, it also shows some bad choices in the design of the Erlang syntax: basically everything that makes the source code less readable than the original - uppercased variables, the minuses that prefix the module and export statements, the weird =:= operator. I'd add the also-weird <> binary and string concatenation operator, not used in this example.

The semicolon-period statement separators/terminators are in the original and in natural languages, but newer languages proved them to be unnecessary. Probably compilers in the 80s needed some help from the programmer to be fast.

Elixir fixed some of the worst offenders, kept others, and added some of its own. Examples: the <> is still there, but at least we can interpolate strings Ruby-style; the useless do at the end of almost every def-something declaration (the compiler should infer it by itself).

But every language has its weirdnesses; the contest is to have the fewest of them.

fusiongyro 1 hour ago 2 replies      
Knuth is simply defining a piecewise function here in ordinary math notation. Functional programming languages borrow heavily from math, and Erlang is a functional language. The resemblance would probably be even more striking with Haskell... which only serves to undermine the message.

Erlang's great though. Glad to see some irrational exuberance for it.

peterkelly 39 minutes ago 1 reply      
It's called pattern matching, and is present in most functional languages - Haskell is a good example of something which uses this style extensively.

In a sense it's derived from the way that some mathematical functions are expressed; e.g. I've seen the Fibonacci sequence expressed in this manner a few times.

The only thing this excerpt from the book has to do with Erlang is that they both used the same (existing) idea.

sotojuan 1 hour ago 1 reply      
Doesn't most of this syntax come from Prolog?
gohrt 1 hour ago 0 replies      
Clickbait title is clickbait.

Minimal-syntax languages tend to look similar.

Introducing Google Cloud Shell's new code editor googleblog.com
59 points by rafaelm  4 hours ago   6 comments top 3
cantbecool 25 minutes ago 0 replies      
Isn't this like Heroku Garden?


nawtacawp 2 hours ago 1 reply      
Hydraulix989 2 hours ago 2 replies      
There are a million UX annoyances with Google Cloud that still need fixing, and the very first thing they do is build an editor...
Introducing Rust Language Server rust-lang.org
273 points by JoshTriplett  9 hours ago   62 comments top 8
hetman 7 hours ago 3 replies      
I think the most astonishing thing to me about the Rust language project is not the language itself (sure, it's innovative, but there's also still a lot of work to be done to make it get out of your way). It's the speed and tenacity with which the project has managed to push development of the supporting tooling. The development tools surrounding a language can often be more important for productivity than the language itself, and it feels like the Rust team and community really get that.
favorited 6 hours ago 5 replies      
For anyone interested in similar tools, Microsoft recently published a protocol[1] for this type of service, and an implementation[2] of it for VSCode+NodeJS.

Swift also has a library[3] called SourceKit which does the same thing for Swift; it can run as an out-of-process daemon, and people have written integrations for Atom, Emacs, etc. to get autocomplete, syntax highlighting, def links, etc.


JoshTriplett 7 hours ago 0 replies      
I saw the demo of this at RustConf, and it was incredible. See the video linked from the post: https://www.youtube.com/watch?time_continue=2405&v=pTQxHIzGq...

I look forward to seeing support for this integrated with vim.

beliu 5 hours ago 0 replies      
This is awesome. The best languages are the ones with strong tools.

Side note: really looking forward to rolling this into Rust support for Sourcegraph (Sourcegraph team member here) to make any Rust library as easy to explore as https://sourcegraph.com/github.com/gorilla/websocket/-/blob/... and https://sourcegraph.com/github.com/staltz/xstream@master/-/b... are for Go and TypeScript. Thanks for making and open-sourcing this!

joostdevries 4 hours ago 0 replies      
Compilers that have a language service API are awesome. The TypeScript compiler does, and it makes for high-quality, low-latency refactoring, code completion, show-expression-type etc. across editors and IDEs. Since my other day-to-day language is Scala, I was glad to find out that the upcoming Dotty Scala compiler will also feature a language service API. Yay.
haberman 5 hours ago 1 reply      
I've wanted this for a long time. But I don't see why it has to be language-specific. Maybe some of the fancy parts do, but why can't make/cmake/ninja/etc. support this in a language-independent way? Just basic queries/commands like:

 1. run any build steps that directly depend on file X
 2. run any build steps that directly or indirectly depend on file X
 3. get whether the build of X succeeded or failed, and console output
These alone would go so far towards making Vim/Emacs/etc more user-friendly.

denfromufa 5 hours ago 1 reply      
This most likely originated from TypeScript, which actually went further, with compiler and code analysis integrated
avodonosov 3 hours ago 0 replies      
Is it something like swank for common lisp?
Researchers reach human parity in conversational speech recognition microsoft.com
404 points by jonbaer  14 hours ago   131 comments top 25
Eridrus 13 hours ago 4 replies      
The actual paper has a section on error analysis that is particularly enlightening: https://arxiv.org/abs/1610.05256

On the CallHome dataset humans confuse words 4.1% of the time, but delete 6.5% of words, most commonly deleting the word "I".

Their ASR system confuses 6.5% of words on this dataset, but only deletes 3.3% of words, so depending on how you view this their claim about being better than humans isn't definitely true, if you consider the task to be speech recognition, rather than transcription.

Also, while the overall "word error rate" is lower than the humans', it's not clear if this is because the transcription service they used was not seeking perfect output but rather good-enough output; the errors the transcription service makes may not be as bad as the errors the ASR system makes in terms of how well you can recover the original meaning from the transcription.

It's clearly great work, but reaching human parity is marketing fluff.
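For readers unfamiliar with how such counts are produced: the substitution/deletion/insertion split comes from a minimum-edit-distance alignment between the reference and hypothesis transcripts. A minimal sketch of that computation (illustrative only, not the paper's actual scoring tool):

```python
def word_error_counts(ref, hyp):
    """Split the edit distance between a reference and a hypothesis
    transcript into (substitutions, deletions, insertions)."""
    r, h = ref.split(), hyp.split()
    # dp[i][j]: minimal edits aligning the first i reference words
    # with the first j hypothesis words.
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        dp[i][0] = i
    for j in range(1, len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + (r[i - 1] != h[j - 1]),
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    # Backtrace to attribute each edit to a category.
    i, j, subs, dels, ins = len(r), len(h), 0, 0, 0
    while i > 0 or j > 0:
        if i and j and dp[i][j] == dp[i - 1][j - 1] + (r[i - 1] != h[j - 1]):
            subs += r[i - 1] != h[j - 1]
            i, j = i - 1, j - 1
        elif i and dp[i][j] == dp[i - 1][j] + 1:
            dels, i = dels + 1, i - 1
        else:
            ins, j = ins + 1, j - 1
    return subs, dels, ins

# A deleted leading "i" plus one substitution, mirroring the error
# patterns discussed above:
print(word_error_counts("i think i can do it", "think i can do that"))
# (1, 1, 0)
```

WER is then (subs + dels + ins) divided by the number of reference words, which is why two systems with the same WER can have very different deletion behavior.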

jpm_sd 11 hours ago 1 reply      
I look forward to being able to converse with Microsoft's research team as easily as I can with humans. I hope that one day, journalists can learn to write headlines with similarly low rates of error.
uvesten 13 hours ago 2 replies      
Good for them! I'm a bit surprised that the researchers didn't already possess human-level speech recognition, though.
radarsat1 11 hours ago 4 replies      
The term "human parity" refers to a comparison of the error rate, which is a single scalar summarizing performance in terms of mistakes made. It says nothing about the kind of mistakes, and I can easily imagine that machines qualitatively do not make at all the same kind of mistakes as humans. I'd be curious to know if the kind of mistakes machines make might strike human listeners as quite stupid, but maybe not... many algorithms are getting better at taking context and prior knowledge into account.
Animats 12 hours ago 4 replies      
Very nice. How long before something this good is available as open source?

A tough test would be to hook this up to a police/fire scanner, or air traffic control radio.

windlep 8 hours ago 1 reply      
I'll admit I'm not very interested in speech recognition of this nature when it can't disambiguate the speaker, i.e. the way Amazon Echo and other voice recognition systems can't tell the difference between a human in the room and the TV, even when one might be clearly a female voice vs. a male one.

None of the voice recognition systems on the market learn my voice distinctly from my wife's or son's, and I don't want their speech triggering things by accident (especially my son's), so I don't use any of them.

I'll be more impressed when I can restrict Amazon Echo or one of these assistants to ignoring any voice that isn't at least rather similar to my own, not merely recognizing the words I'm speaking.

grzm 13 hours ago 5 replies      
I don't have a background in this area, so I'm likely easily impressed, but this seems really impressive. And the acknowledgement that there's a lot of work to be done, such as discriminating between speakers and recognition in adverse environments. Yeah, it's Microsoft writing on their own technology, but they addressed in the text the questions I had already in mind from just reading the title. It didn't leave me with feeling that it's just a marketing piece.

> Still, he cautioned, true artificial intelligence is still on the distant horizon

It's frustrating when technologies like image and speech recognition and robotics are conflated with AI.

Maarten88 12 hours ago 1 reply      
Nice to read this, being someone who uses lots of Microsoft products, but I have mixed feelings: after all these years Cortana still understands 0.0% of my native language (Dutch). Very disappointing, especially seeing that Google has no problems understanding Dutch.
eb0la 11 hours ago 2 replies      
Knowing Microsoft this will be part of Cortana in a few weeks.

I hope it will be integrated soon with the Speech API as well ( https://msdn.microsoft.com/en-us/library/hh361633(v=office.1... ).

braindead_in 4 hours ago 0 replies      
I run a human-powered transcription service and I get really excited about such news. Typing is the first step of our process (of four), and any ASR system which can generate even an around-80%-accurate transcript of a file will be incredibly useful. We have tried several systems but unfortunately none have been able to get there yet.
wbhart 12 hours ago 1 reply      
So Microsoft finally have an AI that can "wreck a nice beach". Along with text autocompletion, we are all set for a decade of irritating miscommunication.
cellis 12 hours ago 0 replies      
Ok, when's the next 2GB Xbox One update and will this fix the problem of me saying "Xbox watch NBC", and it 'hearing' "Xbox watch TV"?
jarboot 2 hours ago 0 replies      
How long do you think it is until captioning companies / TRSs such as Captel downsize significantly because of tech like this?
chris_st 7 hours ago 0 replies      
Perhaps the folks at Microsoft's Lync (named after this gentleman [1], no doubt), or maybe it's Skype for Business now, could get some of this research.

We have this at work (alas), and it does "transcription" of voicemail, which it sends as an email. It's easily 90% wrong, regardless of speaker, unless it's a slightly bad connection, when it's worse.

[1] https://www.youtube.com/watch?v=NV9fKUkx76Q

nattyice 7 hours ago 1 reply      
Meaning will always escape us when it comes to language. Not only will there always be a disconnect between the speaker and his or her audience, there will always be a subjective perspective that cannot be tapped into. Can AI ever really be compared to a subjective perspective?

Although the article recognizes that perfection has not been claimed, parity might not even be a meaningful capacity.

Conversation is difficult to measure. Take a look at the philosophical viewpoint of Deconstruction. Food for thought.


raimue 11 hours ago 0 replies      
> [...] a speech recognition system that makes the same or fewer errors than professional transcriptionists.

How low would the error rate be for humans that can fully concentrate on listening instead of writing at the same time? Unfortunately, that cannot be tested.

loup-vaillant 11 hours ago 1 reply      
Great. Now Microsoft has the means to store every Skype conversation indefinitely; it's only text now.

Seriously, great work, but just like facial recognition, this will cut both ways.

andulus 6 hours ago 0 replies      
I wonder if this success helps the advancement of other applications of neural networks. Do these achievements translate easily to other domains, or is it just an isolated case?
swagtricker 8 hours ago 1 reply      
Time flies like an arrow, but fruit flies like a banana.

Wake me up when they can match human recognition of context.

nicklovescode 10 hours ago 0 replies      
Is there a demo or video of them using this? Would enjoy playing with it.
Kenji 11 hours ago 0 replies      
I have read too many human parity claims that left me disappointed to believe this one. Call me a pessimist or a cynic. I'll be very excited when I have the code running on my machine and when I can compose this comment verbally without a hassle.
plussed_reader 11 hours ago 0 replies      
Do I have to use Windows to leverage this new software setup?
dfgonzalez 9 hours ago 1 reply      
Did someone put it as SaaS already?
ahmetyas01 7 hours ago 0 replies      
Any video or audio to get an idea of how close they are?
EGreg 7 hours ago 0 replies      
In English, probably.
Air India Taking Advantage of Tailwinds flightradar24.com
281 points by obi1kenobi  11 hours ago   110 comments top 17
toomuchtodo 10 hours ago 6 replies      
I hope someone is writing some code that'll automatically integrate winds aloft forecasts into flight plans!

The fuel savings (and time in the air) ROI across the entire air travel industry would be yuuuuge, and the only cost is development time (I say "only" as you're not having to build any new physical machinery).

EDIT: It appears Boeing already offers this service: http://www.boeing.com/commercial/aeromagazine/articles/2011_... and the fuel savings are substantial.

ptaipale 9 hours ago 0 replies      
Is this actually a new thing?

I was under the impression that carriers and captains have taken the jetstreams into account when planning their routes for decades and it is systematic.

Cabin announcements have mentioned this already in 1990's: "Ladies and gentlemen, our flight path takes us today further to the north so we'll be passing Vorkuta and approach the Lake Baikal from north before entering Mongolian airspace. This way we can utilize tailwinds and our estimated time of arrival in Beijing is about half an hour ahead of the schedule."

hughes 9 hours ago 1 reply      
I had always just assumed airlines did this... seems like an equivalent headline to "shipping industry taking advantage of ocean currents".
manav 10 hours ago 2 replies      
I wonder if it was just a matter of getting permission to fly that route over China / East China Sea.
dorfsmay 3 hours ago 0 replies      
I don't know why, and even though the net effect is the same, I think it's really cool that instead of going back and forth, like most flights, this plane (and its passengers) keeps going in the same direction and goes once around the earth on each "return" flight.
ceedan 8 hours ago 2 replies      
Better late than never, but why has it taken Air India so long to do this?

Edit: Answered in replies to manav

haalcion3 3 hours ago 0 replies      
And for those that missed it, here's another way to reduce fuel usage: plasma.


aaron695 5 hours ago 2 replies      
Is this just an example of Indian bureaucracy, and more proof of the lost chances of most of the BRICS?

Why wasn't this done on day one?

myth_buster 9 hours ago 2 replies      
Kudos for ingenuity, but from my understanding, flying over land is preferred to flying over an ocean, as there are options available in case of emergency. So isn't this jeopardizing flights for fuel savings?
wst_ 4 hours ago 0 replies      
Hm... have they read Seveneves? There's a similar concept in there.
msandford 10 hours ago 2 replies      
Doesn't this sort-of imply that they could do the same thing the other way also? Or are they already doing that by catching the polar jetstream at the top of the polar route?
shabbyrobe 8 hours ago 1 reply      
Does the additional speed increase the stress on the airframe or is that more than offset by the reduced flight time?
highd 9 hours ago 0 replies      
Did the jet stream change in some way recently? Or are they just now taking advantage of it?
mankash666 10 hours ago 1 reply      
Glad to see a government owned entity operating on science.
sandworm101 10 hours ago 8 replies      
15+ hours in the air. In economy. That isn't healthy. I'm wondering if we eventually get to the point that health agencies speak up, that there is too much risk in having people sit in tiny chairs for such periods.

When I was a kid I did lots of 8+ hour flights (YVR-London, London-middle east). Seats were bigger then. Now I see old people sitting in tiny seats and wonder how many will leave the flight with a DVT injury? How many will be in hospital with flu the next day? If sitting is the new smoking, these flights are like sleeping inside a chimney.

gandalfu 6 hours ago 4 replies      
Why is the plane going faster? Less resistance from the wind due to the smaller speed difference? I don't believe the air can push the airplane...

Any physicist care to explain?
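A back-of-the-envelope answer: the plane still flies at its normal airspeed relative to the surrounding air, but the whole air mass is moving along the track, so groundspeed = airspeed + tailwind component. A rough sketch of the time saved (all figures below are assumed for illustration, not from the article):

```python
# Groundspeed is airspeed plus the wind component along the track: the
# plane flies through an air mass that is itself moving. Assumed figures:
airspeed_kts = 490      # typical widebody cruise true airspeed
tailwind_kts = 120      # strong jet-stream tailwind
distance_nm = 7000      # rough long-haul route length in nautical miles

still_air_h = distance_nm / airspeed_kts
with_wind_h = distance_nm / (airspeed_kts + tailwind_kts)
print(f"saved {still_air_h - with_wind_h:.1f} hours")  # saved 2.8 hours
```

So the air doesn't "push" the airplane in any special way; the airplane is simply embedded in a river of air that is itself flowing toward the destination.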

6stringmerc 5 hours ago 1 reply      
Air India PR Person 1: How do we frame this airspace deal with China to make us look really good and not just playing catch up?

Air India PR Person 2: Talk up the jet stream time savings!

Air India PR Person 1: Brilliant!

Exploiting Linux kernel heap off-by-one cyseclabs.com
13 points by vnik  2 hours ago   2 comments top 2
Animats 1 hour ago 0 replies      
When you see something like this:

  /* args points to a PAGE_SIZE buffer, AppArmor requires that
   * the buffer must be null terminated or have size <= PAGE_SIZE - 1
   * so that AppArmor can null terminate them */
you just have to expect exploits.

This is a problem that comes up repeatedly in the Linux kernel. When some kernel call accepts or returns variable-length data, the details are handled locally, not in some general-purpose functions for moving variable-sized data in and out of the kernel safely. That's likely to lead to some checks not being made.
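The bug class described above can be sketched abstractly. A minimal illustration in Python, where the out-of-bounds write raises an exception instead of silently corrupting adjacent heap memory as in C (PAGE_SIZE and the function are hypothetical stand-ins, not the kernel's actual code):

```python
PAGE_SIZE = 4096  # illustrative constant, not the kernel's definition

def null_terminate(buf, size):
    # The bug pattern: the caller is allowed size <= PAGE_SIZE, but the
    # terminator is written at buf[size] -- one byte past the end of the
    # buffer when size == PAGE_SIZE. In C this is a heap off-by-one;
    # Python at least raises.
    buf[size] = 0

buf = bytearray(PAGE_SIZE)
try:
    null_terminate(buf, PAGE_SIZE)       # off-by-one write
except IndexError:
    print("out-of-bounds write caught")  # C gives no such safety net
```

The safe variant is to require size <= PAGE_SIZE - 1 before writing the terminator, which is exactly the invariant the quoted comment promises but the code fails to enforce.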

xiaodown 43 minutes ago 0 replies      
Ubuntu vulnerability list: https://people.canonical.com/~ubuntu-security/cve/2016/CVE-2...

(TL;DR: Ubuntu 16.10 (Yakkety Yak) is listed as "needs triage", no other releases are affected (14.04/16.04/etc))

Extreme side-effects of antidepressants bbc.co.uk
20 points by pmoriarty  2 hours ago   15 comments top 6
qrybam 36 minutes ago 2 replies      
Some thoughts from someone who was prescribed antidepressants in adolescence but never took them because I believed the doctor had misdiagnosed me:

* Do these drugs genuinely help or is it just a strong placebo response?

* If the anecdotal evidence increases the odds from 1 in 100 to 1 in 4, would this be considered normal in medicine?

Of course the symptoms could be attributed to the wrong thing here but they sound pretty horrific. My initial reaction was that in the future we'll look back at these drugs as barbaric, similar to how we view lobotomies today.

Edit: formatting

alfon 14 minutes ago 0 replies      
Dr. Peter C. Gøtzsche: Psychiatry gone astray.


tcj_phx 42 minutes ago 0 replies      
SSRI drugs (sold as 'anti-depressants') have always been known to cause suicidal ideation... While they do seem to help some people, it is now known that this is because of the drugs' effects on the neurosteroids [1], NOT because of 'increased serotonin'. Anti-serotonin drugs (LSD, various MAOIs, etc.) are much more effective anti-depressants.

[1] https://en.wikipedia.org/wiki/Neuroactive_steroid#Role_in_an...

There are some good articles in the Boston Globe's archives about Prozac, circa 2000. "Prozac, Revisited", etc [2]. Robert Whitaker [3] worked for the Boston Globe, before he wrote Mad in America and Anatomy of an Epidemic.

[2] http://www.narpa.org/prozac.revisited.htm (the boston globe's official archives site is not so easy to use, but I've previously verified that these stories exist)

[3] https://www.madinamerica.com/robert-whitaker-new/

The first patient in this BBC article could also have been diagnosed as 'exhausted':

> She had begun taking [SSRIs] while caring for her seriously ill mother and studying for her final exams at Cambridge University, but suffered severe side-effects after her GP prescribed a stronger dose of tablet. (emphasis added)

I think 'exhaustion' is a frequent cause behind the symptoms labeled "depression".

In May of this year, I watched Lexapro (an SSRI) destroy all the progress I'd made with my girlfriend... She'd asked for this drug a month after she'd escaped from her court-ordered tranquilization, because she thought it had helped her years ago. Really it just helped her relapse on cocaine then. This time it caused rapid heartbeat, and much anxiety. Her last benzodiazepine turned her into an anxious wreck... The psychiatrists got hold of her again, and they're making sure that she will never recover.

About a week ago I went through videos on my phone... and found one of my girlfriend about a week before she was taken to the hospital. The video proves, beyond any doubt, that she is not "persistently" disabled, that the symptoms that originally put her in the hospital were entirely due to quitting her addictions cold-turkey, and not due to 'defective genes' or other pseudo-scientific rationalization for forcing her to use palliative drugs.

Confusion 34 minutes ago 0 replies      
Many meds have extreme side-effects in part of the population. Often we call those 'allergies', but those are just the tip of the iceberg. There's nothing new here, just a reminder that a specific medication may not work for you or may work but still make things worse by causing side-effects.
smegel 23 minutes ago 1 reply      
I would say becoming "happy" when the circumstances of your life would and should make any right-minded person unhappy is a pretty extreme side-effect.
jcoffland 1 hour ago 3 replies      
Raspberry Pi (2 and 3) support in Fedora 25 Beta fedoramagazine.org
65 points by cheiVia0  6 hours ago   17 comments top 6
TD-Linux 3 hours ago 0 replies      
One big thing here is that, unlike Raspbian or derived distros that use its kernel, Fedora is building their own mainline kernel. We're finally getting close to the day when installing on ARM will be as easy as on x86, with no custom kernel or vendor patches.
alxmdev 3 hours ago 1 reply      
Excellent! I tried out Ubuntu MATE 16.04 on a Raspberry Pi 3 a few months ago, and I was pleasantly surprised by how well it runs. The 1GB RAM is an unfortunate limitation, but even so the Pi is a viable little desktop. Great to see upstream kernel support and official releases from major distros.
andrewchambers 3 hours ago 0 replies      
Running plan9 on a Raspberry Pi 2 is so fast. It ruined Linux on a Raspberry Pi for me; current software is so heavyweight.
mwambua 4 hours ago 1 reply      
I'm pretty excited about this! I started using f22 after using Ubuntu for a number of years, and I was particularly impressed by its package management. I personally find yum/dnf easier to use than apt and slightly more transparent. F24 has been an exceptional release and I look forward to running 25 on a pi.

Hopefully we'll have a Pi with better IO by the time it releases. :)

eschaton 1 hour ago 0 replies      
Hope this will eventually lead to Fedora on RPi3 with arm64. Like everyone else they say "maybe" and "it only has 1GB RAM" with no consideration of the other reasons to support 64-bit ARM.
ekianjo 3 hours ago 1 reply      
Why does every Raspberry Pi install guide recommend dd instead of the cp tool? cp does the same thing and has a much better syntax.
Bedrock - Rock-solid distributed data bedrockdb.com
83 points by qertoip  7 hours ago   56 comments top 17
nvartolomei 1 minute ago 0 replies      
> Bedrock is designed from the ground up to operate in a multi-datacenter environment with rigorous use of stored procedures, the new standard for modern application design.

Stored procedures, the new standard for modern application design?

hbrundage 3 hours ago 1 reply      
I don't quite get why SQLite is the right primitive to compose a large scale database system out of...

If the design goal is write throughput, SQLite can be beat really easily by the client/server systems since each SQLite instance only supports one concurrent writer. I guess users of Bedrock could partition their datasets into different pieces that aren't often written in tandem, but why go through all the trouble instead of just using a database with real MVCC?

If the design goal is geo-replication, Bedrock seems to ignore the past many years of innovation in the field. MySQL and company have, over a long time, methodically built up features like GTIDs to allow safe synchronous and asynchronous replication and minimize the risk of downtime or data loss.

If the design goal is read throughput, I don't think Bedrock can be faster than just pure SQLite if your data can live beside your application, and if you have more than one node, you need a complicated SQL optimizer that understands how to push down things like predicates and aggregates to the leaves, and then recombine results at the top level to actually efficiently read in parallel, which I don't think Bedrock has built yet.

This thing kind of seems reminiscent of Jetpants or one of the other MySQL orchestration tools that allow for scaling the DB beyond many nodes automatically... but... why SQLite?

notacoward 3 hours ago 1 reply      
The idea's not fundamentally bad, but some of the rhetoric raises alarm bells.

> built atop SQLite, the fastest, most reliable, and most widely distributed database in the world.

Most widely distributed maybe, but far from the fastest and AFAIK others beat it on reliability as well. No disrespect to SQLite, it's far more feature-rich than other embedded-style DBs, but without a definition of "database" that excludes them the statement as written is false.

> written for modern hardware with large SSD-backed RAID drives and generous RAM file caches

In other words, relies on those things so it won't work worth crap on anything less.

> The easiest way to talk with Bedrock is using netcat as follows

Nice security model you've got there.

sonofgod 5 hours ago 1 reply      
Where do you make your CAP Theorem tradeoffs?

When shouldn't people use Bedrock? https://sqlite.org/whentouse.html is a wonderful page - full of discouragement to would-be SQlite users to know when they're in a use-case better served by some other tool.

lucio 2 hours ago 1 reply      
Can you show stats of Bedrock use in Expensify? How many users? How many nodes? How many transactions per second? Queries per second? DB size? Largest table row count? Node failures per month? How many times have you had a master failure? Network partition incidents?
kbaker 3 hours ago 2 replies      
OK, how is this different from Rqlite [1], except in C++ instead of Go, Paxos instead of Raft?

> rqlite is a distributed relational database, which uses SQLite as its storage engine.

> rqlite gives you the functionality of a rock solid, fault-tolerant, replicated relational database, but with very easy installation, deployment, and operation.


Also, is Bedrock DB only 30 days old? (since the 'first commit' message on GitHub?) That's barely topsoil!

[1] https://github.com/rqlite/rqlite

lucio 3 hours ago 1 reply      
Great idea!!! SQLite is an extraordinary product. I'll be following Bedrock.

This could be, for databases, what node was for server-side programming. There are parallels, you're using a "client side single-writer (excellent) DB" and making it apt for the cloud with Paxos. Again, great idea!!!!

And it seems you're taking with a good humor some "strong" comments here. +1 sr. +1 to u.

misframer 4 hours ago 1 reply      
Here's Expensify's CEO (David Barrett) posting on the p2p-hackers mailing list about this. It's from April 2016.


dsl 5 hours ago 1 reply      
For all the emphasis put on multi-datacenter replication, it would be awesome to see it documented!

Really looking forward to playing with this. Cool stuff.

carterehsmith 4 hours ago 2 replies      

Look. Just about anyone can create some "database" and declare it "rock-solid". Frankly, a simple key-value store is built into most languages.

Do you have more than that? Do you have benchmarks? You know, competing against the other providers?

Animats 1 hour ago 1 reply      
So where are the other copies? The documentation tells you how to set up Bedrock locally, but to get redundancy, you need multiple copies talking. How do you set that up?

And how does security work?

lucio 3 hours ago 1 reply      
How does the client get "redirected" to another node if the one you're talking to fails? Is it transparent to the client?
thesmallestcat 5 hours ago 1 reply      
You'd have to be nuts to depend on something backed by Expensify. They'll probably fire the maintainers.

Edit: Turns out it's authored by the CEO! So you'd have to be nuts to use it, period. But, um, poke the source if you're curious. This is a good intro https://github.com/Expensify/Bedrock/tree/master/libstuff#li... I can't imagine a more ironic and hilarious backdrop for the "We Fire People" campaign. Must be tough finding people who can keep up with a brain that big.

quinthar 4 hours ago 0 replies      
partycoder 6 hours ago 2 replies      
I don't want to be a jerk, but... just for the sake of common sense:

- Faster? Provide a benchmark. Faster is a relative thing: faster than what? Faster under which conditions?

- More reliable... than what? Under which conditions? How do you know it is more reliable?

- More powerful? I thought SQLite was more constrained than other SQL databases, e.g. no stored procedures. Powerful how? Compared to exactly what?

Nullius in verba. I don't take your statements for granted; give me proof, otherwise it's just flatus vocis.

misframer 3 hours ago 1 reply      
How do you monitor a database / system like this?
marknadal 3 hours ago 1 reply      
I think it is important to encourage people to build their own databases, so I want to commend you guys for doing it. But I do have some questions (disclosure: I am the author of https://github.com/amark/gun ):

- Is it correct to say that Bedrock is primarily a replication layer for SQLite? Most things your homepage highlights seem to be SQLite features, not Bedrock features.

- Elsewhere in this thread you mention you run Paxos (what implementation are you using? Your own? Do you have a test we can run to verify correctness?) But then you mention you sacrifice the "C" in the CAP theorem, which would defeat the point of having Paxos. What happens in a split-brain scenario, where you have, let's say, 6 peers, and there is a network partition straight down the middle?

- You also mention elsewhere in this thread that if you are talking to the same node (a sticky session) there are no consistency issues. Do you mean linearizable? Because consistency has to do with multi-machine data consistency. For instance, if a write happens on another peer, then the node you are talking to should return that write, not a stale one.

- Is there anything other than just assuming it is run on SSD that make it faster than SQLite on SSD?
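On the split-brain question, majority quorum is the standard answer in Paxos-style systems: with 6 peers partitioned 3/3, neither side holds a strict majority, so a consistency-preserving system must stall (giving up availability, per CAP). A toy sketch of the rule (illustrative only, not Bedrock's actual behavior):

```python
def can_commit(reachable_peers, cluster_size):
    # A strict majority is required to commit; with 6 peers split 3/3
    # neither partition has one, so a consistency-first system must
    # stall rather than risk diverging writes.
    return reachable_peers > cluster_size // 2

print(can_commit(3, 6))  # False: split-brain, both halves block
print(can_commit(4, 6))  # True: the larger side may proceed
```

This is also why quorum clusters are usually deployed with an odd number of peers: an even split that blocks both halves becomes impossible.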


Amex for Developers americanexpress.com
81 points by titomc  8 hours ago   51 comments top 12
nodesocket 3 hours ago 2 replies      
I'll play devil's advocate. If you're Netflix and this direct Amex integration saves you 1/2 percent per transaction, that makes a <strike>huge</strike> difference to your bottom line.

Netflix currently has 83 million subscribers, and average revenue per subscriber per month is $10.32.

 83M subscribers * $10.32 = $857M total revenue per month.
 Assume 10% of transactions are Amex: $857M * .10 = $86M per month.
 Finally, 1/2 percent of $86M is a savings of $430,000 per month.
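Restating that arithmetic as a quick sanity check (all figures are the commenter's assumptions, not Amex's or Netflix's published numbers):

```python
subscribers = 83_000_000      # commenter's figure
arpu = 10.32                  # average revenue per subscriber per month
amex_share = 0.10             # assumed share of transactions on Amex
fee_cut = 0.005               # the hypothesized 1/2-percent saving

monthly_saving = subscribers * arpu * amex_share * fee_cut
print(round(monthly_saving))  # 428280, i.e. roughly the $430K quoted
```

That's about $5M a year, which at Netflix scale is real money for what is essentially an integration project.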

WhitneyLand 6 hours ago 3 replies      
Wow, talk about too little, too late. This kind of late-to-the-party strategy is why startups will always be needed to lead innovation.

The irony is the highest ranking person at AmEx who really understands this is probably a pretty smart guy who had to fight and lobby for years to rally enough support to make this happen.

edit: It's worse than I thought. A quick search shows they brought in high-level talent from Google, Amazon, etc. I'm willing to bet these guys pitched some cool ideas before leaving frustrated after their short tenures.

OliverJones 6 hours ago 3 replies      
Hmm. For the typical card-not-present (online) use case, stripe.com and paypal do a pretty darn good job of processing AMEX payment cards, as well as the others.

Tokenization is vital in this age of cybermiscreants. Ya don't want customer payment card data in your dbms.

The stripe.com API offers tokenization, and it offers the ability to send and validate data like zip/postcode, cvv and street address to cut fraud. They have an api for chargeback disputes, too. (The business I serve doesn't use it, instead we use the forms on their web site. We have dozens of disputes per decade, not worth programming.)

Squareup.com (Square) does a good job with card-present transactions.

And these service providers offer predictable processing fees.

I wonder what's special about the AMEX APIs? Maybe somebody from AMEX can explain? Some of us are always looking for better ways to serve customers and handle payments efficiently.

20years 5 hours ago 2 replies      
Less than 10% of our customers pay with Amex. Can someone shed some light on why a developer would implement this for Amex payments vs just using Stripe, Auth.net or another payment gateway that supports Amex?

I am trying to understand the value here and how this will benefit the developer and/or consumer.

niftich 5 hours ago 2 replies      
It's tempting to think this ship has already sailed and they're late to the party, but I'm actually more concerned.

Recently, we saw MasterCard announcing APIs; ultimately the card issuers are the real gatekeepers of their own data and their own integrations, and as more of them move to regain the control they ceded through their lack of public developer engagement, the role of payment processors becomes less clear.

Sure, for the near future, a payment processor can continue to abstract away from the actual card issuer; but as richer APIs surface, those may siphon away marketshare from payment processors.

vitobeto 6 hours ago 1 reply      
Fun fact: the landing page shows a computer screen with PHP code, but they only offer Java and .NET SDKs right now!
locusm 4 hours ago 1 reply      
AMEX in Australia: maybe 1 in 10 merchants even accept it, and for that 1 you pay a much higher surcharge.
kkirsche 6 hours ago 3 replies      
It's interesting for sure, but as a consumer rather than a business, is there a use case for this?
emodendroket 3 hours ago 0 replies      
Gotta try and make those lost Costco bucks somehow.
brianbreslin 5 hours ago 1 reply      
So can someone explain to me why one would use amex over stripe or braintree? Or why would one use mastercard's API or Visa's? They all have similar APIs right?
programminggeek 3 hours ago 0 replies      
This is interesting, but if you need card present payments and EMV, something like CardFlight's SDK https://cardflight.com/sdk/ is going to be a better fit.

Disclaimer: My friend works there and has told me all about it.

princetontiger 6 hours ago 0 replies      
This is hilarious. 20 years late? Nothing to see here, move along.

WhitneyLand is correct.

Welcoming Open API Spec 3.0 capitalone.com
42 points by lindybrandon  6 hours ago   7 comments top 3
victor106 4 hours ago 2 replies      
Looks interesting. Seems like the seeds are being sown for fintech to take off, with a lot of big financial companies developing APIs, e.g. https://developer.americanexpress.com/home

CapOne is definitely sailing ahead of others. How this benefits the company remains to be seen.

Hope this won't be like other companies (Mint, Twitter, etc.) which opened up their APIs and then shut them down or drastically reduced what devs could do.

fatihpense 49 minutes ago 0 replies      
How does it compare to RAML? http://raml.org/
joshu 3 hours ago 1 reply      
Wow, worst possible name.
Tile Studio: development utility for graphics of tile-based games sourceforge.net
51 points by cheiVia0  7 hours ago   18 comments top 11
m48 3 hours ago 0 replies      
I was always under the impression this program was a level editor made primarily for use with the author's Clean Game Library [1], a game engine for a functional programming language named Clean [2]. I haven't used either, but maybe people into functional programming will be interested in studying a working game engine made in a functional language back in 1999? [3]

As for interesting features the program offers on its own: it has a pretty decent integrated graphic editor for tilemaps [4], and uses a template language for defining the output format instead of providing a generic parsable file format [5]. It also appears to abstract away layers by making each tile in your map consist of a "front," "middle," and "back" tile [6]. In general, it seems like a more old-school version of Tiled, and may be useful for people developing for/on older systems who want a very lightweight tilemap editor.

For people used to programs like Tiled, though, there's probably no compelling reason to switch to this tool. The lack of a working undo function in the map editor could be a dealbreaker, for one.

[1] http://cleangl.sourceforge.net/index.php

[2] http://clean.cs.ru.nl/Clean

[3] http://cleangl.sourceforge.net/thesis/

[4] http://tilestudio.sourceforge.net/drawing.html

[5] http://tilestudio.sourceforge.net/tutor.html#CreateTSD

[6] http://tilestudio.sourceforge.net/tutor.html#MapEditor

ivan_ah 6 hours ago 3 replies      
See also http://www.mapeditor.org/, another good piece of software in that space.
rtpg 1 hour ago 0 replies      
I remember watching Notch do a Ludum Dare and actually use paint.NET as his level editor.

Each color can be set to a tile, and you can read in the bitmap easily to load a level. A fun way to quickly crank out maps.
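
A sketch of that color-to-tile idea (the palette and tile names here are made up for illustration):

```javascript
// Treat each pixel's color as a tile id; colors not in the palette
// fall back to 'empty'. The palette itself is a made-up example.
const PALETTE = {
  '#000000': 'wall',
  '#ffffff': 'empty',
  '#00ff00': 'spawn',
};

// pixels: rows of hex color strings read out of the bitmap
function bitmapToLevel(pixels) {
  return pixels.map(row => row.map(color => PALETTE[color] || 'empty'));
}
```

Reading the bitmap itself is a one-time loader step; after that the level is just a 2D array of tile ids.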

j_s 3 hours ago 0 replies      
Related 2D art tool on HN 3 years ago: Sprite Lamp


edit: semi-freshly re-written in C++ according to the blog

poisonarena 4 hours ago 0 replies      
I always use Pyxel Edit: cheap, professional, works on Mac and PC, and great for animations.


tarr11 5 hours ago 1 reply      
I wish there were a strong, open-source HTML5 tile editor.
coldcode 6 hours ago 1 reply      
Is that correct: last updated September 26, 2012?
wyqydsyq 4 hours ago 1 reply      
It boggles me that a project hosted on SourceForge that hasn't been updated in 3 years and is in no way relevant lands on the front page of Hacker News...
qwertyuiop924 5 hours ago 0 replies      
...But is it better than Tiled and Aseprite?
bitmapbrother 4 hours ago 0 replies      
Nice. Works on the Mac via Wine.
AION: Artificial Intelligence Open Network ai-on.org
121 points by cocoflunchy  11 hours ago   13 comments top 5
swanders 2 hours ago 0 replies      
So if I am interested in social-media-bot-detection, as found here: https://github.com/AI-ON/ai-on.org/blob/master/projects/soci...

Should I create a new repo from scratch for the project and then make a pull request to update the md file from the original repository to point to the new repo?

This is my understanding from the documentation.

JD557 9 hours ago 2 replies      
All the mailing lists appear to be empty. Is this a new project?

If so, it would be nice to have at least a "Welcome" topic on each mailing list to "kick off" the discussion of each topic.

kobeya 8 hours ago 1 reply      
Have you considered running proper mailman instances for the project mailing lists instead of Google Groups?
dharma1 9 hours ago 2 replies      
Run by who? The website is very sparse on info
partycoder 9 hours ago 1 reply      
Well, I hope this network succeeds.

If it's based on the concept of "LION" (LinkedIn Open Networkers), I have to say that many Artificial Intelligence groups in LinkedIn are full of people talking about Asimov instead of actual AI.

Final A credit card built for the 21st century getfinal.com
44 points by cbw  3 hours ago   45 comments top 14
ars 1 hour ago 1 reply      
This is called a https://en.wikipedia.org/wiki/Controlled_payment_number and Bank of America (among others) has had it since the 20th century (16 years ago[1]).


sabman83 1 hour ago 2 replies      
I don't think I need temporary numbers to protect me from fraud. The fraud protection with my current credit card has been working fine so far. Another reason not to use this card is that I don't want to miss out on reward points. I don't see how this business model is going to survive.
bcherny 2 hours ago 3 replies      
Can anyone chime in on how this compares to https://privacy.com or https://www.entropay.com?
mfrommil 1 hour ago 2 replies      
If there's an annual fee, most customers expect either excellent rewards or top-tier customer service (e.g. amex). Hard to justify $50/yr for neither of these.
tyfon 9 minutes ago 0 replies      
Do they really go the iPhone-only route in 2016, or am I missing a link somewhere?
simplify 1 hour ago 1 reply      
Does anyone know how Final / other companies are able to create virtual CC numbers? Is there an API or something?
laurentdc 1 hour ago 0 replies      
Hm, I've been doing this for years. My bank lets me create virtual MasterCard credit cards that can either be one-time use only (they "auto destroy" after one payment is authorized) or can be set to expire after a certain month or a certain amount of money is spent.

They're linked to a physical card and/or bank account that you never disclose, and you get an SMS notification for every transaction.

Not sure what the novelty is here?

xamlhacker 1 hour ago 2 replies      
Virtual card numbers are not a new thing. For example, some Citibank cards allow you to create separate virtual numbers for different merchants. However, it looks like Final makes the process much easier.
rbcgerard 2 hours ago 1 reply      
Am I missing something - I mean it's annoying having to update recurring payments with a new card# but it doesn't happen that often and the issuer has the liability, so what do I care?
Sym3tri 34 minutes ago 1 reply      
Dang. I was hoping it would be virtual cards combined with something like Coin[1]


no_protocol 2 hours ago 3 replies      
> We are PCI-DSS v3.1 compliant and apply PCI standards when dealing any cardholder information

I don't think this was some kind of wordplay attempt around dealing a deck of cards. Hopefully they're just "dealing with" the information instead. Typos in statements proclaiming how safe and secure they are...

The concept seems fine, I think some of this is already possible with other card issuers. I doubt I would pay a $49 annual fee for the service when there are free cards available.

largehotcoffee 2 hours ago 2 replies      
Another one?
eruditely 57 minutes ago 1 reply      
People in this thread are just obscuring the details of this announcement, which is that this is wonderful and that I will be looking to acquire one of these cards.

Work well done.

EGreg 1 hour ago 2 replies      
That guy... and his videos! Pretty cool no? He stars in all his own vids. How do they choose what startup to make them for?
My observations during the explosion at Trinity (1945) fermatslibrary.com
143 points by luisb  14 hours ago   43 comments top 13
antognini 14 hours ago 4 replies      
In one of my interstellar medium classes I remember learning about the physicist Geoffrey Taylor's estimate of the yield of the first atomic bomb. At the time, the energy of the atomic bomb was classified information, but photographs of the explosion like this one [1] had been published in newspapers and magazines. Crucially, this photograph was labelled with both the time after the explosion, and a scale bar. With just this information Taylor found it was fairly simple to calculate the energy of the explosion. He ended up publishing two papers about it [2] [3]. In astrophysics, his calculations are now used when modeling the effect of a supernova on the surrounding interstellar medium.

[1]: https://upload.wikimedia.org/wikipedia/commons/thumb/7/77/Tr...

[2]: http://rspa.royalsocietypublishing.org/content/201/1065/159

[3]: http://rspa.royalsocietypublishing.org/content/201/1065/175
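
Taylor's trick reduces to dimensional analysis: the only quantities available are the blast radius R, the time t, and the air density ρ, so up to a constant of order one the energy must be E ≈ ρR⁵/t². A sketch with rough numbers read off a published photo (these particular values are illustrative, not Taylor's exact figures):

```javascript
// Sedov-Taylor blast-wave estimate: E ≈ rho * R^5 / t^2, with the
// dimensionless constant taken as ~1. Inputs are rough illustrative
// readings from a labelled Trinity photograph.
const rho = 1.2;   // air density, kg/m^3
const R = 140;     // fireball radius in metres at time t
const t = 0.025;   // seconds after detonation

const E = rho * Math.pow(R, 5) / (t * t); // energy in joules
const kilotonsTNT = E / 4.184e12;         // 1 kt TNT = 4.184e12 J

console.log(kilotonsTNT.toFixed(1)); // 24.7 — same order as the ~20 kt actual yield
```

The point is that the scale bar and timestamp alone pin the yield to within a factor of order one, which is why the photo's publication effectively leaked the classified number.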

pjmorris 12 hours ago 1 reply      
Obligatory reference to 'The Making of the Atomic Bomb', Rhodes. Terrific book on the people, physics, and politics behind 'The Bomb' and its use.

Any recommendations on biographies of Fermi?

chiefofgxbxl 4 hours ago 1 reply      
Due to the blast in the desert, quartz and other minerals in the sand were melted to form Trinitite, a green glass. It would be interesting to hold this piece of history in a museum (according to Wikipedia, it's safe to handle).


maverick_iceman 13 hours ago 0 replies      
Great resource on Fermi problems and related topics.[1]


tehchromic 12 hours ago 0 replies      
Once had a job working in the basement of a building in Los Alamos NM operating a digital scanner to digitize various documents from the lab archives. I encountered a few boxes of photos of various atomic tests - beautiful and spooky.
meerita 9 hours ago 0 replies      
Imagine witnessing the Tsar Bomba https://en.wikipedia.org/wiki/Tsar_Bomba — a 64 km tall mushroom cloud; the fireball, "about 8 kilometres (5.0 mi) in diameter, was prevented from touching the ground by the shock wave, but nearly reached the 10.5-kilometre (6.5 mi) altitude of the deploying Tu-95 bomber."
coldcode 13 hours ago 1 reply      
Seeing a black and white description of an explosion estimated at 10,000 tons of TNT is still mind-boggling. Even harder to believe that within a decade or so there were explosions of 100,000,000 tons of TNT. E=MC2 gets big in a hurry.
craftandhustle 4 hours ago 0 replies      
While reading this, I started wondering if there are any other 3rd party (civilian?) accounts of the event or if the geography / remoteness made it 'invisible' to everyone else? I have a funny visual in my head of a random farmer casually mentioning to his wife a 'weird glow in the horizon' on an otherwise regular Monday morning.
slackpad 7 hours ago 1 reply      
Trinity site is an interesting place to visit - it's open to visitors twice a year. http://www.wsmr.army.mil/PAO/Trinity/Pages/Home.aspx
hwc 14 hours ago 1 reply      
We did this calculation in a course I took in college. I've completely forgotten how it worked.
tajen 9 hours ago 1 reply      
Is there only one page? I'm on an iPad, there doesn't seem to be arrows for next/previous page.
Kenji 14 hours ago 1 reply      
I just love how practically minded these men were. Whatever they had available to them, they made it work somehow. If you want to read another account of the Manhattan Project / first nukes, I highly recommend the book "Surely you're joking, Mr. Feynman!" which contains a wealth of interesting and funny (and sometimes touching) trivia about the life of Feynman and how things went at Los Alamos.
AstralStorm 13 hours ago 1 reply      
Excellent page that is completely unreadable on mobile.

Let us download a PDF at least.

Node v6.9.0 (LTS) nodejs.org
276 points by dabber  12 hours ago   71 comments top 13
dcgudeman 12 hours ago 5 replies      
When a native Promise incurs a rejection but there is no handler to receive it, a warning will be printed to standard error.

Probably the most important change, other than the improvements in ES2015 support.
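
For context, a minimal sketch of the behavior being discussed: a process-level 'unhandledRejection' listener lets you observe (or fail hard on) the rejections that would otherwise only produce the new stderr warning.

```javascript
// Minimal sketch: catch rejections that have no .catch() handler.
// Node >= 6.6 prints a warning for these; a process-level listener
// lets you log them or crash the process instead.
let captured = null;
process.on('unhandledRejection', (reason, promise) => {
  captured = reason; // in real code: log and/or process.exit(1)
});

Promise.reject(new Error('boom')); // no handler ever attached

setImmediate(() => {
  console.log(captured.message); // 'boom'
});
```

In production code many teams treat any unhandled rejection as a bug and exit, since a silently swallowed rejection can hide real failures.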

wheresvic1 11 hours ago 2 replies      
eistrati 12 hours ago 1 reply      
> v6.9.0 marks the transition of Node.js v6 into Long Term Support (LTS) with the codename "Boron"

Welcome, Boron! We've been waiting for you :)

MikeKusold 11 hours ago 1 reply      
For those using nvm:

nvm install lts/boron && nvm alias default lts/boron

You may need to update nvm before that works:

 ( cd "$NVM_DIR"
   git fetch origin
   git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" origin`
 ) && . "$NVM_DIR/nvm.sh"

dancek 11 hours ago 2 replies      
> Support has been dropped for Windows Vista and earlier and macOS 10.7 and earlier.

Windows Vista was released in 2007, OS X 10.7 in 2011.

For some reason, third-party software seems to support old Windows versions much longer than OS X. Meanwhile, Apple stops supporting old hardware in new OS X versions quite quickly (IMHO), probably for business reasons.

I had one Macbook in the past that I had to put Linux on, just because most software dropped support for the most recent available OS version.

indexerror 11 hours ago 2 replies      
Can we get an estimated time of arrival for ES2015 imports in Node.js? I understand that the support needs to come from V8 first of all.
tschellenbach 8 hours ago 0 replies      
The ES6 improvements make Node very powerful. I'm a Python developer, and the improvements to the language are impressive. No support for array unpacking and import statements just yet, though.
jakub_g 11 hours ago 0 replies      
> Very large arrays are now truncated when passed through util.inspect(), this also applies to console.log() and friends.

This is a funny one, in conjunction with the facts about npm that 1) npm has no public programmatic API (and zero docs on it), 2) npm devs encourage you to exec() stuff and read stdout, as all the internal methods can be changed/removed at any time, also in non-semver-major.

I used to have script doing `npm view | grep ...` which now fails under node 6. The best solution I guess is to use a hardcoded version of npm as a dependency and rely on programmatic API rather than stdout.

neovive 4 hours ago 1 reply      
I know it's not completely related, but does anyone know if ExpressJS runs on 6.9?
SomeHacker44 6 hours ago 1 reply      
I'm a little surprised that "LTS" doesn't mean at least five years of support like Ubuntu.
dcgudeman 12 hours ago 1 reply      
Hopefully an official 'Boron' tagged docker image will follow shortly.
brosky117 9 hours ago 0 replies      
I'm stoked about the ES6 support.
ilaksh 6 hours ago 2 replies      
I am assuming this means they will tag v7 soon so I can easily use it with nvm?

Also, does anyone know if there is a way to get babel to output code targeting Node v7? Like, just stuff in the 'latest' preset that isn't out-of-the-box (with or without the flag).

Denormalization for Performance: Don't Blame the Relational Model dbdebunk.com
32 points by sgeneris  7 hours ago   10 comments top 2
exmicrosoldier 2 hours ago 1 reply      
I find zero explanation of how to solve performance with a relational model.

As I understand the article, it seems to say: just because all the existing databases you've seen suck at performance when normalized doesn't mean normalization can't be fast.

EGreg 4 hours ago 3 replies      
From my experience, denormalization of a relational model is a special case of "caching". What you are really doing is caching data where it is most likely to be used at access time.

The overview is like this:

1) You need some operation to have low turnaround time / latency

2) So you maintain and update a cache, typically while writing to the data store.

3) Like all caches, you can either invalidate it before changing the data, slightly harming availability, or you can have it lag behind (eventual consistency).

So the article is actually wrong, you don't need to trade consistency for performance. You can increase read performance (lower latency) without losing consistency, by having a cache (denormalization) and invalidating the relevant caches BEFORE writing data (which lowers write latency, but not necessarily write throughput... typically, we don't care about write latency as much as read latency.)
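
The comment's three steps can be sketched as a toy example (all names are mine): the denormalized aggregate is just a cache over the normalized rows, invalidated before the write that would make it stale.

```javascript
// Toy illustration of "invalidate before write": the per-user order
// count is a denormalized aggregate cached over the normalized rows.
const orders = [];                 // normalized store (source of truth)
const orderCountCache = new Map(); // denormalized aggregate

function readOrderCount(userId) {
  if (!orderCountCache.has(userId)) { // cache miss: recompute from truth
    const n = orders.filter(o => o.userId === userId).length;
    orderCountCache.set(userId, n);
  }
  return orderCountCache.get(userId);
}

function addOrder(order) {
  orderCountCache.delete(order.userId); // invalidate BEFORE the write
  orders.push(order);
}
```

Reads stay cheap, and because the invalidation happens before the data changes, a reader never sees a cached value newer data has contradicted — the consistency-preserving variant the comment describes.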

Mirai Botnets level3.com
101 points by _jomo  15 hours ago   34 comments top 7
BlickSilly 11 hours ago 0 replies      
IANA Security Expert, but simple advice from Krebs:

>Anyone looking for an easy way to tell whether any of your network ports may be open and listening for incoming external connections could do worse than to run Steve Gibson's Shields Up UPnP exposure test.


another thing to remember... ALL IoT devices have admin credentials; it's just a matter of whether or not they can be connected to, whether the credentials are compromised, and whether the device is susceptible to brute force.

robolange 12 hours ago 3 replies      
The main take-aways are: 1) Use a firewall between your Internet connection and your IoT devices, and 2) disable UPnP support on your firewall.

It's disturbing how many devices enable telnet and/or ssh by default, make it difficult or impossible for a user to actually change the default password, and subvert firewalls using P2P protocols. At the end of the day, to secure your network you really do need to run nmap regularly against your subnet checking for devices with open ports, and tcpdump between your gateway and your devices, monitoring what connections they are actually making.

For ordinary users, the situation is truly hopeless. They are pwned by default if they buy into IoT.

weej 5 hours ago 0 replies      
For those interested a couple weeks ago I did a source code review and write-up: "Mirai (DDoS) Source Code Review"


M_Grey 11 hours ago 4 replies      
The IoT is a disaster in slow-motion, and outside of highly technical circles, it seems to be one that is totally invisible.
caycep 10 hours ago 0 replies      
How bad are Ubiquiti devices, and the state of security and firmware updates for them? I was thinking about switching to a Ubiquiti AmpliFi home router from TP-Link partly out of concern for this, and was hoping that their firmware and security updates would be a little more on point. But one of their routers is on this list...
alexvay 11 hours ago 0 replies      
How many remember the Smurf Attack https://en.wikipedia.org/wiki/Smurf_attack ?

I remember claims that this type of attack was fixed forever. But physics doesn't change... Easily.

rasz_pl 3 hours ago 0 replies      
>Level 3 Threat Research Labs will continue to identify and track developments in these botnets

but not take any action against the actual source of the traffic: the ASes that host bots with static IPs.

>We will also work with hosting providers and domain registrars to block traffic to these C2s

but again not do anything to close the source of the problem. L3 admits they have a list of ~500K static IPs with bots behind them, yet they aren't blocking or reporting those. Why? Because traffic is traffic and they are in the business of selling pipes?

Google Flights will now tell you when fares will increase techcrunch.com
375 points by jonbaer  16 hours ago   150 comments top 21
karakal 14 hours ago 6 replies      
I'm curious, how does one gain access to flight schedules/fares? Is this something that anyone can get their hands on and create a service (complexity aside), or do you need some sort of license that costs thousands of dollars?

Does each airline have their own way of exporting this data? Is there a single entity that aggregates from all of them? How does the actual data look like? (Is it a dump every X hours, or something more modern like a stream you can subscribe to?).

lmkg 14 hours ago 10 replies      
Bing Travel had predictions for fare fluctuations for air tickets back in 2009. It was pretty awesome back when I flew a lot, but they apparently killed the feature in 2014. Now Google's bringing it back, two years later.

I don't understand the future sometimes. ¯\_(ツ)_/¯

forgotpwtomain 14 hours ago 7 replies      
I almost never use anything other than Google Flights when searching for a ticket. The UI is just so much better and easier to use than the ridiculously bloated crap that most travel sites are (especially when they are trying to shove car-rental and hotel deals in your face).

The one drawback is sometimes they are missing local / smaller airlines from their list of flights (which can be a major price difference from the major ones) on short flights.

bluetidepro 14 hours ago 4 replies      
I've searched with Google Flights a few times, but they are consistently more expensive than the flights I find with other services (Hipmunk, specifically). Has anyone else noticed this before, too?

Does anyone know why the prices would be that much different? For the searches I've done, Google Flights is close to $150 more than what Hipmunk shows. Does Hipmunk maybe just have some sort of promo or lower price that Google Flights can't offer?

EDIT: This curiosity also relates to what "karakal" is asking in their comment: https://news.ycombinator.com/item?id=12736433 - Like is this data just universal or do some services get better deals than others?

samfisher83 14 hours ago 0 replies      
Doesn't Kayak do the same thing? They have the wait/buy advice too.

However, my favorite site nowadays is Skiplagged, the site United tried to sue.

Zenst 13 hours ago 0 replies      
I take it they do not factor in currency, which would be a factor if you pay by credit card and exchange rates change when paying in a currency other than your home one.

That is, I feel, very dynamic in part, though still something that plays more of a factor in price than other aspects.

I'm not sure how much airlines adjust for that, and given all fuel is tied to the USD, it is more of a factor for non-USD pricing as the USD exchange rate moves.

Now, a feature that monitored that and fuel-cost changes could potentially give people a heads-up before the airlines adjust; that might be a good feature.

Though I can count the number of flights I have taken on my hands, so I'm not that au fait with the dynamics airlines use to adjust prices, or how often they do.

patja 6 hours ago 0 replies      
I wish the time horizon on this matched the time horizon on which tickets are available. It seems to be about a month short when I compare to directly shopping for flights on airline websites.

Granted that shouldn't matter quite as much for the "when will fares increase" question, since they have to have a baseline to evaluate the increase magnitude, but it sure matters simply for the "I'm planning to fly in for a popular event a year from now and I know tickets are being snapped up so I want to compare fares for flights as soon as they become orderable" scenario.

JonoBB 9 hours ago 1 reply      
I love the interface of Google flights, but I can usually find cheaper tickets elsewhere (usually on skyscanner). I've no idea how ticket pricing works and why one site can be so much cheaper than others, but skyscanner is usually 15-20% cheaper on international flights for me.
whitej125 12 hours ago 0 replies      
Will Google Flights tell you if prices are going to decrease in the near future too? You would generally think if they can predict one direction, then they can predict the opposite. However, telling someone that a cheaper flight may exist in the future is going to convince them to leave the site and possibly not come back.
yalogin 14 hours ago 4 replies      
Almost every site offers this. Isn't it unhelpful in the long run? Consumer behavior will change, and prices will smooth out, the window of lower prices will disappear, or the percentage change will become smaller.
flashman 4 hours ago 1 reply      
I have time series data from over a thousand gas stations. How can I forecast the price of gas?
triangleman 13 hours ago 0 replies      
Speaking of fares going up, has anyone else noticed fares going up on Google Flights, soon after searching for them?
adrianratnapala 14 hours ago 0 replies      
Is this actual new information being revealed, or just a way of presenting what had been available in the form of the time-vs-price bar graph that it has had since ancient times?

The bar-graph had been hidden from the usual UI by decree of UX designers (Maths is hard!), but was always available as a kind of easter-egg.

Perhaps this feature is the compromise?

netfire 14 hours ago 1 reply      
The title is a little misleading. The screenshot shows that Google is showing that "prices will likely increase" which is different than the "fares will increase" in the title. "fares might increase" would be more appropriate here.
philfrasty 10 hours ago 0 replies      
Fun fact: its search for railway connections in Germany is by far better than the original site's (Deutsche Bahn).
losteverything 9 hours ago 0 replies      
Will it help predict when mistakes are most likely to occur? The $30 RT EWR to HNL?
pkaye 14 hours ago 3 replies      
I wonder if the airlines will start gaming this: seeing what people are being told and doing the opposite to catch them off guard.
ausjke 6 hours ago 0 replies      
Been using Google Flights for 1+ years now, and it's great! Beats all other alternatives for me.
mcfunk 11 hours ago 0 replies      
As a once frequent Farecast user I was initially excited about this, but realized that for the most part airfarewatchdog has completely replaced this use case in my life.
cft 13 hours ago 1 reply      
I wish they released a native app. A web app is great in concept, but for the actual research and for committing to buying the tickets, a native app is preferable.
jsprogrammer 12 hours ago 0 replies      
flights.google.com has been doing it for at least a week or two already.

I haven't seen it, but I hope that it also tells you 'when fares will decrease'.

Show HN: Swip.js, a library to create multi device experiments github.com
132 points by brackcurly  15 hours ago   15 comments top 5
anotheryou 9 hours ago 2 replies      
Does anyone know how well you can time via an app?

I wonder how it sounds to play sound across devices. Streaming and clock syncing can happen beforehand, just the playback has to be very sync to an absolute timing.

Maybe one can do some crude sound-propagation synthesis by playing with timing, sound travel time, GPS and a crowd. If you synthesize the sound, the app even stays very small. You could e.g. make an ocean wave roll through the audience (when every device knows where it is and knows the exact time the wave will hit that position).

Same could work with devices as pixels, but I don't find it that interesting.
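
The wave idea reduces to a per-device playback delay. A sketch, assuming each device knows its own position, the wave's origin, and a shared clock (all function and parameter names here are hypothetical):

```javascript
// Each device schedules playback at startTime + distance / waveSpeed,
// so the sound "rolls" outward through the crowd. Positions are metres
// in some shared coordinate frame -- an assumption of this sketch.
function playbackDelayMs(devicePos, origin, waveSpeedMps) {
  const dx = devicePos.x - origin.x;
  const dy = devicePos.y - origin.y;
  return (Math.hypot(dx, dy) / waveSpeedMps) * 1000;
}

// A device 5 m from the origin, with the wave sweeping at 5 m/s,
// starts playing 1000 ms after the origin device.
```

The hard part in practice is the shared clock: devices would need to sync (e.g. against a server timestamp) well under the ~20-30 ms threshold at which human ears notice audio misalignment.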

rawnlq 14 hours ago 2 replies      
Using the devices as planes for the golf demo is incredibly creative!


I know you can use HTML5 DeviceOrientation for angle and the "swip" for relative positioning. But how did you get the physical size of the screen of each device?

mcs_ 11 hours ago 1 reply      
I don't have a use case for this (yet) but ... the demo is absolutely incredible. Very nice work
Lxr 1 hour ago 0 replies      
That golf game has a lot of potential even as a native app, nice!
addedlovely 7 hours ago 0 replies      
Wow, great work. I assume the pinch gesture calibrates the positioning and is always center of the screen? The bounce off the screen edge on pong is satisfying.
On the wikileak-ed emails from Tanden on Lessig lessig.tumblr.com
66 points by dankohn1  9 hours ago   40 comments top 7
dcposch 2 hours ago 0 replies      
This is incredibly gracious of Lessig, to defend someone who treated him poorly.

I donated to both his ill-fated Mayday PAC and his ill-fated presidential campaign last year. He is a selfless guy who is trying to address the root cause of US political dysfunction.

I wish him the best of luck in his future projects.

IIAOPSW 3 hours ago 7 replies      
I used to respect what Assange is doing. But recently it's clear his actions aren't some noble crusade to speak truth to power, promote freedom of speech, or otherwise promote civil liberties. If that were the point, then he would leak whatever he has whenever he gets it, instead of trying to time it to affect the election. If he really believed in the causes he claims to believe in, he wouldn't be trying to get Donald-lets-silence-my-critics-Trump into office.

I guess in light of new evidence I changed my opinion. Why can't other people be rational like me.

slantedview 1 hour ago 2 replies      
Who would expect such a classy response to a threat of violence against a "smug" and "pompous" professor. Or maybe Tanden is wrong and it is actually she who needs an adjusting.
aub3bhat 1 hour ago 2 replies      
I agree it's very gracious of Lessig, and I completely agree that individuals deserve privacy. But let's be honest: even in this case the forgiveness originates in the fact that all the individuals involved are on the same side in this election. Further, the justification about her being engaged with the public / public sector is hollow. Had it been Karl Rove saying the same thing, would the anger be justified?

If we are going to judge people by their private communications at all, then let's at least be consistent.

E.g., here is the woman who had a left-wing blogger fired:

"Progressive blogger fired for calling Hillary Clinton ally a 'scumbag'"

Read more: http://www.politico.com/story/2016/05/matt-bruenig-neera-tan...

doe88 23 minutes ago 0 replies      
TLDR; When they go low, you go high.
lhnz 2 hours ago 3 replies      
You can't just blame the attacker; it was poor information security practice that enabled the leaks.
slantedview 53 minutes ago 1 reply      
While Lessig is gracious here, I don't have to be.

Tanden and Podesta are representative of what will become the Clinton white house. This sort of rhetoric indicates where they, and Clinton, stand on a variety of interconnected issues, from money in politics to lobbying and outright corruption.

Lessig has done more than almost anyone to champion the idea of separating money from politics. To many, he is a hero. To Tanden and Podesta, two of the current (and future) policy leaders of the Clinton administration, he is a smug professor who needs to have the shit kicked out of him. This should tell you all you need to know about the outlook and direction of a future Clinton administration.

Never in American history has money had such a stranglehold on our elected officials. The winners in this system, such as Hillary Clinton, are perfectly happy with the status quo. People like Lessig want to blow it up. Take note.

JDK 9 release schedule java.net
151 points by erl  14 hours ago   135 comments top 10
haalcion3 5 hours ago 4 replies      
Been coding Java for close to 20 years. Can anyone show me what's being done in the language to bring on newcomers, or did that ship sail 10-15 years ago?

Some ideas that would bring people back:

* Wildly new, terse, and clear syntax and a great library of built-in tools that are briefly and intuitively named.

* Easily write and design interfaces that generate both/either back-end or matching integrated front-end code which is off in its own directory and can easily be used by existing JavaScript and HTML.

* Similarly, be able to generate the JavaScript front-end code that uses those JS client libraries, with easily writable/pluggable generators, so that it can target Angular 1.x, 2, ReactJS, Bootstrap, etc. in "best-practice" ways that can be updated frequently as the community changes.

* Simultaneously provide the option to serve very similar pages using straight HTML, degrading even to the point that a text only browser could use the site easily.

* Easily define responsiveness of pages.

* Support multiple 3D, 4D, etc. interfaces with customizable inputs to be forward-compatible without overdoing complexity (i.e. it's really pluggable).

* Similarly support generation of almost any kind of service integration, with easy pluggable authN/R.

* Easily scalable.

* Relational, noSQL, versioning DB (noms) support.

* Make fun books for kids and a site where they can share what they've written, write games, build things, etc.

* Make it integrate with every browser (even some older versions) and operating system.

* Make it compile low-level vs. byte code so it's fast as shit.

spullara 12 hours ago 3 replies      
The next interesting thing for the JVM is value types in Java 10. It may convince me to use it pre-release.
merb 13 hours ago 1 reply      
Well, not too bad if they could deliver JDK 10 (Valhalla) a bit faster. JDK 9 is less important than JDK 10.
gravypod 13 hours ago 1 reply      
Does anyone have the full changelog of added/new features?
sytringy05 7 hours ago 0 replies      
I can't wait for that REPL. I've almost always got the IntelliJ debugger running with the Evaluate Expression window open.
javanese 12 hours ago 7 replies      
So, uh, how many people are using Java 8?

I still see projects using Java 5...

ape4 9 hours ago 1 reply      
If you are doing dates the nice way as yyyy/mm/dd, you should use dashes, i.e. yyyy-mm-dd. Let the slashes mean other styles.
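For what it's worth, the dashed form is exactly what ISO 8601 prescribes. A throwaway sketch in Python (chosen just for brevity; in the Java 8+ world of this thread, `java.time`'s `DateTimeFormatter.ISO_LOCAL_DATE` produces the same dashed form):

```python
from datetime import date

d = date(2016, 10, 19)
print(d.isoformat())             # ISO 8601 uses dashes: 2016-10-19
print(d.strftime("%m/%d/%Y"))    # slashes stay free for locale styles: 10/19/2016
```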
Bjartr 10 hours ago 0 replies      
Just in time for GWT to get support for Java 8
Scarbutt 11 hours ago 1 reply      
Will the "modular source code" feature help handle the "jar hell" problem?
puppetmaster3 11 hours ago 2 replies      
This is posted a year early. Please post this in a year.

I see no JSON, I have to use a 3rd party lib. And ... no word on fixing logging divergence.

Show HN: Skeletal Animation in Your Browser via Dual Quaternion Linear Blending chinedufn.github.io
63 points by chinedufn  13 hours ago   12 comments top 4
santaclaus 1 hour ago 0 replies      
Super cool! Any chance that you'll add skinning with optimized centers of rotation (CoR) [1]? It looks really cool, and seems to get the best of both linear blend and dual quaternion skinning!

[1] https://www.disneyresearch.com/publication/skinning-with-opt...

pzone 2 hours ago 0 replies      
Dual quaternion skinning seems to have failed to catch on. Support was removed from Unreal Engine for example. Why is this?
eriknstr 11 hours ago 1 reply      
Neat demo. I have a couple of suggestions for camera control:

1. Allow the use of the mouse for camera control.

2. Use up/down arrows for rotation similar to what you do with left/right and have separate keys control zoom.

Jasper_ 6 hours ago 2 replies      
Why not use the standard weighted matrix blending? What does the quaternion approach provide?
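Jasper_'s question gets at the core trade-off. Below is my own minimal numpy sketch (not the demo's code, and with translations omitted so the "dual" part of the dual quaternions drops out) of why averaging rotation matrices shrinks geometry, while blending quaternions and renormalizing, as dual quaternion linear blending does, keeps the transform rigid:

```python
# Contrast matrix blending with (the rotation-only core of) dual quaternion
# linear blending, for a vertex weighted 50/50 between two joints.
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, p):
    """Rotate point p by unit quaternion q via q * (0, p) * conj(q)."""
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate
    return qmul(qmul(q, np.array([0.0, *p])), qc)[1:]

def quat_axis_angle(axis, angle):
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.array([np.cos(angle / 2), *(np.sin(angle / 2) * axis)])

def rot_matrix_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Joint A: identity. Joint B: 90-degree twist about z. Weights: 0.5 each.
w, angle = 0.5, np.pi / 2
p = np.array([1.0, 0.0, 0.0])

# Matrix blending: average the matrices, then transform the vertex.
m_blend = w * np.eye(3) + w * rot_matrix_z(angle)
p_matrix = m_blend @ p                        # pulled off the unit circle

# Quaternion blending: average, renormalize, then transform the vertex.
q_blend = w * quat_axis_angle([0, 0, 1], 0) + w * quat_axis_angle([0, 0, 1], angle)
q_blend /= np.linalg.norm(q_blend)            # back to a unit quaternion
p_dq = rotate(q_blend, p)                     # a pure 45-degree rotation

print(np.linalg.norm(p_matrix))  # ~0.707: the "candy wrapper" collapse
print(np.linalg.norm(p_dq))      # ~1.0: rigid, length preserved
```

The average of two orthonormal matrices is not orthonormal, so vertices near a twisting joint collapse inward (the "candy wrapper" artifact). The renormalized quaternion is a valid rotation again, so the skinned vertex keeps its distance from the bone; the full dual quaternion form extends the same idea to rotation-plus-translation.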
DNA reveals a hybrid Ice Age bison species depicted in ancient cave art cbc.ca
47 points by biot  13 hours ago   11 comments top 4
ChuckMcM 11 hours ago 1 reply      
Apparently if you collide two species of ruminant with enough energy you get the Higgs Bison. :-)
throwaway98237 12 hours ago 1 reply      
Bahh! I read it as "DNA reveals the Higgs boson". Was mighty intrigued.
curtis 12 hours ago 1 reply      
> DNA reveals the Higgs bison

I had to read that title several times.

douche 7 hours ago 1 reply      
It's a shame that the original title isn't used here.

I'm curious about Higgs now. Was this one person who had an interest in both extinct bovines and predicted subatomic particles?

366 unique views in three days, no circuit board sales; what am I doing wrong? metaboard.space
7 points by ysteiner  4 hours ago   3 comments top 2
Aeolun 0 minutes ago 0 replies      
I just looked at the page, but I'm not interested in a circuit board at all. Why would I buy it?

Where your visitors are coming from is extremely relevant to why you have or haven't sold anything, I think.

ThrowawayR2 3 hours ago 1 reply      
Well, let's see:

1) Looking at the description:

"The Meta Board is a physical programmable circuit board that allows you to build physical circuits on a breadboard without wire or specific value resistors and capacitors using a host microcontroller, such as the Arduino Nano."

This is rather uninformative, to say the least. In what way does it replace resistors/capacitors? How many does it replace? How is it programmed?

2) On watching the video, it seems as if this device lets a person turn on or off some pins under the control of a computer. That would make this device a digital I/O board, minus the I part, or a relay board, perhaps? (Both of which are readily available online already.) However, the description suggests that it does more than this?

3) The "Why use the Meta Board?" reasons are either not very compelling or have common solutions. For example, re: #2, resistors and capacitors cost a negligible amount, and re: #7, it's not particularly clear what that means.

4) There is no description of the electrical characteristics of the input/output pins or anything else.

5) The image links on the metaboard.space page are broken, at least when viewed from North America. Manually opening the links returns an access-denied message from the CDN.

       cached 19 October 2016 07:02:01 GMT