hacker news with inline top comments    .. more ..    24 Aug 2014 Best
fork() can fail
755 points by dantiberian  3 days ago   313 comments top 43
kabdib 2 days ago 6 replies      
When I was young and really didn't understand Unix, my friend and I were summer students at NBS (now NIST), and one fine afternoon we wondered what would happen if you ran fork() forever.

We didn't know, so we wrote the program and ran it.

This was on a PDP-11/45 running v6 or v7 Unix. The printing console (some DECWriter 133 something or other) started burping and spewing stuff about fork failing and other bad things, and a minute or two later one of the folks who had 'root' ran into the machine room with a panic-stricken look because the system had mostly just locked up.

"What were you DOING?" he asked / yelled.

"Uh, recursive forks, to see what would happen."

He grumbled. Only a late 70s hacker with a Unix-class beard can grumble like that, the classic Unix paternal geek attitude of "I'm happy you're using this and learning, but I wish you were smarter about things."

I think we had to hard-reset the system, and it came back with an inconsistent file system which he had to repair by hand with ncheck and icheck, because this was before the days of fsck and that's what real programmers did with slightly corrupted Unix file systems back then. Uphill both ways, in the snow, on a breakfast of gravel and no documentation.

Total downtime, maybe half an hour. We were told nicely not to do that again. I think I was handed one of the illicit copies of Lions Notes a few days later. "Read that," and that's how my introduction to the guts of operating systems began.

cperciva 3 days ago 5 replies      
This reminds me of one of the most epic bugs I've ever run into:

    mkdir("/foo", 0700);
    chdir("/foo");
    recursively_delete_everything_in_current_directory();
Running as root, this usually worked fine: It would create a directory, move into it, and clean out any garbage left behind by a previous run before doing anything new.

Running as non-root, the mkdir failed, the chdir failed, and it started eating my home directory.

azinman2 3 days ago 3 replies      
I see a lot of comments blaming the programmer. This is completely the wrong attitude.

Why are you treating the programmer like a machine? They're not a machine -- they're human. Regardless of whether they fully understand the API, things should have sane defaults for HUMAN FACTORS reasons.

Bugs will always exist. The Linux kernel -- a code base with over a decade of work put into it by many highly skilled people -- still has many bugs, which just shows that bugs are inevitable.

The goal should be to assume people will do stupid things and make fatal behavior more explicit/difficult. Do we really need -1 for kill to do such behavior? How common is that anyway? It's a pretty destructive behavior, and probably should be removed from kill. The human factors approach would say that if you really want that behavior, write a for loop over the list of pids, because it should never be within easy reach, especially for such an uncommon scenario.
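A sketch of the guard rail being argued for here -- a hypothetical safe_kill wrapper (Python; the name is invented) that refuses the special pid values before they ever reach kill(2):

```python
import os
import signal

def safe_kill(pid, sig):
    """Signal exactly one process. kill(2) treats pid <= 0 as 'a whole
    process group' or 'everything I am allowed to signal', so a stored
    -1 from a failed fork() must never reach it unchecked."""
    if pid <= 0:
        raise ValueError("refusing special pid %d; loop over explicit pids instead" % pid)
    os.kill(pid, sig)
```

With something like this in place, the article's bug turns into a loud ValueError instead of a machine-wide signal.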

Apple's iOS API is similar. Try to insert a nil object into an array? Crash. Try to reload an item in a list that's past the known objects index? Crash. So instead of doing something sane like reloading the entire list, the user has a shit experience, because off-by-one errors happen easily, especially in front-end/model work [1 re: fb's persistent unread chat].

Not recognizing the human part of things leads to issues everywhere.. reminding me of this article on human factors in health care previously posted on HN [2].

Conclusion: design for humans and default to non-fatal situations.

[1] http://facebook.github.io/flux/
[2] http://www.newstatesman.com/2014/05/how-mistakes-can-save-li...

spudlyo 3 days ago 3 replies      
If a function be advertised to return an error code in the event of difficulties, thou shalt check for that code, yea, even though the checks triple the size of thy code and produce aches in thy typing fingers, for if thou thinkest "it cannot happen to me", the gods shall surely punish thee for thy arrogance. [0]

[0]: http://www.lysator.liu.se/c/ten-commandments.html

jwise0 3 days ago 1 reply      
In a similar family, note also that setuid() can fail! If you try to setuid() to a user that has reached their ulimit for number of processes, then setuid() will fail, just like fork() would for that user.

This is a classic way to get your application exploited. Google did it (at least) twice in Android: once in ADB [1], and once in Zygote [2]. Both resulted in escalation.

Check your return values! All of them!

[1] http://thesnkchrmr.wordpress.com/2011/03/24/rageagainsttheca...
[2] https://github.com/unrevoked/zysploit
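A hedged sketch of what checking that return looks like in practice (Python; drop_privileges is a made-up helper) -- the key point is to abort rather than carry on with the old, higher privileges:

```python
import os

def drop_privileges(uid):
    """Switch to `uid` and refuse to continue if the switch failed.
    setuid() can fail e.g. when the target user is at its process
    limit -- exactly the ADB/Zygote bug class described above."""
    try:
        os.setuid(uid)
    except OSError as e:
        # Continuing here would leave us running with our original
        # (possibly root) privileges: the exploitable state.
        raise SystemExit("setuid(%d) failed: %s" % (uid, e))
    if os.getuid() != uid:
        raise SystemExit("setuid(%d) had no effect" % uid)
    return os.getuid()
```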

jgrahamc 3 days ago 2 replies      
Quietly goes to check the last piece of C I wrote containing a fork():

    if (daemon && !test_mode) {
      int pid = fork();
      if (pid == -1) {
        fatal_error("Failed to fork");
      }
      if (pid != 0) {
        write_pid(pid_file, pid, !test_mode);
        exit(0);
      }
    } else {
      write_pid(pid_file, getpid(), !test_mode);
    }

jbb555 2 days ago 0 replies      
This, to me, is a good example of why exceptions in modern languages are a good way to handle errors. In this case the user has basically ignored the error return from fork() and then accidentally used it in kill.

If fork() had thrown an exception for an unexpected failure then the user could not have accidentally ignored it in the same way.

I realize that this is not appropriate for a system call but it seems like a good example of why handling errors using exceptions is helpful sometimes.

mutation 3 days ago 3 replies      
Just noticed that in Perl the behavior is slightly different: http://perldoc.perl.org/functions/fork.html unsuccessful fork() returns undef, effectively stopping you from kill-ing what you don't want to kill.
quotemstr 3 days ago 2 replies      
I wish posix_spawn were ubiquitous; it's a much better process-launching interface than fork: it's naturally race-free and amenable to use in multi-threaded programs, and unlike fork(2), it plays well with turning VM overcommit off. (If overcommit is off and a large process forks, the system must assume that every COW page could be made process-private and reserve that much memory. Ouch.)
AnimalMuppet 3 days ago 2 replies      
Somewhat OT, but in the same neighborhood:

Standard file handles are another thing you should not assume are there (though I'm not sure how to test for it programmatically).

We once had a user that, for whatever reason, tweaked their Unix installations to not pass an open stderr to processes - they just got stdin and stdout (that is, file handles 0 and 1, but not 2). If you wrote to stderr anywhere in your program, it wrote to whatever happened to be open on handle 2, which was not a stderr the OS passed in.

Yeah, that's a pretty insane thing to do, but somebody was doing it...

IgorPartola 3 days ago 1 reply      
Back in the day I had a Motorola Atrix (remember those? First dual core Android phone, best thing since sliced bread, abandoned by Motorola a few months after launch?). Well, one of the ways to root it was to keep forking a process until the phone ran out of memory. After fork failed, you were left with a process that for some reason was running with root privileges...
Aurel1us 3 days ago 5 replies      
Just as a reminder: "So, malloc on Linux only fails if there isn't enough memory for its control structures. It does not fail if there isn't enough memory to fulfill the request." - http://scvalex.net/posts/6/
zokier 3 days ago 2 replies      
Who needs type safety when we got integers.
trippy_biscuits 3 days ago 2 replies      
"Unix: just enough potholes and bear traps to keep an entire valley going."

If you don't understand how to use sharp tools, you may hurt yourself and others. Documentation for fork() clearly explains why and when fork() returns -1. Those that find the man page lacking or elusive may get more out of an earnest study of W. Richard Stevens' book, Advanced Programming in the UNIX Environment. In any case, every system programmer should own a copy and understand its contents.

ajarmst 3 days ago 1 reply      
Stevens and Rago, "Advanced Programming in the Unix Environment", pp. 211-212.

    if ((pid = fork()) < 0)
        err_sys("fork error");

is idiomatic in Unix.

ionelm 3 days ago 1 reply      
Seems Python handles this correctly (by raising an exception):

    >>> resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))
    >>> os.fork()
    Traceback (most recent call last):
      File "<ipython-input-7-348c6e46312a>", line 1, in <module>
        os.fork()
    OSError: [Errno 11] Resource temporarily unavailable

prasoon2211 3 days ago 1 reply      
The first time I learnt of fork (from an OS book), the example had three branches to the if statement after fork - and the first tested for a negative pid. I suspect that the reason this link has 400 odd upvotes is because more people aren't learning OS the correct way in the beginning. Or maybe my OS book was nice. IDK.
brazzy 3 days ago 3 replies      
And this right there is why exceptions are a superior mechanism of announcing errors...
mcguire 1 day ago 0 replies      
If you have set a non-root user's process limits correctly, sending SIGKILL to all of that user's processes is likely a perfectly fine response to their fork() failing.

If you haven't limited the number of processes a given non-root user can start to some value the machine can handle, sending SIGKILL to all of the user's processes is probably not going to do any more damage.

If a program running as root doesn't correctly handle fork() failing, someone needs to be taken out back and beaten with a stick. Maybe the person who wrote the program, maybe the person who ran it as root. But somebody.

serve_yay 3 days ago 1 reply      
Out of "-1 as failure return value", and "-1 to signal all possible processes", at least one is a bad idea.
alan-crowe 1 day ago 0 replies      
Is there a test command that asks the operating system to run a program but cause the nth fork to fail? I would be more diligent about writing code that handles rare errors if I could create test cases. Writing code that I cannot test feels wrong.
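Not as a standard command, as far as I know, but the failure path can at least be forced deterministically. A sketch for Linux (Python) that drops the soft RLIMIT_NPROC to 0 around the fork under test; note that root may be exempt from the limit, so the sketch tolerates a successful fork too:

```python
import os
import resource

def fork_expected_to_fail():
    """Drop the soft process limit to 0 so the next fork() fails with
    EAGAIN for an unprivileged user, then restore the old limit.
    Returns None if fork failed, or the child's pid if it succeeded
    anyway (e.g. when running as root)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
    resource.setrlimit(resource.RLIMIT_NPROC, (0, hard))
    try:
        pid = os.fork()
    except OSError:
        return None  # the error path we wanted to exercise
    finally:
        resource.setrlimit(resource.RLIMIT_NPROC, (soft, hard))
    if pid == 0:
        os._exit(0)  # child: vanish immediately
    os.waitpid(pid, 0)
    return pid
```

A "fail only the nth fork" harness would wrap this same idea in a counter; rlimits are the cheapest hook the OS gives you without ptrace or LD_PRELOAD tricks.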
Mister_Snuggles 3 days ago 1 reply      
This reminds me of the time I was telnet'd (since SSH wasn't a thing at the time) into a remote SunOS/Solaris server. At the time my only Unix experience was with Linux.

"killall -9 httpd" gave an unhelpful error message. "killall httpd" also gave an unhelpful error message. "killall", which would give you usage instructions in Linux, killed all processes on the system. Reading this article makes me figure that killall was likely a frontend to kill(-1, ...).

That day I learned a valuable lesson about reading man pages and understanding that not all unixes are the same.

quackerhacker 3 days ago 0 replies      
Sometimes these threads are just serendipity!

I just recently finished a multithreaded program where I found that obtaining the pid [on Linux: getpid()] of spawned child processes was only effective by utilizing a common pipe that was non-blocking [fcntl(pipefd[1], F_SETFL, O_NONBLOCK)].

In other, more humorous words, as a "parent," it's great to know what your "child," is doing (or in this sense), who your child is (the actual pid), instead of just kill SIGTERM them.

JD557 3 days ago 2 replies      
Is there any clean way to use an Option/Maybe monad in C (or C++)? It should be a simple way to solve problems where error codes are valid inputs of other functions.

The simplest way I can think of is:

    struct maybe {
        bool isEmpty;
        void* value;
    };
Although I wonder whether, using C++ templates, classes and operator overloading, it is possible to make a more practical implementation (using void* does seem like a bad idea).

japaget 3 days ago 1 reply      
See also https://news.ycombinator.com/item?id=8189968 for another UNIX trap for the unwary.
yokom 3 days ago 4 replies      
I don't use fork() that often, but my own paranoia is why I always test for <= 0 instead of == 0. Some people think I'm weird for doing something like:

  if len(some_list) <= 0:      # Test for empty list
But it's just my way of covering my ass in case the laws of physics change during execution, or just in case weird bugs exist like those found in this article.

nikita 3 days ago 1 reply      
Yes, fork can fail, and we ran into this a few years ago at MemSQL. The problem was that MemSQL would allocate a lot of memory, and Linux wouldn't allow such a large process to fork. A remedy is to create a separate process early on and talk to it via TCP. This small, low-memory-consumption process is responsible for the fork/exec work.
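That helper pattern can be sketched with a socketpair: fork the helper while the main process is still small, then route all later spawning through it. A toy Python version (a real one would pass command lines and return pids; Python's own multiprocessing "forkserver" start method is a production implementation of the same idea):

```python
import os
import socket

def start_fork_helper():
    """Fork a tiny helper while our memory footprint is still small.
    The big main process then never forks again: it asks the helper
    over a socketpair, avoiding the copy-on-write memory reservation
    that makes fork() fail for huge processes."""
    ours, theirs = socket.socketpair()
    pid = os.fork()
    if pid == 0:  # helper process: stays small forever
        ours.close()
        while True:
            req = theirs.recv(1)
            if not req:          # main process closed the socket: shut down
                os._exit(0)
            worker = os.fork()   # cheap, because the helper is small
            if worker == 0:
                os._exit(0)      # a real worker would exec() the job here
            os.waitpid(worker, 0)
            theirs.send(b"+")    # acknowledge the spawn
    theirs.close()
    return ours, pid

# usage: ask the helper for one worker, then shut it down
sock, helper_pid = start_fork_helper()
sock.send(b"!")
ack = sock.recv(1)
sock.close()
os.waitpid(helper_pid, 0)
```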
walski 3 days ago 1 reply      
Thanks! Definitively in my "shit I should know but didn't before HN schooled me"-top-10 :)
kazinator 3 days ago 1 reply      
I once did

    rm -rf $PREFIX/usr/lib
in a Bash script being run as root. PREFIX was misspelled, and set -u was not in effect, so the misspelled variable silently expanded to nothing ...

CSDude 3 days ago 0 replies      
I'm a teaching assistant for an OS course; I grade projects. I constantly remind students to check the return values of their system calls, and it is the most common issue in their code.
wmil 3 days ago 1 reply      
The kill -1 behaviour seems like a bug.

Sure, it's documented, but how often is it done on purpose? It seems like something that should at least be a separate function.

jdrago999 3 days ago 0 replies      
Yes, yes it can. That's why it's always:

    fork() or die "Cannot fork: $!";

dasmithii 3 days ago 0 replies      
I'm uncertain that I've ever checked for errors after calling fork.

Thanks for providing an impetus to do so.

jheriko 2 days ago 0 replies      
i've never actually used fork... feeling glad now. i probably would have not realised this...

reminds me of allocating memory for an error message to tell someone they are out of memory. :)

donatj 3 days ago 3 replies      
And this is why I think Go and its multiple return values are the way of the future. In Go you are required to handle errors. If you want them to go away you have to explicitly use an _, and that's really easy to find in the code and shame the person who did it. Nothing fails silently. Nothing fails via the primary return value. It is such greatness it is hard to express.
jacquesm 2 days ago 0 replies      
That's why you read the manpage on a function before you apply it rather than just cutting-and-pasting the first bit of code google returns when you search for 'fork example unix'.

(In this particular case that actually returns (for me) a bit of code that gets it right.)

smegel 3 days ago 0 replies      
I see it happen quite often on boxes with limited memory and hungry processes.
runarb 3 days ago 2 replies      
Easy to test out, too. In C:

  #include <unistd.h>

  int main(void)
  {
      while (1) {
          fork();
      }
  }

general_failure 3 days ago 0 replies      
This is why we need checked exceptions.

Imagine if C/POSIX had a checked ChildProcessCloneException

VLM 3 days ago 0 replies      
"Neither of them fail often"

See /etc/security/limits.conf and nproc and "fork bomb"

Aside from intentional fork bombs, I've seen this done intentionally in the spirit of an OOM killer, to keep a machine alive for debugging/detection of a problem: 100 "whatever" processes will kill this webserver, making it impossible to log in and diagnose, much less fix, so we'll limit it to 50 processes in the OS.
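For reference, the kind of pam_limits entry being described looks like this (the user name and numbers are invented):

```
# /etc/security/limits.conf -- cap the webapp user's process count
webapp  soft  nproc  50
webapp  hard  nproc  100
```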

I've also seen it in systems where people are too lazy to test if a process is running before forking another and the system doesn't like multiple copies running (like a keep alive restarter pattern). If ops has no access to the source to fix that or no one cares, then just run it in jail where you only get two processes, the restarter-forker and the forkee. Then hilarity can result if the restarter thinks the PID of the failed fork means something, like sending an email alert or logging the restart attempt. "Why are my logs now gigabytes of ERROR: restarted process new pid is -1?"

gre 3 days ago 0 replies      
C programming 101.
arbitrage 3 days ago 0 replies      
Your username is offensive. I'd like to take what you say seriously, and perhaps even engage in further conversation about the topic at hand, but ... you know ... you look stupid. Grow up. You should be embarrassed.
danielbhall001 3 days ago 0 replies      
Lol this is a true war HAHA..
Using Google Earth to Find an ISIS Training Camp
514 points by mmayberry  1 day ago   101 comments top 22
e98cuenc 1 day ago 4 replies      
It makes me super proud that he used Panoramio, a site I created 9 years ago. Nowadays, with the prevalence of Google Street View, it is less useful than it used to be for this kind of stuff, but there are still places that the GSV guys have not (yet) covered.
hemancuso 1 day ago 6 replies      
Anyone who is impressed by this should seriously check out Andrew Sullivan's view from your window contest. Most weeks a reader submits a photo from their window, literally anywhere in the world. And people track it down with very similar techniques, to the exact window. And a lot of the contest photos offer much less to go off of than this [namely, only 1 photo, low res, country/context unspecified].

Here is the winners archive.


Some of the photos aren't too crazy, or offer a landmark that is recognizable if you'd seen it before. But most of them offer very little in terms of knowing where to start unless you've got a huge body of contextual knowledge you can draw on.

A couple ones that I had absolutely no idea where to start with:


I imagine the CIA/NSA has a crack team of a couple dozen people doing this exact job.

joelrunyon 1 day ago 3 replies      
Interesting that as it's getting easier & easier for "normal" people to do stuff like this - our media is getting worse and worse at it. They've essentially given up on reporting or investigating anything original and simply spew back "opinions", "tweets" or PR releases.
mmayberry 1 day ago 2 replies      
I'm sure the government is on this as well, but it's pretty incredible what a citizen journalist can do from his home computer with a few basic web sites. Some of the other projects that he has worked on (finding a Russian training camp, authenticating an Egyptian revolution movie, etc.) are worth a read as well.
jebus989 1 day ago 1 reply      
I like this, but probably worth mentioning that this isn't a covert group so it seems somewhat akin to reporting where the Donetsk separatists are.
DrSayre 1 day ago 1 reply      
This is pretty awesome! I wanted to do something similar with the video of James Foley, but I figured there were people smarter than me already doing that, and I really didn't want to watch that video.
MrJagil 11 hours ago 0 replies      
Such an interesting analysis, and seemingly a very nice site too. A shame the comments are so headless.

It's a travesty that communities and discussions devolve so quickly on the internet (though I of course know from PG's eternal struggle how hard it is to prevent). Whoever can solve this problem (nice try disqus etc) will certainly claim fame.

Up and down votes won't cut it. It will require a serious inquiry into psychology, sociology and behavioural studies I believe.

qstyk 1 day ago 0 replies      
What an odd introduction: "Have you ever wondered what it would be like to go through training as an ISIS terrorist? Or better yet, where you would go to find such advanced training?"


moskie 1 day ago 2 replies      
This is cool and all, but I can't help but be reminded of the hunt for Boston Marathon bombers. Sure, maybe the author's heart is in the right place, but Random Person On The Internet could easily have gotten something wrong, that seems intuitively correct to the author and a general audience (us), but is in fact incorrect. Which makes me inclined to instead leave stuff like this up to the professionals. (appeal to authority, i know, but... getting this stuff right is important.)
carlob 1 day ago 1 reply      
Honest question: what's the advantage of using FlashEarth over Bing maps (which appears to be the source of the data)?
lolryan 10 hours ago 0 replies      
When looking for new apartments in NYC, a combination of Google Street View and the Flyover feature in Apple Maps does wonders to validate how truthful brokers are in their listings.
misiti3780 1 day ago 0 replies      
I read an interesting article a while ago about who they have tried (unsuccessfully) to track


from land artifacts. I suppose it is a bit more difficult, as Google Maps didn't make it to Waziristan yet.

It sort of reminds me of this article from a while back


magicalist 1 day ago 0 replies      
Wait, so is the "new construction" around the tower the training camp? The article doesn't seem to say explicitly. Or is this more finding where they do training marches?
bmmayer1 1 day ago 0 replies      
Way to crowdsource our and Iraq's military intelligence! I'm sure someone will find this useful.
Too 22 hours ago 0 replies      
I bet this guy is good at https://geoguessr.com/
jqm 1 day ago 1 reply      
Western governments didn't know this already?

I have a hard time believing they didn't. There are (in my opinion) strategic reasons ISIS was allowed to get as far as it did. And reasons they were allowed to appropriate large amounts of cash and US weaponry.

notastartup 1 day ago 0 replies      
Absolutely amazing investigative work. This is very good intelligence from using everyday resources and a keen eye from the determined.
justplay 1 day ago 0 replies      
this man deserves huge applause. He is our Sherlock Holmes.
IBCNU 1 day ago 0 replies      
Nice work.
kelukelugames 1 day ago 1 reply      

What are the next steps?

elleferrer 1 day ago 4 replies      
We need more bellingcats - this was a great find - can we expect a planned airstrike in this location any time soon?
known 10 hours ago 0 replies      
You don't need great weapons. Just Drop A "Heat bomb" In Antarctic Ice Shelves. You'll Drown The World.
Why Racket? Why Lisp?
411 points by Tomte  2 days ago   276 comments top 32
agentultra 2 days ago 2 replies      
Some practical features I enjoy in CL:

1. Conditions and restarts: As far as error handling in programs go this is the most rock-solid system I've encountered. You can tell the system which bits of code, called restarts, are able to handle a given error condition in your code. The nice thing about that is you can choose the appropriate restart based on what you know at a higher-level in the program and continue that computation without losing state and restarting from the beginning. This plays well with well structured programs because the rest of your system can continue running. Watching for conditions and signalling errors to invoke restarts... it's really much better than just returning an integer.

As a CL programmer using SLIME or any suitable IDE, this error system can throw up a list of appropriate restarts to handle an error it encounters. I can just choose one... or I can zoom through the backtrace, inspect objects, change values in instance slots, recompile code to fix the bug, and choose the "continue" restart... voila the computation continues, my system never stopped doing all of the other tasks it was in the middle of doing, and my original error was fixed and I didn't lose anything. That is really one of my favorite features.

2. CLOS -- it's CL's OO system. Completely optional. But it's very, very powerful. The notion of "class" is very different than the C++ sense of struct-with-vtable-to-function-pointers-with-implicit-reference-to-this. Specifically I enjoy parametric dispatch to generic functions. C++ has this but only to the implicit first argument, this. Whereas CLOS allows me to dispatch based on the types of all of the arguments. As a benign example:

    (defclass animal () ())
    (defclass dog (animal) ())
    (defgeneric make-sound (animal))
    (defmethod make-sound ((animal animal))
      (format t "..."))
    (defmethod make-sound ((dog dog))
      (format t "Bark!"))
    (make-sound (make-instance 'animal))
    (make-sound (make-instance 'dog))
Will print "..." and "Bark!" But the trivial example doesn't show that I can dispatch based on all of the arguments to a method:

    (defclass entity () ()) ;; some high-level data about entities in a video game
    (defclass ship (entity) ()) ;; some ship-specific stuff... you get the idea.
    (defclass bullet (entity) ())
    ;; ... more code
    (defmethod collide ((player ship) (bullet bullet))) ;; some collision-handling code for those types of entities...
    (defmethod collide ((player ship) (enemy ship))) ;;; and so on...

    Ship::collide(const Bullet& bullet) {}
    Ship::collide(const Ship& ship) {}
Where collide is a virtual function of the Entity class requiring all sub-classes to implement it. In the CLOS system a method is free from the association to a class and is only implemented for anyone who cares about colliding with other things.

The super-powerful thing about this though is that... I can redefine the class while the program is running. I can compile a new definition and all of the live instances in my running program will be updated. I don't have to stop my game. If I encounter an error in my collision code I can inspect the objects in the stack trace, recompile the new method, and continue without stopping.

3. Macros are awesome. They're like little mini-compilers and their usefulness is difficult to appreciate but beautiful to behold. For a good example look at [0] where baggers has implemented a Lisp-like language that actually compiles to an OpenGL shader program. Or read Let Over Lambda.

One of the most common complaints I hear about macros (and programmable programming languages in general) is that they open the gate for every developer to build their own personal fiefdom and isolate themselves from other developers: i.e., create their own language that nobody else understands.

Examples like baggers' shader language demonstrate that it's not about creating a cambrian explosion of incompatible DSLs... it's about taming complexity; taking complex ideas and turning them into smaller, embedded programs. A CL programmer isn't satisfied writing their game in one language and then writing their shaders in another language. And then having to learn a third language for hooking them all up and running them. They embody those things using CL itself and leverage the powerful compiler under the floorboards that's right at their finger tips.

Need to read an alternate syntax from a language that died out decades ago but left no open source compilers about? Write a reader-macro that transforms it into lisp. Write a runtime in lisp to execute it. I've done it for little toy assemblers. It's lots of fun.

... this has turned into a long post. Sorry. I just miss some of the awesome features CL has when I work in other languages which is most of the time.

[0] https://www.youtube.com/watch?v=2Z4GfOUWEuA&list=PL2VAYZE_4w...

reikonomusha 2 days ago 5 replies      
I dislike this notion that Lisp (or Haskell or OCaml or ...) owes it to everyone else to explain and enunciate why it can be more productive to use Lisp.

"""That's asking too much. If Lisp languages are so great, then it should be possible to summarize their benefits in concise, practical terms. If Lisp advocates refuse to do this, then we shouldn't be surprised when these languages remain stuck near the bottom of the charts."""

What?? Why? The problem is not and has never been communication of Lisp features. No one made a concise list of why C and Java are so great that people rushed to use them. Instead, they were pervasively used and taught in universities, they are pervasively used in the development of most applications for e.g. Windows and Linux, and they are relatively simple languages (in theory) whose semantics most people "get". No wacko higher-order crap, no weird curried things, no arrows or morphisms or monads or macros.

Programmers of such languages don't owe the rest of the world anything. Everyone has a choice about what to use, and it's each individual programmer's responsibility to choose them wisely. There is plenty of material about Lisp and Scheme out there. Unfortunately, we are in this TL;DR culture where no one has the time to spend a few hours every week to learn something new, since somehow that's too big a risk on their precious time.

Now, for some comments:

1. Everything is an expression.

He says this is a boon, but it's also confusing for "expressions" which are side effectful. Too bad he did not talk about that, nor did he talk about how the expression-oriented way of thinking is really best for purely functional languages that allow for substitution semantics.

2. Every expression is either a single value or a list.

This is wrong, unless we devolve "single value" into the 1950's idea of an "atom". What about vectors or other literal composite representations of things? What about read-time things that aren't really lists or values?

3. Functional programming.

Functional programming is indeed great, but why don't we talk about how in Lisps, we don't get efficient functional programming? Lisp has tended to prefer non-functional ways of doing things because Lisp will allocate so much memory during functional programming tasks that for many objectives, FP is far too inefficient. Haskell solves this to some extent with things like efficient persistent structures and compilation techniques such as loop fusion. Lisp doesn't really have any of this, and the data structures that do exist, many people don't know about or use.

4 and 5 don't really have to do with Lisp but particular implementations. That's fine I guess.

6. X-pressions.

What the hell is an X-pression?

7. Racket documentation tools.


8. Syntax transformations.

He made the same mistake as he so baroquely outlined at the start. What in the world are these "macros" and "syntax transformations" good for? You're just telling me they're more powerful C preprocessor macros that can call arbitrary functions. But I was taught that fancy CPP code is a Bad Idea, so boosting them is a Worse Idea.

9. New languages.

Same problem as 8. You say it's useful but you don't say why. Just that it's "easier".

10. Opportunities to participate.

Nothing to do with Lisp again.

* * *

Instead of all this glorifying of Lisp and etc, why don't we spend time increasing that library count from 2727 to 2728? Or do we need to go through an entire exercise about whether that time spent is worth it or not?

"""Rather, you are -- because a Lisp language offers you the chance to discover your potential as a programmer and a thinker, and thereby raise your expectations for what you can accomplish."""

You're repeating everyone else. Notice how difficult it is to convey such things without being hugely abstract and unhelpful? Why don't other programmers see this huge productivity benefit from these Lisp wizards in their day-to-day life? Where are the huge, useful applications? They all seem to be written in C or C++.

"""It's mind-bendingly great, and accessible to anyone with a mild curiosity about software."""

It is accessible to those who are intently curious about theoretical aspects of software development, especially abstraction, and who can take exercises which require mathematical reasoning. A "mild curiosity" in my experience with others will not suffice.

* * *

This post may sound somewhat cynical and negative, but Lisp enlightenment articles are almost as bad as Haskell monad tutorials. They're everywhere and by the end, still no one gets it. And I don't like the attitude that because a group G doesn't understand it, and group H does, that H owes it to G to spoonfeed the information. That's not the case.

bad_login 2 days ago 2 replies      
It seems most people here have never used (or even tried) Racket.

I decided to use Racket for my little side projects, as a replacement for Scala and Clojure.

I chose it because it was clear to me that I can't stand the limitations other languages impose on me in terms of style and boilerplate. The Racket macro (aka syntax transformer) system is the most advanced I know of for reducing boilerplate to a minimum, so I can just write what I want to express. In fact I rarely write macros, because writing a good macro demands that you take care of syntax errors, and I am lazy in the bad sense of the term.

I chose it because it's dynamically typed, and I am increasingly convinced that types get in your way most of the time (except in complex algorithms) (I write little projects, so the refactoring argument is out). It lets me write code and eval it on the fly with Geiser (using enter! behind the scenes); after evaling a new function I test it in the REPL, hack until the function meets the requirements, copy-paste from the REPL, and boom, I have a unit test. And because it has eval, and that will come in handy at least once in your programmer life for sure.

I chose it because of its sexpr syntax; as a heavy user of Emacs, I know that any other syntax is a pity.

Also because it has (and I use):

1. A LALR parser (implemented through a macro).

2. Pattern matching with nothing to envy Scala's or Clojure's destructuring.

3. An optional type system.

4. A contract system.

What I find hard as a newcomer (to Racket, not as a programmer; I already know Scala, Clojure, half of C++ :), PHP) is:

1. The breadth of the features the language offers, and which feature to use, e.g. classes or generics.

2. The documentation is rich but lacks examples for the common cases, so you need to read the docs for each function (sometimes they're huge).

3. Understanding how Racket modules work is quite hard, but you have the documentation; if you don't plan to play with the macro expander (the thing that runs your macros) and some dynamic features, you don't really need to.

4. You need to 'register' the errortrace library if you want a stack trace, quite a surprising behaviour for me.

My opinionated conclusion:

Racket is the best language design I have ever seen. It's hard to learn, but it makes you feel that learning another language just comes down to learning a new sub-optimal syntax. Sadly the ecosystem is lacking libraries and people, and I am not helping in that regard.

aaronbrethorst 2 days ago 3 replies      
From what I've seen out of the Clojure community over the past few years, it seems like they're far more likely (and able) to offer up concrete examples of how Clojure makes their businesses and products successful in a way that an imperative language could not. So, yay Clojure community, and boo on hand-wavy Lisp people.




na85 2 days ago 11 replies      
I sure hope the giant, hideous, obtrusive diamonds inserted into the text to denote a hyperlink don't catch on as a trend. It's a great way to break the flow of the text and irritate your readers.

As for the idea of Lisps, well, it sure seems neat. But I've literally never run across a situation where I needed my code to edit itself. I've never run across a situation where the lack of an everything-is-an-expression-is-a-list feature prevented me from doing what I wanted to do.

So I just don't really feel the need to get repetitive strain injuries in my pinky from reaching for the parentheses all the time.

jimbokun 2 days ago 3 replies      
If you took a Common Lisp programmer from the early to mid 90s in a time machine to today, very little about current programming languages would seem novel or an advance over what he or she was using then.

I think this is a reason for much of the smugness of Lisp programmers. Whatever features you think are new or cool or advanced about your programming language, Lisp probably got there first.

mrottenkolber 2 days ago 5 replies      
Can not resist...

This article is fairly misguided. I find it painful that everybody who writes about a Lisp offshoot (Scheme, Clojure, ...) ends up misrepresenting Common Lisp.

To sum up "Why Lisp?" from a CL perspective: CL has pretty much every feature of every programming language around, only better designed, better implemented, and generally more powerful. It's just a power-user language. It's not just macros, sexps and lambdas. It's also number types, arrays, OOP, symbols, strings, structs, dynamic/lexical variables, lambda lists, multiple return values, online disassembly, exceptions, restarts, MOP, metacircular definition, great implementations, great libraries... the list goes on and on; I surely forgot a ton of great stuff. TL;DR: CL has everything. And this "everything" is designed so well that it's extensible, and no CL programmer ever needs to doubt that any new feature can be implemented easily in CL.

To correct a few of the wrong statements of OP:

> Wait, I love state and data mutation. Why would you take them away? Because they're false friends.

CL is NOT particularly functional. Just because we know how to write good side-effect-free code doesn't mean it's a functional language. (We have SETF after all; he failed to mention that above.)

> a syntax transformation in Racket can be far more sophisticated than the usual Common Lisp macro.

Outright wrong. The only reason Scheme has weird macro systems is because it's a Lisp-1. CL is designed well (thus being a Lisp-2), and that's why its simple but ultimately more powerful macro system can work.

> A macro in Common Lisp is a function that runs at compile-time, accepting symbols as input and injecting them into a template to produce new code.

This is so wrong I had to write this comment. A macro in Common Lisp is a COMPILER, it accepts arguments and returns an SEXP. It is infinitely powerful, it can do EVERYTHING.
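To make that last point concrete: since Lisp code is just nested lists, a macro is an ordinary function from one list (the call) to another list (the expansion). Here is an illustrative Python model of that idea, using nested Python lists as stand-in sexps; the `unless` example and all names are invented for illustration, not any real Lisp's implementation.

```python
# Illustrative model (not real Lisp): s-expressions as nested Python lists,
# and a "macro" as an ordinary function that receives code and returns code.

def unless_macro(condition, *body):
    """Expand (unless c body...) into (if (not c) (progn body...))."""
    return ["if", ["not", condition], ["progn", *body]]

MACROS = {"unless": unless_macro}

def macroexpand(form):
    """Recursively expand macro calls in a sexp tree."""
    if not isinstance(form, list) or not form:
        return form
    head, *args = form
    if head in MACROS:
        return macroexpand(MACROS[head](*args))
    return [macroexpand(f) for f in form]

print(macroexpand(["unless", ["ready?", "x"], ["retry"]]))
# ['if', ['not', ['ready?', 'x']], ['progn', ['retry']]]
```

Because the "macro" is a plain function, it can run arbitrary computation to decide what code to return, which is the sense in which it acts as a small compiler.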

cturner 2 days ago 1 reply      
The top reason here could have been written: Lisp is more expressive. You can find ways to express an idea that make sense now, and which are readable. Macros are a different part of the same idea.


Maybe I'm doing it wrong, but a problem I've had with Racket is that as you begin to build larger projects, when something breaks it can be quite difficult to find out exactly where the break happened. When you compile Java or run Python, it's almost always immediately obvious what broke.

The way I got around this was to use a methodical TDD approach. Would be a shame if that turns out to be as good as it gets for lisp.

Something I haven't done yet but am interested to get to is attaching a repl console to a running process.

_delirium 2 days ago 0 replies      
The manual to this publishing system, discussing how its markup/programming language is implemented as a custom Racket language, is pretty interesting: http://mbutterick.github.io/pollen/doc/
wes-exp 2 days ago 1 reply      
The author is wrong about hygienic macros: they are not more powerful. They are less powerful, and more complicated, in order to enforce safety. Whether this is preferable or not is a matter of debate.
ColinDabritz 2 days ago 1 reply      
A good list of some interesting 'day to day' benefits of Lisp. Maybe that is something that would appeal to beginners especially.

From my perspective, Lisp is a powerful language because of its genesis in research. The question wasn't "How do we make a tool to make this hardware do what we want?"; it was driven by a research goal.

If you want to read the original Lisp paper, look up: "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", John McCarthy, April 1960.

Paul Graham covers it nicely in this essay, especially the "What made Lisp different" list about 1/3 in: http://www.paulgraham.com/icad.html

Lisp has had expressiveness we're only recently seeing in popular mainstream languages now. It has to do with the design, the simplicity, and how Lisp expresses problems. I've often heard it described as "the language gets out of your way." That's why Lisp.

DCKing 2 days ago 5 replies      
Why Lisp? That is understood.

Why Racket? From an ignorant outsider's perspective, all Lisps seem to be more or less interchangeable when it comes to the language. They only differ in the details, and each seems to be about as difficult to learn as the other. Although this article does make somewhat of a case for specifically Racket, it seems to be a rather weak one - tools are nice and some language details are nice. But the same general arguments can be made for other Lisps, most notably Clojure. It seems to me that Clojure is a lot more practical: it has many good libraries in both Clojure and Java, it has some great tools, there's a lot of momentum, and it can be deployed everywhere (including the browser).

So, being an ignorant outsider, is there any reason the Lisp I should learn isn't Clojure?

michaelsbradley 2 days ago 0 replies      
RacketCon is being held in Saint Louis, Missouri, USA the day following the Strange Loop 2014 conference (also in Saint Louis):


I hope to see some of you there!

59nadir 20 hours ago 0 replies      
I think it's kind of sad that most of the comments in this thread are about languages other than Racket and only barely touch on the article's points (mostly the stuff you don't need to actually try the language (or any Lisp) to comment on). I think it says a lot about what the crowd on HN is really about.
einhverfr 1 day ago 1 reply      
I am learning Perl's FP features and really liking them, and teaching myself Common Lisp. I enjoyed the article quite a lot actually.

I find it very hard to define functional programming for many people, but this is how I have come to explain it:

Functional programming means thinking in terms of mathematical functions in the f(x) sense. Once you get that basic premise, that for any given input there is a single correct output, it transforms the whole way you think about and design your software.
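That f(x) premise can be made concrete with a toy contrast (illustrative Python, not from the parent comment):

```python
# Pure: output depends only on the input; calling it never changes anything.
def total(prices):
    return sum(prices)

# Impure: the result depends on hidden state that mutates between calls.
_log = []
def total_and_log(prices):
    _log.append(prices)             # side effect
    return sum(prices) + len(_log)  # hidden-state dependence

prices = [3, 4]
assert total(prices) == total(prices)                 # always 7: f(x) in the math sense
print(total_and_log(prices), total_and_log(prices))   # 8 9 -- same input, different outputs
```

The pure version can be tested, cached, and reasoned about in isolation; the impure one cannot, which is the point of the f(x) framing.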

The better I get with lisp, the more everything else changes. I may have to try Racket.

mapcar 2 days ago 1 reply      
> If Lisp languages are so great, then it should be possible to summarize their benefits in concise, practical terms.

His list is concise but man did he take a while to get to it!

Seriously though, the introduction was super relevant, as I have wondered the exact same thing about Lisp myself. What features make it so praiseworthy? Maybe X-expressions aren't a core feature for everyone to appreciate, but the fact that everything is an S-expression is an understated value. People complain about its syntax, but alternate versions (so many reincarnations of parenthesis-less Lisps) have never caught on.

The thing is, Lisp is no longer unique in its feature set, and languages with more standard forms of syntax have incorporated some of its features. But it is uncommon to find all of these listed features in one language. In the domain of data analysis where I do most of my work, it still makes me sad that XLISP-STAT has been supplanted by other languages which leave the user wanting.

colig 2 days ago 0 replies      
Does anyone else find it difficult to highlight things on this page? Specifically, 'kvetchery' which is found in the third paragraph.

I believe the author is the one responsible for the facelift of Racket's documentation. He may be self-deprecating about his lack of formal programming education, but I am thankful for his design chops.

happywolf 2 days ago 2 replies      
Agree with some of the earlier comments: the diamond-shaped thingies inserted are really a nuisance and break the reading flow.
rodrigosetti 1 day ago 0 replies      
Nice write-up. I really like Racket, but never had a chance to use it in a professional project so far.

By the way, Scribble (item 7) is an implementation of literate programming, a feature of some other languages too. In Haskell, for example, you can write programs in LaTeX with embedded code.

amirouche 2 days ago 1 reply      
I tried Pollen somewhat and it's kind of fantastic; the only thing is that getting a new project to work in a language I don't know very well is difficult for me. That's why I started a similar project in Python; check https://warehouse.python.org/project/azoufzouf/ if you're interested.
theRhino 2 days ago 0 replies      
Not sure how closely you have read Seibel; your first few points are covered pretty comprehensively at the start of his book.
hyp0 2 days ago 1 reply      
Re 1 (the everything-is-an-expression benefit), a conditional example:

C-like languages often have the ternary operator, cond ? exp1 : exp2, which is exactly this. I feel clever using it, but I consider it a hack, because it's (usually) less clear. A microcosm of Lisp: clever but unclear.
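The same contrast in runnable form (Python's conditional expression standing in for the C ternary; purely illustrative):

```python
n = 7

# Statement style: assign a variable from two branches.
if n % 2 == 0:
    parity = "even"
else:
    parity = "odd"

# Expression style: the conditional itself has a value,
# so it composes directly into larger expressions.
parity_expr = "even" if n % 2 == 0 else "odd"

assert parity == parity_expr == "odd"
```

In an expression-oriented language every construct works like the second form, so there is no separate statement world to fall back into.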

picardo 2 days ago 0 replies      
Is it just me or is this page entirely blank? I poked through the source for quite a long time but couldn't figure it out. The text is there; there is something in the CSS that obscures it in Chrome. So annoying.
sanatgersappa 2 days ago 0 replies      
My personal experience with Clojure has been that it 'bends' the brain - in a good way, and it forever changes the way you program. It is extremely difficult to go back to the style of coding I did before my exposure to Clojure. I guess other Lisps would provide a similar experience.
alvatar 2 days ago 2 replies      
Many of the later items in the list belong in the category "Scheme". Racket is a dialect of Scheme, but it is still a Scheme. The syntax-case macro transformations are available in most Scheme systems.
dschiptsov 2 days ago 0 replies      
"Clarity is an evidence" as the saying goes. This post only proves its correctness.)

The distinct feature of Lisps and good Lispers is clarity of thought and conciseness of writing.

TheMagicHorsey 2 days ago 1 reply      
Why does Racket perform so slowly on benchmarks compared to Clojure and SBCL?
guilloche 2 days ago 1 reply      
When visited with w3m, I got spaces breaking words frequently. Is there any justification for using complex HTML to control intra-word spacing on a web page?
CMCDragonkai 2 days ago 0 replies      
Many of the reasons you list are the same ones that got me attracted to Elixir. That, and of course Erlang interop and OTP.
kazinator 2 days ago 0 replies      
Check it out. For five years I have been developing a tool called TXR. It's a "Unixy" data munging language that is ideally suited for programmers who know something about Lisp and would like to move away from reaching for the traditional stand-bys like awk, sed, perl, ...

You do not have to know any Lisp to do basic things in TXR, like extracting data (in fairly complicated ways) and reformatting it, but the power is there to tap into.

In TXR's embedded Lisp dialect ("TXR Lisp"), you can express yourself in ways that can resemble Common Lisp, Racket or Clojure.

You can see a glimpse of this in this Rosetta Code task, which is solved in three ways that are almost expression-for-expression translations of the CL, Racket and Clojure solutions:


Or here, with syntax coloring:


If you closely compare the original solutions, you will see that certain things are done more glibly in TXR Lisp.

qewrffewqwfqew 2 days ago 1 reply      
That's some pretty ugly text for a site called 'practical typography'.
jstoja 2 days ago 2 replies      
Please....text-align: justify;...
The Strange and Curious Tale of the Last True Hermit
368 points by randomwalker  2 days ago   83 comments top 26
grownseed 2 days ago 5 replies      
Many, many times have I considered isolating myself, separating myself from the seemingly vacuous concerns of a society riddled with senseless traditions, layers upon layers of societal band-aids and pointless struggles over ridiculously subjective arguments.

I spent most of my life being disgusted by the frivolity of most people's desires and qualms, and for this reason, I feel I deeply understand why Chris Knight did what he did. No reason, no justification, no particular aim, just life.

While I still catch myself wishing for such a life, I realized I could not blame or reject what I do not actively participate in. Furthermore, I came to the conclusion, possibly wrongly, that a life worth living is a life worth sharing, that society will always be able to offer you more than you can offer it.

I now believe that the solution is not to reject society, nor be tied by its requirements or norms, but rather behave as a free agent, with independence, compassion and mental fortitude.

Law, Economy, Politics, Religion, Science, Technology, ... are, in my opinion, mere relics and artifacts of thousands of years of civilization, localized attempts at guiding the seemingly mis-guided, while becoming eventually meaningless in the grand scheme of things.

These civilized relics are not necessarily bad, but as with anything else attachment becomes the issue. While becoming a hermit is possibly the quickest way of severing those ties, attachment is the burden of the mind, not of society at large. Isolation diminishes, or even wipes attachment issues altogether, but it does not resolve them.

This might come across as preachy, though it certainly isn't my intention, I simply wanted to share my view with anybody who, like I used to, wishes for isolation as a remedy.

soneca 2 days ago 4 replies      
"I did examine myself," he said. "Solitude did increase my perception. But here's the tricky thingwhen I applied my increased perception to myself, I lost my identity. With no audience, no one to perform for, I was just there. There was no need to define myself; I became irrelevant. The moon was the minute hand, the seasons the hour hand. I didn't even have a name. I never felt lonely. To put it romantically: I was completely free."

This part resonated a lot with me. I consider self-awareness one of my qualities. But I too feel that the more I try to understand myself, the more distanced from the world I am. If I micro-analyze every reaction I have, I miss the chance to connect with another person. I take myself out of society.

I found out that being defined by another person is a good thing for me. Particularly by people I love. I want to naturally be the person that made the people I love love me.

ithought 2 days ago 0 replies      
I marvel at how perfect this story is in various ways. No real resolution to it, no motive, no discernible point. The unfolding of it all and the brutally abrupt ending, "We are not friends", seemed predictable, shocking and sad at the same time.

Interestingly, the journalist has an upcoming movie where he's played by Jonah Hill. A fugitive murderer had used his name as an alias and through that, he'd developed a relationship with him and interviewed him after the person was convicted.

grecy 2 days ago 2 replies      
> He was never happy in his youth: not in high school, not with a job, not being around other people. Then he discovered his camp in the woods. "I found a place where I was content," he said. His own perfect spot. The only place in the world he felt at peace.

This resonates very strongly with me personally. So much so, I traveled to Alaska and hiked into "The Magic Bus" of Chris McCandless/Into The Wild fame [1]. From there, I spent 2 years driving to Argentina, sleeping out in my tent as often as possible. I'd often go a week without seeing or talking to another person, two weeks when I found somewhere remote enough.

Since then I've moved to the Yukon, where I've met some very interesting characters. One guy, in Dawson City, lives in a cave across the Yukon River from town. He has a second cave full of chickens, and he sells the eggs in town to make enough money to pay for food/beer. He boats across the river in summer and walks across the river for 7 months of the year.

I once again feel the pull, and I'm heavily planning my next trip - 2 years around Africa, hopefully getting as remote as possible. With luck, that will lead into a 2 year Europe->SE Asia trip, once again camping and hiking as much as possible.

[1] http://theroadchoseme.com/the-magic-bus

bmj 2 days ago 0 replies      
Reminds me a bit of the story of the Russian family that lived in a remote section of the Siberian taiga:


Granted, they weren't alone (it was a family), but they truly lived a hermit's existence, even when they were discovered by geologists.

hyp0 2 days ago 3 replies      

  The moon was the minute hand,  the seasons the hour hand.
Guy can write. Once his weekly obligation ends, he could make it work with a source of income as a writer, using a smartphone, bluetooth keyboard, solar panel, and get near a cell phone tower. Without rent and utils, he needs much less money than usual. Order groceries etc online, so he can remain isolated (and of course hunt/fish).

Or write a book, invest, live on interest in the woods.

He wouldn't really like having to write, but he admires good writing, and if it would grant contentment...

See also: coding in the woods, http://www.atariarchives.org/deli/cottage_computer_programmi...

GotAnyMegadeth 2 days ago 0 replies      
You could argue that he missed out on a lot of the amazing things about modern life; you could argue that we are missing out on many of the things of a solitary life in the forest. Either way, I'm glad that he didn't miss out on one of the most important things about modern life: Pokémon.
personlurking 2 days ago 2 replies      
He may have lived in an uninhabited place but it seems, considering the amount and variety of stuff he stole, his mind was almost constantly inhabited by the modern world.
wglb 2 days ago 1 reply      
A remarkable article.

The author of this has written a number of other spellbinding articles: http://www.gq.com/contributors/michael-finkel

kingkawn 2 days ago 0 replies      
Kickstart an LLC that employs him to maintain his camping ground alone, a privately funded forest ranger. Satisfy the terms of the court decision while allowing him to return to his place and live in peace with supplies provided at a drop site.
riemannzeta 2 days ago 1 reply      
This guy is more worthy of admiration and emulation:


sixQuarks 2 days ago 1 reply      
fascinating read. With 7 billion people in this world, it never ceases to amaze me the different types of experiences humans have had.

I also just finished listening to the latest "Hardcore History" podcast regarding WWI. Holy shit, what crazy things humans have done/experienced.

arjn 2 days ago 1 reply      
Very nice article and writing - and a great subject.

To wander the woods all day, read when you want,

To be free of all connections, to not even need a name. There is something to it.

Oh ..and Rudyard Kipling ... wonderful.

lazyeye 2 days ago 1 reply      
For anyone for whom this story resonated, this book is well worth a read:-

An Island to Oneself: http://www.amazon.com/An-Island-Oneself-Tom-Neale/dp/0918024...

More info on Tom Neale: http://en.wikipedia.org/wiki/Tom_Neale

keithpeter 2 days ago 0 replies      
OP reminded me of this


And this


Fugue? Small scale stroke? Or just a need to quieten the brain? Has this man had a neurological examination of any kind?

suprgeek 1 day ago 0 replies      
If you want to put faces to the names mentioned: http://www.pressherald.com/2013/04/09/north-pond-hermit-susp...
comrade1 2 days ago 1 reply      
Did he not have a sex drive? I just can't imagine an existence like that, without sex. I'm about 1/2 through the article.
daveslash 2 days ago 0 replies      
Being someone who grew up in Maine, not too far from there, spending hours upon hours as a young child, alone exploring acres and square-miles of woods - what struck me the _most_ was "whoah, the boogie man WAS real all those years...."
ilamont 2 days ago 3 replies      
Did this remind anyone else of the Satoshi Nakamoto outing (https://news.ycombinator.com/item?id=7353283)? Journalist befriends recluse, turns it into a magazine story.
gre 2 days ago 0 replies      
Chris Knight was the name of Val Kilmer's character in Real Genius.
coldcode 2 days ago 1 reply      
There are many times in life where you wonder if you would be better living alone outside of the regular world. This guy actually did it. I would go insane if I was alone for more than a few weeks.
zem 2 days ago 0 replies      
as a kipling fan, he was almost certain to have read "the miracle of purun bhagat" [http://www.hermitary.com/literature/kipling.html]. i would have loved to see his opinion of it.
joeyspn 2 days ago 2 replies      
Why is that article dated September 2014?
pthreads 2 days ago 0 replies      
Who is to say he is the last true hermit? There could be several in this world. We just don't know.
yuvalo 2 days ago 0 replies      
"You speak like a book, one inmate teased."
Fish vs. Fish in Street Fighter II
369 points by cyberfart  3 days ago   69 comments top 29
Magi604 3 days ago 3 replies      
If you like to watch computer-controlled 2d fighting characters with actual decent AI (most of the time) squaring off against each other, check out http://www.saltybet.com/ .

There are over 5000 characters in the database, and it runs 24 hours a day. The matches can get really amusing sometimes.

tormeh 3 days ago 5 replies      
Can the fish see the game? What if the fish were rewarded for winning? Can we train fish to play street fighter? How about rats?
minimaxir 3 days ago 2 replies      
Background: This is the logical conclusion of Fish Plays Pokemon (http://motherboard.vice.com/read/an-exclusive-interview-with...), which was a hackathon project and a play on Twitch Plays Pokemon.
SchizoDuckie 3 days ago 0 replies      
And as a finishing touch, man made two fish fight each other in Street Fighter and streamed it to the world in real time.

God I love the internet :D

kevin_thibedeau 3 days ago 1 reply      
He needs to randomize the 1P/2P assignment because Aquarius seems to prefer being on the right side of the tank and gets an unfair advantage as 1P.
nsxwolf 3 days ago 2 replies      
A lot of engineering effort to watch Balrog jab and Blanka duck forever.

Fish are bad at playing Street Fighter II. I already intuitively knew this.

tinco 3 days ago 0 replies      
Stream seems to be down, here's what was on: https://www.youtube.com/watch?v=NHrRksz-XLI
Eiriksmal 3 days ago 0 replies      
Soooooo much more engaging than the Fish Plays Pokemon the creator links to. I explained it to a coworker (...in marketing) and couldn't help but giggle at every other sentence in my explanation. "So the fish swim around in that virtual grid, see, and that triggers button presses in the fighting game." Sweet action, as the kids say.
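The grid mechanism described in the parent comment is easy to sketch: quantize the tracked fish position into labelled zones and emit that zone's button. The layout and names below are invented for illustration, not the stream's actual mapping.

```python
# Map a fish's tracked (x, y) pixel position to a virtual-controller button
# via a 3x3 grid of zones overlaid on the tank image.
BUTTONS = [
    ["left",  "up",   "right"],
    ["punch", "none", "kick"],
    ["left",  "down", "right"],
]

def button_for(x, y, width, height):
    """Quantize a pixel position into the 3x3 button grid."""
    col = min(int(x / width * 3), 2)
    row = min(int(y / height * 3), 2)
    return BUTTONS[row][col]

# Fish near the top-left corner of a 640x480 tank presses "left";
# a fish hovering in the middle presses nothing.
print(button_for(50, 40, 640, 480))    # left
print(button_for(320, 240, 640, 480))  # none
```

The real system would run this per video frame against the fish tracker's output, but the core idea is just this quantization step.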
w-ll 3 days ago 0 replies      
Well, given that it's Street Fighter, my best strategy was always to just mash random buttons.

I do wonder if he chose the fish at random or tried to find more active fishies.

eklavya 3 days ago 2 replies      
I don't get it, can someone please tell me what is going on?
level09 3 days ago 0 replies      
Humans are pattern recognition animals, we like to look at anything random and extract meaningful patterns from it.

I see this game as a clear example of that, where (random) fish movement is fed to a computer game and translated into gameplay (pattern). I fail, though, to conclude anything particularly interesting if one fish wins the game :)

Natsu 3 days ago 1 reply      
Hypnotic. Aquarius can't seem to lose.
golergka 2 days ago 0 replies      
What I'd really want to see is two monkeys playing the game against each other for the treat. If given some time, I believe that they can get better than most humans.
pawn 2 days ago 0 replies      
I had a dream one time when I was a kid that I trained a puppy to play Street Fighter 2. I never imagined it would ever be linked to anything this close to reality.
coldcode 3 days ago 2 replies      
I wonder now if one could train chimpanzees to play the game successfully.
amatera 2 days ago 0 replies      
If one of the fish managed to pull off a Hadoken, it should get a job at Konami (or at least a nice spot on a "Konami" desk).
zeeshanm 2 days ago 0 replies      
Would be interesting to observe whether some pattern emerges in the fish movements as more data is collected over time.
aaronm14 3 days ago 3 replies      
Anyone know if these guys making these games are using some kind of library for the image processing or what? Seems pretty complicated to do
sagnew 3 days ago 1 reply      
Haha this is awesome! First a couple of HackNY fellows make Fish Plays Pokemon, now this! I can't wait to see what fish-related fun comes next.
spiritplumber 3 days ago 0 replies      
what's missing from "fish plays game" is some sort of reward for the fish. I wonder how clever they can get.
louhike 3 days ago 0 replies      
These fishes are more entertaining than the Pokemon game which was played through the chat room.
thestonefox 2 days ago 0 replies      
what about the parody of http://twitch.tv/garlicplays
antidamage 3 days ago 0 replies      
I was actually hoping for a writeup on the differences between regular fish and fish in SFII, but this is also good.

Next we need kittens versus fish. Kittens are much more active.

lowlevel 3 days ago 0 replies      
This is the best thing I've seen all year.
grej 3 days ago 0 replies      
If I could upvote you 10 times for the most ridiculously useless (but cool) thing I've seen on HN, I would. You win HN for today sir.
andersthue 3 days ago 0 replies      
This is the answer to life, the universe and everything!
gzur 3 days ago 2 replies      
I thought I was going to see Phil Fish fight himself, somehow.
codr 2 days ago 2 replies      
Just gotta say this is a strange outcome of our technological developments. I mean, think of the energy that was spent making this work; it could've been spent doing something, I dunno, fucking useful?!

It's almost as bad as watching two "grown-ups" play Street Fighter lol.

Feynman Lectures on Physics now free online
348 points by silenteh  9 hours ago   49 comments top 9
paulvs 45 minutes ago 0 replies      
Just chapter 1 contains answers to many questions that I've had in the back of my mind: Why does water ice expand when it melts? If water ice is a crystalline structure, how can it vary in temperature (e.g. -5 degrees to -10 degrees)? A very good read.
rdxm 5 hours ago 3 replies      
every now and then i go into the library and read a chapter out of my hardcover set of these. my kids do the "geeez dad" thing when i make them sit with me and look at them.

classics.. to be sure....

edit: i almost feel like these shouldn't be something that gets digitized.....this knowledge and its presentation belongs in a tactile medium...

cjrd 7 hours ago 2 replies      
Wow - this is awesome. Anybody interested in helping me map out a dependency graph of the concepts in the Feynman Lectures?
leephillips 8 hours ago 1 reply      
These books show more than anything else why Feynman is so revered among physicists as a teacher. An introductory course in physics, simple yet demanding, and shot through with Feynman's unique approach and personality.
k-mcgrady 5 hours ago 4 replies      
Is this worth reading for someone without a particular need to understand physics in depth? What I mean is if I take the time to read these will I learn anything useful to someone not pursuing a career in physics or related field?
CountHackulus 8 hours ago 3 replies      
Disappointed it's explicitly not for download, but it's still excellent to be able to have access to all of these.
allegory 5 hours ago 0 replies      
Wonderful news, but particularly sickening for me as I fished out £130 for the hardback volumes last year!

Absolutely great books however!

I've learned a lot already from those books.

Also, the "For the Practical Man" (algebra, geometry, trig, arithemtic) series of books on mathematics that Feynman started his career with. They are hard to get hold of and expensive but the calculus book is wonderful if incredibly dense and written in an early 1900's style!

Those, a cheap Casio calculator, a box of pencils and some school exercise books have taught me more than a university degree and years of industry experience.

Edit: found a legitimate PDF of "Calculus for the practical man" http://physsocyork.co.uk/notes/J.%20E.%20Thopmson--Calculus%...

josealicarte 8 hours ago 0 replies      
Great lectures, very useful :) to all freshers
YouTab: Automatically get chords for music
348 points by yoodit  3 days ago   88 comments top 32
yoodit 3 days ago 13 replies      

I've been hard at work on a project that I would like to share with you. It's called YouTab and it's what I believe is a great way to sync lyrics and chords with music. The smart guys I work with use a nifty algorithm to "listen" to the music and in a lot of cases it does a really good job in getting the chords. But since technology has its limits, there's an editor application that lets you fix what is wrong.

I am hoping that this will develop into a useful resource for musicians and music lovers and I'd love to hear what you think about it and get ideas as to what you might like to see next.

Thank you for taking the time to read this.

Jemaclus 3 days ago 0 replies      
This is really nice. There are a number of songs whose chords I can't find, and this one came up with (at the very least) a starting point for figuring it out. I like how it tracks the beat and shows the waveform, and I especially like having the video play in the bottom right so I can watch as I play.

Very cool. The only nitpick is a copy tweak. Throughout the app, it refers to itself as "us" or "our" ("Working our magic") and then almost immediately after as "me" ("It takes me about 30 seconds.") You should consider unifying the pronouns so that either you're always using first person, or you're always using the royal "we".

Otherwise, this is pretty rad. I can see myself using this to practice some new songs that come out.

Hytosys 3 days ago 0 replies      
Both this and Chordify are really awesome endeavors! However, I find them both to be erratic in accuracy to the same degree. Many times, a major in a simple I-IV-V pattern will turn into a minor, or vice versa, or a simple major will excitedly be read as a major 7th. It must be a huge pain in the ass trying to pluck out these harmonics and to accommodate for all sorts of wacky instruments, so I'll let it slide! Both services are tremendous if only for getting the initial framework for a song and figuring out some of the incorrect chords yourself.

Does YouTab have a "confidence" rating for each chord? I don't know if it'd be the best UX to include that number for each chord (and maybe even alternate chord suggestions), but there are times when I'm simply playing along with the song incorrectly and it takes me a couple amateur minutes to correct the one chord that Chordify got wrong.

Great stuff, anyway!

abakker 3 days ago 3 replies      
I'm pretty impressed with this - I purposely fed it a song I thought would kill it ("Fuzz Universe" by Paul Gilbert) - it did an impressive job of capturing many of the underlying chords, while ignoring the lead lines over the top. I notice that it is not really great at capturing very fast chord changes, and has some trouble with varying time signatures, but great first effort. It would be pretty cool if you could upload your own MP3 to it, and get a result back - that way you could generate the output off a recording of yourself to distribute to bandmates.

Edit: Later, that song did kill it, as the changes got faster/harder.

Also, it doesn't seem to have a complete set of possible chords - one song to check would be "A Hard Day's Night" by the Beatles. It has a difficult and distinct first chord which might be valuable to test against.

ganeumann 2 days ago 0 replies      
Awesome service. I'll definitely be using it.

A question about your plans: you describe annotating as 'contributing to the community' but your terms of service say that only you, not the community, have a license to my copyright on the annotations. You also say that you may one day charge fees.

There have been cases (notably http://en.wikipedia.org/wiki/Gracenote_licensing_controversy) where users have built a database that a company has then claimed as its own and profited from, to the exclusion of the users. So, the question: are the user-contributed annotations open source and licensed as such? And, if not, why would I contribute annotations to a wiki that I may later no longer be allowed to access?

jameshart 3 days ago 0 replies      
Been having a lot of success with Capo [1] recently - excellent beat and chord detection (though it often overcomplicates simple fifths and sus4s assuming they're much more full voiced than they are); also provides a time/frequency intensity view that you can use to pick out melody lines which it automatically translates into tab.

[1] http://supermegaultragroovy.com/products/capo/

ChrisMac 3 days ago 2 replies      
I'm using Firefox and it kept crashing on me on about half the songs I tried.
neonscribe 3 days ago 0 replies      
I tried it with a song that is very familiar to me, "Antonio's Song" by Michael Franks, using the top hit in Videos for this Google search. It is in 4/4 time and the beat doesn't vary at all. It has five different actual chords: Am7, A7, Bm7b5, Dm7 and E7, and the pattern of chord changes is quite conventional in a verse-chorus structure. The algorithm did a so-so job of determining the chords. It rarely noticed that they were seventh chords, instead identifying them as Amin, Bdim, Dmin and E. It appears to rely strongly on the bass part. In one case there was a C# passing tone in the bass between an A7 and a Dm7 chord that was identified as a C#aug chord. I didn't try the editor. I guess this would make it easier than entering all the chords from scratch, but it struck me that there was still a lot of manual work to make it accurate and usable.
mcnape 3 days ago 1 reply      
Hey, great job on the website! I have one small criticism (in addition to others already listed here). I put in a song that was in the key of B. The most-used chords of this key are B, E, F#, and G#m - as I'm sure you're well aware. However, the song's chords were detected as B, E, F# and Abm. While technically correct, as Abm and G#m are the same chord, the convention is to list Abm as G#m in this case. I believe this is to avoid mixing sharps and flats in the written chord forms. Written chords should either all be in flats or sharps, rarely if ever mixed. Certain keys are listed with sharps, and others with flats. Here are the most common ones:

C - n/a
D - sharps
Eb - flats
E - sharps
F - flats
F# - sharps
G - sharps
Ab - flats
A - sharps
Bb - flats
B - sharps
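The convention described above is mechanical enough to encode. A minimal sketch in Python (the key-to-accidental table is copied from this comment; the helper function and its enharmonic maps are my own illustration, not anything YouTab actually does):

```python
# Preferred accidental spelling per major key, per the table above.
KEY_ACCIDENTALS = {
    "C": None, "D": "sharp", "Eb": "flat", "E": "sharp", "F": "flat",
    "F#": "sharp", "G": "sharp", "Ab": "flat", "A": "sharp", "Bb": "flat",
    "B": "sharp",
}

# Enharmonic equivalents: flat spelling -> sharp spelling, and the reverse.
FLAT_TO_SHARP = {"Db": "C#", "Eb": "D#", "Gb": "F#", "Ab": "G#", "Bb": "A#"}
SHARP_TO_FLAT = {v: k for k, v in FLAT_TO_SHARP.items()}

def respell(chord, key):
    """Respell a detected chord root to match the key's convention."""
    if len(chord) >= 2 and chord[1] in "#b":
        root, quality = chord[:2], chord[2:]
    else:
        root, quality = chord[:1], chord[1:]
    pref = KEY_ACCIDENTALS.get(key)
    if pref == "sharp" and root in FLAT_TO_SHARP:
        root = FLAT_TO_SHARP[root]
    elif pref == "flat" and root in SHARP_TO_FLAT:
        root = SHARP_TO_FLAT[root]
    return root + quality

# In B major, a detected Abm should be written G#m.
print(respell("Abm", "B"))  # G#m
```

This only respells roots; a fuller version would also respell chord extensions and handle keys not in the table.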

phpnode 3 days ago 1 reply      
I couldn't get this to work, after selecting a song it appears to work for ~30 seconds and replies with "This song cannot be analyzed because it is not set as public on YouTube." or "This song cannot be analyzed because it is not set as public on SoundCloud."
subdane 3 days ago 1 reply      
Nice site! (But I was disappointed when I realized there wasn't actually tablature).
imacomputer2 2 days ago 0 replies      
I love the concept, but every version of the song I looked for gave me this message "This song cannot be analyzed because it is not set as public..."
dyeje 3 days ago 0 replies      
Very cool! I love the way it just analyzes the audio. It'll be very useful when trying to figure out how to play obscure songs that don't have tabs available on the web.
anigbrowl 3 days ago 0 replies      
Works very nicely even with difficult program material (I listen to a lot of weird electronic stuff), but it would be nice if it had export-to-MIDI or suchlike.
beefman 3 days ago 1 reply      
woutervdb 3 days ago 0 replies      
Great site, awesome project, good stuff. However, I found some kind of a "bug": http://i.imgur.com/elpJSS8.png
At this point, you can't see what "kind" of F the first F is. They are clearly different; however, I'm not a musician, so I wouldn't know what it should be.
tjr 3 days ago 1 reply      
Wow, at a glance, this appears to be the best auto-chord-transcriber I've used. I'm getting much more usable data out of this than others that I've tried.

Do you intend for it to be able to hear altered chords?... #5, b5, etc? It didn't seem to be catching that on a song I submitted, but it got the root and third correct, which is still helpful.

pcorey 3 days ago 0 replies      
This is very very cool. How exactly are you getting the audio from youtube? I'm assuming you somehow pull the audio from youtube in your backend (how?), analyze it, use that analysis to build your display and then sync that with the playing youtube video?
hoelle 3 days ago 0 replies      
Wow. Great work! Dropping lyrics in and then adjusting the timings was easy and fun.

I wasn't aware at first that 'contributing' to a song would be public. This is cool and intuitive, although I'd prefer to contribute anonymously.

medell 3 days ago 0 replies      
Clap, clap, clap. This is incredible and much needed after frustratingly navigating through the constant up sell of the poorly designed tab sites out there. You know who I'm talking about. I would pay for this.
circa 3 days ago 0 replies      
I just signed up and have only looked at it for about 20 seconds. Seriously impressed with the chords it has found so far. Great job! Can't wait to check out more later.
sdotty 2 days ago 0 replies      
I love it! Good stuff! The cursor beating in time with the music is a nice touch.
troymcginnis 3 days ago 0 replies      
That's really awesome! Obviously not bullet proof and it doesn't hit everything 100% of the time but the concept is awesome.

Great work.

saurabh_math 3 days ago 0 replies      
Awesome project. Personally, I feel there should be more options for discovering music, like languages etc. That will also help with user engagement.
SnacksOnAPlane 3 days ago 0 replies      
This is amazing! I've been wanting something like this for so long.
dave_chenell 3 days ago 0 replies      
This is awesome. The first song I tried worked perfectly. Well done!
chrionsr 3 days ago 0 replies      
WOW! Great for someone like me who likes to produce in their spare time! Thanks guys!
freerobby 3 days ago 0 replies      
This is fantastic.
DanielBMarkham 3 days ago 0 replies      
One idea for a feature: you might want to include the ability to move the chords up or down a few half-tones. Some folks will use various tricks to move their instrument up or down a few notches in order to make the chords easier to play. The tool needs to be able to adjust for that.
mglauco 3 days ago 0 replies      
Nice work!
ebbv 3 days ago 1 reply      
This is really cool. The thing that's obviously missing to me is chord charts to go along with the chord names. Despite the fact that people can look up the chord elsewhere, the tool would be much more useful to novice players (who are probably the majority of likely users) to simply provide those charts.
tech-no-logical 3 days ago 2 replies      

   sorry this video has been removed from youtube   sorry this video has been removed from youtube   sorry this video has been removed from youtube   sorry this video has been removed from youtube
ad nauseam. nice project, but doomed to fail because of this.

Why the downvotes? Even their TOS sort of acknowledges it will not work:

YouTab respects your copyrights and the copyrights of others and therefore requires that you only annotate tablature of your own music or of music that you are licensed to annotate (such as public domain music).

Show HN: Stellar Git for PostreSQL and MySQL
309 points by obsession  1 day ago   73 comments top 23
robert_tweed 1 day ago 4 replies      
Generally the hardest thing with version control on a database (for an evolving codebase) is separating unrelated changes - such as schema changes vs content updates - and branching and merging those changes in sync with the code dependencies. Another issue is non-destructively replaying development changes into test/production environments.

So for example, you might have a feature branch that includes some schema changes and some value modifications, and a content branch that includes a bunch of inserts into a few content tables that happen to include foreign key references to each other (so you need to maintain referential integrity when replaying those updates/inserts).

I don't see anything in the description that indicates this tool addresses those problems. For me, those are really the only problems that a DB version control system ought to be focused on. Speed of snapshotting is not all that important in a development environment as you typically work on a cut-down dataset anyway. A minute or so to take a snapshot a few times a day isn't a huge deal, whereas taking more frequent snapshots doesn't seem like something that adds any value, if it doesn't address any of the other problems.

mbrock 1 day ago 5 replies      
I wish projects like these would always include some basic info in their README about: (1) how it works, and (2) how it might fail.
amirmc 1 day ago 2 replies      
If anyone's interested in git-like storage systems then it's worth checking out Irmin [1]. Previous discussion is at [2].

Excerpt: "Irmin is a library to persist and synchronize distributed data structures both on-disk and in-memory. It enables a style of programming very similar to the Git workflow, where distributed nodes fork, fetch, merge and push data between each other. The general idea is that you want every active node to get a local (partial) copy of a global database and always be very explicit about how and when data is shared and migrated

Irmin is not, strictly speaking, a full database engine. It is, as are all other components of Mirage OS, a collection of libraries designed to solve different flavours of the challenges raised by the CAP theorem. Each application can select the right combination of libraries to solve its particular distributed problem."

[1] http://openmirage.org/blog/introducing-irmin

[2] https://news.ycombinator.com/item?id=8053687

falcolas 1 day ago 2 replies      
So, it appears to just copy tables around within the database. I wouldn't want to use this on a DB over a few MB in size. Sure, restores are "fast" (a table rename), but copies are not so much.

I can't imagine this would be kind to a production database (lots of cleanup from copied & deleted tables), and it would consume a lot more space than a gzipped logical backup of the tables in question.

m3h 1 day ago 0 replies      
Why does the author compare it to Git? The functions this software performs are nowhere near those performed by Git. Nor is it a proper version control system.
lucian1900 1 day ago 4 replies      
This sort of thing is useful, but already supported by Postgres through transactional DDL. Migrations that fail will have their transaction reverted.
Gigablah 1 day ago 1 reply      
From the code:

    INSERT INTO %s.%s SELECT * FROM %s.%s
Yeah, good luck with that.

bronson 1 day ago 0 replies      
Nice. I wrote a similar tool for Rails / ActiveRecord models: https://github.com/bronson/table_differ

It takes snapshots and computes diffs between snapshots or the live database. It lets me drop and re-import some of my app's tables, then compute the minimum set of changes between the previous import and the new import. I wouldn't call it "git for ActiveRecord models" but it appears to be similar to this project.

Comments welcome! The docs, as always, could use some help.

squigs25 1 day ago 6 replies      
The implications for this extend beyond backing up your database.

Imagine a world where daily time-series data can be stored efficiently. This is a lesser-known use case, but it works like this: I'm a financial company and I want to store 1000 metrics about a potential customer. Maybe the number of transactions in the past year, the number of defaults, the number of credit cards, etc.

Normally I would have to duplicate this row in the database every day/week/month/year for every potential customer. With some kind of git-like storing of diffs between the row today and the row yesterday, I could easily have access to time series information without duplicating unchanged information. This would accomplish MASSIVE storage savings.

FWIW, efficiently storing time-series data is a big problem at my company. No off-the-shelf solution makes this easy for us right now, and we would rather throw cheap hard disk at the problem than expensive engineers.
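The diff idea can be sketched in a few lines of plain Python (the metric names are hypothetical; a real system would also need timestamps, a richer deletion scheme, and periodic full snapshots to bound replay cost):

```python
def diff(prev, curr):
    """Return only the fields of curr that differ from prev.

    A removed field is recorded as None (assumes None is never a real value).
    """
    changed = {k: v for k, v in curr.items() if prev.get(k) != v}
    changed.update({k: None for k in prev if k not in curr})
    return changed

def replay(base, diffs):
    """Reconstruct the latest row by applying stored diffs in order."""
    row = dict(base)
    for d in diffs:
        for k, v in d.items():
            if v is None:
                row.pop(k, None)
            else:
                row[k] = v
    return row

# Hypothetical daily metrics for one customer: only one field changed,
# so only that field needs to be stored for day 2.
day1 = {"num_transactions": 120, "num_defaults": 0, "num_cards": 3}
day2 = {"num_transactions": 125, "num_defaults": 0, "num_cards": 3}
print(diff(day1, day2))  # {'num_transactions': 125}
```

With 1000 mostly-stable metrics per customer, storing one full row plus per-day diffs like this is where the claimed storage savings would come from.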

crad 1 day ago 0 replies      
Maybe I'm missing something, but I didn't see anything with regard to indexes, users, stored procedures, views or what not.

Seems like it's for table schema snapshotting in a database without any external storage.

Browsing through the code, I see that it's highly table centric using SQLAlchemy.

swehner 1 day ago 1 reply      
Line 53 of https://github.com/fastmonkeys/stellar/blob/master/stellar/o... is

                CREATE TABLE %s.%s LIKE %s.%s
This made me think of a table called

                create table `a; drop table users;`  (col int);
... which works in mysql.

I don't know if the stellar code will trip over something like this. But mysql (SQL) shouldn't even allow names like that.
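The usual defense is to treat identifiers as data and quote them instead of interpolating them raw. A hand-rolled sketch for MySQL (illustrative only, not Stellar's actual code; in practice you'd prefer a quoting facility provided by your driver or ORM):

```python
def quote_mysql_identifier(name):
    """Quote a table/column name for MySQL by backtick-escaping it.

    Backticks inside the name are doubled, so a hostile name like
    "a; drop table users;" becomes an ordinary (if ugly) identifier
    rather than injected SQL.
    """
    if "\0" in name:
        raise ValueError("NUL byte not allowed in identifier")
    return "`" + name.replace("`", "``") + "`"

hostile = "a; drop table users;"
sql = "CREATE TABLE %s LIKE %s" % (
    quote_mysql_identifier("snapshot_" + hostile),
    quote_mysql_identifier(hostile),
)
print(sql)
```

The semicolon survives, but only inside a quoted identifier, where it is inert.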

edem 10 hours ago 0 replies      
Folks might confuse this with the Stellar currency (stellar.org). You might want to make the distinction clear in the title.
codeoclock 1 day ago 1 reply      
Unfortunate name, excellent project :)
jdc0589 1 day ago 0 replies      
Shameless plug for mite: https://github.com/jdc0589/mite-node
Simple migrations that take advantage of everything you already know about git and SQL, plus some other cool stuff.

It's not too mature yet, the readme is mediocre at best, and it has some issues that will popup when working with a team, but it's pretty damn useful.

jimktrains2 1 day ago 0 replies      
While not exactly the same thing, I've recently found and started using https://github.com/nkiraly/DBSteward to specify schema and then store the spec in my repo with the code. It also supports diffing the current schema against a previous one, so that nice upgrade sql scripts can be generated.
iurisilvio 1 day ago 0 replies      
I expected something related with Stellar coins.

Looks like a good project, I definitely want an easy way to manage development databases.

jamesmoss 1 day ago 1 reply      
Interestingly, they don't show MySQL benchmarks in the readme; I suspect it might be because the MySQL implementation is pretty basic.


level09 1 day ago 0 replies      
This is a nice project. I used to have my database dump tracked by git (in binary mode); any time my DB changed, I had to overwrite the file with the new dump and include it in the commit.

I'm just wondering if this project offers anything special/better than the method I described.

iso8859-1 1 day ago 1 reply      
How does this compare to time-travel queries? http://en.wikipedia.org/w/index.php?title=Temporal_database#...
JohnDotAwesome 1 day ago 2 replies      
How does it work? Where does it breakdown? Why are these things not in the README?
ZenoArrow 1 day ago 0 replies      
Just a small correction; it's not PostreSQL, it's PostgreSQL.
josephcooney 1 day ago 0 replies      
Typo? Shouldn't it be PostgreSQL not PostreSQL?
mosselman 1 day ago 0 replies      
Looks very nice, could you put up some practical examples?
299 points by daigoba66  2 days ago   163 comments top 30
MichaelGG 2 days ago 5 replies      
I wonder what the search story is. One technology that really does deliver, and has totally impressed me, is Lucene/ElasticSearch. I'm used to all sorts of hyperbolic claims, but holy shit, ElasticSearch just delivers. We tossed in about 40M documents from a SQL Server DB, and not only did it require fewer resources (a 30%? reduction in size), the queries are beyond anything that'd be approachable using SQL Server. And I've only touched the surface, using it as a pure plug-n-play setup.

With DocumentDB, not having a local version severely limits what I'd consider this for. Losing that flexibility is a big deal. Maybe this is just a limited preview and they haven't built the management side for local installs.
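For a flavor of why the query DSL feels out of reach for SQL Server: a single Elasticsearch request can combine fuzzy full-text matching with aggregations. A sketch of the request body (index, field names, and values are all hypothetical):

```python
import json

# Hypothetical example: fuzzy-match a misspelled term against a title field
# and bucket the hits by year, in one request. In SQL Server this would need
# a full-text index plus a separate GROUP BY pass, with no fuzziness for free.
query = {
    "query": {"match": {"title": {"query": "databse", "fuzziness": "AUTO"}}},
    "aggs": {"by_year": {"terms": {"field": "year"}}},
    "size": 5,
}

body = json.dumps(query)
print(body)
# POST this to http://localhost:9200/<index>/_search with any HTTP client.
```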

ceejayoz 2 days ago 5 replies      

> Want to edit or suggest changes to this content? You can edit and submit changes to this article using GitHub.

Pretty remarkable given Microsoft's approach to open source in the 1990s that they're now using a service built around Linus's bespoke open source version control system to allow people to suggest changes to their documentation.

jpalomaki 2 days ago 1 reply      
Ad hoc queries using SQL-like syntax. No need to define indexes.

JavaScript execution within the database. Stored procedures, triggers and functions can be written in JavaScript. "All JavaScript logic is executed within an ambient ACID transaction with snapshot isolation. During the course of its execution, if the JavaScript throws an exception, then the entire transaction is aborted."

Pricing is based on "capacity units". Starts with $22.50 per month (this includes 50% preview period discount). One capacity unit (CU) gives 10GB of storage and can perform 2000 reads per second, 500 insert/replace/delete, 1000 simple queries returning one doc.

In order to see pricing details, change the region to "US West":http://azure.microsoft.com/en-us/pricing/details/documentdb/

Very interesting addition to Microsoft offering. I was actually just yesterday wondering if they have any plans for this kind of service. Table Storage is quite primitive and Azure SQL on the other hand gets expensive when you have lots of data.

One potential "problem" with this is the bundling of storage capacity and processing power. If I understand this correctly, I would need to buy 10 CUs per month to store 100GB of data even if I'm not very actively using that data.

streptomycin 2 days ago 1 reply      
"DocumentDB utilizes a highly concurrent, lock free, log structured indexing technology to automatically index all document content. This enables rich real-time queries without the need to specify schema hints, secondary indexes or views."

How does that work? Isn't that going to incur a major performance hit? If not, why don't other databases get rid of indexes?

Also, if anyone from MS is reading, http://azure.microsoft.com/en-us/documentation/articles/docu... links to http://azure.microsoft.com/en-us/documentation/articles/docu... which is a 404 error.

whalesalad 2 days ago 3 replies      
I liked everything about it until I saw the API for the Python client. What a catastrophe.

I pray Microsoft is looking for Python developers: https://gist.github.com/whalesalad/2142f0075c6896f4547c

fineline 2 days ago 3 replies      
"All JavaScript logic is executed within an ambient ACID transaction with snapshot isolation. During the course of its execution, if the JavaScript throws an exception, then the entire transaction is aborted."

Have I missed something, or have MS delivered a novel and valuable feature? I'm not aware of support for transactions across documents in other NoSQL platforms. I'd be grateful if someone has any experience or better information in that regard, thanks.

bkeroack 2 days ago 5 replies      
...and MS goes after MongoDB. It would be nice to see an on-premises version, if only to compare performance/consistency with Mongo.
luuio 2 days ago 1 reply      
A quick comparison between DocumentDB vs MongoDB: http://daprlabs.com/blog/blog/2014/08/22/azure-documentdb/
orand 2 days ago 2 replies      
If I understand correctly, their multi-document ACID transaction support is a big deal. The only other NoSQL/NewSQL systems I'm aware of with that ability are FoundationDB and Google Spanner/F1.
pokstad 2 days ago 3 replies      
Sounds very similar to CouchDB. Server side Javascript written by the user, and an HTTP interface. The ability to adjust consistency is really neat.
allegory 2 days ago 4 replies      
No local installation. No banana.

I wouldn't tie a product to a single cloud vendor.

lubos 2 days ago 2 replies      
What are the limits of DocumentDB? You know, like max size of database, max size of document, max number of documents per database, max. number of attributes per document, max. number of databases per DocumentDB account.

What's the max. duration of database query, max size of query result.

What kind of performance can be expected, does it decrease as the size of database increases or it remains constant?

I'm going to wait a few days until hype settles.

jnardiello 2 days ago 2 replies      
And that is a creative product name. Well played MS.
mallipeddi 2 days ago 0 replies      
What are the size limits on a collection? Docs mention transaction support is offered only within a collection. Is a collection essentially limited to a single physical machine in the background or does it span across machines? It looks like in Standard Preview, the max collection size is 10GB.
seanp2k2 2 days ago 1 reply      
Interesting: https://github.com/Azure/azure-documentdb-python it's empty for the moment, but glad to see first-party support for Python
reubenbond 2 days ago 0 replies      
The @DocumentDB twitter links to a tutorial on DocDB: http://www.documentdb.com/sql/tutorial
cvburgess 2 days ago 3 replies      
Does anyone know how this compares to AWS DynamoDB[1] ?

[1] https://aws.amazon.com/dynamodb/

chippy 2 days ago 0 replies      
Spatial queries and indexing. Most data has some location component. I didn't see anything with this. Is it in there, or planned?
andrea_s 2 days ago 6 replies      
Am I alone in thinking that SQL-like syntax is actually a step backwards from building query documents programmatically (MongoDB style)?
petilon 2 days ago 3 replies      
So does it run on a cluster? If so which of Consistency, Availability and Partition tolerance does it NOT offer? (See CAP theorem)?
yxhuvud 2 days ago 1 reply      
It would have been nice to see some actual details of how it works so that it can be compared to the competition.
gamesbrainiac 2 days ago 0 replies      
I find it surprising that DocumentDB wasn't already a copyrighted name. ;)
talles 2 days ago 1 reply      
Can I use DocumentDB out of Azure (hook my own)?
poolpool 2 days ago 1 reply      
I wonder if this is built on JetDB
sarciszewski 1 day ago 1 reply      
Leave it to Microsoft to give it the most generic sounding name possible.
nandkishiee 2 days ago 0 replies      
Sick! Love it
utunga 2 days ago 3 replies      
Another case of Not Invented Here syndrome from Microsoft. One wonders why they couldn't just take the open-source and very well architected RavenDB (http://ravendb.net), a .NET document DB, and provide first-class support for it within Azure.
cbsmith 2 days ago 3 replies      
'cause what the world needs is another proprietary NoSQL solution.
hackerkushal 2 days ago 0 replies      
THIS THING IS A BEAST!! It is absolutely badass.
Nux 2 days ago 0 replies      
A new "cool", locked-in service served on a silver platter by Microsoft to the brainwashed.

Everybody else uses open source on premises or their cloud of choice.

Show HN: A virtual whiteboard for working or teaching remotely
306 points by MarkMc  2 days ago   139 comments top 54
MarkMc 2 days ago 8 replies      
This is a little 'scratch-your-own-itch' project I started working on about 9 months ago. The front end is GWT, the back end is Java servlets. The database was originally MySQL but I switched to Prevayler for performance reasons.

My YouTube video gives a nice overview of the benefits of a virtual whiteboard: http://youtu.be/MDEHFHG1l3Y

What does Hacker News think?

wodow 2 days ago 7 replies      
I love it - works perfectly... but immediately the feature creeper in me has kicked in. Killer features:

1. Be able to type in text (any font will do)

2. Drag-and-drop an image to be able to annotate over the top of it (would be an excellent design task, e.g. using screenshots of some work-in-progress).

3. Movable objects?

senko 2 days ago 1 reply      
Very nice. Seems to be vector-based (ie. how the eraser and undo/redo work, and allowing things like pan&zoom).

I'm the author of a similar tool, https://awwapp.com/ , which is bitmap-based, ie the eraser works as it'd on a physical whiteboard, and there's no zoom (or undo/redo).

Great start, looking forward to seeing the future progress!

krmmalik 2 days ago 1 reply      
Oh my god. I love this! I've been waiting for a simple polished solution for this for years and years. When Skype came along and then we got Skype extensions, I thought they'd solve this, but oh no. Every solution that has come out so far has been mediocre at best. I'm glad someone decided to tackle this properly. I haven't had a chance to try it on iPad, but from the homepage it seems it'll work fine there as well?
Nemi 2 days ago 1 reply      
This is freaking awesome. You probably know this but it does not work in IE on windows 8 with a touchscreen. Chrome works fine however. I am guessing it is something to do with the touch events that IE has that are different?
kenny_r 2 days ago 0 replies      
The brain-shaped thought bubble filled with blue gears is _very_ reminiscent of the devopsdays[1] logo...

[1]: http://devopsdays.org/

valar_m 2 days ago 0 replies      
SSL, private boards, and some kind of assurance that what we're drawing truly is being erased.

Add that, and we'll pay you money to let us use it. I suspect other companies will, too.

8ig8 2 days ago 0 replies      
This is handy for remote tutoring. I dropped a math worksheet image onto the whiteboard. Imagine working remotely with a student. Here's a static 'snapshot'.


Tried this years ago over dialup with some MS app. The technology was flaky and kept getting in the way. This works great.

oneweirdtrick 2 days ago 0 replies      
You hear that? That's the sound of Google's Business Strategy team cueing Wagner's 'Ride of the Valkyries'.
yaddayadda 1 day ago 0 replies      
@MarkMC - Any idea if your browser whiteboard will work with the TouchPico (https://www.indiegogo.com/projects/touchpico-turn-any-surfac...), which is an inexpensive touchscreen projector? (I'm just thinking these would be a great combination of technologies.)
gldnspud 2 days ago 1 reply      
This tool was pretty responsive to draw with using a tablet, and I'm glad the pen width isn't very large.

I love using a Wacom tablet for drawing diagrams, and wish there was a good shared whiteboard tool that supported pen pressure.

I didn't know if this is possible at first, but a quick search revealed http://muro.deviantart.com/, which supports pen pressure using a plugin.

Any chance you might add that kind of flair to whiteboardfox?

afaqurk 2 days ago 0 replies      
Very nice!

My friend's parent is disabled and we thought about doing something like this and hooking it up to a touch-screen monitor for them.

That way the parent can see messages from her kids (in a different city) and vice versa without any effort or typing.

jleask 2 days ago 1 reply      
That's something I've had on my todo list for ages. As others have said though, it'd be great to be able type text as drawing letters with a mouse always takes ages and looks rubbish.

SVG export would be nice too, perhaps a paid for extra :)

jasonkester 2 days ago 1 reply      
Nice first cut. It's actually surprising that it took a full seven years for anybody else to show up in Twiddla's [1] space with a true HTML5 whiteboard. Back when we started out, there were half a dozen commercial versions of this exact thing, but all in Flash or Java, and all trying to compete with WebEx.

I think you're on the right path positioning this for use in schools. That's our main use case too, replacing overhead projectors in the classroom, and ruining snow days for an entire generation of kids.

Good luck!

[1] http://www.twiddla.com/

billybofh 2 days ago 0 replies      
Do you have any plans to have a 'broadcast' mode where the person who creates the whiteboard is the only one who can draw on it? I can see itchy student fingers abusing it otherwise in a teaching context... ;-)
graeme 2 days ago 0 replies      
I use this kind of thing professionally, for distance tutoring. Looks very nice, draws smoothly. I like the easy "erase" function.

There's one thing stopping me from using it: the URL

I use jotwithme right now, and I can set a session name, then tell people to go there. Here, I have to get the URL from my iPad, send it to myself somehow, and then send it to the student.

That's not exactly hard, but it's annoying enough compared to jotwithme that I'd keep using that. But if you had that feature, I'd switch. Jot's erase feature isn't as good.

gtramont 2 days ago 1 reply      
Back in 2011, when I was starting with node.js, I built something I called 'Writeboard' https://github.com/gtramontina/Writeboard . It's not an active project anymore, but I've considered resuming work on it. A rewrite of it is hosted on Heroku, at http://writeboard2.herokuapp.com/ -- Check it out.
chrisweekly 2 days ago 3 replies      
Surprised to learn this is Java and long-polling. (These features scream Meteor / DDP to me.) Regardless, it's a v good start. Thanks for sharing!
megablast 2 days ago 0 replies      
Did something like this myself, a few years ago:


Except all you need to follow along is a browser.

soneca 2 days ago 1 reply      
Great work! My feature request: similar to the "insert pic", create an insert slide. You upload a .PPT and select which slide you want to use.

If teachers are going to use this, many of them will already have a PowerPoint to use.

Adding to this, an easy way to navigate between different whiteboards of the same author (so it is easy to go to the next slide).

Just my opinion about what might work.

unwind 2 days ago 1 reply      
I tried opening "my" whiteboard in a second tab in Firefox (on Windows). The second tab's view is distorted, it reminds me of having a badly programmed modulo register on the Amiga.

Which doesn't say much to most, I guess... It looks as if the scan lines are mis-aligned, i.e. as if some pixels are missing from each line, causing the resulting image to be slanted and distorted.

dharma1 2 days ago 0 replies      
google drawing -> setup share settings so anyone with link can edit -> scribble


devniel 2 days ago 0 replies      
My respect for using the Java stack, I like it. Here's another guy who tried to build his own whiteboard a few years ago: http://notephy.com . I'm just searching for solid support for audio recording with WebRTC to improve it.
hugozap 2 days ago 3 replies      
I would love to use this, but I don't have a Facebook account. I hope you'll support other login options.
shervinshaikh 2 days ago 0 replies      
This looks similar to Citrix's Talkboard http://www.citrix.com/products/talkboard/overview.html
bitJericho 2 days ago 1 reply      
I played around with something like this when I was a kid (it was a group of kids drawing anime characters if I recall). Every once in a while I search for something like that and I can never find it. Now it's back :D
jacquesm 2 days ago 0 replies      
My quick-and-dirty hack for a one way version of this is to open skype using screen sharing and then to run 'gimp' while using the chat & voice for instruction/questions.
DanBlake 2 days ago 1 reply      
I did something like this a few years ago, but we were forced to use flash- http://flockdraw.com
itry 2 days ago 2 replies      
josealicarte 2 days ago 0 replies      
This article was awesome. Teaching remotely is very effective because other people are busy; they don't have time to go to an institution.
orcinusorca 2 days ago 0 replies      
Very well done. I can see myself using this in the near future. Thank you for making this.
joebo 2 days ago 0 replies      
nicely done. I will try using it with my coworkers. The video was helpful with suggestions on zooming to draw text.
hoof_marks 2 days ago 0 replies      
Awesome with a tablet! You could add text support, and login with an email ID; I don't have Facebook!
free2rhyme214 2 days ago 0 replies      
My only suggestion is to make sure the youtube video plays in HD by default. Excellent work!
gavinpc 2 days ago 0 replies      
Doesn't work if cookies are blocked. Well, it is a "white board," anyway.
sidcool 2 days ago 1 reply      
Very impressed Mark. Nicely done. Do you mind telling the technology stack?
eldelshell 2 days ago 0 replies      
Very nice, but you just made me remember how hard it is to draw with a mouse.
Brandon0 2 days ago 1 reply      
This is very cool! Is there a way to zoom in on desktop? Mouse wheel perhaps?
har777 2 days ago 0 replies      
Great stuff ! I'll try to build my own version using socket.io :)
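(For anyone attempting the same: the server side of a whiteboard clone like this is essentially a relay that keeps each board's stroke history and rebroadcasts new strokes to the other clients. Below is a dependency-free sketch of that core logic; the class and method names are illustrative, and a real version would sit behind socket.io or raw WebSockets rather than this in-memory harness.)

```javascript
// Minimal in-memory relay: the essence of a shared-whiteboard server.
// Each board keeps its stroke history so late joiners can catch up.
class BoardRelay {
  constructor() {
    this.boards = new Map(); // boardId -> { strokes: [], clients: Set }
  }

  join(boardId, client) {
    if (!this.boards.has(boardId)) {
      this.boards.set(boardId, { strokes: [], clients: new Set() });
    }
    const board = this.boards.get(boardId);
    board.clients.add(client);
    // Replay history so the new client sees the current drawing.
    for (const stroke of board.strokes) client.send(stroke);
  }

  draw(boardId, sender, stroke) {
    const board = this.boards.get(boardId);
    if (!board) return;
    board.strokes.push(stroke);
    // Broadcast to everyone except the sender, who drew it locally.
    for (const c of board.clients) {
      if (c !== sender) c.send(stroke);
    }
  }
}
```

With socket.io, `join` would map to a room join on connection and `draw` to a `socket.broadcast.to(room)` on each draw event; the history replay is what makes a freshly opened board URL show the existing drawing.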
jiri 2 days ago 0 replies      
Very nice! I gonna use it myself for drawing using tablet on big screen!
j2kun 2 days ago 0 replies      
A simple chat functionality within the app would be nice...
ujjwal_wadhawan 2 days ago 0 replies      
A zoom in/out with mouse scroll would be good to have.
junyeeng 2 days ago 0 replies      
How long did it take to complete the project? Awesome work!
freebs 2 days ago 0 replies      
Making it easier to zoom would be nice. Maybe with scrolling?
dblacc 2 days ago 1 reply      
This is fantastic. How long is a whiteboard's lifetime?
johnmoore 2 days ago 2 replies      
Only works in Internet Explorer 9, doesn't work in IE 8. In some companies they only allow you to use IE, and only upgrade the browser if they install a new OS.
cdnsteve 2 days ago 0 replies      
Interesting, seems like TogetherJs or Prezi.
khrist 2 days ago 0 replies      
very nice, responsive. Great job. I was looking for such tool. Feature request, should allow copy paste :).
mentos 2 days ago 0 replies      
Might this use Firebase?
Duber 2 days ago 0 replies      
looks good to me, congratulations
samstave 2 days ago 0 replies      
There should be an "Omegle" version of this where you draw pics with strangers.
lazyant 2 days ago 2 replies      
great! only missing typing text
Multi-Datacenter Cassandra on 32 Raspberry Pis
276 points by zzzqqq  2 days ago   54 comments top 13
sgt 2 days ago 19 replies      
I'd be worried about just switching RPi's off. We recently got a Pi for the office to run as a dashboard - and after a couple of power cuts it corrupted the SD card.

Now I'm going to have to set up the system again, and I don't know whether this is going to happen again. The SD card that got corrupted was a Class 4 Kingston.

Maybe I'll look into a Sandisk (possibly Class 10?) next time. But I am worried that it's not the SD card's fault, but rather a combination of a journaling filesystem, an SD card and a sudden power outage.

Edited: Apologies, I realized now that the red button cuts power to the network switch, not to each individual Pi. But my concerns about the Pi and power cuts remain.
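(One common mitigation, sketched below: mount the SD card read-only so a power cut can never interrupt a write, and put the volatile paths on tmpfs. This is an illustrative fstab fragment, not a drop-in config; on stock Raspbian the root partition is typically /dev/mmcblk0p2, but check your own layout, and note that services writing elsewhere will need their paths redirected too.)

```
# /etc/fstab sketch for a read-only Pi (illustrative)
/dev/mmcblk0p1  /boot     vfat   ro,noatime          0  2
/dev/mmcblk0p2  /         ext4   ro,noatime          0  1
tmpfs           /tmp      tmpfs  defaults,size=64m   0  0
tmpfs           /var/log  tmpfs  defaults,size=32m   0  0
```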

thinkingkong 2 days ago 2 replies      
Is there a video we can see? Hitting the button, I'm imagining the circles showing some kind of re-sync animation.
coreymgilmore 2 days ago 0 replies      
Pretty cool. Would like to see it working (video/timelapse/gif)?

Also, any reason for not making the big red button randomly select a "datacenter" to take offline?

Idea: transition this into a 3 or 4 datacenter cluster.
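(For reference, the datacenter count is largely a keyspace-configuration matter: Cassandra places replicas per named datacenter via NetworkTopologyStrategy. A sketch in CQL; the keyspace and datacenter names here are made up, and in a real cluster they must match the datacenter names reported by the snitch, e.g. in cassandra-rackdc.properties.)

```sql
-- Illustrative: replicate a keyspace across three datacenters.
CREATE KEYSPACE demo WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'dc1': 2,  -- two replicas in datacenter "dc1"
  'dc2': 2,
  'dc3': 2
};
```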

crazypyro 2 days ago 0 replies      
I noticed the mention of FIRST and at the same time, noticed the red/blue color choice. I'm sure its just a coincidence, but still entertaining. Project looks awesome.
smoothpooper 2 days ago 0 replies      
Demoing a multi-cluster setup and simulating failure for various people was the hardest part for me. This will help so much. A video would be nice.
PanMan 2 days ago 1 reply      
How is the circle of lights set up? What does it show?
fasteo 2 days ago 0 replies      
Picture of the back of that wall !!
mkoryak 1 day ago 0 replies      
And here is the link to the "high res" picture of the setup (4000x2000): http://www.datastax.com/wp-content/uploads/2014/08/cluster_c...
rodvlopes 2 days ago 0 replies      
I see rpis everywhere... Are they self-replicating?
ribs 2 days ago 0 replies      
Wicked! I want to see a video.
rjurney 2 days ago 0 replies      
They need a custom designed enclosure with pretty lights.
yossarian314 2 days ago 0 replies      
bfrog 2 days ago 1 reply      
They say it was difficult to get a high-performance DB running on a 700MHz chip with 512MB of RAM. Perhaps it's just the wording, but that sounds like the opposite of high performance to me.
Announcing Calibre 2.0
261 points by cleverjake  1 day ago   87 comments top 13
vj44 1 day ago 7 replies      
I'm sure calibre 2.0 is a great technical feat, and kudos for all the work put into this product, but judging by the screenshot the user interface is just as clunky as in 1.0. This software does mostly everything I need to convert ebooks... but can you, the authors, please improve the UI?
dredmorbius 15 hours ago 1 reply      
I'll take a look at this, as my long rant on what's wrong with browsers[1] basically ends up with the admission that something along the lines of Calibre or Zotero is probably more of what I want from a reading app: the ability to manage a library of works, local, networked, or on the Web, with a highly uniform presentation (ignore virtually all document formatting in favor of my own preferences).

From my relatively light explorations of Calibre to date (v. 1.25 on Debian jessie/sid):

The UI is clunky. Especially when trying to edit / capture bibliographic information I've found it beyond frustrating.

The built-in readers are severely brain-damaged and I've found no way to change them. The PDF reader is complete and total fail, the eBook reader isn't much better, and I seem to recall that accessing HTML docs is similarly frustrating.

By contrast, I've been impressed by the Moon+Reader Android eBook reader, generally like the Readability online (Web) reader and Android app, and had found a fairly decent Debian eBook reader client -- fbreader. Its main disadvantage is in not having the ability to set a maximum content width. I find that 40-45 em is my preferred width in general. Among fbreader's frustrations: I cannot define a stylesheet, though I can apply a selected set of styles (defining margin widths, e.g., but not the _text_ width, which is frustrating). The book I've presently got loaded is either right or center justified -- the left margin is ragged, again, frustrating. And text doesn't advance on a <space>, unlike virtually any other Linux pager.

If calibre readily supported alternative clients, I'd be a lot happier with it.

The ability to include / reference / convert Web content would be somewhere north of awesome. There's still a large amount of information online that I reference, but would prefer to archive or cache locally, and/or convert to more useful formats (usually ePub or PDF).

Optimizing viewing experiences for wide-format, vertically-challenged screens would be hugely useful. 16:9 display ratios mean vertical space is at an absolute premium. Most PDF viewers are utterly brain-dead in this regard (evince, for example, requires four manual repositionings to view a typical 2-up document). The Internet Archive's BookReader does an excellent job of treating content positioning and paging as two separate functions. I strongly recommend taking some UI notes from it. https://openlibrary.org/dev/docs/bookreader

Alternatively, the old 'gv' ghostscript Postscript and PDF reader will page through documents in a highly sensible fashion: top-bottom, left-right. Why this was achieved in 1992 while PDF readers of the subsequent 22 years have utterly blundered in this regard escapes me.

That said, I'm looking forward to this showing up in Debian's repos (I've got v1.25 presently).



1. http://www.reddit.com/r/dredmorbius/comments/256lxu/tabbed_b...

skant 1 day ago 2 replies      
The author of Calibre claims: "In my opinion, calibre's graphic interface is damn good" [1]

I don't think the author is going to make any strides towards improving/changing the UI


llasram 13 hours ago 1 reply      
In 2008-2009 I was probably the second biggest committer to Calibre (still #4 according to GitHub), focusing entirely on the conversion pipeline and format support. I'm still proud of the OEB modeling as some of the finest OO code I've written, or probably will write now that I've moved on to functional.

For everyone complaining about the UI and management functionality, realize that you are not the target audience. Head over to www.mobileread.com, look at the Calibre forum and the praise Kovid gets, and you'll see that he's largely catering directly to what his core users want.

It is interesting that Calibre and mobileread are still around, and relatively little changed. I lost interest and moved on once pretty much every commercially-available e-book became available in EPUB format. What's left is a very, very specialized core of enthusiasts.

dredmorbius 3 hours ago 0 replies      
Can someone point to a good Calibre tutorial?

My use-case: I download material in various formats from online, mostly in PDF, ePub, or some markup format (LaTeX, Markdown, HTML, etc.) I've got a large set of downloads, which I then try to import into Calibre. This is in support of a large research project.

1. It's difficult to tell what I've imported and what I haven't.

2. The import process itself is slow. Enough so that I'll fire it up, get caught up in other stuff, and ... well, tend not to get back to it.

3. The corpus is fairly large: around 1,000 books and papers, plus another 5,000 pulled from web archives.

4. Tracking this by metadata is crucial. Title, author, publication date, and tags. Managing _that_ is a headache on its own, especially adding metadata to works / confirming automatically extracted content is accurate.

5. Once I've got the information organized, reading, referencing, annotating, and other tasks should be supported.

Again: calibre is about the only tool out there I'm familiar with, but it's a pain. Zotero and various LaTeX bibliographic tools are also of some use.

marianminds 1 day ago 5 replies      
One thing that really sucks still is the conversion of PDFs (for e.g. journal articles) into formats suitable for e-ink readers. I've tinkered with its heuristic processing and regex formatting, but I'd never considered manually touching up the final .epub as it comes. If their ebook editor is any good I might start reading journal articles again.
maxerickson 1 day ago 2 replies      
Does it still refuse to index without managing?

A quick glance at the documentation says yes.

nebulous1 1 day ago 3 replies      
I don't suppose he's backtracked on his awful position on auto-updates?
holychiz 1 day ago 1 reply      
so much hate for the UI! personally, it works and works well for its intended purpose. To me, it's even intuitive at times. By that virtue, it's already better than 90% of software out there, free or not. Can it be better? sure, like everything else in life. Now that I know that the dev is abrasive from other HN comments, I've got even more respect for him, because of the heavier load he has to carry. :)
bowlofpetunias 1 day ago 2 replies      
I have a love-hate relationship with Calibre. As a way to manage my ebooks, and especially overcoming the insanity that is DRM, Calibre is a lifesaver. I wouldn't even be buying ebooks if it wasn't for Calibre. (I only bought a Kindle after making sure I could crack the DRM and actually own the books I paid for.)

However, the user interface of Calibre is one of the worst I've ever encountered. It looks and feels like a teenagers first attempt at creating a desktop software prototype back in 1995. (Having to go to the website to download and install every single new minor release also feels like something from a bygone era.)

I donate to Calibre because I need it to continue existing, but I have no love for it.

jrvarela56 1 day ago 1 reply      
Does anyone have a decent webapp to replace this? Interested in building one since I haven't found a viable option. As of now I use Calibre and set my folder to Dropbox so I can access books.
yuribit 1 day ago 1 reply      
Is there some Calibre plugin to convert scientific articles with math formulas to epub or mobi?
necrodome 1 day ago 0 replies      
I am trying to find a solution to ditch Calibre (at least for library management) completely, and with the advent of cheap Android e-ink devices, this seems more possible now. A simple app that communicates with a web backend to manage my library on such a device would be enough.

One recently released such device is the Boyue T62 (http://www.banggood.com/Boyue-T62-8G-Dual-Core-6-Inch-WIFI-A...). Here is an overview (the review is for the same device, just rebranded and with previous-generation specs): http://blog.the-ebook-reader.com/2014/08/11/icarus-illumina-...

You also get much better pdf reading capabilities with these devices.

Until the next generation displays for reading come into play, these look much better overall than kindle, nook, etc.

Why I'll Never Tell My Son He's Smart
240 points by eroo  1 day ago   63 comments top 26
ChuckMcM 1 day ago 8 replies      
I've had mixed thoughts about this over the years. Sadly you only get to raise your kids once; you can't try other scenarios and see if there is a better path.

Highly verbal kids, and that is generally kids who read a lot, will be told they are smart whether you do it or not. And if your child's teachers are telling you how smart they are, and they ask you "Dad, my teacher said I'm really smart, do you think I'm really smart?" you'll have to decide what the narrative is.

That said, it's great to reward struggle rather than success and to emphasize that it is through failure that we value succeeding. Everyone I know who shielded their children from failure has struggled later with teaching them how to cope with failure. That isn't scientific of course, just parents swapping horror stories, but it has been highly correlated in my experience. Putting those struggles into the proper light is very important.

A less obvious but also challenging aspect of this though is that you must teach your children that natural skillsets don't determine their worth. You are good at maths but lousy at sports? Makes you no better or worse than someone with the opposite levels of skill. That is much harder as kids are always looking for ways to evaluate themselves relative to their peers. If you endorse that you can find yourself inculcating in them an unhealthy externally generated view of self worth.

jwmerrill 1 day ago 2 replies      
> Dr. Carol Dweck... has found that most people adhere to one of two mindsets: fixed or growth.

I'm sympathetic to Khan's overall POV here, but "research says there are basically two kinds of people..." always tickles my skepticism antennae.

Claims like this are so often overstated by researchers to punch up an abstract, and then so often simplified further in uncritical 3rd party reports that I wouldn't bet a sandwich on the truth of any such claim without seeing the data for myself. C.f. the widely believed and largely unsupported claims about learning styles.

Would be nice of Khan to link to the publications so we could decide for ourselves.

toehead2000 1 day ago 5 replies      
I think there's a flip side to this, too, though. Being told you're smart, or good at math, or whatever, can be a motivator. It can encourage you to seek out and develop that talent, and also to persevere when things are difficult. At least for me, personally, when faced with a tough math concept I would think "well I've been told all my life I'm good at math and I've been pretty good up until now, so I'm sure I will be able to figure this out."

Giving negative motivation to a kid, saying "you're stupid," is recognized to sometimes be a self-fulfilling prophecy. There's no reason that "you're smart" can't work in the same way. I would not be surprised if a lot of this phenomenon of children being negatively motivated from positive feedback ends up having a different explanation than the one posited here.

jkimmel 1 day ago 0 replies      
I found this to be very insightful, as I am not familiar with the cited research. It brings to mind my own memories of growing up, and how being told how "smart," I was could actually act as a hindrance.

As the article notes, I was only praised when I got a correct answer, or used a big word without stumbling. In one particular memory, I am afraid of taking a new mathematics placement test in school -- not because of the difficulty, but precisely because I had gotten a perfect score on the last one. There was no room to grow, if I didn't get them all right again, would that make me not "smart?"

Very simple changes in the language we use with young children could possibly avoid that kind of anxiety in bright youth.

kiyoto 1 day ago 2 replies      
I find this campaign/propaganda dangerous.

I only know of Japan and the US, but as someone who went to one of the most prestigious secondary schools in Japan and universities in the US, I have seen well-educated, smart people with "growth mindsets" struggle later in their lives.

1. Regardless of what we say, in many corners of adult life, results are valued over processes. While a superior process has a higher likelihood of yielding a superior result, this is often not the case, and in a perversely Murphy's law-esque manner, it turns out to be false at critical junctures of one's life. And the deeper the growth mindset is ingrained into you, the more disappointing/despairing you find the situation, and the more you feel incapacitated and betrayed. Of course, a singular emphasis on results with no consideration for process is equally bad. Most people find their own local optimum between the two extrema, and I don't see how a campaign towards one end of the spectrum is all that meaningful or worthy.

2. This probably sounds terrible, but not everyone is "smart" as measured by academic performance. Certainly effort is a huge part of the equation, but some minds are better wired for academics than others. And the longer you work at it and hence surround yourself with qualified peers, the more apparent it becomes that not everyone is working equally hard. This realization usually doesn't mesh well with the emphasis on process from one's formative education, and many people become jaded/hopeless. (And of course, even within academic subjects, there are individual variances). While it is important to try, it is also the responsibility of educators (and adults) to see if the child's potential lies somewhere else, or to borrow Mr. Khan's words, to see if the child can be tenacious and gritty about something other than academics.

AnimalMuppet 1 day ago 0 replies      
Here's the other side: Kids are often cruel. Kids say demeaning things to other kids. One of the frequent ones is "You're stupid". And some kids are more emotionally fragile than others. I don't want my fragile kid to hear "You're stupid", perhaps frequently, without it being countered by affirmation that she is not in fact stupid.

But I also don't want that to be the equivalent of "participation awards" in Little League. For it to be of any real value, it has to go with teaching her how to actually think.

davidgerard 11 hours ago 0 replies      
My 7yo is really obviously smart and she knows it - top of class in everything. But so are her parents. So we're hammering home that smart is not good enough and you have to learn to do things, acquire skills.

(Her mum is an excellent role model in this, 'cos she's basically competent in a dizzying array of small skills. "If you want to be good at everything like Mummy is, this is how you learn it!")

Basically the hard part is capturing her interest. Anything she's interested in, she will absolutely kill. Anything she's not interested in, she won't bother with. That bit she gets from me ...

It also reminds us to set a good example: learn things and do them. Because it doesn't matter what you say, it's the example you present.

That said, I was most calmed by the many, many studies that show that, as long as you don't actually neglect the kid, they'll probably turn out how they were going to anyway. So helicopter parenting really is completely futile.

We've caught her at midnight reading books more than once, so I'll call that "huge success" ;-)

brudgers 10 hours ago 0 replies      
The article makes me sad.

What makes me sad is the idea that not telling a child she's smart is justified so that the child will meet the parent's expectations. Telling a smart child they are smart is honest and kind and humane. I believe that in the long run the attitudes toward honesty and humility and empathy are the most important things I instill as a parent.

Some things are easy for smart people and not acknowledging that as a factor in my child's successes would be dishonest when discussing those successes. It is akin to not acknowledging that a pitcher of cold Kool-Aid is not the product of economic circumstance.

Some success comes from pure good fortune, some comes from just showing up, and some comes from hard work. Talking honestly about when and how each plays a role is my job as a parent. I hope my child develops the ability to distinguish challenge from a checklist of busy work.

It's not either or. A child can understand that some successes come because the task is easy for them. Others will come from hard work. They can tell the difference between watching an addition video and earning an orange belt.

That said, my standard for good parenting is forgiving. Just trying to do a better job than one's own parents is hard enough. My parenting advice, for what it's worth, is to treat children as autonomous moral agents, fully capable of making intelligent decisions and able to learn from mistakes. Talk with them honestly as such and avoid deceit even when they are small.

Because that is when the foundation for their life as a teenager and adult is laid.

gaelow 9 hours ago 0 replies      
Regardless of any received training, smart people don't usually struggle as much as normal people when they are presented with a new, different kind of problem. That's something you cannot learn.

Even a kid's brain will not "grow" more or less depending on what kind of stimulus he is exposed to. But it doesn't mean it's bad to reward and compliment your kid for struggling and working hard instead of just being naturally good at something. It helps the child to build a character and face problems instead of giving up. The article is right about that.

There are also many ways to get better access to the full capacity of your brain. It's not like the movie "Lucy", but many conditions may prevent you from using it to its full potential: age, injury or illness, sleep deprivation, stress and exhaustion, lack of nutrients, drug abuse and chemical imbalances, etc. Some of those factors present problems that can be treated or even prevented, and you will (most of the time) function at the same cognitive level as a careless smarter person.

Also, the fact that there is no way you can alter your intelligence without altering your DNA doesn't mean you can't use it to discover and apply better problem-solving patterns for a particular discipline, making yourself effectively smarter.

yodsanklai 19 hours ago 2 replies      
I don't have kids, but I think I would tell them the truth. First, it's difficult to define "smart" as it's a conjunction of many skills. But even for one given skill, you may be the best in your class or your school, but there are likely millions that are much better than you. No need to worry too much about where you lie; just try to do the best with what you have.

> "Researchers have known for some time that the brain is like a muscle; that the more you use it, the more it grows. Theyve found that neural connections form and deepen most when we make mistakes doing difficult tasks rather than repeatedly having success with easy ones. What this means is that our intelligence is not fixed, and the best way that we can grow our intelligence is to embrace tasks where we might struggle and fail."

I wonder to what extent this is correct. Sure, it would be nice if it was the case. It's a nice myth that anybody can achieve anything with the proper amount of work. I see it all the time in fields such as maths or music. Some people are naturally so much better than others that even a lifetime wouldn't be enough to catch up.

Aerospark 20 hours ago 0 replies      
This is why Salman Khan is one of (if not) the greatest teachers of our generation. When I read this article, I remembered when Khan Academy first started... it was the first attempt to make good education free and easily accessible, exactly the way it should be. Hats off to you sir, thanks for another great lesson I will teach my kids some day :).
edpichler 1 day ago 0 replies      
"The Internet is a dream for someone with a growth mindset."

Exactly what I feel. Days are becoming too short for such amount of interesting things to do and to learn (Hacker News, Quora, Designer News, Coursera, Khan Academy, TED, Project Guttenberg... the list is long, and it's growing...)

dalek2point3 1 day ago 0 replies      
Aaron Swartz introduced me to Dweck. It has been an integral part of my life ever since: http://www.aaronsw.com/weblog/dweck
eroo 1 day ago 0 replies      
I wasn't aware of this research. Reflecting on my own schooling experience, however, there is something pleasantly intuitive about it.

I'm always impressed with Salman Khan's work.

BrandonMarc 22 hours ago 0 replies      
One of the first Aaron Swartz essays I read (first of many) was on this very topic. He gives great details about how she experimented with children and games, and how their mindsets manifested themselves, and how she came to her conclusions about fixed vs growth.


To me, the possibility that anyone can move from fixed to growth is astounding [1] ... that fact itself positively brims with the possibilities it opens up, if only a person can realize they're not stuck and they can expand their horizons.

Khan's description of "interventions" is interesting.

[1] I also suspect the converse is equally possible, given the right circumstances ... which is worth keeping in mind, I 'spose.

tokenadult 1 day ago 0 replies      
A readable popular article about this research, "The Effort Effect," was published right after Professor Carol Dweck moved her research base from Columbia University to Stanford University.[1] And Dweck has written a full-length popular book, quite readable and helpful for parents, called Mindset: The New Psychology of Success[2] that I recommend to parents all the time.

[1] http://alumni.stanford.edu/get/page/magazine/article/?articl...

[2] http://mindsetonline.com/


blazespin 1 day ago 0 replies      
Great article, lousy title. The point he was making is that smart people are those that appreciate learning more than knowing. The reality is life is very much that - successful people everywhere are those who are always willing to push themselves beyond their comfort zone.
phaet0n 1 day ago 1 reply      
There is a sort of analogue to this: parents praising their children as beautiful/pretty or brave/strong. Both vacuously reduce the child's ability to reflect genuinely on their strengths and their source of self-worth. Beauty (or the appreciation of it) becomes reduced solely to the physical (and external), and courage reduced to dare-devilism/ego-centrism instead of the appreciation of fear and acting to overcome it.
spiritplumber 20 hours ago 0 replies      
I kept being told "You're smart/gifted" when I did something clever, and "You need to try harder" when I didn't. Left me with some self esteem issues.
riffraff 19 hours ago 0 replies      
so pardon my natural question: was Dweck's work replicated?

We have believed for decades in the stanford prison experiment, and it was faulty.

ngokevin 1 day ago 1 reply      
This is common knowledge by now. I didn't even have to read the article, just skimmed it. These types of articles really cater to people who were often told they were smart when they were little, or found school to not be difficult.

I've even seen uneducated mothers state this while playing poker: "Yeah, I never tell my son he's smart. I congratulate his hard work instead, because it changes his mindset."

QuantumChaos 22 hours ago 0 replies      
While I wish the best for all children, I feel like this kind of discourse has a negative effect on the very intelligent. By downplaying the significance of intelligence, it trivializes the gifts of the truly intelligent and places an excessive emphasis on the virtue of hard work. I see the claim on HN all the time that hard work beats intelligence. But I have never really worked that hard; I just have an extraordinary ability in mathematics.

When I was a child, I was told that I was very smart (which I was) and pressured to fulfill my potential. Other children may be pressured to be hard working and studious. I would rather celebrate people who are naturally gifted, and also people who choose to work hard. What is important is that people's actions arise naturally from their own desires, not from external pressure or manipulation.

Swizec 1 day ago 2 replies      
> Fixed mindsets mistakenly believe that people are either smart or not; that intelligence is fixed by genes. People with growth mindsets correctly believe that capability and intelligence can be grown through effort, struggle and failure.

Can intelligence be gained though? I agree that skill can only be gained through effort/practice/etc. But intelligence ... isn't intelligence more like a natural talent than something you can gain?

Much like you can't just train yourself to have a beautiful singing voice or big boobs or absolute pitch hearing, I don't think you can train yourself to be more intelligent. Smarter, yes, intelligenter, not really. It's a talent, not a skill.

kolev 1 day ago 1 reply      
So, I should rather lie?
MisterBastahrd 1 day ago 0 replies      
If my kid is smart he'll figure it out for himself.
Biologically extending human vision into the near-infrared: Initial success
226 points by irollboozers  2 days ago   60 comments top 14
JacobAldridge 2 days ago 1 reply      
Link to the original Project Page, if (like me) you're playing catch up on the experiment:


gus_massa 1 day ago 1 reply      
Copy of a comment I made on a previous submission, a few hours ago: https://news.ycombinator.com/item?id=8207152

Well, the data is very noisy. The main problem is that there is no before/after comparison. Is the 850nm light visible now, or was it always visible?

It's also very difficult to make a fair comparison. The room must be the same, the light sources must be the same (a new coffeepot with a small led can ruin the experiment, removing a coffeepot because it has recently broken can ruin the experiment).

For a preliminary experiment, the before/after comparison is enough. For a serious experiment you need many volunteers, compare all of their before/after signals at the same time under the same experimental conditions, and use double-blind testing.

There is a small possibility that they are measuring "excitement" instead of light. The subject hears that they are now going to test with very near infrared light. He gets excited. They measure that. Perhaps the flash makes a slight sound, perhaps the light operator makes a slight sound. (Perhaps the 850nm flash makes a sound that the other flashes don't?)

specialp 1 day ago 2 replies      
This could certainly be possible. Jay Neitz did experiments on monkeys to cure colorblindness using gene therapy and was successful. [1] He has said that perhaps one day humans can have genes for more color receptors added to be able to see more colors as some birds do.

1. http://www.neitzvision.com/content/genetherapy.html

leoc 1 day ago 3 replies      
diziet 2 days ago 1 reply      
I'd love to see the ERG readings for more experiments, and before vs. after at 950nm.
dedward 2 days ago 1 reply      
I seem to recall an article from some years back about someone using welding goggles with multiple layers of a specific blue filter on a very bright day and being able to see near-IR, or something darn close to it.
qwerta 1 day ago 0 replies      
My astronomy friends are into hard-core star gazing. One experiment was on La Palma, with a near-perfect night sky at 8,000 feet. One guy could see 8.1-magnitude stars in 80% of cases (independent stats). With oxygen and some training he would probably get to magnitude 8.5.

There are similar stories with sound etc. I think some people can see near infrared, it is just question of finding them.

tdaltonc 1 day ago 0 replies      
When is the flash on and when is it off in these plots? What would these plots look like in a control subject? Does the subject have any other indication of when the flashes are occurring?

I know that this isn't written to be read critically, but I don't know what the take-away is.

sigil 2 days ago 4 replies      
What an interesting experiment. Could there be some basis, after all, to the urban legend that eating carrots improves night vision? Carotenes are "partly metabolized into Vitamin A" [1], but this experiment is skipping the precursors and going straight for what I assume are large and exclusive doses of Vitamin A. Can it really be that no one has tried this before?

Related and probably equally silly idea: I've always wanted a pair of sunglasses that could tune in to different EM spectra. How far are we from that? Night vision goggles are bulky because they need external power to do the frequency shifting, right?

[1] http://en.wikipedia.org/wiki/Carrot#Nutrition

tylermenezes 2 days ago 0 replies      
What's really cool is that this entire project was done for under $5k!
_greim_ 1 day ago 1 reply      
> near-infrared

So, still red then?

Garbledup 2 days ago 0 replies      
Through technological enhancement[1] or practice, it seems that anyone can make an attempt at monitoring and responding to these frequencies.


TTPrograms 2 days ago 0 replies      
Those plots really need labels.
userbinator 2 days ago 0 replies      
This is particularly relevant given that there's been a recent trend of interest in thermal imaging cameras... of course, the range of those is in much longer wavelengths.
Serendipity When 2 people listen to the same song at the same time
224 points by gflandre  2 days ago   65 comments top 21
eatitraw 2 days ago 2 replies      
Warning: it starts playing music automatically. Be careful to adjust your volume so you don't bother anyone around you.
moskie 2 days ago 2 replies      
While this data is interesting enough on its own, the map animations are even cooler. The transitions from one location pair to the next are mesmerizing, and can provide some really cool perspectives of the globe. Great execution on that.
ZeroGravitas 1 day ago 2 replies      
ahnberg 2 days ago 1 reply      
Pro-tip: press space whenever you hear something you like and the song will continue playing, and you have a good chance to catch it. Also, clicking anywhere on the screen (while paused or not) takes you to the active song in Spotify.
billmalarky 2 days ago 0 replies      
Kyle McDonald has a lot of cool projects. Check out this one http://vimeo.com/29348533 using the open source Facetracker library (built by Jason Saragih and maintained by Kyle).

Pretty friendly guy, helped me out via email with some questions I had when I was playing around with facetracker.

paul9290 2 days ago 2 replies      
Cool, how about 2 or more people listening to the same audio in sync on their different Internet devices, Spotify?

Together creating a stereo system with friends, or with those in the crowd around them.

Anyone else interested in such a feature?

iLoch 2 days ago 2 replies      
I'm finding the pause button doesn't react in time for me to catch the song most of the time. By the time I realize I like what I'm hearing it's already too late to pause it. An adjustment for time per song would be great.
shalmanese 2 days ago 2 replies      
Every song selected was in English. I'm not sure if this is because it's built so only English songs show up or because English has become the defacto global language for music.
cheshire137 1 day ago 0 replies      
Very annoying that it started playing music by itself.
aparadja 2 days ago 2 replies      
Based on a few minutes of observation, Ed Sheeran is the most popular artist in the world.

Is there any sophistication behind the sound clip selection? Just a certain static point in each song, or some kind of algorithm to get to a recognisable part?

PaulJulius 2 days ago 2 replies      
As the music started to play I reached to pause the music I already had playing, but then I realized that it was paused automatically by Spotify. That's a pretty cool feature that they have - very well integrated.
netvarun 2 days ago 1 reply      
Off-topic: Could the admins change the link to point to the final, redirected url - https://www.spotify.com/us/arts/serendipity/ ?

Anyways, fantastic execution! Great visualization. My only super-minor complaint is the fade in/fade out could be a little less abrupt when the songs change :)

owenversteeg 1 day ago 0 replies      
If you want to keep a track playing, just hit ctrl-page up to switch to another tab. It'll play for the track's full 30 seconds.

Or you can click the background to listen to it on play.spotify.com.

theworst 2 days ago 0 replies      
I've always thought this would be cool for e.g. a cross-country team, or friends running together, to have.

Imagine incorporating a PA system so a coach could talk to and track his athletes on all their training runs...

madaxe_again 1 day ago 0 replies      
I call this phenomenon "radio".
Trufa 2 days ago 0 replies      
A missing feature would be to be able to see a list of the songs that were played, I missed a couple of songs I'd want to listen a little bit more.
huuu 1 day ago 0 replies      
Listening to this made me realize how much music is compressed nowadays. I think this is a bad trend.
Grue3 1 day ago 0 replies      
That's nothing, last.fm would show who is listening to the same song you are listening to right now.
cmstoken 2 days ago 1 reply      
Wow, what an awesome project. Would be cool to read how it was made.
ris 1 day ago 0 replies      
Jesus, people have an awful taste in music.
taeric 2 days ago 1 reply      
Neat and all.... but with the beauty of "radio" one could basically light up a crapton of folks in any given area all listening to the same song.

I remember back when I delivered pizzas, it was not uncommon for most of us drivers to all be humming the same song as we are getting stuff inside, since we all listened to the same stations.

If it was a really good song ending as I got back to the store, it was not uncommon to find that I waited it out in the parking lot along with at least two other drivers. :)

The Chairless Chair, an invisible chair that you can wear
219 points by 51Cards  3 days ago   67 comments top 25
jasonkester 3 days ago 2 replies      
A few hundred years late, and a bit on the expensive side compared to its competition:


I built one of these for backpacking trips for a little less than $5 for a wooden disk, a couple plumbing bits and a clip belt. Silly looking when you're walking around with it, but infinitely better than sitting on a wet log next to the campfire.

bane 3 days ago 8 replies      
This appears to be something that's solved by the Asian squat. It's not fashionable in the West to do this, so we don't build up the flexibility and tendon strength to do it comfortably. But if you can train yourself to do it, you can do it for hours. It's basically the human default "sitting" position.


wuliwong 3 days ago 1 reply      
My buddy has been pitching the idea of "chair pants" to me for a decade at least. Glad to see someone finally executed on this. :)
tofof 3 days ago 1 reply      
Sure playing fast and loose with "invisible", aren't we CNN?
drcode 3 days ago 0 replies      
It's gotta suck or be vaporware, given that not a single video of the device in action exists anywhere on the internet... only cheesy renderings and stills.
ChuckMcM 3 days ago 1 reply      
Interesting concept. I used to carry a 'nada chair'[1] in my backpack when hiking, I suspect this is much heavier though. I could totally see it as a huge win for folks who had to stand while customers were around (think the guy selling food from a cart on the street). If you motorized it so that it helped older people stand up then it could be a double win for them.

[1] http://www.gingerbreadshows.com/nadachair/

anigbrowl 3 days ago 0 replies      
I expected the worst from the headline but this is actually pretty neat. Seems like it could improve safety as well in production environments where workbenches or sit stools could present a trip hazard (eg if you're manhandling large objects that obstruct your view of the ground).
iovar 3 days ago 1 reply      
It looks really uncomfortable to me.

Straps holding it in place, only two pads beneath your butt and no back support.

Maybe in a two minute demonstration it's ok, but wear it all day and I bet it will feel like a jail.

As for the assembly-worker video example, it doesn't seem like a well-thought-out use case.

Why not use a stool? Cheaper and might have some back support.

Also, off the top of my head:

What happens if you forget and lean backwards, even a bit?

How much time does it take to put it on and take it off?

How easy is it to put it on in a slightly incorrect manner and twist and break your leg?

josephschmoe 3 days ago 0 replies      
This sounds really useful. If it's less than $100, I could see it selling very well outside of a convention.

If a future iteration is cheap, light and can be worn below clothes, I could really see this catching on.

kalendae 3 days ago 0 replies      
found a 2008 article on similar honda 'legs' http://www.wired.com/2008/11/honda-announces/
swombat 3 days ago 0 replies      
Awesome invention. If they can make it even less intrusive people might start wearing it in daily life. Not sure how to feel about the "Chairolution" slogan, though.
jonknee 3 days ago 0 replies      
We've come full circle--from a standing job to a desk job to a standing desk job to a standing job where you can sit on your pants.
colordrops 3 days ago 0 replies      
This seems to be a stepping stone to widespread fully powered exoskeletons. I could imagine the next version of this supporting lifting capability.
haversine 3 days ago 0 replies      
I bet pregnant women would like a version of this which supported their body weight sporadically throughout the day, especially in the late third trimester.

Hell, I want one to help me do the dishes. There's somewhere a stool wouldn't make much sense.

Shivetya 2 days ago 0 replies      
Question: if standing desks are so great, then why is standing at work considered bad? The only reason I can come up with is that with a desk you can set its height.
mivanov 3 days ago 1 reply      
m-app 2 days ago 0 replies      
These guys are definitely streets behind:


stang 3 days ago 1 reply      
Invisible chairs have been around for a long long time: http://i.imgur.com/tIIBKCY.jpg
reddog 3 days ago 0 replies      
This would perfect with my standing desk!
2810 3 days ago 0 replies      
I think it can be even more invisible: www.youtube.com/watch?v=DkmkGFHTjRg
allochthon 3 days ago 0 replies      
A tiny step closer to mass-market robotic exoskeletons.
EGreg 3 days ago 0 replies      
I remember when I was singing in the Juilliard pre-college chorus as a kid, we had to stand for hours on end in a concert. I wanted to make pants that go above the knees and lock into place, so they would support me while standing. So I wanted to build a low-tech version of this when I was a kid :)
wehadfun 3 days ago 0 replies      
Seems like it would hurt.
trhway 3 days ago 0 replies      
Last slide suggests that RyanAir would love it.
fnazeeri 3 days ago 1 reply      
Sitting is the new smoking. This thing is like the e-cig of chairs...
The Harvard Classics: Download All 51 Volumes as Free EBooks
230 points by yammesicka  2 days ago   47 comments top 15
themodelplumber 1 day ago 0 replies      
Beautiful. Thank you for the reminder that these books exist. I read some of these books years back, and I still treasure the experience. I had a terrible job that started at 6:30 a.m. where by some miracle people kept assigning me tasks that could be automated, so I was about a month ahead on all of my work. In the early mornings I would read from these books on a Dell Axim that was propped up above my keyboard, next to my propped-up reversed CD-ROM disc.

One book that's not part of the collection but that I would recommend to the people here on HN is "James Nasmyth, Engineer: An Autobiography": http://www.gutenberg.org/ebooks/476

Here's a bit from a "coding interview" that went well for him:

"I carefully unpacked my working model of the steam-engine at the carpenter's shop, and had it conveyed, together with my drawings, on a hand-cart to Mr. Maudslay's next morning at the appointed hour. I was allowed to place my work for his inspection in a room next his office and counting-house. I then called at his residence close by, where he kindly received me in his library. He asked me to wait until he and his partner, Joshua Field, had inspected my handiwork.

I waited anxiously. Twenty long minutes passed. At last he entered the room, and from a lively expression in his countenance I observed in a moment that the great object of my long cherished ambition had been attained! He expressed, in good round terms, his satisfaction at my practical ability as a workman engineer and mechanical draughtsman. Then, opening the door which led from his library into his beautiful private workshop, he said, "This is where I wish you to work, beside me, as my assistant workman. From what I have seen there is no need of an apprenticeship in your case."

He then proceeded to show me the collection of exquisite tools of all sorts with which his private workshop was stored. They mostly bore the impress of his own clearheadedness and common-sense. They were very simple, and quite free from mere traditional forms and arrangements. At the same time they were perfect for the special purposes for which they had been designed. The workshop was surrounded with cabinets and drawers, filled with evidences of the master's skill and industry. Every tool had a purpose. It had been invented for some special reason. Sometimes it struck the keynote, as it were, to many of the important contrivances which enable man to obtain a complete mastery over materials."

Anyway, a pretty fun, educational book for someone with that mindset.

devindotcom 2 days ago 2 replies      
Great to have these, but if you're interested in a classic, five minutes' research will save you a lot of pain. The wrong translation can put you off a book or author for life, and a bad edit, abridgement, or lack of notes can render a work incomprehensible or weak.

Just take a second to look up whether there are any modern translations that might be up your alley, or whether you prefer accuracy over readability, or what have you.

wtbob 1 day ago 4 replies      
> It was in 1909, the nadir of this milieu, before the advent of modernism and world war, that The Harvard Classics took shape.

I think he means zenith, not nadir. 1909 was the high point of human civilisation, before barbarism and ugliness took hold.

Also, not covering Freud, Nietzsche & Marx was no mistake: this is a collection of lessons to learn, not lessons to learn from.

scottcha 2 days ago 2 replies      
I've owned the entire collection, including the Shelf of Fiction. The main thing to consider is that these are good resources for works originally written in English, or works that are hard to find. For works that have been translated, there are usually much better translations available (and worth paying for).

Very glad to see these freely available though.

LaSombra 1 day ago 1 reply      
Just wrote a dirty Ruby script to download them. https://gist.github.com/lasombra/a489f715985715663595

P.S.: This is my first Ruby script. I'm still learning it.

atmosx 1 day ago 0 replies      
There's also the Gutenberg project[1] which offers a huge variety of classics for free in (almost) every format.

[1] http://www.gutenberg.org/

walterbell 1 day ago 0 replies      
Archive.org has many out-of-copyright books, but there is little support for discovery of "related books" or "all books in a multi-volume series". Sorting by download count within categories is a start, for example:

https://archive.org/search.php?query=mediatype%3A%22texts%22... will lead to The Cambridge History of ___ (geography or topic, e.g. Literature, India) and The Cambridge ___ History (time or topic, e.g. Ancient, Medieval, Natural). Each of these titles is several volumes, 500-1000 pages per volume, covering centuries of events from a British perspective.

German Classics, https://archive.org/search.php?query=subject%3A%22German%20l...

Eastern Classics, https://archive.org/search.php?query=subject%3A%22Oriental%2...

arethuza 1 day ago 0 replies      
As a Scot, I was pleasantly surprised to see Robert Burns on the list, but digging around it looks like Burns was a keen supporter of the American Revolution and even wrote a "A Toast for George Washington":


mynameishere 2 days ago 1 reply      
Well, that's one ghastly website you pointed to. I have a physical edition of the Harvard Classics, and it's mostly boring stuff and speeches and political documents that are sufficiently summarized in other contexts (history books, Bartlett's, etc). One book that is worth reading is this (free):


minopret 2 days ago 1 reply      
Whoever would like to improve that list at gutenberg.org can follow directions on the site to get access to edit it. I hope they will.

I was glad to see that some like that page. I was actually the one who grabbed that list of contents from Wikipedia, requested access to edit Project Gutenberg's "Bookshelves" wiki, and added the links there to the Project Gutenberg versions of many of the selections. It was fun and not hard.

ensignavenger 2 days ago 1 reply      
My wife and I recently purchased this entire set, excepting books 1 and 5, at our local library book sale. We are now looking for the missing volumes, so if anyone happens to have them laying around, and would like them to go to a good home, get in touch :)
shimshim 2 days ago 0 replies      
I've spent a few years searching these out for fun in second-hand shops and used book stores, avoiding online simply for the thrill of trying to find them on the street. This is fantastic that they are available for free download now!
Paul12345534 2 days ago 0 replies      
Once upon a time when I was first learning to program, I wrote a Python script to download them from bartleby.com and make them into nice CHM files :) some good stuff
ChuckMcM 1 day ago 1 reply      
And if there was ever a testament to why Copyright should expire for the public good, this is it.
garric 2 days ago 4 replies      
It's fine to read these for the literature and/or a peek into how earlier people saw their world, but beware of ideas whose underpinnings are still touted as fact, such as those from The Wealth of Nations. Adam Smith may have been among the intelligentsia of his time, but he made claims far outside his expertise which have long since been shown to be fantastical imaginings. And if you've ever seen A Christmas Carol, you'll have passing familiarity with the debtor prisons and Irish potato famines justified by classical liberalism (the economic theory, not to be confused with the popular modern term). Since about 1980, with Reagan in the USA, Thatcher in the UK, and Deng in China, that tradition has become neoliberalism (which contains a justification for neoconservatism, so let's not get partisan about it), and it isn't any better, for reasons I won't currently go into. (Crosby, Harvey)

For instance, Adam Smith argued that barter was an inefficient way to make transactions because it required a double coincidence of wants between both parties. Never mind that communities simply didn't function this way, instead giving what they had now in a system of credit rather than debt. This is one of many examples undermining Smith's ideas, so be careful if you decide to read such books. Unless your degree concerns historiography, your time would be much better spent elsewhere. (Graeber)

Smith is easy to debunk, but ideas contained within many classical novels provide popular justification for cultural imperialism. They're not so easy to address. (Said)

On bananas and string matching algorithms
196 points by cjbprime  14 hours ago   45 comments top 5
zackmorris 11 hours ago 3 replies      
I've seen some very strange things in my career. I find posts like this delightful, because I can point to them and say "see, here is proof that even code written with the best of intentions can still have bugs."

Programmers tend to fall into (at least) two camps: the skeptics and the pragmatists.

Sometimes when I report a finding, programmers accuse me in one way or another of messing something up, because "that can't possibly be failing." Those are the skeptics, using incredulity almost like a shield to protect their worldview. They tend to take an up-close, what's-right-in-front-of-them approach to programming, are prolific, and usually take a positive stance on programming.

At other times, reporting a finding is met with resignation, almost like "please work around it, because we just don't need this right now." Those are the pragmatists, taking the long-view, forest-for-the-trees approach, knowing that programming is more than the sum of its parts, but also that it's a miracle it even works at all. They are the daydreamers and are sometimes perceived as negative or defeatist.

I was a pragmatist for as long as I could remember, but had a change of heart working with others in professional settings. I saw that certain things like databases or unix filesystems could largely be relied upon to work in a deterministic manner, like they created a scaffolding that helps one stay grounded in reality. They help one command a very mathematical/deliberate demeanor, and overcome setbacks by treating bugs as something to be expected but still tractable.

But here is one of those bugs where the floor seemed to fall out from under our feet. One day I mentioned that SSL wasn't working, and about half the office flipped out on me while the other half rolled their eyes and had me track it down:


The gist of it is that OpenSSL was failing when the MTU was 1496 instead of 1500, because of path MTU discovery failing and SSL thinking it was a MITM attack and closing (at least, that is how I remember it, I am probably futzing some details).

That was odd behavior to me, because I see SSL as something that should be a layer above TCP and not bother with the implementation details of underlying layers. It should operate under the assumption that there is always a man in the middle. If you can open a stream to a server, you should be able to send secure data over it.

Anyway, we fixed the router setting and got back to work. All told I probably lost a day or two of productivity, because the office network had been running just fine for years, so I discounted it for a long time until I had ruled out every other possibility. I've hit crazy bugs like this at least once a year, and regret not documenting them, I suppose. Usually by the time they are fixed, people have forgotten the initial shock, but they still remember that you are the one weirdo who always seems to break everything.

StefanKarpinski 12 hours ago 3 replies      
Really interesting post and good debugging work. A couple of take-aways:

1. This is one reason it's a good idea to use signed ints for lengths even though they can never be negative. Signed 64-bit ints have plenty of range for any array you're actually going to encounter. It may also be evidence that it's a good idea for mixed signed/unsigned arithmetic to produce signed results rather than unsigned: signed tends to be value-correct for "small" values (less than 2^63), including negative results; unsigned sacrifices value-correctness on all negative values in order to be correct for very large values, which is the less common case; here it would never happen, since there just aren't strings that large.

2. If you're going to use a fancy algorithm like two-way search, you really ought to have a lot of test cases, especially ones that exercise corner cases of the algorithm. 100% coverage of all non-error code paths would be ideal.
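
The unsigned-wraparound failure mode described in this thread is easy to reproduce. Here is a minimal C sketch (illustrative only, not the actual glibc code): the number of candidate start positions for a naive substring search, computed with unsigned lengths, wraps around to a huge value when the needle is longer than the haystack unless the subtraction is guarded first.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Candidate start positions for a naive substring search.
   With unsigned lengths and no guard, hay_len - needle_len wraps
   around when needle_len > hay_len, so the "count" becomes enormous
   instead of zero. */
size_t positions_unguarded(size_t hay_len, size_t needle_len) {
    return hay_len - needle_len + 1;   /* wraps if needle_len > hay_len */
}

/* Guarded version: compare before subtracting, so underflow is
   impossible. */
size_t positions_guarded(size_t hay_len, size_t needle_len) {
    if (needle_len > hay_len)
        return 0;                      /* needle can't fit at all */
    return hay_len - needle_len + 1;
}
```

For example, positions_unguarded(3, 5) evaluates to SIZE_MAX on a typical 64-bit system, while positions_guarded(3, 5) correctly yields 0.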

srean 6 hours ago 1 reply      
There are many comments discussing whether lengths should be unsigned or signed. There are arguments for and against. This is a prototypical carpet bump: you squash it in one place and it raises its head somewhere else.

I see it not as a question of whether lengths should be signed or unsigned, but of whether subtraction, assignment, etc. should be polymorphic w.r.t. signed and unsigned. I think the issue here is that the polymorphic variants of these binary operators are inherently risky.

Casting gets a little tedious, but languages that do not have operator overloading should disallow subtraction from unsigned values and subtraction that returns unsigned. You either cast up, or, if possible, reorder the expression/comparison so that the problem goes away. Even assignment can be a problem. OCaml can get irritating because it takes this view, but I think it is safer this way. It is very hard to be always vigilant about the signed/unsigned issue, but hopefully a compiler error will mitigate the risk; not completely, but it is better than nothing.

That leaves languages that allow operator overloading; in those cases, if you are overloading an operator you had better know what you are doing, and watch out for cases where the operator is not closed.
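
The "reorder the comparison" suggestion can be sketched in C (a generic illustration, not tied to any particular codebase): a remaining-bytes check written subtraction-first silently wraps when the offset has already run past the end of the buffer, while the reordered form rules that case out before subtracting.

```c
#include <assert.h>
#include <stddef.h>

/* "Are fewer than n bytes left after offset pos in a len-byte buffer?"
   Subtraction-first form: len - pos wraps when pos > len, so the check
   wrongly reports that plenty of bytes remain. */
int too_short_subtract(size_t len, size_t pos, size_t n) {
    return len - pos < n;              /* wraps if pos > len */
}

/* Reordered form: eliminate pos > len before subtracting, so the
   subtraction can never underflow. */
int too_short_reordered(size_t len, size_t pos, size_t n) {
    return pos > len || len - pos < n;
}
```

With len=10, pos=12, n=1, the subtraction-first form returns 0 ("enough bytes remain", which is wrong), while the reordered form returns 1.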

sitkack 11 hours ago 3 replies      
Why is this code so damn fancy? Shouldn't the fanciness be offset by proofs or extended testing? Open loop!
alayne 10 hours ago 1 reply      
I try to never look at GPL/LGPL code if I'm going to implement something at work or under another license.

Sorry for suggesting a good practice to avoid legal liability.

Hard Science About Diet
192 points by yaddayadda  1 day ago   211 comments top 20
bollockitis 1 day ago 9 replies      
If you haven't read either of Taubes's books, please do. I highly recommend Why We Get Fat[1]. It's a spectacular piece of scientific journalism. If that's too much for you, try one of his talks on the same topic[2].

When I first encountered the idea that we do not get fat from eating too much, and that calories weren't responsible, I thought it ludicrous: the body can't disobey the laws of physics! Thermodynamics! But after seriously thinking about the idea, I realized Taubes was providing a far more complete understanding of metabolism. The human body doesn't run on calories, it runs on food. Yes, we can easily learn the caloric content of food, but that's largely irrelevant. What's important is how food affects the body, not its raw energy content. I see this misconception time and time again, especially among smart people who like to reduce the human body to a merely physical machine, often ignoring the whole biology thing.

I think the hormone theory of obesity is correct and I think these studies will prove it. But even if they show otherwise, this type of research is long overdue and we all stand to benefit from the results.

[1]: http://www.amazon.com/Why-We-Get-Fat-About/dp/0307474259

[2]: http://youtu.be/ywRV3GH5io0

jrapdx3 19 hours ago 2 replies      
After 150 comments have been made, maybe 6 people will stop to read this. But I feel compelled to contribute a bit of what I've learned about obesity.

I was the medical director of an obesity treatment clinic for 10 years, working with thousands of obese patients.

The most important lesson is that obesity is a disease, and each obese person has a different disease. Each case requires a unique treatment approach. "Cookie-cutter" methods won't cut it.

I'm convinced that obesity is the most complex disease the art and science of medicine has ever faced. I can't even begin to describe the mind-boggling complexity of the situation.

A minimalist outline: factor in participation of the endocrine system (insulin resistance, role of cortisol, thyroid, reproductive hormones), the immune system products promoting obesity, as well as adverse inflammatory effects of adiposity contributing to metabolic disarray, and the brain's functional role in metabolism involving highly intertwined connections of neuronal circuits regulating metabolism and sleep/circadian rhythms. And so I could go on for gigabytes on these subjects, even before citing the enormous list of references.

Short answer: all of these body systems (neural, endocrine, immune) are interactive. Think many:many relationship with "many"==trillions. Therein are the solutions to obesity. Small needles, huge haystack.

A few years ago it was mentioned at a conference that at the time over 250 human genes (and their peptide products) had been identified to play a role in obesity. Considering the multitude of known and potential gene/environment interactions, what simple "cause and effect" paradigm could we glean?

So yes, many obese patients respond favorably to low CHO, high N diets. Altering PUFA intake to approximate a 1:1 intake of N3 and N6 EFA in adequate amounts is warranted. Elimination of physiologically incompatible trans-fatty acids in the diet is absolutely necessary. Mono-unsaturated or saturated fats within calorie constraints are not usually an issue. Behavioral approaches are always indicated.

Just remember, each of us is different, our systems are inherently quirky, and tremendous variation is common. The above general rules are fine to start with, but be prepared, understand the "reality paradox": exceptions are the rule and not the exception.

geoffc 1 day ago 1 reply      
For me the answer is simple. If I eat 2500 calories a day of vegetables, nuts, meat, eggs and fruit I have trouble finishing it all. If I eat 2500 calories with grains and sugar included I'm starving at the end of the day and it takes every ounce of will power not to eat 3000 calories. It might well be calories in, calories out but what I eat makes it dramatically harder or easier to regulate the calories in.
dangerlibrary 1 day ago 2 replies      
Holy crap, talk about burying the lede.

There's a table at the bottom of the article that contains the tl;dr about the scientific studies referenced. All are still underway, there are no published results yet.

alainv 1 day ago 1 reply      
Fascinating that the core study the article focuses on is using strictly male subjects. I thought this had been a controversial approach[1] for quite some time now - yet they still claim the study's main goal is "doing it right."

[1]: http://well.blogs.nytimes.com/2010/06/30/phys-ed-what-exerci...

scotty79 17 hours ago 1 reply      
I read some time ago that there are only three dietary advices for general population backed by science:

  1. Eat food.  2. Not too much.  3. Mostly plants.
I wonder if anything changed since then.

Al-Khwarizmi 16 hours ago 1 reply      
I have a hard time believing the new theories that fat is not that bad and sugar is the evil. I used to have a diet with plenty of beef and pork meat. Then I went to live for a couple of months in Singapore, and there I ate noodles. LOADS of them, because I loved them. The result was that I lost a lot of weight.

In my experience what gets me fat is meat, and what makes me lose weight is eating less meat and more pasta and rice. But I suppose it varies per person.

tim333 15 hours ago 1 reply      
Two striking things about dieting. Firstly, it's a matter of calories in vs calories burnt. Stop eating and you'll lose weight, so it's mostly down to your controls, conscious or unconscious. Secondly, everyone starts at about 8 lbs at age 0 and ends up at about 140 lbs at age 18, give or take 50%, and that's not down to conscious planning: the unconscious bits of the brain make kids hungry if they need food, and make them not eat and run around if they have too much. And the mechanisms are powerful: no kids remain at 20 lbs because they choose to. When adults get obese it's seldom because they choose to, but because the unconscious bit goes wonky; in the US, I think, mostly due to sugary food and not much exercise. It's interesting if you look at the book 'French Women Don't Get Fat': it's mostly a recipe book, but her actual story was that she went from France to the US for a year or so, hit the sugary snacks and piled on the pounds, and then on her return her dad was horrified, so she dropped the snacks and the weight went. So a mix of factors there.
ajcarpy2005 1 day ago 0 replies      
The body likely has ways of losing weight that are faster than simply eating itself (burning fat).

Not all weight is fat.

Metabolic efficiency varies, including by calorie type.

Much of the chemical energy output in the body is involved in actually repairing or replacing, not only in expanding the volume of fat reserves or even muscle.

It's all a thermodynamically-limited bunch of processes but thermodynamics is a limit rather than a driver of energy transformations.

Calorie REDUCTIONS don't guarantee weight loss because obviously the body can choose to expend less energy. And if the term CALORIE DEFICIT is used, it is not justifiably used, because science currently can't determine the necessary level of granularity: energy, weight, and measurable metabolic output/activity all change in response to factors other than the thermodynamically relevant ones, and this makes thermodynamic equations/measurement of human dieting problematic. Essentially the system is kind of a 'black box', and some of the relevant inputs and outputs in the thermodynamic equation are 'inside' that mathematical 'black box'.

Edits for spelling

Oh, and a slightly less vague explanation can be added to the concept of energy transformation to explain why it wouldn't always correlate with a weight change: combining or dividing molecules.

What if your body doesn't have enough energy to go through the processes of burning a fuel source (or lacks the necessary minerals, vitamins, or other nutrients...)?

tokenadult 1 day ago 0 replies      
We need new rigorously controlled experimental studies to tease out the causation patterns suggested by correlations observed in observational studies of human diet. The way to test a causal hypothesis is always, at bottom, to do a controlled experiment.[1] So we will tease out the effects of diet on different people by finding experimental volunteers and subjecting the volunteers to controlled diets, such as those planned for some of the experiments described in this interesting article.

This is very difficult to do, as almost all human beings eat when they feel like eating WHAT they feel like eating. Earlier human experiments on effects of diet in the 1970s actually required the experiment subjects to live in the laboratory long-term, and to have every gram of everything they ate during the experiment measured exactly by experiment team assistants. Even at that, those experiments came up with few clear conclusions, perhaps because the experiments weren't lengthy enough or didn't include enough subjects for strong inferences. Now the experimenting begins again. Whether the currently hotly debated hypotheses about human diet win or lose, it's important to put the hypotheses to the test of a rigorous experimental study to advance human knowledge.

[1] http://escholarship.org/uc/item/6hb3k0nz

scarygliders 1 day ago 0 replies      
Reading the comments so far, it's interesting how much people's ire gets raised on this topic.

Also, have a look at this, originally written/published, it seems, in 1958: http://www.ourcivilisation.com/fat/index.htm

Makes for a fascinating read, and it amazes me how close it gets to what's being put forward now (high fat, low carb == good).

visarga 23 hours ago 1 reply      
What about calorie counting? I think the best way to lose some fat is to add up the calories of what you're eating as the day goes by. In time, this creates an ability to know which foods are too rich and which are OK. It also allows managing appetite/hunger by allocating the remaining calories over the rest of the day. I personally found it much easier to eat on a budget of 1,200 or 1,400 calories a day than to follow a regime that forbids some kinds of foods or aims to make food less palatable. I lost 30 pounds that way and was able to keep my new weight for the following 5 years. I used an iPhone app for the actual counting and calorie database lookups. Physical activities can also be tracked and added to the daily budget: if I walk for 2 hours, then I can have an extra meal, if I want to take it.

TL;DR Calorie counting makes for mindful eating and changes habits, without suffering.

rayiner 1 day ago 0 replies      
I think modern food marketing should get more scrutiny here. Aside from the 24/7 food ads with Photoshopped hamburgers, there are the new, more caloric, more addictive products. Starbucks, for example, has replaced the traditional American coffee and donut with a latte and pastry combo that has twice as many calories or more. You can't sell low-calorie coffee with creamer and sugar (~50 cals) for as much as you can sell a latte (~200 cals).
clumsysmurf 1 day ago 2 replies      
"Gardner's study stems from his previous research, which suggests a diet's effectiveness may be due to how insulin-resistant the dieter is at the outset."

What if it also depends on the subject's microbiota, which would be impacted by a number of things including the (unwanted) consumption of residual antibiotics in meats.

Seems like the more we find out, the more questions there are.

honhon 22 hours ago 0 replies      
I don't believe it's necessary for all the experiments to last that long. It's possible to have shorter, more controlled experiments to gain insight into the health benefits of particular diets.

For example...

A British group of volunteers were locked in a zoo and were allowed to eat up to 5 kilos of raw fruit and vegetables per day - but only raw fruit and vegetables.

"Nine volunteers, aged 36 to 49, took on the 12-day Evo Diet, consuming up to five kilos of raw fruit and veg a day."

"The prescribed menu was:

- safe to eat raw;
- met adult human daily nutritional requirements; and
- provided 2,300 calories: between the 2,000 recommended for women and 2,500 for men,"

"Overall, the cholesterol levels dropped 23%, an amount usually achieved only through anti-cholesterol drugs (statins).

The group's average blood pressure fell from a level of 140/83 - almost hypertensive - to 122/76. Though it was not intended to be a weight loss diet, they dropped 4.4kg (9.7lbs), on average."


poolcircle 15 hours ago 0 replies      
I firmly believe in the saying, "You get sick because of what's eating you, not because of what you are eating".
yaddayadda 1 day ago 0 replies      
>NuSI's starting assumption, in other words, is that bad science got us into the state of confusion and ignorance we're in. Now Taubes and Attia want to see if good science can get us out.

NuSI's approach is to test long-standing food science assumptions.

dang 1 day ago 3 replies      
If anyone can find a sentence from the article that would make a more descriptive title, we can change it. This is one case where subtitles and the opening paragraph both fail us.
yarou 1 day ago 2 replies      
I think it's important to note that every individual is different. I'm surprised that gene therapy has not made any inroads into weight loss management and diabetes/hypertension prevention. In an ideal world, diet and exercise should be tailored to your genetic makeup, instead of the "one size fits all" brute force approach.
wdewind 1 day ago 7 replies      
This seems intellectually interesting to me, but it's frustrating because I feel it takes a relatively simple issue and makes it needlessly complex. Yes, maybe there is some amount of optimization you can do with your diet, but the simple fact is that there is not a single study in the history of science that has demonstrated someone eating at a caloric deficit and gaining weight. Show me an obese person who is not eating significantly too much.

From the practical standpoint of actually trying to lose weight/get people to lose weight, the challenges in nutrition are almost entirely around compliance (how to ensure someone sticks with the program) rather than substance (what people put in their bodies). Most people know, within reasonable terms, how to eat healthily. It may not be the most optimal way possible (perhaps keto or some other diet is), but if we spent more time studying how to teach compliance I think we'd be making a lot more progress towards stopping obesity.

Lego Calendar
200 points by greenburger  1 day ago   36 comments top 10
jacquesm 1 day ago 1 reply      
For a while LEGO had 'modulex' and 'plancopy', at least one segment of which was targeted at lego based planning boards.


gurvinder 1 day ago 3 replies      
You will soon get a cease and desist letter from Lego's lawyers for using their trademark. I am telling you this from experience.
00bemccurrachp 1 day ago 0 replies      
Love the idea, especially its accessibility. This feels like it would be fast to manage physically, except for the take-photo-and-email component, which could be replaced by a webcam pointed at the board. Then there's also no technical friction from fiddling with your phone.
knes 1 day ago 0 replies      
FYI, This was posted 11 months ago too.


jpetersonmn 1 day ago 1 reply      
Looks pretty cool. A few questions:

1) Do you have to give your calendar login to a 3rd party?
2) What happens if I add something to my calendar on my computer? Is there some alert sent to someone that they need to add a Lego to the board?
3) What if I schedule something on the calendar online, but the Lego doesn't get added to the board? When someone takes a picture and syncs it, what will happen to my appointment? Will it think it's gone and erase it? Notify of the discrepancy, etc....

I'm envisioning in my head some arduino powered lego calendar that automatically puts the blocks in place as appointments are added/moved/deleted from the cloud.

yzzxy 1 day ago 1 reply      
Although the image recognition software is cool, it's almost surely cheaper to just set up Mechanical Turk HITs, compared to however many programmer-hours were spent on the image recognition.
zheshishei 1 day ago 1 reply      
Can the synchronizer differentiate between single block and double block heights?

Also, another cool level of granularity (if needed) could be using 1x2 or 1x1 lego blocks to add more information that's easily seen in the photo. Not only do you have different colors of 1x2 and 1x1 blocks, you can also place them in different positions (left/right vertically, top/bottom horizontally).

All in all, great idea. I'd like to set one of these up myself in the future.

teamonkey 1 day ago 2 replies      
Can anyone explain what hiding blocks in a drawer achieves? I can't work it out.
sygma 1 day ago 0 replies      
I posted this two months ago [0] glad to see it gained traction this time :)

[0]: https://news.ycombinator.com/item?id=7914768

corbett3000 1 day ago 0 replies      
This is so 2012.
205 points by lelf  18 hours ago   82 comments top 25
fencepost 12 hours ago 2 replies      
There are multiple projects/products out there for this, some of which are linked here and some of which are not. Not all are currently available. There was a fair amount of discussion and useful information in a Brian Krebs article: http://krebsonsecurity.com/2014/06/gear-to-block-juice-jacki...

USB Condom: ~$10, available. Tends towards either a bare board with USB connectors or that board with plastic shrink tubing on it. (https://www.crowdsupply.com/xipiter/usbcondom) (http://www.usbcondoms.com/) (probably an earlier version but the same person: http://int3.cc/collections/frontpage/products/usbcondoms)

UmbrellaUSB: ~$12, available soon? More polished/finished looking than the USBCondom, got their information on voltages from the USBCondom folks (see comments in the Krebs article above). Working on fulfillment of their Kickstarter (funded July 3). (http://www.umbrellausb.com/)

ChargeDefense: ~$??, a "coming soon" page, a picture of a prototype, and maybe more in September. (http://www.chargedefense.com/)

LockedUSB: ~$20, available. More technical details available, more expensive and very blocky looking - expect it to block any adjacent ports. Technical information indicates that the single unit should work with both Apple and non-Apple devices (https://lockedusb.com/product/lockedusb-adapter-charger-fire...)

Practical Meter: ~$20, available. Protects ONLY when used with their optimized 3-in-1 charging cables otherwise passes data through. Provides a 5-bar indicator of current. (http://www.powerpractical.com/product/practical-meter) more details in their kickstarter (https://www.kickstarter.com/projects/david-toledo/the-practi...)

PortPilot: ~$60, not yet available. Much more expensive, MUCH more informative, switchable between data/no data. Includes a display showing possible and actual power draw, etc. Almost a development/diagnostic device. (https://hakshop.myshopify.com/products/portpilot)

At least 3 listed below via Amazon (2 in UK): PortaPow $7 (2 versions, www.amazon.com/gp/product/B00GC4AJOU, looks like a "beat you to market" device), and Pisen ~$1.70 (http://www.amazon.co.uk/dp/B00E8ALIYU and http://www.amazon.co.uk/dp/B00E8AJ41E).

Springtime 11 minutes ago 0 replies      
Had hoped from the title the product would be a way to switch a USB drive from read/write to enforced read-only mode to protect from malware on unknown hosts. Would be a nice product in itself.
flavor8 17 hours ago 4 replies      
The board looks a little fragile. A tool like this (which is going to be used on the road) needs to be as solid as possible. For $10/unit it seems like they could afford to at least give it an epoxy surround.
lucb1e 17 hours ago 5 replies      
Or you just cut two wires in a normal USB cable. No need to buy condoms!

I think the board is there because some power sources might go "hey, I'm leaking: there is no device but I draw power!" and cut it off, but I've only ever heard about that and never encountered it. My USB ports nicely power fans without ever having a data connection to anything.

cnvogel 13 hours ago 1 reply      
Or in software (Linux):

    # cd /sys/bus/usb/devices    # for n in usb* ; do echo 0 >$n/authorized_default ; done
...so that no drivers or userspace programs are allowed to communicate with any newly connected devices.


Of course this only covers the USB host side; you'd have to disable all USB-gadget daemons on your Android phone to keep the charger from tinkering with the phone's data.

NOTE/added: I just realized that the main purpose this is marketed for is protecting the phone's data. I'd be more worried about the computer if someone asks me to lend some juice...

ulfw 16 hours ago 1 reply      
Or you could buy one of the charge-only USB cables I saw all over Asia. They are a bit cheaper than the normal data-carrying ones as they have fewer wires.
petercooper 17 hours ago 3 replies      
Electronics people: why are there any components in this at all? If it's just about disconnecting certain pins, couldn't it just pass the power lines through and be half the size without a PCB at all? For example, it could easily be a cable missing two wires, right? (Note: I'm an idiot when it comes to electronics, so I'm genuinely interested.)
nly 17 hours ago 6 replies      
What are the fundamental flaws in the USB protocol that make it insecure? I know firewire allows for DMA, but I didn't think USB, besides being a complex serial protocol, had any intrinsically unsafe features?
ColinWright 15 hours ago 0 replies      
See also the extensive discussion from a year ago:


readerrrr 17 hours ago 1 reply      
Looks like this is equivalent to a dedicated usb charger.

There should be an option to enable data transfer, currently you have to physically remove it.

I would love to have something like this, if it enabled my devices to be read only; some usb flash drives have a physical button to enable that.

mindslight 12 hours ago 0 replies      
Mobile USB charging ports (as found in airports etc) are more of a gimmick than anything else. A shoddy one will easily damage your device, and if you're constantly plugging into different ones, that seems like just a matter of time. Plus, an unknown one will most likely just put out 500mA (slower charging), and USB A connectors aren't made for high insertion cycles so expect flaky connections. Plus you still have to carry the bulkiest part (the cable) so you still need kit.

I personally just carry a three way AC power splitter cube while traveling, which gives me enough ports for laptop+phone+whomever I ask to share with.

ufo 5 hours ago 0 replies      
I wonder if it would be possible to use a similar system to make a USB hard disk read-only. That would make it easy to stop malicious computers from transferring pesky autorun.inf files and the like.
nyar 4 hours ago 0 replies      
If I can get a flash drive for $10, I should be able to get this for <$5. I'll wait until the novelty wears off and get it for $1 from China on eBay.
lazerwalker 15 hours ago 2 replies      
It's worth noting that this is unnecessary for iOS devices, where plugging your device into an unknown USB port prompts you to either "trust" or "not trust" the computer in question (with "not trust" disabling data transfer).
dbbolton 4 hours ago 0 replies      
Couldn't a person just cut the green and white wires in their charging cable if they were concerned about this?
sbierwagen 10 hours ago 0 replies      
We resell something similar, from DFRobot: http://www.robotmesh.com/usb-power-detector

One of the ports has the data lines connected, the other port doesn't, so it could be used as a USB condom.

MAGZine 7 hours ago 0 replies      
I can't help but wonder if we'll see USB condoms that help to protect against spying through EM/power draw changes, i.e. to spy on decryption activities.
shimon 14 hours ago 1 reply      
Seems like you can already get similar stuff elsewhere for cheaper and with a little plastic around the PCB:http://smile.amazon.com/PortaPow-Fast-Charge-Blackberry-Char...
Sephr 10 hours ago 0 replies      
I'd pay more for something like a "smart usb condom" which does allow data but only just for power negotiation, so that my devices can still negotiate for higher power when available.
yoran 17 hours ago 1 reply      
$10 is expensive for such simple electronics! I understand that the price of the first piece is the highest, but if this gets mass-produced I think the price can easily drop to something like $1.
srslack 17 hours ago 0 replies      
Be sure to check out http://int3.cc/ Ridley's community project, and watch his talks if you haven't seen those.
geuis 10 hours ago 0 replies      
Honestly the Wired quote is a much better summary and gets right to the point.

"Many public locations now offer USB charging stations, but it's a trivial task to modify one of these to allow an attacker to access your data. Int3.cc's device cuts off access to the data transfer pins on the USB port, while still permitting access to the power supply."

Way too many words on that page before just getting to the damned point.

classicsnoot 12 hours ago 0 replies      
In terms of a protective cover/case, maybe there is a cheap, everyday item or container it would fit into nicely. I put pen springs around all of my cable heads.
mayuro 17 hours ago 1 reply      
Did no one else get that it's a joke?
tomphoolery 9 hours ago 0 replies      
Oh...I thought this was a...um...

Never mind.

Docker 1.2.0, with restart policies
185 points by julien421  1 day ago   66 comments top 10
shykes 1 day ago 6 replies      
Hi all, no World-changing features in this one, but we believe that over time, relentless incremental improvements can make a huge difference.

This week we are freezing all feature merges and focusing on refactoring, code cleanup and generally repaying as much technical debt as possible.

We are also considering a gradual slowdown of the release cadence (we currently cut a release every month), to give more time for QA. Even though we work hard to keep master releasable at all times and run every merge through the full test suite, in practice there can never be enough real-world testing before a release. An 8-week cycle (which is roughly what Linux does) would allow us to freeze the release 1-2 weeks in advance and do more aggressive QA.

waffle_ss 1 day ago 3 replies      
This is excellent news. The lack of a container restart policy was the main reason why I was spending a bunch of time learning CoreOS and fleet.

Trying to get CoreOS installed on VPS providers is a huge pain[0], and fleet and etcd are technically not labelled as production-ready (only CoreOS used as a base OS is)[1], so I'm really glad I can go back to vanilla Docker.

[0]: http://serverfault.com/a/620513/85897

[1]: https://coreos.com/blog/stable-release/

IanCal 1 day ago 1 reply      
I'm not sure how I feel about this (edit: restart policies). It's cool, but it seems to ignore what the OTP part of Erlang development learned. They've already gone to "X number of restarts = failure", but with no time window involved. There's also no hierarchy, which is where you really start to get the benefits.

While great, I worry that this is a part-solution that will delay the implementation of a proper one.

troym 1 day ago 3 replies      
Maybe a bit off-topic.

I haven't found a satisfactory solution to having communicating containers across multiple hosts. There seems to be quite a few solutions in the making (libswarm, geard, etc). How are other people solving this (in production, beyond two or three hosts)?

charford 1 day ago 3 replies      
Any update on when the OS X version will be available? I'm only seeing version 1.1.2 here:


jijojv 1 day ago 1 reply      
Writable `/etc/hosts`, `/etc/resolv.conf` is huge - no more local dns hacks.
antocv 1 day ago 2 replies      
Oh this is juuuust great. /sarcasm.

So now docker is taking on the work of what systemd and other daemon-managers are supposed to solve? Looking forward to docker run --restart on-failure ubuntu /bin/bash exit -1

When you include a --restart "feature" you know for sure you have done goofed.

But anyway, the rest of the stuff looks like pure candy. Great job!

frik 1 day ago 3 replies      
How to deal with persistent storage (e.g. databases) in Docker 1.2?

is this info up-to-date? http://stackoverflow.com/questions/18496940/how-to-deal-with...

abraham_s 1 day ago 0 replies      
Any idea when AWS Elastic BeanStalk will start supporting this version?
LunaSea 20 hours ago 0 replies      
Is it possible yet to build and "RUN" multiple sublayers inside the same Dockerfile ?
Programming language subreddits and their choice of words
173 points by quantisan  3 days ago   55 comments top 15
ivoras 3 days ago 2 replies      
An interesting indicator from the "mentions" graph: how compatible or co-used two technologies are in practice.

For example: C++ programmers apparently don't mention SQL at all, while it's very popular with PHP (which doesn't have a built-in ORM). There is also no overlap between C++ and JavaScript programmers.

Rust is obviously very influenced by C++ and Haskell, but the C++ community doesn't even know about its existence. Somewhat naturally, the Matlab and PHP communities really don't have much in common.

WayneS 3 days ago 1 reply      
"But what is up with the Visual Basic community? They are neither angry nor happy. They just ... are? :)"

This is answered by the mentions relative to TIOBE graph. They use VB, but they are careful not to talk about it.

josteink 3 days ago 3 replies      
I know PHP-bashing isn't particularly creative nor nice, but I wasn't surprised when I saw the word-analysis at the end there :)

That said, I don't think the findings for C was shocking either.

You're not going to be writing kernel modules in Clojure. It's going to be some sort of C. And the chances of it involving hardware (performant code or not) are significantly higher than with other, non-systems languages.

dbbolton 3 days ago 3 replies      
I wonder why Perl was excluded. A quick search reveals that /r/perl has more subscribers than lua, matlab, objectivec, scala, sql, and visualbasic.
ChikkaChiChi 3 days ago 1 reply      
This is awesome. It almost paints a picture from PHP => Python => cpp & Java.

Considering that would be considered a growth path of a developing programmer, I find that fascinating.

PS: Go will always have problems with stuff like this. They should have just named the language string or var or something equally toxic to machine collection.

squirrelthetire 3 days ago 0 replies      
It's great to look at the correlations between Rust, C++, and Haskell. Rust programmers are very interested in Haskell, and Haskell programmers in Rust, which indicates that Rust's design has a lot of influence from Haskell. C++ programmers are talking about Rust more than anything else, which suggests they are interested in moving to Rust.

This is very exciting to me, because I believe Rust is going to be a great replacement for C/C++, and if anyone understands how to create quality programming languages, Haskell programmers do.

TL;DR Rust is going to be the new C++, and the new Lisp. These are exciting times.

josephschmoe 3 days ago 1 reply      
I feel like Haskell gets name dropped a lot more - when you mention something from a C-language you can have a reasonable expectation the other person knows what that thing is.
fenomas 3 days ago 0 replies      
I agree with the author that the top graph may not be hiding any important insights, but the interactive version (linked from the image) is sexy as all get-out.
tel 3 days ago 0 replies      
I wish the colors indicated not absolute comparative volume (more X talk happens on Y's forum than vice versa, so it's colored Y) but instead comparative relative volumes:

    Forum X spends a greater percentage of their time
    talking about Y than vice versa, so it's colored X
Basically, who obsesses over who more.

LukeHoersten 3 days ago 0 replies      
That's really clever. I had a good laugh at some of the results. Well done.
virtualwhys 3 days ago 0 replies      
There's a language that everybody is talking about; yet nobody uses it.

Kind of a spinoff of Stroustrup's, "there are languages everybody complains about, and languages nobody uses".

th3iedkid 3 days ago 3 replies      
Looks like users of PHP are the most offensive when given a choice of words to hate their language, and users of Mathematica are the least offensive! Funny, but not unexpected.

clojure users seem very happy with what they use!

On the mutual mentions, Java users have near-zero mentions of Haskell?

tormeh 3 days ago 0 replies      
This is really cool. I wonder how the "buzzfactor" of Haskell would measure up against Idris.
amcnett 3 days ago 0 replies      
My kingdom for a sarcasm detection algorithm!
huangc10 3 days ago 0 replies      
nice job. as a user of PHP. no comments. :)
RTFM 0day in iOS apps
178 points by algorithm_dk  2 days ago   57 comments top 24
dperfect 2 days ago 1 reply      
This is not the default behavior for a UIWebView; the exploit relies on a practice that is very common among developers of iOS apps with web content.

Specifically: every click/interaction that loads content in your custom web view sends the webView:shouldStartLoadWithRequest:navigationType: message to your web view delegate. Without implementing that method, clicking a tel: link will prompt first. However, many apps throw some logic in there to detect any URI schemes that don't match the standard HTTP/HTTPS schemes used in normal websites, and trying to do something "nice" for the user, they handle requests for those URIs by calling:

[[UIApplication sharedApplication] openURL:request.URL];

This is a reasonable thing to do (outside the context of tel: links) because it allows the app to spawn an external app for custom URIs.

Therein lies the problem: not that UIWebView opens tel: links without prompting (it doesn't), but that many app developers are just trying to improve the inter-app experience, and unknowingly open tel: links directly with that openURL: method.

EDIT: Just my opinion, but I think it's actually pretty cool that Apple gives developers the ability to dial phone numbers without an extra prompt. It makes third-party contact/phone apps much more useful (imagine having to confirm every phone number to dial after tapping the contact in the built-in phone app). In a way, this is the kind of trust / freedom that iOS developers rarely enjoy without a fight. It's just unfortunate that in this instance, it also happens to be very easy to overlook this pitfall when implementing web view logic that handles non-http links.
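[Editor's note: the safe pattern dperfect describes amounts to an allow-list check before handing a URL to openURL:. The real fix belongs in the Objective-C webView:shouldStartLoadWithRequest:navigationType: delegate; purely as an illustration of the logic, here is a sketch in JavaScript. The function name and return values are my own, not any iOS API.]

```javascript
// Decide whether a URL tapped inside a web view is safe to hand
// straight to an external handler, or should prompt the user first.
// Illustrative only: on iOS the real check would live in the
// webView:shouldStartLoadWithRequest:navigationType: delegate method.
function classifyWebViewUrl(url) {
  var scheme = url.split(':')[0].toLowerCase();
  // Ordinary web navigation: let the web view load it inline.
  if (scheme === 'http' || scheme === 'https') return 'load';
  // Schemes with side effects (dialing, calling, messaging) should
  // get explicit user consent before being opened.
  var sideEffectSchemes = ['tel', 'facetime', 'sms'];
  if (sideEffectSchemes.indexOf(scheme) !== -1) return 'prompt';
  // Everything else is some custom app scheme; opening it blindly is
  // exactly the "nice" behavior the parent comment warns about.
  return 'open-external';
}
```

The point of the sketch is that the dangerous case is the unconditional fall-through: apps that skip the middle branch and open every non-HTTP scheme externally are the ones that dial tel: links without a prompt.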

tolmasky 2 days ago 1 reply      
Seems infinitely more reasonable to have the default be a prompt, and to have to explicitly enable "auto calling". People making a contacts app for example would immediately notice the prompt and rectify it. Meanwhile everyone else can remain blissfully ignorant of the tel: scheme.

On the other hand, with this system, every single app that ever uses a web view has to somehow magically divine that this could be an issue. The UIWebView docs certainly don't warn you about this. So what, is the expected behavior that if you ever use a web view in your app you should read every RFC on the planet in case there's some weird edge case like this? Maybe instead of creating systems that require careful developers, we could try creating systems that work well by default and require you to explicitly turn on dangerous features like this.

wlesieutre 2 days ago 1 reply      
Strange design choice on Apple's part. Having not read the documentation, my expectation would be that native buttons don't require a confirmation, but links coming from a web view do. It would have to provide an option to suppress the popups in use cases where the developer is building their app in a web view, but if you're already jumping through all of the hoops for that sort of cross-platform development, one more configuration option to set isn't a big deal.

I'd blame Apple just as much as the devs. They made a choice to be insecure by default in a situation where the majority of developers are going to assume it functions like the rest of the OS. Web views in any app ought to behave like Safari by default.

DanBlake 2 days ago 1 reply      
I would think a more likely to be exploited use case would be to find the telephone number of celebs. Send the celebrity some specially crafted message on FB or twitter with a good mix of social engineering and you should be good to go (provided they are on their phone when they view your message)
lukeqsee 2 days ago 2 replies      
Why was this information immediately released/zero-day'ed? Is this method ignoring responsible disclosure or am I missing something?

FB, Google, and others all pay for bugs such as these, so even monetarily, it doesn't make sense to just release it to the public immediately. Again, this is assuming these bugs were not disclosed previously to companies affected.

edit: clarification

username 2 days ago 3 replies      
> When a user opens a URL with the tel scheme in a native app, iOS does not display an alert and initiates dialing without further prompting the user.

What is the justification for this behavior? Why should the web browser open a prompt but not other apps?

gepeto42 2 days ago 1 reply      
Hi guys,

I just did a talk during BSidesLV on the subject of URL Schemes and dangerous implementations.

For those who want all the details:


For those who want to skip the explanations of how they work and see the bad examples, skip ahead about 10 minutes:


One example that I have in there is Yo. Yo will automatically Yo someone on your behalf. So if an inline frame has yo://gepeto42 (basically), and you have Yo installed, I have just "de-anonymized" your Yo account as you browsed my website (or any page where I could inject that iframe). A good tip on where to find out about those is to buy Launch Center Pro and to extract the plist it has. This has info about hundreds of iOS apps and how their URL Schemes work.

Happy hunting.

sprkyco 2 days ago 1 reply      
Well done. I myself enjoy reading RFCs, but have yet to create something as crafty as this. Good work! It does not work on Android 4.4.2: it requires application selection and then brings you to the dial pad.
yincrash 2 days ago 0 replies      
You're saying that it's the app developer's fault that they haven't read all of the iOS developer documentation before opening webpages in a web view (rather than just the web view documentation)? That seems ludicrous.
adam-f 1 day ago 0 replies      
Being a bit pedantic here, but shouldn't it be specified as a URI, "tel:0000" not a URL-ish "tel://0000" with 0000 as the authority?
thom_nic 2 days ago 1 reply      
Reminds me of the auto-reset Android bug that utilized a tel: URI that auto-dialed: http://gizmodo.com/5946334/samsung-security-bug-can-wipe-out...
Tloewald 2 days ago 0 replies      
I'd call this bad design on Apple's part -- they should require the application to explicitly define a custom handler or fall back to default (secure) behavior. Yes, it's documented, but I'd call it a design flaw.
michaelmcmillan 2 days ago 1 reply      
Undoubtedly a poor design choice, I appreciate that you bring some attention to it.

  Facetime calls are instant. Imagine you clicking a link, your phone calls my (attacker) account, I instantly pick it up and (yes) save all the frames. Now I know how your face looks like and maybe where you are. Hello pretty!  Yes, it works. I tried.
Can you please provide some evidence that this is practically possible? Last time I used Facetime it took quite some time before the connection was established.

fullpint 2 days ago 0 replies      
There was a talk at BSidesLV called "iOS URL Schemes omg://": http://bsideslv2014.sched.org/event/21c84fe90196be5a475f6b33... You'll probably want to watch this on YouTube: https://www.youtube.com/watch?v=rJroherlZVo
paulfwalsh 2 days ago 0 replies      
There are a lot more security threats. My startup's URL reputation lookup service is being integrated with one of the biggest app platforms to help developers build security into apps. There are so many security threats that come with the standard WebView - mostly because I'm confident Apple and Google didn't think about how they could be used today.
gepeto42 2 days ago 0 replies      
Chromium Bug Thread about this: https://code.google.com/p/chromium/issues/detail?id=329259

"In practice, the closest to 'malicious' use I've seen is the redirect-to-app-store-from-ad case"

orblivion 2 days ago 0 replies      
Yeah, I think I disagree with the author. Unless someone can think of a reason it should be like this, I think it's Apple's fault. If there's a reason to allow no-confirmation telephone or FaceTime initiation, they should make it an option on the web view.
algorithm_dk 2 days ago 0 replies      
Send it using the Facebook Messenger app or the Gmail app. The fault is in the apps, not in the system. Your code is correct.
alex2626 8 hours ago 0 replies      
Does any solution exist to send a URL with a command like "Send iMessage "text" to "user"" automatically, without tapping "send"?
losingthefight 2 days ago 2 replies      
I completely disagree that Apple is not at fault here. It should always prompt, end of story. More importantly, though, the devs at FB, Google, etc. likely saw how it worked in mobile Safari and wrongly assumed it would work the same in a web view. Why have the inconsistency in an a tag?
raviborgaonkar 2 days ago 0 replies      
Well, Apple did not learn from their previous mistakes or from the Android-related tel:+USSD issues - http://www.securityfocus.com/archive/1/504414/30/0/threaded http://www.osvdb.org/show/osvdb/85806 . But I did follow the responsible disclosure procedure before going public and informed Google and Samsung. I hope he did the same :/
lsv1 1 day ago 0 replies      
I tried this but was unable to reproduce it.
S_A_P 2 days ago 1 reply      
This may be off topic, but I don't recognize some of the UI elements (battery life, signal strength, etc.) in his screenshots, and I can only assume that it's an iOS 8 beta. Isn't he breaking his NDA by showing off an unreleased OS, security hole or not?
api 2 days ago 1 reply      
I disagree that it's not Apple's fault. This is a violation of the "principle of least surprise." When I drop a web view into my app I am not expecting web pages to be able to do much other than just be web pages. I'm certainly not expecting them to be able to dial the phone without prompting.
Linux Performance
181 points by Walkman  3 days ago   22 comments top 4
fasteo 2 days ago 0 replies      
I would recommend his book [1] to anyone interested in systems performance. What really caught my attention is the focus he puts on having a goal and applying a method to solve performance issues. Many times, I have found myself "lost" while isolating a performance issue. Not anymore.

[1] http://www.brendangregg.com/sysperfbook.html

nisa 3 days ago 4 replies      
I'm just a lowly student assistant that deals with a lot of Linux machines running Hadoop but I'm totally in love with these presentations from Joyent and the SmartOS guys.

I seriously considered moving to SmartOS, as ZFS, Zones, and likely DTrace are features that would make my job (which largely comes down to organizing running software on machines and debugging problems) far easier, and would allow me to use the machines to a better degree, but in reality it's not going to happen.

Nobody is familiar with the Solaris userland. I'm not a sophisticated and educated systems engineer at Joyent; I'm just a stressed guy trying to fix problems. Unfortunately, Linux is pretty good at making it work, because a lot of people are in a similar situation and someone will fix it for me.

I just don't have the time, knowledge, and energy to e.g. fix the native Hadoop libraries in the ecosystem to build with another libc, or make my own or other applications able to run without some Linux-specific crap.

That being said, I really thought about pushing SmartOS/Solaris, but as a lone fighter it would be suicide in a world where everyone knows apt-get install <whatever> and gets their shit done in a reasonable way.

Maybe it's something for specialised applications and not academia.

I've come pretty far with just strace and perf top, and most problems I had in my own applications were better analyzed by Valgrind and KCachegrind, or Massif and the visualizer...

wmf 3 days ago 1 reply      
It looks like this page was recently updated, and coincidentally he gave a talk today on Linux performance at LinuxCon: http://events.linuxfoundation.org/events/linuxcon-north-amer... http://events.linuxfoundation.org/sites/events/files/slides/...
josephyu0305 3 days ago 0 replies      
Nice link. It shows a lot of Linux performance help when it comes to system crashes, and I learned a lot from his presentation.
Project Gitenberg
163 points by mdturnerphys  1 day ago   65 comments top 15
transfire 19 hours ago 3 replies      
The idea has a lot of merit, so for that, two thumbs up. But I would much rather see a separate website for it. Using GitHub feels very strained. Perhaps GitHub would be willing to help set you up with your own instance of their platform, which you could modify to better suit the purpose. Maybe even Project Gutenberg would be interested in participating in that.

BTW, I recently learned that Gutenberg was not his name, which is really a significant historical inaccuracy. His name was Hannes Gensfleisch; "Gutenberg" was just one of the places his family resided.

DavidAdams 1 day ago 2 replies      
My biggest question is this: did the idea for this originate with the pun, or did they think up the great pun afterward?
prosody 1 day ago 2 replies      
What advantage does this offer over Project Gutenberg's own Distributed Proofreaders[1]?

[1] http://pgdp.net

ldng 17 hours ago 4 replies      
That's a great step. I was toying with a related idea last week, actually. To me, the next great step would be to build around that a framework/tools to help with the translation of those ebooks.

What often happens is that editors have one translation of a book, say Les Misérables, and keep reprinting the same translation independently of its quality. So I was thinking that a GitHub-like platform to foster translation would be a great idea. Looks like Gitenberg might be the project just for that.

But maybe it should pick a clone (GitLab?), self-host, and fork/extend that tool to ease use, so that non-developers could use the site without Git knowledge. Then again, tailoring it for translation might not be needed.

kbar13 23 hours ago 5 replies      
One thing I would like to see out of this project is a better version control system for prose. Git is great for code, but it's not at all good for editing text.
chrisballinger 22 hours ago 1 reply      
Congrats Seth! I had to unfollow you while you were making all those repos because it clogged my feed.

GitHub should really put some work into improving their feed algorithm so one project can't just clog it all.

gluejar 4 hours ago 0 replies      
One obvious need is for a build system that makes ebook files out of the git-managed source. And what should our source be, anyway?
FesterCluck 20 hours ago 2 replies      
Has any consideration been given to works that may start on this platform? My wife is an aspiring author, and we'd like more information. I'm sure there are many topics to cover, and we're interested in hearing all of them. However, I specifically wonder about the adoption of open source licensing for such works.


alessiosantocs 13 hours ago 0 replies      
I really love the idea behind this! I think it's a way to disrupt the book industry with all those publishing firms. What's powerful about this is that every person could be heard, and her book could easily spread around the globe.

I found https://www.penflip.com/ a few months ago... It isn't focused on building a digital library yet, but what I like about this project is the good execution. It would be nice to merge them together!

dredmorbius 16 hours ago 1 reply      
Nice, but NB that page is REALLY hard to read.

    body {        color: black;        font-weight: normal;        font-family: verdana;    }
Helps a lot from my experience.

ryanackley 14 hours ago 2 replies      
I like the idea of git for ebooks. That being said, a lot of the free books available from project gutenberg have been around for quite some time.

Besides translations, what can people other than the author contribute? Doesn't it, on some level, ruin the character of these books? If you look at a non-fiction book from 80 years ago, is it worth bothering to correct the information when you can probably find it at your fingertips on Wikipedia?

lucb1e 17 hours ago 0 replies      
For anyone else who finds the font too thin and light to comfortably read, this helps: https://readability.com/bookmarklets
sethish 12 hours ago 0 replies      
If folks are interested in contributing, the mailing list is here: https://groups.google.com/forum/#!forum/gitenberg-project
fiatjaf 13 hours ago 1 reply      
Why don't you add some kind of index/search?
Taylorious 23 hours ago 6 replies      
I don't understand the weird obsession with Git. It's a version control system, not the cure for cancer. Anytime someone shoehorns it into a product, they talk about how Git is so amazing and solves all these problems, but what they are really talking about is just a version control system, not Git specifically.

Using Git for just about anything other than what it was built for is a terrible idea. I mean, the underlying system is incredibly powerful and could be useful in various projects, but the interface is horrific. I swear it's like someone tried to make Git as difficult as possible to use. Programmers have a hard time understanding and using Git; non-programmers will just laugh and walk away. Every time a programmer has an issue with Git, whoever helps them has to sit down and explain the underlying system for 20 minutes and draw a bunch of sticks and bubbles. Non-programmers will never put up with this.

Show HN: Duo a next-generation package manager for the front end
160 points by matthewmueller  2 days ago   89 comments top 23
tomdale 2 days ago 3 replies      
One question I always ask when looking at a package manager is, "Will this help me have reproducible builds?" (http://martinfowler.com/bliki/ReproducibleBuild.html)

I've worked on enough large projects that relied on prayer and optimism instead of determinism when resolving dependencies, and eventually you end up losing hours or days trying to get the app running on a new server or a new developer's box.

Not requiring a manifest is a bug, not a feature. If people can specify dependencies, and information about the version used is lost, then you can be sure that it will happen in practice. Code designed for "proofs of concept" has a funny way of making it into production.

As we know from studies of things like organ donation rates, even the smartest humans get tired or distracted and make bad decisions. The only guard we have against it is choosing sane defaults. (See Yehuda Katz' RailsConf keynote for some interesting insight into how this interacts with convention over configuration: http://www.confreaks.com/videos/3337-railsconf-keynote-10-ye...).

While the simplicity of DuoJS allows it to win the Pepsi Challenge against other options, where the first sip tastes very sweet, I would never again willingly choose a dependency manager that makes reproducibility anything other than mandatory.

wycats 2 days ago 3 replies      
It sounds like this means that if I wanted to use a specific version of Ember, I would have something like the following line in many files:

Doesn't this mean that to update an Ember package, I have to change every file that references that package? That would mean having to touch the JS file for every component in your project to bump the version.

Am I missing something?

skrebbel 2 days ago 0 replies      
I like the ideas, but it feels like it's just a small incremental step over established solutions like webpack and browserify.

Does this have any more to offer other than deducing what to download and install from the dependency tree? Because if that's "all", it could easily be added to e.g. webpack, I suppose. Is it really worth making yet another dependency and build tool just for that one feature? If you build on top of webpack, you get a lot of stuff for free, like speedy and dependable file watching, hot-reloading development servers, support for nearly any imaginable frontend language, and a remarkably decent extensible architecture. All of this has to be made again for Duo.

Both dependency management and frontend building are highly complex tasks. I'm not saying that therefore it couldn't be done better, but I do honestly, without judging, wonder whether the authors seriously considered existing solutions and ran into impossible problems, or whether this is just the Not Invented Here syndrome at work.

theefer 1 day ago 0 replies      
People interested in Duo may also want to have a look at jspm.io. It solves a similar problem, but with a few differences which to me are advantages:

- Transparently supports modules from CommonJS, AMD, ES6 or globals.

- Enforce a manifest (config.js) that let you pin dependencies (incl. transitive dependencies) to exact versions. Unlike RequireJS config, jspm automatically manages that file for you.

- Support multiple package providers, e.g. NPM, on top of Github.

- Based on SystemJS, a polyfill for the upcoming standard System loader. This hopefully makes it future-proof.

- Does not require a compilation step: dependencies can be pulled dynamically from a CDN over SPDY. Alternatively they can be cached locally as well. A compilation step (jspm bundle) is still available.

- Works both in the context of Node and the browser.

We've been successfully using jspm and SystemJS in production at the Guardian. It's still early days, but the devs are very active and responsive.

This isn't meant to distract people away from taking a look at Duo and making up their own mind, but I noticed nobody mentioned jspm in this thread and thought people may want to look at both and compare.

vkal 2 days ago 2 replies      
So earlier this summer I learned how to use RequireJS + a smidgen of Grunt, then I felt the need to move towards Gulp and Browserify (which I've just recently started), and now I'm excited about Duo.

It would be interesting for someone with more expertise to do a compare/contrast further down the line of all three.

"...I show you how deep the rabbit hole goes" - Morpheus

Also, does someone know how the Closure compiler fits into all of this? It might be totally unrelated but I'm trying to learn more about JavaScript application architecture, and I'm not sure where that fits in.

michaelmior 2 days ago 2 replies      
Personally I think I'd much rather have my dependencies be explicit rather than inferred from what I require across my code base. The idea of a tool which supports both Bower and Component packages is quite nice though.
zackbloom 2 days ago 0 replies      
I'm not sure this actually improves the flow for me. I (like most sane devs) like to lock my deps to specific versions (or vendor them). Currently, it's just a matter of running:

npm install --save dep

With this, I either have to be satisfied with not locking down the version, or go lookup the current version manually before adding the reference to my code.

It also looks like upgrading a dep would mean changing every require.

tjsix 2 days ago 1 reply      
Maybe it's just me but having no manifest makes using this tool quite painful. It has taken me 10 mins just to figure out how to structure the require()'s so that it doesn't error out, and that's with only requiring two packages.

For example, I created a quick index.js file and required angular and restangular. It immediately errored out because the angular.js repo doesn't use semver on its master branch. OK, fine, switch to angular/bower-angular; nope, it looks for index.js, so my require now has to read: require('angular/bower-angular:angular.js'). Run again, error, same issue with restangular: it's looking for index.js. That require now reads: require('mgonto/restangular:dist/restangular.js'). I had to actually find where the files I wanted to require were and explicitly state them in the require. Shouldn't this automatically parse bower/component/package.json files for this info, especially if you're touting the 'No Manifest' thing?

swah 2 days ago 0 replies      
But, but.. I _just_ decided to use webpack :(

More seriously: do you feel like it incorporates all the lessons of its predecessors?

andreaf 1 day ago 2 replies      
I am a little confused about how Duo actually transforms dependencies into JS values. When I write, say,

var uid = require('matthewmueller/uid');

as in the home page example, what gets bound to the uid identifier?

The point is that a package manager does not only need to fetch dependencies, but also to specify relations between modules. This is why, for instance, Bower only does half the job (fetching) and has to be coupled with a tool like Require.js to actually provide modules.

fiatjaf 2 days ago 1 reply      
Does a simple `require('package')` without the slashes fetch the thing from NPM?

Maybe it could. Maybe it could also work with require('package@0.5.1') and other sugars like this.

bpatrianakos 2 days ago 5 replies      
I think I'm missing something. I see how this would be great at build time, but during development I don't want to have to keep running build commands each time I want to use a new package in development. Using Grunt and Bower may be a little time-consuming up front, but once things are set up it's very easy to keep separate dev and prod environments in sync between different contributors. I only see the value at build time with Duo, which is why I'm sure I missed something.
aymericbeaumet 2 days ago 2 replies      
Is Duo smart enough to remember when it finds a library requirement with a specific tag, so that it is not necessary to indicate the tag in the other places where this same library is required?

If so, manifests could be written in a very neat way. See: https://gist.github.com/aymericbeaumet/22c3a9deba54549821e3

nawitus 2 days ago 1 reply      
Is there support for symlinking for easy development? I mean the equivalent of 'npm link' or 'bower link'.

Also, can you host your own "duo registry" (if the concept makes sense here)? And if so, can you point to the private registry instead of using the public GitHub?

Bower and npm support these, and that's what I'm using currently.

danielnaab 2 days ago 1 reply      
npm supports git repositories as dependencies already (git://github.com/...), so that feature doesn't seem like much added value to me.

And considering that any real project needs a manifest, the versioning syntax doesn't seem especially compelling.

I'd like to be proven wrong, because this looks cool... but I don't see the value over npm and browserify other than some syntactical sugar (which could probably be achieved with an npm wrapper). Am I missing something?

yid 2 days ago 1 reply      
Reading the copy on the page, I wonder if the author has heard of npm init and npm install --save. I haven't had to edit package.json in quite a while.
Kiro 2 days ago 0 replies      
Things are moving fast in the front-end world. Just as we've stopped using RequireJS and accept Browserify as the new king DuoJS comes and overthrows everything again.
neilellis 2 days ago 1 reply      
I love simple things, this gets my +1, look forward to learning more.

I can't see why this is browser only, am I missing something?

teabee89 2 days ago 0 replies      
The Go link should point to http://golang.org
RoboTeddy 2 days ago 1 reply      
Is it fast during development on large codebases? (e.g. by compiling incrementally)
colinramsay 2 days ago 0 replies      
This looks superb. Great work.
compedit 2 days ago 2 replies      
You can require dependencies and assets from the file system or straight from GitHub:

Oh my. You've essentially killed Bower and NPM in one swoop then haven't you?

An exciting time to be a web developer to say the least, this looks simply amazing.

progx 1 day ago 0 replies      
Does Duo do the same as gulp + browserify?
Next-Gen Lighting Is Pushing the Limits of Realism
157 points by RaSoJo  1 day ago   66 comments top 14
berkut 1 day ago 5 replies      
Should be "Next-Gen Game Lighting".

This is basically just physically-based shading, which has been done for the past 4-5 years in VFX. Essentially, it's energy-conserving materials with the correct Fresnel effect based on the surface's IOR, which takes things to the next level (for games, at least). Doing this properly for layered surfaces (e.g. a diffuse wood layer with a clearcoat varnish layer) gives very nice-looking results.

Some complex materials can have up to 3 spectral BSDF lobes for reflection, which can only really be done with pathtracing.

For VFX, people are starting to push into spectral rendering now, and trying to optimise areas like volumetric rendering for effects like SSS, which are needed for ultimate realism.
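[Editor's note: for context on the IOR-driven Fresnel term berkut mentions, physically-based shaders commonly use Schlick's approximation, F = F0 + (1 - F0)(1 - cos θ)^5, where F0 is the reflectance at normal incidence derived from the index of refraction. A sketch follows, in JavaScript purely for illustration; real engines evaluate this per pixel in shader code.]

```javascript
// Reflectance at normal incidence for an interface between media
// with refractive indices n1 (e.g. air = 1.0) and n2.
function f0FromIor(n1, n2) {
  var r = (n1 - n2) / (n1 + n2);
  return r * r;
}

// Schlick's approximation to Fresnel reflectance, where cosTheta is
// the dot product of the surface normal and the view direction.
function schlickFresnel(f0, cosTheta) {
  return f0 + (1 - f0) * Math.pow(1 - cosTheta, 5);
}
```

For glass (IOR about 1.5 against air) F0 comes out near 0.04, which is why energy-conserving material systems give nearly all dielectrics the same roughly 4% head-on reflectance, rising toward full reflectance at grazing angles.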

mischanix 1 day ago 2 replies      
"Next-gen" is a bit misleading a description: the GPU (a GTX 670) that this scene is rendered on in real time has about 30% more power than the GPU in the PS4, although it was released 2 years ago. The CPU used to bake the lightmaps has more than twice the power of the PS4's CPU.
doorhammer 1 day ago 1 reply      
Something of an aside: I'm most stoked about this kind of thing because of the Oculus Rift. Even just playing through HL2 with my DK2 is an amazing experience.

What I'm really interested in with things like the Rift is foveated rendering, in order to get much higher-quality graphics out of less computing power. Basically, it uses high-precision, fast eye tracking to render in full detail only the portion of the screen you're looking directly at, since your eye can't resolve detail to any great degree outside of a fairly small area in the center of where you're looking.

Foveated imaging in general: http://en.wikipedia.org/wiki/Foveated_imaging
MS Research paper on foveated rendering: http://research.microsoft.com/pubs/176610/foveated_final15.p...
MS trial on foveated rendering: http://research.microsoft.com/pubs/176610/userstudy07.pdf

kristiandupont 1 day ago 2 replies      
Interesting choice with the Mies van der Rohe Pavilion, because as stunning as it is in real life, its simple geometry actually makes for a bit of a boring tech demo. Still looks very impressive, though.
imaginenore 1 day ago 2 replies      
These videos reminded me of the architectural renders of "The Third & The Seventh" by Alex Roman


adrusi 1 day ago 2 replies      
Lighting does a lot to improve a scene, but unfortunately I'm almost certain that it's pre-rendered lighting, which isn't all that interesting since any dynamic objects present ruin the effect.
hyperion2010 1 day ago 1 reply      
The one thing that is still hard is doing the radiosity. The way I spot CG stuff that can be rendered in real time is that there simply are not enough radiosity bounces, and objects without direct lighting in the scene end up being too dark. Baking lightmaps takes a long time, and real-time radiosity methods are still CPU-bound.
azinman2 1 day ago 1 reply      
Drool... Very impressive, especially in a video game engine. I wish I knew more about the 3D world and could do stuff like this...
danmaz74 1 day ago 1 reply      
This is incredible. It's only a pity that many straight lines are just too straight.
sirmarksalot 1 day ago 1 reply      
If I were an architect, I would be using this to pitch to clients. I know virtual fly-throughs are a thing, but the existing demos I've seen just don't give the same sense of presence as this does.
huuu 1 day ago 2 replies      
Of course the images look very nice, but I'm not very impressed. A lot of artists create photorealistic images. For example, the Ikea catalogue[1] has a lot of CG images.

These are just pre-rendered textures and light maps combined with real time lighting and reflections in Unreal Engine 4.

But I agree that UE4 does a very very good job at realistic real time lighting! Also take a look at the blog of Paul Mader: http://paulmader.blogspot.nl/

[1] http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpe...

halfcat 1 day ago 0 replies      
This reminds me of playing with POV-Ray years ago. I was so amazed by what could be done on a low-powered PC. Granted, it took a long time to render, but it was still somehow magical, like the first time you realize you can program a computer. POV-Ray is free and can create some cool realistic-looking images[1]. This one is my favorite[2].


eli_gottlieb 1 day ago 3 replies      
That's nice. How many game or film studios will be driven bankrupt by the yet-again increased cost of content production?

But yes, Blood Soaked Shoot-em-Up 93x-treme is going to look very realistic. It will be almost as if I was really invading Iraq!

adam-a 1 day ago 2 replies      
It's worth noting that these scenes take around 10 minutes to render a frame, so it's still a long way from real time.

Detailed in the thread - https://forums.unrealengine.com/showthread.php?28163-ArchViz...

Still very pretty though!

Stanford Encyclopedia of Philosophy
153 points by shliachtx  1 day ago   43 comments top 14
jbarrow 1 day ago 1 reply      
I used the SEP when reading about Epistemology, and I must say that it is both a comprehensive and comprehensible resource. Although it was (for me, at least) a bit dense at times, it was perhaps the best introduction to the subject that I found.

If anyone is interested in some philosophical concept, I would recommend starting here, and branching out to books and linked papers only after using the SEP.

And if anyone has any other good resources they would like to point me towards, I would greatly appreciate it.

thanatropism 1 day ago 1 reply      

> The Internet Encyclopedia of Philosophy (IEP) (ISSN 2161-0002) was founded in 1995 as a non-profit organization to provide open access to detailed, scholarly information on key topics and philosophers in all areas of philosophy. The Encyclopedia receives no funding, and operates through the volunteer work of the editors, authors, volunteers, and technical advisers. At present the IEP is visited over 950,000 times per month. The Encyclopedia is free of charge and available to all users of the Internet world-wide. The staff of 30 editors and approximately 300 authors hold doctorate degrees and are professors at colleges and universities around the world, most notably from English-speaking countries.

The SEP is unusually in-depth about certain subjects, though. (Right now, I have "Dutch book arguments" and "Dialetheism" open).

jonhmchan 1 day ago 0 replies      
SEP is easily the best online resource for philosophy, especially in the analytic tradition. It can be a bit dense sometimes, but I've always used it as an introduction or overview of a philosophy text before diving in during college.
timtadh 1 day ago 0 replies      
SEP is one of the best sites on the net for learning philosophy. The biographies of the personalities are comprehensive, and I find the articles provide nuance other resources sometimes lack. It is a great site for some casual reading on the subject.
Xcelerate 1 day ago 12 replies      
This is cool! But there's a lot in there, so I don't even know where to start. Does anyone know of a good introduction to philosophy? My work is in the hard sciences, and I know next to nothing about philosophy, but am very interested in learning. Something that kind of gives an overview of the whole field I think would be best.
lmarinho 19 hours ago 0 replies      
Thanks for bringing this back to the surface. I see that SEP's design has greatly improved since I last visited it. The new look makes for a much more pleasant reading experience, right on par with the excellent content.
stupandaus 1 day ago 8 replies      
Anyone have recommendations for interesting entries?
ethnt 1 day ago 0 replies      
My philosophy of science teacher turned me onto this site last semester and it was really quite invaluable. The amount of thought and work that is put into each article is amazing, and it certainly helped me understand topics I would not have otherwise.
jessriedel 1 day ago 0 replies      
It's possible to find a few weak articles on the SEP, but by and large the quality is very high even with lots and lots of topics. If you want to learn about an unfamiliar philosophical concept, briefly scan the Wikipedia page and then head to the SEP.
dasmithii 1 day ago 0 replies      
Wow, I've been looking for something of this caliber for a long while. Thanks for sharing.
sebastianconcpt 1 day ago 0 replies      
Fantastic resource. Thanks for the heads up
Confusion 1 day ago 0 replies      
I'm subscribed to their RSS feed[1], which is a wonderful way to be regularly introduced to new interesting articles and subjects whenever one is updated. Alas, it results in a large backlog...

[1] http://plato.stanford.edu/rss/sep.xml

gojomo 1 day ago 1 reply      
An interesting aspect of this project is how it's funded.

Per a presentation I saw a while back, when the site sees visits from Universities that have an associated degree-granting program, or visits from traditional related libraries, it nags the users that their institution should join (if it hasn't already). It's a bit like Shareware/Nagware/Guiltware, applied to a live service, and only targeted at specific related learning institutions with existing acquisition budgets.

Here's the schedule of suggested dues:


..and benefits once joined...


LowDB A flat JSON file database
143 points by ca98am79  2 days ago   69 comments top 23
DogeDogeDoge 1 day ago 2 replies      
This has zero fault tolerance, and by zero I mean none. It doesn't even flush writes to disk; it has about the persistence of an in-memory hash, and it's totally unsafe under concurrent access. As a toy to show off, yeah, it could work, but at the end of the day... You can downvote me, but this is crap.
ThePhysicist 2 days ago 3 replies      
If you're looking for something similar for Python, check out BlitzDB:


It's a serverless, flat-file, document-oriented database which supports indexing and transactions and comes with a built-in ORM layer and a rich query language modeled after MongoDB.

andrewchoi 2 days ago 3 replies      
Unsure if OP is the maintainer, but a quick point: Benchmarks without any indication of underlying hardware don't convey a huge amount of information.
couchand 2 days ago 2 replies      
Please don't use this for anything significant. Shared global state and synchronous file I/O pretty much guarantee that it will blow up in any non-trivial use.
typicode 1 day ago 0 replies      
Hi everyone,

First, thanks for all the interest, it's quite sudden and unexpected.

Actually, LowDB is an extract from JSON Server, a mocking REST server based on plain JSON (https://github.com/typicode/json-server).

So, basically, it's not meant to be used in critical / intensive applications.

Instead, it's much more a new convenient way to store data in simple use cases.

Regarding file writing, if your database is small or if you don't run a cluster of Node processes, you should be fine.

Regarding the benchmark, as someone pointed out, it's mainly to show that storing to a JSON file is fast enough and to compare operation speeds. I agree that it says nothing about other databases, and LowDB doesn't try to be the fastest either, just fast enough. By the way, 'npm run benchmark' lets you run it on your machine.

However, keep in mind too that LowDB official release is quite recent so it should be improved over time.

Anyway, thanks for all the feedback and I hope you'll have fun with it :)

korzun 2 days ago 1 reply      
Benchmarking per 1000 iterations is pretty silly. My guess is that it will start to choke as soon as you have concurrency at a higher load.
jdp 2 days ago 0 replies      
Related is tiny[0], an in-process document store that supports Mongo-style queries, as well as a style similar to CouchDB views. You can dump its contents to a JSON file.

Also interesting is PouchDB[1], another document storage library. It can be used with Node or in the browser through various backends (like IndexedDB), and can even replicate to CouchDB.

[0]: https://github.com/chjj/tiny
[1]: http://pouchdb.com/

bikamonki 1 day ago 0 replies      
I do find a 'production' use case: I have a static site that uses Backbone to load a static json collection of pages. The json file itself is very small (less than 100k) and will not grow much with time. I built a small CMS to let me edit pages, the data for the CMS is dynamic (MongoDB), and I update the static json file after edit. I could use a CMS that uses LowDB instead to edit the json file directly. Keep in mind that this is one user making one or two edits per day at most. If you realize that the vast majority of small sites/blogs out there have similar requirements, then this tool makes sense. No?
Sir_Cmpwn 2 days ago 0 replies      
I have a hard time believing that this will be performant for anything of significance.
mkoryak 1 day ago 1 reply      
Here is another flat file db that I've been eyeing: https://github.com/sergeyksv/tingodb

upward compatible with mongodb.

I've also used nedb, which is ok but tends to delete the datastore when it runs out of disk space :P

scotty79 1 day ago 1 reply      
Do you know any database that keeps the data in multiline human readable text files (like json, yaml, or even nicely formatted xml) but also provides some robustness, concurrent access by multiple users and indices to search the data fast?

My use case would be to keep the data for a website in it and keep the data files themselves in a repository so they can be backed up there, monitored and possibly merged.
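I don't know of an off-the-shelf database like that, but the indexing half of the idea is cheap to sketch (my own toy example; the records and field names are invented): keep the on-disk form as pretty-printed JSON so it diffs and merges well in a repo, and rebuild an in-memory index on load for fast lookups. Concurrency and robustness are left out here.

```javascript
// Sketch of the "readable on disk, indexed in memory" half of this idea.
const records = [
  { id: 1, slug: 'about', title: 'About us' },
  { id: 2, slug: 'contact', title: 'Contact' },
];

// On-disk form: pretty-printed JSON keeps one field per line, which
// diffs and merges far better under version control than a minified blob.
const onDisk = JSON.stringify(records, null, 2);

// On load, rebuild a field index so lookups don't scan every record.
function buildIndex(rows, field) {
  const idx = new Map();
  for (const row of rows) idx.set(row[field], row);
  return idx;
}

const bySlug = buildIndex(JSON.parse(onDisk), 'slug');
// bySlug.get('contact') -> { id: 2, slug: 'contact', title: 'Contact' }
```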

keyle 2 days ago 0 replies      
Also, I've used nedb and highly recommend it. It's fast and comprehensive.


kolodny 2 days ago 2 replies      
I always wondered in this situation:

    var topFiveSongs = low('songs')
      .where({published: true})
      .sortBy('views')
      .first(5)
      .value();
whether any db engines figure out, when the dataset is really large and the limit is really small (no idea what the cutoff would be), that instead of sorting and taking the first five, they could just look for the 5 largest/smallest. Anyone have any idea on this?
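For what it's worth, many query engines do this for ORDER BY ... LIMIT k, typically with a bounded heap. A toy one-pass version (my own sketch, not how LowDB evaluates the chain above):

```javascript
// One-pass top-k: keep a small sorted buffer of the k best rows seen so far.
// For n rows this is roughly O(n log k) instead of O(n log n) for a full
// sort, which pays off when k is tiny relative to n.
function topK(items, k, key) {
  const best = [];
  for (const item of items) {
    // Insert in descending order of key, then trim back to k entries.
    let i = best.findIndex(b => key(b) < key(item));
    if (i === -1) i = best.length;
    best.splice(i, 0, item);
    if (best.length > k) best.pop();
  }
  return best;
}

const songs = [{ views: 3 }, { views: 9 }, { views: 1 }, { views: 7 }];
const top2 = topK(songs, 2, s => s.views); // [{ views: 9 }, { views: 7 }]
```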

eim_kasp 1 day ago 0 replies      
Interesting concept; used the right way it can save you a number of queries and add more speed to your application. It could also be a great fit where you only need reasonable amounts of data saved, like contests or prelaunch signups, etc.
chippy 2 days ago 1 reply      
Reminds me of the Ruby Standard Library Hash based datastore PStore: http://ruby-doc.org/stdlib-1.9.2/libdoc/pstore/rdoc/PStore.h... ... but without the query wrappers. Nice to see basic things done well.
josteink 1 day ago 0 replies      
Sounds about as useful as the "XML-file only" databases some people decided was the smartest thing ever at the turn of the century.

You don't see those around anymore.

jonny_eh 2 days ago 0 replies      
Are all operations synchronous? If so, this kind of makes it a no-go for most uses with NodeJS.
misiti3780 2 days ago 2 replies      
Serious question - why should I use this over MongoDB or any other NOSQL solution?
econic 2 days ago 1 reply      
Anyone know if there is something similiar to this for Go?
krazykringle 1 day ago 0 replies      
Does it need to suck the entire database into memory to answer a query?
Guillaume86 2 days ago 0 replies      
Nice, I use nedb in one of my projects, always good to have alternatives.
kpennell 2 days ago 1 reply      
Is this similar to firebase?
derpalittlederp 2 days ago 0 replies      
JSON files are my favorite NoSQL :D
       cached 24 August 2014 04:11:01 GMT