hacker news with inline top comments (29 Jul 2014)
1
Google Startup Launch
67 points by ishener  2 hours ago   5 comments top 3
1
jqueryin 20 minutes ago 1 reply      
It's tough to say what their intent is regarding this program. My initial inclination is that they have multiple possible motives for providing this service:

1. To bring more awareness to their developer services and offerings in the small business sector. They've recently stated that SMBs are of huge interest to them. For example, check out "Google My Business", a recent play of theirs:

http://www.google.com/business/

2. To potentially fuel early stage acquisitions and/or funding opportunities via Google Ventures. What a wise play to see how startups are using your offerings to vet them before acqui-hiring or offering funding.

What are everyone else's feelings on this play?

2
bfwi 8 minutes ago 0 replies      
How much of Google's cloud stack are you required to use to apply for this program? If they require you to use GAE for your backend for example, I think it's going to be a dealbreaker for a lot of startups.
3
Killah911 47 minutes ago 1 reply      
Not quite clear on this program. Is it more of a BizSpark-type thing from Google? Are they planning to fund some early-stage startups?
2
Using SSL Certificates with HAProxy 1.5
15 points by fideloper  29 minutes ago   2 comments top
1
fideloper 29 minutes ago 1 reply      
After the last edition on using HAProxy 1.5, many asked about using it with SSL certificates, so I've written up the process on both SSL Termination and SSL Pass-Through.
3
The seL4 microkernel
102 points by gioele  6 hours ago   19 comments top 4
1
userbinator 15 minutes ago 0 replies      
An OS that is proved to be essentially perfectly secure? For some reason this brings up thoughts of trusted computing, and not in a good way...
2
gioele 2 hours ago 0 replies      
Please note that the formally proven model has also been implemented in Haskell, not only in C.

Formally-proven-correct code + Haskell + literate programming = https://github.com/seL4/seL4/blob/master/haskell/src/SEL4/Ke...

3
tdicola 4 hours ago 2 replies      
From the supported platforms it looks like it should work on a BeagleBone Black (arm, am335x, armv7-a, cortex-a8)--anyone been brave enough to try?
4
Intermernet 6 hours ago 3 replies      
seL4 is "The world's first operating-system kernel with an end-to-end proof of implementation correctness and security enforcement". It was originally developed (as far as I know) at UNSW and further developed at NICTA.

The original project home page is at http://ssrg.nicta.com.au/projects/seL4/ and the new page is at http://sel4.systems/

Downloads at http://ssrg.nicta.com.au/software/TS/seL4/

4
How to take over the computer of a Maven Central user
349 points by akerl_  14 hours ago   109 comments top 33
1
moxie 14 hours ago 3 replies      
At Open Whisper Systems, we wrote a small open source gradle plugin called "gradle-witness" for this reason. Not just because dependencies could be transported over an insecure channel, but also because dependencies could be compromised if the gradle/maven repository were compromised:

https://github.com/whispersystems/gradle-witness

It allows you to "pin" dependencies by specifying the sha256sum of the jar you're expecting.
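
For anyone curious what that looks like in practice, pinning with gradle-witness goes in build.gradle roughly like this (a sketch only; check the project README for the exact syntax, and the hash below is just a placeholder):

    apply plugin: 'witness'

    dependencyVerification {
      verify = [
        // 'group:artifact:sha256-of-the-jar-you-expect'
        'com.squareup.okhttp:okhttp:0000000000000000000000000000000000000000000000000000000000000000',
      ]
    }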

2
heavenlyhash 8 hours ago 1 reply      
SSL would have partially mitigated this attack, but it's not a full solution either. SSL is transport layer security -- you still fully trust the remote server not to give you cat memes. What if this wasn't necessary? Why can't we embed the hash of the dependencies we need in our projects directly? That would give us end-to-end confidence that we've got the right stuff.

This is exactly why I built mdm[1]: it's a dependency manager that's immune to cat memes getting in ur http.

Anyone using a system like git submodules to track source dependencies is immune to this entire category of attack. mdm does the same thing, plus works for binary payloads.

Build injection attacks have been known for a while now. There's actually a great publication by Fortify[2] where they even gave it a name: XBI, for Cross Build Injection attack. Among the high-profile targets even several years ago (the report is from 2007): Sendmail, IRSSI, and OpenSSH! It's great to see more attention to these issues, and practical implementations to double-underline both the seriousness of the threat and the ease of carrying out the attack.

Related note: signatures are good too, but still actually less useful than embedding the hash of the desired content. Signing keys can be captured; revocations require infrastructure and online verification to be useful. Embedding hashes in your version control can give all the integrity guarantees needed, without any of the fuss -- you should just verify the signature at the time you first commit a link to a dependency.

[1] https://github.com/polydawn/mdm/

[2] https://www.fortify.com/downloads2/public/fortify_attacking_...

3
technomancy 13 hours ago 1 reply      
For Leiningen at least the goal is eventually to be able to flip a switch that will make it refuse to operate in the presence of unsigned dependencies. We're still a ways away from that becoming a reality, but the default is already to refuse to deploy new libraries without an accompanying signature.

Edit: of course, the question of how to determine which keys to trust is still pretty difficult, especially in the larger Java world. The community of Clojure authors is still small enough that a web of trust covering a majority of authors could be established face-to-face at conferences.

The situation around Central is quite regrettable though.

4
akerl_ 13 hours ago 0 replies      
It's not specifically named in the article, but the software shown with the firewall popup is Little Snitch, and it's great:

http://www.obdev.at/products/littlesnitch/index.html

5
femto113 13 hours ago 0 replies      
Perhaps as a stopgap Maven Central (or a concerned third party?) could publish all of the SHA1 hashes on a page that is served via HTTPS. This would at least allow tools to detect the sort of attack described in the article.
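
A tool could then check a downloaded jar against such a page with something as simple as the following (a rough shell sketch; the URL and file name are made up):

    expected=$(curl -s https://hashes.example.org/commons-io-2.4.jar.sha1)
    echo "$expected  commons-io-2.4.jar" | sha1sum -c -
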
6
jontro 13 hours ago 1 reply      
This is a horrible policy by Sonatype. A better alternative to Maven Central should be created...
7
brianefox 10 hours ago 3 replies      
The project to offer ssl free to every user of Maven Central is already underway. Stay tuned for details.
8
GaryRowe 3 hours ago 0 replies      
I wrote a Maven plugin to avoid this.

It's available under MIT licence: https://github.com/gary-rowe/BitcoinjEnforcerRules

9
MrSourz 13 hours ago 1 reply      
I'm torn on how I feel about security being a paid feature in this case. Here the onus is being placed on the user, yet many won't be conscious of the choice they're making.

The tiff mentioned in the article was interesting to read: https://twitter.com/mveytsman/status/491298846673473536

10
passive 2 hours ago 0 replies      
For those of you in the Python world concerned about such a thing, check out Peep: https://pypi.python.org/pypi/peep

It's a pip wrapper that expects you to provide hashes for your dependencies in requirements.txt.

There was a lightning talk at PyCon this year, and it seems super easy to use (though admittedly I'm not using it regularly yet).
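
For reference, a peep-style requirements.txt pairs each requirement with a hash comment, roughly like this (a sketch; the hash is a placeholder, and the exact format is described in peep's docs), and you then install with something like `peep install -r requirements.txt`:

    # sha256: 0000000000000000000000000000000000000000000
    requests==2.3.0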

11
avz 13 hours ago 1 reply      
Exposing your users to MITM attacks in order to encourage donations? Pure evil.
12
jimrandomh 9 hours ago 2 replies      
My main experience with Maven has been downloading some source code, and having to use Gradle to compile it. It went and downloaded a bunch of binaries, insecurely. There were no actual unsatisfied dependencies; it was just downloading pieces of Gradle itself.

I would've much rather had a Makefile. Build scripts and package managers need to be separate.

13
tensor 13 hours ago 1 reply      
The biggest problem with this policy is that new users, or even experienced ones, are likely not aware of it. This is a very serious problem that should be addressed quickly.

edit: and with websites everywhere routinely providing SSL, it seems crazy that it has to be a paid feature for such a critical service.

14
finnn 14 hours ago 0 replies      
Evilgrade (https://github.com/infobyte/evilgrade) is a similar tool that works on a wider variety of insecure updaters. Perhaps a module could be written? Maybe one already exists; I haven't played with it in a while.
15
jc4p 13 hours ago 2 replies      
jCenter is the new default repository used with Android's gradle plugin, I haven't used it myself yet but it looks like the site defaults to HTTPS for everything: https://bintray.com/bintray/jcenter
16
clarkm 13 hours ago 3 replies      
So in principle, it's doing the same thing as:

    $ curl http://get.example.io | sh
which we all know is bad. But in this case, it's hidden deep enough that most people don't even know it's happening.

17
SanderMak 6 hours ago 1 reply      
The vulnerability even has a name: Cross-build injection attacks. I wrote about it some time ago [1], [2]. The complete answer includes verifying the (now mandatory) PGP signatures [3] of artifacts in Maven Central. But you need a web-of-trust for that and the whole process is rather impractical currently.

[1] http://branchandbound.net/blog/security/2012/03/crossbuild-i...

[2] http://branchandbound.net/blog/security/2012/10/cross-build-...

[3] http://branchandbound.net/blog/security/2012/08/verify-depen...

18
chetanahuja 10 hours ago 6 replies      
If I understand this correctly, Maven-based builds can contain dependencies on libraries hosted on remote servers. The golang build system has (or had) something similar too. Witnessing this trend take hold is astonishing and horrifying in equal parts. Not just as a security problem (which is clearly obvious) but also as a huge hole in software engineering practices. How can anyone run a production build where parts of your build are being downloaded from untrusted third-party sources in real time? How do you ensure repeatable, reliable builds? How do you debug production issues with limited knowledge of what versions of the various libraries are actually running in production?
19
pjlegato 12 hours ago 2 replies      
All of Maven central is only 180gb, according to https://maven.apache.org/guides/mini/guide-mirror-settings.h...

How hard would it be to just mirror it to S3 and use it from there via HTTPS?

20
jnbiche 12 hours ago 1 reply      
npm has the same problem of sending packages over http, but it's even worse since on average each node package uses about a billion other packages and because injecting malicious code in JavaScript is incredibly easy.

And to be clear, just http here is not the issue. It's http combined with lack of package signing. apt runs over http, but it's a pretty secure system because of its efficient package signing. Package signing is even better than https alone since it prevents both MITM attacks and compromise of the apt repository.

In fact, apt and yum were pretty ahead of their time with package signing. It's a shame others haven't followed their path.

21
sitkack 13 hours ago 0 replies      
luarocks has the same problem. You don't need SSL, you need the packages to be signed.
22
0x0 12 hours ago 1 reply      
I wonder how many enterprise apps have been backdoored through this flaw over the years by now.
23
dandelany 12 hours ago 0 replies      
Sorry to nitpick, but you might wanna fix this typo: s/pubic/public :)
24
buckey 6 hours ago 0 replies      
Yeah, have you ever written code on the Play platform? There is your proof of concept, at least on earlier versions: static injection using annotations ... It's also how Spring works, and almost all dynamic ... Hell, you can JIT your code, you don't even need to compile it into a class, the runtime can do it for you ... That's why I always compile my jar files so they can't be read as a compressed file. Anyway, pretty cool, sounds like you could have a lot of fun with someone doing this ... You could turn their computer into anything you want using Java command-line functionality ... i.e. System.getProperty("os.name"), if Windows do this, if OS X do this, if Linux do this, using java.lang.Runtime.exec; then after you open the back door to their computer it's time for socket connections and getOutputStream etc. ... Anyway, point being, Java is a cross-platform language so there is a world of possibilities, and most of the time they are running this from an IDE, so if you inject a sudo call who knows what could happen.
25
akerl_ 13 hours ago 0 replies      
Thanks to whoever changed the title; I didn't like the original title, but couldn't come up with a better, accurate one.
26
wernerb 14 hours ago 0 replies      
This is indeed a problem that needs to be addressed at some point. The MITM possibility has been mentioned before at SE http://stackoverflow.com/questions/7094035/how-secure-is-usi...
27
fiatmoney 11 hours ago 1 reply      
So if they need some money, what is a better revenue model for them?

- charge some token amount of money to projects (harms the ecosystem, probably not a good idea)

- charge some amount for projects to host old versions, or for users to access old versions (same idea as the first, just less so)

- charge for access to source jars

- paid javadoc hosting

- rate-limiting for free users (the "file locker" model; particularly effective at convincing people sharing an office IP into paying up)

Any others?

28
joncp 8 hours ago 0 replies      
The problem goes deeper. That firewall (Little Snitch) updates itself over port 80, so most likely unencrypted.
29
qwerta 6 hours ago 0 replies      
We store jars in git repo...
30
iancarroll 12 hours ago 1 reply      
What firewall is that? Looks nice.
31
jgalt212 11 hours ago 2 replies      
> When can this happen? If you ever use a public wifi network in a coffee shop

Just don't do this. There is no such thing as a free lunch (or wifi).

32
abalone 12 hours ago 2 replies      
I understand the need to raise money for projects, but the attitude[1] that security is an optional "premium" feature needs to end.

It should be no different from shipping broken code. You can't just say, "oh, well we offer a premium build that actually works, for users that want that." Everybody needs it.

Evernote made this mistake initially, when SSL was a premium feature. They fixed it.

Granted, there are degrees of security but protection from MITM attacks is fundamental. (Especially for executable code!)

[1] https://twitter.com/mveytsman/status/491298846673473536

UPDATE: @weekstweets just deleted the tweet I was referencing where he described security as a premium feature "for users who desire it" or words to that effect.

33
foo-licious 10 hours ago 0 replies      
It's Java, who cares?
5
Terraform
537 points by pandemicsyn  18 hours ago   102 comments top 36
1
mike-cardwell 1 hour ago 0 replies      
How does it handle failures? I.e. in the first example it creates a server and then creates a DNS record. What if the DNS record creation fails? Does it roll back everything (i.e. destroy the server)? I'd probably want a system that automatically retried x times before rolling back for some situations. In other situations I'd probably want it to not roll back, or only roll back some of the tasks. How flexible is it?
2
616c 17 hours ago 4 replies      
Mitchell, do you even sleep? Every time I see one of your tools, I feel like I need to pay you a hefty sum to teach me how to start coding productively, cuz Hashicorp output seems ferocious. Keep up the good work.
3
diggan 18 hours ago 1 reply      
Some links worth reading:

Homepage - http://www.terraform.io/

Introduction - http://www.terraform.io/intro/index.html

Documentation - http://www.terraform.io/docs/index.html

Sourcecode - https://github.com/hashicorp/terraform

-

Seems usable and I'm excited to try it out. I like the idea of "execution plans" and the declarative way of setting up the architecture.

4
cwp 17 hours ago 2 replies      
How does this compare with Nixops or Disnix?

It sounds like it would be possible to plug nix-based provisioning into Terraform, and use it to manage the high-level cluster structure.

Edit: downvotes? whatever for?

5
mahmoudimus 17 hours ago 2 replies      
One of the main strengths of something like CloudFormation is that we can use libraries in languages we're comfortable with to build a programmable DSL.

A great tool, at least for Python, that exposes this is: https://github.com/cloudtools/troposphere.

This gives me the full power of python, so I can build abstractions, use inheritance and encapsulation to specialize certain things.

We've done a lot of work to automate our infrastructure provisioning, but I'm interested in the abstraction layer Terraform provides -- especially for multiple providers.

How can we bridge the gap Terraform leaves compared to having a fully complete programming language to define infrastructure (which has downsides but, in my opinion, more upsides)?

6
dkarapetyan 17 hours ago 0 replies      
I use HashiCorp tools and I recommend them wherever I go. The reason I do that is because the tools are built with very specific use cases and are grounded in actual practices and backed by solid theory. None of their tools are something that was hacked up over the weekend. Looking forward to Terraform taking over the provisioning/deployment landscape.
7
akoumjian 16 hours ago 1 reply      
Salt Cloud provides most of these features. If this kind of thing interests you, you should check it out: http://docs.saltstack.com/en/latest/topics/cloud/
8
wernerb 18 hours ago 1 reply      
I have been developing a tool that is almost the same, called 'ozone.io'. It leverages CMT tools such as Puppet, Ansible, Chef. Not by writing plugins, but rather by having users write or extend scripts called 'runners' that install and execute the CMT tool per node. You can check out a prototype chef-solo runner at https://github.com/ozone-io/runner-chef-solo.

Parallel deployment of multiple clusters is also covered. It too is handled by a directed acyclic graph based on dependencies on other clusters. I am on my own and I am writing it for my thesis which will come out pretty soon.

It is created as an engine that expects cluster state. A sample input file, which is the only state you need to launch something, can be seen here: https://gist.github.com/wernerb/35a06e08a4d4e6cb02aa

The whole thing works declaratively, so it converges your infrastructure to the desired state. By increasing the nodes for 'smallweb' it will undergo the steps defined in the cluster lifecycle. It will then also update the configuration of the nginx load balancer.

As you can see each cluster is pinned to a provider/instanceprofile, and one of the things I am adding are affinity rules so the cluster deploys to multiple locations/providers.

It is not ready to be open-sourced, but if anyone wants to see it, contribute, or see more, I can give view access.

What do you think?

9
rsync 14 hours ago 0 replies      
I cannot wait to dive into this. We (rsync.net) will absolutely make our storage a usable component in terraform.
10
mongrol 13 hours ago 1 reply      
How does this compare to Ansible? It appears to operate in the same space/level.
11
courtf 17 hours ago 2 replies      
Just stumbled across this project with the same name, and at least some of the same goals: https://github.com/UrbanCode/terraform
12
mihok 16 hours ago 1 reply      
This is pretty awesome. My question is how well does this integrate with an already set up infrastructure? Or would I have to recreate the system to get going with Terraform?
13
cel1ne 3 hours ago 0 replies      
Computer science: Better living through ever higher stacks of abstraction!
14
fideloper 18 hours ago 0 replies      
Feels like a big meta tool!

It reads like you can use any provisioning software, any supported server provider, and any supported DNS provider; you just need to write a bunch of configuration.

I think I like this, but it sounds complex :D
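
To give a feel for what that configuration looks like, here is a rough sketch loosely based on the announcement's DigitalOcean-plus-DNSimple example (attribute names may differ from the real providers):

    resource "digitalocean_droplet" "web" {
        name   = "web-1"
        image  = "ubuntu-14-04-x64"
        region = "nyc2"
        size   = "512mb"
    }

    # the droplet's IP feeds the DNS record -- the cross-provider composition being described
    resource "dnsimple_record" "web" {
        domain = "example.com"
        name   = "web"
        value  = "${digitalocean_droplet.web.ipv4_address}"
        type   = "A"
    }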

15
lkrubner 15 hours ago 1 reply      
Off-topic: it is interesting to me how different companies seem to dominate a space for a few years, and then recede. It's a common pattern. I can remember in 2009 when it seemed like RightScale www.rightscale.com was the dominant force creating tools to take advantage of AWS, but nowadays I never hear of them, never see anything interesting come from them. All the interesting stuff is happening elsewhere.
16
onedognight 18 hours ago 1 reply      
> The ~ before the Droplet means that Terraform will update the resource in-place, instead of destroying or recreating. Terraform is the first tool to have this feature of the tools which can be considered similar to Terraform

While not multi-platform, AWS's CloudFormation does just this: it takes as its input a stateless JSON description of a set of AWS resources and their dependencies. Given a change in the desired state, it will do its best to update resources rather than creating them from scratch when possible.

17
clarkdave 13 hours ago 1 reply      
Thanks everyone at Hashicorp! This tool looks awesome. I wish it had been around years ago so I might have a nice version-controlled set of configuration files instead of a bunch of wiki articles and post-it notes ;)

I have a quick question I didn't see covered in the docs. Is there a best-practice way to organise Terraform configuration files? Specifically when using it to manage different environments (e.g. staging, prod, qa). I'm thinking of some sort of folder structure like this:

  /
    production/
      web.tf
    staging/
      web.tf
    qa/
      web.tf
      test-servers.tf
So, `terraform apply production` would then plan and apply changes for production servers, `terraform apply staging` the same for staging, etc.

Would be interested to know if you have any thoughts on this, or if there's some sort of paradigm you folks are using internally.

18
sciurus 17 hours ago 1 reply      
It looks like Terraform is launching with decent coverage of AWS resources. Thinking of my own usage, the main ones missing are ElastiCache and CloudWatch. I'm not sure how you can setup a useful autoscaling group without the latter.

EDIT: There's an issue tracking adding more at https://github.com/hashicorp/terraform/issues/28

19
jscheel 7 hours ago 0 replies      
This would be great for us. Various parts of our stack are spread around so many different platforms, and this could really take the grunt work out of that. Not to mention removing the need of dealing with fifteen various shoddy interfaces. Heck, AWS isn't even consistent with itself (just check out OpsWorks vs Route53).
20
errordeveloper 13 hours ago 1 reply      
On the page about integration with Consul [1], I read "Terraform can update the application's configuration directly by setting the ELB address into Consul." The question is whether I can go somewhat the other way around, i.e. get information from Consul and point the ELB to it, somewhat like Synapse or SmartStack... Or maybe I don't need a service discovery tool for this yet and can just use TF without Consul, simply configuring the components of the infrastructure and the ELB? The point is just to simplify the first step and avoid adding logic to support Consul lookups in the apps... What's the easiest way here?

[1]: http://www.terraform.io/intro/examples/consul.html

21
ogig 16 hours ago 0 replies      
Offtopic, but I guess you Hashicorp guys would like to know: there is a typo in the geometric animation. It says "Build, Combine, and Launch Infrastucture_", should be Infrastructure.
22
cdnsteve 14 hours ago 1 reply      
Considering Packer is written in Go, can you shed some light on this platform, what languages did you decide to use?
23
devcamcar 14 hours ago 1 reply      
How would you compare Terraform to something like Razor? I think it might be a good one to add to your "vs Other Software" section:

http://puppetlabs.com/solutions/next-generation-provisioning

24
jimmcslim 16 hours ago 1 reply      
Is there a way to encrypt variables and provide a password to decrypt when executing a plan, so that I can commit my API keys, passwords, etc. to source control without fear? I'm thinking of something similar to Ansible and its 'vault' concept for variables (I'm sure Chef, Puppet, etc. have something similar).
25
cetra3 11 hours ago 0 replies      
Would this be a good fit, or are there any plans to include providers for hypervisors (VMware, VirtualBox, Xen, etc.) or even containers (e.g. Docker)?
26
arasmussen 10 hours ago 0 replies      
Based solely on the name and the logo, I was really hoping this was going to be an awesome game :P
27
lukebond 14 hours ago 1 reply      
Anyone know if an API is planned? If I want to manage infrastructure from code I would love Terraform to be an option.

As a (predominantly) Node.js developer, I'd probably use pkgcloud for this sort of thing. Terraform supports a great range of providers and has some more advanced features, so I'd love to play with it as an alternative to pkgcloud.

28
_random_ 15 hours ago 0 replies      
"Terraform" - seriously? Making world a better place through constructing elegant hierarchies for maximum code reuse and extensibility?
29
joeyspn 16 hours ago 0 replies      
I can't keep up with their pace! Another amazing tool from Mitchell and his crew. Hashicorp is well on its way to becoming a DevOps juggernaut...
30
eudoxus 12 hours ago 0 replies      
Does Terraform have any service-failure-related features, i.e. what happens if an instance fails?
31
earless1 17 hours ago 0 replies      
This looks like a great tool. I was going to use CloudFormation to setup a new VPC in AWS, but I will give this a shot instead.
32
jscott0918 15 hours ago 1 reply      
What is the licensing on the source code?
33
tvon 17 hours ago 2 replies      
FWIW, I find the purple and blue here a bit painful to read:

https://www.dropbox.com/s/0ki0m7967x5tvn6/Screenshot%202014-...

34
bfish510 13 hours ago 0 replies      
My one gripe is the font choice on your homepage. Makes it very annoying to read.
35
peterwwillis 13 hours ago 0 replies      
tl;dr: Terraform is modular virtual infrastructure automation. [I would say it's an orchestration tool, but that usually implies datacenter-wide resources, and this just seems to apply to cloud service providers]

"[..] Terraform combines resources from multiple services providers: we created a DigitalOcean Droplet and then used the IP address of that droplet to add a DNS record to DNSimple. This sort of infrastructure composition from code is new and extremely powerful."

Well, "new" in the sense of "we created another thing to automate infrastructure deployment and configuration". I have worked with various amalgamated solutions that do this for the past 12 years. Of course they mention that in the software comparison section, but it doesn't take away from the fact that this isn't new by a long shot.

"Terraform has a feature that is critical for safely iterating infrastructure: execution plans. Execution plans show you what changes Terraform plans on making to your infrastructure. [..] As a result, you know exactly what Terraform will do to your infrastructure to reach your desired state, and you can feel confident that Terraform won't surprise you in unexpected ways."

So it's declarative, and it has a dry-run mode.

The thing that really bugs me is the idea that you should be creating "code" to do rote tasks such as changing resources or deploying things. You know what the single most problematic thing about infrastructure changes is? Human error. It's a simple fact of user interface design that humans are less likely to fuck up a point-and-click interface than a command line program that you have to feed a hand-edited config to. And automated config generation can arguably be more error-prone.

Automation/orchestration should not simply make things happen automatically. It should make things work more reliably, and require less expertise to do so. To be frank, any code monkey with a few weeks of free time to kill can create a tool that does exactly what this one does, and that's why there are dozens of them that all do the same thing, yet we always need a new one.... because they all stink at actually making things work better.

This comic isn't just funny, it's a truism: https://xkcd.com/1319/

36
pdenya 18 hours ago 3 replies      
This is a devops tool named Terraform, nothing to do with terraforming.
6
Total darkness at night key to success of breast cancer therapy
81 points by wslh  9 hours ago   40 comments top 9
1
Houshalter 42 minutes ago 0 replies      
From wikipedia:

>Production of melatonin by the pineal gland is inhibited by light to the retina and permitted by darkness. Its onset each evening is called the dim-light melatonin onset (DLMO).

It is principally blue light, around 460 to 480 nm, that suppresses melatonin, proportional to the light intensity and length of exposure. Until recent history, humans in temperate climates were exposed to few hours of (blue) daylight in the winter; their fires gave predominantly yellow light. The incandescent light bulb widely used in the twentieth century produced relatively little blue light. Wearing glasses that block blue light in the hours before bedtime may decrease melatonin loss. Kayumov et al. showed that light containing only wavelengths greater than 530 nm does not suppress melatonin in bright-light conditions. Use of blue-blocking goggles the last hours before bedtime has also been advised for people who need to adjust to an earlier bedtime, as melatonin promotes sleepiness.

When used several hours before sleep according to the phase response curve for melatonin in humans, small amounts (0.3 mg) of melatonin shift the circadian clock earlier, thus promoting earlier sleep onset and morning awakening.

https://en.wikipedia.org/wiki/Melatonin#Light_dependence

2
Aardwolf 16 minutes ago 0 replies      
So many questions...

Does total darkness also exclude dim lights like LEDs? So should you avoid any electronics like a clock radio, phone notification LED, ...?

Is having your eyes closed not enough to block out the light?

If it's about light on the skin: does lying under a blanket block it?

3
tomjen3 5 hours ago 1 reply      
We already knew that working nights gives a much higher breast cancer rate for women, but is there any evidence that it is the same for other cancers?
4
vanderZwan 5 hours ago 1 reply      
So now they relied on a lack of light to increase melatonin levels, but did they have a control group that was simply fed/injected with melatonin? Because if there is a difference between the groups, then it becomes even more interesting: what else does the body do in these circumstances of total darkness?

Of course, in general we need better sleep hygiene than what we do to ourselves now, and darkness at night would be one way. But for many it's not easy to just switch to a regime like that.

5
x0x0 6 hours ago 4 replies      
wow, that's amazing if substantiated

I wonder what else we're disrupting with high levels of light. My crappy ass landlord put a giant fucking light right outside my bedroom. The problem is if I have blackout blinds to keep it dark at night, I won't wake up in the morning.

6
rosser 5 hours ago 2 replies      
I can see the headline now: "Scientists Say Darkness Cures Cancer"...

EDIT: Apparently, the fact that I'm mocking the kind of "science reporting" that would turn this result into that headline wasn't screamingly obvious. I'm surprised; it's a phenomenon I've seen discussed often enough here.

7
guard-of-terra 4 hours ago 2 replies      
What's the plan for people who live in the north and have a degree of polar day? As in it's not ever getting dark.
8
joeyspn 5 hours ago 1 reply      
I have sleep problems and sometimes I take melatonin (4gr) at night, 30 mins before bed. It is a harmless supplement and is easy to find and buy online or in pharmacies... it works wonders
9
carlob 4 hours ago 3 replies      
> however, during the 12 hour dark phase, animals were exposed to extremely dim light at night (melatonin levels are suppressed), roughly equivalent to faint light coming under a door."

If a faint light from under a door is able to suppress melatonin, does this mean utilities such as f.lux are completely useless?

7
Cascaded Displays: Spatiotemporal Superresolution using Offset Pixel Layers
74 points by chdir  8 hours ago   28 comments top 7
1
chdir 8 hours ago 2 replies      
Video showing its capabilities : http://www.youtube.com/watch?v=0XwaARRMbSA
2
birger 5 hours ago 1 reply      
If I understand correctly, the idea is that you get a high-resolution display by putting two low-resolution displays in front of each other?
3
oceanofsolaris 3 hours ago 1 reply      
Very interesting approach.

I think one interesting aspect of this is that it couples spatial as well as temporal interpolation. This means that you get a higher resolution as well as a higher framerate, but on the downside seems to introduce additional artifacts depending on how these two interpolations interact.

I have not yet read the technical paper and only watched the video without sound, but from this video it seems that moving sharp edges introduce additional artifacts (can be seen when looking at the features of the houses in peripheral vision at 5:11 in the video). This is what you would roughly expect to happen if both pixel grids try to display a sharp edge, but due to their staggered update, one of these two edges is always at a wrong position.

This problem could probably be somewhat alleviated through an algorithm that has some knowledge about the next frames, but this would introduce additional lag (bad for interactive content, horrible for virtual reality, not so bad for video).

I intend to read the paper later, but can anyone who already read it comment on whether they already need knowledge about the next frame or half-frame for the shown examples?

4
blencdr 6 hours ago 2 replies      
I have difficulties understanding the mechanism of this supersampling (2 successive images to make one?). Can anyone explain this in a simple way?
5
npinguy 4 hours ago 0 replies      
I would really like to see some data on the memory savings using this technique. How significant are they?
6
higherpurpose 4 hours ago 2 replies      
Unfortunately this will be yet another proprietary technology from Nvidia that nobody else will use - which means it won't have mass adoption - which means it's ultimately pointless (unless someone else creates an open source version of it).
7
ksec 4 hours ago 2 replies      
What is the real use case for this? Gaming and VR?

We have no problem making 4K screens, and hardware isn't bound by it either.

8
HTML5 Drag and Drop considered harmful (2009)
12 points by striking  1 hour ago   14 comments top 4
1
dm2 14 minutes ago 0 replies      
@striking: Does this work for you?

http://www.html5rocks.com/en/tutorials/dnd/basics/

http://html5demos.com/drag

I don't understand why this is an issue, there are tons of great tutorials: https://www.google.com/search?q=HTML5+drag-and-drop

Step 1 to being a web developer, search Google first. More than likely anything you need done has been done 10,000 times and there are at least 100 tutorials.

Don't be afraid to use libraries such as jQuery, which HTML5 drag-and-drop doesn't require, but I see that you mentioned avoiding it in another comment. Libraries many times also have the advantage of increasing browser support transparently, which is generally a good thing.

Worried about loading time with libraries? Use a CDN such as http://cdnjs.com/, most users will have the popular ones already cached in their browser.

Use StackOverflow if you have questions, that's what it's there for. Hacker News is for... news.

2
couchand 24 minutes ago 0 replies      
Just built a pretty significant business application with a UI based on HTML5 drag and drop. There were certainly a few quirks, but it was nothing compared to web development in "the old days". Without any library the whole DnD song and dance is implemented as part of a React component that's only about 100 lines of JavaScript.

There are two places likely to trip you up. One of them is as mentioned in the original article: it's hard to remember which events need to have their default prevented. At the moment we're calling `preventDefault` in `onDragOver` which is required to identify the element as a drop target, and `stopPropagation` in `onDrop` which I believe is to prevent many browsers' default attempts to navigate to a dropped url-like thing.

The other potential pitfall is that, for security reasons, the data can only be read in `onDrop`, not in `onDragOver`. If you have a string id you don't mind being downcased you can just use the hack of sending that through the data transfer types (eg. as type "id/12345"), with a "text/plain" fallback.
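
A bare-bones version of those handlers, outside of any framework, looks roughly like this (el, target and item.id are placeholders, not the commenter's actual code):

    // make an element draggable and carry an id as the payload
    el.addEventListener('dragstart', function (e) {
      e.dataTransfer.setData('text/plain', item.id);
    });

    // preventDefault here is what marks the element as a valid drop target
    target.addEventListener('dragover', function (e) {
      e.preventDefault();
    });

    target.addEventListener('drop', function (e) {
      e.preventDefault();
      e.stopPropagation();  // keep the browser from navigating to a dropped URL-like thing
      var id = e.dataTransfer.getData('text/plain');  // readable here, not in dragover
      // ... handle the dropped id ...
    });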

3
moron4hire 27 minutes ago 1 reply      
I've seen this naming pattern a couple of times in articles submitted to HN. The "... considered harmful" pattern starts with Dijkstra, "GOTO statement considered harmful"[1]. It doesn't mean "such-and-such feature is implemented incorrectly." It means, "such-and-such feature, even when implemented correctly, gives programmers at large the wrong notions."

So, one would say "singletons considered harmful". The entire notion of singletons is wrong. But the entire notion of drag/drop events is not wrong. Just the implementation.

[1] It's interesting to read some of the counter-arguments, and reflect on programming language design arguments we have today: http://web.archive.org/web/20090320002214/http://www.ecn.pur..., and Dijkstra's own commentary on the whole ordeal is--as usual--entertaining http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1009.PDF

4
onion2k 1 hour ago 2 replies      
This is from 5 years ago.
9
Bitpost private decentralized messaging
37 points by draegtun  4 hours ago   22 comments top 7
1
flixic 1 hour ago 1 reply      
The text is super-hard to read, field spacing needlessly big: http://cl.ly/Wnhs

If the whole idea of this is to have good UI over BitMessage, it's not that good of a UI.

2
integricho 3 hours ago 1 reply      
I automatically open all posts where I see the words "distributed", "decentralized", "fault-tolerant", and similar catchy ones, and now I'm feeling like I'm a victim of clever marketing.
3
Maran 2 hours ago 1 reply      
Not sure if the poster is also the author but why is the application connecting to a server? Is it using a centralised point to connect to the Bitmessage network?

Edit: It also appears to connect to Tor. Perhaps a quick overview of what's happening under the hood could be useful.

4
johnchristopher 2 hours ago 0 replies      
It requires Mac OS X 10.9. Really frustrating not being able to try out the app (it seems to have a better interface) while Bitmessage runs on 10.6.8.

I suppose it uses bitmessage as a daemon and the app is a UX layer over it.

5
motters 2 hours ago 0 replies      
I'm also working on database encryption for Bitmessage, such that the data at rest always remains encrypted, regardless of whether the underlying file system is.
6
heliumcraft 3 hours ago 1 reply      
that's a great interface for bitmessage!
7
JetSpiegel 3 hours ago 2 replies      
> centralized services such as email

Stopped reading right there.

10
PVS-Studio: Checking Bitcoin
9 points by ProgC  1 hour ago   1 comment top
1
ikken 12 minutes ago 0 replies      
Since they're not sure if they found a security-related error or not, I would rather make a disclosure to the Bitcoin dev team first before going public with it.
11
Steel worker reveals blocking view of U.S. aircraft on day of atomic bombing
90 points by devchuk  9 hours ago   17 comments top 5
1
FatalLogic 5 hours ago 1 reply      
I wanted to know if this was plausible, so I located the Yawata Steel Works (the source of the smoke screen) and the Kokura Arsenal (the aiming point for the bomb) on a map. They were about 4.4km apart.

http://i.imgur.com/nJaJtH7.png (the site of the Steel Works is on the left)

Interesting, but I still don't know if it's true.

edit: Wikipedia's page on Smokescreens says "One 50 gallon drum of fog oil can obscure 60 miles (97 km) of land in 15 minutes", so I suppose it was easily possible for them to have hidden the target if the wind direction was right

Google Maps link: https://www.google.com/maps/place/%E4%B8%AD%E5%A4%96%E7%82%8...

2
spingsprong 6 hours ago 1 reply      
A couple of photos of what a smokescreen covering a city looks like http://449th.org/ploesti.php

It's for a German city, but it is the same idea.

3
programmer_dude 8 hours ago 1 reply      
How much coal tar would you need to burn to block an airplane's view?
4
outworlder 1 hour ago 2 replies      
Ok, so the target area was blocked. So what? This was not precision ammo, it was an A-bomb! Even those early designs could just be detonated in the general area and still destroy the target and everything else.

Why did the bombers have to switch targets then?

5
adventured 1 hour ago 1 reply      
It's interesting to read Wikipedia's entry on this matter (claiming it was caused by a huge firebombing raid):

http://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_an...

"The delay at the rendezvous had resulted in clouds and drifting smoke from fires started by a major firebombing raid by 224 B-29s on nearby Yahata the previous day covering 70% of the area over Kokura, obscuring the aiming point."

12
To Stop Cheating, Nuclear Officers Ditch the Grades
50 points by sizzle  8 hours ago   25 comments top 5
1
hliyan 5 hours ago 4 replies      
In other words, you get what you measure. This is one of the reasons I discourage measuring team performance in terms of bug counts or lines-of-code. The former leads to what some call "issue-tennis" and informal fixes, and the latter leads to code bloat. After years of trying, I've finally concluded that "key performance indicators" attached to individuals or teams (as opposed to artifacts and periods) usually lead to more trouble than they're worth.
2
dan_bk 5 hours ago 0 replies      
John Oliver (Last Week Tonight) has a good round-up: http://www.youtube.com/watch?v=1Y1ya-yF35g
3
thisjepisje 5 hours ago 1 reply      
Classical example of Goodhart's law:

http://en.wikipedia.org/wiki/Goodhart%27s_law

4
mkesper 5 hours ago 0 replies      
The world could sleep better if those fatal missiles worldwide had a delay of 24 hours, at least.
5
jedberg 5 hours ago 1 reply      
So War Games was right then... We really do need to replace the officers with computers.
13
Planning to sink: What happens if Kiribati drowns?
20 points by rosser  4 hours ago   23 comments top 5
1
knz42 55 minutes ago 4 replies      
"If you have everything but land if you have a population that is displaced whether that allows you to be a state is a novel question,"

Jews would have disagreed that this is a novel question -- for them the question has been about for around 2000 years if not more.

2
Evgeny 1 hour ago 0 replies      
Related:

Paradise almost lost: Maldives seek to buy a new homeland

http://www.theguardian.com/environment/2008/nov/10/maldives-...

And a longer story on Kiribati

Drowning Kiribati

http://www.businessweek.com/printer/articles/169064-drowning...

3
ghostDancer 2 hours ago 1 reply      
The Order of Malta has no territory except some buildings, and no "real" citizens, and is semi-recognized as a country, more by tradition than anything else. The big difference is that Kiribati has citizens who would have to be relocated, and the loss of their territory is a not-so-natural disaster. I think it is more a matter of international will, whether other countries are willing to recognize and maintain the status of Kiribati.
4
iwwr 3 hours ago 1 reply      
What's preventing people from moving onto stilt houses or even large floating seasteads? The big asset of Kiribati is their extensive EEZ, which guarantees income from fishing or other activities (mining, drilling, etc). You can chain together steel and concrete pontoons and add soil on top, all without terra firma. That could even be funded by the industrialized nations that are supposedly responsible for the sea level rise (a tiny cost, compared to actually cutting emissions).
5
gambiting 3 hours ago 1 reply      
"If you have everything but land if you have a population that is displaced whether that allows you to be a state is a novel question,"

Well, I can think of our own example - Poland was forcibly taken and split into three parts during the late 18th century, by Russia, Prussia (Germany) and Austria. For nearly 20 years there was no such country as Poland on the map, and people who lived on the lands of former Poland were forbidden from even speaking Polish.

Yet the entire country survived - we had a government and an army which both operated from other countries, until finally we were given a little independence back in 1807 - but even after that Russia controlled the majority of the land here. Fast forward 200 years - Poland is a large country in the centre of Europe, despite having disappeared from every map for more than two decades at one point.

So based on that I would say that yes, there can be a nation without land.

15
How a bug in Dropbox permanently deleted my 8000 photos
50 points by jancurn  2 hours ago   32 comments top 16
1
gabemart 37 minutes ago 3 replies      
I think this bug should be considered completely separately from how unwise it is to use a cloud service as the sole storage of important files.

Regardless of the circumstances, losing user files against the wishes of the user is the absolute worst thing a cloud backup provider can do.

Even for files that are deleted intentionally and unambiguously by the user, I'm astonished that Dropbox actually deletes the files at the end of the 30-day restore window. I would expect them to keep the files for some multiple of the publicly-stated restore deadline where the multiple >= 2, if for no other reason than as a goodwill generator. There is no more evangelical advocate for your company than the customer you email to say "Yes, you intentionally deleted this file six weeks ago. The 30 day deletion deadline has passed, but I have managed to restore the most recent version of your dissertation. Thank you for using Dropbox."

For files that aren't intentionally deleted by the user but are "de-synced", it is disgraceful and appalling that there is no contingency system in place. Keeping user files when the user assumes or wishes them to be kept safe is the core competency of a service like Dropbox.

"The user should have kept multiple redundant copies" is not an excuse for a poorly managed online backup service. "Keep multiple backups of everything important" is good advice for a user, but "Keep user files safe when the user thinks they are safe" is the most essential advice imaginable for the CEO of an online backup service.

2
yoda_sl 6 minutes ago 0 replies      
As many others mentioned already, "sync is not backup". If you really want a true backup then on a Mac I would suggest a setup like:

1. Local backup for quick recovery with Time Machine

2. Remote backup with a true backup solution like CrashPlan (or similar)

I personally have such a setup, with Time Machine locally and CrashPlan running all the time; it has saved me a couple of times when, after many weeks, I deleted the wrong file or folder... Got everything back without a problem.

If you don't have a Time Capsule, then you can always consider an alternative with just a smaller subset of what you want to archive by using a JetDrive by Transcend. I recently got a 128GB JetDrive and it is my local Time Machine destination for my MBP... Yes, 128GB is not enough, but knowing that CrashPlan is constantly running makes me comfortable enough with that setup.

3
pierreio 6 minutes ago 0 replies      
> "you should backup"

What about in 10, 20 years? Photo libraries will keep inflating. Local storage will not. As of now I back up from an SSD Mac. What happens when I don't have a computer anymore?

Interestingly, people don't value "bits" or information. We value moments and emotions and work and art. There's no successful current consumer business model for people to store and backup photos (Backblaze is mainly prosumer).

And so aren't social networks the real backups by now? The redundancy of publishing on multiple services means some photos will stick and the rest will fade, somewhat like printed pictures used to, I guess. Publish it or lose it?

4
egypturnash 34 minutes ago 1 reply      
Sync is not backup.

Sync is not backup.

Sync is not backup.

My strategy: a big external drive used for Time Machine, and a subscription to Backblaze. Both of these are all about retaining multiple versions, recovering from accidental deletion, and continuously backing up in the background. Dropbox is about syncing stuff between computers.

5
CaptainZapp 1 hour ago 3 replies      
I use Dropbox for sharing pictures, but would never even dream of using a cloud-based service for backup purposes.

Granted, Jan's case is a bit more complex and I'm really sorry for his loss.

Stories like that should really be a lesson to everybody never to completely trust a cloud based service as your main backup.

On a side note: I agree that archiving of digital files is a hard problem. The smartest librarians in the world have been thinking about how to achieve this for, literally, decades, and I'm not sure they even have a good solution to the problem.

My personal strategy is redundancy: I buy new hard disks every couple of years and copy all important files, twice. One hard disk is kept off site.

It's not perfect, but it's the best I can come up with. Reading horror stories, like Jan's, indicates that it's the better solution. Despite the messiness.

6
SunboX 37 minutes ago 0 replies      
7
petercooper 1 hour ago 2 replies      
If you are using Dropbox as a sole backup of your files, think again.

Simplify to "If you only have a sole backup, think again." Points 4 and 5 should be "you always have more than one backup of important material" and "at least one backup should be on physical media that you own".

8
shimshim 10 minutes ago 0 replies      
Just curious why more people don't learn about and invest in home NASes? Something like Synology would be pretty good at storing 8000 important photos. It is still a single point of failure, but it's one I control.
9
simi_ 1 hour ago 0 replies      
When your main business is storing files (and you also just raised yet another shitload of cash), not offering better safeguards against file loss is inexcusable. I left Dropbox for Google Drive when they announced the Condoleezza Rice move, and I haven't looked back since.
10
codva 35 minutes ago 0 replies      
I think the lesson here is not that the cloud is bad, it's that a sync service is not a backup. I use Dropbox to share, I use Amazon S3 to backup.
11
profsnuggles 40 minutes ago 2 replies      
It really sucks that we all have to keep learning this lesson over and over. Everyone I've ever spoken to that has a good backup strategy in place has it because they have lost irreplaceable files.

Have real backups. Syncing is not a backup strategy, RAID is not a backup strategy, etc. 3-2-1: at least three copies, in 2 different (storage) formats, and at least one copy offsite. It sounds like overkill but you have to decide how much your data is really worth.

12
restlessmedia 25 minutes ago 1 reply      
We live in a convenient age, but also a fickle one. I laugh when my wife prints hundreds of digital photos, although her disaster recovery process is more stable than mine.
13
uladzislau 36 minutes ago 0 replies      
Dropbox is NOT a backup service. It's a convenient file syncing and sharing service. And even then it's a good idea to actually check the docs or help before proceeding to do something which might result in a loss of data.
14
the4aces 1 hour ago 1 reply      
Try going into the folders and manually restoring. You can select more than one file at a time.... I was able to restore for more than 30 days... Worth a shot.
15
aaronem 1 hour ago 0 replies      
> If you are using Dropbox as a sole backup of your files, think again.

Good advice.

> Without making a mistake, you might lose your files.

If Dropbox, or any other single service or method, constitutes your entire backup strategy, then the mistake is already made.

16
mahouse 1 hour ago 0 replies      
Well, sorry to say this, but if all of your photos were stored only online...
16
HackMIT Puzzle Guide
54 points by dibyo  8 hours ago   6 comments top 5
1
chrisBob 15 minutes ago 0 replies      
In part 1 you need the sound turned on to solve the puzzle. I don't think I could have ever gotten farther than that on my own. Even at home I keep the sound turned off on my computer unless there is something I specifically want to listen to.
2
SimeVidas 16 minutes ago 0 replies      
I'm pretty sure the "Give it a try first." in the intro should be a link :-P
3
eddotman 36 minutes ago 0 replies      
Love the creativity in this - reminds me of the book Ready Player One, except with less 80s references and much harder (well, in some sense) puzzles.
4
jachwe 2 hours ago 0 replies      
Look behind you, a three headed monkey!
5
tritri 5 hours ago 1 reply      
I know people from MIT; only those people would have the perseverance to solve this puzzle.
17
Kiwi rocket company ready to blast off
44 points by 69_years_and  9 hours ago   10 comments top 4
1
syedkarim 3 hours ago 1 reply      
$50,000 per kilogram might sound pricey, especially when compared to what SpaceX works out to, but considering that it's a dedicated launch, versus a rideshare, and they can go up at will--it's the best deal out there. It's a little more expensive than Dnepr, but they only have rideshares once a year. Rocket Lab's biggest competition, right now, is the Indian PSLV. They don't have set pricing, but I've heard that half of the stated Rocket Lab price is possible. But again, there is no regularity with PSLV.

There are a few other dedicated smallsat/cubesat launchers on the horizon, but that's also a three-year horizon. For the time being, the Kiwis will have a monopoly. Unless, of course, Elon wants to bring back the Falcon 1. I believe that rocket had a greater payload, but was also a bit more expensive. Lockheed is bringing back the Athena, but I doubt they will be cost-competitive with Rocket Lab.

These guys aren't exaggerating: This rocket is a huge deal for the advancement of the new space industry.

2
cconcepts 2 hours ago 0 replies      
I saw this talk from Elon Musk a long time ago and he addresses some of the rocket technology being offered by other initiatives: https://www.youtube.com/watch?v=54Q14cRsMs0

He seemed to give a very balanced perspective - I'd love to hear what his thoughts are on Rocket Lab's Electron, simply because he's in the know and seems to at least try and be objective in his commentary.

3
timClicks 5 hours ago 0 replies      
He looks pretty grumpy, but Peter Beck is actually a very smart, warm and engaging guy in person.
4
powertry 5 hours ago 2 replies      
Are they pushing stuff to geosynchronous orbit? That's where the big business seems to lie. I thought the article was really vague, and I'm surprised it did not talk about SpaceX and other competitors.
18
Sculpting text with regex, grep, sed, awk, emacs and vim (2012)
83 points by aburan28  10 hours ago   24 comments top 4
1
natnat 7 hours ago 1 reply      
One really cool tool that web programmers should know if they work with JSON data a lot is jq: http://stedolan.github.io/jq/. It's a line-oriented tool like sed, awk, and grep, but it's for manipulating JSON data. It can be really useful for quickly making sense of JSON-formatted log files. For example, you can do something like

    jq -c 'select(.server == "slow_server") | .timings.end_time - .timings.start_time' < my_log_file
where your log file might look like

    '{"server": "slow_server", "timings": {"end_time": 1406611619.90, "start_time": 1406611619.10}}'
to get your web request timings.

Because it's line-oriented, it also works seamlessly with other tools, so you can pipe the output to, say, sort, to find the slowest requests.

2
agumonkey 5 hours ago 0 replies      
The composite (or prime) filter regexp is brilliant.

    ^(11+)(\1)+$
see OP's linked article http://zmievski.org/2010/08/the-prime-that-wasnt for details

3
shmerl 7 hours ago 2 replies      
I prefer pcregrep; it's more feature-rich and the syntax is much neater. Using \d instead of [0-9], etc., makes regexes more readable.
4
alayne 8 hours ago 6 replies      
Perl is better than sed/awk, but you're still going to write unreadable code. Python or Ruby are a better choice for maintainable scripts.
19
BitcoinJS
118 points by markmassie  13 hours ago   24 comments top 4
1
benmanns 12 hours ago 4 replies      
As an experiment I sent 0.0005 BTC to the address corresponding to the private key in the documentation (L1uyy5qTuGrVXrmrsvHWHgVzW9kKdrp27wBC7Vs6nZDTF2BRUVwy -> 17XBj6iFEsf8kzDMGQk5ghZipxX49VXuaV). Within seconds someone had already transferred it out to 1ENnzep2ivWYqXjAodTueiZscT6kunAyYs.

[address] https://blockchain.info/address/17XBj6iFEsf8kzDMGQk5ghZipxX4...

[thief?] https://blockchain.info/address/1ENnzep2ivWYqXjAodTueiZscT6k...

2
indutny 6 hours ago 2 replies      
Hey guys!

Still not considering https://github.com/indutny/elliptic for your EC operations? It seems like Bitcore has moved to it, and they're quite happy with the results: http://blog.bitpay.com/2014/07/22/bitcore-3000-is-three-time...

3
mifreewil 12 hours ago 2 replies      
how much overlap is there with BitPay's Bitcore? I do believe both projects originated from Stefan Thomas' original work on BitcoinJS
4
chucknelson 10 hours ago 3 replies      
This does not make sense to me:

> BitcoinJS 1.0 Released!

...further down the page...

> Documentation: Soon!

20
Generate raw WAV output by hooking malloc and read
79 points by anigbrowl  12 hours ago   14 comments top 9
1
thegeomaster 6 hours ago 1 reply      

    void* read( int fd, void * data, size_t count)
    {
      ...
      gen_square_wave( 44100 , CLAMP(count, 20, 20000 ), CLAMP( sizeof(data), 100 , 1700), 0.7 );
I don't get it, isn't sizeof(data) always the same, and usually either 4 or 8?

2
ericHosick 9 hours ago 0 replies      
3
pronoiac 6 hours ago 1 reply      
It sounds like when I piped /dev/random to a midi device. (Or was it /dev/urandom and TiMidity++?)

It was like demos; I would be stunned at the emergent complexity and compactness, and it would only elicit shrugs from friends.

4
cowpewter 9 hours ago 1 reply      
Reminds me of when I was a kid, and I had a Tandy CoCo and a tape recorder deck that was the only way to save any programs you wrote on it. I tried playing the data tape a few times.
5
acannon828 10 hours ago 1 reply      
This seems cool, though I don't entirely understand it. "Generates raw WAV output by hooking malloc() and read()." What does that mean exactly?
6
viciousplant 11 hours ago 1 reply      
transform logic to sound? hmm interesting idea
7
niix 9 hours ago 0 replies      
Heh really cool
8
andyzweb 11 hours ago 0 replies      
oh wow this is fun
9
MBCook 10 hours ago 0 replies      
To the tune of Sound of Silence by Simon & Garfunkel:

    Hello malloc() my old friend,
    I've come for mem'ry once again,
    Because a pointer silently creeping,
    Filled buffer that I was keeping,
    And the signal that I trapped was a bus error.
    Didn't care.
    Because I still... have malloc()

    There were pages that I missed.
    My OS had sent them to disk.
    Try my best to not hit swap,
    Looking for data I could safely drop,
    I compressed some bits that I kept stored in place
    Freed some space.
    But then I still... used malloc().

21
Facebook Flux Application Architecture for Building User Interfaces
224 points by diggan  20 hours ago   50 comments top 16
1
jefftchan 19 hours ago 6 replies      
It's great to see Facebook releasing code for Flux. Hope to see more in the future. Here are some other implementations of Flux for anyone who's interested:
https://github.com/BinaryMuse/fluxxor
https://github.com/jmreidy/fluxy
https://github.com/yahoo/flux-example

We recently adopted this architecture for a medium-scale project. The unidirectional data flow has greatly reduced code complexity for us. We're still not satisfied with our server syncing strategy. Seems like actions are the best place to sync data, but how do you deal with interdependent actions?

2
xtrumanx 19 hours ago 5 replies      
I'm still somewhat unclear on the point of the Dispatcher and Actions and they simply feel like needless indirection to me.

For instance, in the flux-chat app within the linked repo, the MessageComposer component calls `ChatMessageActionCreators.createMessage(text);` when the user presses the enter key to submit a message.

To find out how that ends up affecting the application, you need to jump around more than a couple of directories to find out how it's all wired up.

I just cut out the middlemen and directly interfaced my Stores from my components. My Store then emits a change event as usual and other components that are subscribed to the change event then refetch their state. That part of Flux makes perfect sense to me. The need for Dispatchers and Actions doesn't.

To be fair, I didn't start looking into Flux until my app started growing large enough that I started feeling the pain of my lack of architecture. The Stores and the event emitter solved my immediate problems. Perhaps if my app grows even larger, the Dispatcher may come in handy.
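For comparison, a minimal sketch of the store-without-dispatcher approach described above might look like this (illustrative only; the store name and methods are invented, and it assumes Node's EventEmitter rather than anything Flux-specific):

    import { EventEmitter } from "events";

    // A store that components talk to directly, skipping actions and the dispatcher.
    class MessageStore extends EventEmitter {
      private messages: string[] = [];

      addMessage(text: string): void {
        this.messages.push(text);
        this.emit("change"); // subscribed components refetch state on "change"
      }

      getAll(): readonly string[] {
        return this.messages;
      }
    }

    const store = new MessageStore();
    store.on("change", () => console.log("re-render with", store.getAll()));
    store.addMessage("hello"); // a component calling the store directly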

3
voyou 2 hours ago 1 reply      
A lot of this seems like it's just MVC, except what they call the "View" is a traditional MVC Controller (the UI element that handles user interactions and sends these to the Model), what they call the "Controller-View" is a traditional MVC View (something that gets notified when the Model changes and displays that change to the user), and what they call the Dispatcher is what traditional MVC calls the Model.

They write "Flux eschews MVC in favor of a unidirectional data flow", but MVC already has a unidirectional data flow (Controller -> Model -> View). Is this just a case of those who don't understand MVC are compelled to reinvent it?

EDIT: Actually, the main addition over MVC seems to be that the Stores declaratively specify their relationships between one another (which are then resolved by the dispatcher), rather than the developer writing a specific Model implementation that explicitly orders the changes to related elements of the model. I'm a bit suspicious that this would be less explicit, and so harder to maintain, but maybe I'm wrong.

4
driverdan 13 hours ago 1 reply      
I previously spent time reading about Flux, watching the videos, looking at Flux libs like Fluxxor and it seems overly and unnecessarily complicated to me. The actions layer specifically seems unnecessary as it could be implemented at the Store / model layer. The dispatcher is an event queue system with support for dependencies.

To me it makes a lot more sense for React components to push an event directly onto a pubsub event queue which then dispatches accordingly. When data changes anywhere it fires an event that then passes new state to the top level React component. Most of the actions you do to data are boilerplate and can be greatly simplified from Flux.

What am I missing? Why is it so complex?

5
fisherwebdev 12 hours ago 0 replies      
The Dispatcher is not an event queue system, but rather a registry of callbacks. This is a significant difference. The callbacks may be invoked in a specific order, synchronously, so that dependencies between Stores are managed effectively. Stores declare their own dependencies -- the Dispatcher knows nothing of the internals of Stores -- through the use of the dispatcher's waitFor() method.

Yes, you don't absolutely need the ActionCreator helper methods. You could call AppDispatcher.handleViewAction() or even AppDispatcher.dispatch() directly in the View. But keeping these things separated out keeps the code nicely organized and keeps application logic out of the View. Additionally, I find it helps to maintain a mental model where the only way into the data flow is through the creation of an action, codified in the library of ActionCreators.
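To make the "registry of callbacks" point concrete, here is a rough sketch of that mechanism (a simplified illustration, not Facebook's actual Dispatcher; the names are invented and the circular-dependency checks of the real implementation are omitted):

    type Payload = { actionType: string; [key: string]: unknown };
    type Callback = (payload: Payload) => void;

    class MiniDispatcher {
      private callbacks = new Map<string, Callback>();
      private handled = new Set<string>();
      private pending: Payload | null = null;
      private nextId = 0;

      // Stores register a callback and keep the returned token.
      register(callback: Callback): string {
        const id = `cb_${this.nextId++}`;
        this.callbacks.set(id, callback);
        return id;
      }

      // Invokes every registered callback synchronously with the payload.
      dispatch(payload: Payload): void {
        this.pending = payload;
        this.handled.clear();
        for (const id of this.callbacks.keys()) this.invoke(id);
        this.pending = null;
      }

      // A store calls waitFor([tokens]) inside its own callback to ensure the
      // stores it depends on have handled the current action first.
      waitFor(tokens: string[]): void {
        for (const id of tokens) this.invoke(id);
      }

      private invoke(id: string): void {
        const payload = this.pending;
        if (payload === null || this.handled.has(id)) return;
        this.handled.add(id);
        const cb = this.callbacks.get(id);
        if (cb) cb(payload);
      }
    }

The key property is that dispatch() runs synchronously and waitFor() lets one store defer to another within the same dispatch, which is what keeps inter-store dependencies explicit.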

6
neves 19 hours ago 2 replies      
It has a fine explanation of the pattern: http://facebook.github.io/react/docs/flux-overview.html but what is the reasoning behind using this architecture? Or in pattern speak, what are the forces that this pattern is considering? I can't find a clear explanation of which problems it is trying to solve.
7
joelhooks 17 hours ago 0 replies      
We've got a React Flux series[1] on egghead.io if you like video lessons.

--

[1] https://egghead.io/series/react-flux-architecture

8
sehr 17 hours ago 0 replies      
Actual site for the project:

http://facebook.github.io/flux/

Seems to still be in flux.

9
rubiquity 19 hours ago 2 replies      
I've been using React quite a bit but haven't had a need for Flux. Using React to implement all of my UI concerns, combined with Backbone for easy to use persistence and modeling, has worked wonderfully for me. YMMV :
10
colinhowe 20 hours ago 1 reply      
We've been playing with this architecture. It makes testing a lot easier - you just shove data in where needed and things pop out the other end. It also forces us into splitting things up in a more sensible manner. Along with easier testing is easier reasoning.

That said, like most new architectures/frameworks finding a big example (not just a TODO app) is really hard. We are currently prototyping a big app using React/Flux and we find ourselves having to question ourselves a lot more than we'd like.

11
kitd 3 hours ago 0 replies      
So, does this make stores 'Flux capacitors'? (pretty pls)
12
fiatjaf 18 hours ago 1 reply      
How does this compare with FRP?

I've been thinking about this and concluding that React wouldn't benefit even a little from those FRP libraries and architecture, it is already quite functional reactive. Am I wrong?

13
hcarvalhoalves 19 hours ago 1 reply      
It's easy to integrate with Backbone if you want for this kind of architecture, as it already implements events and stores. I've found it requires less boilerplate than the example in the repo.
14
fnordsensei 17 hours ago 0 replies      
This looks very interesting, and if I'm not mistaken, slightly reminiscent of how apps are built with Om (using core.async for dispatch and atoms for storage).
15
igorgue 16 hours ago 3 replies      
I know it's stylized as "f.lux" but fuck man...

Why do big companies do that? Just recently Apple's Swift and now Facebook's Flux.

16
api 14 hours ago 0 replies      
FB seems to be doing some amazing work trying to make the web a more tolerable UI platform.

Once we finally have a good, solid, stable UI building consensus in HTML5/JS, it'll not only be possible to use it for the web but for the desktop too via node-webkit / atom-core.

22
Atlantropa
148 points by luu  16 hours ago   45 comments top 13
1
ern 7 hours ago 1 reply      
When I read the first paragraph, I was immediately reminded of my feelings when watching the Star Trek:TNG episode, made in the late 80's where Picard considered quitting Star Fleet to head the Atlantis Project. When I watched that episode recently, I found the idea that future people would allow large alterations to the earth to be discordant.

It is interesting that, according to the article, Roddenberry also incorporated the idea of a dam across the Strait of Gibraltar into one of his works.

Today, I expect that terraforming Mars and mining the moon will face significant opposition. Would massive land reclamation projects like those in the Netherlands ever happen today, if a particular hyper-conservation Western mindset were in place when they commenced? We can't even let go of outdated urban streetscapes, and buildings past their useful lifespans[1], so it isn't far-fetched to think of moon miners having to recreate impact craters on the moon's surface after mining them, if mining happened in the first place.

Large segments of humanity are becoming sclerotic. Even if [2] the positive effects of damming the Mediterranean (or terraforming central Australia or Antarctica) were found to outweigh the losses, and the impacts could be mitigated, it would still be extremely unlikely to happen, for sentimental reasons.

[1] See recent news about the Lloyd's of London building, which has become an expensive burden, but can't be touched because of its "iconic" status

[2] yes, obviously there are risks, and they would need to be weighed carefully.

2
thisjepisje 3 hours ago 0 replies      
Does anything come closer to this than Flevoland?

http://en.wikipedia.org/wiki/Flevoland

Slightly related: "a narrow body of water was preserved along the old coast to stabilise the water table and to prevent coastal towns from losing their access to the sea."

3
adrianN 14 hours ago 1 reply      
If you like this, you should get yourself a copy of "Engineers' Dreams" by Ley. It is fairly old and full of such grand projects. One of them, the Channel Tunnel, is no longer a dream.
4
igravious 15 hours ago 2 replies      
Hilarious that the first two comments are diametrically opposed: "Wow that's pretty fucked" versus "Sadly much vision has been lost somewhere in the last century"

I'm going to tread a line between the two and say that I miss the visionary engineering works of old but that I am not completely heedless of the Law of Unintended Consequences. I do think that the Strait of Gibraltar should be either bridged or dammed. (I'm assuming damming means bridging.) Too good an economic opportunity to pass up.

5
avz 13 hours ago 0 replies      
Brings to mind the Aral Sea, where the diversion of rivers for irrigation projects has led to the shrinking of the sea area with disastrous consequences for local ecosystems and communities. http://en.wikipedia.org/wiki/Aral_Sea

EDIT: There must be better ways to become a Type I civilization: http://en.wikipedia.org/wiki/Kardashev_scale

6
skrebbel 4 hours ago 0 replies      
The Atlantropa movement, through its several decades, was characterised by four constants:

- ...

- Pan-European sentiment, seeing the project as a way to unite a war-torn Europe;

- White-centric superiority (and even racist) attitudes to Africa

- ...

It interests me how such a positive sentiment (Pan-Europeanism) can be combined with a very negative sentiment (white supremacism) without any trouble.

I'm not sure we can play the "they were different times" card so easily. Currently, the EU has open borders, so the Pan-Europeanism part worked out quite well, actually. As a consequence, the outer borders of the EU have become rock hard, because once you made it to Spain or Italy, you made it to most of Europe. We treat Africans who try to make it across like animals - not out of some white supremacist sense, admittedly, but the effect is the same.

7
jzila 14 hours ago 1 reply      
There's a documentary about the project here: http://vimeo.com/92381391. Fascinating watch.
8
austinz 15 hours ago 1 reply      
I don't think big projects are intrinsically bad because they are big, but it's probably true that something of this scale would have drastic, wide-ranging effects on climate, geological characteristics, rainfall, and many other aspects that might adversely affect millions of people. Due diligence would require us to study and characterize these effects before spending trillions of dollars on such a project.
9
exratione 15 hours ago 2 replies      
Imposing large scale engineering projects of this nature will probably have to wait until after production and point control of self-replicating nanorobots, organic or otherwise, is fairly mature. At that point the cost reduces down to design and raw materials and looks feasible when compared to simply going to other worlds in search of additional space, or building arcologies, or other methods.

Sadly much vision has been lost somewhere in the last century. People are much more conservative about preserving the present state of whatever exists, and have lost the sense of wonder and ambition that characterized the opening of the modern age of machinery.

10
incision 15 hours ago 0 replies      
Awesome, in the more formal sense.

Reading about the intended colonization of Africa puts me in mind of something I read about Moroccan phosphate reserves [1] a while back.

1: http://www.spiegel.de/international/world/essential-element-...

11
acqq 14 hours ago 0 replies      
Note also what I consider spamming by "Cathcart, R.B.", who probably added to the article himself the big list of his texts as relevant to the subject, and even made a clumsy attempt at the citation.
12
batmansbelt 16 hours ago 2 replies      
Wow that's pretty fucked. No one would even dream of anything so crazy these days. The world certainly has improved in the past 90 years.
13
desireco42 15 hours ago 0 replies      
No wonder they are digging out these Nazi projects now that they have occupied us in the Balkans. But as before, we will rise again and free ourselves.

Not that everything the Nazis did was bad, but this is a classic megalomaniac project, which honestly doesn't sound completely bad, but it is like the Chinese Three Gorges Dam: huge and rife with potential problems.

23
Black Swan Seed Rounds
188 points by BIackSwan  18 hours ago   90 comments top 15
1
cwal37 18 hours ago 4 replies      
I don't think black swan is an appropriate title. This investing is more like when a small group of people split a large powerball jackpot; statistically unlikely, but not really fitting the definition of a black swan (at least as proposed by Taleb).

His rules:

1. The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.

2. The non-computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities).

3. The psychological biases that make people individually and collectively blind to uncertainty and unaware of the massive role of the rare event in historical affairs.

http://en.wikipedia.org/wiki/Black_swan_theory

On a related note, Antifragile is a fun read. Taleb is extremely sure of himself, and extremely passionate. His books and ideas (culminating in Antifragile) can be pretty seductive in the moment.

2
bthomas 17 hours ago 1 reply      
There's no evidence here that successful seed companies are more likely to look like bad ideas than unsuccessful seed companies. If 4/5 black swans look like bad ideas when they start, but 4000/5000 of this year's startups also look like bad ideas, then there's no effect to explain. The only lesson is that seed investing is just throwing darts.
3
imjk 17 hours ago 2 replies      
Sam, could you (or any other angel investor) talk frankly about the money you put in and what your exact returns have been? How much have you had to reinvest in later rounds, too? I can't find any really experienced angel to talk frankly about this with me.
4
sama 18 hours ago 3 replies      
to be clear: i don't think seed investing is random. far from it. i just think it's really important to think independently and take the time to really get to know founders and understand businesses instead of just following other investors.
5
tormeh 18 hours ago 3 replies      
A recent blog post [0] from the Economist stated that buying random stocks on the stock market actually performs better than valuation-weighted investment and conscious investment. I wonder if the optimal strategy is to just invest randomly?

[0]: http://www.economist.com/blogs/freeexchange/2014/06/financia...

6
chatmasta 10 hours ago 0 replies      
A few more reasons raising a quiet/controversial seed round might be a good signal:

1. Publicly raising a large seed round creates a lot of hype when the product might not be ready yet, or even before there's any product/market fit. As a founder, that creates a lot of pressure to satisfy the hype, and also might cause you to attempt building a product that satisfies the opinions of the blogosphere, rather than your customers. In contrast, when you raise a quiet round, you can build your product and achieve product/market fit without worrying about the outside world, instead focusing purely on your customers.

2. If no investors are questioning the model, you should be alarmed. Almost by definition, no company seeking a seed round has proven revenue model. Knowing this risk, you should expect some fraction of investors to be questioning it. If none are, then it's likely a signal that they've been swayed by smooth-talking founders who are very good at raising investment, but, as you mention, not necessarily fit or ready to build the business they're hyping.

So if we assume quiet seed rounds help with success, this leads to some questions about YC, which is essentially a loud seed round. Namely: should you announce your YC acceptance before you have product/market fit? Should you go through YC in stealth mode until ready to capitalize on hype?

It seems your observations can tell us a lot about the value, or anti-value, of product hype. The lesson seems to be, in the words of Denzel Washington/American Gangster, "the loudest person in the room is probably the weakest."

7
razvanr 15 hours ago 1 reply      
I suspect there's also an inverse correlation between the amount of money raised at an early stage and founder focus going forward.

It's easy to interpret a large seed round as early success and become side-tracked or overestimate what your company's impact really is at that stage. Which can impact focus and lead to disaster.

8
zmitri 18 hours ago 0 replies      
What's an approximate range of $ raised/valuation that makes for a Black Swan seed round in your opinion? It's hard for me to understand what the implications are of this besides "don't believe the hype" without having some relative sense of that.
9
tmuir 18 hours ago 1 reply      
Isn't basing the multiplier on valuations at the latest investment round disingenuous? After all, it's just a number the owners and investors hope someone will eventually pay for the company. It's the asking price set by people hoping for large returns.
10
panabee 12 hours ago 0 replies      
he lists four: teespring, zenefits, stripe, and optimizely. anyone know the fifth?
11
wunderlust 9 hours ago 0 replies      
This essay might be remotely insightful if Sam explained why great companies "often look like bad ideas at the beginning" and "look really risky at the seed stage".
12
silverlake 18 hours ago 0 replies      
So investing at the seed stage is random. I'm surprised that VCs don't help new companies more in the early stage to mitigate some of the common failure patterns, for example the list PG had in 2006. Perhaps couples counseling for YC founders? Every little bit reduces investor risk even a little.

http://www.paulgraham.com/startupmistakes.html

13
EGreg 3 hours ago 0 replies      
Does Sam Altman continue to do angel investing, or has he stopped as part of their policy now that he's running YC?
14
snowwrestler 18 hours ago 0 replies      
"Make your own decisions" is good advice, but I think the main question is whether there exists any reliable means of distinguishing the good contrarian vs. bad contrarian investments--or whether it's just luck. The title seems to imply that the answer might be just luck.
15
recalibrator 17 hours ago 3 replies      
Teespring and Zenefits I'd never even heard of until today. Are they rolling in the bank and disrupting their respective industries?

And Stripe, there was huge pent-up demand for a PayPal alternative before they came on the scene. How could anyone say that was a "black swan" bet? It doesn't make sense.

24
Reintroducing FFmpeg to Debian
136 points by unspecified  17 hours ago   42 comments top 13
1
fndrplayer13 15 hours ago 4 replies      
It frankly frustrates me that Debian/Canonical ever used libav to begin with. ffmpeg is and has been the better of the two for quite a while. The person in this thread arguing against inclusion of ffmpeg would probably be astonished by the number of developers who are building ffmpeg from source instead of using the libav that ships with Debian/Ubuntu/etc. We should encourage developers to use RPMs, especially for something as heavy to build and install as ffmpeg/libav. Many heavy hitters have settled on ffmpeg. It should be done here as well.
2
gioele 6 hours ago 1 reply      
I think the FFmpeg vs libav debate is succinctly described by this quote from https://github.com/mpv-player/mpv/wiki/FFmpeg-versus-Libav

> Although we don't agree with everything FFmpeg does, and we like some of Libav's general goals and development directions, FFmpeg is just better from a practical point of view.

> It shouldn't be forgotten that Libav is doing significant and important development, but since everything they do ends up in FFmpeg anyway, there is barely any reason to prefer Libav over FFmpeg from the user point of view.

> It's also possible that FFmpeg agrees faster to gross hacks to paint over bugs and issues than Libav, however, in the user's perception FFmpeg will perform better because of that.

Basically libav is doing things in the proper way but slowly, so they will die because FFmpeg ships more features although less polished. "The ones that win are the ones that ship", isn't it?

3
zx2c4 15 hours ago 1 reply      
Here's a nice comparison between libav and ffmpeg from the author of 'mpv' (the most viable mplayer fork):

https://github.com/mpv-player/mpv/wiki/FFmpeg-versus-Libav

It helped me make the decision (for choosing ffmpeg).

4
birkbork 15 hours ago 2 replies      
Finally, thank you!

A package called "ffmpeg" has been in Debian forever now, and running its binary claims that "ffmpeg is deprecated", which is a complete lie.

EDIT: also see "FFmpeg and a thousand fixes" [1], suggesting that FFmpeg is working hard improving the security situation, while libav mainly ignored the effort.

PPS also i dont like the libav crew

1: http://googleonlinesecurity.blogspot.se/2014/01/ffmpeg-and-t...

5
izacus 6 hours ago 0 replies      
As someone who regularly had to help people use ffmpeg on #ffmpeg/Freenode, explaining why Debian/Ubuntu package the wrong piece of software under the "ffmpeg" name was becoming really tedious.

So thanks to the Debian maintainers for fixing this stupidity.

6
picomancer 12 hours ago 0 replies      
It's about time. I was just using ffmpeg today, wanted libx264 lossless encoding, noticed the deprecation message and lack of libx264 support in the Ubuntu package, recompiled from source.
7
mkhpalm 6 hours ago 0 replies      
I've been using ffmpeg packages from the deb-multimedia.org repos through the entire libav period. I'm surprised to hear so many people were going through all the trouble to build it themselves.
8
giancarlostoro 6 hours ago 0 replies      
"I do not believe you, explain that voodoo to me: How is it that it won't break all of Debian and make kittens cry?"

I'm easily amused.

9
igravious 14 hours ago 2 replies      
This'll make it into Ubuntu when? Any estimates out there?
10
shmerl 11 hours ago 0 replies      
Good, mpv will now get more features.
11
vezzy-fnord 15 hours ago 2 replies      
About time. I'm usually all for forking and variety, but libav truly was an example of those gratuitous and destructive forks that offered no real benefit. Though, ultimately, it was the Debian package maintainers' decision to spread propaganda about ffmpeg being deprecated that was the worst.

As a practical example, I was gridlocked when I tried to compile LightSpark from Git, due to libswresample not being present, nor practically obtainable. Editing it out from CMake, predictably, led to breakage.

Here's a classic article detailing the situation: http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html

And from the mpv developers: https://github.com/mpv-player/mpv/wiki/FFmpeg-versus-Libav

12
ausjke 13 hours ago 1 reply      
eglibc went back to glibc, now ffmpeg comes back to debian, nice!
13
VLM 15 hours ago 1 reply      
I wonder what Niedermayer would say / has said.
25
Show HN: Appsites Beautiful websites for mobile apps in minutes
39 points by ryanmerket  8 hours ago   16 comments top 6
1
philbarr 4 hours ago 1 reply      
It seems to be down at the moment as I keep getting the "We'll be right back!" message. Possibly the HN effect?

Anyway - that aside this is an absolutely brilliant idea and fixes a real pain point for me. After spending so long on the app that I've started to hate it, doing all the screenshots, videos, etc. is bad enough - but then you have to do the website as well and it just makes you feel like - "urgh, really? CSS mangling time? Blergh." I would easily pay some of my hard earned indie cash to just have it done for me. Some thoughts:

- I'm going to want to integrate whatever site you create into my own website, so I just want to download whatever you have and throw it into a div container and have it work. Which means there shouldn't be any/many dependencies on CSS frameworks / JavaScript etc. It should be clean.

- If I'm going to give you my cash I'm going to want to be able to choose from a LOT of different themes so that I can differentiate my app from all the others out there. I don't mind doing some of the simple customization myself (colours, text, etc.). I'm not going to want to pay each time if all it comes down to is changing the logo / picture / text when I could easily do that myself.

- I don't know how you plan to monetize, but I wouldn't pay monthly for this. I just don't churn out apps fast enough (other people might want to though, of course). But I would be prepared to pay a fair bit each time if it looks good enough. I'm thinking 5 for the simple demo shown (chow.appsites.com), up to 25 for something like http://staticapp.co/ - with all the fancy animations and the like.

Anyway - hope it works out and you do well.

2
supercoder 1 hour ago 0 replies      
Doesn't seem to work right now, but reminds me of how app.io started. They basically created websites for your apps automatically but then I think found the demand wasn't that great and pivoted to the realtime demo idea.
3
jnye131 2 hours ago 0 replies      
Definitely something I'd be interested in also, but sadly seems to be struggling with the load.
4
underwater 6 hours ago 0 replies      
Cool concept but your landing page spends too much time telling me what I'll get and not why I'd want it.

I tried out the editor by pasting a random app link. It looks pretty neat but took forever to load. You could also short circuit the reloading when I cancel out of an edit panel. And fix the settings popup so that it can be dismissed when viewed in small browser windows.

5
thomasknoll 8 hours ago 2 replies      
What do these look like live? have an example?
6
jbverschoor 5 hours ago 0 replies      
Cool! You should include a landscape theme (game).
26
Zillow to Acquire Trulia for $3.5B
266 points by julio_iglesias  23 hours ago   149 comments top 18
1
chatmasta 22 hours ago 15 replies      
I interned for Redfin last summer. This is a really interesting space, and most people don't realize that Zillow/Trulia are operating drastically different businesses from Redfin.

Some background: The US real estate industry is broken up into regions (e.g. SF bay area, Orange County, Lake Tahoe, etc.). In order for a brokerage [1] to operate in a region, it needs to employ agents specifically licensed in that region, and have a real office there. Importantly, each region also has its own data feed of listings, called an MLS feed [2]. Amongst real estate agents, the MLS feed in each region is considered the primary source of real estate listings. If a house is not in the MLS, it's not for sale. BUT, only brokerages have access to MLS feeds.

There is no standard for MLS software. It's truly terrible. No joke, in some regions, the MLS service -- responsible for all real estate listings in that region -- is an archaic Windows program running on a desktop in some guy's Lake Tahoe cabin. Generally, MLS feeds are similar in structure, but there is no semblance of standardization, API, or developer-friendly solution for accessing it. Every region has its own MLS feed with its own structure, access restrictions, weird rules, etc. It's a nightmare to develop against.

Zillow and Trulia set out to solve this problem. They are listing aggregators, essentially filling the same role as MLS software. But because Zillow and Trulia are not brokerages, they cannot access the MLS feeds. So they have to get the data on their own. They depend on real estate agents manually inputting their listings into the Zillow/Trulia platforms. Nowadays, most agents do input this data, but that was not always the case, and IIRC Zillow/Trulia still only have something like 80% coverage compared to MLS feeds.

So Zillow and Trulia are simple listing services. They are basically advertising platforms for real estate agents. Their revenue model depends on agent referrals, paid listings, etc. They have no direct role in selling a house.

REDFIN IS A BROKERAGE. Redfin actually employs real estate agents who will help you buy a house. And instead of earning commission proportional to sale price (a huge moral hazard -- see: Freakonomics), they earn commission based on customer satisfaction. So Redfin agents are inherently motivated to work in the customer's best interest, instead of their own, which is getting the price as high as possible.

Because Redfin is a brokerage, it is entirely different from Zillow and Trulia. This is the reason that you only see Redfin in "some" areas (although they have coverage in most major metropolitan areas at this point), while Trulia/Zillow are nation-wide. When Redfin expands to a new area, it needs to establish an office, hire and train agents, file paperwork, etc. This takes time, but often when Redfin gets to a new area, there are already thousands of customers who have been waiting for them to launch there.

Also, because Redfin is a brokerage, it has access to MLS feeds. So Redfin gets its data directly from the source, instead of depending on real estate agents to enter their listings directly into its platform. Because of this, Redfin has 100% coverage in all the regions it serves, compared to ~80% (IIRC) of Trulia/Zillow.

So now it looks like the market will come down to Redfin vs. Trulia/Zillow. I'm curious to see how this plays out. On one hand, Redfin has a far more defensible model -- they have an office in every region, and actually make a lot of money from each listing. And they have a far better value proposition for the customer. Why would you use a real estate agent trying to pump the price as high as possible, when you can use one who will be paid entirely based on your satisfaction rating?

On the other hand, Zillow/Trulia have wider reach. There is nothing stopping them from opening a brokerage in their most popular markets and simply copying Redfin's model. But if they do that, they are already way far behind.

Personally, and I'm biased because I worked there, I think Redfin is going to "win" this battle. There's no reason why Zillow/Redfin can't coexist harmoniously, but I expect we will see Redfin making far more money in 10+ years than Zillow.

[1] http://en.wikipedia.org/wiki/Real_estate_broker
[2] http://en.wikipedia.org/wiki/Multiple_listing_service

(EDIT since this is getting so many upvotes: I DO NOT SPEAK FOR REDFIN AT ALL, I DO NOT WORK FOR REDFIN. I worked there one summer last year.)

2
IgorPartola 22 hours ago 4 replies      
Having bought real estate, I can say that Zillow/Trulia are both a blessing and a curse. They are great in that you can see what's been listed for a while, scan a map, and generally put a pretty interface on house searches. This I imagine is their great advantage. I also found my mortgage company through Zillow's mortgage rate search.

Their big disadvantage is that their records are not updated as fast as the conventional MLS. The house I bought recently came on the market 7 days before I made an offer. It was not on Zillow even by the time we had the contract signed. The sellers, for whatever reason, put a very reasonable (possibly even low) price on the house, and I was at the end of the search, having seen enough locations in the area to know that this was a great value.

Another random personal experience: when I first saw Zillow I remember thinking "who needs real estate agents if you have all this?" Then I got an agent to buy my first house. All I have to say is "you do not know what you do not know." While I do wish that agents simply took a set fee instead of it being a (very large) percentage of the purchase price, they provide an invaluable service. I am not saying it's impossible to buy real estate if you don't have an agent; but if you can do it, you are probably a real estate agent.

3
scelerat 18 hours ago 3 replies      
Just adding some anecdata to the chatter about these sites and the experience of buying a home:

I began searching for a house in the East Bay about two years ago, culminating in a purchase in June 2013. The first year or so I was using Redfin, Zillow, and Trulia to track availability, prices and neighborhoods. While I got some good information, it wasn't until I started talking to an agent with about thirty years experience in the region that I actually got good leads. The listings on all of these sites seemed to trail the MLS leads she would get by days to weeks. I made several offers over many months, each one more than the last, each time watching the stock get thinner and bids climb higher. My agent was not only finding good potential properties, but also providing a lot of perspective and emotional support.

The house I ultimately bought was something she found via her network of colleagues before it even was placed on the market. I made a bold offer and gulped at what I was putting on the table, but in retrospect I was fortunate considering what's happening in the bay area housing market right now.

Maybe I'm a dummy, but I cannot envision going through the process without a real human pro providing guidance and leads.

4
ssanders82 19 hours ago 2 replies      
I trade stocks as a hobby and I was wondering what I'm missing here. This morning Zillow offered .444 shares of Z for TRLA. Currently (1:50 pm EST) TRLA is trading at only a 0.411 valuation of Z. What's to stop me from shorting Z and buying TRLA to lock in the difference as profit? It seems both have agreed to the 0.444 ratio. Is it a regulatory issue? What else would cause the deal to fail?
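For anyone following along, the rough arithmetic behind that spread (assuming the deal closes at the announced ratio and ignoring borrowing costs and fees): with Z at price p, a TRLA share converts into 0.444p worth of Z on completion but costs about 0.411p today, so buying TRLA and shorting 0.444 shares of Z per TRLA share would lock in roughly

    \frac{0.444\,p - 0.411\,p}{0.411\,p} \approx 8\%

That gap is the classic merger-arbitrage spread; it is generally read as compensation for the risk that the deal is blocked, repriced, or falls through before closing.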
5
hodgesmr 22 hours ago 1 reply      
Zillow 2013 revenues: $225M, profits: -$12.5M

Trulia 2013 revenues: $175M, profits: -$17.8M

6
themartorana 23 hours ago 8 replies      
Does that leave any real competition in the space?
7
rgovind 21 hours ago 2 replies      
All real estate news and websites are biased and always suggest that buying a home is a good thing. In the SF bay area, I hear real estate agents speak on the radio. Before the recession, they said you should buy a home immediately because rates might increase. Then during the recession they said you should buy a home because rates had fallen to historic lows. For the last 1-2 years, whenever you listen, they say the interest rates are low, so you should buy immediately, irrespective of the dynamics between interest rates and house prices.

I wish there was a website/service which actively debunks what real estate agents and radio channels are propagating.

8
jscheel 22 hours ago 0 replies      
I listened to Sami Inkinen talk about how proud he was that Trulia was the underdog to Zillow several months ago. I assume that the acquisition talks had probably already started, even as he was talking about this. I'm not incensed at this, I just think we need to be honest with ourselves. When your competition wants to buy you, then you've probably done something very right. But everybody has a price and Zillow obviously found theirs! Congrats to them, they've done a lot for dragging the real estate market kicking and screaming into the future.
9
Cybernetic 17 hours ago 1 reply      
It will be interesting to see how the home value estimates play out. Zillow and Trulia each have their own methodology for determining a home's value and the disparity between the value of each service lists can be significant.

I purchased a home two years ago in Portland, OR (South East). At the time of my purchase, its price on Zillow was listed as ~$70K less than it was appraised for (I had two appraisals and both were within $1K of one another). Trulia listed the value within $1K of the two appraisals.

In two years time, the value on Zillow is listed as the original purchase price. On Trulia, the value is ~$60K more (it is based on an average appreciation of 8% annually of homes in my neighborhood).

I know a home's value is only what someone is willing to pay for it, but the disparity in estimates between those two services has always bothered me.

10
carlmcqueen 22 hours ago 1 reply      
A different article, which I'm having trouble tracing back now, mentioned that this acquisition is more like the same company owning match.com and tinder.

Does anyone know if the intention is to leave both up?

article: http://www.bloomberg.com/news/2014-07-28/zillow-to-acquire-t...

"Rascoff said in an interview that the deal to buy Trulia signals that Zillow is creating a portfolio of online real estate brands, which lets the company appeal to the broadest audiences and attract the biggest set of real estate advertisers. The strategy is akin to how IAC/InterActiveCorp (IACI) has multiple online dating brands such as Match.com and Tinder, he said."

Seems odd to leave both up when they're so similar.

11
balor123 18 hours ago 1 reply      
A question for the RE insiders here. What restrictions are there for building services off Redfin/MLS and Zillow listings?

Am I allowed to blog about listings or do I need to be an agent to use image and listing data? I get mixed opinions elsewhere about whether this falls under fair use.

I notice that Redfin provides OpenGraph annotations but the TOS disallow sharing. Am I permitted to share on Facebook, Pinterest, etc? What about on other websites? Is it possible to build a vertical search engine based on these details?

12
Nicholas_C 22 hours ago 0 replies      
It seems like every time I see an interesting deal Qatalyst Partners are involved (Priceline/OpenTable, Elance/oDesk, Yahoo/Tumblr, probably some more I'm missing). Those guys get to work on some really interesting stuff.
13
LukeB_UK 21 hours ago 2 replies      
As someone from the UK, I've never heard of either of these sites.

Kind of amazing that a business that's limited to one country can be worth so much.

14
bjorns 12 hours ago 0 replies      
My first reaction was this headline has to come from a markov chain built out of Business Insider and My Little Pony.
15
epc 21 hours ago 1 reply      
Curious how this impacts Streeteasy, which Zillow just bought some time in the past year. Streeteasy is the de facto MLS (in a sense) for NYC (Manhattan doesn't have an MLS, nor does Brooklyn. Uncertain about Staten Island, Queens or Bronx).
16
smackfu 22 hours ago 0 replies      
The thread from when this was just a rumor: https://news.ycombinator.com/item?id=8081176
17
tindrlabs 22 hours ago 1 reply      
Trulia was so much better than Zillow -- reminds me of Flipboard buying Zite.
18
lcm133 21 hours ago 0 replies      
Don't sleep on Homesnap!

* Snap a photo of any home to find out all about it
* Similar to Redfin in terms of model and data access
* Unique iPhone, iPad, Android and Web experience

http://www.homesnap.com

27
Is Quantum Intuition Possible?
26 points by jonbaer  7 hours ago   12 comments top 7
1
Strilanc 2 minutes ago 0 replies      
I think that quantum physicists do get an (imperfect) intuition for it, but they can't communicate it to laypeople because the inferential distance is too high.

I also think that it takes awhile for societies to internalize these things, and to find what works pedagogically, so it's hard to say how intuitive it will end up being. For example, starting teaching with quantum computer science might be really beneficial because qubits bypass some of the difficulties (differential equations, waves vs particles, tunneling) while keeping most of the weird (superposition, entanglement, measurement, interference, counterfactuals).

2
wahrsagevogel 1 hour ago 0 replies      
Yes it is. And it is called mathematical reasoning. You don't need to have some childish intuition to grasp the tunnel effect. It is enough to understand the concept of an exponential function of complex arguments.

Complicated problems need abstract reasoning not handwavy explanations. "Imagine two rivers which merge at some point and two boats floating down these rivers. Which one will be first? Think about it and solve your race condition problem." Pure nonsense. We have powerful ways to think about programs with programming languages. For physics this language is mathematics. If you want to understand something study the language.

And bits like "Babies also intuitively grasp that objects exist even when youre not looking at them, a concept called object permanence that goes against the classic Copenhagen interpretation of quantum mechanics [...]" are just sad. An observation in the physical sense and the observation of a newborn are not the same concept. Babies have a memory. Computers have memory. Therefore babies are computers.

3
teekert 3 hours ago 1 reply      
The same goes for relativity. I think you will develop such intuition over time. As a biologist I have a firm grasp on what a nanometer is, how big a protein is, what to expect when I "glue" two of them together or what happens when I pour formalin over a cell.

It is extremely hard for me to talk about work to a layman who does not even know what DNA is. Try explaining that every base pair is about 0.3 nm apart and we have about two meters of DNA, spread over 30,000 genes. What does it mean for promoter accessibility related to transcription factors? It would take some time to calculate these things but I have a pretty nice picture in my head of vibrating proteins and the chaos it is down there (Movies such as this one: https://www.youtube.com/watch?v=yKW4F0Nu-UY are extremely misleading!)
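As a rough sanity check of those two figures (assuming about 0.34 nm per base pair and roughly 6 x 10^9 base pairs in a diploid human cell):

    6 \times 10^{9}\ \text{bp} \times 0.34\ \text{nm/bp} \approx 2\ \text{m}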

I had a colleague who got the molecular structure of optically active molecules in his head when he saw the absorption and fluorescence spectra of molecules.

But try getting that into the mind of an infant who learns by doing. Would it make sense? It reminds me of a piece from the book of Woz, where he explains how significant it was for his development that his father explained the workings of a transistor by explaining the flows of electrons. It made understanding much easier for him.

I guess, when a kid is ready it won't hurt to feed her/him such information but I wouldn't teach my son that he could in theory tunnel through the wall and end up in the neighbor's kitchen...

Nice topic by the way, I remember watching the quantum mechanics movies (and the relativity movies) on my Encarta CD-ROM (I was 12 I think) over and over and just enjoying that feeling of uneasiness, the feeling that there is another world down there.

4
TuringTest 4 hours ago 0 replies      
Vibrating porridge, that's how I envision quantum phenomena; particles would be the lumps in the purée.

I find that it helps thinking about liquid rather than solid media as it provides better insight for things like "spooky action at a distance", "uncertainty principle" or "probabilistic nature of the waveform", as it dispels the learned intuitions about position and speed of solids that the article speaks about.

There are "particle properties" also that need to be accounted for, and I think of those as "recognizable patterns" measured around an area of interest located where the particle position would be, rather than physical "things" or individual objects.

5
rbanffy 29 minutes ago 0 replies      
As an exercise, in college, a couple of colleagues and I started to play 4D Tic Tac Toe. We started with a 4x4x4x4 cube on the assumption that, when the game became too easy, we'd have grasped the idea and would move to a 5x5x5x5 cube. It took us many hours of play, explanations and headaches before we could move on, but we did it.

It's a lot of work to represent it, even as a series of 3D projections.

6
chopin 6 hours ago 3 replies      
The work of Couder et al. on oil droplets might show a way for more of a "quantum intuition". It's a pity that the article does not mention this work.

See eg. http://arxiv.org/abs/1401.4356

7
hyp0 3 hours ago 1 reply      
idea: a quantum fps. By interacting within a world governed by quantum principles, you might develop an intuition. Similar to muscle memory, or training in mathematical notation.

Especially, taking a leaf out of Ender's Game, for children. The average human needs years of training for fluent reading, writing and arithmetic.

Though, might be computationally infeasible to do with adequate accuracy; would necessarily be extraordinarily bizarre and counter-intuitive; and... might not be much fun.

28
Why cans of soup are shaped the way they are
179 points by squeakynick  21 hours ago   98 comments top 21
1
jscheel 19 hours ago 6 replies      
My grandpa designed the bottom of modern beverage cans. The reason for that shape is two-fold. First, the bulge is for pressure resistance. Second, the rest of the design was created specifically for the optimal application of the epoxy spray that prevents your drink from developing a metallic taste. Unfortunately, he didn't patent his work and Anheuser Busch ultimately took his idea for themselves. Never prevented them from being his client though, for other seamless can tooling work.
2
ethomson 20 hours ago 1 reply      
The "So why not for everyone?" section is interesting, but misses the reason the tuna can is actually the worst shape on the list: it's optimized for surface area of the top and bottom of the can! Tuna is actually cooked in the can (retort cooking) and the high surface area is beneficial to this process.

Maybe not the most useful thing I learned in Calc 1, but the thing I remember best.

3
egypturnash 21 hours ago 3 replies      
I feel like there are other efficiencies not considered in the article. Namely spoilage and portion size. I mean, you're not usually gonna need a bunch of tuna at once, so you don't want to have a huge can full of enough tuna for twenty people - but you also need it to be in a can big enough to manipulate. Also one with a big enough visible label space to actually put something legible on it.
4
Luyt 18 hours ago 1 reply      
The article asserts:

"Aesthetically, a slightly taller can looks nicer. The Golden ratio is approx 1.6, so a can with a height of approx 1.6x it's diameter (3.2x the radius) would be very appealing."

However, it is a myth that the Golden Ratio is the most appealing ratio. Many things, from the Parthenon to paper sizes, don't have a golden ratio and that doesn't make them less attractive[1].

Also, the supermarket in my neighbourhood sells soup in a variety of containers (cans, tetrabriks, plastic bags) and if only the standard soup can would sell well, I wouldn't see the other packages.

[1] http://skeptoid.com/episodes/4325

"Perhaps the best known pseudoscientific claim about the golden ratio is that the Greek Parthenon, the famous columned temple atop the Acropolis in Athens, is designed around this ratio. Many are the amateurs who have superimposed golden rectangles all over images of the Parthenon, claiming to have found a match. But if you've ever studied such images, you've seen that it never quite fits, at least not any better than any other rectangle you might try. That's because there's no credible historical or documentary evidence that the Parthenon's designers, who worked more than a century before Euclid was even born, ever used the golden ratio in any way, or even knew of its existence."

5
curiousgeorgio 21 hours ago 10 replies      
> If we wanted to use a shape that packed perfectly efficiently, wed use some kind of cuboid... But we dont see many cubes on shelves. Let's look at cylinders now...

The only real reason the article gives against using cuboids is that "the edges would be stress points", but it goes on to imply that this is mostly solved with "filleted (rounded) edges to reduce stress concentrations and to make them easier to manufacture."

I enjoyed the rest of the article relating to the optimal dimensions of the cylinder, but I still don't really understand why more products don't use cuboids (with or without filleted edges). Surely the space savings for shipping and shelving would be pretty significant, no?

6
fencepost 9 hours ago 0 replies      
This leaves out or glosses over some very important areas, namely the use to which the can is being put. Soup cans are taller than ideal for minimum material use in part because it's easier to pour from them, while on the other hand a tall skinny tuna can would be hated. For a more extreme example, consider the guava paste can - an inch high and 6-7 inches across because of how the product inside is used. Think about trying to pour your soup out of that.
7
unreal37 19 hours ago 2 replies      
The article fails to mention the main reason food is shaped "inefficiently": serving size.

Besides being cooked in the can (as egypturnash points out), a can of tuna contains two sandwiches of tuna. A jar of tomato sauce contains two servings of tomato sauce. A can of soup contains two servings of soup.

Yes, you have to be able to hold it in your hand, and stack it on a shelf. But you can't make a smaller can because that would be "less soup". No one wants to buy smaller cans of soup, and the manufacturers certainly don't want to sell less soup per purchase either.

Economics trumps material efficiency.

8
lnanek2 17 hours ago 1 reply      
We do have a lot of cubes. Juice boxes. Cereal boxes. Wine boxes. Cylinders are pretty much only used when you need to fake larger capacity for consumer preference (cardboard nut containers in super markets vs. plastic cubes of them at Costco) or you need to use metal for some reason (e.g. tanks of pressurized gas, holding liquids with the top off, etc.). I think he kind of misses the point that cubes are superior in terms of space usage then goes and analyzes how space efficient something we are only forced to use for other reasons is.
9
lafar6502 6 hours ago 0 replies      
I think the authors overlooked the most obvious reason for cans being cylindrical: a cylinder is very resistant to vertical compression, so you can stack cans very high on pallets without worrying about bottom layer deformation.
10
hackuser 19 hours ago 1 reply      
> The purpose of a food can is to store food.

Absolutely not. The purpose is to maximize profit.

The can improves profit via sales (being appealing on the shelf and in the kitchen cabinet; perhaps a familiar shape sells better), marketing (the image of the brand and the product, including environmental issues), distribution (the obvious costs and the value of being appealing to the sales channel (e.g., oversized products might be unappealing to the supermarket)), manufacturing costs, functionality for the consumer (food stays fresh, fits standard can-openers, etc.) etc etc.

11
coryking 11 hours ago 0 replies      
Cans of soup could also be shaped the way they are due to the fact that it might be easier to pasteurize a cylindrical can than a cube. The heat can be more evenly applied to a cylinder than a cube.
12
arethuza 21 hours ago 0 replies      
On a related point - I just finished reading "Atomic Accidents: A History of Nuclear Meltdowns and Disasters" and the author specifically mentions how the "can of soup" shape is pretty dreadful for holding fissile materials:

http://www.amazon.com/Atomic-Accidents-Meltdowns-Disasters-M...

13
danielweber 20 hours ago 0 replies      
My first day of calculus in high school, Mr Haskell told us that in a few months we would be able to prove the ideal size for a soup can. And we did!
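For reference, the standard exercise he is referring to goes roughly like this: minimise the surface area of a closed cylinder holding a fixed volume V.

    A(r) = 2\pi r^2 + 2\pi r h, \qquad V = \pi r^2 h \ \text{(fixed)}
    \Rightarrow\ A(r) = 2\pi r^2 + \frac{2V}{r}, \qquad
    A'(r) = 4\pi r - \frac{2V}{r^2} = 0 \ \Rightarrow\ V = 2\pi r^3 \ \Rightarrow\ h = \frac{V}{\pi r^2} = 2r

So the material-minimising can has its height equal to its diameter, which is the baseline the article compares real cans against.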
14
rwmj 19 hours ago 1 reply      
How about the cost of the food? Tuna is expensive, condensed milk is cheap. That matters relative to the cost of the can because it's more important to make the condensed milk can cheap to store a cheap ingredient, than to worry about the minor cost of aluminium compared to expensive ingredients like tuna.
15
victorquinn 19 hours ago 1 reply      
No mention of can openers, and how they work their way around the cylindrically shaped can?

Think of how a can opener would work with a cube, hexagonal, or other shaped can.

Cool article, but seems it builds a lot of assumptions into its analysis.

16
peterwwillis 21 hours ago 2 replies      
Cans of soup are definitely not cylindrical for space-efficiency. They're cylindrical because from the assembly line to shipping to the customer, they simply work better. They handle dents well, they roll along assembly lines fluidly, they keep the orientation of the product labels, they're easy to inspect for quality, and they pack and unpack well. Obviously they also stay put on a shelf...

In terms of dimensions there are several factors to consider: label size, stacking efficiency and directional integrity. If you want a nice big color photo of your product, a taller, slimmer container allows for a large color background and plenty of text on both the front and rear labels. Whether it's skinny or wide determines how other products can be stacked around, above or below it. Some foods (like tuna) keep their shape/consistency better when laid horizontally, which keeps them from breaking up while being transported. Similar foods hold together better when the pieces are larger, so larger portions of canned fish have the typical vertical orientation. And of course there's only so much horizontal space that can be allocated per unit before the shelves burst at the seams.

For sealable cuboid containers, more and more packages are being given grippable edges to make them easier to handle, since the customer doesn't use the entire product at once (http://ecx.images-amazon.com/images/I/81W3JCB8tHL._SL1500_.j...). Resealable bagged containers are also becoming more popular, as they reduce the amount of air in the container, pack more efficiently, save weight, and are more easily recycled. (http://www.gofoodindustry.com/uploads/members/comp-1509/file...)

17
stansmith 21 hours ago 0 replies      
Thanks, I also liked reading the last article about ice cream.

Your articles remind me how much calculus/math I have forgotten since leaving school and, sadly, how little occasion there is to use calculus in everyday life :(

18
bnolsen 14 hours ago 0 replies      
The M&M shape is the most efficient for minimizing empty space in shipping containers. They missed that one.
19
bsilvereagle 20 hours ago 0 replies      
There are lots of other really good articles on that blog: http://www.datagenetics.com/blog.html
20
qntmfred 15 hours ago 0 replies      
They need to start optimizing the shape of nacho cheese jars so that I stop getting rim cheese all over my knuckles every time I dip a chip.
21
vsbuffalo 19 hours ago 1 reply      
Great post, but "data genetics"? It would be nice if they didn't misuse the word genetics; this site has nothing to do with genetics.
29
Dvdisaster: improving data survivability on optical archive media
25 points by walterbell  8 hours ago   7 comments top 4
1
traxtech 1 hour ago 1 reply      
Parchive is also a must-have

http://en.wikipedia.org/wiki/Parchive

2
ghshephard 2 hours ago 1 reply      
If people are really interested in using DVDs for archival purposes, they should consider these: http://www.mdisc.com/, rated for 1000+ years.
3
sneak 2 hours ago 1 reply      
I believe there are also some good tools out there for printing larger quantities of data onto paper, using ECCs and such. QR codes and other such optical encodings are generally designed for much less optimal environments and smaller data sizes; with a laser printer and a good scanner, fairly good data densities can be achieved.

It's an interesting area of research/hacking. Personally I rather love the idea of the Rosetta Project: http://rosettaproject.org/disk/concept/
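The common thread between dvdisaster, Parchive and these paper-backup schemes is Reed-Solomon parity: extra bytes are stored so that a bounded amount of damage can be repaired later. A minimal sketch of that idea in Python, assuming the third-party reedsolo package is installed (the parity size and the corrupted byte positions are arbitrary, chosen only for illustration):

    from reedsolo import RSCodec

    rsc = RSCodec(32)                  # 32 parity bytes; can repair up to 16 corrupted bytes
    payload = b"important archival data " * 4

    protected = rsc.encode(payload)    # payload with Reed-Solomon parity appended

    # Simulate media damage by flipping a few bytes.
    damaged = bytearray(protected)
    for i in (3, 40, 77):
        damaged[i] ^= 0xFF

    decoded = rsc.decode(bytes(damaged))
    # Recent reedsolo versions return (message, message_with_ecc, errata_positions);
    # older ones return just the message.
    recovered = decoded[0] if isinstance(decoded, tuple) else decoded
    assert bytes(recovered) == payload

Tools like dvdisaster apply the same principle at the disc-image level rather than per-message, so localized scratches become correctable byte errors.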

4
walterbell 5 hours ago 0 replies      
GPLv2 xlocate, by Alexandre Oberlin, is a GUI front-end to locate/slocate/mlocate. It supports indexing of offline media, e.g. offsite DVD backups.

http://qt-apps.org/content/show.php/xlocate?content=99529

manual: http://migo.info/xlocate/readme_en.html
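For a rough idea of what such an index buys you, here is a minimal Python sketch that catalogues a mounted disc so it can be searched after the disc goes offsite; the mount point, label and paths below are made up for illustration:

    import json, os, sys

    def catalog(mount_point, label, out_path):
        # Record every file path and size on a mounted disc for later offline searching.
        entries = []
        for root, _dirs, files in os.walk(mount_point):
            for name in files:
                full = os.path.join(root, name)
                entries.append({"path": os.path.relpath(full, mount_point),
                                "size": os.path.getsize(full)})
        with open(out_path, "w") as fh:
            json.dump({"label": label, "files": entries}, fh, indent=2)

    def search(catalog_path, needle):
        # Grep the saved catalogue without mounting the disc.
        with open(catalog_path) as fh:
            cat = json.load(fh)
        return [e["path"] for e in cat["files"] if needle.lower() in e["path"].lower()]

    if __name__ == "__main__":
        # e.g.  python catalog.py /media/dvd backups-2014 backups-2014.json
        catalog(sys.argv[1], sys.argv[2], sys.argv[3])

xlocate builds on the same idea, with locate's own database format and a GUI on top.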

30
Tracking.js: A modern approach for computer vision on the web
139 points by zenorocha  19 hours ago   15 comments top 5
1
splintercell 17 hours ago 2 replies      
I can't get it to work (it detects my white shirt, around the shoulder area, as a face, but not my actual face). Is this a repeat of this: https://www.youtube.com/watch?v=t4DT3tQqgRM
2
joeyspn 18 hours ago 0 replies      
This is really cool and is going to help me A LOT with a detection script I was working on for a WebRTC video website.

I was looking for something like this a couple of weeks ago and I decided to use Python with Fisherfaces [0]. Now I'm really curious about Viola-Jones and its accuracy via JS...

[0] https://github.com/bytefish/facerec

3
jbhatab 18 hours ago 1 reply      
I am very interested in computer vision libraries. How exactly does this stack up to OpenCV, or is that even a fair comparison?
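For a rough comparison: the Viola-Jones approach mentioned in the comments above is also what OpenCV's Haar cascades implement, so the classical CPU-side version runs in a few lines of Python. A sketch, assuming the opencv-python package is installed and a local face.jpg exists (both are assumptions for illustration):

    import cv2

    # Viola-Jones face detection using the Haar cascade bundled with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("face.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # scaleFactor and minNeighbors trade detection rate against false positives.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("face_detected.jpg", img)

tracking.js appears to run the same family of detector in the browser, so the comparison is less about accuracy than about where the computation happens.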
4
notastartup 7 hours ago 1 reply      
This looks pretty awesome.

I know it's a bit off-topic, but on this page

http://trackingjs.com/examples/face_hello_world.html

where can you find such a nice 'Chrome browser' graphic? It looks really solid. I want to use it on my website.

5
moron4hire 16 hours ago 0 replies      
It's funny, I look almost exactly like the guy on the left in the "Face (Image)" example, and I have a very good quality webcam, yet it does a really terrible job of detecting my own, live face.