hacker news with inline top comments - 23 Sep 2016
1
Google Is Trying to Get Oracle in Trouble for a $1B Open Secret vice.com
52 points by ivank  1 hour ago   8 comments top 5
1
grellas 19 minutes ago 0 replies      
It is huge that a lawyer would disclose in a public setting such important confidential numbers. I even have trouble seeing how something like that could be "accidental". It is basically a force of habit among experienced litigators to think and to say, in any number of contexts, "I know this may be relevant but I can't discuss it because it is the subject of a protective order" or "I know the attorneys know this information but it was disclosed under the protective order as being marked for 'attorneys' eyes only'". In all my years of litigating, I don't believe I have ever heard a casual slip on such information, even in otherwise private contexts (e.g., attorneys are discussing with their own client what an adverse party disclosed and are very careful not to disclose something marked for "attorneys' eyes only"). Certainly willful disclosures of this type can even get you disbarred.

But the significance of this breach is not the only thing that caught my eye.

These litigants have been entrenched in scorched-earth litigation for years now in which the working M.O. for both sides is to concede nothing and make everything the subject of endless dispute. Big firm litigators will often do this. It is a great way to rack up bills. Clients in these contexts do not oppose it and very often demand it. And so a lot of wasteful lawyering happens just because everyone understands that this is an all-out war.

To me, then, it seems that the big problem here (in addition to the improper disclosures of highly important confidential information in a public court hearing) was the resistance by the lawyers who did this to simply acknowledging that a big problem existed that required them to stipulate to getting the transcript sealed immediately. Had they done so, it seems the information would never have made the headlines. Instead (and I am sure because it had become the pattern in the case), they could not reach this simple agreement with the other lawyers to deal with the problem but had to find grounds to resist and fight over it.

I know that we as outside observers have limited information upon which to make an assessment here and so the only thing we can truly say from our perspective is "who knows". Yet, if the surface facts reflect the reality, then it is scarcely believable that the lawyers could have so lost perspective as to take this issue to the mat, resulting in such damage to a party. Assuming the facts are as they appear on the surface, this would be very serious misconduct and I can see why Judge Alsup is really mad that it happened.

2
mmastrac 1 hour ago 1 reply      
While this is a good story, the headline largely misses the point that the body makes - the only reason this is an open secret is because an Oracle lawyer revealed it in public.

A better title might be:

"Google is trying to get Oracle in trouble for revealing confidential figures"

3
segmondy 47 minutes ago 1 reply      
Oracle should pay; they knew exactly what they were doing. If it were them, they would be suing too. Live by the sword, die by the sword.
4
ocdtrekkie 10 minutes ago 0 replies      
If anything, my only sadness is that more of Google's dirty laundry wasn't aired. The illusion that Google search is winning because people prefer it, and the claim that Google doesn't make money on Android, are both things I'm happy to see debunked. Google's anti-monopoly arguments fundamentally hinge on concepts like these.

And if a lawyer did break the law by doing it, I say she belongs on the same high pedestal people put Snowden on.

5
suyash 10 minutes ago 0 replies      
Google is being plain old Evil.
2
Haiku Project haiku-os.org
107 points by gscott  2 hours ago   31 comments top 10
1
miles 1 hour ago 2 replies      
I always smile wistfully whenever Haiku/BeOS comes up... its performance and sheer fun remain unmatched by Windows, OS X, Linux, etc. (try it out in a VM).

dr_dank summed it up best back in 2003 [0]:

BeOS was demonstrated to me during my senior year of college. The guy giving the talk played upwards of two dozen mp3s, a dozen or so movie trailers, the GL teapot thing, etc. simultaneously. None of the apps skipped a beat. Then, he pulled out the showstopper.

He yanked the plug on the box.

Within 20 seconds or so of restarting, the machine was chugging away with all of its media files in the place they were when they were halted, as if nothing had happened.

Damn.

[0] https://slashdot.org/comments.pl?sid=66224&cid=6095472

2
AstroJetson 20 minutes ago 2 replies      

 Huge Hacker Comments
 about Haiku OS today
 But so few Haiku's

 BE/OS clone
 quiet power, elegance
 just simplicity

 Plan 9 and Minux
 all have their followers
 will Haiku get love

 More than a small toy
 Time will be the decider
 Windows look out now

3
mikestew 23 minutes ago 1 reply      
I've played with Haiku in a VM in the past. But what am I supposed to do with the OS? Use it as a daily driver? Does it fit some niche use case? I have an old netbook; I'd load Haiku on the boot partition, I really would. I'm a sucker for something different (given the evils of Big OS and all), and for the underdog. Hell, I might even contribute if I have something (coding-wise) to give.

So I load 'er up on the ol' boot partition and...what? Music production? Just a lightweight, novel web surfing machine? Someone, anyone, give me a reason to spend my weekend farting around with OS installs.

Counter to that, should the answer be, "meh, it's just something novel to play with", then why are the devs pouring time into it? I guess I'm trying to politely say I kinda don't get the point. (But maybe a good answer to question #1 can help.)

4
adamnemecek 1 hour ago 4 replies      
BeOS (the OS Haiku is a reimplementation of) has an interesting history. It was an OS with a fully async C++ API (very novel for its time, and even now). The fact that it was async made the OS much more responsive and gave it better CPU utilization.

Be, the company behind BeOS, was another company that Apple considered purchasing, but they ended up buying NeXT instead. IIRC they went with NeXT because BeOS didn't have networking back then.

5
gavanwoolery 18 minutes ago 0 replies      
Interesting philosophy on why Haiku is not Linux-based (from the site):

Linux-based distributions stack up software -- the Linux kernel, the X Window System, and various DEs with disparate toolkits such as GTK+ and Qt -- that do not necessarily share the same guidelines and/or goals. This lack of consistency and overall vision manifests itself in increased complexity, insufficient integration, and inefficient solutions, making the use of your computer more complicated than it should actually be.

Instead, Haiku has a single focus on personal computing and is driven by a unified vision for the whole OS. That, we believe, enables Haiku to provide a leaner, cleaner and more efficient system capable of providing a better user experience that is simple and uniform throughout.

6
buckbova 1 hour ago 1 reply      
Something new or interesting to report?
7
Mizza 1 hour ago 0 replies      
I used BeOS as a kid while playing with different operating systems. All of the windows were yellow colored, and you could irrevocably damage your system if you ever tried to change the color.

Good times.

8
rcarmo 50 minutes ago 0 replies      
I love this and would use it if I could actually install it on a modern machine, say a netbook/chromebook/winbook (I'd have to hunt around for a good browser build, SSH and remote desktop, but I like to think those exist/are feasible to self-build).
9
behnamoh 1 hour ago 0 replies      
Another piece of good software that got no attention in this industry. Don't even get me started on so many great languages that failed. Pity :/
10
codegeek 1 hour ago 3 replies      
Let's hope it is not a trademark violation

https://www.haikulearning.com/

EDIT: I jumped the gun on this one. Should have done a bit of googling first.

3
Heavy SSD Writes from Firefox servethehome.com
274 points by kungfudoi  4 hours ago   198 comments top 35
1
lighttower 3 hours ago 5 replies      
Chrome, on my system, is even more abusive. Watch the size of the .config/google-chrome directory and you'll see that it grows to multiple GB in the profile directory.

There is a Linux utility, profile-sync-daemon (PSD), that takes care of all browsers' abuse of your SSD. It's available in the Debian repo, or at [1] for Ubuntu and [2] for source. It uses the `overlay` filesystem to direct all writes to RAM and only syncs the deltas back to disk every n minutes using rsync. Been using this for years. You can also manually alleviate some of this by setting up a tmpfs and symlinking .cache to it (a minimal sketch follows below).

[1] https://launchpad.net/~graysky/+archive/ubuntu/utils
[2] https://github.com/graysky2/profile-sync-daemon

EDIT: Add link, grammar

EDIT2: Add link to source
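
For reference, a minimal sketch of the manual tmpfs-plus-symlink route mentioned above. The mount point, the 2G size, and keeping a .cache.disk copy are illustrative assumptions, and anything written to the cache is lost on reboot:

 sudo mkdir -p /mnt/ramcache
 sudo mount -t tmpfs -o size=2G,noatime,mode=1777 tmpfs /mnt/ramcache   # RAM-backed scratch space
 mv ~/.cache ~/.cache.disk                                              # keep the old on-disk cache around
 ln -s /mnt/ramcache ~/.cache                                           # browsers now write their cache to RAM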

2
Yoric 2 hours ago 8 replies      
Hi, I'm one of the Firefox developers who was in charge of Session Restore, so I'm one of the culprits of this heavy SSD I/O. To make a long story short: we are aware of the problem, but fixing it for real requires completely re-architecting Session Restore. That's something we haven't done yet, as Session Restore is rather safety-critical for many users, so this would need to be done very carefully, and with plenty of manpower.

I hope we can get around to doing it someday. Of course, as usual in an open-source project, contributors welcome :)

3
iask 1 minute ago 0 replies      
So Firefox is also expensive to run in terms of energy consumption. No wonder the fans on my MacBook Pro always sound like a jet engine whenever I have several tabs open. Seriously!

Disclaimer: I dual boot (camp) windows 7 on my mac.

4
zbuf 3 hours ago 4 replies      
I have been running Firefox for a long time with an LD_PRELOAD wrapper which turns fsync() and sync() into a no-op.

I feel it's a little antisocial for regular desktop apps to assume it's their place to do this.

Chrome is also a culprit; similar sync'ing caused us problems at my employer's, inflating pressure on an NFS server where /home directories are network mounts. Even where we had already put the cache on a local disk.

At the bottom of these sorts of cases I have on more than one occasion found an SQLite database. I can see its benefit as a file format, but I don't think we need full database-like synchronisation on things like cookie updates; I would prefer to lose a few seconds (or minutes) of cookie updates on power loss than over-inflate the I/O requirements.
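
For the curious, a hedged sketch of the kind of LD_PRELOAD wrapper described above: build a tiny shared object whose fsync()/fdatasync()/sync() do nothing, then launch the browser with it preloaded. The file names and the extra fdatasync() override are assumptions for illustration; this deliberately trades durability for fewer forced flushes, so use it at your own risk:

 cat > nosync.c <<'EOF'
 /* no-op replacements for the libc flush calls */
 int fsync(int fd)     { (void)fd; return 0; }
 int fdatasync(int fd) { (void)fd; return 0; }
 void sync(void)       { }
 EOF
 cc -shared -fPIC -o nosync.so nosync.c
 LD_PRELOAD="$PWD/nosync.so" firefox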

5
RussianCow 3 hours ago 6 replies      
Serious question: Is 12GB a day really going to make a dent in your SSD's lifespan? I was under the impression that, with modern SSDs, you basically didn't have to worry about this stuff.
6
Someone 40 minutes ago 0 replies      
12GB/day is about 140kB/second, or one Apple II floppy disk every second.

It is also about 1x CD speed (yes, you could almost record uncompressed stereo CD-quality audio around the clock for that amount of data).

All to give you back your session if your web browser crashes or is crashed.

Moore's law at its best.

7
towerbabbel 31 minutes ago 0 replies      
I observed something similar several years ago: http://www.overclockers.com/forums/showthread.php/697061-Whe...

I still think the worry about it wearing out an SSD is overblown. The 20GB-per-day write rating is extremely conservative and mostly there to rule out more pathological use cases, like taking a consumer SSD, using it as the drive for some write-heavy database load with 10x+ write amplification, and then demanding a new one on warranty when it wears out.

Backing up the session is still sequential writes so write amplification is minimal. After discovering the issue I did nothing and just left Firefox there wearing on my SSD. I'll still die of old age before Firefox can wear it out.

8
rayiner 3 hours ago 0 replies      
Doing all this work is also probably burning battery life. An SSD can use several watts while writing, versus as low as 30-50 milliwatts at idle (with proper power management).
9
blinkingled 3 hours ago 10 replies      
Even better, just disable session restore entirely via browser.sessionstore.enabled. Since Firefox 3.5 this preference is superseded by setting browser.sessionstore.max_tabs_undo and browser.sessionstore.max_windows_undo to 0.

As I understand it, this feature is there so that if the browser crashes it can restore your windows and tabs - I don't remember having a browser crash on me since the demise of Flash.
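
If you would rather just write the session far less often instead of disabling the feature, the relevant prefs can also be set from a user.js file in the Firefox profile directory. A rough sketch; the profile path is a placeholder and the 15-minute interval is an arbitrary example (browser.sessionstore.interval is in milliseconds):

 cat >> ~/.mozilla/firefox/your-profile.default/user.js <<'EOF'
 // write the session file every 15 minutes (value in milliseconds)
 user_pref("browser.sessionstore.interval", 900000);
 // the prefs mentioned above; 0 disables tab/window undo history
 user_pref("browser.sessionstore.max_tabs_undo", 0);
 user_pref("browser.sessionstore.max_windows_undo", 0);
 EOF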

10
robin_reala 3 hours ago 2 replies      
It's always annoying when an issue like this is reported yet no bugzilla reports are mentioned. Has anyone else filed this already, or shall I?
11
raverbashing 3 hours ago 0 replies      
Are these writes being sync'd to disk?

Because FF may die but the OS will save it later. That's fine

Not every write to a file means a write to disk

12
weatherlight 1 hour ago 0 replies      
Spotify does some pretty evil I/O as well. https://community.spotify.com/t5/Desktop-Linux-Windows-Web-P...
13
digi_owl 29 minutes ago 0 replies      
I do wonder if their mobile version has a similar problem. I have noticed it chugs badly when opened for the first time in a while on Android, meaning I have to leave it sitting for a while so it can get things done before I can actually browse anything.
14
nashashmi 1 hour ago 0 replies      
On a related note: also see http://windows7themes.net/en-us/firefox-memory-cache-ssd/

Just another firefox ssd optimization.

Edit: And see bernaerts.dyndns.org/linux/74-ubuntu/212-ubuntu-firefox-tweaks-ssd

It talks about sessionstore.

15
zx2c4 1 hour ago 2 replies      
I have fixed this issue forever. I got a Thinkpad P50 with 64 gigs of ram. So, I just mount a tmpfs over ~/.cache.

I actually use a tmpfs for a few things:

 $ grep tmpfs /etc/fstab
 tmpfs  /tmp                tmpfs  nodev,nosuid,mode=1777,noatime                    0 0
 tmpfs  /var/tmp/portage    tmpfs  noatime                                           0 0
 tmpfs  /home/zx2c4/.cache  tmpfs  noatime,nosuid,nodev,uid=1000,gid=1000,mode=0755  0 0

16
vesinisa 3 hours ago 0 replies      
I've already moved all my browser profiles to `/tmp` and set up boot scripts to persist them across boot / shutdown. E.g. for Arch Linux see https://wiki.archlinux.org/index.php/profile-sync-daemon

This is a far superior solution to fiddling with configuration options in each individual product to avoid wearing down your SSD with constant writes. Murphy's law has it that such hacks will only be broken by the next version upgrade.

And no, using Chrome does not help. All browsers that use disk caching or keep complex state on disk are fundamentally write-heavy on an SSD. The amount of traffic itself is not even a particularly good measure of SSD wear, since writing a single kilobyte of data on an SSD cannot be done at the hardware level without rewriting a whole erase block, which is generally several megabytes in size. So changing a single byte in a file is no less taxing than a huge 4 MB write.

17
justinrstout 2 hours ago 0 replies      
Theodore Ts'o wrote about a similar Firefox issue back in 2009: https://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
18
Animats 31 minutes ago 1 reply      
Firefox is relying too much on session restore to deal with bugs in their code. Firefox needs to crash less. With all the effort going into multiprocess Firefox, Rust, and Servo, it should be possible to have one page abort without taking down the whole browser. About half the time, session restore can't restore the page that crashed Firefox anyway.
19
CoryG89 3 hours ago 1 reply      
Maybe I am not understanding this right, but is this saying that Firefox will continually keep writing to the disk while idle? Does anyone know more about this? Why would this be needed to restore session/tabs? Seems like it should only write after a user action or if the open page writes to storage? Even if it was necessary to write continually while idle, how could it possibly consume so much data in such a short period of time?
20
alyandon 3 hours ago 0 replies      
Yep, I have a brand new SSD drive that over the course of a few months accumulated several TERAbytes (yes - TERA) of writes directly attributable to the default FF browser session sync interval coupled with the fact I leave it open 24/7 with tons of open tabs.

Once I noticed that excessive writes were occurring, it was easy for me to identify FF as the culprit in Process Hacker but it took much longer to figure out why FF was doing it.

21
tsukikage 3 hours ago 1 reply      
The interesting question here is, why is the browser writing data to disk at this rate?

If it's genuinely receiving new data at this rate, that's kind of concerning for those of us on capped/metered mobile connections. The original article mentions that cookies accounted for the bulk of the writes, which is distressing.

If it's not, using incremental deltas is surely a no-brainer here?

22
Nursie 2 hours ago 1 reply      
Firefox has been terrible for disk access for many years. I remember I had a post-install checklist (to follow by hand; I never actually automated it) that I would run through on my Linux boxes back in about 2003 that would cut down on this and speed up the whole system.

Basically chattr +i on a whole bunch of its files and databases, and everything's fine again...

23
leeoniya 3 hours ago 2 replies      
I'm not seeing these numbers, using the I/O columns in Process Explorer. I'm running Nightly Portable with maybe 80 tabs open/restored.
24
HorizonXP 3 hours ago 5 replies      
Wow, that's really unfortunate.

I just built a new PC with SSDs, and switched back to Firefox. Even with 16GB of RAM on an i3-2120, Firefox still hiccups and lags when I open new tabs or try to scroll.

This new issue of it prematurely wearing out my SSDs will just push me to Chrome. Hopefully it doesn't have the same issues.

25
yashafromrussia 1 hour ago 0 replies      
Sounds sweet, I'll try it out. How does it compare to ack (ack-grep)?
26
caiob 3 hours ago 0 replies      
That goes to show how space/memory hungry and bloated browsers have become.
27
Sami_Lehtinen 2 hours ago 1 reply      
uBlock also keeps writing "hit counts" to disk all the time, and for some strange reason they've chosen a database page size of 32k, so each update writes at least 32kB.
28
Freestyler_3 2 hours ago 0 replies      
I use Opera on Windows. No idea how to check or change the session storage interval.

Anyone got ideas on that?

29
rsync 3 hours ago 0 replies      
I continue to be impressed with the content and community at servethehome - it's slowly migrated its way into my daily browsing list.
30
aylons 3 hours ago 2 replies      
On Linux, where is this written? Inside the home folder?

Maybe moving this folder to an HDD would suffice.

32
amq 3 hours ago 0 replies      
Observed similar behavior with Skype.
33
PaulHoule 2 hours ago 0 replies      
The whole "restore your session" thing is the one of the most user hostile behaviors there is.
34
kordless 3 hours ago 1 reply      
I seriously dislike Firefox, but must use it at work due to browser incompatibility issues with Chrome and sites I use heavily. Anything that makes the experience better is much appreciated.
35
rackforms 2 hours ago 3 replies      
Putting aside how this may not be all that bad for most SSD's, does anyone know when this behavior started?

Firefox really started to annoy me with its constant and needless updates a few months back, the tipping point being breaking almost all legacy extensions (in 46, I believe). This totally broke the Zend Debugger extension; the only way forward would be to totally change my development environment. I'm 38 now, and apparently well beyond the days when the "new and shiny" holds value. These days I just want stability and reliability.

Firefox keeps charging forward and, as far as I can tell, has brought nothing to the table except new security issues and breaking that which once worked.

I haven't updated since 41 and you know what, it's nearly perfect. It's fast, does what I need it to do, and just plain old works.

Firefox appears to have become a perfect example of developing for the sake of developing.

4
August 2016 Lisp Game Jam Postmortem stevelosh.com
36 points by nodivbyzero  1 hour ago   3 comments top
1
elliotec 7 minutes ago 2 replies      
How is this a postmortem? Everything seems to have gone well and nobody/nothing died...
5
Ripgrep A new command line search tool burntsushi.net
257 points by dikaiosune  5 hours ago   48 comments top 18
1
losvedir 2 hours ago 1 reply      
Meh, yet another grep tool.... wait, by burntsushi! Whenever I hear of someone wanting to improve grep I think of the classic ridiculous fish piece[0]. But when I saw that this one was by the author of rust's regex tools, which I know from a previous post on here, are quite sophisticated, I perked up.

Also, the tool aside, this blog post should be held up as the gold standard of what gets posted to hacker news: detailed, technical, interesting.

Thanks for your hard work! Looking forward to taking this for a spin.

[0] http://ridiculousfish.com/blog/posts/old-age-and-treachery.h...

2
minimax 1 hour ago 0 replies      
In contrast, GNU grep uses libc's memchr, which is standard C code with no explicit use of SIMD instructions. However, that C code will be autovectorized to use xmm registers and SIMD instructions, which are half the size of ymm registers.

I don't think this is correct. glibc has architecture specific hand rolled (or unrolled if you will lol) assembly for x64 memchr. See here: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86...

3
bodyfour 2 hours ago 0 replies      
It would be interesting to benchmark how much mmap hurts when operating in a non-parallel mode.

I think a lot of the residual love for mmap is because it actually did give decent results back when single core machines were the norm. However, once your program becomes multithreaded it imposes a lot of hidden synchronization costs, especially on munmap().

The fastest option might well be to use mmap sometimes but have a collection of single-thread processes instead of a single multi-threaded one so that their VM maps aren't shared. However, this significantly complicates the work-sharing and output-merging stages. If you want to keep all the benefits you'd need a shared-memory area and do manual allocation inside it for all common data which would be a lot of work.

It might also be that mmap is a loss these days even for single-threaded... I don't know.

Side note: when I last looked at this problem (on Solaris, 20ish years ago) one trick I used when mmap'ing was to skip the "madvise(MADV_SEQUENTIAL)" if the file size was below some threshold. If the file was small enough to be completely be prefetched from disk it had no effect and was just a wasted syscall. On larger files it seemed to help, though.

4
jonstewart 3 hours ago 0 replies      
Nice! Lightgrep[1] uses libicu et al to look up code points for a user-specified encoding and encode them as bytes, then just searches for the bytes. Since ripgrep is presumably looking just for bytes, too, and compiling UTF-8 multibyte code points to a sequence of bytes, perhaps you can do likewise with ICU and support other encodings. ICU is a bear to build against when cross-compiling, but it knows hundreds of encodings, all of the proper code point names, character classes, named properties, etc., and the surface area of its API that's required for such usage is still pretty small.

[1]: http://strozfriedberg.github.io/liblightgrep

5
dikaiosune 4 hours ago 0 replies      
Compiling it to try right now...

Some discussion over on /r/rust: https://www.reddit.com/r/rust/comments/544hnk/ripgrep_is_fas...

EDIT: The machine I'm on is much less beefy than the benchmark machines, which means that the speed difference is quite noticeable for me.

6
cm3 3 hours ago 1 reply      
To build a static Linux binary with SIMD support, run this:

 RUSTFLAGS="-C target-cpu=native" rustup run nightly cargo build --target x86_64-unknown-linux-musl --release --features simd-accel
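
Assuming the build succeeds, cargo drops the static binary into the target-specific release directory, so a quick smoke test looks something like this (the 'TODO' pattern and src/ path are just examples):

 ./target/x86_64-unknown-linux-musl/release/rg --version
 ./target/x86_64-unknown-linux-musl/release/rg 'TODO' src/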

7
zatkin 13 minutes ago 0 replies      
>It is not, strictly speaking, an interface compatible drop-in replacement for both, but the feature sets are far more similar than different.
8
echelon 3 hours ago 1 reply      
Rust is really starting to be seen in the wild now.
9
pmontra 42 minutes ago 1 reply      
It looks very good and I'd like to try it. However I'm lazy and I don't want to install the whole Rust dev environment just to compile it. Did anybody build a .deb for Ubuntu 16?
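
For what it's worth, one low-effort route while no .deb exists (assuming ripgrep is published on crates.io, and accepting that it does pull in the Rust toolchain the parent would rather avoid):

 curl https://sh.rustup.rs -sSf | sh    # installs rustup and a stable cargo
 cargo install ripgrep                  # builds rg and puts it in ~/.cargo/bin
 ~/.cargo/bin/rg --version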
10
krylon 1 hour ago 1 reply      
When I use grep (which is fairly regularly), the bottleneck is nearly always the disk or the network (in case of NFS/SMB volumes).

Just out of curiosity, what kind of use case makes grep and prospective replacements scream? The most "hardcore" I got with grep was digging through a few gigabytes of ShamePoint logs looking for those correlation IDs, and that apparently was completely I/O-bound, the CPUs on that machine stayed nearly idle.

11
pixelbeat 3 hours ago 1 reply      
Thanks for the detailed comparisons and writeup.

I find this simple wrapper around grep(1) very fast and useful:

http://www.pixelbeat.org/scripts/findrepo

12
Tim61 2 hours ago 0 replies      
I love the layout of this article. Especially the pitch and anti-pitch. I wish more tools/libraries/things would make note of their downsides.

I'm convinced to give it a try.

13
fsiefken 4 hours ago 2 replies      
nice, but does it compile and run on armhf? I don't see any binaries
14
xuhu 3 hours ago 1 reply      
Why not make --with-filename the default even for e.g. "rg somestring"? Leaving it off seems like it could hinder adoption, since grep does it and it's useful when piping to other commands.

Is it enabled when you specify a directory (rg somestring .) ?

15
serge2k 13 minutes ago 0 replies      
> We will attempt to do the impossible

Oh well. Waste of time then.

16
qwertyuiop924 3 hours ago 0 replies      
That is really cool. Although I think this is a case where Good Enough will beat amazing, at least for me (especially given how much I use backrefs).
17
petre 2 hours ago 2 replies      
Does it use PCRE (not the lib, the regex style)? If not, ack is just fine. My main concern with grep is POSIX regular expressions.
18
spicyj 2 hours ago 4 replies      
rg is harder to type with one hand because it uses the same finger twice. :)
6
Cryptpad: Zero Knowledge, Collaborative Real Time Editing cryptpad.fr
40 points by zerognowl  1 hour ago   17 comments top 12
1
teraflop 1 minute ago 0 replies      
Interesting idea.

I think it's kind of odd to draw such a strong comparison to the Bitcoin blockchain. As the technical description [1] points out, the "chainpad" system discards most of the features and properties that make Bitcoin secure against malicious participants. That seems like a totally reasonable design decision for this application, but then describing it as a blockchain just adds confusion.

In fact, the design seems to bear a much closer resemblance to the Bayou optimistic concurrency algorithm [2], with operational transformation as the underlying data model, and some extra crypto on top.

[1]: https://github.com/xwiki-contrib/chainpad

[2]: http://www.cs.utexas.edu/users/lorenzo/corsi/cs380d/papers/p...

2
Ar-Curunir 6 minutes ago 0 replies      
Why do people insist on using the term zero knowledge for simple semantically secure encryption?

Zero knowledge has a very specific meaning inside cryptography. Encrypting something does not make it "zero knowledge".

3
eganist 1 minute ago 0 replies      
I'm looking forward to tptacek's commentary here given his position on in-browser crypto.
4
williamstein 8 minutes ago 0 replies      
I know this is just a proof-of-concept demo, but the "Code Pad" mode is built on CodeMirror and becomes unusably slow as soon as the document gets at all large (a few thousand lines), perhaps due to them not implementing a range of tricks for transforming CodeMirror's content efficiently, like the setValueNoJump extension here: https://github.com/sagemathinc/smc/blob/master/src/smc-webap...

DISCLAIMER: I've spent way too much time on synchronized CodeMirror editing...

5
celticninja 49 minutes ago 2 replies      
Sharing the URL is essentially giving out the key, so there is no digitally safe way to do this unless you encrypt the initial message, at which stage you are using encrypted communication anyway and the URL just leaves open an attack vector. Please correct me if I am wrong.
6
throwawayReply 9 minutes ago 0 replies      
Entering an invalid pad number redirects back to old.cryptpad.fr ?
7
zmanian 1 hour ago 1 reply      
This is a cool implementation of this idea.

Proof of work is probably an acceptable solution for a proof of concept, but anonymous consensus isn't needed for collaborative document editing.

I'm still thinking about whether this use case needs timestamping or atomic broadcast. If timestamping is sufficient, Google's new Roughtime protocol would do the job well. Otherwise you need a proper atomic broadcast algorithm like Raft, Tendermint, Honeybadger, etc.

Great work.

8
no_protocol 45 minutes ago 0 replies      
I have often seen claims that doing any kind of crypto in (browser) javascript is dangerous. Does this fall into that trap?

How can I safely share the URL to someone without already using an established encrypted communication method?

Is the encryption key stored in my browser history?

9
Diederich 42 minutes ago 1 reply      
This is pretty cool, but I believe that you still have to trust cryptpad.fr to send you javascript that won't leak.
10
Canada 31 minutes ago 0 replies      
I'd love something similar, but implemented as a browser extension.
11
mxuribe 34 minutes ago 0 replies      
Seems cool.
12
t0mbstone 1 hour ago 0 replies      
Ok, now this is cool.
7
An Era in Hong Kong Is Ending, Thanks to China's Tight Embrace wsj.com
22 points by dcgudeman  1 hour ago   8 comments top 2
1
ryanisnan 10 minutes ago 1 reply      
Although this is off-topic, screw sites that require logins to read content like this.
2
dcgudeman 31 minutes ago 1 reply      
Although I think integration with mainland China is probably a good thing, it's things like this that I find alarming:

Hong Kongers are sensitive about encroachment by mainland law enforcement. Last year, several Hong Kong booksellers disappeared after publishing thinly sourced, salacious tell-alls about China's leaders. They turned up later in detention in mainland China.

8
Twitter may receive formal bid, suitors said to include Salesforce and Google cnbc.com
148 points by kgwgk  5 hours ago   193 comments top 28
1
imagist 2 hours ago 11 replies      
IMO, Twitter is the poster child for the tech bubble. They have users, which is their only claim to viability, but notably, they have never made a profit. Currently valued at around $10 billion with 350 million active users, that's about $29 per user. You'd be hard-pressed to find an investor so foolish that they would invest $29 in each of their users and hope to make it back if it were stated in those terms, but people have rushed to invest in a company which only has users, and whose attempts to monetize users through advertising have correlated strongly with loss of users. There can be little argument that Twitter's price has become entirely detached from its value.

That doesn't of course, mean you couldn't make money by investing in Twitter. You can make money by investing in overvalued companies as long as you don't hold onto your share until it busts. One profitable route would be if Twitter does get bought by a larger company. The market as a whole will lose on Twitter, but local maxima can be more profitable than the whole.

But at a personal level, don't be naive about this. A lot of people are investing, not just money, but time and energy, in Twitter or startups like Twitter. If you find yourself thinking that Twitter is a company with any real value, you should take a step back and evaluate whether you're being wise, or whether you've fallen prey to the unbridled optimism of the tech bubble. Twitter's position as poster child for the tech bubble makes it a good litmus test for people's understanding of the industry, and I suspect it will correlate very strongly with who loses everything when the tech bubble collapses.

2
owenwil 4 hours ago 9 replies      
Google acquiring Twitter is actually the best end result here. Salesforce is probably the worst. Lots of people hate the idea of a Google acquisition, but I think it's well suited because:

- Google learnt from its mistakes with Google+ and is eager to not repeat them

- The company is a very different one now from years ago

- Google doesn't want to mess up identity again, so that wouldn't be an issue

- Google mostly just wants a social graph

- Twitter is a bad public company that makes irrational decisions

- Merging Google engineering/leadership with Twitter might actually give direction and ease the financial pressure that seems to drive the company's poor engineering decisions

3
thr0waway1239 3 hours ago 3 replies      
Can someone actually explain to me how the situation came to this point where it practically looks as if Twitter's fate is being decided and played out in the media via endless speculation? It is not like Twitter is a tiny company with an unknown brand, few users and no possibility of improving its profit margins. I am not aware of what they are trying to do, but at the same time it is not as if they could have exhausted all the possibilities. Remember Facebook's Beacon? That failed, but FB still managed to repackage the same crap into something more lucrative, didn't it? Is this just impatience from stockholders?

For example, let us just say, hypothetically, something really damaging comes out about FB (e.g. the news about the fake video view metrics) and advertisers start fleeing from it. Wouldn't Twitter be the beneficiary of at least some of that exodus? Do they really have no option of an end game?

4
the_duke 5 hours ago 7 replies      
Unclear is what kind of "deal" they are talking about.

A considerable buy in? A full acquisition?

And, assuming a full acquisition... what would be the gain?

Google has a bad track record with attempts at social media, apart from YouTube. (Bought Orkut, killed it; tried Google Plus, which went nowhere.) Twitter is hard to make profitable without alienating the users with too many ads.

For Google, it would probably be an acquisition like YouTube. With the knowledge that it might never be profitable, but intended to get control over a significant asset. But sharing Google infrastructure and resources could probably bring down operating costs in the medium term.

We'll see.

5
aresant 2 hours ago 2 replies      
Salesforce feels like a bizarre choice, although personally I agree with their digital chief's "I love Twitter" sentiment.

I use twitter every day as my primary method of content discovery.

So at their core the BUSINESS should revolve around monetizing my eyeballs, eg advertising.

So to me it's Facebook or Google that should grab it, w/FB at the lead considering their relatively smooth / unhurried / and successful takeovers of whatsapp / instagram

6
encoderer 50 minutes ago 0 replies      
Anecdotally I use twitter to advertise my SaaS monitoring product, Cronitor, with far more success than we found with AdWords. The ad platform feels easier to use, and promoting content on Twitter is less of a time investment vs selecting, culling, and optimizing sets of keywords.
7
erickhill 2 hours ago 1 reply      
I think Twitter's recent foray into becoming a content streaming source (see: NFL) is very interesting and a natural next step, albeit a late one. The user base is already there to essentially compete with Twitch and other streaming providers.
8
hornbaker 4 hours ago 4 replies      
My bet is GOOG. They need a streaming newsfeed product in which to insert ads.
9
majani 3 hours ago 0 replies      
Bad idea. I think the social networks whose main purpose is to feed people's vanity will not stand the test of time, since they're not solving a real problem and are merely novelties.
10
deepfriedbits 3 hours ago 0 replies      
Twitter's real value to Google is real-time search, in my opinion. They already license results from Twitter for search results, but having access to all of that sweet, sweet real-time data is nice.

The social graph is nice, but between Chrome and Gmail, Google already knows quite a bit about everyone.

11
hornbaker 4 hours ago 0 replies      
My bet is GOOG. They need a streaming newsfeed product in which to insert ads, especially on mobile where FB is killing it.
12
MollyR 5 hours ago 4 replies      
I really can't see salesforce buying twitter. I think a social media giant would gain something from twitter, but not much else.
13
happy-go-lucky 1 hour ago 0 replies      
If you try to make people want what you make, you end up making something disposable, a short-lived romance.
14
fideloper 5 hours ago 0 replies      
I wouldn't blame Jack one bit for wanting to get back to Square full-time (not that it's necessarily fully his decision to sell).
15
dcgudeman 3 hours ago 1 reply      
By Salesforce?? RIP twitter.
16
_kyran 2 hours ago 1 reply      
Just going to leave this here (from last month):

http://www.abc.net.au/radionational/programs/downloadthissho...

"Twitter will be sold in six months - Kara Swisher"

17
yalogin 4 hours ago 0 replies      
It doesn't matter; it would still not cover my losses on the stock. Talk about crappy decisions.
18
alex_hitchins 3 hours ago 1 reply      
Might sound strange, but I think this would be a great purchase for Apple. They have the cash, and they certainly have the engineers and UI skills it desperately needs. iMessage works brilliantly, but closer integration with a Twitter-style feed makes real sense to me.
19
aikah 3 hours ago 1 reply      
Amazon or Facebook. It would make sense for the latter, as Twitter is its only real competitor.
20
WA 5 hours ago 1 reply      
Stock is through the roof right now. About +19%
21
rch 3 hours ago 1 reply      
Why no mention of Oracle? That's a more likely acquirer than Salesforce.

Other Oracle acquisitions: Datastax, push.io, Collective Intellect, etc.

22
mandeepj 1 hour ago 0 replies      
Any idea why advertising is working on Facebook and not on Twitter?
23
k2xl 37 minutes ago 0 replies      
Why are Twitter's expenses so high? Don't get me wrong, scaling is hard... But at the same time, they don't have the issues of scaling photos or videos (like Facebook).
24
wslh 3 hours ago 0 replies      
Is it logical to buy Twitter at $16B? I think it is too expensive, considering a lot of their actual users are bots.
25
sidcool 2 hours ago 1 reply      
Salesforce would be an interesting prospect. Not sure how it would fit in their business plan.
26
nvk 5 hours ago 4 replies      
Twitter should be a public utility.
27
lcnmrn 4 hours ago 2 replies      
There are better alternatives to Twitter out there. It's time for everybody to move on.
28
ben_jones 1 hour ago 0 replies      
Anyone else see S20E02 of southpark? All I'm going to say is it put Twitter in a very interesting perspective.
9
House Passes Employee Stock Options Bill Aimed at Startups morningconsult.com
239 points by endswapper  7 hours ago   168 comments top 21
1
grellas 2 hours ago 6 replies      
The original point of ISOs was to offer to employees the opportunity to take an economic risk with stock options (by exercising and paying for the stock at the bargain price) while avoiding the tax risk (by generally not recognizing ordinary income from that exercise and being taxed only at the time the stock was sold, and then only as a capital gains tax).

AMT has since emerged to devour the value of this benefit. By having to include the value of the spread (difference between exercise price and fair market value of the stock on date of exercise) as AMT income and pay tax on it at 28%-type rates, an employee can incur great tax risk in exercising options - especially for a venture that is in advanced rounds of funding but for which there is still no public market for trading of the shares. Even secondary markets for closely held stock are much restricted given the restrictions on transfer routinely written into the stock option documentation these days.

So why not just pass a law saying that the value of the spread is exempt from AMT? Of course, that would do exactly what is needed.

The problem is that AMT, which began in the late 60s as a "millionaire's tax", has since grown to be an integral part of how the federal government finances its affairs and is thus, in its perverse sort of way, a sacred cow untouchable without seriously disturbing the current political balance that is extant today.

And so this half-measure that helps a bit, not by eliminating the tax risk but only by deferring it and also for only some but not all potentially affected employees.

So, if you incur a several hundred thousand dollar tax hit because you choose to exercise your options under this measure, and then your venture goes bust for some reason, it appears you still will have to pay the tax down the road - thus, tax disasters are still possible with this measure. Of course, in optimum cases (and likely even in most cases), employees can benefit from this measure because they don't have to pay tax up front but only after enough time lapses by which they can realize the economic value of the stock.

This "tax breather" is a positive step and will make this helpful for a great many people. Not a complete answer but perhaps the best the politicians can do in today's political climate. It would be good if it passes.

Edit: text of the bill is here: https://www.congress.gov/bill/114th-congress/house-bill/5719... (Note: it is a deferral only - if the value evaporates, you still owe the tax).

2
matt_wulfeck 4 hours ago 8 replies      
> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a company's highest-paid executives would not be able to defer taxes on their stock under the legislation.

I understand the desire to avoid a regressive taxation system, but why is it that every tax rule we create comes with 2x the amount of caveats and rules? Our tax system is becoming a mess.

At this rate soon nobody will be able to file their own taxes without an accountant to sort through the muck. And complicated systems tend to benefit the wealthy.

3
djrogers 1 hour ago 1 reply      
This is good news, but it may not go anywhere -

"the Administration strongly opposes H.R. 5719 because it would increase the Federal deficit by $1 billion over the next ten years." [1]

So a really bad tax rule is in place, but since it happens to bring in ~$100M/yr, we shouldn't fix the rule?

[1]https://www.whitehouse.gov/sites/default/files/omb/legislati...

4
calcsam 3 hours ago 3 replies      
This is amazing news. Some context:

It's quite common to owe taxes today for gains on the value of your stock -- which is an illiquid asset you can't sell. This puts employees in the position of shelling out cash to keep something that rightfully belongs to them, or simply abandoning it (failing to exercise) when they leave the company. This bill would defer taxes on gains up to 7 years, or until the company goes public.

If you are awarded stock options, and you exercise them, you have to file an 83(b) election within 90 days or else you are liable on all paper gains in the value of your stock.

Even if you file an 83b election, you are still liable for paper gains between the value of your options when you were granted them and the value when you exercised.

For example, if you were awarded options with a strike price of $5 and the company raised a new round of funding and the 409A valuation (& strike price of the new options) has risen to $15 per share, the IRS considers that you now owe taxes on $10 of income / share. In other words, it costs you not $5 / share to exercise but ~$8.50 including taxes.

So the tricky part about options is that they require money to exercise, money that you often don't have ready, in order to obtain an asset that is (a) not liquid and (b) may decline in value (c) you often can't sell due to transfer restrictions.

For example: one early engineer at Zenefits had to pay $100,000 in taxes for exercising his stock....and then all the crap hit the fan, and he likely paid more in taxes than his shares will end up being worth. Ouch.

As a result of this problem with options, many startups -- especially later-stage ones like Uber -- choose instead to offer RSUs, which are basically stock grants as opposed to stock options. You don't have to pay any money to "get" them like you do for options.

However, the IRS considers stock grants, unlike options, immediately taxable income. If you get 10,000 RSUs per year, and the stock is valued at $5/share by an auditor, you now have to pay taxes on $50,000 of additional income, for an asset that you likely have no way of selling.

Some startups allow "net" grants -- which basically means they keep ~35% of your stock in lieu of taxes. That solves the liquidity problem, but offering this is completely at the discretion of the startup and some don't, which leaves employees at the mercy of the IRS, again having to pay cash on paper gains of an illiquid asset.

5
asah 6 hours ago 2 replies      
Can someone explain: if you exercise and hold the shares (e.g. after leaving the company), do you owe tax after year seven, even if the shares remain illiquid?

That's the core issue: the IRS is taxing individuals on truly illiquid assets.

6
jnordwick 6 hours ago 2 replies      
Most employees get hit by the AMT and the step up in basis when exercising their incentive employee stock options, and from just skimming the bill, I don't see how that is prevented.
7
gtrubetskoy 2 hours ago 2 replies      
I still don't understand why taxes are owed. If an option at the time of grant is worth $0 (which is how it's typically done or is that not the case?), then you don't owe anything to the IRS until you exercise the option, i.e. buy shares at the option price and sell them at presumably higher valuation and make some money, at which point you will need to part with some of it because it's income.

But if you never exercise the options, then you never owe any tax. What am I missing here?

8
martin_ 6 hours ago 2 replies      
This sounds great, though requiring "offering 80% of the workforce stock" and excluding the highest-paid executives seems vague - is this at time of hiring, when stock is issued, when fully vested, when taxes are due, or somewhere in between? I parted ways with a startup in the valley last year and exercised some shares on January 13th. If I had exercised just two weeks earlier, I'm told I would've been hit with north of 50k in AMT; I have until next year to figure it out now, but I wonder if I'm eligible. Also curious how long it typically takes to get from the House, through the Senate, and passed.
9
zkhalique 4 hours ago 0 replies      
Meanwhile, the USA actively encourages companies to offshore their money with their tax code:

https://en.wikipedia.org/wiki/Companies_of_the_United_States...

10
revo13 4 hours ago 3 replies      
More evidence as to why the income tax should be replaced with a consumption tax. Just let people make their damned money already and apply a simple tax when they spend it. Windfalls wouldn't be "dangerous" or punitive in that model, and savers would be rewarded.

--Of course I oversimplify the consumption tax, and safeguards would need to be in place to ensure it is not regressive with respect to necessities...

11
cdbattags 2 hours ago 0 replies      
How would this affect the concept of phantom stock options? I worked at a startup that used "no taxes to pay" as the main excuse for handing out ghost options instead of normal options.

"Phantom stock can, but usually does not, pay dividends. When the grant is initially made or the phantom shares vest, there is no tax impact. When the payout is made, however, it is taxed as ordinary income to the grantee and is deductible to the employer."

https://en.wikipedia.org/wiki/Phantom_stock

12
adanto6840 6 hours ago 1 reply      
The bill text is here and is pretty easy to decipher: https://www.congress.gov/bill/114th-congress/house-bill/5719
13
stevenae 2 hours ago 0 replies      
The article appears to get the "seven years" qualification wrong. The bill states that tax must be paid at:

>> the date that is 7 years after the first date the rights of the employee in such stock are transferable or are not subject to a substantial risk of forfeiture, whichever occurs earlier

Which implies that transfer-restricted stock grants do not start this clock ticking.

14
jkern 2 hours ago 0 replies      
How does this relate to the push for startups to change from a 90-day to a 10-year exercise window? It seems like that's a better option than this bill, since it gives employees a larger time window to make an exercise decision, during which the likelihood of options actually resulting in something liquid is much higher.
15
mrfusion 5 hours ago 2 replies      
Does anyone have experience buying stock options from employees? I really want to own shares in a few companies that would never hire me :-(
16
koolba 5 hours ago 3 replies      
> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a company's highest-paid executives would not be able to defer taxes on their stock under the legislation.

Is this why I keep seeing nominal $1 salaries?

17
AdamN 5 hours ago 0 replies      
I wish this was retroactive :-(
18
tmaly 5 hours ago 0 replies      
I am wondering if there will be additional complexity added to the rule making phase of this if it becomes law.

While this amendment is short in length, it seems to add additional complexity to an already complex tax code. I would have liked to have seen an even simpler proposal.

19
ap22213 5 hours ago 0 replies      
What the House needs to do is regulate startups' shady options agreements. I see way too many developers getting burned out striving for that big payout that may never come. It's the classic con game.
20
k2xl 4 hours ago 1 reply      
I'm confused. I bought shares this year and would be hit with 50K tax bill from AMT next year.

Does this mean I don't owe AMT addition next year?

21
chillydawg 6 hours ago 4 replies      
Nice to see that tax laws for the rich can get passed, but substantive change to do with criminal justice, healthcare, etc. goes nowhere.
10
How Palantir Is Taking Over New York City gizmodo.com
78 points by jonbaer  4 hours ago   47 comments top 9
1
spunker540 1 hour ago 2 replies      
The article is implying there's more to be worried about than there actually is - they didn't mention anything actually "disturbing" but want us all to be concerned about the vague threat of city surveillance nonetheless.

What does Palantir do? "Integrate[s] disparate data sets and conduct[s] rich, multifaceted analysis across the entire range of data."

How does NYC use it? Tax fraud, fire code violations, fake security guards, fake IDs, fake cigarettes, fake marijuana.

So the data already existed in NYC databases and the crimes they're enforcing already existed.

And yet: "the potential for that kind of outright abuse is less disturbing than the ways in which Palantir's tech is already being used. The city's embrace of Palantir, outside of law enforcement, has quietly ushered in an era of civil surveillance so ubiquitous as to be invisible." -- total hyperbole!

If anything, the most telling part of this article to me was the small sums of money being made by Palantir, which is frequently lauded as one of the most elite, selective startups for software engineering positions. It seems to operate in small change relative to all the hype.

2
Animats 41 minutes ago 1 reply      
At first I thought this was about Palantir leasing vast amounts of office space, as they did in Palo Alto. But NYC? They're not that big.

Take a look at the top 10 US government contractors.[1] Most of the top 10 make weapons systems. But two are in information processing: Leidos (used to be SAIC), and L-3 Communications. Palantir isn't even in the top 100. Maybe they're more into state and local customers.

There's lots of potential for innovation in the state and local government space. A smartphone app for building inspectors, for example. One that involves lots of picture taking and GPS tagging. There are building inspector apps, but they're basically paper forms reworked for tablets.

An ambitious project would be a system which takes the video and audio from a cop's body cam and does most of the paperwork. Show it a driver's license or a face, and it's in the record and understood by the system. Cops hate paperwork, yet have to document much of what they do. Automate that and cops will be glad to wear a cam. Difficult and controversial, but useful.

It might be easier to sell in countries where local government is more standardized. In the US, you'd have to customize a system for every police department.

[1] https://en.wikipedia.org/wiki/Top_100_Contractors_of_the_U.S...

3
maxander 1 hour ago 3 replies      
> Co-founded in 2004 by Peter Thiel and Alex Karp, Palantir...

It is a continuous marvel that Peter Thiel, nominally an outspoken and prominent libertarian, is partially responsible for one of the most insidious powers that the U.S. government has over its people.

4
someone7x 2 hours ago 6 replies      
This gives me such mixed feelings.

Using technology to increase productivity? Good

Committing resources to quality of life improvements? Good

75% of enforcement done in neighborhoods of "color"? Yikes

CIA-backed data analysis firm Palantir Technologies? Dear god

5
dannylandau 8 minutes ago 0 replies      
Wow, just a few million dollars in contracts from NYC agencies. That is peanuts. Not sure how they are able to justify such a high valuation. Seems like a lot of hype at Palantir.
6
hprotagonist 2 hours ago 2 replies      
"they are not all accounted for, the lost seeing stones ..."

why anyone thought it was a great idea to name their company after the remote sensing device guaranteed to lie to you and make humans suicidally depressed has always been beyond me.

7
panic 2 hours ago 1 reply      
The City Hall official discussed the city's use of the data-mining technology on background, and declined to provide the full list of data sources or describe what is contained in the datasets.

Presumably this technology is supposed to be helping the people of NYC. Shouldn't these people know what data is being collected about them so they can decide whether or not they actually want it?

8
polskibus 44 minutes ago 1 reply      
Does anyone know what differentiates Palantir from a typical OLAP + ETL stack like SASS + SSIS?
9
bogomipz 1 hour ago 0 replies      
There are a few other recent developments of the "Big Data" city that New York is aspiring to be, which also give some residents mixed feelings. In chronological order:

http://www.nytimes.com/2016/09/20/nyregion/cellphone-alerts-...

http://www.theatlantic.com/technology/archive/2016/04/linkny...

http://www.nyclu.org/content/automatic-license-plate-readers

11
Confessions of a Necromancer hintjens.com
159 points by jwildeboer  6 hours ago   29 comments top 8
1
gricardo99 1 minute ago 0 replies      
"Be nice to people, even those trying to hurt you. "

Perhaps one of the hardest lessons to live by, but of immense value.

2
Raphmedia 3 hours ago 0 replies      
"The foundation for a good project is: a competent client who knows the business and has power of decision; a full-stack team that can deal with the work, at all technical levels; and a technical platform that is both dependable and tractable."

Very true.

4
dalore 29 minutes ago 0 replies      
The start is straight out of Ready Player One for James Halliday.
5
kirab 5 hours ago 1 reply      
Every time I see a new post from hintjens I think: "oh, he's still alive!" and I feel a strange mix of happiness (about him being still alive) and sadness (about his very probable demise) at the same time.
6
bramjans 5 hours ago 1 reply      
Great read. I've only been in the professional software business for a couple of years, but already many of his insights (especially the ones about terrible people management) hit close to home.

Thanks a lot for taking the time to write this down!

7
taneq 5 hours ago 0 replies      
Well so far I'm intrigued...
12
Senate panel authorizes money for Mars mission, shuttle replacement usatoday.com
45 points by mcamaj  3 hours ago   18 comments top 3
1
nickff 1 hour ago 1 reply      
The worst part of this news is the continuation of the Space Launch System (SLS), which has cost over seven billion dollars (not including Ares development costs), and is expected to end up costing forty-one billion by 2025, by which time they expect to complete a total of four launches (destination: nowhere important, and probably late).

Vulcan (from ULA), New Glenn (from Blue Origin), and Falcon Heavy (from SpaceX) are all better platforms for space exploration, which could enable science (such as the Europa mission) and travel (to Mars), and cost far less than SLS (in development and $/kg to orbit). NASA should be spending money on missions, not rocket development.

2
TeMPOraL 2 hours ago 1 reply      
Great!

> Expand the full use and life of the space station through 2024 while laying the foundation for use through 2028.

So does that mean ISS is not going to be abandoned by 2024?

3
Grishnakh 1 hour ago 2 replies      
Oh please. Sure, they'll authorize money for some big things right now, then in a year or two they'll change their minds (after new people are elected) and pull the plug on it. They've done it over and over.

The only way the US can get anything done in space exploration is if it can be fully funded and completed in less than 2 years. So a little probe here and there is completely doable, but a Space Shuttle replacement or manned Mars mission or any other big project is a complete no-go. It won't ever happen.

13
Erlang Installer Beta: A Better Way To Use Erlang On OS X erlang-solutions.com
40 points by _nato_  1 hour ago   7 comments top 4
1
matt4077 50 minutes ago 0 replies      
I think I'll stick with `brew upgrade` or, more accurately,

  brew update
  brew upgrade
  npm -g upgrade
  for f in ~/projects/*; do
    cd $f
    npm update --save
    npm update --save-dev
    mix deps.update --all
    elm-packages update
    bundle update
    npm test
    mix test
    rake test
  done

2
Luc 1 hour ago 1 reply      
Interesting, I'll try it out.

On Linux I used the 'kerl' script to easily switch between installations: https://github.com/kerl/kerl

Seems like it works on OS X too: http://stratus3d.com/blog/2014/10/24/install-erlang-16-on-ma...

3
tomku 29 minutes ago 0 replies      
In order to use erl/erlc from the command line, would I have to launch an "Erlang Terminal" from the menu bar or is there a convenient way to get the current default Erlang's bin folder on my path?

Edit: I mostly work with Elixir and have Erlang installed via Homebrew right now.

4
electic 46 minutes ago 2 replies      
I like the idea; however, another menu bar item seems like the wrong way to go about this. Why is brew a bad idea?
14
Using Gmail with Mutt smalldata.tech
77 points by wheresvic1  5 hours ago   31 comments top 10
1
hiphopyo 0 minutes ago 0 replies      
For what it's worth, all I ever had to use was:

mutt -f imaps://imap.gmail.com

2
brandur 3 hours ago 1 reply      
I've been using Mutt with Gmail for years to great effect, and would very much recommend the set up.

The one caveat that I should point out (because it's not mentioned in the article) is that you will probably never be fully rid of official Gmail clients. There is still no good mechanism to use some features with Mutt like thread muting, and these are essential to effective email these days. It's also often more convenient to read certain types of email (e.g. messages that are heavy in multimedia) from a client that supports graphics.

My usual habit is to read email in the web client or on mobile, and respond to or compose mail from within Mutt.

3
nabucodonosor 2 hours ago 0 replies      
Like the post. What I did differently is to use fdm to fetch emails and store them locally. I've been using Mutt with Gmail for many years. The features I like most are:

- regex search

- faster actions (like batch delete, mark as read) using tag

- can use my editor to compose. I use emacsclient -nw and it's so easy to copy things from a shared buffer.

- very easy to customize. For example, I wanted to see the timestamp as local time regardless of the sender's timezone, so I wrote a small Go program to do that: https://github.com/wujiang/localize_mutt; I also run a cronjob to archive old emails.

4
Gxorgxo 3 hours ago 6 replies      
I love to use the terminal (I'm a Vim and tmux user), but I was never really able to switch to Mutt. I often receive emails with attached images or HTML code. Maybe some Mutt user can share with me some of the reasons why they like it so much?
5
jacobsenscott 2 hours ago 1 reply      
mutt + offlineimap + gmail is fantastic. When I need to use their web interface it is painful. mu gives you almost instant search, but I almost never need to search email so I don't have that dialed in.
6
daily-q 1 hour ago 0 replies      
Anyone here coming from nmh, but prefer to use Mutt? I was wondering if Mutt is worth the switch since some things in nmh seem hard to keep up to snuff with the ever-changing www.
7
wyclif 3 hours ago 1 reply      
Is it possible to use mutt and Gmail without enabling lesssecureapps? I know this post deals with some of those issues (gpg keys and whatnot), and Google strongly recommends IMAP/SMTP protocol users switch to OAuth 2.0, etc.
8
Sir_Cmpwn 1 hour ago 0 replies      
Plug for the mutt replacement I'm working on:

https://github.com/SirCmpwn/aerc

9
lighttower 2 hours ago 0 replies      
I'd like Google Calendar as a widget on my Linux (MATE) desktop. Aside from [1], does anyone have a working setup?

[1] https://www.linux.com/learn/tricks-using-desktop-integrated-...

10
jrcii 3 hours ago 1 reply      
I've got Mutt going with Gmail along the lines of Steve Losh's advice http://stevelosh.com/blog/2012/10/the-homely-mutt/

It works great. Very fast, and it's nice to have a local backup of my email.

15
The Moon Illusion, an Unsolved Mystery lhup.edu
8 points by bmease  52 minutes ago   1 comment top
1
dahdum 5 minutes ago 0 replies      
Wow, I've wondered many times why the moon looked massive sometimes. Never thought it was an illusion.
16
The most coveted cigars will never be smoked bbc.com
25 points by CapitalistCartr  2 hours ago   15 comments top 2
1
dageshi 1 hour ago 2 replies      
There is at present a sort of golden age for the cigar aficionado. Individual countries highly regulate and tax cigars (priced even more highly than cigarettes, for reasons that escape me), but are quite happy to allow vendors to sell internationally. The upshot is that for any country outside the US it's far cheaper to buy from neighbouring countries, or from the other side of the world for that matter...

How long it will last, who knows; sooner or later cross-border sales of tobacco will be banned, but for the time being those who can are stocking up while they can. Cigars last for decades and peak in the 5-25 year range depending on the cigar, so right now there's no downside to buying as many as you can; even the non-special editions (Cuban) are probably good for about 10% a year in appreciation.

Personally I'm buying a box a month.

2
fatdog 45 minutes ago 3 replies      
I don't know anyone in tech who appreciates cigars. It doesn't go with things like rock climbing, kayaking, snowboarding, ultramarathoning, CrossFit, or redditing that our fields are known for. I love them, but what's the HN connection?
17
Show HN: How many days until? days.to
97 points by uptown  5 hours ago   37 comments top 25
1
joshmanders 4 hours ago 0 replies      
Kind of dropped the ball on the url. days.to is great, but days.to/until/christmas is kind of wonky. I'd remove the /until/ part so it's just days.to/christmas
2
drinchev 4 hours ago 1 reply      
This is one of those cool websites where, the first time I open it, I say: "Oh, that's awesome, I should bookmark it and use it more often", and then after an hour it's totally forgotten until the next time I clean my bookmarks.

Anyway, the TV Shows episode calendar is really useful, and I think it's a nice idea to have this information arranged in this way.

3
sodafountan 19 minutes ago 0 replies      
Looks good, very clean and nice. You should add the ability to create your own custom events, for instance I'm taking a trip to California and Las Vegas next month that I'm looking forward to. It would be nice if I could add that event and maybe the site could be smart enough to pull a picture down when I type in California or Las Vegas. That would for me at least be enough to warrant a bookmark.
4
askopress 45 minutes ago 0 replies      
Just a design tip: the red border that separates the full date from the days-till countdown is completely unnecessary. A slightly bigger dark gradient behind it would be nice, too. Finally, when hovering over the top menu the link goes dark enough to be hard to read, so perhaps don't change the color on hover, but add a border underneath the link instead to hint that it is in fact being hovered?
5
ryanmonroe 1 hour ago 0 replies      
Wolfram Alpha has similar functionality (minus the home page)

http://www.wolframalpha.com/input/?i=days+until+doctor+stran...

6
RangerScience 1 hour ago 0 replies      
Neat! Good form factor, as it lets me explore things I'd otherwise never think of (mostly, cultural events in other cultures), but... it doesn't have what I care about (festivals) and does have what I really don't care about (sports). Can you add a filter so I can remove the latter? Not sure what you can do about the former - data sources / input is the hard part, I'm pretty sure :P
7
Jhsto 56 minutes ago 0 replies      
I've been using this site for a while, and the only thing that bothers me is the moving background color. While it is a nice addition on desktop, it makes my 2015 MacBook's fans spin up after a few seconds.

I would suggest looking for a less computationally intensive way to make the backgrounds work the way they do now.

Either way, it's not a huge deal since I seldom watch the site for longer than those few seconds, but I have clicked the alternative links on Google a few times just to see if they would function any better.

8
ohitsdom 3 hours ago 0 replies      
First thing I checked was days until Elon's Mars talk next Tuesday, since that's what I'm counting down to. So count one vote to add that event!

http://www.spacex.com/mars

9
mcargian 3 hours ago 0 replies      
I always use something like https://days.to/since/new-years-day to check when eggs are packaged. All eggs (in the USA at least) have a three digit code showing the date it was packaged on the end of the carton. Helps to see how old the eggs are at the market.
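(For anyone curious: as I understand it, that three-digit carton code is just the Julian day of the year, so converting it is a one-liner. A minimal Python sketch; the code "264" and the year are only examples.)

  from datetime import date, timedelta

  def julian_code_to_date(code, year=2016):
      # 3-digit carton code is the day of the year: 001 = Jan 1, 365/366 = Dec 31
      return date(year, 1, 1) + timedelta(days=int(code) - 1)

  print(julian_code_to_date("264"))  # 2016-09-20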
10
tmaly 2 hours ago 1 reply      
Cool idea, I would love support for geolocated events. We have a set of country fairs in Connecticut that start in August and end in October. It would be a cool application for location
11
cylinder 41 minutes ago 0 replies      
Really cool. You may want to add southern hemisphere seasons to it too
12
Kiro 3 hours ago 1 reply      
So obviously this site is built 100% with SEO in mind: the domain name, plus the fact that each event is its own page with the most obvious searchable title.

Good job anyway! Looks really good.

13
bpoyner 1 hour ago 0 replies      
Neat stuff. The USA total solar eclipse on August 21, 2017 should be added too, imho.
14
michaelbuckbee 4 hours ago 1 reply      
This looks like kind of a fun project, but Google already returns at least some of this information in their enhanced search results (ex: "When is Father's Day").
15
andrewpe 4 hours ago 0 replies      
I always search on Google "days til {event or date}" and the first link is always this site. This site is a big time saver
16
leojg 3 hours ago 0 replies      
For this to be somewhat useful it should have much more data about every country in the world. I don't really care about any of those dates... maybe the US elections, because they have international relevance.

Or find a niche like mountain biking, and list all the mountain-biking events.

17
peterbsmith 3 hours ago 1 reply      
I used this website _all_ summer. Summer is awful. I know that's a controversial stance, but I take it.

days.to/until/summer showed the days until summer ended. It was great!

Simple, to the point, beautiful.

18
pmarreck 2 hours ago 0 replies      
Was disappointed to not see a days.to "Game of Thrones Season 7 premiere", so I requested adding it.
19
asciimo 1 hour ago 0 replies      
Nice work. What kind of ad revenue are you getting?
20
Taylor_OD 2 hours ago 0 replies      
Functions well, but it's a bit difficult to absorb the information in the current layout. A tiny bit of space between events might be nice.
21
Jugurtha 1 hour ago 0 replies      
Pretty nice. Just a few ideas:

* Is it necessary to go to a url when one clicks on an event? The amount of information displayed is tiny (date, event title, time until event) and it's a waste of screen space and time (now I have to go back to look at other things). It would be cool to show the information in a modal box and then continue to browse what's coming up.

* Okay, now I'm on days.to and I know that an event will take place in two months. Then what? I leave the site and in a few days I forget about it. I think I stumbled upon an email feature but I couldn't find it again. It also has a calendar. Why? Would it be better to build on something a large number of people are already using and trusting to manage their daily lives? Something like Google Calendar or Facebook Events. Maybe using their API to insert an event into the already existing calendar (see the sketch after this list). Even if I leave days.to, I can still see the positive it brought to my life and I'm more likely to come back.

* Maybe topics. I push in some interests. With enough users, it might start detecting certain patterns and starts showing me upcoming events resulting from the interests of people who share some of my interests. If I like music and painting, and you like music and theatre, it might show me theatre events and show you painting events.
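(A rough sketch of the Google Calendar idea suggested above, using the Calendar API v3 via the google-api-python-client library. The helper name and the assumption that OAuth credentials are already in hand are mine; this is not something days.to actually does.)

  from datetime import timedelta
  from googleapiclient.discovery import build

  def add_countdown_event(creds, title, day):
      # day is a datetime.date; creates an all-day event the user will see in
      # their own calendar instead of having to revisit the countdown site.
      service = build("calendar", "v3", credentials=creds)
      event = {
          "summary": title,
          "start": {"date": day.isoformat()},
          "end": {"date": (day + timedelta(days=1)).isoformat()},  # end date is exclusive
      }
      return service.events().insert(calendarId="primary", body=event).execute()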

22
RodericDay 4 hours ago 1 reply      
You should scrape a bit and populate this for TV shows. Neither "Rick and Morty" nor "Line of Duty" were there.
23
meggar 2 hours ago 0 replies      
oh good, only another 365 days until autumn.
24
gweinberg 2 hours ago 0 replies      
Needs "Kiss a Ginger Day".
25
Phritzy 4 hours ago 1 reply      
Why does it have Inauguration Day on both January 19 and 20?
18
What to Expect as an International Founder at Y Combinator themacro.com
20 points by craigcannon  2 hours ago   4 comments top
1
gameguy43 1 hour ago 2 replies      
Read the first 3 points, saw they had nothing to do with the headline, and bounced.
19
iOS 10: Security Weakness Discovered, Backup Passwords Much Easier to Break elcomsoft.com
193 points by cpach  11 hours ago   74 comments top 11
1
camillomiller 5 hours ago 2 replies      
OK, now I've got it clearer: somebody at Apple fucked up and left a weak SHA-256 hash and salt inside a db table that shouldn't be there. Probably used in testing for the betas, and then nobody remembered to remove it before the public release. Some engineer and somebody in QA will get their ass kicked pretty badly. The next public iOS release will fix it, and everybody's gonna be happy. Nothing to see here folks, we can move on :)
2
tetrep 11 hours ago 3 replies      
Some technical detail would be nice. At the moment, this is just an advertisement for their iPhone backup cracking software.
3
coldcode 6 hours ago 1 reply      
Clearly they will tell Apple before publishing on their website. Then again they sell cracking software for money.

iTunes is done by a different team than the OS. At one point, at least, much of the iTunes web side was handled by remote contractors; not sure about the app itself. Given that Apple is releasing 4 new OSs every year, it's not surprising something gets screwed up.

It will be fixed within a week I bet.

4
djrogers 2 hours ago 0 replies      
The link makes it entirely unclear how this relates to decrypting the keychain. Yes, this makes it much easier to get access to the keychain, but isn't it also encrypted?
5
camillomiller 8 hours ago 3 replies      
OK, ELI5 for me please: how does using an unsalted SHA-256 with 1 iteration (as suggested by Per Thorsheim) influence the fact that you can try more passwords per second? Isn't the password-trial limit a flaw of whatever software enforces it, rather than a flaw of the algorithm iOS uses to encrypt the backup?
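(Rough answer: once an attacker has the backup file offline, no software rate limit applies; the only "limit" is how expensive one guess is. A single unsalted SHA-256 costs one cheap hash per guess, while a key-derivation function like PBKDF2 deliberately costs thousands per guess. A Python sketch of the difference; the iteration counts here are illustrative and are not Apple's actual parameters.)

  import hashlib, os, time

  password = b"correct horse battery staple"
  salt = os.urandom(16)

  t0 = time.time()
  for _ in range(100_000):                 # 100k guesses, one SHA-256 each
      hashlib.sha256(password).digest()
  print("plain sha256:", time.time() - t0, "s")

  t0 = time.time()
  for _ in range(100):                     # 100 guesses, 100k SHA-256 rounds each
      hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
  print("pbkdf2 (100k iterations):", time.time() - t0, "s")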
6
kondro 8 hours ago 1 reply      
Meh, my iTunes backups are stored encrypted at rest.
7
0x0 11 hours ago 3 replies      
Why would Apple implement a new, weaker scheme in parallel to the existing old one? Are the designers of the otherwise-so-secure enclave blundering? Or is this done on purpose?! (Hard to believe they would think they'd get away with it, so... amateur-hour accident?)

Is this Apple's Bitlocker Elephant Diffuser?

Can I have some extrabacon with that?!

8
acqq 8 hours ago 1 reply      
Until it gets fixed (if it ever is), everybody who makes local backups and worries about possible brute-forcing should:

1. wipe their previous iOS 10 backups

2. if the backup password is not significantly long, increase its length with some sufficiently random material.

And, of course, never forget that "$5 wrench" comic.

I still hope Apple will publicly respond to this. It simply doesn't fit with the other steps they have taken, at least since the iPhone 5s.

9
pearjuice 7 hours ago 0 replies      
And this is something visible on the surface. Imagine what else they added. Or removed. This is exactly why you shouldn't trust proprietary software.

https://www.gnu.org/philosophy/free-software-even-more-impor...

10
robmcm 11 hours ago 2 replies      
Hmm, a skeptical view of this would suggest deliberate weakening of security.

Perhaps not a full back door, but more of an open upstairs window?

11
blinkingled 4 hours ago 0 replies      
Reality continues to ruin Apple's marketing ;)

Given the other crap [1], relatively speaking they still shine, though: at least a patch will soon be out to everyone who turns on their device.

[1] The FCC should really look into making security updates for mobile devices mandatory, with a time limit, in the absence of which the OEM or the carrier must replace the device free of charge with one that doesn't have the vulnerability. It's criminal what OEMs and carriers are getting away with while making a ton of profit.

20
Using the Response Rate Limiting Feature in BIND 9.10 isc.org
8 points by x0rx0r  1 hour ago   1 comment top
1
spydum 13 minutes ago 0 replies      
DNS (and BIND in particular) is one of those things that most people don't give a second thought to when deploying their infrastructure. "Good enough" is typically where the bar is set... does it respond to queries? handle my forwarding/zone lookups? Done!

I'm guessing the reason this article was posted, and the feature was added (in 2013, mind you), is the malicious way DNS servers have been abused in the last decade, and the recent mentions by Bruce Schneier of the attacks on global DNS infrastructure (perhaps they leverage abusing recursive queries or something? I don't know). It's sort of like BCP38: good net citizens should be doing this, not for their own network's protection, but for everyone else's.
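(For reference, a minimal named.conf sketch of the Response Rate Limiting feature the article covers; the numbers are illustrative, not a tuned recommendation.)

  options {
      // ... existing options ...
      rate-limit {
          responses-per-second 5;  // cap identical responses per client netblock
          window 5;                // seconds over which rates are averaged
          slip 2;                  // every 2nd dropped response becomes a truncated (TC) reply
          log-only yes;            // dry-run first; remove once you trust the limits
      };
  };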

21
Robot-written reviews fool academics timeshighereducation.com
39 points by chavo-b  5 hours ago   17 comments top 8
1
GuiA 4 hours ago 1 reply      
I'm not very surprised. If you've ever reviewed papers for an academic conference, you'll find that the vast majority of them are just very bad. The average ACM conference has a ~20% or so acceptance rate - and the remaining 80% isn't just a hair away from being accepted. For a big chunk of it, it's just garbage.

Ill-defined research problems, vague statements, poor methodology, many grammatical mistakes... given the nature of peer review, it's only natural that people who author nonsensical papers would nod at nonsensical reviews.

For people saying that this is because academia is an old boys network: not quite so. While it can definitely be like that when you get to the top, the vast majority of peer reviewers for most conferences are just grad students, post docs, or junior researchers who don't really discriminate by trying to guess who wrote the paper.

2
CobrastanJorji 2 hours ago 0 replies      
If you generated papers with MIT's SCIgen, reviewed them with this, and then responded to those reviews with short notes from Cleverbot, you could put together a whole collection of "peer-reviewed", fee-charging journals. You'd want to make sure they cross-cited each other heavily, thus earning the journals and their "authors" prestige.
3
more_original 2 hours ago 0 replies      
The quality of reviews is a general problem with the reviewing system. Just think about it: you're very busy with other work, you have many reviews to write, about papers that you're probably not really enthusiastic about, and reviews are anonymous to all but the editors. It shouldn't be surprising that this can result in rushed, bad reviews. It's also not too surprising that one can automatically generate a 'review' that looks like it was written in a rush by someone who forgot about the reviewing deadline.

Nevertheless, while bad reviews do make it through, I do think the editors are able to recognise them for what they are.

4
jfroma 4 hours ago 2 replies      
It reminds me of the "Postmodernism Generator":

http://www.elsewhere.org/pomo/

5
coldcode 1 hour ago 0 replies      
I would like to see the opposite too: identifying fake reviews written by robots. Otherwise people will just start using fake-review robot sites so as not to spend any time.
6
SixSigma 1 hour ago 0 replies      
"Markov chains still produce credible sentences after 30 years" reports Mark V Shaney

https://en.wikipedia.org/wiki/Mark_V._Shaney
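(For anyone who hasn't played with it, the trick behind Mark V. Shaney is tiny. A minimal word-level Markov chain sketch in Python; the corpus and parameters are whatever you feed it.)

  import random
  from collections import defaultdict

  def build_chain(text, order=2):
      # map each `order`-word prefix to the words observed to follow it
      words = text.split()
      chain = defaultdict(list)
      for i in range(len(words) - order):
          chain[tuple(words[i:i + order])].append(words[i + order])
      return chain

  def generate(chain, order=2, length=30):
      out = list(random.choice(list(chain.keys())))
      for _ in range(length):
          followers = chain.get(tuple(out[-order:]))
          if not followers:
              break
          out.append(random.choice(followers))
      return " ".join(out)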

7
JoeAltmaier 4 hours ago 1 reply      
Confirms the bias in the bogus peer-review bureaucracy: editors accept papers from those they know, without reading the recommendations. It's an old-boys' network?
8
gaius 3 hours ago 2 replies      
This is how the ivory tower crumbles - as it becomes common knowledge that scientists are as gullible as anyone, if not more so.
22
Watching Evolution Happen in Two Lifetimes quantamagazine.org
30 points by M_Grey  4 hours ago   36 comments top 2
1
M_Grey 1 hour ago 0 replies      
I don't know about anyone else, but I found this especially interesting (and blessedly unrelated to semantic arguments):

"Our work has shown that this model of speciation does hold. But in addition, we have shown there are other routes to speciation, such as gene flow from one species to another. We see this in the Big Bird lineage but also in cichlid fishes and butterflies. There are multiple routes to speciation."

It's one thing to have a hypothesis, another to spend the decades it takes to modify it with observations.

2
JoeAltmaier 3 hours ago 5 replies      
Overstated? They witnessed natural selection in action. No genes were created/destroyed. Just a change in average beak size, presumably because the species already had genes to vary beak size.
23
Original bulletin board thread in which :-) was proposed cmu.edu
312 points by ZeljkoS  7 hours ago   127 comments top 29
1
kelvich 4 hours ago 1 reply      
Nabokov's interview. The New York Times [1969]

  -- How do you rank yourself among writers (living) and of the immediate past?

  -- I often think there should exist a special typographical sign for a smile -- some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.

2
jgw 5 hours ago 4 replies      
It makes me a bit of a luddite (and a heck of a curmudgeon), but it always makes me a little sad when good ol' ASCII smileys are rendered all fancy-like. There's something charming and hackerish about showing it as a 7-bit glyph.

I think the Internet fundamentally changed when that happened.

Tangentially-related, I can't fathom why someone would post YouTube videos of `telnet towel.blinkenlights.nl`.

3
benbreen 2 hours ago 0 replies      
Apropos is this debate about whether an intentional :) shows up in a 1648 poem:

http://www.slate.com/blogs/lexicon_valley/2014/04/15/emotico...

Here's the verse:

Tumble me down, and I will sit

Upon my ruines (smiling yet :)

I think that the article does a fairly convincing job of showing that this is just weird 17th century typography, but then again, there was enough experimentation with printing at the time that it also wouldn't surprise me if it was intentional, at least at some point in the typesetting process.

4
artbikes 4 hours ago 1 reply      
Like most of the cultural inventions of virtual communities there was prior art on PLATO.

http://www.platohistory.org/blog/2012/09/plato-emoticons-rev...

5
kjhughes 5 hours ago 1 reply      
I vividly remember having the following conversation with a fellow CMU undergrad around this time:

Me: What's with all the :-) in the posts?

Friend: It indicates joking.

Me: Why?

Friend: What's it look like?

Me: A pinball plunger.

Friend: Rotate 90 degrees.

Me: Ohhhhhh.

:-)

6
ZeljkoS 6 hours ago 6 replies      
An interesting thing to note is that before Fahlman suggested the ":-)" symbol, Leonard Hamey suggested "{#}" (see the 17-Sep-82 17:42 post). After that, someone suggested "\__/" (see the 20-Sep-82 17:56 post). But only ":-)" gained popularity.

It is funny to imagine how emoticons (https://en.wikipedia.org/wiki/List_of_emoticons) would look today if one of the alternative symbols had been accepted.

7
milesf 4 hours ago 0 replies      
Ah bulletin boards :)

For years I have been searching for a copy of Blue Board (https://en.wikipedia.org/wiki/Blue_Board_(software)), a popular BBS program in the Vancouver, BC, Canada area written by the late Martin Sikes http://www.penmachine.com/martinsikes/

I even talked with the owner of Sota Software, the publisher, but I never heard anything back.

If anyone has a copy, PLEASE let me know! I've been wanting to setup a memorial telnet Blue Board site for decades now.

8
hvass 4 hours ago 0 replies      
This is gold:

"Since Scott's original proposal, many further symbols have beenproposed here:

(:-) for messages dealing with bicycle helmets@= for messages dealing with nuclear war"

9
minivan 6 hours ago 6 replies      
"o>-<|= for messages of interest to women"

I'm glad we are past that.

10
wmccullough 1 hour ago 0 replies      
I love how different the conversations were on the internet then.

Nowadays, if a thread came about to propose the ':-)', people would devolve into a debate about the proper use of the parenthesis, and at least one user would claim that '(-:' was a better choice, even though it is the dark-horse option for the community.

11
p333347 5 hours ago 1 reply      
I see one Guy Steele in that thread. Is he the Guy Steele? A glance at Wikipedia suggests he was an assistant professor at CMU around that time. Just curious.
12
hammock 2 hours ago 1 reply      
Reading these BBS threads always makes me think how much nerdier computer people were back then than they are now. Or am I off base?
13
_audakel 1 hour ago 0 replies      
"Read it sideways. "hahaha love this!
14
danvoell 2 hours ago 1 reply      
I wonder at what point the nose was removed :)
15
emmet 5 hours ago 1 reply      
| I have a picture of ET holding a chainsaw in .press file format. The file exists in /usr/wah/public/etchainsaw.press on the IUS.

:-)

16
backtoyoujim 2 hours ago 0 replies      
I wonder how many times that initial sequence of turn head, grok, smile (mirroring back the pareidolia itself) has happened.
17
yitchelle 5 hours ago 2 replies      
Interestingly, before I read this post and the comments, I had always thought that :-) meant a smiling face, i.e., conveying a sense of a smile after writing a message, not an "I am joking" marker.

Well, I learned something today.

18
chiph 6 hours ago 0 replies      
Interesting that there are both left-handed and right-handed smileys in the thread. :-) (-:
19
xyzzy4 6 hours ago 2 replies      
I'm sure :-) has been independently invented a million times.
20
soneca 5 hours ago 1 reply      
And the proposal to have a separate channel for jokes is as old as the smiley. There is always that guy.

Has anyone thought about creating a separate HN for jokes?

21
dugluak 3 hours ago 1 reply      
love birds

  (@> <@)
  ( _) (_ )
   /\   /\

22
pcunite 3 hours ago 0 replies      
()

I see you

23
f_allwein 6 hours ago 1 reply      
19-Sep-82 11:44, Scott E Fahlman invents the ':-)'.

Nice. :-)

24
david-given 1 hour ago 0 replies      
I... now find myself morbidly curious as to whether you could use Unicode diacritic abuse to draw actual pictures.

Pasted in example stolen from Glitchr, mainly to see how well HN renders them:

- ...
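(If anyone wants to experiment, a toy Python sketch of the diacritic-stacking idea: it just piles random combining marks, some of which render above and some below the base character, onto each letter. This is how the "Zalgo"-style pictures get started; actually drawing recognizable images would take far more deliberate mark selection.)

  import random

  COMBINING = [chr(c) for c in range(0x0300, 0x0370)]  # Combining Diacritical Marks block

  def stack(text, marks_per_char=8):
      return "".join(
          ch + "".join(random.choice(COMBINING) for _ in range(marks_per_char))
          for ch in text
      )

  print(stack("hello"))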

25
equivocates 5 hours ago 0 replies      
26
equivocates 5 hours ago 0 replies      
27
anjc 6 hours ago 0 replies      
Wow that's interesting

(:

28
guessmyname 6 hours ago 0 replies      
Here is a list of popular emoticons: https://textfac.es/
29
artursapek 4 hours ago 0 replies      
This is creepy. I just opened a PR on GitHub and set the description to ":-)". Then I opened HN and saw this.
25
Stanford researchers reveal details about the feeding habits of whales stanford.edu
18 points by CapitalistCartr  4 hours ago   2 comments top 2
1
M_Grey 1 hour ago 0 replies      
"When these animals dive down to 300 meters, holding their breath for 12 minutes or more, they had better be sure its worth the cost..."

I wonder what a 300 meter dive "costs" in terms of energy for such a massive animal, and how amazing it is that they can manage 12 minutes on one breath.

2
euyyn 2 hours ago 0 replies      
Any detail interesting enough to include in the title? :)
26
Poverty and social background remain huge barriers in scientific careers nature.com
82 points by smb06  9 hours ago   63 comments top 9
1
kaitai 5 hours ago 4 replies      
As I write this, there are just a small number of comments, all saying "there's no money in research" and "scientists are paid less than McDonald's burger flipper". It's like a whole section of disgruntled grad students who wanted to be professors. There is tons of money in science! People who "fail out" of or never wanted to pursue the stupid academic dream become active in patent law, become managers at pharma and biomed companies, oversee quality control at chemical firms, write documentation for medical devices, run labs, work as lab techs. These are all good jobs compared to the jobs the majority of Americans are qualified for.

It seems like HN has a very warped view: that "there's no money in research" means some poor kid is better off flipping burgers into her 50s than becoming a lab manager.

2
jkot 6 hours ago 2 replies      
Great! The problem is not that scientists are often paid less than a McDonald's burger flipper.

No! It is a social and gender issue! Let's push even more students into this career path.

3
davidf18 4 hours ago 3 replies      
From the article:

> Last year, Christina Quasney was close to giving up. A biochemistry major at the University of Maryland, Baltimore County, Quasney's background was anything but privileged.

People with little family money get free grants and low-interest, government-backed loans to attend college. She is attending a public university. In NYC, there are many students attending the City University of New York, and I meet some of them. They are studying biology, engineering, and other sciences. They generally, but not always, live at home.

The annual tuition and fees are less than $7000 and for transportation, the MTA subway/bus is $115 per month, unlimited rides.

Students like those mentioned in the article get Pell Grants and government-backed, low-interest Stafford Loans. Pell Grants are $5,800 per year. Stafford Loans are $5,500 the first year, $6,500 the second year, and $7,500 for the remaining years.

In my particular case, I paid for 90% of tuition/housing/living expenses by programming computers beginning in high school. I was not eligible for Pell Grants nor any form of loans including Stafford Loans.

So, I really don't understand these arguments. Public universities provide a first class low-cost undergraduate education and of course have PhD programs and so on.

Once one has an undergraduate degree with good grades, in the sciences and engineering, if they are admitted to a PhD program so they are fully funded for both tuition and housing.

CUNY Tuition and Fees:http://www2.cuny.edu/financial-aid/tuition-and-college-costs...

Pell Grants:https://studentaid.ed.gov/sa/about/announcements/pell-2016-1...

Stafford Loans:https://www.scholarships.com/financial-aid/student-loans/sta...

4
shae 4 hours ago 1 reply      
I'd like to go get a PhD and teach, but it pays less than half of what I make as a software dev. I wish I could somehow teach science & tech and also make money.
5
junipergreen 6 hours ago 0 replies      
Studies have shown that a major factor in young people's likelihood of going into science is whether their families think highly of science and science careers (regardless of the parents' education levels). I'd imagine the ivory-tower nature of science can contribute to class divides in this respect as well.
6
dorfsmay 6 hours ago 7 replies      
I'm actually surprised well-off students choose science over well-paying careers given that it is well known that there is no money in research.

My own kids and their friends often express that science is interesting but they won't get into it because of that.

7
ap22213 5 hours ago 0 replies      
Man, wish I could find the link - I had read something probably 10 years ago that discussed the abnormally high number of independently wealthy people in academia and research. Makes sense given how expensive it is and how infrequently it nets positive ROI.

A big part of science is 'ideas', and ideas are interesting things in human culture. To be the 'idea person' in a social group requires considerable social status. I see so many people in corporations battling to have their ideas win. I see so many people of higher status claiming ownership of the ideas of those 'beneath' them. I see plenty of great ideas being ignored because of who proposes them. And it's very rare to see an outsider's idea gain influence.

They say that execution matters much more than ideas - but they go hand-in-hand. The person who gets to execute also gets to choose the idea.

Given the comparative physical weakness of the human, 'the idea' is their number one weapon and asset. It enables power. So there are probably a lot of social reasons why most lower-status (lower-income) people are kept out of science and research. It's probably more a systematic result of human behavior than just being poor.

8
known 5 hours ago 0 replies      
9
lintiness 5 hours ago 2 replies      
I'm sorry, but technology is making access to science easy enough that the "my high school can't afford to offer that bio class" excuse just doesn't work anymore. Hell, even news aggregators like Google offer a science section every day.
27
New ALS discovery: Scientists reverse protein clumping sciencebulletin.org
18 points by upen  3 hours ago   1 comment top
1
smpetrey 41 minutes ago 0 replies      
My Ecology professor was a great mentor, leader, and hilarious guy. He was super inspirational and taught us that nature was something to marvel at, and always reminded us it was our responsibility to take care of it. He grew up a farmhand in the Midwest, and likely developed ALS later in life due to bioaccumulation from pesticide exposure. [1]

This is an amazing breakthrough, though. Stabilizing SOD1 could potentially pave the way for preventing ALS in its early stages. Would this reversal of protein clumping help patients who have been exposed to pesticides, or who had head injuries that led to ALS?

Also, Please donate to ALS research if you can. [2]

[1] http://well.blogs.nytimes.com/2016/05/12/pesticide-exposure-...

[2] http://www.alsa.org/donation

28
Software for moral enhancement kajsotala.fi
39 points by kaj_sotala  6 hours ago   9 comments top 2
1
ianai 2 hours ago 0 replies      
I like this as a thought-provoking piece. The examples seem less than stellar, though. It raises the interesting question of the moral value of the Internet.
2
zeveb 4 hours ago 3 replies      
The trouble I see is that we all have different versions of morality. I, for example, believe that tobacco is a positive social good, cigarettes are a social ill and anti-smoking campaigners are utterly despicable; that is the precise opposite of what many people believe: my version of a morality-guiding app would encourage folks to do things that others abhor (and their version would encourage folks to do things I abhor) and discourage folks from doing things they approve of (&c.).
29
When Did Sex Become Fun? sapiens.org
41 points by drchip  3 hours ago   20 comments top 8
1
justinlardinois 18 minutes ago 1 reply      
Off topic for Hacker News; the subject material is unrelatable for many of this site's users.
2
pcl 1 hour ago 1 reply      
Sadly, the really-terribly-written article doesn't actually have anything useful to say about the question in the title.
3
Jun8 13 minutes ago 0 replies      
Here's a quite specific answer from Philip Larkin:

  Sexual intercourse began
  In nineteen sixty-three
  (which was rather late for me) -
  Between the end of the Chatterley ban
  And the Beatles' first LP.
(from https://allpoetry.com/Annus-Mirabilis)

It doesn't answer the fun part, though, because Larkin was known to have a tedious love life.

4
ScottBurson 1 hour ago 0 replies      
Somewhere somewhen I read something that said that the common housefly has an enthusiastic copulation lasting half an hour. If true, this seems to suggest that sex was fun long before vertebrates.
5
andreiw 20 minutes ago 0 replies      
What is this article about? I see sentences, but they don't seem to make a point.
6
0xdeadbeefbabe 1 hour ago 2 replies      
It became fun when I got married.
7
PercussusVII 2 hours ago 1 reply      
Since we discovered the orgasm!
8
tomc1985 43 minutes ago 0 replies      
Probably around the same time people realized it felt good
30
How does Google know where I am? stackexchange.com
110 points by kumarharsh  4 hours ago   75 comments top 10
1
pawadu 3 hours ago 4 replies      
This is not tower triangulation.

Android searches for access points even when Wi-Fi is turned off. If anyone (with GPS enabled) uses that Wi-Fi with any Google services, the BSSID will end up in their database. Also, if the Google car has been nearby, it has recorded the presence of the Wi-Fi access point at that location [1].

Before you freak out: Apple and Microsoft also use access point information for positioning, although not as successfully.

[1] https://googleblog.blogspot.se/2010/05/wifi-data-collection-...
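(The same BSSID-to-position lookup is exposed as a public service through the Maps Geolocation API, which gives a feel for how it works. A rough Python sketch; the MAC addresses and key are placeholders, and this is the public API, not necessarily the exact path Android itself uses.)

  import json
  import urllib.request

  def locate(api_key, bssids):
      body = {"wifiAccessPoints": [{"macAddress": mac} for mac in bssids]}
      req = urllib.request.Request(
          "https://www.googleapis.com/geolocation/v1/geolocate?key=" + api_key,
          data=json.dumps(body).encode(),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)  # {"location": {"lat": ..., "lng": ...}, "accuracy": ...}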

2
dev1n 45 minutes ago 0 replies      
I had someone from Google (young guy, summer-job stuff) come into my office park and map out Wi-Fi points, asking us whether this office is still where the company I work for is located. Google has the resources to utilize HUMINT. That's how Google knows where you are.

This guy said he does this in a bunch of cities, driving around the region of the USA where I work. Very interesting to learn about.

Edit: I am not located anywhere near where Google has an office, so for him to stop by was interesting by itself.

Edit 2: grammar.

3
wil421 3 hours ago 1 reply      
If you want to stop your network from being scanned by Google Street View, or stop Microsoft doing whatever they do, you can add strings to your SSID.

https://www.reddit.com/r/privacy/comments/3g3xyu/for_wifi_ms....

4
dogma1138 3 hours ago 1 reply      
GPS hasn't been the only source of location for a long time; to some extent it can now be considered second fiddle to INS, since solid-state INS sensors are very good these days. The phone receives location information from Wi-Fi and, more importantly, gets location messages from cell towers, and it uses dead reckoning [0] with its INS sensors.

[0] https://en.wikipedia.org/wiki/Dead_reckoning
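(A toy sketch of the dead-reckoning step, just to make the idea concrete: integrate acceleration twice to get displacement from the last known fix. This assumes samples already rotated into a world frame with gravity removed; real pipelines also fuse gyro/compass data and periodically correct the drift against GPS/Wi-Fi fixes.)

  def dead_reckon(accel_samples, dt):
      # accel_samples: iterable of (ax, ay) in m/s^2; dt: sample interval in seconds
      vx = vy = x = y = 0.0
      for ax, ay in accel_samples:
          vx += ax * dt
          vy += ay * dt
          x += vx * dt
          y += vy * dt
      return x, y  # displacement in metres since the last fix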

If you want "privacy" turn off the location services, or your phone, if you want privacy don't take your phone with you.

That said, this isn't some "conspiracy": Google actually states, when you enable background location services, that they will be on all the time, even when GPS and the wireless network are explicitly disabled. IIRC, even in airplane mode the background location service can be operational without violating FCC regulations.

5
blackoil 4 hours ago 1 reply      
It has been possible for a long time. Many years back my low-IQ Nokia phone had no GPS or Wi-Fi, but maps were pretty accurate using just cell phone tower triangulation.
6
awalGarg 3 hours ago 4 replies      
From the comments section of the top-voted answer:

> Something you didn't mention: when a Google car goes around taking pictures for Street View it also maps the location and the names of all wifi networks. So taking a new router with a new network name from a different ISP might work, but only until they come near your house to update their pictures...

> - Bakuriu

I am not sure how this makes me feel :-|

7
slim 1 hour ago 1 reply      
If I understood correctly, Android downloads nearby BSSIDs with their corresponding geopositions when internet connectivity is available, to use them when no connectivity is available.

It should be possible to reconstruct Google's BSSID database, right?

8
tlow 1 hour ago 0 replies      
Prediction.
9
deadowl 4 hours ago 0 replies      
cell tower triangulation.
10
whatnotests 4 hours ago 0 replies      
Spies!
       cached 23 September 2016 19:02:02 GMT