hacker news with inline top comments    12 Jul 2017 Best
Taking control of all .io domains with a targeted registration thehackerblog.com
1372 points by koenrh  1 day ago   235 comments top 26
walrus01 1 day ago 11 replies      
This is a huge screwup on the part of the people who run the 'root' of .IO, and their entire operation should be severely scrutinized by ICANN.

In my opinion, almost all of the 'weird' TLDs (country codes that are actually operated by a third-party commercial service) are 95% spam and junk registrations. .TV is a good example.

Technical screwups aside, the existence of .IO and the fact that it "belongs" to the UK government is morally questionable, since the entire country code only exists because the British and American militaries forcibly removed the original inhabitants of islands such as Diego Garcia so that they could use the area as naval and air force bases.



If I had been able to successfully register these names and get live traffic going to a BIND9 instance, my first instinct would not be to alert the company through its first tier customer support levels, but to immediately post a summary of the problem to ARIN, RIPE, APNIC and ICANN mailing lists. This would get the issue in front of people who immediately understand how serious the problem is, and hopefully one of them would be able to contact the principals of .IO directly.

rmoriz 1 day ago 0 replies      
Originally, .io, .sh and .tm were operated by ICB PLC, later ICB LLC (owned by Paul Kane[1]). Just a couple of weeks ago, .io and .sh changed hands to Afilias, which swapped out all the moving parts over a weekend. They really screwed up a lot of things, including losing my balance, manipulating domain information, and extortion attempts:

I was a reseller with ICB and have a contract. Without honoring the cancellation period (which is defined in the contract), they wanted to take over the contract, introduce new contract terms and probably a new pricing scheme. Around 20 pages of legal stuff and a window of <10 days to "decide"/comply.

I'm done with them (Afilias). I'll never spend a dime with them again, and I advise everyone against doing any business with them.

(But the old technology backend (which still runs .tm) is also far from secure and reliable. At least it supports IPv6, whereas the Afilias infrastructure is still not IPv6-ready.)

[1] https://en.wikipedia.org/wiki/Paul_Kane_(entrepreneur)

mreithub 1 day ago 5 replies      
Wow, I don't think I would've even considered such an attack...

DNSSEC, HSTS and Certificate Pinning would've made it more difficult to abuse this, but I guess it would've been pretty easy to get valid SSL certificates for all your favourite .io domains.

Let's try to play malicious party here:

Phase A: First set up a simple DNS forwarder playing by the rules and answering requests as we should (so as not to attract any unwanted attention). Gather usage statistics.

Phase B: Crawl the list of most-used domains to see if there are any valuable targets without HTTPS (port 443 is closed). Alternatively/additionally, see if there are API subdomains used by software other than browsers (a few of which won't have annoying features like cert pinning; golang's DNS resolver, for example, AFAIK doesn't do DNSSEC). Pick some medium- to high-value targets where the attack might go undetected for at least some time.

Phase C: MitM time! Get certificates for the target domain(s) of your choice and get to work. Start with only a few percent of the requests so as not to draw too much attention (and to avoid the majority of their traffic suddenly coming from a single IP range). Obfuscate the attack by acting like a third-party app or something simply doing requests for their users.
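Phase A's "gather usage statistics" step amounts to logging the query name out of each packet before forwarding it. A minimal sketch of that parsing step (mine, not the commenter's; error handling and DNS name compression are omitted):

```python
import struct

def parse_qname(packet: bytes) -> str:
    """Extract the query name from a raw DNS query packet.

    The DNS header is 12 bytes; the question section follows as a
    sequence of length-prefixed labels terminated by a zero byte.
    """
    offset = 12
    labels = []
    while True:
        length = packet[offset]
        if length == 0:
            break
        labels.append(packet[offset + 1:offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels)

# A hand-built query for "example.io" (header + question, no answers):
header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
question = b"\x07example\x02io\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
print(parse_qname(header + question))  # example.io
```

A forwarder that logged this for every query would build exactly the traffic picture Phase B needs, without ever answering dishonestly.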

Congratulations on finding the vulnerability (and thanks for looking for that kinda stuff in the first place).

justboxing 1 day ago 7 replies      
I read this, but still don't fully understand the implications or impact for .IO TLD services and domain owners.

From the OP's "Impact" section:

> Given the fact that we were able to take over four of the seven authoritative nameservers for the .io TLD we would be able to poison/redirect the DNS for all .io domain names registered. Not only that, but since we have control over a majority of the nameservers its actually more likely that clients will randomly select our hijacked nameservers over any of the legitimate nameservers even before employing tricks like long TTL responses, etc to further tilt the odds in our favor.

What does this mean? Is the "poisoning" / "redirecting" of traffic for ALL .IO domains possible through this "hack" because of some flaw at the .IO registrar, or at 101domains.com or at Nameservers that .IO registrars are using, or something else?

How can this be mitigated? Or am I over-reacting since I don't fully understand this story?
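To unpack the quoted probability claim with some back-of-the-envelope numbers (mine, not the OP's): a resolver that picks uniformly among the seven listed nameservers hits a hijacked one 4 times out of 7, and repeated lookups make at least one hit nearly certain:

```python
from fractions import Fraction

hijacked, total = 4, 7
p_hijacked = Fraction(hijacked, total)  # one uniform pick: 4/7, ~57%
print(p_hijacked)

# Chance that at least one of k independent lookups lands on a
# hijacked nameserver:
k = 3
p_at_least_one = 1 - (1 - p_hijacked) ** k
print(float(p_at_least_one))  # 316/343, ~92%
```

This is before the long-TTL tricks the OP mentions, which would pin victims to the attacker's answers even longer.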

mdellabitta 1 day ago 1 reply      
Just to add some color to this: there was an attack last Friday on a few geo TLDs that ended up hijacking a bunch of traffic for a few hours, including .la, .es, and .jp.


michaelbuckbee 1 day ago 5 replies      
So, the real question is: "How much should we freak out about this?"

If you scroll back a few months to Cloudbleed/Cloudflare, we sort of collectively decided that, because cached data containing sensitive info (passwords, tokens, whatever) might have been exposed for any site using Cloudflare, everything should be revoked, passwords force-reset, etc.

Now we have this vuln, which I'll dub "IOgate" because it's the cool thing to name these. We don't know if this has ever happened before, there clearly were not adequate safeguards in place, etc.

Should anyone operating a service using a ".io" TLD consider everything potentially compromised?

belorn 1 day ago 0 replies      
As the article mentions, this is a nice example of where DNSSEC would have prevented malicious activity for users who have DNSSEC validation enabled. There are also countries like Sweden where almost all ISPs have this, so a rather large group of people in the world would likely have noticed if a majority of .io nameservers were responding with unsigned data.
ChuckMcM 1 day ago 1 reply      
I am really super happy that the root domain serving the largest IoT population on the planet wasn't co-opted by the Mirai bot writers.

That could have been a net killing event.

hk__2 1 day ago 4 replies      
Side note: please don't use such gray and thin fonts. I had to modify the CSS to use black instead of #555 for the text color.
tbarbugli 1 day ago 0 replies      
Not the first time the .io TLD has messed things up pretty badly. A few months ago they had a couple of name servers poisoning the internet with false negatives, which made for a few very "interesting" hours.
Mizza 1 day ago 0 replies      
NIC.io are a terrible, terrible registrar. I reported an account takeover security vulnerability to them 6 years ago that they only fixed after 4ish years.
inetknght 1 day ago 1 reply      
> After sending the email I immediately received a bounce message indicating that the adminstrator@nic.io was not an email address that existed at all

> This was not a strong vote of confidence that someone was going to see this notice.

Honestly though, this seems like common practice to me.

_eht 1 day ago 0 replies      
This sounds an awful lot like what happened to the .sr ccTLD on 06/23/17. Part of my business runs on a .sr and we were down several days.

Any way to look up a history of such an event?

tannhaeuser 1 day ago 0 replies      
Good grief! I've always found using vanity domains for your project/company tasteless at best. Now .io is even associated with deportation, territorial disputes, and security screwups. At this point, using an .io domain only serves to demonstrate that you care about pretending to be a 2010-ish startup at the expense of everything else.
chmike 1 day ago 0 replies      
Not even a "thank you" message from the TLD managers? That is the most shameful behavior.
7ewis 1 day ago 2 replies      
I had a similar issue with the .IM domain three months ago. One of the four NS for the domain was not responding.

Two of the guys at Cloudflare diagnosed it for me: https://twitter.com/xxdesmus/status/855858441289572353

superjisan 18 hours ago 0 replies      
I need like a dumbed-down version of what went wrong here.
davisonio 1 day ago 0 replies      
Wow, this is quite shocking. A very nice find, and all the more concerning considering that many high-profile startups, including financial ones, are using .io domains (I'm using one too).
mgalka 1 day ago 0 replies      
Wow! That is one scary vulnerability. Glad we have smart people like this to find such problems before the bad guys do. Well done!
mpounsett 1 day ago 3 replies      
While it's definitely an error on the part of the backend registry operator for .io, this is not the major security issue the author describes. He couldn't have hijacked any DNS traffic this way.

I've written in detail about why this is the case at https://mpounsett.blogspot.ca/2017/07/the-io-error-problem-w...

cnkk 1 day ago 0 replies      
Seems to be a bad idea to get domains from strange small countries just because they look cool.
mrkrab 1 day ago 6 replies      
redthrowaway 1 day ago 1 reply      
Considering there are a grand total of 2500 people in the BIOT, all of whom are British or American military personnel, it might not be a great idea to route a good chunk of the world's tech traffic through them. The disparity between how important the .io TLD is for the Internet and how few resources must go to running it is pretty appalling.
jshelly 1 day ago 0 replies      
Complicated by the fact that the .io domain has recently become popular with startups.
runnr_az 1 day ago 1 reply      
That's amazing! Nice find, man!
felipeerias 1 day ago 0 replies      
Who would have thought that adminstrator@nic.io registered in a small overseas territory in the middle of the Indian Ocean might not be entirely reliable?
24-core CPU and I can't move my mouse randomascii.wordpress.com
999 points by joebaf  2 days ago   490 comments top 44
titzer 2 days ago 43 replies      
Full disclosure: I work for Google on Chrome.

A Chrome build is truly a computational load to be reckoned with. Without the distributed build, a from-scratch build of Chrome will take at least 30 minutes on a Macbook Pro--maybe an hour(!). TBH I don't remember toughing out a full build without resorting to goma. Even on a hefty workstation, a full build is a go-for-lunch kind of interruption. It will absolutely own a machine.

How did we get here? Well, C++ and its stupid O(n^2) compilation complexity. As an application grows, the number of header files grows because, as any sane and far-thinking programmer would do, we split the complexity up over multiple header files, factor it into modules, and try to create encapsulation with getters/setters. However, to actually have the C++ compiler do inlining at compile time (LTO be damned), we have to put the definitions of inline functions into header files, which greatly increases their size and processing time. Moreover, because the C++ compiler needs to see full class definitions to, e.g., know the size of an object and its inheritance relationships, we have to put the main meat of every class definition into a header file! Don't even get me started on templates. Oh, and at the end of the day, the linker has to clean up the whole mess, discarding the vast majority of the compiler's output due to so many duplicated functions. And this blowup can be huge. A debug build of V8, which is just a small subsystem of Chrome, will generate about 1.4GB of .o files which link to a 75MB .so file and a 1.2MB startup executable--that's an 18x blowup.
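The O(n^2) cost can be made concrete with a toy model (the numbers below are illustrative, not Chrome's): each translation unit is compiled in isolation, so every header it pulls in is re-parsed from scratch, and if the number of headers each TU includes grows with the project, total work grows quadratically.

```python
def total_parse_work(n_sources: int, headers_per_source: int, header_lines: int) -> int:
    # Each .cpp is compiled in isolation, so every header it includes
    # is parsed again from scratch; nothing is shared between TUs.
    return n_sources * headers_per_source * header_lines

# If each TU includes a fixed fraction of all headers, doubling the
# project quadruples the total parse work:
for n in (100, 200, 400):
    print(n, total_parse_work(n, n // 2, 500))
```

Precompiled headers, unity builds, and C++20 modules all attack exactly this: sharing parse work across translation units instead of redoing it per TU.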

Ugh. I've worked with a lot of build systems over the years, including Google's internal build system open sourced as Bazel. While these systems have scaled C++ development far further than ever thought possible, and are remarkable in the engineering achievement therein, we just need to step back once in a while and ask ourselves:

Damn, are we doing this wrong?

weinzierl 2 days ago 11 replies      
I grew up on the Commodore 64 (1 Core, 1 hyper-thread :-), almost 1 MHz clock freq, almost 64 K usable RAM).

The machine was usually pretty responsive, but when I typed too quickly in my word processor it sometimes got stuck and ate a few characters. I used to think: "If computers were only fast enough so I could type without interruption...". If you'd asked me back then for a top ten list what I wished computers could do, this would certainly have been on the list.

Now, 30 years later, whenever my cursor gets stuck I like to think:

"If computers were only fast enough so I could type without interruption..."

blunte 2 days ago 3 replies      
This is great work. I hope MS can improve Windows 10 by fixing this. I just got a new Win10 laptop with much better specs than my 3-year-old rMBP, and I'm shocked by how much apparently random latency I experience with the UI in Windows 10 compared to the Mac. That's not to mention the sloppier track pad (which constantly detects my left hand while I type) or the ungodly slow unzip (via 7z).

If only Apple would give us more than 16GB of RAM (in a laptop)... what a frustrating world for developers.

albertzeyer 2 days ago 3 replies      
I was wondering what ETW is, and found in one of his other posts: ETW (Event Tracing for Windows).

He seems to be one of the main contributors of https://github.com/google/UIforETW

Seems to be quite useful.


About the post, tl;dr: NtGdiCloseProcess takes a system-wide global lock and is hit quite often during, e.g., a build of Chrome, which spawns a lot of processes. This problem seems to have been introduced between Windows 7 and Windows 10.

I thought that there would be a solution or a fix but it seems this is not yet fixed. "This problem has been reported to Microsoft and they are investigating."

quotemstr 1 day ago 1 reply      
Why is everyone talking about how C++ compilation is slow or something instead of talking about the real problem, which is that GDI is doing resource cleanup on process exit, under lock, for console-mode processes that have probably never used any GDI resources?
ComodoHacker 2 days ago 2 replies      
I remember every other book on Windows programming saying "Process creation/destruction is expensive, use thread pools (or at least process pools) instead, that's the way to go on Windows". Perhaps this mindset is ingrained for Windows QA team too - they don't have [enough] test cases for such scenarios.
bitwize 2 days ago 1 reply      
The Amiga prioritized user-input interrupts before all other interrupts, so if there ever was a time you couldn't move the mouse on the Amiga, it meant that the system was well and truly crashed.

30 years on and the peecee industry still doesn't know how to design a fucking system.

dankohn1 2 days ago 0 replies      
I feel like the (brilliant) post is missing the context that if you were debugging the latency on Linux, you would have the source code to continue the investigation until you found and fixed the problem, as opposed to just teeing it up for Microsoft.
martyvis 2 days ago 4 replies      
For my own amusement, whenever I get a new OS build on my machine, I open up a task manager and watch CPU load just by wiggling the mouse a lot or simply pressing page up and down. I'm pretty sure it's always pretty easy to generate 25% CPU load doing very little. Another thing is just opening a local file from within a running application and wondering why multiple seconds, and hence billions of CPU cycles, seem to be consumed by what one expects should be a fairly menial task. (I am pretty sure that in DOS 3.3 with Norton Commander it was quicker.)
iainmerrick 1 day ago 0 replies      
The first time I had to compile a Linux kernel for Android, it took only a few seconds (on the ridiculously overpowered "build machine" my employer supplied). I was sure I must have done something wrong, but no, that was the entire build. It takes longer for Android to reboot than it does to build the kernel.

It does feel like there's something seriously wrong with the massive C++ codebases we use these days for key infrastructure like browsers, and the massive compilation times we put up with.

ericfrederich 2 days ago 1 reply      
I had the same exact problem. A machine that is overkill spec wise but the mouse and keyboard would freeze up every minute on the minute.

I tracked it down to my desktop wallpaper being on a rotation. Seriously... how bad does that have to be implemented in Windows to actually hang the mouse and keyboard?

ksk 1 day ago 1 reply      
Speaking of W10, there was another annoying W10 bug where if you started typing immediately after using the touchpad there was a random delay. If you care about latency and responsiveness, it makes you want to scream at the people who implement these features.
stargrazer 2 days ago 1 reply      
I think the best summary of how to do this is to take a look at Herb Sutter's three article set on "Minimizing Compile-Time Dependencies". https://herbsutter.com/gotw/

"The Compilation Firewall", or pimpl idiom, is a clever mechanism for getting code out of the header: https://herbsutter.com/gotw/_100/

snarfy 2 days ago 1 reply      
My 16MHz 68000-based Amiga 500 had smoother mouse movement than my 3.2GHz 8-core desktop.
sgift 2 days ago 0 replies      
Probably not really feasible, but I'd be interested if something comparable happens when your build process uses many threads instead of many processes. I still think using processes instead of threads is a hack, though I know the mainstream opinion says nowadays processes are the way to go and threads are a hack.
lobo_tuerto 20 hours ago 0 replies      
I have a new Ryzen 7 CPU (you know, 8 cores, 16 threads), 64GB RAM, and a Samsung 960 PRO M.2 drive for storage.

But when I plugged in an external Seagate 4TB drive and tried to "dd zero" the s#it out of it, my whole system became unresponsive after a while; obviously I had to reset the machine, as it wouldn't let me "kill -9" the process that made the system unresponsive.

Trying to type was a no-go, as keys would sometimes become "stuck". Moving the mouse around was an exercise in unpredictability too.

All this happened in the latest Ubuntu 17.04 64bit... #sadstory

elorant 2 days ago 6 replies      
We've come to the point where building the browser from scratch takes longer than building the OS itself.
coldcode 2 days ago 1 reply      
Try running Carthage update for an iOS app with 80 frameworks. Ever seen a Mac hit 55GB of ram? MacOS kills it. It takes many of these runs, and all this does is fetch prebuilt binaries. Yes, everything I said here is beyond stupid.
hindsightbias 1 day ago 0 replies      
What Knuth said:"Premature optimization is the root of all evil"

What most developers hear:"Optimization is the root of all evil"

nailer 2 days ago 4 replies      
While the issue of closing processes slowly is unique to Windows 10, I've found similar situations of not being able to control my OS on RHEL, Fedora, Ubuntu and macOS.

At this point I genuinely believe latency will be the death of general purpose computing.

iOS and Android very rarely get out of control. Apple is already pushing iOS devices as laptop replacements. But we lose a lot on these devices with their locked down OSs and inability to install the software we want.

cat199 2 days ago 1 reply      
Can take the alternate approach -

Run your builds in a hugely underpowered VM, and wait much longer.. your regular usage will be largely unimpacted, although the builds take longer.

Source: currently running a ~1000-package dpb(1)[1] build of my needed OpenBSD ports on a dual-core KVM machine hosted on an 8-9 year old amd64 X2 at 2.2GHz. 3 days and counting; will probably be done around next weekend.

From there, incremental updates are mostly slight, and can complete overnight from a cron job.

.. [1] https://man.openbsd.org/dpb

yuhong 2 days ago 1 reply      
I can't imagine that moving the Win32k stuff back to CSRSS would help much in this case, right? Though it is still a good thing, especially for terminal servers, where hopefully one CSRSS process crashing just terminates that session.
goda90 1 day ago 0 replies      
My workstation was recently "upgraded" from a Window 8.1 workstation with a 4th gen i5 to a Windows 10 laptop with 5th gen i7. The extra RAM and SSD over HDD is great, but whether it's because it went from a quad-core to a dual-core hyperthreaded CPU, or because of the jump to Windows 10, the mouse lag is considerably more noticeable now. I've convinced them to upgrade my laptop again, but now this article doesn't give me much hope for my new work toy.
drumttocs8 1 day ago 0 replies      
When I converted from Windows 7 to 10 I noticed that I started getting audio latency/glitches/squelches from my external audio interface- a Focusrite Scarlett 2i4. The mouse would also sometimes hang. Doing some basic tests, the problem seemed to be coming from network drivers... but I could never resolve it. I wonder if the author's discovery has anything to do with the issues.
faragon 2 days ago 2 replies      
Why doesn't the OS prioritize UI threads on one or two cores?
wjd2030 2 days ago 0 replies      
I just wanted to say good work. I'm impressed you dug so deep. A fix here could really impact the entire Windows 10 user base.
hoodoof 2 days ago 7 replies      
Saw the headline and thought "must be Windows".

I'm not a Windows hater, but one of my long standing gripes about Windows is that it just seems to have terrible multitasking compared to OSX.

I'm sure there are reasons but it just seems utterly symbolic of Microsoft that they never managed to get Windows to multitask in a rock solid, smooth and reliable way like OSX.

bane 1 day ago 0 replies      
Seems like a good way to deal with this (outside of trying to convince Microsoft to fix it) is to spin up a VM and just give it a few cores, and do the build inside the VM to isolate this behavior.
hl5 2 days ago 0 replies      
I wonder if the "privacy" features in Win10 play a role here. Seems like some extra process accounting could cause delays not present in previous versions.
nomercy400 2 days ago 0 replies      
Does this also occur on other OSes, like Linux, or MacOS? Can you move your Chrome build to another OS and not experience this problem?
fooker 1 day ago 0 replies      
Use the gold linker if your setup permits it.

This issue went away for me when I switched.

SigSegOwl 2 days ago 0 replies      
That's why my machine gets so slow after running for weeks... explains everything!
arwhatever 1 day ago 0 replies      
Wasn't this the title of a Bruce Springsteen song?
rebootthesystem 1 day ago 0 replies      
Thank you for bringing attention to this. Experiencing this on our W10 workstations.

I hope MS does something about this immediately. It's maddening.

yourstruly33 2 days ago 2 replies      
isatty 2 days ago 1 reply      
I read mouse as house and was confused for the longest time.
yanpanlau 2 days ago 2 replies      
Just use linux
eecc 2 days ago 2 replies      
so basically a fork-bomb. I think Linux can still buckle under one, nothing that obscene...
onetokeoverthe 2 days ago 0 replies      
There are too many spinning wheels. Stop it! People are going to just stop using the internet. If I could, I would. But I can't. So please just build simple websites!
aramadia 2 days ago 2 replies      
When I saw his workstation specs, I thought, that's the exact same one I have at work! Then I checked the bottom; yep, he's at Google too.
BurningFrog 1 day ago 0 replies      
> the C++ compiler needs to see full class definitions to, e.g., know the size of an object and its inheritance relationships, we have to put the main meat of every class definition into a header file!

Without knowing anything about modern C++ and its compilers, this seems fixable.

I'm thinking a header compiler "hint" indicating the size of an object. When compiling the full class, you get an error if the number is wrong.
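For what it's worth, C++11's static_assert(sizeof(T) == N) already expresses exactly this kind of checked hint inside one translation unit. The idea, modeled loosely in Python with the struct module (the field layout here is a made-up example):

```python
import struct

# The "header hint": a declared object size that the build should
# error on if it drifts from the real layout.
DECLARED_SIZE = 12

# Stand-in for the real class layout: two ints and a float,
# packed with standard sizes and no padding ('=' prefix).
actual_size = struct.calcsize("=iif")

assert actual_size == DECLARED_SIZE, (
    f"size hint out of date: declared {DECLARED_SIZE}, actual {actual_size}")
print(actual_size)  # 12
```

The compile-time version would let dependent TUs trust the hint without seeing the full class definition, which is the part C++ currently can't do.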

paulio 2 days ago 4 replies      
He's also written some stuff regarding Visual Studio perf.


Close to my heart as I use Visual Studio all day. Horrid piece of software.

I'll probably get downvoted for this.

alacombe 2 days ago 4 replies      
So what now?

Should the author just wait for some traction on Hacker News and hopefully be noticed by devs at Microsoft, or is there some way to provide this data to Microsoft directly, skipping tier-1 support?

The author seems to be working at Google, so he might get some leverage from that, but what about Random Joe? Keep enjoying the bug "forever", I guess?

microcolonel 2 days ago 3 replies      
The terrifying result of having ABI compatibility with the first commercially successful system of its kind from the '80s.

Granted, they aren't doing themselves any favours with their new straitjacket-style application ABI.

How Discord Scaled Elixir to 5M Concurrent Users discordapp.com
695 points by b1naryth1ef  18 hours ago   216 comments top 37
iagooar 16 hours ago 7 replies      
This writeup make me even more convinced of Elixir becoming one of the large players when it comes to hugely scaling applications.

If there is one thing I truly love about Elixir, it is the ease of getting started while standing on the shoulders of the giant that is the Erlang VM. You can start by building a simple, not very demanding application with it, yet once you hit a large scale, there is plenty of battle-proven tooling to save you massive headaches and costly rewrites.

Still, I feel that using Elixir today is a large bet. You need to convince your colleagues as much as your bosses/customers to take the risk. But you can rest assured it will not fail you when you need to push it to the next level.

Nothing comes for free, and at the right scale, even the Erlang VM is not a silver bullet and will require your engineering team to invest their talent, time and effort to fine tune it. Yet, once you dig deep enough into it, you'll find plenty of ways to solve your problem at a lower cost as compared to other solutions.

I see a bright future for Elixir, and a breath of fresh air for Erlang. It's such a great time to be alive!

jakebasile 16 hours ago 4 replies      
I'm continually impressed with Discord and their technical blogs contribute to my respect for them. I use it in both my personal life (I run a small server for online friends, plus large game centric servers) and my professional life (instead of Slack). It's a delight to use, the voice chat is extremely high quality, text chat is fast and searchable, and notifications actually work. Discord has become the de facto place for many gaming communities to organize which is a big deal considering how discriminating and exacting PC gamers can be.

My only concern is their long term viability and I don't just mean money wise. I'm concerned they'll have to sacrifice the user experience to either achieve sustainability or consent to a buyout by a larger company that only wants the users and brand. I hope I'm wrong, and I bought a year of Nitro to do my part.

Cieplak 16 hours ago 8 replies      
I know that the JVM is a modern marvel of software engineering, so I'm always surprised when my Erlang apps consume less than 10MB of RAM, start up nearly instantaneously, respond to HTTP requests in less than 10ms and run forever, while my Java apps take 2 minutes to start up, have several-hundred-millisecond HTTP response latency and hoard memory. Granted, it's more an issue with Spring than with Java, and Parallel Universe's Quasar is basically OTP for Java, so I know logically that Java is basically a superset of Erlang at this point, but perhaps there's an element of "less is more" going on here.

Also, we're looking for Erlang folks with payments experience.


rdtsc 17 hours ago 3 replies      
Good stuff. Erlang VM FTW!

> mochiglobal, a module that exploits a feature of the VM: if Erlang sees a function that always returns the same constant data, it puts that data into a read-only shared heap that processes can access without copying the data

There is a nice new OTP 20.0 optimization - now the value doesn't get copied even on message sends on the local node.

Jesper L. Andersen (jlouis) talked about it in his blog: https://medium.com/@jlouis666/an-erlang-otp-20-0-optimizatio...

> After some research we stumbled upon :ets.update_counter/4

Might not help in this case, but 20.0 adds select_replace, so you can do a full-on CAS (compare and exchange) pattern: http://erlang.org/doc/man/ets.html#select_replace-2 . Something like acquiring a lock would be much easier to do.
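The compare-and-exchange pattern select_replace enables can be sketched in miniature. A toy Python model (hypothetical; in real ETS the atomicity comes from the VM, which the lock below merely stands in for):

```python
import threading

class CasCell:
    """Toy compare-and-swap cell, mimicking what ets:select_replace/2
    enables: replace a value only if it still matches what you read."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # stand-in for the VM's atomicity

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# Acquiring a "lock" stored in the cell: swap unlocked -> locked,
# retrying (or backing off) if someone else got there first.
cell = CasCell("unlocked")
print(cell.compare_and_swap("unlocked", "locked"))   # True: we won the race
print(cell.compare_and_swap("unlocked", "locked"))   # False: already held
```

The Erlang version would express the expected/new pair as a match spec, with the table itself providing the atomic swap.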

> We found that the wall clock time of a single send/2 call could range from 30µs to 70ms due to Erlang de-scheduling the calling process.

There are a few tricks the VM uses there, and it's pretty configurable.

For example, sending to a process with a long message queue will add a bit of backpressure to the sender and un-schedule it.

There are tons of configuration settings for the scheduler. There is an option to bind schedulers to physical cores to reduce the chance of scheduler threads jumping around between cores: http://erlang.org/doc/man/erl.html#+sbt Sometimes it helps, sometimes it doesn't.

Another general trick is to build the VM with the lcnt feature. This adds performance counters for locks/semaphores in the VM, so you can then check for the hotspots and know where to optimize.


mbesto 17 hours ago 1 reply      
This is one of those few instances where getting the technology choice right actually has an impact on cost of operations, service reliability, and overall experience of a product. For like 80% of all the other cases, it doesn't matter what you use as long as your devs are comfortable with it.
jlouis 14 hours ago 1 reply      
A fun idea is to do away with the "guild" servers in the architecture and simply run message passes from the websocket process over the Manifold system. A little bit of ETS work should make this doable and now an eager sending process is paying for the work itself, slowing it down. This is exactly the behavior you want. If you are bit more sinister you also format most of the message in the sending process and makes it into a binary. This ensures data is passed by reference and not copied in the system. It ought to bring message sends down to about funcall overhead if done right.

It is probably not a solution for current Discord as they rely on linearizability, but I toyed with building an IRCd in Erlang years ago, and there we managed to avoid having a process per channel in the system via the above trick.

As for the "hoops you have to jump through", it is usually true in any language. When a system experiences pressure, how easy it is to deal with that pressure is usually what matters. Other languages are "phase shifts" and while certain things become simpler in that language, other things become much harder to pull off.

didibus 15 hours ago 5 replies      
So, at this point, every language has been scaled to very high concurrent loads. What does that tell us? Sounds to me like languages don't matter for scale. In fact, that makes sense: scale is all about parallel processes, and horizontally distributing work can be achieved in any language. Scale is not like performance, where, if you need it, you are restricted to a few languages only.

That's why I'd like to hear more about productivity and ease now. Is it faster and more fun to scale things in certain languages than in others? BEAM is modeled on actors and offers no alternatives. Java offers all sorts of models, including actors, but if actors are currently the most fun and productive way to scale, that doesn't matter.

Anyway, learning how teams scaled is interesting, but it's clear to me now that languages aren't the limiting factor for scale.

danso 17 hours ago 1 reply      
According to Wikipedia, Discord's initial release was March 2015. Elixir hit 1.0 in September 2014 [0]. That's impressively early for adoption of a language for prototyping and for production.

[0] https://github.com/elixir-lang/elixir/releases/tag/v1.0.0

majidazimi 8 hours ago 3 replies      
It seems awkward to me. What if the Erlang/OTP team cannot guarantee message serialization compatibility across a major release? How are you going to upgrade a cluster one node at a time? What if you want to communicate with other platforms? How are you going to modify the distribution protocol on a running cluster without downtime?

As soon as you introduce a standard message format, all the nice features such as built-in distribution, automatic reconnect, ... are almost useless. You have to do all of this manually. Maybe I'm missing something. Correct me if I'm wrong.

For a fast time to market it seems like quite a nice approach. But for a long-running, maintainable back-end it is not enough.
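One common answer to the rolling-upgrade concern raised above is to tag every message with an explicit protocol version, so old and new nodes can interoperate during the upgrade window. A minimal Python sketch (the names `PROTOCOL_VERSION` and `trace_id` are hypothetical, purely for illustration):

```python
import json

PROTOCOL_VERSION = 2

def encode(msg: dict) -> bytes:
    # Tag every message with the protocol version it was written under.
    return json.dumps({"v": PROTOCOL_VERSION, "body": msg}).encode("utf-8")

def decode(raw: bytes) -> dict:
    envelope = json.loads(raw)
    version = envelope.get("v", 1)
    body = envelope["body"]
    if version < PROTOCOL_VERSION:
        # Upgrade path: fill in fields that older nodes never sent.
        body.setdefault("trace_id", None)
    return body

# A v1 node's message is still readable on a v2 node:
old_wire = json.dumps({"v": 1, "body": {"op": "ping"}}).encode("utf-8")
assert decode(old_wire) == {"op": "ping", "trace_id": None}
```

This is the manual work the comment alludes to: once you need cross-version (or cross-platform) compatibility, you end up owning the wire format yourself rather than leaning on Erlang's distribution protocol.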

jmcgough 17 hours ago 0 replies      
Great to see more posts like this promoting Elixir. I've been really enjoying the language and how much power it gets from BEAM.

Hopefully more companies see success stories like this and take the plunge - I'm working on an Elixir project right now at my startup and am loving it.

StreamBright 6 hours ago 0 replies      
WhatsApp's story is somewhat similar. It's a relevant read on this subject.


joonoro 17 hours ago 1 reply      
Elixir was one of the reasons I started using Discord in the first place. I figured if they were smart enough to use Elixir for a program like this then they would probably have a bright future ahead of them.

In practice, Discord hasn't been completely reliable for my group. Lately messages have been dropping out or being sent multiple times. Voice gets messed up (robot voice) at least a couple times per week and we have to switch servers to make it work again. A few times a person's voice connection has stopped working completely for several minutes and there's nothing we can do about it.

I don't know if these problems have anything to do with the Elixir backend or the server.

EDIT: Grammar

agentgt 2 hours ago 0 replies      
I realize this is off topic but how does Discord make money? I can't figure out their biz model (I'm not a gamer so I didn't even know about them).
renaudg 2 hours ago 1 reply      
It looks like they have built an interesting, robust and scalable system which is perfectly tailored to their needs.

If one didn't want to build all of that in-house, though, is there anything they've described here that an off-the-shelf system like https://socketcluster.io doesn't provide?

ShaneWilton 17 hours ago 1 reply      
Thanks for putting this writeup together! I use Elixir and Erlang every day at work, and the Discord blog has been incredibly useful in terms of pointing me towards the right tooling when I run into a weird performance bottleneck.

FastGlobal in particular looks like it nicely solves a problem I've manually had to work around in the past. I'll probably be pulling that into our codebase soon.

_ar7 17 hours ago 0 replies      
Really liked the blog post. Elixir and the capabilities of the BEAM VM seem really awesome, but I can't find an excuse to use them in my day-to-day anywhere.
OOPMan 7 hours ago 1 reply      
5 million concurrent users is great and all, but it would be nice if Discord could work out how to use WebSockets without duplicating sent messages.

This seems to happen a lot when you are switching between wireless networks (E.g. My home router has 2Ghz and 5Ghz wireless networks) or when you're on mobile (Seems to happen regularly, even if you're not moving around).

It's terribly annoying and makes using the app via the mobile client very tedious.

ConanRus 16 hours ago 1 reply      
I do not see anything Elixir-specific there; it is all basically Erlang/Erlang VM/OTP stuff. When you use Erlang, you think in terms of actors/processes and message passing, and this is (IMHO) a natural way of thinking about distributed systems. So this article is a perfect example of how simple solutions can solve scalability issues if you're using the right platform.
sriram_malhar 9 hours ago 1 reply      
I really like Elixir the language, but find myself strangely hamstrung by the _mix_ tool. There is only an introduction to the tool, not a reference to all its bells and whistles. I'm not looking for extra bells and whistles, just simple stuff like pulling in a module from GitHub and incorporating it. Is there such documentation? How do you crack Mix?
brian_herman 17 hours ago 0 replies      
I love Discord's posts; they are very informative and easy to read.
andy_ppp 9 hours ago 1 reply      
Just as an aside, how would people build something like this if they were to use, say, Python and try to scale to these sorts of user levels? Has anyone succeeded? I'd say it would be quite a struggle without some seriously clever work!
neya 11 hours ago 0 replies      
Hi community, let me share my experience with you. I'm a hardcore Rails guy and I've been advocating and teaching Rails to the community for years.

My workflow for trying out a new language involves using the language for a small side project and gradually would try to scale it up. So, here's my summary, my experience of all the languages so far:

Scala - It's a vast academic language (the official book is ~700 pages) with multiple ways of doing things, and its attractiveness for me was the JVM. It's proven, robust and highly scalable. However, the language was not quite easy to understand, and the frameworks that I've tried (Play 2, Lift) weren't as easy to transition to for a Rails developer like me.

Nevertheless, I did build a simple calendar application, but it took me 2 months to learn the language and build it.

GoLang - This was my next bet. Although I didn't give up on Scala completely (I know it has its uses), I wanted something simple. I used Go and had the same experience as I had when I used C++. It's a fine language, but, for a simple language, I had to fight a lot with configuration to get it working for me (for example, it has this crazy concept of GOPATH where your project should reside, and if your project isn't there it'll keep complaining). Nevertheless, I built my own (simple) Rails clone in Go and realized this isn't what I was looking for. It took me about a month to conquer the language and build my (simple) side project.

Elixir - Finally, I heard of Elixir on multiple HN Rails release threads and decided to give it a go. I started off with Phoenix. The transition was definitely way smoother from Rails, especially considering a founding member of this language was a Rails dev himself (the author of the "devise" gem). At first some concepts seemed different (like piping), but once I got used to it, for me there was no looking back.

All was fine until they released Phoenix 1.3, where they introduced the concept of contexts and (re)introduced umbrella applications. Basically they encourage you to break your application into smaller applications by business function (similar to microservices), except that you can do this however you like (it's unopinionated). For example, I broke down my application by business units (Finance, Marketing, etc.). This forced me to re-think my application in a way I never would have thought, and by this time I had finished reading all 3 popular books on this topic (Domain-Driven Design). I loved the fact that Elixir's design choices are really well suited for DDD. If you're new to DDD, I suggest you give it a shot; it really can force you to re-think the way you develop software.

By the end of two weeks after being introduced to Elixir, I had picked up the language. In a month and a half, I built a complete Salesforce clone just working on the weekends. And this includes even the UI. And I love how my application is always blazing fast, picks up errors even before it compiles, and warns me if I'm not using a variable I defined somewhere.

P.S. there IS a small learning curve involved if you're starting out fresh:

1) If you're used to the Rails asset pipeline, you'll need to learn some new tools like Brunch / Webpack / etc.
2) Understand contexts & DDD (optional) if you want to better architect your application.
3) There is no return statement in Elixir!

As a Ruby developer, here are my thoughts:

1. So, will I be developing with Rails again? Probably yes, for simpler applications / API servers.
2. Is Ruby dying? No. In fact, I can't wait for Ruby 3.

Some drawbacks of Elixir:
1. Relatively new, so sometimes you'll be on your own, and that's okay.
2. Fewer libraries as compared to the Ruby ecosystem. But you can easily write your own.
3. Fewer developers, but it should be fairly easy to onboard Ruby developers.


ramchip 14 hours ago 1 reply      
Very interesting article! One thing I'm curious about is how to ensure a given guild's process only runs on one node at a time, and the ring is consistent between nodes.

Do you use an external system like zookeeper? Or do you have very reliable networking and consider netsplits a tolerable risk?
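For the ring half of the question, here is a minimal consistent-hash sketch in Python (purely illustrative, not Discord's implementation, and it does not solve the harder single-owner-during-netsplit problem the comment asks about): every node that agrees on the node list computes the same owner for a given guild.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: each guild id maps to the first
    virtual node clockwise from its hash, so all members agree on
    placement as long as they agree on the node list."""

    def __init__(self, nodes, vnodes=64):
        # vnodes spreads each physical node around the ring for balance.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, guild_id: str) -> str:
        idx = bisect.bisect(self._keys, self._hash(guild_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("guild-1234")
# deterministic: an independently built ring computes the same owner
assert owner == HashRing(["node-a", "node-b", "node-c"]).node_for("guild-1234")
```

The ring only answers "which node should own this guild"; keeping the node list consistent across the cluster (via something like ZooKeeper, or by tolerating netsplits) is the separate problem the commenter raises.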

dandare 4 hours ago 1 reply      
What is the business model behind Discord? They boast about being free multiple times, how do they make money? Or plan to make money?
omeid2 10 hours ago 0 replies      
I think while this is great, it is good to remember that your current tech stack may be just fine! After all, Discord started with MongoDB[0].

[0] https://blog.discordapp.com/how-discord-stores-billions-of-m...

myth_drannon 17 hours ago 1 reply      
It's interesting how on StackOverflow Jobs Elixir knowledge is required more often than Erlang.


alberth 14 hours ago 2 replies      
Is there any update on BEAMJIT?

It was super promising 3 or so years ago. But I haven't seen an update.

Erlang is amazing in numerous ways but raw performance is not one of them. BEAMJIT is a project to address exactly that.


jaequery 16 hours ago 6 replies      
Anyone know if Phoenix/Elixir has something similar to Ruby's better_errors gem? I see Phoenix has a built-in error stack trace page which looks like a clone of better_errors, but it doesn't have the real-time console inside of it.

Also, I wish they had an ORM like Sequel. These two are really what is holding me back from going all in on Elixir. Does anyone care to comment on this?

grantwu 10 hours ago 0 replies      
"Discord clients depend on linearizability of events"

Could this possibly be the cause of the message reordering and dropping that I experience when I'm on a spotty connection?

zitterbewegung 17 hours ago 1 reply      
Compared to Slack, Discord is a much better service for large groups. Facebook uses them for React.
brightball 17 hours ago 1 reply      
I so appreciate write-ups that get into the details of microsecond-sized performance gains at that scale. It's a huge help for the community.
framp 15 hours ago 0 replies      
Really lovely post!

I wonder how Cloud Haskell would fare in such a scenario

KrishnaHarish 9 hours ago 0 replies      
What is Discord and Elixir?
marlokk 15 hours ago 0 replies      
"How Discord Scaled Elixir to 5M Concurrent Users"

click link

[Error 504 Gateway time-out]

only on Hacker News

orliesaurus 17 hours ago 1 reply      
Unlike Discord's design team, who seem to just copy all of Slack's designs and assets, the engineering team seems to have their shit together. It is delightful to read your Elixir blog posts. Good job!
khanan 15 hours ago 1 reply      
Problem is that Discord sucks since it does not have a dedicated server. Sorry, move along.
What Is Ethereum? whatthefuckisethereum.com
834 points by songzme  4 days ago   257 comments top 35
mikenew 4 days ago 20 replies      
There's one point in particular that took me a while to grasp: while you can write code that runs on the Ethereum network, every single node has to process that code.

So if, for example, you had a big 3d animation sequence that you wanted to have rendered, you would not just send that code off to the network to be processed. You would have to pay to have every single person on the network process that job for you, and it would be so big the network wouldn't even accept it. Rather, you would create a simple contract that says "if you render this animation for me and prove that you did it right, I'll pay you X amount of Ether". Someone would take the job and the Ethereum network would process your contract and handle the payment. You would end up paying that person to do work for you, and you'd pay the network (using something called gas) to process the contract.

You could use Ethereum to create all kinds of interesting contracts for financial purposes, voting systems, insurance, or whatever else. But it's not some giant compute cluster that will run some big program for you.
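The "render this and prove you did it right" bounty described above can be sketched as a toy, off-chain state machine. This is purely illustrative Python, not Solidity; on a real chain the contract would hold the Ether in escrow, and actually verifying the render on-chain is the hard part this sketch waves away with a precomputed hash.

```python
class RenderBounty:
    """Toy analogue of the bounty contract: pays a fixed reward once,
    to whoever first submits a result matching the expected hash."""

    def __init__(self, reward_wei: int, expected_result_hash: str):
        self.reward_wei = reward_wei
        self.expected = expected_result_hash
        self.paid = False

    def claim(self, worker: str, result_hash: str) -> int:
        """Return the payout (in wei) owed to `worker`, or 0."""
        if self.paid or result_hash != self.expected:
            return 0
        self.paid = True          # the contract can pay out only once
        return self.reward_wei

bounty = RenderBounty(reward_wei=10**18, expected_result_hash="abc123")
assert bounty.claim("worker-1", "wrong") == 0       # bad proof: no payout
assert bounty.claim("worker-1", "abc123") == 10**18 # valid claim pays 1 ETH
assert bounty.claim("worker-2", "abc123") == 0      # already paid out
```

The separate "gas" cost the comment mentions would be paid on every call to `claim`, whether or not it succeeds, which is what keeps the network from running unbounded computation for free.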

SilasX 4 days ago 2 replies      
This doesn't seem to do it for me. The multiple explanations it offers are all either:

1) Extremely vague and handwavy, or

2) Links to extremely long documents.

Neither one is an explanation, something that hits the key points and gives you enough to know what to ask to get greater detail.

Edit: toned down criticism.

cslewis 4 days ago 2 replies      
A few weeks ago I decided to spend a weekend trying to understand Ethereum. I found the whitepaper to be pretty instructive (it even helped me cement my understanding of Bitcoin). Here is an annotated version of the whitepaper by Vitalik:


runeks 4 days ago 1 reply      
Ethereum is very simple in essence: it's a Turing-complete version of Bitcoin.

The way you transfer a bitcoin from one person to another is by creating a Bitcoin transaction that contains an input script that fulfills an in-blockchain output script with which a number of bitcoins are associated. The output script may say "provide a public key that hashes to <some_hash> and a signature, over a transaction (which sends the bitcoins to a new output script) that redeems this output, that verifies against this public key".

Bitcoin's script language -- the conditions that must be fulfilled to transfer tokens associated with one output script to a new output script -- is not Turing-complete, which means it's limited in how complex a condition can be set up for redemption. Ethereum's script language is Turing-complete, allowing arbitrarily complex conditions for fulfilling an output script (which allows you to transfer coins/tokens from that output script to a new one).
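The hash-commitment half of the output script described above ("provide a public key that hashes to <some_hash>") can be sketched in a few lines. Note this is a simplification: real Bitcoin uses HASH160 (SHA-256 followed by RIPEMD-160) and additionally verifies a signature over the redeeming transaction; this sketch keeps only the hash check, using plain SHA-256.

```python
import hashlib

def pubkey_matches_output(pubkey: bytes, expected_hash: bytes) -> bool:
    """Simplified P2PKH-style check: does this public key hash to the
    commitment stored in the output script? (Signature check omitted.)"""
    return hashlib.sha256(pubkey).digest() == expected_hash

# Stand-in 33-byte compressed public key, not a real key.
pubkey = b"\x02" + b"\x11" * 32
output_hash = hashlib.sha256(pubkey).digest()  # what the output commits to

assert pubkey_matches_output(pubkey, output_hash)
assert not pubkey_matches_output(b"\x03" + b"\x22" * 32, output_hash)
```

The Turing-completeness point is exactly that Ethereum lets this redemption predicate be an arbitrary program rather than a fixed template like the hash-plus-signature check.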

thinbeige 4 days ago 1 reply      
Nice thread but the issue I always have with all discussions around cryptocurrencies is:

I never know if the person who just worshipped or criticized a specific currency has a vested interest.

Somebody who sits on billions of ETH will write totally differently than somebody who is shorting/margin trading ETH.

Frogolocalypse 4 days ago 0 replies      
The problem with Ether isn't the tech of contracts but the implementation of the network. There are very concerning scalability issues that have not only not been addressed, but whose 'plans' for resolution are either untried systems in development (sharding) or have significant architectural issues (proof-of-stake) that very smart people think will never be resolved. I hope they overcome these obstacles, but the jury is still well and truly out.
aqsheehy 4 days ago 1 reply      
A framework for creating ponzis
amygdyl 4 days ago 0 replies      
Can anyone help me with a reference for any study that has been published on the status of research for the conditions of successful crypto currency launches?

Or a serious reputable journal reviewing the literature comprehensively?

If I'm unable to fill the gaps in the studies I seek, I'm inclined to think I have a good chance of being able to work towards a (sincere and, despite being tempted by the ad rates, non-spammy) simple site to collate the references this summer. So if you have any suggestions for the effort I will, with about 70% probability, undertake, I'd be happy to hear from you as well.

notadoc 4 days ago 5 replies      
Are we at the "mania" stage of cryptocurrency interest yet?
pimeys 4 days ago 2 replies      
What is the best place to have discussions about Ethereum? I tried to follow the Reddit group, but I don't really need all the hype. I just need information and opinions on writing contracts and developing applications that use Ethereum network. I'm also very interested in the psychological side of trading and different trading strategies. No fuzz, just reasonable discussion.
myautsai 4 days ago 0 replies      
Ethereum is an awesome idea, but before any practical Ether application booms, it's more like a bubble.

If you're interested in why there's also a token named "Ethereum Classic (ETC)", you can search "DAO" on Google to explore why the first large Ethereum application, The DAO, failed and why this led to the hard fork (ETH => ETC, ETH) of Ethereum.

As far as decentralized storage is concerned, there's already a production blockchain-based product named Sia (http://sia.tech/ ), but Swarm (a similar thing based on Ethereum) is still at a very early stage.

prodtorok 4 days ago 5 replies      
How important is the adoption of Solidity for Ethereum's success?
aphextron 4 days ago 0 replies      
I'm still confused. Is this anything beyond another opt-in botnet at its core?
ksikka 4 days ago 2 replies      
What is Ethereum _useful_ for? When would I need it? Why is AWS (et al) not sufficient?
skrowl 4 days ago 0 replies      
Wow, I actually own some ETH on coinbase and had no idea it was more than just a competing crypto currency.
amadeuspzs 4 days ago 0 replies      
Hats off to attempting to explain a deeply technical topic to a variety of audiences, acknowledging different learner needs. A lot of posters here would probably benefit from taking a humanistic approach to explaining what they do to non-hacker news readers!
ojr 3 days ago 0 replies      
I don't believe all ICOs are illegal

ICO tokens are digital commodities/assets that are more similar to app credits or in-app mobile purchases than securities.

Unlike with Ethereum apps/dapps, you can't trade traditional app credits/in-app purchases on an exchange. But imagine if there were FarmVille ERC-20 tokens a few years back; people would buy thousands off speculation alone. It'll be interesting to see how the SEC will regulate this.

flaviuspopan 3 days ago 1 reply      
Yo songzme, I have to thank you for throwing this up because as a result, I've since stumbled on 3Blue1Brown and his awesome video on crypto and math in general. You linked his video the same day he posted it, and I couldn't have been happier to find something that explained it in laymen terms so well. I've been meaning to solidify my understanding for friends and family, so thanks for sharing! Cheers mate.
moomin 4 days ago 1 reply      
I'm being sniffy here, and the technical achievement of Ethereum is considerable, but were I talking about a general contract computing platform, I'd want one where I could demonstrate the assertions the code makes. And that basically means a dependently typed functional programming language. I'd start with Idris, but even that might need strengthening before DAO-style issues were hard to find.
danra 3 days ago 0 replies      
The time estimate listed for an in-depth reading of the Yellow Paper (54 minutes) is... optimistic.
peternicky 2 days ago 0 replies      
Is the content on this website supposed to change based on the target audience selected in the dropdown? If so, it is broken on mobile safari.
ruiquelhas 4 days ago 0 replies      
Don't need a whole website to tell me it is just a vehicle for fraudulent securities.
Temasik 4 days ago 0 replies      
Ethereum is not immutable; it is centralized and full of bugs.

Good luck in the long run

dewyatt 3 days ago 0 replies      
I know I'll get hate for not commenting on the actual content of this article, but...why is this only taking up 1/3rd of my horizontal screen space? Perhaps all that space could be used to show the images alongside the text?

> here's what we'll talk about: (with links if you want to jump to a specific section).

Holy smokes, it's got links! Good thing he pointed it out to me since I never could have figured that out by the visual indicators that every web browser adds to links.

kensai 4 days ago 1 reply      
I love that the article ends in this line:

"Support this website! Send ETH to: 0x1c4d1804AD9d47de0D4209882843998E03E30dF9"

ourmandave 4 days ago 1 reply      
Wouldn't whatthefuckisethereum.edu make more sense for an educational site?
djhworld 3 days ago 0 replies      
What are people using Ethereum for though.

Outside of trading it between each other based on price speculation, is there a real world thing that isn't a prototype that people are using as part of the smart contract system?

retox 4 days ago 0 replies      
An establishment backed control job.
qbaqbaqba 4 days ago 0 replies      
Another cryptocurrency based scam?
sitepodmatt 4 days ago 1 reply      
The best way to think of Ethereum is Bitcoin with more flexibility, and its immutability defined by its owners'* tolerance for loss. When Vladimir and slock.it lose money, they just roll it back. (* owners being the Slack group with 90% of the mining power.) There exist shitcoins with more credibility than Ethereum.
cm2187 4 days ago 0 replies      
Another cryptocurrency? This is becoming a hyperinflation of cryptocurrencies...
dmead 4 days ago 0 replies      
Sounds terrible.
joeyspn 4 days ago 1 reply      
Ethereum has had a good start being the first mover in smart contracts, but EOS is coming and the value proposition is much more solid...


Blockchain tech is evolving...

dialupmodem 4 days ago 4 replies      
So tired of seeing blockchain, machine learning, and JS in the news. Things like Ethereum and even Bitcoin are 100% hype and "what if", and are nothing more than dreams.

Last century gave us things like the personal computer, the internet, and so on. What about this one? Fancier webpages? Selfies? Tweets?

All of this blockchain and cryptocurrency and JS hype is tiresome to see. What happened to making real stuff that works and solves real problems?

jamisteven 4 days ago 3 replies      
Can't trust Ethereum or Bitcoin for the same reason we can't trust the Fed and their "quantitative easing":

1. It's way complicated.
2. It's fiat.
3. Nobody understands who really is in control of its value.
How to make a friend fast happyturtlethings.net
750 points by trurl42  2 days ago   241 comments top 40
cs2818 2 days ago 3 replies      
Maybe it's because I find this research area intriguing, but I did not think that this summary adequately captured the efforts of the original paper [0].

I think it was unfortunate that this piece did not begin by conveying that the original study's authors specifically state their goal "was to develop a temporary feeling of closeness, not an actual ongoing relationship".

Additionally, like many social psychology studies the nuances of the design and methodology are extremely valuable, yet this piece deems them as "dry" and mostly "devoid of enthusiasm". Anyone who has contributed to the design of substantial social psychology studies can tell you just how carefully each of these details is considered in design, implementation, and analysis. The original article [0] is full of detail, context, and discussion, and is definitely worth a read.

[0] https://psychodramaaustralia.edu.au/sites/default/files/fall...

flyGuyOnTheSly 2 days ago 2 replies      
Makes enough sense to me...

If you don't know anything about anybody... like the homeless man rummaging through my recycling bin right now looking for bottles... It's easy to look down on them or just write them off.

But if a neighbour told me anything about that same man... perhaps that he lost his job last year and thus just tries to supplement his income by picking up bottles in his spare time... I would feel immense sorrow even just looking at that man (who is still a stranger to me). Enough so that I would probably offer him some extra cash and a bite to eat if I had it.

I witnessed a man break into my neighbor's house the other day... which enraged me at first (what if it were my house?!) but when I saw the man who did it... I immediately felt sorry for him.

This was a desperate, dirty, homeless man with a smile on his face as the police dragged him away. He was probably just looking for some shelter to sleep that night.

The house that he "broke into" did admittedly look abandoned. And I found myself trying to justify the reasons that he might have tried to break into the house, rather than hating him silently.

All because I got a look at him.

enraged_camel 2 days ago 4 replies      
After a two year period of watching my boss interact with people, I can confirm that this method works extremely well.

He is a "natural", in the sense that he can form close bonds with people incredibly quickly. At first I thought he was using some sort of secret strategy, but after a while I noticed that he was simply sharing personal details about himself (which the article refers to as "self-disclosure") without being prompted, which encourages, and in fact compels, the other side to reciprocate.

Here is an example conference call conversation from two weeks ago, in fact, in which we were chatting with a potential client to schedule a meeting. Bob is my boss:


Bob: Okay. Let's have an in-person meeting next week. What day works best for you?

Client: How about Thursday at 2?

Bob: Sounds great. You know, I'm glad you didn't say Wednesday because I have to be with my two little girls that day, and I definitely could not miss that. They mean the world to me.

Client: Oh yeah, I understand. In fact I can relate... I have a daughter myself!


And then when we actually met in person this past Thursday, the topic of their daughters was a natural conversation point.

In contrast, I tend to be fairly reserved when it comes to sharing personal info. I like to stay on topic and dislike what I perceive as derails. The above conversation for me would have gone like this:


Me: Okay. Let's have an in-person meeting next week. What day works best for you?

Client: How about Thursday?

Me: Sounds great. See you on Thursday at 2 PM.


Similar, but also very different.

hannob 2 days ago 2 replies      
Word of warning: This is a social psychology study from 1997. There's been a lot of evidence lately that social psychology the way it's been done in the past is a huge mess and calling it pseudoscience isn't that far off. The field is only at the beginning of cleaning up that mess.

Any study that old that hasn't been replicated with rigorous scientific standards is about as valuable as a magazine horoscope.

jodrellblank 2 days ago 1 reply      
Harry Potter and the Methods of Rationality ( http://www.hpmor.com ), Chapter 7:

"""Draco giggled. "Yeah, right. Anyway... to answer what you asked..." Draco took a deep breath, and his face turned serious. "Father once missed a Wizengamot vote for me. I was on a broom and I fell off and broke a lot of ribs. It really hurt. I'd never hurt that much before and I thought I was going to die. So Father missed this really important vote, because he was there by my bed at St. Mungo's, holding my hands and promising me that I was going to be okay."

Harry glanced away uncomfortably, then, with an effort, forced himself to look back at Draco. "Why are you telling me that? It seems sort of... private..."

Draco gave Harry a serious look. "One of my tutors once said that people form close friendships by knowing private things about each other, and the reason most people don't make close friends is because they're too embarrassed to share anything really important about themselves." Draco turned his palms out invitingly. "Your turn?""""

Mz 2 days ago 3 replies      
Does that mean that some close, naturally-forming relationships don't get nurtured as lovingly as was achieved in a 45-minute conversation?

As one more random data point: I am a chatty extravert. Sometimes people imagine they are close to me when they are not.

yjlim5 2 days ago 3 replies      
Interesting how this psychology paper showed up in the context of "making a friend." I had only known of this study as a way of creating romantic relationship, through a NYT Modern Love piece "To Fall in Love With Anyone, Do this."

The method is hardly fast, though - it requires two people to set aside a good chunk of time in a quiet setting to fully experience the gradual escalation of self-disclosure. When trying out this method in real life, what about the fact that you chose that one person to try this with? The reasons behind that choice would contribute much to a successful result of this method, but are still left unexplained.

SaintGhurka 2 days ago 3 replies      
You can see this played out every day in designated smoking areas. I've observed smokers tend to become friends quickly. As smoking is increasingly frowned upon, everybody who shows up in the smoking area is sort of opening up and sharing a weakness with the other smokers just by being there.
Aron 2 days ago 2 replies      
I can't help but note that A. Aron was the lead writer. You done messed up!
phaed 2 days ago 5 replies      
I think I might have permanently damaged my vision with that #FFF on #000 color scheme. I still see text floating in the air.
donretag 2 days ago 3 replies      
"Low ego-identity makes same-sex pairs closer, high ego-identity makes cross-sex pairs closer"

Perhaps this statement is true, but is the end goal for both parties the same?

For some reason, I find myself befriending mainly females (I am male). Yes, I have a high ego. However, I can never tell if the other person is interested in a friendship or something more. I always want to put them in the friend zone, but I have had awkward situations in the recent past where these friends made either subtle or not-so-subtle advances. I am not interested in anything more than friendship. I do not want to lead anyone on. So yes, perhaps that pairing works well, but is friendship the goal?

In addition, males tend to bond during "experiences", so what I have been seeking as of late is more male friendships. Other males are more inclined to go on multi-day backpacking adventures. According to this study, males with low-ego are likely to become friends, but I seek high ego/high energy friends.

turc1656 1 day ago 1 reply      
Not sure I agree that "agreement on important issues" belongs in the "things that don't matter" section.

I know that stuff in particular matters big time to me, even just for friends. My closest friends tend to see the world much more similarly to me than people I am much less close with. In fact, more often than not I end up pruning out the people that think drastically differently than I do. Most of the time this just happens naturally because I tend to be more wary about what I say and talk about once I know someone is extremely different from me. My spidey sense puts me on high alert and I basically enter a super-PC, "watch every word that comes out of your mouth" mode. Occasionally, I deliberately reduce interactions and they go from being a close friend to being just a casual friend or acquaintance.

highprofit 9 hours ago 0 replies      
The author fails to mention that many things need to be in place before you even have the first interaction that leads to potential close friendship. First-impression elements such as stereotype, prejudice, culture, assumptions, expectations, attraction, etc. are all split-second decisions we make that can affect how the interaction will go, or whether there is ever going to be an interaction to begin with. The article completely ignores these aspects, which are important especially in diverse, multicultural environments.
suneilp 2 days ago 3 replies      
This is fascinating. I have for a long time been the introverted, socially incompetent, and (partially) weird one.

A lot of that has changed for me recently. Going to Brazilian Jiu-Jitsu for a year now and doing more yoga has really helped me to figure out a lot of traumatic issues from the past, especially with racism during childhood.

It's really reduced my fear of socializing with people. I'm now somewhere in between extrovert and introvert and I've noticed that self-disclosure happens a lot more and interactions have improved with strangers. Even small talk happens now and then. I've always hated small talk.

I don't like the extrovert/introvert/attachment-type labelling though. Reading about them or discussing them with others has a way of boxing you into self-limiting thought patterns. It can be a good starting point to figure things out, but I implore people not to think you're stuck in your ways.

kharms 2 days ago 3 replies      
Does anyone know what is meant by "high ego identities" and "low ego identities?"
pasbesoin 2 days ago 1 reply      
Here's the paper cited in the lower left corner of the graphic (and in the text above, I now see):


which only caught my eye because of the "A. Aron" author. I'm almost certain this is the husband of Elaine Aron, who launched the identification, qualifier, and description of the "Highly Sensitive Person" (HSP).


Arthur's paper is dated 1997, while Elaine's first book on the HSP type was published in 1996. Which I find to be an interesting correlation in time.

More recently, Elaine has (in my limited knowledge) been focusing on the concept of "ranking and [or, versus] linking".


The OP topic here reminds me somewhat of her perspective on linking.

Steeeve 2 days ago 0 replies      
Oh come on. It's not like anyone needs a lesson in these kinds of things.

<bookmarks page>

k__ 2 days ago 0 replies      
"Low ego-identity makes same-sex pairs closer, high ego-identity makes cross-sex pairs closer"

Can confirm. Over the years I dramatically minimised the number of same-sex friends in my life.

theprop 2 days ago 0 replies      
What were the gradually escalating intimate questions asked?
Dove 1 day ago 0 replies      
Personally, I find the recipe of kindness, honesty, laughter, generosity, loyalty, plus a bit of magic to be both practically effective and somewhat philosophically profound.
rodrigosetti 2 days ago 1 reply      
I clicked thinking this was about making a friend go without eating
markatkinson 2 days ago 0 replies      
I read this as how to convince a friend to start intermittent fasting.
DustinOfDenver 2 days ago 0 replies      
White letters on black background... my eyes are still seeing those letters. Is it just me?
zumu 2 days ago 1 reply      
Are there really 'high-ego'/'low-ego' individuals? Is it not a spectrum over time? Is it not controllable to some degree?

How is this different than toning down the ego a notch when trying to make platonic friends, while trying to be confident when looking to make romantic partners?

Animats 2 days ago 3 replies      
That reads like the instructions on pick-up artist forums.
adrianveidt 1 day ago 0 replies      
I'd add one more question under Closeness-Generating Procedure: What important truth do very few people agree with you on?
krick 2 days ago 0 replies      
That would be absolutely unreadable if not for firefox "Reader View" feature. Picture is still unreadable, of course.
senthil_rajasek 1 day ago 1 reply      
Let me hazard a guess: a study done mostly with American subjects, the majority of them white.

I would love to see a qualifier like "How to make a friend fast in America?"

mcguire 2 days ago 1 reply      
"These manipulative geniuses chose a handful of university-level psychology classes early in the semester, divided the student volunteers (who didn't know each other) in pairs, and asked them to engage in an exercise designed to increase their closeness."

Psychology is the study of the psychology of psychology undergraduates.

"Those with dismissive-avoidant personalities didn't get as close

"The dismissive-avoidant is one of the attachment types in the study of social attachment in adults. It pertains to people who feel more comfortable without close social relationships, highly value their independence, they suppress and hide their feelings, and deal with rejection by distancing themselves from its source. The other personality types in adult attachments include secure, and two other insecure types: anxious-preoccupied and fearful-avoidant. These three personality types all reported on a higher (and similar) level of closeness achieved than the dismissive-avoidants."

Well, that lets me out, then.

colecut 1 day ago 0 replies      
I have seen so much information on the benefits of intermittent fasting lately, I interpreted this headline as "How to stop a friend from eating" =)
CiPHPerCoder 2 days ago 1 reply      
The flowchart in the beginning is very heteronormative. :(
megamindbrian 1 day ago 0 replies      
This reminds me of that scene in Silicon Valley where Bertram tells Dinesh he can make a friend just as good as his friend. Someone told me a while ago if you want to have a personal connection you have to be personable yourself and tell someone something honest and truthful. I've had much better luck since then. The middle box on this diagram is good advice.
Cephlin 2 days ago 0 replies      
Does this person write their n's and m's upside down? I can barely read their handwriting, it's really odd...

Does it say freud? I assume it's meant to say friend...

Does it say watters? I'm assuming it's trying to say matters?

It took me about a minute or two to deduce these meanings though...

tejasv 2 days ago 0 replies      
lock them up and dont give them any food
howfun 2 days ago 0 replies      
What software was used to make this nice diagram?
anon_chrstian 2 days ago 0 replies      
Visit your baptist church, meet faithful people.
nthcolumn 1 day ago 2 replies      
Steal his lunch?
divbit_m 2 days ago 2 replies      
Added to my daily flash cards
iosDrone 2 days ago 1 reply      
Things like this are why people laugh at Silicon Valley, this is a very aspie/robotic approach to this topic.
threepipeproblm 2 days ago 3 replies      
Hmm, I guess 'extrovert' is an acceptable spelling nowadays, even though it's the wrong Latin root.
ECMAScript 2017 Language Specification ecma-international.org
566 points by samerbuna  1 day ago   229 comments top 27
thomasfoster96 23 hours ago 4 replies      
Proposals [0] that made it into ES8 (what's new):

* Object.values/Object.entries - https://github.com/tc39/proposal-object-values-entries

* String padding - https://github.com/tc39/proposal-string-pad-start-end

* Object.getOwnPropertyDescriptors - https://github.com/ljharb/proposal-object-getownpropertydesc...

* Trailing commas - https://github.com/tc39/proposal-trailing-function-commas

* Async functions - https://github.com/tc39/ecmascript-asyncawait

* Shared memory and atomics - https://github.com/tc39/ecmascript_sharedmem

The first five have been available via Babel and/or polyfills for ~18 months or so, so they've been used for a while now.

[0] https://github.com/tc39/proposals/blob/master/finished-propo...
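For a taste, the first few of those are easy to demo. A minimal sketch, runnable on any ES2017 engine (e.g. Node 8+); the variable names and values are purely illustrative:

```javascript
// Object.values / Object.entries: enumerate own enumerable properties
const scores = { alice: 3, bob: 5 };
console.log(Object.values(scores));   // [3, 5]
console.log(Object.entries(scores));  // [['alice', 3], ['bob', 5]]

// String padding: pad to a target length
console.log('7'.padStart(3, '0'));    // '007'
console.log('ab'.padEnd(5, '.'));     // 'ab...'

// Trailing commas are now legal in function parameter lists
function sum(a, b,) { return a + b; }

// Async functions: sequential-looking asynchronous code
async function double(x) {
  const v = await Promise.resolve(x);
  return v * 2;
}
double(21).then(v => console.log(v)); // 42
```

Shared memory and atomics are the one item here that can't be polyfilled, which is why they lagged the others in practice.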

callumlocke 23 hours ago 3 replies      
This is mostly symbolic. The annual ECMAScript 'editions' aren't very significant now except as a talking point.

What matters is the ongoing standardisation process. New JS features are proposed, then graduate through four stages. Once at stage four, they are "done" and guaranteed to be in the next annual ES edition write-up. Engines can confidently implement features as soon as they hit stage 4, which can happen at any time of year.

For example, async functions just missed the ES2016 boat. They reached stage 4 last July [1]. So they're officially part of ES2017 but they've been "done" for almost a year, and landed in Chrome and Node stable quite a while ago.

[1] https://ecmascript-daily.github.io/2016/07/29/move-async-fun...

pier25 10 hours ago 2 replies      
In the last couple of years we've seen a small number of significant improvements like async/await but mostly small tepid improvements like string padding, array.map(), etc. It's like TC39 are simply polishing JS.

I'd like to see TC39 tackling the big problems of JS like the lack of static type checking. I'm tired of looking at a method and having to figure out if it is expecting a string, or an object.

We had ECMAScript 4 about 10 years ago with plenty of great features, but TC39 killed it. And yeah, that probably made sense, since the browser-vendor landscape was very different back then. Today it would be possible to implement significant changes to the language, much like the WebAssembly initiative.

HugoDaniel 23 hours ago 5 replies      
I would really love to see an object map function. I know it is easy to implement, but since they seem to be gaining ranks through syntax sugar, why not just have a obj.map( (prop, value) => ... ) ? :)
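Until something like that lands, Object.entries (new in ES2017) makes the helper a one-liner-ish exercise. A sketch; `objectMap` is just an illustrative name, not any stage-X proposal:

```javascript
// Map over an object's own enumerable entries, producing a new object.
// objectMap is a hypothetical helper, not part of any spec.
function objectMap(obj, fn) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = fn(key, value);
  }
  return out;
}

console.log(objectMap({ a: 1, b: 2 }, (k, v) => v * 10)); // { a: 10, b: 20 }
```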
ihsw2 23 hours ago 2 replies      
Notably, with shared memory and atomics, pthreads support is on the horizon.


Granted it may be limited to consumption via Emscripten, it is nevertheless now within the realm of possibility.

For those who cannot grok the gravity of this -- proper concurrent/parallel execution just got a lot closer for those targeting the browser.
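The new primitives can be sketched single-threaded; real use would pair the SharedArrayBuffer with workers, which this minimal example omits:

```javascript
// A SharedArrayBuffer backs an Int32Array that could be handed to a
// worker; Atomics gives race-free read-modify-write on the shared cells.
const sab = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const counter = new Int32Array(sab);

Atomics.add(counter, 0, 5);            // atomic increment
Atomics.add(counter, 0, 7);
console.log(Atomics.load(counter, 0)); // 12

// Atomics.wait / Atomics.wake add futex-like blocking, the primitive
// that pthreads-style code compiled via Emscripten needs.
```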

flavio81 21 hours ago 2 replies      
What I wish ECMAScript had is true support for number types other than the default 64-bit float. I can use 32- and 64-bit integers using "asm.js", but this introduces complications of its own -- basically, having to program in a much lower-level language.

It would be nice if ECMAScript could give us a middle ground -- the ability to use 32/64-bit integers without having to go all the way down to asm.js or wasm.
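For 32-bit integers specifically, a rough middle ground already exists in plain JS: bitwise operators coerce to int32, and Math.imul (ES2015) does true 32-bit multiplication. That's essentially the trick asm.js formalizes with its `| 0` annotations; 64-bit integers remain the real gap. A sketch:

```javascript
// JS numbers are 64-bit floats, but bitwise ops coerce to 32-bit ints.
// imul32 is an illustrative wrapper, not a standard function.
function imul32(a, b) {
  // Math.imul performs genuine 32-bit integer multiplication
  return Math.imul(a | 0, b | 0);
}

console.log(imul32(0x7fffffff, 2)); // -2, wraps like a C int32_t
console.log(0x7fffffff * 2);        // 4294967294, computed as a float
```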

pi-rat 23 hours ago 5 replies      
Really hate the naming for JS standards.. ES2017, ES8, ECMA-262. Way to confuse people :/
baron816 21 hours ago 0 replies      
Regardless of what gets included in the spec, I hope people think critically about what to use and what not to use before they jump in. Just because something is shiny and new in JS, it doesn't mean you have to use it or that it's some sort of "best practice."
43224gg252 23 hours ago 7 replies      
Can anyone recommend a good book or guide for someone who knows pre-ES6 javascript but wants to learn all the latest ES6+ features in depth?
pgl 23 hours ago 2 replies      
Heres whats in it: https://github.com/tc39/proposals/blob/master/finished-propo...

And some interesting tweets by Kent C. Dodds: https://twitter.com/kentcdodds/status/880121426824630273

Edit: fixed KCD's name.
Edit #2: No, really.

drinchev 23 hours ago 1 reply      
For anyone wondering what NodeJS's support of ES8 looks like:

Everything is supported, except "Shared memory and atomics"

[1] http://node.green

speg 23 hours ago 1 reply      
Is there a "What's new" section?
correctsir 8 hours ago 0 replies      
I've been looking at the stage 2 and 3 proposals. I have a difficult time finding use for any of them except for Object spread/rest. The stage 4 template string proposal allowing invalid \u and \x sequences seems like a really bad idea to me that would inadvertently introduce programmer errors. I do hope the ECMAScript standardization folks will raise the barrier to entry for many of these questionable new features that create a maintenance burden for browsers and ES tooling and a cognitive burden on programmers. It was possible to understand 100% of ES5. I can't say the same thing for its successors. I think there should be a freeze on new features until all the browser vendors fully implement ES6 import and export.
rpedela 23 hours ago 2 replies      
Has there been any progress on supporting 64-bit integers?
jadbox 23 hours ago 1 reply      
I wish this-binding sugar would get promoted into stage 1.
gregjw 23 hours ago 1 reply      
I should really learn ES6
ascom 22 hours ago 1 reply      
Looks like ECMA's site is overloaded. Here's a Wayback Machine link for the lazy: https://web.archive.org/web/20170711055957/https://www.ecma-...
wilgertvelinga 16 hours ago 2 replies      
Really interesting how bad the only JavaScript code used on their own site is: https://www.ecma-international.org/js/loadImg.js
emehrkay 22 hours ago 2 replies      
I'd like to be able to capture object modifications like Python's magic __getattr__ __setattr__ __delattr__ and calling methods that do not exist on objects. In the meantime I am writing a get, set, delete method on my object and using those instead
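ES2015's Proxy already covers most of this -- it isn't part of ES2017, but it is the closest analogue to Python's magic methods, with `get`, `set`, and `deleteProperty` traps intercepting every property access. A minimal sketch:

```javascript
// Proxy traps play the role of Python's __getattr__ / __setattr__ /
// __delattr__: reads, writes, and deletes all go through the handler.
const handler = {
  get(target, prop) {
    // Fallback for properties/methods that do not exist
    return prop in target ? target[prop] : `no such method: ${String(prop)}`;
  },
  set(target, prop, value) {
    target[prop] = value;
    return true; // signal the assignment succeeded
  },
  deleteProperty(target, prop) {
    return delete target[prop];
  },
};

const obj = new Proxy({}, handler);
obj.x = 1;                // routed through set
console.log(obj.x);       // 1
console.log(obj.missing); // 'no such method: missing'
delete obj.x;             // routed through deleteProperty
```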
espadrine 22 hours ago 0 replies      
I made a short sum-up of changes in this specification here: http://espadrine.github.io/New-In-A-Spec/es2017/
lukasm 23 hours ago 1 reply      
What is up with decorators?
komali2 18 hours ago 0 replies      
>AWB: Alternatively we could add this to a standard Dict module.

>BT: Assuming we get standard modules?

>AWB: We'll get them.


j0e1 21 hours ago 1 reply      
> Kindly note that the normative copy is the HTML version;

Am I the only one who finds this ironic..

idibidiart 17 hours ago 0 replies      
Wait, so async generators and web streams are 2018 or 2016?
Swizec 23 hours ago 3 replies      
Time to update https://es6cheatsheet.com

What's the feature you're most excited about?

bitL 21 hours ago 2 replies      
Heh, maybe JS becomes finally usable just before WebAssembly takes off, rendering it obsolete :-D
cies 23 hours ago 2 replies      
Nice 90s style website ECMA!
Math education: Its not about numbers, its about learning how to think nwaonline.com
509 points by CarolineW  1 day ago   309 comments top 50
d3ckard 1 day ago 17 replies      
Maybe I'm wrong, but I have always believed that if you want people to be good at math, it's their first years of education which are important, not the last ones. In other words, the push for STEM should be present in kindergartens and elementary schools. By the time people get to high school it is too late.

I never had any problems with math until I went to university, so I was merely a passive observer of everyday struggle for some people. I honestly believe that foundations are the key. Either you're taught to think critically, see patterns and focus on the train of thought, or you focus on numbers and memorization.

The latter obviously fails at some point, in many cases sufficiently late to make it really hard to go back and relearn everything.

Math is extremely hierarchical and I believe schools do not do enough to make sure students are on the same page. If we want to fix teaching math, I would start there, instead of working on motivation and general attitude. Those are consequences, not the reasons.

gusmd 23 hours ago 3 replies      
I studied Mechanical Engineering, and it was my experience that several professors are only interested in having the students learn how to solve problems (which in the end boil down to math and applying equations), instead of actually learning the interesting and important concepts behind them.

My wife went to school for Architecture, where she learned "basic" structural mechanics and some Calculus, but still cannot explain to me in simple words what an integral or a derivative is. Not her fault at all: her Calculus professor had them calculate polynomial derivatives for 3 months, without ever making them understand the concept of "rate of change", or what "infinitesimal" means.

For me that's a big failure of our current "science" education system: too much focus on stupid application of equations and formulas, and too little focus on actually comprehending the abstract concepts behind them.

Tommyixi 2 hours ago 0 replies      
For me, math has always been a source of unplugging. I'd sit at my kitchen table, put in some headphones, and just get lost in endless math problems.

Interestingly, now as a masters student in a statistics graduate program, I've learned that I don't like "doing" math but get enjoyment from teaching it. I really like it when students challenge me when I'm at the chalkboard and I'll do anything for those "ah-ha!" moments. The best is at the end of the semester hearing students say "I thought this class was going to suck but I worked hard and am proud of the work I did." I'm hoping that on some small scale I'm shaping their views on math. Or at least give them the confidence to say, "I don't get this, but I'm not afraid to learn it."

Koshkin 1 day ago 9 replies      
Learning "how to think" is just one part of it. The other part - the one that makes it much more difficult for many, if not most, people to learn math - especially the more abstract branches of it - is learning to think about math specifically. The reason is that mathematics creates its own universe of concepts and ideas, and this universe, all these notions are so different from what we have to deal with every day that learning them takes a lot of training, years of intensive experience dealing with mathematical structures of one kind or another, so it should come as no surprise that people have difficulty learning math.
spodek 1 day ago 1 reply      
> it's about learning how to think

It's about learning a set of thinking skills, not how to think. Many people who know no math can think and function very well in their domains and many people who know lots of math function and think poorly outside of math.

J_Sherz 23 hours ago 2 replies      
My problem with Math education was always that speed was an enormous factor in testing. You can methodically go through each question aiming for 100% accuracy and not finish the test paper, while other students can comfortably breeze through all the questions and get 80% accuracy but ultimately score higher on the test. This kind of penalizing for a lack of speed can lead to younger kids who are maximizing for grades to move away from Math for the wrong reasons.

Source: I'm slow but good at Math and ended up dropping it as soon as I could because it would not get me the grades I needed to enter a top tier university.

mindcrime 13 hours ago 1 reply      
This part really resonates with me as well:

"You read all the time, right? We constantly have to read. If you're not someone who picks up a book, you have to read menus, you've got to read traffic signs, you've got to read instructions, you've got to read subtitles -- all sorts of things. But how often do you have to do any sort of complicated problem-solving with mathematics? The average person, not too often."

From this, two deductions:

Having trouble remembering the quadratic equation formula doesn't mean you're not a "numbers-person."

To remember your math skills, use them more often.

What I remember from high-school and college was this: I'd take a given math class (say, Algebra I) and learn it reasonably well. Then, summer vacation hits. Next term, taking Algebra II, all the Algebra I stuff is forgotten because, well, who uses Algebra I over their summer vacation? Now, Algebra II is harder than it should be because it builds on the previous stuff. Lather, rinse, repeat.

This is one reason I love Khan Academy so much. You can just pop over there anytime and spend a few minutes going back over stuff at any level, from basic freaking fractions, up through Calculus and Linear Algebra.

BrandiATMuhkuh 23 hours ago 0 replies      
Disclaimer: I'm CTO of https://www.amy.ac an online math tutor.

From our experience most people struggle with math because they forgot or missed a certain math skill they might have learned a year or two before. But most teaching methods only tell the students to practise more of the same. When looking at good tutors, we could see that a tutor observes a student and then teaches them the missing skill before they actually go to the problem the student wanted help with. That seems to be a useful, working approach.

Nihilartikel 23 hours ago 0 replies      
This is something I've been pondering quite a bit recently. It is my firm belief that mathematical skill and general numeracy are actually a small subset of abstract thought. Am I wrong in thinking that school math is the closest to deliberate training in abstract reasoning that one would find in public education?

Abstract reasoning, intuition, and creativity, to me, represent the underpinnings of software engineering, and really, most engineering and science, but are taught more by osmosis alongside the unintuitive, often boring mechanics of subjects. The difference between a good engineer of any sort and one that 'just knows the formulas' is the ability to fluently manipulate and reason with symbols and effects that don't necessarily have any relation or simple metaphor in the tangible world. And taking it further, creativity and intuition beyond dull calculation are the crucial art behind choosing the right hypothesis to investigate. Essentially, learning to 'see' in this non-spatial space of relations.

When I'm doing system engineering work, I don't think in terms of X Gb/s throughput and Y FLOPS... (until later at least) but in my mind I have a model of the information and data structures clicking and buzzing, like watching the gears of a clock, and I sort of visualize working with this, playing with changes. It wouldn't surprise me if most knowledge workers have arrived at similar mental models of their own. But what I have observed is that people who have trouble with mathematics or coding aren't primed at all to 'see' abstractions in their mind's eye. This skill takes years to cultivate, but it seems that its cultivation is left entirely to chance by orthodox STEM education.

I was just thinking that this sort of thing could be approached a lot more deliberately and could yield very broad positive results in STEM teaching.

quantum_state 18 hours ago 0 replies      
Wow ... this blows me away ... in a few short hours, so many people chimed in sharing thoughts ... It is great ... I would like to share mine as well.

Fundamentally, math to me is like a language. It's meant to help us describe things a bit more quantitatively and to reason a bit more abstractly and consistently ... if it can be made mechanical and reduce the burden on one's brain, it would be ideal. Since it's like a language, as long as one knows the basics, such as some basic things of set theory, functions, etc., one should be ready to explore the world with it. Math is often perceived as a set of concepts, theorems, rules, etc. But if one gets behind the scenes to learn some of the original stories behind these things, it becomes very natural. At some point, one would have one's mind liberated and start to use math, or create math, like we usually do with day-to-day languages such as English.
jeffdavis 22 hours ago 2 replies      
My theory is that math anxiety is really anxiety about a cold assessment.

In other subjects you can rationalize to yourself in various ways: the teacher doesn't like me, or I got unlucky and they only asked the history questions I didn't know.

But with math, no rationalization is possible. There's no hope the teacher will go easy on you, or be happy that you got the gist of the solution.

Failure in math is often (but not always) a sign that education has failed in general. Teachers can be lazy or too nice and give good grades in art or history or reading to any student. But when the standardized math test comes around, there's no hiding from it (teacher or student).

tnone 6 hours ago 0 replies      
Is there any other subject that is given as much leeway for its abysmal pedagogical failures?

"Economics, it's not about learning how money and markets work, it's about learning how to think."

"Art, it's not about learning about aesthetics, style, or technique, it's about learning how to think."

"French, it's not about learning how to speak another language, it's..."

Math has a problem, and it's because the math curriculum is a pile of dull, abstract cart-before-the-horse idiocy posing as discipline.

g9yuayon 23 hours ago 2 replies      
Is this a US thing? Why would people still think that math is about numbers? Math is about patterns, which got drilled into us by our teachers in primary school. I really don't understand how the US education system can fuck up so badly on a fundamental subject like math.
alistproducer2 13 hours ago 1 reply      
I can't agree more. Math is about intuition of what the symbols are doing. In the case of functions, intuition about how the symbols are transforming the input. I've always thought I was "bad at math." It wasn't until my late 20's when I took it upon myself to get better at calculus and I used "Calculus Success in 20 Minute a Day[0]" did I finally realize why I was "bad" at it; I never understood what I was doing.

That series of books really put intuition at the forefront. I began to realize that the crazy symbols and formulas were stand-ins for living, breathing dynamic systems: number transformers. Each formula and symbol represented an action. Once I understood math as a way to encode useful number transformations, it all clicked. Those rules and functions were encoded after a person came up with something they wanted to do. The formula or function is merely a compact way of describing this dynamic system to other people.

The irony was I always thought math was boring. In retrospect it was because it was taught as if it had no purpose other than to provide useless mental exercise. Once I started realizing that derivatives are used all around me to do cool shit, I was inspired to learn how they worked because I wanted to use them to do cool shit too. I went through several years of math courses and none of them even attempted to tell me that math was just a way to represent cool real world things. It took a $10 used book from amazon to do that. Ain't life grand?
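That "number transformer" framing is easy to make concrete: a derivative takes a function and hands back a new function. A sketch using a numerical central-difference approximation; the helper name and step size are just illustrative:

```javascript
// derivative(f) returns a new function approximating f'(x),
// using a central difference with a small step h.
function derivative(f, h = 1e-6) {
  return x => (f(x + h) - f(x - h)) / (2 * h);
}

const square = x => x * x;        // f(x) = x^2
const slope = derivative(square); // f'(x) = 2x, approximately

console.log(slope(3)); // close to 6
```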


ouid 22 hours ago 0 replies      
When people talk about the failure of mathematics education, we often talk about it in terms of the students inability to "think mathematically".

It's impossible to tell if students are capable of thinking mathematically, however, because I have not met a single (non-mathlete) student who could give me the mathematical definition of... anything. How can we evaluate student's mathematical reasoning ability if they have zero mathematical objects about which to reason?

brendan_a_b 19 hours ago 0 replies      
My mind was blown when I came across this Github repo that demonstrates mathematical notation by showing comparisons with JavaScript code https://github.com/Jam3/math-as-code

I think I often struggled or was intimidated by the syntax of math. I started web development after years of thinking I just wasn't a math person. When looking at this repo, I was surprised at how much more easily and naturally I was able to grasp concepts in code compared to being introduced to them in math classes.

monic_binomial 14 hours ago 1 reply      
I was a math teacher for 10 years. I had to give it up when I came to realize that "how to think" is about 90% biological and strongly correlated to what we measure with IQ tests.

This may be grave heresy in the Temple of Tabula Rasa where most education policy is concocted, but nonetheless every teacher I ever knew was ultimately forced to choose between teaching a real math class with a ~30% pass rate or a watered-down math Kabuki show with a pass rate just high enough to keep their admins' complaints to a low grumble.

In the end we teachers would all go about loudly professing to each other that "It's not about numbers, it's about learning how to think" in a desperate bid to quash our private suspicions that there's actually precious little that can be done to teach "how to think."

jtreagan 20 hours ago 0 replies      
You say "it's not about numbers, it's about learning how to think," but the truth is it's about both. Without the number skills and the memorization of all those number facts and formulas, a person is handicapped both in learning other subjects and skills and in succeeding and progressing in their work and daily life. The two concepts -- number skills and thinking skills -- go hand in hand. Thinking skills can't grow if the number skills aren't there as a foundation. That's what's wrong with the Common Core and all the other fads that are driving math education these days. They push thinking skills and shove a calculator at you for the number skills -- and you stall, crash and burn.

The article brings out a good point about math anxiety. I have had to deal with it a lot in my years of teaching math. Sometimes my classroom has seemed so full of math anxiety that you could cut it with a butter knife. I read one comment that advocated starting our children out even earlier on learning these skills, but the truth is the root of math anxiety in most people lies in being forced to try to learn it at too early an age. Most children's brains are not cognitively developed enough in the early grades to learn the concepts we are pushing at them, so when a child finds failure at being asked to do something he/she is not capable of doing, anxiety results and eventually becomes habit, a part of their basic self-concept and personality. What we should instead do is delay starting school until age 8 or even 9. Some people don't develop cognitively until 12. Sweden recently raised their mandatory school age to 7 because of what the research has been telling us about this.

taneq 1 day ago 6 replies      
As my old boss once said, "never confuse mathematics with mere arithmetic."
dbcurtis 22 hours ago 0 replies      
Permit me to make a tangentially related comment of interest to parents reading this thread: This camp for 11-14 y/o kids: http://www.mathpath.org/ is absolutely excellent. My kid loved it so much they attended three years. Great faculty... John Conway, Francis Su, many others. If you have a math-loving kid of middle-school age, I encourage you to check it out.
katdev 9 hours ago 0 replies      
You know what helps kids (and adults) learn math? The abacus/soroban. Yes, automaticity with math facts/basic math is important but what's really important is being able to represent the base-10 system mentally.

The abacus is an amazing tool that's been successful in creating math savants - here's the world champion adding 10 four-digit numbers in 1.7 seconds using mental math https://www.theguardian.com/science/alexs-adventures-in-numb...

Students are actually taught how to think of numbers in groups of tens, fives, ones in Common Core math -- however, most are not given the abacus as a tool/manipulative.

simias 1 day ago 1 reply      
I completely agree. I think we start all wrong too, the first memories I have of maths at school was learning how to compute an addition, a subtraction and later a multiplication and division. Then we had to memorize by heart the multiplication tables.

That can be useful of course (especially back then when we didn't carry computers in our pockets at all times) but I think it sends some pupils on a bad path with regards to mathematics.

Maths shouldn't be mainly about memorizing tables and "dumbly" applying algorithms without understanding what they mean. That's how you end up with kids who can answer "what's 36 divided by 4" but not "you have 36 candies that you want to split equally with 3 other people, how many candies do you end up with?"

And that goes beyond pure maths too. In physics if you pay attention to the relationship between the various units you probably won't have to memorize many equations, it'll just make sense. You'll also be much more likely to spot errors. "Wait, I want to compute a speed and I'm multiplying amperes and moles, does that really make sense?".

jrells 23 hours ago 0 replies      
I often worry that mathematics education is strongly supported on the grounds that it is about "learning how to think", yet the way it is executed rarely prioritizes this goal. What would it look like if math curriculum were redesigned to be super focused on "learning how to think"? Different, for sure.
alexandercrohde 20 hours ago 0 replies      
Enough "I" statements already. It's ironic how many people seem to think their personal experience is somehow relevant on a post about "critical thinking."

The ONLY sane way to answer these questions:

- Does math increase critical thinking?
- Does critical thinking lead to more career earnings/happiness/etc?
- When does math education increase critical thinking most?
- What kind of math education increases critical thinking?

Is with a large-scale research study that defines an objective way to measure critical thinking and controls for relevant variables.

Meaning you don't get an anecdotal opinion on the matter on your study-of-1 no-control-group no-objective-measure personal experience.

andyjohnson0 7 hours ago 0 replies      
A couple of years ago I did the Introduction to Mathematical Thinking course on Coursera [1]. Even though I found it hard, I enjoyed it and learned a lot, and I feel I got some insight into mathematical thought processes. Recommended.

[1] https://www.coursera.org/learn/mathematical-thinking

WheelsAtLarge 17 hours ago 0 replies      
True, Math is ultimately about how to think but students need to memorize and grasp the basics in addition to making sure that new material is truly understood. That's where things fall apart. We are bombarded with new concepts before we ultimately know how to use what we learned. How many people use imaginary numbers in their daily life? Need I say more?

We don't communicate in math jargon every day, so it's ultimately a losing battle. We learn new concepts but we lose them since we don't use them. Additionally, a large number of students get lost and frustrated and finally give up, which ultimately makes math a poor method for teaching thinking, since only a few students attain the full benefits.

Yes, math is important and needs to be taught, but if we want to use it as a way to learn how to think, there are better methods. Programming is a great one: students can learn it in one semester, use it for life, and expand on what they already know.

Also, exploring literature and discussing what the author tries to convey is a great way to learn how to think. All those hours in English class trying to interpret what the author meant were more about exploring your own mind and your peers' thoughts than about what the author actually meant. The author lost his sphere of influence once the book was published; it's up to the readers of every generation to interpret the work. So literature is a very strong way to teach students how to think.

lordnacho 1 day ago 4 replies      
I think a major issue with math problems in school is that they're obvious.

By that I don't mean it's easy. But when you're grappling with some problem, whatever it is, eg find some angle or integrate some function, if you don't find the answer, someone will show you, and you'll think "OMG why didn't I think of that?"

And you won't have any excuses for why you didn't think of it. Because math is a bunch of little logical steps. If you'd followed them, you'd have gotten everything right.

Which is a good reason to feel stupid.

But don't worry. There are things that mathematicians, real ones with PhDs, will discover in the future. By taking a number of little logical steps that haven't been taken yet. They could have gone that way towards the next big theorem, but they haven't done it yet for whatever reason (eg there's a LOT of connections to be made).

bojo 5 hours ago 0 replies      
When I first saw it I thought the sign in the mentioned tweet may have been because the deli was next to a mathematics department and the professors/students would stand around and hold up the line while discussing math.

Overactive imagination I guess.

dahart 1 day ago 4 replies      
I wonder if a large part of our math problem is our legacy fixation on Greek letters. Would math be more approachable to English speakers if we just used English?

I like to think about math as language, rather than thought or logic or formulas or numbers. The Greek letters are part of that language, and part of why learning math is learning a completely foreign language, even though so many people who say they can't do math practice mathematical concepts without Greek letters. All of the math we do on computers, symbolic and numeric, analytic and approximations, can be done using a Turing machine that starts with only symbols and no built-in concept of a number.

lucidguppy 14 hours ago 0 replies      
Why aren't people taught how to think explicitly? The Greeks and the Romans thought it was a good idea.
Mz 13 hours ago 0 replies      
Well, I actually liked math and took kind of a lot of it in K-12. I was in my 30s before I knew there were actual applications for some of the things I memorized my way through without really understanding.

When I homeschooled my sons, I knew this approach would not work. My oldest has trouble with numbers, but he got a solid education in the concepts. He has a better grasp of things like GIGO than most folks. We also pursued a stats track (at their choice) rather than an algebra-geometry-trig track.

Stats is much more relevant to life for most people most of the time and there are very user-friendly books on the topic, like "How to lie with statistics." If you are struggling with this stuff, I highly recommend pursuing something like that.

listentojohan 1 day ago 0 replies      
The true eye-opener for me was reading Number: The Language of Science by Tobias Dantzig. The philosophy part of math as an abstraction layer for what is observed or deduced was a nice touch.
GarvielLoken 5 hours ago 0 replies      
tl;dr: A couple of numbers-nerds are sad and offended that math is not as recognized as reading and literature, where there are great works that speak of the human condition and illustrate life.

Also they have the mandatory "everything is really math!" angle: "LeGrand notes that dancing and music are mathematics in motion. So ... dance, play an instrument."

Just because I can describe history through the perspective of capitalism or Marx's theories does not make history the same thing as either of those.

jmml97 1 day ago 1 reply      
I'm studying math right now and I have that problem. Theorems and propositions are just vomited at us in class instead of making us think. There's not a single subject dedicated to learning the process of thinking in maths. So I think we're learning the wrong (the hard) way.
keymone 1 day ago 1 reply      
i always found munging numbers and memorizing formulas discouraging. i think physics classes teach kids more math than math classes and in more interesting ways (or at least have potential to).
JoshTriplett 1 day ago 0 replies      
One of the most critical skills I see differentiating people around me (co-workers and otherwise) who succeed and those who don't is an analytical, pattern-recognizing and pattern-applying mindset. Math itself is quite useful, but I really like the way this particular article highlights the mental blocks and misconceptions that seem to particularly crop up around mathematics; those same blocks and misconceptions tend to get applied to other topics as well, just less overtly.
cosinetau 22 hours ago 0 replies      
As someone with a degree in applied mathematics, I feel the problem with learning mathematics is more often than not a fault of the instructor.

Many instructors approach the subject with a very broad understanding of the subject, and it's very difficult (more difficult than math) to shake that understanding and abstract it to understandable chunks of knowledge or reasoning.

archeantus 22 hours ago 0 replies      
If we want to teach people how to think, I propose that math isn't the best way to do it. I can't tell you how many times I complained about how senseless math was. The real-world application is very limited, for the most part.

Contrast that to if I had learned programming instead. Programming definitely teaches you how to think, but it also has immense value and definite real-world application.

k__ 20 hours ago 0 replies      
I always had the feeling I failed to grasp math because I never got good at mid level things.

It took me reeeally long to grasp things like linear algebra and calculus and I never was any good at it.

It was a struggle to get my CS degree.

Funny thing is, I'm really good at the low level elementary school stuff so most people think I'm good at math...

CoolNickname 21 hours ago 0 replies      
School is not about learning but learning how to think. The way it is now it's more about showing off than it is about anything actually useful. They don't reward effort, they reward talent.
humbleMouse 20 hours ago 0 replies      
On a somewhat related tangent, I think about programming the same way.

I always tell people programming and syntax are easy - it's learning to think in a systems and design mindset that is the hard part.

crb002 1 day ago 2 replies      
Programming needs to be taught alongside Algebra I. Especially in a language like Haskell or Scheme where algebraic refactoring of type signatures looks like normal algebra notation.
calebm 21 hours ago 0 replies      
I agree, but have a small caveat: math does typically strongly involve numbers, so in a way, it is about numbers, though it's definitely not about just memorizing things or blindly applying formulas.

It just bugs me sometimes when people make hyperbolic statements like that. I remember coworkers saying things like "software consulting isn't about programming". Yes it is! The primary skill involved is programming, even if programming is not the ONLY required skill.

yellowapple 23 hours ago 0 replies      
I wish school curricula would embrace that "learning how to think" bit.

With the sole exception of Geometry, every single math class I took in middle and high school was an absolutely miserable time of rote memorization and soul-crushing "do this same problem 100 times" busy work. Geometry, meanwhile, taught me about proofs and theorems v. postulates and actually using logical reasoning. Unsurprisingly, Geometry was the one and only math class I ever actually enjoyed.

gxs 15 hours ago 0 replies      
Late to the party but wanted to share my experience.

I was an Applied Math major at Berkeley. Why?

When I was in 7th grade, I had an old school Russian math teacher. She was tough, not one for niceties, but extremely fair.

One day, being the typical smart ass that I was, I said, why the hell do I need to do this, I have 0 interest in Geometry.

Her answer completely changed my outlook and eventually was the reason why I took extensive math in HS and majored in math in college.

Instead of dismissing me, instead of just telling me to shut up and sit down, she explained things to me very calmly.

She said doing math, beyond improving your math skills, improves your reasoning ability. It's a workout for your brain and helps develop your logical thinking. Studying it now at a young age will help it become part of your intuition so that in the future you can reason about complex topics that require more than a moment's thought.

She really reached me on that day, took me a while to realize it. Wish I could have said thank you.

Wherever you are Ms. Zavesova, thank you.

Other benefits: doing hard math really builds up your tolerance for hard problems. Reasoning through long problems, trying and failing, really requires a certain kind of stamina. My major definitely gave me this. I am a product manager now, and while I don't code, I have an extremely easy time working with engineers to get stuff done.

0xFFC 1 day ago 0 replies      
Exactly. As an ordinary hacker I was always afraid of math. But after taking mathematical analysis I realized how wonderful math is. These days I am in love with pure mathematics. It literally corrected my brain pipeline in so many ways, and it continues to do so further and further.

I have thought about changing my major to pure mathematics too.

pklausler 21 hours ago 0 replies      
How do you "learn to think" without numbers?


EGreg 22 hours ago 0 replies      
There just needs to be faster feedback than once per test.


yequalsx 23 hours ago 3 replies      
I teach math at a community college. I've tried many times to teach my courses in such a way that understanding the concepts and thinking were the goals. Perhaps I'm jaded by the failures I encountered but students do not want to think. They want to see a set of problem types that need to be mimicked.

In our lowest level course we teach beginning algebra. Almost everyone has an intuition that 2x + 3x should be 5x. It's very difficult to get them to understand that there is a rule for this that makes sense. And that it is the application of this rule that allows you to conclude that 2x + 3x is 5x. Furthermore, and here is the difficulty, that same rule is why 3x + a x is (3+a)x.
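The rule in question is distributivity: b·x + c·x = (b + c)·x. A quick illustration with sympy (assumed installed; this is just a check, not part of the course described) shows both instances are the same rule:

```python
# Verifying both instances of the distributive rule b*x + c*x == (b + c)*x
# with sympy; the point is that the numeric and symbolic cases are identical.
from sympy import symbols, simplify, factor

x, a = symbols("x a")

assert simplify(2*x + 3*x - 5*x) == 0                 # 2x + 3x = 5x
assert simplify(factor(3*x + a*x) - (3 + a)*x) == 0   # 3x + ax = (3 + a)x
```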

I believe that for most people mathematics is just brainwashing via familiarity. Most people end up understanding math by collecting knowledge about problem types, tricks, and becoming situationally aware. Very few people actually discover a problem type on their own. Very few people are willing, or have been trained to be willing, to really contemplate a new problem type or situation.

Math education in its practice has nothing to do with learning how to think. At least in my experience and as I understand what it means to learn how to think.

bitwize 22 hours ago 0 replies      
Only really a problem in the USA. In civilized countries, there's no particular aversion to math or to disciplined thinking in general.
Cloudflares fight with a patent troll could alter the game techcrunch.com
649 points by Stanleyc23  21 hours ago   230 comments top 32
jgrahamc 19 hours ago 3 replies      
More detail on what we are doing from three blog posts:

Standing Up to a Dangerous New Breed of Patent Trollhttps://blog.cloudflare.com/standing-up-to-a-dangerous-new-b...

Project Jengohttps://blog.cloudflare.com/project-jengo/

Patent Troll Battle Update: Doubling Down on Project Jengohttps://blog.cloudflare.com/patent-troll-battle-update-doubl...

JumpCrisscross 18 hours ago 4 replies      
I've used Latham & Watkins. Just made a call to let a partner there know what I think about his firm's alumna and how it colors my opinion of him and his firm.

Encourage everyone to check with your firm's General Counsel about this. If you use Latham, or Kirkland or Weil, encourage your GC to reach out and make your views heard. It's despicable that these lawyers are harassing their firms' former and potential clients.

drtillberg 11 minutes ago 0 replies      
This is a dysfunction in the patent and legal processes that cannot be fixed by even more dysfunctional tactics deployed against the NPE. The rules against champerty (buying a cause of action) have been relaxed considerably, to the extent in many jurisdictions of being a dead letter, and the litigation financing industry seems to have a better sound bite.

At least half of the problem is the "American Rule" of rarely shifting legal fees, which if you dig a bit you will find is of recent vintage. Back in time, for example in Massachusetts, there actually is a law for shifting legal fees as costs as a matter of course; the catch is that the fee is very low (even at the time it was enacted) of about $2.50 per case, which partly reflects inflation and partly antagonism toward legal fees.

I wonder whether a compromise solution would be to require a deposit for costs of a percentage of the demand for recovery like 2.5% of $34mm, which post-suit you could figure how to divvy up. That would make the demand more meaningful, and provide a tangible incentive to the plaintiff to think a little harder about pricing low-probability lottery-ticket-type litigation.

notyourday 20 hours ago 3 replies      
It is all about finding a correct pressure point.

Long time ago certain Philadelphia area law firms decided to represent vegan protesters that created a major mess in a couple of high end restaurants.

A certain flamboyant owner of one of the restaurants targeted decided to have a good time applying his version of asymmetric warfare. The next time partners from those law firms showed up to wine and dine their clients, the establishment(s) politely refused them service, to the utter horror of the lawyers.

Needless to say, the foie gras won...

[Edit: spelling]

tracker1 19 hours ago 2 replies      
I think that this is absolutely brilliant. I've been against patents on general ideas and basic processes for a very long time. Anything in software should not really be patentable: unless there is a concrete implementation of an invention, it's not an invention, it's a set of instructions.

Let software work under trade secrets, but not patents. Anyone can implement something they think through. It's usually a clear example of a need. That said, I think the types of patent trolling law firms such as this deserve every bit of backlash against them that they get.

avodonosov 13 hours ago 6 replies      
It was late summer night when I noticed that article on HN. I immediately noticed it's organized like a novel - this popular lame style which often annoys me lately:

 Matthew Prince knew what was coming. The CEO of Cloudflare, an internet security company and content delivery network in San Francisco, was behind his desk when the emails began to trickle in ...
Was he really behind his desk?

Hesitated a little before posting - am I trying to self-assert by deriding others? But this "novel" article style is some new fashion / cliche which might be interesting to discuss. Let's see what others think.

siliconc0w 21 hours ago 2 replies      
I'm not a fan of the argument that if Blackbird weren't a NPE it'd be okay because Cloudflare could then aim its 150-strong patent portfolio cannon back at them. It's basically saying incumbents like Cloudflare don't really want to fix the system; they want to keep the untenable 'cold war' status quo which protects them but burdens new entrants.
oskarth 19 hours ago 5 replies      
> So-called non-practicing entities or holders of a patent for a process or product that they don't plan to develop often use them to sue companies that would sooner settle rather than pay what can add up to $1 million by the time a case reaches a courtroom.

Why on earth aren't non-practicing-entity patent lawsuits outlawed? Seems like a no-brainer, and I can't imagine these firms being big enough to have any serious lobbying power.

tragomaskhalos 3 hours ago 0 replies      
This reminds me of an altercation in the street that my neighbour reported overhearing some years ago:

Aggressive Woman: You need to watch your step, my husband is a criminal lawyer

Woman she was trying to intimidate: (deadpans) Aren't they all ?

mabbo 20 hours ago 2 replies      
> "[Is Blackbird] doing anything that is illegal or unethical?" continues Cheng. "For the most part, it's unethical. But it's probably not illegal."

If it's not illegal, more work needs to be done to make it illegal. Inventors always have avenues, moreso today than ever before.

corobo 2 hours ago 0 replies      
I'm hoping their fight actually leads to a defeat rather than a submission. I have faith that Cloudflare will see this through but I also had faith that Carolla would too.


mgleason_3 13 hours ago 2 replies      
We need to get rid of software patents. Patents were created to encourage innovation. Software patents simply reward the first person who patents what is almost always an obvious next step. That's not innovation.
anonjuly12 3 hours ago 0 replies      
> It's for this reason that Prince sees Cloudflare's primary mission as figuring out how to increase Blackbird's costs. Explains Prince, "We thought, if it's asymmetric, because it's so much cheaper for Blackbird to sue than for a company to defend itself, how can we make it more symmetric? And every minute that they spend having to defend themselves somewhere else is a minute they aren't suing us or someone else."

They should take it a step further and apply the Thiel strategy of finding people with grievances against the founders of the patent troll and support individual lawsuits against them.

bluesign 2 hours ago 0 replies      
Tbh I don't think there is a practical solution for patent trolls.

Patents are basically assets, and they are transferable.

Making them non-transferable is not a solution at all. Basically law firms can represent patent owners.

The system needs differing validity terms for patents, set after an evaluation and open to challenge in the courts.

Putting all patents in the same basket is plain stupid.

shmerl 20 hours ago 2 replies      
Someone should figure out a way how to put these extortionists in prison for protection racket.
ovi256 21 hours ago 4 replies      
I've noticed a Techcrunch comment that makes this fight about software patents and states that forbidding them would be a good solution. I think that's a very wrong view to take. The software patent fight is worth fighting, but do not conflate the two issues. Abuse by patent trolls or non-practicing entities can happen even without software patents.

The law patch that shuts down patent trolls will have no effect on software patents, and vice-versa.

bluejekyll 10 hours ago 0 replies      
Something needs to give on this stuff. It's probably going to be hard to get a significant change done, such as getting rid of software patents (following from no patents on Math).

I've wondered if one way to chip away at them would be to make patents non-transferable. This would preserve the intent, to protect the inventor's R&D costs, but not allow the patents to be exploited by trolls. This would have the effect of devaluing patents themselves, but it's not clear that patents were ever intended to carry direct value; rather, they exist to grant temporary monopolies for the inventor to earn back the investment.

avodonosov 14 hours ago 0 replies      
I've read the patent. But what part of CloudFlare services it claims to cover?

Also, the patent applies the same way to almost any proxy server (ICAP and similar https://en.wikipedia.org/wiki/Internet_Content_Adaptation_Pr...)

fhrow4484 17 hours ago 1 reply      
What is the state of "anti-patent-troll" laws in different states? I know for instance Washington state has had a law like this in effect since July 2015 [1][2]. What is it like in other states, specifically California?

[1] http://www.atg.wa.gov/news/news-releases/attorney-general-s-...

[2] http://app.leg.wa.gov/RCW/default.aspx?cite=19.350&full=true

redm 19 hours ago 0 replies      
It would be great if the "game" was really altered but I've heard that statement and hope many times over the last 10 years. While there has been some progress, patent trolling continues. Here's hoping...
FussyZeus 21 hours ago 3 replies      
I've never heard a good argument against this, so I'll say it here: require that the plaintiff in these cases show demonstrable, actual, and quantifiable loss from the activity of the defendant. It seems like such a no-brainer that a business suing for damage to its business prospects after someone stole their idea would have to actually show how it was damaged. Even allowing very flimsy evidence would do a lot to dissuade most trolls, because as every article points out, they don't make anything. And if they don't make or sell a product, then patent or not, they haven't lost anything or been damaged in any way.
arikrak 14 hours ago 0 replies      
Business usually settle rather than fight patent trolls, but I wonder if fighting is worth it if it can deter others from suing them in the future? I guess it depends somewhat on the outcome of the case..
avodonosov 13 hours ago 0 replies      
Can the Decorator design pattern be considered a prior art?
SaturateDK 19 hours ago 0 replies      
This is great, I guess I'm going "Prior art searching" right away.
danschumann 20 hours ago 0 replies      
Can I create 5 more HN accounts just to +1 this some more?
kelukelugames 20 hours ago 1 reply      
I'm in tech but not in the valley. How accurate is HBO's representation of patent trolls?
unityByFreedom 17 hours ago 0 replies      
> Blackbird is a new, especially dangerous breed of patent troll... Blackbird combines both a law firm and intellectual property rights holder into a single entity. In doing so, they remove legal fees from their cost structure and can bring lawsuits of potentially dubious merit without having to bear any meaningful cost

That's not new. It's exactly what Intellectual Ventures was (or is?) doing.

y0ssar1an 12 hours ago 0 replies      
Go Cloudflare Go!
draw_down 18 hours ago 0 replies      
Unfortunately, I think this is written in a way that makes it hard to understand what exactly Cloudflare is doing against the troll. They're crowdsourcing prior art and petitioning the USPTO?
dsfyu404ed 20 hours ago 1 reply      
subhrm 20 hours ago 1 reply      
Long live patents !
ivanbakel 21 hours ago 3 replies      
I don't see anything game-changing about their approach. Fighting instead of settling should definitely be praised, but the only differences between this legal challenge and any of the previous ones are the result of recent changes in the law or the judiciary, which are beyond Cloudflare's control. Nothing suggests that patent-trolling itself as a "game" is going to shift or go away after this, and until that is made to happen, it's going to be as lucrative as ever.
Counterintuitive problem: People in a room keep giving dollars to random others decisionsciencenews.com
548 points by aqsalose  3 days ago   238 comments top 55
jordigh 2 days ago 9 replies      
This is an irreducible Markov chain (i.e. it's possible to go from any state to any other with positive probability in a certain number of steps) on a finite state space (because money never enters or leaves the system). This means that every state is recurrent, i.e. will happen if you wait long enough.

This is the relevant theorem:


This means you should eventually see each person hoard all of the money in turn. Try it with 5 people and 5 dollars each. Then the system only has 29 choose 25 = 23751 states[1], which should give your computer enough time to hit them all.

Edit: Here, Python "proof"

  from random import randint

  numpeople = 5
  initmoney = 5
  ledger = [initmoney] * numpeople

  while True:
      # Everyone with money picks a random person other than themselves.
      recipients = []
      for i in range(numpeople):
          if ledger[i] == 0:
              recipient = None
          else:
              recipient = randint(0, numpeople - 2)
              if recipient >= i:   # skip over yourself
                  recipient += 1
          recipients.append(recipient)
      # Apply all the transfers simultaneously.
      for i, recipient in enumerate(recipients):
          if recipient is not None:
              ledger[i] -= 1
              ledger[recipient] += 1
      # Report when one person holds all the money but $1.
      if max(ledger) == initmoney * numpeople - 1:
          print(ledger)

[1] https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)...
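The state count above can be checked directly with stars and bars (a quick sanity check, not part of the original comment):

```python
# Distributing 25 indistinguishable dollars among 5 people is a
# stars-and-bars problem: C(25 + 5 - 1, 5 - 1) = C(29, 4) = C(29, 25).
from math import comb

dollars, people = 25, 5
states = comb(dollars + people - 1, people - 1)
assert states == comb(29, 25) == 23751
```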

AnthonyMouse 3 days ago 3 replies      
This is not surprising. Random doesn't mean homogeneous. Notice that in this model there are rich people and poor people, but advance many iterations and the rich people aren't the same people as they were.

If you want to see the real problem, don't let the people with no money get away with not paying the dollar. Make them borrow it with interest from someone with money.

jldugger 2 days ago 2 replies      
IMO, this reveals more about human intuition regarding randomness than whatever financial point it purports to make. It's still a useful point though.

Reminds me of the load balancing literature. There, the explicit goal is to evenly divide the burden across your fleet of servers, and having wide distributions is a problem on both ends: you're paying for servers to sit idle, and some are over burdened and giving customers a bad experience (high pageload times).

By way of illustration, I took the code and made a simple modification to it, implementing power of 2 random choice (http://www.eecs.harvard.edu/~michaelm/postscripts/tpds2001.p...).

Here's the video result: https://www.youtube.com/watch?v=94Vc7gf3ONY Much tighter distribution, though you need to be able to identify the size of people's bank accounts. In this model, it's very rare for anyone to give the richest anything, unless you magically choose two people randomly tied for richest.
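The modified code isn't shown in the comment, but the idea can be sketched roughly like this (a guess at the rule, with arbitrary parameters): each giver samples two random other people and hands the dollar to the poorer of the two.

```python
# "Power of two random choices" variant of the dollar game: give to the
# poorer of two randomly chosen people, which tightens the distribution.
import random

def two_choice_game(people=100, dollars=100, rounds=2000, seed=0):
    rng = random.Random(seed)
    wealth = [dollars] * people
    # Precompute each person's list of possible recipients (everyone else).
    others = {i: [j for j in range(people) if j != i] for i in range(people)}
    for _ in range(rounds):
        for i in range(people):
            if wealth[i] == 0:
                continue
            a, b = rng.sample(others[i], 2)
            poorer = a if wealth[a] <= wealth[b] else b
            wealth[i] -= 1
            wealth[poorer] += 1
    return wealth

final = two_choice_game()
assert sum(final) == 100 * 100  # money is conserved
```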

pontus 2 days ago 2 replies      
As others have noted, every state in the state space will eventually be explored and so the appearance of convergence is a little misleading. Eventually the richest person will lose all the money and someone else will take that place.

An interesting question is why we happen to see the system in such an unequal state when we look at it after e.g. 10000 iterations. The reason for this, I believe, is entropy. There are just vastly more ways to distribute the money unequally than equally (thinking of these as macro states). For example, there is only one (macro) state where everyone has $100, but there are 100! states where the money is distributed in a perfectly unequal way (e.g. 0, 1, 2, ..., 99, 5050 or any other way where all the people can be distinguished). So, if you randomly peek at the system at any given time it'd be very unlikely that you'd see it in any way other than a highly unequal state. To be fair, this argument neglects the dynamics (perhaps the transition probabilities to those states are sufficiently small that the n! multiplicity is watered down, although I suspect this is not the case.)
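The counting argument can be made concrete with a small example (my own illustration, with a 4-person toy case): the number of ways to assign a multiset of wealth values to n distinguishable people is the multinomial n!/∏(multiplicity of each value)!.

```python
# Count the microstates (orderings) behind a macrostate of wealth values.
from math import factorial
from collections import Counter

def arrangements(values):
    n = factorial(len(values))
    for mult in Counter(values).values():
        n //= factorial(mult)
    return n

equal = [100] * 4            # everyone holds $100: one arrangement
unequal = [0, 50, 150, 200]  # same total, all values distinct

assert sum(equal) == sum(unequal)
assert arrangements(equal) == 1
assert arrangements(unequal) == factorial(4)  # 24 arrangements vs. just 1
```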

nabla9 3 days ago 2 replies      
For each person it's a random walk.

Dollars received is an additive process of 99 independent random variables, each taking values in {0, 1}.

Dollars given is -1 or 0 if there is none.

The process has memory because givers can run out of money.

detaro 3 days ago 2 replies      
Worth reading the comments under the post e.g. people testing it and seeing that values for individual people go from 0 to very high and back over time, which you can't see in the histogram.
scottmsul 2 days ago 0 replies      
This kind of simulation results in an exponential distribution, which is fairly equal, all things considered. In an exponential distribution, most people have roughly the same order of magnitude of wealth (10s to 100s of dollars). In real life, the bottom 99% follow an exponential distribution pretty closely, while the 1% follow a Pareto distribution, which is WAY more unequal. The transition is very sharp too, and has been studied in econophysics models.

Brief introduction to econophysics for the mathematically inclined: https://arxiv.org/abs/0709.3662
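A seeded re-run of the basic exchange game makes the shape easy to eyeball (parameters are arbitrary, and self-gifts are allowed for simplicity; this is a sketch, not the paper's model):

```python
# Basic random-exchange game: after many rounds the sorted wealth vector
# is right-skewed, with most values at the low end and a long upper tail.
import random

def exchange(people=100, dollars=100, rounds=3000, seed=42):
    rng = random.Random(seed)
    wealth = [dollars] * people
    for _ in range(rounds):
        for i in range(people):
            if wealth[i] > 0:
                wealth[i] -= 1
                wealth[rng.randrange(people)] += 1
    return sorted(wealth)

w = exchange()
assert sum(w) == 100 * 100  # total money is conserved throughout
```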

dmurray 3 days ago 2 replies      
If you have $1, you have to give it away, but you receive less than $1 back on average (because some people have no money to give you). So the poor get poorer. But wait, by the same argument the rich get poorer and this simulates a system of stochastic handouts to the very poorest - those with $0.

Alternatively, it simulates a lottery. From the point of view of any individual, every turn you spend $1 on a lottery ticket and get a prize of between $0 and $99. Viewed that way, it's not surprising that some people end up rich in the simulation.

Falkon1313 2 days ago 1 reply      
This is a bit too "spherical cow in a vacuum" for my tastes. As a quick experiment, I included population growth (using average population growth rate of the US over the last 50 years) and GDP growth (again using US average for last 50 years). Assumptions being that new population start with $0 and GDP growth applies equally to all as a percentage of their then-current wealth. It substantially enhanced the inequality.

In the simple run, median/mean was 67%, but with compounded growth it was only 7.9%. Moreover, the standard deviation went from 1.31 times the median (87% of the mean) to 17.83 times the median (141% of the mean).


Wonder what it would be like taken further - forcing population that have $0 to take a loan from the wealthiest people and pay them interest.
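A loose reconstruction of the modification described (the growth rates below are placeholders, not the US figures mentioned): each "year" the exchange runs, everyone's wealth then grows by a fixed percentage, and new people join with $0.

```python
# Exchange game with proportional "GDP" growth and population growth;
# newcomers enter with $0, which widens the inequality over time.
import random

def simulate(years=50, pop_growth=0.01, gdp_growth=0.03, seed=7):
    rng = random.Random(seed)
    wealth = [100.0] * 100
    for _ in range(years):
        for i in range(len(wealth)):              # random $1 exchanges
            if wealth[i] >= 1:
                wealth[i] -= 1
                wealth[rng.randrange(len(wealth))] += 1
        wealth = [w * (1 + gdp_growth) for w in wealth]  # proportional growth
        newcomers = int(len(wealth) * pop_growth)
        wealth.extend([0.0] * newcomers)          # new entrants start at $0
    return wealth
```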

hedora 3 days ago 2 replies      
This reminds me of "power of two" balls and bins, which is useful for load balancing. Throwing work onto a random machine doesn't work well. Picking two random machines, and throwing the work on to the less loaded machine is asymptotically better.

I think you'd see something very similar with this type of simulation.
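The balls-and-bins effect is easy to demonstrate directly. A small self-contained sketch (the item/bin counts and seed are arbitrary choices of mine):

```python
import random

def assign(items, bins, choices, rng):
    """Give each item `choices` random candidate bins and place it
    in the least-loaded candidate."""
    load = [0] * bins
    for _ in range(items):
        probes = [rng.randrange(bins) for _ in range(choices)]
        best = min(probes, key=lambda b: load[b])
        load[best] += 1
    return load

rng = random.Random(42)
one = assign(50_000, 100, 1, rng)  # plain random placement
two = assign(50_000, 100, 2, rng)  # "power of two choices"
print("max load, one random choice :", max(one))
print("max load, best of two probes:", max(two))
```

With a single random choice the maximum load overshoots the average of 500 noticeably; with two probes it hugs the average, which is the asymptotic improvement hedora mentions.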

dmayle 2 days ago 0 replies      
The only reason why anyone might find this surprising or counterintuitive is that they're using a terrible visualization. Dollars should be on the x-axis, and number of people should be on the y-axis. If you had that, you'd see this slowly converge to a standard distribution (bell curve).
colanderman 3 days ago 2 replies      
Of course it's not uniform. The number of dollars handed to any given person each tick is a random variable drawn from some distribution (too tired to think of which right now). An individual's final worth will probably be selected from some sort of Gaussian, due to the trials being repeated and summed.

Question is, why did they have to hide the solution in a video? Are we allergic to static images and written text now?

js8 3 days ago 0 replies      
A similar simulation from Peter Norvig (was also on HN): http://nbviewer.jupyter.org/url/norvig.com/ipython/Economics...
dbatten 3 days ago 2 replies      
I fundamentally disagree that the resulting distribution isn't more or less equal. It's basically randomly distributed around 45, with a pretty small standard deviation to boot...
kirillseva 2 days ago 1 reply      
Modified the problem from this post to also include debt. If someone doesn't have the funds to pay during the turn, they have to borrow money from the person they should be giving it to, and pay back with interest.

Check out the source and the produced animation here: https://gist.github.com/kirillseva/961fa1f5b5d64254e0117caf1...

daddyo 2 days ago 0 replies      
High school diploma solution:

There are increasingly more ways to distribute wealth unevenly than evenly.

Reduce the problem to the simplest case of 3 persons: `a`, `b`, and `c`. Person `a` must decide whether to give to `b` or `c`. Then use combinatorics:

  from itertools import product

  decisions_a = ['ab', 'ac']
  decisions_b = ['ba', 'bc']
  decisions_c = ['ca', 'cb']

  for combination in product(decisions_a, decisions_b, decisions_c):
      print(combination)

  >>> ('ab', 'ba', 'ca') # uneven
  >>> ('ab', 'ba', 'cb') # uneven
  >>> ('ab', 'bc', 'ca') # even
  >>> ('ab', 'bc', 'cb') # uneven
  >>> ('ac', 'ba', 'ca') # uneven
  >>> ('ac', 'ba', 'cb') # even
  >>> ('ac', 'bc', 'ca') # uneven
  >>> ('ac', 'bc', 'cb') # uneven

petermcneeley 2 days ago 1 reply      
This problem led to the discovery of quantum physics. Simply replace the $1 with energy quanta.


nsnick 2 days ago 0 replies      
We see this result because as a percentage of their total wealth, the poor are giving away a larger portion. Try this experiment again, but instead of having everyone give $1 to a random person, have everyone give 1% to a random person.
dskloet 2 days ago 1 reply      
I would have expected them to follow Zipf's law.


Vsauce made a video about it: https://www.youtube.com/watch?v=fCn8zs912OE

ivoras 2 days ago 0 replies      
So I've repeated the experimental model. Yes, if at each step everyone gives $1 to a random person, the same thing happens as in the article: https://www.youtube.com/watch?v=Hu0vcn_-vX4 .

If, instead, at each step, everyone gives 1% of their currently held amount, this happens: https://www.youtube.com/watch?v=N8Ce3eQTA9c .

The results are radically different. Draw your own conclusions.

kazinator 2 days ago 0 replies      
Intuitively, I would expect that we can collapse the result of doing this type of dollar scrambling over many iterations of the simulation into a single operation which takes a sequence of dollars and partitions it into N randomly-sized partitions for N people.

For instance, we can arrange the dollars into a sequence and then insert N-1 randomly-placed divisions to chop up the sequence. Then each of N people in the corresponding people sequence gets their corresponding (possibly empty) piece of the dollar sequence.

If we distribute randomly placed chops along a section of the real number line, we end up with segment lengths that follow an exponential distribution: https://en.wikipedia.org/wiki/Exponential_distribution

It is not intuitive at all to expect the pieces to be more or less identically long (which corresponds to the wrong intuition that everyone will have more or less their equal share of the dollars). That would mean that the chops are evenly spaced and not random.

Randomly placing chops is a https://en.wikipedia.org/wiki/Poisson_process

However, part of the intuition here (possibly wrong) is that the Poisson picture is applicable, in some approximate way, to a crudely discrete process like this.
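The chopping intuition can be checked numerically: drop the cuts uniformly at random on [0, 1] and the gap lengths show the stdev-approximately-equal-to-mean signature of an exponential distribution. A short sketch (sizes and seed are mine):

```python
import random
import statistics

rng = random.Random(1)
pieces = 100_000
cuts = sorted(rng.random() for _ in range(pieces - 1))
points = [0.0] + cuts + [1.0]
gaps = [b - a for a, b in zip(points, points[1:])]

mean = statistics.mean(gaps)
cv = statistics.stdev(gaps) / mean   # an exponential has stdev == mean, i.e. cv == 1
print("mean gap:", mean)
print("coefficient of variation:", round(cv, 3))
```

A coefficient of variation near 1 is the exponential fingerprint; evenly spaced chops would give a cv near 0.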

anigbrowl 2 days ago 1 reply      
I don't have R installed and don't want to spend the rest of the day setting up packages and trying to get animations underway, but I'd like to explore two other scenarios:

a. Same rules (100 people, 100 dollars), but everyone has to give away 1% of their wealth instead of $1 each round.

b. Same rules as the original, but anyone who has $0 at the end of a round 'dies' and is no longer able to participate.
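Neither variant needs R. Here is a Python sketch of both (my implementation, so the exact rule for 'dies' is an assumption: a broke player stops both giving and receiving):

```python
import random

def simulate(people=100, start=100.0, ticks=2000, pct=None, death=False, seed=0):
    """pct=0.01 -> everyone gives away 1% of wealth per tick;
    death=True -> players at $0 stop giving and receiving."""
    rng = random.Random(seed)
    wealth = [start] * people
    for _ in range(ticks):
        alive = [i for i in range(people) if not (death and wealth[i] <= 0)]
        if len(alive) < 2:
            break
        for i in alive:
            amount = wealth[i] * pct if pct else (1.0 if wealth[i] >= 1 else 0.0)
            if amount <= 0:
                continue
            j = i
            while j == i:                      # pick a random *other* live player
                j = alive[rng.randrange(len(alive))]
            wealth[i] -= amount
            wealth[j] += amount
    return wealth

a = simulate(pct=0.01)    # scenario a: give away 1% of current wealth
b = simulate(death=True)  # scenario b: players at $0 drop out
print("scenario a: min", round(min(a), 2), "max", round(max(a), 2))
print("scenario b: min", round(min(b), 2), "max", round(max(b), 2))
```

In scenario a nobody can ever hit exactly zero (the process is multiplicative), which changes the character of the bottom end; in scenario b the money stays conserved among the survivors.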

We have simulation games like Simcity, Civilization, the Sims and so on. I wonder why there aren't better economic simulation games. I know of various academic tools but I mean games that would be accessible and enjoyable for consumers while also allowing them to easily explore simulation spaces. You can mess around with tax policy in simCity type games, for example, but it's really primitive. Academic economic simulation tools tend not to be very engaging (since they're not built to entertain) and also have ugly visuals and user interfaces.

curiousgeorgio 2 days ago 0 replies      
This model essentially represents an economy wherein every person spends exactly what they make, and in the real world, there are plenty of people who behave that way (at every position on the wealth spectrum).

However, it also ignores the fact that for the most part, people are not forced to live that way. Sure, there's a certain minimum cost associated with simply staying alive, but there's also tremendous potential for people to save and/or invest (a portion of) their money rather than spend it, which in turn may increase their likelihood of being on the receiving end of other people's "random" expenses. It also ignores the fact that for every dollar spent in this economy, we can assume that a dollar of value is received in return. If a person spends a dollar truly randomly (or frivolously), then that expense has an opportunity cost, and in many cases (above a baseline for basic living expenses), that dollar would be better spent/invested in activities that will increase the person's future money-making power.

I suppose if all those points were included in the model, we'd probably see even more inequality in the wealth distribution at the top end, but on the middle-to-low end, there's no reason it couldn't be much flatter; when people fall on hard times (as all will eventually), they can make the difficult-but-necessary decision to spend less than they make, thereby slowing or reversing any downward trend. Philanthropic activity would also help the unlucky few on the extreme lower edge of the distribution to bounce back quickly.

Of course, the real problem in this scenario is that we should never underestimate the number of people who just don't care - or those who spend their money more "randomly" than not, with little concern for the future. Those are the same people who later on will be lining the streets protesting about how "unfair" the system is when they're finally broke.

RodgerTheGreat 2 days ago 1 reply      
I whipped up an interactive simulation of this effect in K:


Source, for the curious:

  p: 72                                      / players
  s: p#100                                   / score
  c: {(+/(!#x)=/:(+/t)?#x)-t:0<x}            / change (per round)
  b: {[c;p;y;x](p+0,2*y;;x#c)}               / draw bar (color;pos;y;x)
  g: {x'[!#y;_60*y%|/y]}                     / draw graph (bar;data)
  tick: {{s+::c s}'!20}                      / iterate several steps/frame
  draw: {g[b[3;10 10];s@<s],g[b[2;90 10];s]} / graph sorted, raw value

abalone 2 days ago 2 replies      
I'm not saying I'm smarter than your average PhD but it seemed intuitive that it would concentrate wealth, basically because you can only give out 1 dollar per round even if you have >1 dollars.

Having said that I would not draw too much inference about economics and human society from this. There is no such "1 dollar per round" rule in life.

daveFNbuck 2 days ago 0 replies      
Because expectation is linear and the total amount of money in play never changes, we know that the sum of the expectations for each player over a given turn is equal to 0. If no players have run out of money, this means that the expected change for each player over a single turn is zero.

If instead we have m players with money and b broke players, each player still has an equal expected number of dollars received, and the m players with money each expect to give 1 dollar. Summing this, we have a total expected change of (m + b)E(received) - m, which must equal zero, meaning E(received) = m/(m + b), so players with money expect a change of -b/(m+b) and players without money expect a change of m/(m+b).

This tells us that the expectation for a turn is basically always zero and never gets above zero in a way that allows accumulation of wealth for a single player. So over long periods of time we should expect this to look like a drunk walk with a weird distribution.
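These figures are straightforward to sanity-check with a one-turn Monte Carlo. A sketch (the numbers are mine; the empirical values land slightly off -b/(m+b) and m/(m+b) because a giver never pays itself, a small correction the argument above glosses over):

```python
import random

rng = random.Random(7)
m, b, trials = 60, 40, 20_000        # 60 funded players, 40 broke ones
change = [0] * (m + b)
for _ in range(trials):
    for giver in range(m):           # only funded players pay out $1
        r = rng.randrange(m + b - 1)
        if r >= giver:               # the dollar goes to a random *other* player
            r += 1
        change[giver] -= 1
        change[r] += 1

funded = sum(change[:m]) / (m * trials)
broke = sum(change[m:]) / (b * trials)
print("avg change per turn, funded:", round(funded, 3))  # close to -b/(m+b) = -0.4
print("avg change per turn, broke :", round(broke, 3))   # close to +m/(m+b) = +0.6
```

The totals cancel exactly every turn, which is the conservation argument in the parent comment.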

xrd 2 days ago 0 replies      
I don't understand the math here. But it seems people are saying there is equal probability that any state can arise. Is that correct? If so, I'm unclear why that's true, given that the state where one person has all the dollars seems like it can only arise from a much smaller set of states (where only one person, with 99 options for who, had one dollar and one person had all the rest). In comparison, wouldn't there be lots of times when the next state (say, where all the people had similar amounts of money) would be easier to get to from multiple states? I'm saying it simplistically seems like the extreme states are harder to get to than the "middle" states. Is that an erroneous assumption? Is there some technique I can use to reason about this problem in a different way?
100ideas 2 days ago 0 replies      
More interesting: "wealthier" person gives 1/2 their wealth to "poorer" person every turn.

Probably looks like a Boltzmann distribution describing the velocity of an ideal gas.

Kenji 3 days ago 0 replies      
> You'll never guess what happens next.

One of the most annoying and overused phrases in the universe.

lohankin 2 days ago 1 reply      
Let's consider a different problem. There are 100 non-negative numbers summing up to 10000. Let's choose a random combination of numbers satisfying this condition. Intuitively, it's quite clear that the typical case will be very different from equilibrium (where every number = 100). I fail to see how the transfer of 1 dollar at a time is qualitatively different. So the result is quite intuitive - contrary to what the article suggests.
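Choosing such a combination uniformly at random is a classic stars-and-bars construction. A quick sketch (the helper name is mine) shows the typical case is indeed far from the all-equal point:

```python
import random

def random_composition(total, parts, rng):
    """Uniform random way to write `total` as an ordered sum of
    `parts` non-negative integers (stars and bars)."""
    # choose parts-1 distinct divider slots among total+parts-1 positions
    dividers = sorted(rng.sample(range(1, total + parts), parts - 1))
    bounds = [0] + dividers + [total + parts]
    return [hi - lo - 1 for lo, hi in zip(bounds, bounds[1:])]

rng = random.Random(3)
parts = random_composition(10_000, 100, rng)
print("min:", min(parts), "mean:", sum(parts) / len(parts), "max:", max(parts))
```

The mean is pinned at 100, but a typical draw has parts near zero and parts several times the mean, matching the intuition above.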
Zeebrommer 2 days ago 0 replies      
Interesting phenomenon! To me it makes intuitive sense when viewed as a uniform distribution of money between 0 and ~$100. Bottom graph approaching linear.

In that light these kinds of posts can be a bit disappointing: "here, have a simulation to prove the phenomenon". Yes but how does this work in general? As others have noted, is it stable over time? Which factors play a role in the speed of convergence? What if one introduces tax per transaction, giving the returns to the poorest? What would be a fair percentage?

jwfxpr 3 days ago 3 replies      
Very interesting. I'd like to see this simulation run with different (pseudo-)RNGs. I wonder if and how the quality of the randomness impacts the outcome.

Can anyone with insight offer comment?

adrianmonk 2 days ago 0 replies      
Lots of complicated explanations why this doesn't strongly tend toward being very even. Here's my attempt to state it simply.

Every round, each person (except those who are broke) definitely gives away $1. They may or may not get a dollar. They might even get more than a dollar.

So it's not surprising that some get a bit ahead and others fall a bit behind.

nullc 2 days ago 0 replies      
I believe this happens because the states where everyone has close to the same amount of money have lower entropy: there is only 1 state where everyone has equal money, but there are far more states where the funds are exponentially distributed.

If it makes you feel better, in this model eventually everyone will spend time as both rich and poor... probably not so much like the real world. :)

phatoni 2 days ago 0 replies      
Given that an uneven distribution of wealth is more likely than an even distribution at any given tick: if you now add the rule that someone with less than $100 is unhappy and someone with at least $100 is happy, you end up with lower overall happiness on average compared to the initial state, where everyone has $100 and would be happy with that.
subroutine 2 days ago 0 replies      
I've coded this sim (ml/octave) if anyone wants to play around with other numbers (number of people, starting amount of dollars):


The writeup there includes a clip simulating 500 people (each starting with 100 dollars) and some distribution fits.

pavlov 3 days ago 2 replies      
The outcome looks suspiciously like the real-world wealth distributions that we presume to be the result of efficient free markets.

Is capitalism mostly noise?

usaphp 2 days ago 2 replies      
> gives a dollar to one randomly chosen other person

I wonder: is the person randomly chosen by the machine, or by the person who gives away the dollar? If it's the second case, I wonder how much personal attraction matters here. Unconsciously you would want to give something to a person who you find more attractive...

gus_massa 3 days ago 2 replies      
> If on quick reflection you thought more or less equally, you are not alone. I asked 5 super-smart PhDs this question and they all had the same initial intuition.

Mmm ... What kind of PhD did they ask? Mathematicians have a strong bias for exact solutions, and not too much intuition for this kind of problem. Have they tried asking physicists?

It's not surprising that the distribution is not even. I expect that the spread increases over time. I guess something like

average + k1 * sqrt(t) * erf((n - middle) * k2)

where t is the number of simulation steps, and k1 and k2 are some magic constants that I'm too lazy to estimate. ("Proof": Everything is a Gaussian.)

I was surprised that at some point the distribution was almost linear. Perhaps k2 is not a constant, and the correct guess for the estimation is

average + k1 * sqrt(t) * erf((n - middle) * sqrt(t) * k2')

This estimation fails when t is big. (And it's probably incorrect anyway.)

After some time, the person with the most money apparently starts to pull further ahead. I'd like to see a longer simulation to see whether the leader eventually accumulates most of the money.

kharms 2 days ago 1 reply      
I love this! If I had to ELI5 it, I would say: if you get lucky, you get rich quickly. If you get unlucky, you get poor slowly. So some people will get lucky and some will get unlucky, but the first to get unlucky will drop below 100 and the first to get lucky will rise above 100, explaining the initial behavior.
unoti 2 days ago 1 reply      
The lack of money "accumulates" because you always have a chance to receive money, but when you run out of money you cannot give any more. So there is a lower cap but no upper cap.

In the real world this is even worse. Once you're out of money your chances of getting money go down drastically.

rayiner 2 days ago 2 replies      
> How does the distribution look? Play the movie above to see.

How about I don't play the movie, and you use these magic things called "words" that let you describe experimental results. Maybe we can call these descriptions "abstracts" and put them at the beginning of articles.

qubex 2 days ago 0 replies      
I guessed an exponential distribution of wealth right after the problem was posed. Apparently years of crying hot tears over agent-based economic simulations served my intuition well.
chrisallick 2 days ago 0 replies      
I don't see why it's counterintuitive. If we know that "random" distribution will place values on a curve, then why would it not follow that the money would distribute this way and lead to inequality.

But I like the article! Excited to share to students.

raldi 2 days ago 1 reply      
I don't understand the video -- with 45 players, wouldn't one have $89 after round 1? But that's not what we see -- it takes hundreds of rounds before the first person breaks $80.
lngnmn 2 days ago 0 replies      
"Dollar studies" is perhaps the second bullshitest part of psychology after "infant-looking-times studies".

Artificial cartoon-like setups and biased interpretations cannot be considered scientific or even accurate.

Babies are merely confused and overwhelmed, and dollar studies are over-simplified and sterile; they do not take complex emotional and hormonal patterns (which are much more powerful driving forces in real life than rationality) into account.

Snap judgements, jumping to conclusions, escaping from emotional pressure, and reducing cognitive load are what drive people's behavior. Any sales or ad professional would confirm this.

hasenj 2 days ago 0 replies      
I wonder what would happen if every 20 or so rounds, each player is given a new set of dollars (say 5 dollars), to sort of simulate UBI.
amelius 2 days ago 1 reply      
Now invent a tax rule that brings back the equality. And simulate it.

And then simulate how this tax rule works in a capitalist market model.

I'm curious.
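One concrete starting point: a flat wealth tax collected every tick and redistributed equally. A hedged Python sketch (the 2% rate, tick count, and seed are arbitrary choices of mine, and this is nowhere near a full market model):

```python
import random

def simulate(people=100, start=100.0, ticks=3000, tax=0.0, seed=0):
    """$1-per-tick random exchange, plus an optional flat wealth tax
    collected each tick and redistributed equally to everyone."""
    rng = random.Random(seed)
    wealth = [start] * people
    for _ in range(ticks):
        for i in range(people):
            if wealth[i] >= 1:
                j = i
                while j == i:                  # pay a random *other* person
                    j = rng.randrange(people)
                wealth[i] -= 1
                wealth[j] += 1
        if tax:
            pot = sum(w * tax for w in wealth)
            wealth = [w * (1 - tax) + pot / people for w in wealth]
    return wealth

base = simulate()
taxed = simulate(tax=0.02)
print("no tax: min", round(min(base), 1), "max", round(max(base), 1))
print("2% tax: min", round(min(taxed), 1), "max", round(max(taxed), 1))
```

Even a small flat tax pulls everyone toward the mean each tick, and that contraction overwhelms the $1-per-tick diffusion, so the taxed run stays far more compressed.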

ineedasername 2 days ago 0 replies      
Hmm, I ran a few trillion cycles of this and got nowhere: the average never changed at all. :)
IshKebab 3 days ago 1 reply      
Without the 0 cut-off it's just the binomial distribution.
Cosmopolitan 2 days ago 1 reply      
It is not randomly chosen by the person who gives away the dollar. Personal attraction matters here. Unconsciously you would want to give something to a person who you find more attractive.
jancsika 2 days ago 0 replies      
This restatement of the problem might help:

"Imagine a room full of 100 people with 100 dollars each. With every tick of the clock, the set of people with money each give a dollar to one randomly chosen other person. The set of people with zero dollars simply lose the opportunity to give away a dollar and thus decrease the overall chances that any participant will receive a dollar. After some time progresses, how will the money be distributed?"

To me this now seems intuitively obvious. Since the game creator has declared by fiat that the number "0" gets a special branch in the program, it stands to reason that the resulting distribution will change once any participant hits zero and triggers that code-path.

For example-- suppose you start the game with one person who has $10,000 and the remaining 99 people have zero. (I don't think you can arrive at that game state organically, but it doesn't matter for the point I'm making.) The first tick through the game, all that happens is that the ten-thousandaire gives one dollar to one marginally lucky participant while the other 99 do-- nothing at all. Essentially the 99 all begin by losing a turn.

E.g., imagine an implementation where each tick loops through the participants and the ten-thousandaire happens to be the first participant. If you are the last in line, the only opportunity you have to collect a dollar is when the ten-thousandaire gives away a dollar on the very first iteration of the loop. For the rest of the 99 iterations of the loop you have exactly 0% chance of receiving a dollar. That means on the first tick you and all the other zero-aires would have a 1 in 99 chance of receiving a dollar. If on the other hand you started out with everyone having a dollar you would have [some immensely larger chance of receiving a dollar that someone who isn't lazy like me can probably calculate here].

But it's actually worse on the first tick of this example for the ten-thousandaire. That player has a 100% chance of losing a dollar and a 0% chance of gaining one.

I believe this "problem" essentially describes the conditions of an economic depression. It's also a bit of a visual/conceptual illusion because we're likely to look at that chart and see accumulation of dollars as "winning" and zeros as "losing". But if you instead measure liquidity it should be clear that those who accumulated dollars suffer decreased liquidity even worse than the zero-aires.

Also if you remove the special case for zero then the intuition about "more or less equal" distribution should be true.

Edit: change "lose a turn" to "lose the opportunity to give away a dollar" to guard against pedantry. Also, changed "tick/turn" to "tick".

whatnotests 2 days ago 0 replies      
Wait wait wait wait.

You mean we don't naturally end up with 1% of the people having 99% of the dollars??

curiousgal 3 days ago 2 replies      
Since people are incapable of making truly random decisions, I reckon the person with the most pleasing physical appearance will receive most of the money.
dracodoc 3 days ago 1 reply      
The policy is not symmetric:

1. everybody needs to give 1, but may receive 0 (99/100 probability) or 1 (1/100 probability)

2. when somebody runs out of money, the total given out is no longer 100 (before this, 100 dollars changed hands every tick), and the probability of receiving money also changes.

So this is history dependent. The simulation needs to run many rounds and then compare the results across all rounds.

SFO near miss might have triggered aviation disaster mercurynews.com
470 points by milesf  1 day ago   407 comments top 37
ddeck 1 day ago 4 replies      
Attempts to take off from or land on taxiways are alarmingly common, including those by Harrison Ford:

  Harrison Ford won't face disciplinary action for landing on a taxiway at John Wayne Airport [1]
  Serious incident: Finnair A340 attempts takeoff from Hong Kong taxiway [2]
  HK Airlines 737 tries to take off from taxiway [3]
  Passenger plane lands on the TAXIWAY instead of runway in fourth incident of its kind at Seattle airport [4]
[1] http://www.latimes.com/local/lanow/la-me-ln-ford-taxiway-agr...

[2] https://news.aviation-safety.net/2010/12/03/serious-incident...

[3] https://www.flightglobal.com/news/articles/hk-airlines-tries...

[4] http://www.dailymail.co.uk/travel/travel_news/article-337864...

charlietran 1 day ago 3 replies      
There's an mp3 of the radio chatter here:


> Audio from the air traffic controller communication archived by a user on LiveATC.net and reviewed by this news organization showed how the confused Air Canada pilot asks if he's clear to land on 28R because he sees lights on the runway.

> "There's no one on 28R but you," the air controller responds.

> An unidentified voice, presumably another pilot, then chimes in: "Where's this guy going? He's on the taxiway."

> The air controller quickly tells the Air Canada pilot to go around, telling the pilot "it looks like you were lined up for Charlie (Taxiway C) there."

> A United Airlines pilot radios in: "United One, Air Canada flew directly over us."

> "Yeah, I saw that guys," the control tower responds.

Animats 1 day ago 1 reply      
Here's a night approach on 28R at SFO.[1] Same approach during the day.[2] The taxiway is on the right. It's a straight-in approach over the bay. The runway, like all runways at major airports worldwide, has the standardized lighting that makes it very distinctive at night, including the long line of lights out into the bay. This was in clear conditions. WTF? Looking forward to reading the investigation results.

The planes on the taxiway are facing incoming aircraft as they wait for the turn onto the runway and takeoff. So they saw the Air Canada plane coming right at them. That must have been scary.

[1] https://www.youtube.com/watch?v=rNMtMYUGjnQ

[2] https://www.youtube.com/watch?v=mv7_lzFKCSM

watson 1 day ago 5 replies      
English is not my native language, but shouldn't the headline have read "SFO near miss would have triggered aviation disaster"? "Might" seems to indicate that something else happened afterwards as a possible result of the near miss
tmsh 1 day ago 2 replies      
The moral of this story for me is: be that "another pilot." To be clear, "another pilot" of another aircraft. Not as clear as it could be, just like the title of this article is ambiguous.

The moral of this story for me is: call out immediately if you see something off. He's the real hero. Even if the ATC controller immediately saw the plane being misaligned at the same time - that feedback confirming another set of eyes on something that is off couldn't have hurt. All 1000 people on the ground needed that feedback. Always speak up in situations like this.

WalterBright 1 day ago 4 replies      
In the early 1960s, a pilot mistook a WW2 airfield for Heathrow, and landed his 707 on it, barely stopping before the end of the runway.

The runway being too short for a loaded 707 to take off from, mechanics stripped out everything they could to reduce the weight: seats, interiors, etc. They put in barely enough gas to hop over to Heathrow, and managed to get it there safely.

The pilot who landed there was cashiered.

mate_soos 1 day ago 3 replies      
Before crying pilot error, we must all read Sidney Dekker's A Field Guide to Understanding "Human Error" (and fully appreciate why he uses those quotes). Don't immediately assign blame to the sharp end. Take a look at the blunt one first. Most likely not a pilot error. Assigning blame is a very human need, but assigning it to the most visible and accessible part is almost always wrong.
cperciva 1 day ago 1 reply      
Can we have "might have triggered" changed to "could have triggered" in the title?
phkahler 23 hours ago 0 replies      
A different kind of error... I was returning from Las Vegas in the middle of the day and the tower cleared us for departure on 9 and another plane on 27. We had taxied out, and then the pilot pulled over, turned around, and waited for the other plane to depart. He told us what had happened; there was a bit of frustration in his voice. Imagine pulling up and seeing another plane sitting at the opposite end of the runway, ready to go. (It may not have been 9 and 27; I don't know which pair it was.) Earlier, waiting in the terminal, I had seen a different plane go around, but didn't know why. Apparently there was a noob in the tower that day. This is why you look out the window and communicate.
lisper 22 hours ago 0 replies      
Possible explanation for why this happened: it was night, and the parallel runway 28L was closed and therefore unlit. The pilot may have mistaken 28R for 28L and hence the taxiway for 28R. This comes nowhere near excusing this mistake (there is no excuse for a screwup of this magnitude) but it makes it a little more understandable.
mikeash 1 day ago 0 replies      
I wonder just how likely this was to end in disaster. It feels overstated. The pilot in question seemed to think something was wrong, he just hadn't figured it out yet. I imagine he would have seen the aircraft on the taxiway in time to go around on his own if he hadn't been warned off.

I'm having trouble figuring out the timeline. The recording in the article makes it sound like this all happened in a matter of seconds, but it's edited down to the highlights so that's misleading. LiveATC has an archived recording of the event (http://archive-server.liveatc.net/ksfo/KSFO-Twr2-Jul-08-2017..., relevant part starts at about 14:45) but even those appear to have silent parts edited out. (That recording covers a 30 minute period but is only about 18 minutes long.) In the archived recording, about 40 seconds elapse between the plane being told to go around and the "flew directly over us" call, but I don't know how much silence was edited out in between.

Certainly this shouldn't have happened, but I wonder just how bad it actually was.

URSpider94 8 hours ago 0 replies      
Incidentally, I heard a story on KQED (SF Bay Area public radio) today that mentioned a potential clue. There are two parallel runways on this heading -- however -- the left runway is closed for repairs and therefore is currently unlit. If the pilot didn't remember this (it would have been included in his briefings and approach charts for the flight, but he may not have internalized it), he would likely have been looking for two parallel runways and would have lined up on the right one, which in this case would have been the taxiway...
blhack 1 day ago 3 replies      
People "could" run their cars off of bridges every day, but they don't because they can see, and because roads have signs warning them of curves.

This sounds like a story of how well the aviation system works more than anything. The pilot is in constant communication with the tower. The system worked as intended here and he went around.

It seems like a non story.

vermontdevil 1 day ago 0 replies      
Found a cockpit video of a landing approach to 28R to give you an idea (daylight, good weather etc)


mannykannot 1 day ago 0 replies      
AFAIK (not that I follow the issue closely) the problem of radio interference that ended the last-chance attempt to prevent the Tenerife crash has not been addressed [1]. If so, then it may be very fortunate that only one person called out that the landing airplane had lined up its approach on the taxiway, and not, for example, the crews of every airplane on the taxiway, simultaneously.

[1] http://www.salon.com/2002/03/28/heterodyne/

TL;DR: At Tenerife, both the Pan-Am crew and the tower realized that the KLM aircraft had started its take-off roll, and both tried to warn its crew at the same time, but the resulting radio interference made the messages unintelligible. The author states that a technical solution is feasible and relatively easily implementable.

ryenus 1 day ago 1 reply      
This reminds me of the runway incursion incident at Shanghai, in Oct 2016:


radialbrain 1 day ago 0 replies      
The avherald article has a slightly more factual account of the event (with links to the ATC recording): https://avherald.com/h?article=4ab79f58
rdtsc 1 day ago 4 replies      
Without knowing the cause, if I had to guess, this looks like pilot error. At least statistically that's the leading cause of crashes.

I am surprised pilots still manually land planes. Is the auto-landing feature not implemented well enough? But then it's relied upon in low visibility. So it has to work; then why isn't it used more often?

exabrial 1 day ago 1 reply      
Wouldn't the word be "near hit" instead of "near miss"? If you were close to missing, you'd hit something...
BusinessInsider 19 hours ago 0 replies      
Theoretically, if the plane had landed, how many planes would it have taken out? It obviously wouldn't have been pretty, but I doubt the Air Canada jet would have reached the fourth plane, or maybe even the third.
TheSpecialist 19 hours ago 0 replies      
I always wondered what about SFO makes it so much more dangerous than the other airports in the area? It seems like they have a potential disaster every couple years.
milesf 1 day ago 3 replies      
How is this even possible? Is it gross negligence on the part of the pilot, a systems problem, or something else? (IANAP)
jjallen 23 hours ago 0 replies      
Does anyone know just how close of a call this was? Was the landing aircraft 100, 200 meters above ground?

How many more seconds until they would have been too slow to pull up?

TrickyRick 23 hours ago 2 replies      
> Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic. [1]

Is it just me or is this blatantly off-topic? Or is anything major happening in the bay area automatically on-topic for Hacker News?

[1] https://news.ycombinator.com/newsguidelines.html

heeen2 1 day ago 0 replies      
Aren't there lights that have to line up if you're on the right course for the runway, like with nautical harbors? Or warning lights that are visible when you're not aligned correctly?
martijn_himself 1 day ago 1 reply      
I get that this was a manual (non-ILS) landing, but why is there no audio warning to indicate the aircraft is not lined up with the runway?
FiloSottile 1 day ago 11 replies      
I am just a passenger, but this looks very overblown. A pilot aligned with the taxiway; that's bad. But no pilot would ever land on a runway (or taxiway) with 3 planes on it. Just search the Aviation Herald for "runway incursion". And indeed, he spotted them, communicated, went around.

Aviation safety margins are so wide that this does not qualify as a near-miss.

kwhitefoot 1 day ago 0 replies      
Why is instrument landing not routinely done? Is it because it is not good enough?
perseusprime11 13 hours ago 0 replies      
How will an autonomous system handle this issue? Will it figure out the light colors of runways vs. taxiways, or will it rely on precise geolocation capabilities?
4ad 1 day ago 0 replies      
It's not a near miss, it's a near hit.


cmurf 22 hours ago 0 replies      
Near as I can tell, HIRL could not have been on, they were not following another aircraft to land, and the runway and taxiway lighting must have been sufficiently low that the taxi lights (a low-intensity version of a landing light) on the queued-up airplanes made it look like the taxiway was the runway. Pilot fatigue and experience at this airport are also questions.


All runways have high intensity runway lighting (HIRL) and 28R has touchdown zone and centerline lighting (TDZ/CL). Runway lights are white, taxiway lights are blue. If you see these elements, there's no way to get confused. So my assumption is the pilots, neither of them, saw this distinction.

HIRL is typically off for visual landings even at night. That's questionable, because night conditions are reduced-visibility situations, and in many other countries night flying is considered operating under instrument rules, but not in the U.S.; you do not need instrument-rated aircraft or pilot certification. For a long time I've thought low-intensity HIRL should be enabled briefly for visual night landings, where an aircraft is not following behind another, at the time the "runway in sight" verbal verification happens between ATC and pilot.

dba7dba 1 day ago 0 replies      
I'd like to suggest that if you are still interested in learning more about what happened, you look for a video from "VASAviation" on YouTube. I'm sure his subscribers have already asked him for analysis and he's working on the video.

The channel focuses on aviation radio communications.

I find it informative because the channel provides detailed voice/video/photo analysis of incidents (actual and close calls) involving planes taxiing, landing, and taking off in and around airports.

leoharsha2 1 day ago 0 replies      
Reporting on disasters that didn't happen.
briandear 1 day ago 0 replies      
I wonder why on 35R they wouldn't have the taxiway to the left of the runway. Then the right is always the runway. Same for the left. Basically, have parallel taxiways on the opposite side of the R/L designation of the runway. So at SFO, the parallel taxiways would be inside the two runways.

However, approach lighting is pretty clear, though at dusk I agree with another comment that it can be rather hard to distinguish, depending on angles. I think that approach would be landing into the setting sun, so that could have some bearing.

EGreg 1 day ago 0 replies      
stygiansonic 1 day ago 0 replies      
Wow, I landed on the next day on the same flight (AC 759)
petre 1 day ago 1 reply      
Paint the runway and the taxiway in different colors and also use different colors for the light signals that illuminate them at night. Blue/white is rather confusing. Use clearly distinguishable colors such as red/blue or orange/blue or magenta/yellow.
Students Are Better Off Without a Laptop in the Classroom scientificamerican.com
373 points by thearn4  22 hours ago   217 comments top 56
zeta0134 19 hours ago 5 replies      
Oh, okay, I thought the study was going to be on the benefits of attempting to use the laptop itself for classroom purposes, not on social media distractions. This would be more accurately titled "Students Are Better Off Without Distractions in the Classroom." Though I suppose it wouldn't make a very catchy headline.

I found my laptop to be very beneficial in my classroom learning during college, but only when I made it so. My secret was to avoid even connecting to the internet. I opened up a word processor, focused my eyes on the professor's slides or visual aids, and typed everything I saw, adding notes and annotations based on the professor's lecture.

This had the opposite effect of what this article describes: by focusing my distracted efforts on formatting the article and making my notes more coherent, I kept myself focused and could much more easily engage with the class. Something about the menial task of taking the notes (which I found I rarely needed to review) prevented me from losing focus and wandering off to perform some unrelated activity.

I realize my experience is anecdotal, but then again, isn't everyone's? I think each student should evaluate their own style of learning, and decide how to best use the tools available to them. If the laptop is a distraction? Remove it! Goodness though, you're paying several hundred (/thousand) dollars per credit hour, best try to do everything you can to make that investment pay off.

makecheck 21 hours ago 9 replies      
If students aren't engaged, they aren't going to become star pupils once you take away their distractions. Perhaps kids attend more lectures than before knowing that they can always listen in while futzing with other things (and otherwise, they may skip some of the classes entirely).

The lecture format is what needs changing. You need a reason to go to class, and there was nothing worse than a professor showing slides from the pages of his own book (say) or droning through anything that could be Googled and read in less time. If there isn't some live demonstration, lecture-only material, regular quizzes, or other hook, you can't expect students to fully engage.

ourmandave 20 hours ago 5 replies      
This reminds me of the running gag in some college movie where the first day all the students show up.

In the next cut, some students come to class, put a recorder on their desk and leave, then pick it up later.

Eventually there's a scene of the professor lecturing to a bunch of empty desks with just recorders.

And the final scene there's the professor's tape player playing to the student's recorders.

njarboe 18 hours ago 1 reply      
This is a summary of an article titled "Logged In and Zoned Out: How Laptop Internet Use Relates to Classroom Learning" published in Psychological Science in 2017; The DOI is 10.1177/0956797616677314 if you want to check out the details.

Abstract: Laptop computers are widely prevalent in university classrooms. Although laptops are a valuable tool, they offer access to a distracting temptation: the Internet. In the study reported here, we assessed the relationship between classroom performance and actual Internet usage for academic and nonacademic purposes. Students who were enrolled in an introductory psychology course logged into a proxy server that monitored their online activity during class. Past research relied on self-report, but the current methodology objectively measured time, frequency, and browsing history of participants' Internet usage. In addition, we assessed whether intelligence, motivation, and interest in course material could account for the relationship between Internet use and performance. Our results showed that nonacademic Internet use was common among students who brought laptops to class and was inversely related to class performance. This relationship was upheld after we accounted for motivation, interest, and intelligence. Class-related Internet use was not associated with a benefit to classroom performance.
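The paper's classifier isn't reproduced here, but the proxy-server methodology it describes amounts to binning timestamped log rows into academic vs. nonacademic use and summing continuous stretches. A minimal sketch of that idea, with a hypothetical domain list and an assumed idle-gap threshold:

```python
from datetime import datetime, timedelta

# Hypothetical domain list -- the paper does not publish its actual classifier.
ACADEMIC = {"coursesite.example.edu"}

def time_by_category(requests, idle_gap=timedelta(minutes=2)):
    """Sum browsing time per category from (timestamp, domain) proxy-log rows.

    Consecutive same-category requests closer together than idle_gap are
    treated as one continuous stretch of use; anything else counts as idle.
    """
    totals = {"academic": timedelta(), "nonacademic": timedelta()}
    prev_ts, prev_cat = None, None
    for ts, domain in sorted(requests):
        cat = "academic" if domain in ACADEMIC else "nonacademic"
        if prev_ts is not None and cat == prev_cat and ts - prev_ts <= idle_gap:
            totals[cat] += ts - prev_ts
        prev_ts, prev_cat = ts, cat
    return totals
```

Real traffic is messier than this (HTTPS hides paths, and background tabs fire requests on their own), a caveat several commenters raise.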

imgabe 19 hours ago 4 replies      
I went to college just as laptops were starting to become ubiquitous, but I never saw the point of them in class. I still think they're pretty useless for math, engineering, and science classes where you need to draw symbols and diagrams that you can't easily type. Even for topics where you can write prose notes, I always found it more helpful to be able to arrange them spatially in a way that made sense rather than the limited order of a text editor or word processor.
stevemk14ebr 21 hours ago 2 replies      
I think this is a highly personal topic. As a student myself, I find a laptop in class very nice: I can type my notes faster and organize them better. Most of my professors' lectures are scatterbrained, and I frequently have to go back to a previous section and annotate or insert new material. With a computer I just go back and type; with pen and paper I have to scribble, or write in the margins. Of course computers can be distractions, but that is the student's responsibility. Let natural selection take its course and stop hindering my ability to learn how I do best (I am a CS major, so computers are >= paper to me). If you cannot do your work with a computer, then don't bring one yourself; don't ban them for everyone.
shahbaby 16 hours ago 1 reply      
"Thus, there seems to be little upside to laptop use in class, while there is clearly a downside."

Thanks to bs articles like this that try to over generalize their results, I was unsure if I "needed" a laptop when returning to school.

Got a Surface Book and here's what I've experienced over the last 2 semesters:

- Going paperless, I'm more organized than ever. I just need to make sure I bring my Surface with me wherever I go and I'm good.

- Record lectures, tutorials, office hours, etc. Although I still take notes to keep myself focused, I can go back and review things with 100% accuracy thanks to this.

- Being at 2 places at once. ie: Make last minute changes before submitting an assignment for class A or attend review lecture to prepare for next week's quiz in class B? I can leave the surface in class B to record the lecture while I finish up the assignment for class A.

If you can't control yourself from browsing the internet during a lecture then the problem is not with your laptop...

baron816 20 hours ago 0 replies      
Why are lectures still being conducted in the classroom? Students shouldn't just be sitting there copying what the teacher writes on the board anyway. They should be having discussions, working together or independently on practice problems, teaching each other the material, or just doing anything that's actually engaging. Lecturing should be done at home via YouTube.
zengid 20 hours ago 0 replies      
Please excuse me for relating an experience, but it's relevant. To get into my IT grad program I had to take a few undergrad courses (my degree is in music, and I didn't have all of the pre-reqs). One course was Intro to Computer Science, which unfortunately had to be taught in the computer lab used for the programming courses. It was sad to see how undisciplined the students were. Barely anyone paid attention to the lectures as they googled the most random shit (one kid spent a whole lecture searching through images of vegetables). The final exam was open-book. I feel a little guilty, but I enjoyed seeing most of the students nervously flip through the chapters the whole time, while it took me 25 minutes to finish (the questions were nearly identical to those from previous exams).
rdtsc 20 hours ago 2 replies      
I had a laptop and left it home most of the time. And just stuck with taking notes with a pen and sitting upfront.

I took lots of notes. Some people claim it's pointless and distracts from learning, but for me the act of taking notes is what helped solidify the concepts better. Heck, due to my horrible handwriting I couldn't even read some of the notes later. But it was still worth it. Typing them out just wasn't the same.

alkonaut 20 hours ago 0 replies      
This is the same as laptops not being allowed in meetings. A company where it's common for meeting participants to "take notes" on a laptop is dysfunctional. Laptops need to be banned in meetings (and smartphones in meetings and lectures).

Also re: other comments: A video lecture is to a physical lecture what a conference call is to a proper meeting. A professor rambling for 3h is still miles better than watching the same thing on YouTube. The same holds for tv versus watching a film on a movie screen.

Zero distractions and complete immersion. Maybe VR will allow it some day.

brightball 20 hours ago 1 reply      
Shocker. I remember being part of Clemson's laptop pilot program in 1998. If you were ever presenting you basically had to ask everyone to close their laptops or their eyes would never even look up.
dalbasal 4 hours ago 0 replies      
I think there is a mentality shift that may come with digitizing learning which might help here.

The discussion on a topic like this can go in two ways. (1) Talk about how a laptop can help if students use it for some things and avoid others. It's up to the student; you can lead a horse to water... (2) Compare outcomes, statistically or quasi-statistically. I.e., if laptops are banned we predict an N% increase in Z, where Z is (hopefully) a good proxy for learning or enjoyment or something else we want. I.e., think about improving a college course the same way we think about optimizing a dating site.

On a MOOC, the second mentality will tend to dominate. Both have downsides, especially when applied blindly (which tends to happen). In any case, new thinking tends to help.

tsumnia 20 hours ago 1 reply      
I think it's a double-edged sword; it's not just paper > laptop or laptop > paper. As many people have already stated, it's about engagement. Since coming back for my PhD, I've subscribed to the pencil-and-paper approach as a simple show of respect to the instructor. Despite what we think, professors are human and flawed, and having been in their shoes, it can be disheartening to not be able to feed off your audience.

That being said, you can't control them; however, I like to look at different performance styles. What makes someone binge-watch Netflix episodes but want to nod off during a lecture? Sure, one has less cognitive load, but replace the Netflix binge with anything. People are willing to engage, as long as the medium is engaging (this doesn't mean easy or funny, simply engaging).

[Purely anecdotal, opinion-based discussion] This is one of the reasons I think flipping the classroom does work: they can't tune out. But if it's purely them doing work, what's your purpose there? To babysit? There needs to be a happy medium between work and lecture.

I like to look at the class time in an episodic structure. Pick a show and you'll notice there's a pattern to how the shows work. By maintaining a consistency in the classroom, the students know what to expect.

To tie it back to the article, the laptop is a great tool to use when you need them to do something on the computer. However, they should be looking at you, and you should be drawing their attention. Otherwise, you're just reading your PowerPoint slides.

wccrawford 21 hours ago 3 replies      
I'd be more impressed if they also did the same study with notepads and doodles and daydreams, and compared the numbers.

I have a feeling that people who aren't paying attention weren't going to anyhow.

However, I'd also guess that at least some people use the computer to look up additional information instead of stopping the class and asking, which helps everyone involved.

_e 4 hours ago 0 replies      
Politicians are also better off without a laptop during legislative sessions [0].

[0] http://www.snopes.com/photos/politics/solitaire.asp

BigChiefSmokem 13 hours ago 0 replies      
I'll give you no laptops in the class if you give me no standardized testing and only four 15-20 minute lectures per day and let the kids work on projects the rest of the time as a way to prove their learning and experiences in a more tangible way.

Trying to fix the problem by applying only patches, as us technically inclined would say, always leads to horribly unreliable and broken systems.

Zpalmtree 9 hours ago 1 reply      
I like having a laptop at uni just because I can program when the lectures are boring. I find the material too easy, in UK universities at least in CS; dunno about other courses or countries. But the amount of effort you need to get good marks, along with the amount you're paying, is a bit silly, and mostly you'll learn more by yourself...

That said, if you're in a programming class, having a laptop to follow along and try out the concepts is really handy, when we were in an C++/ASM class, seeing the different ASM GCC/G++ and Microsoft's C++ compiler spat out was quite interesting.

emptybits 19 hours ago 0 replies      
It makes sense that during a lecture, simple transcription (associated with typing) yields worse results than cognition (associated with writing). So pardon my ignorance (long out of the formal student loop):

Are students taught how to take notes effectively (with laptops) early in their academic lives? Before we throw laptops out of classrooms, could we be improving the situation by putting students through a "How To Take Notes" course, with emphasis on effective laptopping?

It's akin to "how to listen to music" and "how to read a book" courses -- much to be gained IMO.

brodock 12 hours ago 1 reply      
Any research that treats students as a homogeneous group is flawed. People can be (more or less) in about one of the 7 different types of learning styles https://www.learning-styles-online.com/overview/.

So making claims like "doing X works better than Y" is meaningless without pointing to a specific learning style.

That's why you hear some people defending writing on paper, while others prefer just hearing the lectures, and others perform better while discussing with peers (and some hate all of the other interactions and perform better by isolating and studying on their own... which is probably the group that will benefit the most from having a laptop available).

Fomite 17 hours ago 1 reply      
Just personally, for me it was often a choice between "Laptop-based Distractions" or "Fall Asleep in Morning Lecture".

The former was definitely the superior of the two options.

free_everybody 17 hours ago 0 replies      
I find that having my laptop out is great for my learning, even during lectures. If somethings not clear or I want more context, I can quickly look up some information without interrupting the teacher. Also, paper notes don't travel well. If everything is on my laptop and backed up online, I know that if I have my laptop, I can study anything I want. Even if I don't have my laptop, I could use another computer to access my notes and documents. This is a HUGE benefit.
LaikaF 18 hours ago 0 replies      
My high school did the one laptop loan out thing (later got sued for it) and I can tell you it was useless as a learning tool. At least in the way intended. I learned quite a bit mainly about navigating around the blocks and rules they put in place. In high school my friends and I ran our own image board, learned about reverse proxying via meebo repeater, hosted our own domains to dodge filtering, and much much more. As far as what I used them for in class... if I needed to take notes I was there with note book and pen. If I didn't I used the laptop to do homework for other classes while in class. I had a reputation among my teachers for handing in assignments the day they were assigned.

In college I slid into the pattern they saw here. I started spending more time on social media, paying less attention in class, slacking on my assignments. As my burnout increased the actual class times became less a thing I learned from and more just something I was required to sit in. One of my college classes literally just required me to show up. It was a was one of the few electives in the college for a large university. The students were frustrated they had to be there, and the teacher was tired of teaching to students who just didn't care.

Overall I left college burnt out and pissed at the whole experience. I went in wanting to learn it just didn't work out.

jon889 7 hours ago 0 replies      
I have had lectures where I have had a laptop/iPad/phone and ones where I've not had any. I did get distracted, but I found that if I didn't have, say, Twitter, I'd get distracted for longer. With Twitter I'd catch up on my news feed and then a few minutes later be back to concentrating. Without it I'd end up daydreaming and losing focus for 10-20 minutes.

The biggest problem isn't distractions, or computers and social media. It's that hour-long lectures are an awful method of transferring information. In my first year we had small groups of ~8 people and a student from 3rd/4th year, and we'd go through problems from the maths and programming lectures. I learnt much more in these.

Honestly learning would be much more improved if lectures were condensed into half an hour YouTube videos you can pause, speed up and rewind. Then have smaller groups in which you can interact with the lecturers/assistants.

kyle-rb 20 hours ago 0 replies      
>students spent less than 5 minutes on average using the internet for class-related purposes (e.g., accessing the syllabus, reviewing course-related slides or supplemental materials, searching for content related to the lecture)

I wonder if that could be skewed, because it only takes one request to pull up a course syllabus, but if I have Facebook Messenger open in another tab, it could be receiving updates periodically, leading to more time recorded in this experiment.
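That skew is easy to quantify. Under a gap-based timer (a plausible way to turn proxy logs into "minutes of use"; the paper's exact method isn't specified here), an idle messaging tab that merely polls its server gets credited with the whole class period:

```python
from datetime import datetime, timedelta

def naive_usage(timestamps, idle_gap=timedelta(minutes=2)):
    """Naive 'time on site': sum the gaps between consecutive requests
    whenever the gap is short enough to look like continuous use."""
    total = timedelta()
    for a, b in zip(timestamps, timestamps[1:]):
        if b - a <= idle_gap:
            total += b - a
    return total

# A chat tab polling its server every 30 s for a 100-minute class,
# with the student never touching it once:
start = datetime(2017, 1, 1, 9, 0)
pings = [start + timedelta(seconds=30 * i) for i in range(201)]
print(naive_usage(pings))  # 1:40:00 of apparent "social media use" from idle traffic
```

The function and polling interval are illustrative assumptions, but they show how background keep-alive traffic could be indistinguishable from active browsing in a request log.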

fatso784 17 hours ago 0 replies      
There's another study showing that students around you with laptops harm your ability to concentrate, even if you're not on a laptop yourself. This is in my opinion a stronger argument against laptops, because it harms those not privileged enough to have a laptop. (not enough time to find study but you can find it if you search!)
Radle 4 hours ago 0 replies      
If students think the class is boring enough, they'll watch YouTube; whether on the laptop or on their mobile is not really important.
TazeTSchnitzel 20 hours ago 0 replies      
> In contrast with their heavy nonacademic internet use, students spent less than 5 minutes on average using the internet for class-related purposes

This is a potential methodological flaw. It takes me 5 minutes to log onto my university's VLE and download the course materials. I then read them offline. Likewise, taking notes in class happens offline.

Internet use does not reflect computer use.

Bearwithme 19 hours ago 0 replies      
They should try this study again, but with laptops heavily locked down. Disable just about everything that isn't productive, including a strict web filter. I am willing to bet the results would be much better for the kids with laptops. Of course, if you let them have free rein they are going to be more interested in entertainment than productivity.
catnaroek 53 minutes ago 0 replies      
This is why I like to program in front of a whiteboard rather than in front of my computer: to be more productive.
Shinchy 3 hours ago 0 replies      
I've always found the idea of taking a laptop to a lecture pretty rude. I'm there to give the person teaching my full attention, not stare at a laptop screen. So personally I never use them in any type of lecturing/teaching environment, simply as a mark of respect.
homie 21 hours ago 0 replies      
instructors are also better off without computers in the classroom. lecture has been reduced to staring at a projector while each and every student's eyes roll to the back of their skull
vblord 19 hours ago 0 replies      
During indoor recess at my kids' school, kids don't eat their lunch and just throw it away because of the Chromebooks. There are only a few computers and they are first come, first served. Kids would rather go without lunch to be able to play on the internet for 20 minutes.
thisrod 13 hours ago 0 replies      
> First, participants spent almost 40 minutes out of every 100-minute class period using the internet for nonacademic purposes

I think that I'd be one of them; in the absence of a laptop, I'd spend that time daydreaming. How many people can really concentrate through a 100 minute nonstop lecture about differential geometry or the decline of the Majapahit empire?

nerpderp83 21 hours ago 1 reply      
Paying attention requires work, we need to purposefully use tools that are also distractions.
mark_l_watson 11 hours ago 0 replies      
In what universe would it be a good idea for students to use laptops in class?

Use of digital devices should be limited because the very use of digital devices separates us from what is going on around us. Students should listen and take notes (in a notebook) as necessary.

kgilpin 14 hours ago 0 replies      
It sounds like what students need are better teachers. I haven't been to school in a while but I had plenty of classes that were more interesting than surfing YouTube; and some that weren't.

The same is true for meetings at work. In a good session, people are using their laptops to look up contributing information. In a bad one... well... you know.

zokier 19 hours ago 1 reply      
I love how any education-related topic brings the armchair pedagogues out of the woodwork. Of course a big aspect here is that everyone has encountered some amount of education, and in particular both courses they enjoyed and courses they disliked. And there is of course the "think of the children" aspect.

To avoid making purely meta comment, in my opinion the ship has already sailed; we are going to have computers in classrooms for better or worse. So the big question is how can we make the best use of that situation.

qguv 12 hours ago 0 replies      
Internet access, especially to Wikipedia, did wonders for me whenever the lecture turned to something I was already familiar with. That alone kept me from getting distracted and frustrated as I would in classes whose professors prohibited laptop use.
erikb 20 hours ago 0 replies      
I'd argue that students are better off without a classroom as long as they have a laptop (and internet, but that is often also better at home/cafe than in the classroom).
wh313 13 hours ago 0 replies      
Could it be that the intermittent requests to servers by running apps, say Facebook Messenger or WhatsApp, be tracked as social media use? Because they all use HTTPS I don't see how the researchers distinguished between idle traffic vs sending a message.
zitterbewegung 17 hours ago 0 replies      
When I was in college I would take notes with pen and paper. I audited some classes with my laptop using LaTeX, but most of the time I used a notebook. Also, sometimes I would just go to class without a notebook and absorb the information that way. It also helped that I didn't have a smartphone with cellular data for half the time I was in school.
polote 19 hours ago 0 replies      
Well, it depends on what you do in the classroom. When class is mandatory but you are not able to learn this way (by listening to a teacher), having a laptop lets you do other things and use your time efficiently, like doing some administrative work, sending email, or coding...

Some students are of course better with a laptop in the classroom

marlokk 18 hours ago 0 replies      
Students are better off with instructors who don't bore students into bringing out their laptops.
Kenji 20 hours ago 0 replies      
If you keep your laptop open during class, you're not just distracting yourself, you're distracting everyone behind you (that's how human attention works: if you see a bright display with moving things, your attention is drawn towards it), and that's not right. That's why at my uni there was an unspoken (de facto) policy that if you keep your laptop open during lectures, you sit in the back rows, especially if you play games or do stuff like that. It worked great - I was always in the front row with pen & paper.

However, a laptop is very useful to get work done during breaks or labs when you're actually supposed to use it.

jessepage1989 15 hours ago 0 replies      
I find taking paper notes and then reorganizing on the computer works best. The repetition helps memorization.
alistproducer2 13 hours ago 0 replies      
"Duh" - anyone who's ever been in a class with a laptop.
Glyptodon 20 hours ago 2 replies      
I feel like the conclusion is a bit off base: that students lack the self-control to restrict the use of laptops to class-related activities is somehow a sign that the problem is the laptop and not the students? I think it's very possible that younger generations have big issues with self-control and instant gratification. But I think it's wrong to think that laptops are the faulty party.
exabrial 14 hours ago 0 replies      
Students are best off with the least amount of distractions
ChiliDogSwirl 19 hours ago 1 reply      
Maybe it would be helpful if our operating systems were optimised for working and learning rather than for selling us crap and mining our data.
aurelianito 12 hours ago 0 replies      
Even better, just remove the classroom surrounding the laptop. Now we can learn anything anywhere. Having to go take a class where a professor recites something is ridiculous.
partycoder 19 hours ago 1 reply      
I think VR will be the future of education.
rokhayakebe 20 hours ago 1 reply      
We really need to begin ditching most studies. We have the ability now to collect vast amount of data and use that to make conclusions based on millions of endpoints, not just 10, 100 or 1000 pieces of information.
FussyZeus 20 hours ago 0 replies      
Disengaged and uninterested students will find a distraction; yes, perhaps a laptop makes it easier but my education in distraction seeking during middle school, well before laptops were even close to schools, shows that the lack of a computer in front of me was no obstacle to locating something more interesting to put my attention to.

The real solution is to engage students so they don't feel the urge to get distracted in the first place. Then you could give them completely unfiltered internet and they would still be learning (perhaps even faster, using additional resources). You can't substitute for an urge to learn: strap them to their chairs and pin their eyeballs open with their fingers strapped down, and it still won't do anything. It just makes school less interesting, less fun, and less appealing, which makes learning by extension less fun, less appealing, and less interesting.

microcolonel 20 hours ago 1 reply      
Students are also better off without forcible teacher's unions and federal curriculum mandates; no chance of hearing about that.

Maybe the best way out of this mess is vouchers.

If the schools are functioning, it should be obvious to them that the laptops are not working out.

bitJericho 21 hours ago 2 replies      
The schools are so messed up in the US. Best to just educate children yourself as best you can. As for college kids, best to travel abroad.
DRM Is Toxic to Culture meshedinsights.com
392 points by jrepinc  3 days ago   250 comments top 19
musesum 2 days ago 3 replies      
Some background: I developed a software binary sandbox in the '90s; the patent has been cited 500+ times, mostly by developers of DRM. Now I am working on an open source project.

IMO, DRM extends a presumption of scarcity. Hunters share their meat, whereas farmers guard their harvest. For the hunter, the kill is a short-term abundance of value; if it isn't shared, it will spoil. For the farmer, the harvest is a long-term store of value. Scavengers exploit the asymmetry of effort required to obtain that value.

Once value became easy to copy, the symmetry of effort shifted, and culture is shifting with it. Freely copied recorded music shifted the value back to live concerts: back to a short-term abundance of value. When was the last time you posted on Twitter or FB? Those are freshly hunted moments. Wait too long and that muse will spoil.

skywhopper 3 days ago 3 replies      
I would say that DRM is merely an inevitable outgrowth of our society's enshrinement and worship of the idea of "intellectual property". That word, "property", gives the opposite connotations of "copyright"--permanent rightful ownership vs a temporary grant of monopoly on an idea.

But yes, it's definitely the case that the cultural idea that humans and corporations have the right to own a copyright for a length of time greater than the duration of any human life means that for 99.99% of all works, the effective duration of copyright truly is forever, which reinforces the idea that copyright should be forever. And if it's forever and it's "property" why shouldn't strong, user-hostile DRM exist? It's the barbed-wire, severe-tire-damage, border-wall of electronic media.

cygned 2 days ago 5 replies      
Lately, I wanted to buy an eBook. I usually buy prints, but this time I needed it as soon as possible.

Upon searching, every book store told me I would need to install some Adobe stuff on my computer to read the file. I am neither a fan of Adobe nor of the idea of being locked into an application to view a book. After realizing that there was no other digital way, I simply bought the printed version and waited the two days it took to ship.

It's a very frustrating situation for someone who just wants to read something.

zanny 2 days ago 0 replies      
This is just the tip of the leaf of the IP tree that is destroying culture.

Since the advent of permanent copyright there is no more commons. The commons stopped in 1923. Disney will continue to buy the US government into guaranteeing there is no common heritage of America for as long as they are able.

It will be interesting to see how, in a thousand years, (hopefully) scholars of the time will reflect on how systemic the damage was to American society to artificially constrain creativity so fundamentally and for so long. It is human nature to take your experiences and reinterpret them in new ways. That is how creativity is defined. But copyright and IP, especially in perpetuity, prevent that, all for what is claimed to be the protection of profits by the original creators, even for decades after the creators are dead.

makecheck 3 days ago 6 replies      
DRM and advertising tech have a similar problem of bad execution. Sure, there exists a way to do each of these that many may consider "reasonable" but in practice industries have shown that they would rather be obnoxious and make everything annoying or just much more difficult than necessary. This means they can't be trusted to do it well, and the solution must be to avoid it completely (DRM-free content and ad blocker).
scarface74 2 days ago 1 reply      
The usual spiel about DRM is that you don't own what you "bought" but in the case of subscriptions and rentals, why would anyone think they have a right to ownership when they are clearly paying for access?

Why should I have a problem with paying Apple for DRM-free music from iTunes and paying for a subscription to DRM'd music on Apple Music?

I haven't pirated music since the iTunes Store opened in 2003. Even from 2003-2008, when it was DRM-encumbered, there was a simple built-in way to remove the DRM: burn it to a CD and rip it.

I also don't have a problem paying to rent a movie from Apple/Amazon, but I would never "buy" a movie from Apple/Amazon with restrictive DRM. I also wouldn't have thought about buying a physical disc without an easy way to rip it to my Plex server and use it in the way I see fit.

rmrfrmrf 2 days ago 6 replies      
I would love to see some use cases of what people want to do with DRM'd media that 1) can't be done under current DRM limitations and 2) is within the guidelines of what the distributor allows (or within the rights granted by the user's local government).

For example, my local library buys a set number of digital licenses for ebooks, so there's a waitlist system for ebooks and each digital copy is time gated to a certain number of days before becoming unusable. In that time, though, you can read the ebook offline once downloaded through the library's app. Annoying sometimes, but overall seems reasonable.

How would that example translate to a non-DRM version while making sure that the library isn't distributing an unbounded number of licenses to customers?
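The time-gated lending model described in the comment above can be sketched as a small server-side ledger that caps concurrent checkouts. This is a hypothetical illustration (the class name, parameters, and use of wall-clock seconds are all invented here), not how any real library system works:

```python
import time

class LendingLedger:
    """Sketch of a library's license ledger: a fixed number of licenses,
    each checkout expires after loan_seconds, and no more than `licenses`
    copies are ever out at once."""

    def __init__(self, licenses, loan_seconds):
        self.licenses = licenses
        self.loan_seconds = loan_seconds
        self.checkouts = {}  # patron -> expiry timestamp

    def _expire(self, now):
        # drop loans whose time gate has passed
        self.checkouts = {p: t for p, t in self.checkouts.items() if t > now}

    def checkout(self, patron, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        if patron in self.checkouts:
            return True   # already holds a copy
        if len(self.checkouts) >= self.licenses:
            return False  # all copies out: join the waitlist
        self.checkouts[patron] = now + self.loan_seconds
        return True

ledger = LendingLedger(licenses=2, loan_seconds=14 * 86400)  # two copies, 14-day loans
print(ledger.checkout("alice", now=0))            # True
print(ledger.checkout("bob", now=0))              # True
print(ledger.checkout("carol", now=0))            # False: both copies out
print(ledger.checkout("carol", now=15 * 86400))   # True: earlier loans expired
```

The hard part the comment is really asking about is enforcing this server-side state on a file already downloaded to the patron's device, which a plain ledger like this cannot do; that enforcement gap is exactly where DRM gets invoked.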

Phenomabomb 2 days ago 1 reply      
DRM only hurts legitimate customers. It seems increasingly rare that a DRM scheme isn't cracked fairly quickly, so the only people who even see the DRM are the paying customers, and there is always some inconvenient drawback for them.
ctulek 2 days ago 1 reply      
My only concern with DRM in the browser is that, as far as I understand the technology, any website can embed a small piece of arbitrary DRM content, your browser will send that website information unique to your device, and your device can then be tracked by DRM providers and probably also by that website.

Firefox gives you the option to disable DRM in the settings. Chrome also has it but it is buried deeply in advanced settings. Safari does not seem to have it at all.

gaius 3 days ago 3 replies      
We offer research reports and white papers for your management and Board from experienced advisors

Strangely, I couldn't find any of these reports available for free download and in the public domain...

jancsika 2 days ago 0 replies      
> Thus your children won't get to play your music, show your favourite films, read your books, share your culture, with your grandchildren because they won't inherit anything digital from you that's usable. Historians won't be able to track the influences on an event because the sources have digitally corroded. You'll not even be able to share what you like with your friends.

Even Kodi assumes by (sane) default that the user does not want to save their own particular copy of the relevant media on their own particular device.

Imagining a future where grandchildren can't inherit the digital libraries stored on their family devices is early-2000s futurism that somehow skips completely over the year 2017, when the great bulk of these future grandparents are streaming everything.

ucaetano 2 days ago 0 replies      
"Travelling frequently in Europe, Ive had the chance to use two approaches to the underground/metro/subway, the Paris Metro and the U-bahn in various German cities."

The author failed to understand the actual cultural issue driving the two approaches: In Germany you don't need a gate to have people pay the ticket, most people will do it anyway. In Paris, most people won't pay and just ride for free, and tragedy of the commons ensues.

So by his own argument, DRM is a product of the culture of a people.

microcolonel 2 days ago 0 replies      
Just don't use it, today it is entirely within your power not to use DRM. If you want it to stay that way, you'll have to be the demand you wish to see in the world.
tomcam 2 days ago 0 replies      
It's all true--but it does nothing to explain how artists can be paid for their stuff at prices they set.
Silhouette 3 days ago 2 replies      
The problem with technology-enforced restrictions isn't that they allow legitimate enforcement of rights; it's the collateral damage they cause in the process.

Indeed, but the problem with not making any effort to enforce restrictions technically is that typically you then aren't enforcing your legitimate rights.

The metro analogy was interesting. If the system in Germany works, it is because people are honest and pay for what they are using without physical compulsion.

Sadly, if you try to start a business creating original content and making it available online in the same spirit of trust, you will quickly learn that many people in the world are not so honest.

You will also learn that unlike a citywide metro system, it is all too easy for someone of less noble intentions to not only take your content for themselves without paying but also set up their own redistribution channels and steal your customers and your revenues.

A third lesson you will learn is that a lot of people may quite innocently assume that if they can do something then it's allowed, particularly if they speak a different language and don't necessarily understand the deal being offered. DRM can be quite effective at deterring this sort of casual and often unintentional infringement.

mcbobbington 2 days ago 0 replies      
Interestingly the DMCA prohibits creating software to bypass DRM. It is completely unconstitutional, violating freedom of the press. That part of the law should be struck down.
nerdponx 3 days ago 2 replies      
It's like checking the lift ticket, yes, but also the guy checks that you are only wearing gear hired from the resort shop, skis with you down the slope and trips you if you try any manoeuvres that weren't taught to you by the resort ski instructor; then as you go down the slope he pushes you away from the moguls because those are a premium feature, and finally you get to run the gauntlet of armed security guards at the bottom of the slope checking for people who haven't paid.

How is this at all an accurate analogy for DRM?

erikpukinskis 2 days ago 2 replies      
All forms of DRM can be defeated by pointing an iPhone at them.
rayiner 2 days ago 4 replies      
I disagree with the characterization of all this media as "culture." They're entertainment products that exist for the sole purpose of generating a profit.
EU Prepares "Right to Repair" Legislation to Fight Short Product Lifespans bleepingcomputer.com
386 points by SwellJoe  2 days ago   332 comments top 21
maaaats 2 days ago 14 replies      
The EU is also working on a mandatory two-year expected lifetime for many products. As in, if a product is in a category expected to last more than two years (e.g. most products except perishables) and it breaks down through no fault of the consumer, the producer will have to fix or replace it. This extends beyond whatever guarantees/warranties are provided by the producers.

I actually think this isn't progressive enough. I would expect my washing machine, computer, smartphone, oven etc. to last longer than two years; 5 years minimum.

filleokus 2 days ago 6 replies      
Is this really a good idea? Sure, "right to repair" and longer lifespans might sound good. But think of all the advancements made by companies like Apple across almost all of their products that would be prohibitively hard under this legislation. How would you design an iMac where the user could easily fix broken parts (without using a suction cup to remove the whole front glass), or a super-thin MacBook Pro where no parts could be fastened by glue, like the batteries? Not to mention the iPhone. I imagine that demanding easy user repair would make everything look like the bulky Lenovo machines.

I don't believe that Apple designs their stuff with the intention of making it hard to repair for end users, but rather that they make trade-offs that improve other things at the cost of making user repairs hard. But I might be wrong...

zabana 2 days ago 1 reply      
To me what's actually ridiculous is that this is even subject to debate. Consumers should not have to fight for their right to repair, it should be assumed. Also, this raises a lot of frightening questions with regards to ownership (and its transfer) and how it's perceived by the Apples and Samsungs of this world. Realistically, if we let this slide, it could open up the door for many more abusive violations of rights, but I digress.
skrause 2 days ago 3 replies      
The right to repair is a good step, but I think the better approach to fight short product lifespans would be much longer and mandatory guarantees on products.

A mandatory 5-year guarantee on laptops and 10 years on washing machines or dryers doesn't sound unreasonable.

bluGill 2 days ago 2 replies      
Right to repair means nothing without ability to get parts.

I work on embedded products. We have perfectly good working machines that don't need any changes, but the CPU (or some other off-the-shelf chip) is not going to be made anymore, so we have to do an expensive port to a new CPU: EVERYBODY loses. We have to charge more for our products because the cost of engineering is amortized over only a few years. We have ideas for a different machine we could build, but that engineering budget (expertise as much as money) is stuck working on the port. If someone does find a bug we have to fix it in both versions.

In the meantime you can find parts for 100-year-old cars. Part of that is someone in their garage making them (can you make a replacement chip in your garage given just the old one?), but part of it is that the molds to make oil filters still exist, so they can make another batch on demand.

diego_moita 2 days ago 1 reply      
One company that deserves a lot of praise for respecting the "right to repair" is Baratza[1], a coffee grinder manufacturer in Seattle.

Support for customers is part of their mission statement, and they provide not only instructions but also all necessary parts and plenty of videos on YouTube showing how to fix their products. Sometimes they even provide upgraded parts at no extra cost.

No, I am not associated with them in any way other than being a customer.

[1] https://www.baratza.com/

objectivistbrit 1 day ago 2 replies      
People are focusing on the wrong question.

The question is not: what are the relative costs and benefits to user-repairable products? Of course there are both benefits and drawbacks.

The question is: who should decide?

That is, should the government enforce a "right to repair" - which means a ban on non-repairable products? Or should companies be able to manufacture both repairable and non-repairable products, and let consumers decide which they want to buy?

A free society is broader than just the free market. No companies manufacture a repairable version of product X? You can use free association to form the "Society for User-Maintainable X Devices". You can use free speech to campaign in the press for user-maintainable X devices. You can promote to others the virtues of having a user-maintainable X, until there's enough latent demand for startups to spring up to fill it.

None of this requires the government.

squarefoot 2 days ago 2 replies      
This is good, but let's see how it clashes with copyright law, since "repairing" an obsolete electronic appliance might involve reverse engineering it, and at least distributing information on how to do so, if not complete binaries containing parts of the original firmware along with other parts that were hacked or developed from scratch. I expect some opposition from the usual suspects lobbied by the industry.
rdl 2 days ago 2 replies      
It might be nice for some products to have this designed into them, but the transition will be tough. It might mean redesigning a product well before the end of a natural product cycle. It will almost certainly raise costs (and thus prices) at transition.

Last big EU effort like this was RoHS, which actually lowered product lifespans and reliability substantially.

Even if it is overall a net good, the winners here will be those selling to EU people but exempt from the regulation during the transition (US, Canadian, etc retailers); once all the costs have been paid by EU consumers to the point that it makes sense to voluntarily adopt elsewhere, then global production might shift overall to selling these kind of devices only.

So, essentially a subsidy by the EU to the rest of the world in two separate ways.

jaclaz 2 days ago 0 replies      
Osiris30 2 days ago 0 replies      
See the previous discussion on HN - "recraiglist" and "They Used To Last 50 Years" (1) - a well-circulated blog post by a successful used home appliance repair & trading entrepreneur.


yummybear 2 days ago 5 replies      
How is planned obsolescence implemented in practice?
Shivetya 2 days ago 1 reply      
I have a feeling that in some cases less efficient manufacturing and more individually replaceable parts will be more damaging to the environment.

What worries me most is who gets to decide which parts of a product must be user-replaceable, or replaceable at all. Does a piece have to comprise a certain percentage of the device, or will they break it down to specific components only: battery, screen, and logic board?

amelius 1 day ago 3 replies      
We should move to a service economy. If instead of buying a washing machine, I could buy the service of washing my clothes, the company has an incentive to keep that machine working.

As always in politics, getting the incentives right works wonders.

lousken 1 day ago 0 replies      
Replaceable batteries in smartphones will return? I might wait for that.
rajeshmr 2 days ago 0 replies      
This is highly welcome! I wish all governments would introduce this legislation, 'coz we are losing our beautiful planet and its resources to capitalistic greed. We should by all means extend the life of products to utilize resources efficiently.
amiga-workbench 2 days ago 0 replies      
Nice to hear, wish companies and consumers would stop chasing the thinness meme at all costs and get their priorities straight.

Removing socketed components to save a few cents off the BoM and a couple of mm off the thickness is a revolting trade-off.

mikl 2 days ago 4 replies      
Clueless legislation. When you look inside a modern smartphone or laptop, you'll find that all available space is crammed with components or batteries. If you want components to be replaceable, you'd need much more space for doors and hatches, and waterproofing would be all but impossible.

Especially when it comes to batteries, this is moronic. Ultrabooks are only possible because multiple, weirdly shaped battery packs can be installed wherever there's room inside the casing.

This effectively means that hardware producers would need to create a larger, shittier version of their products just for the EU market, and EU citizens would be forbidden from purchasing the good versions of things. I sense a massive wave of parallel imports coming...

jokoon 1 day ago 0 replies      
This would be a true gesture for the environment...
pasbesoin 1 day ago 0 replies      
My 40-year-old microwave just died. Magic Chef, manufactured in October 1977 in Anniston, GA. Ran like a champ up to its last moment. A faint whiff of overheated electronics at the end of one run, and upon the next, the magnetron (or klystron, which someone told me such an old unit might have?) wouldn't power up. The control panel still works just fine.

My previous furnace, a Lennox, was circa 35 years old when it was replaced. The blower motor was failing and I took the technician's advice to put the repair money towards a new unit instead. A Rheem, which came with at least two significant (and noisy) defects that took repeated, ineffective visits (with additional expenses) and over a year to finally partially mitigate. What a piece of shit, that "highly regarded" Rheem unit. And the subsequent support for it, under warranty. And, I understand, its expected lifetime is on the order of 15 years.

New stuff may be more efficient, but a lot of it is crap for endurance and sometimes even simple convenience. And, I am increasingly comparing the supposed savings (energy, water, etc.) against cost -- both in money and in time and effort -- of maintaining and replacing these... "chintzy" newer models.

Sure, slap a sheet of stainless steel on the exterior. Style it up. Inside, it's still kind of a piece of crap.

My parents replaced their many years old Kenmore washer with a top of the line top-loader made by LG. (For various reasons, a front-loader didn't work for them.) The clothes consistently come out of the LG wrinkled as well as covered with lint. If you air dry (which makes clothes last longer and not shrink and all sorts of good things), it's a real problem.

The suggested work-around passed on by the seller? Run all loads with the "Bulky Items" setting on. What does this do? Fills the drum to the top with water. There go the supposed water savings and some of the energy savings (from the mass of heated water consumed, as well as the additional power to move the extra water around). At least the unit has this setting -- thank goodness! Otherwise, it would be pretty unusable.

Someday, someone's going to take the time and effort to research and write up a book full of comparisons between our current household machinery and older generations. And, I suspect, some of the results aren't going to be pretty.

JoeAltmaier 2 days ago 4 replies      
Devices are going to last all sorts of times - from a few months to years. Depending on what you pay for it.

To legislate that no cheaper versions of things should be available, even making them illegal, seems blind to the issues of those of limited means. It seems like a society pawn saying "I always buy Gucci; I mean why do they even make other brands? Har har!"

Demanding a warranty for instance would be a softer approach. But to make cheap, low-lifetime options actually illegal shows some fundamental misconception about how a free market works.

Joe Hruska, founder and CEO of RescueTime (YC W08), has died rescuetime.com
456 points by robby1066  1 day ago   46 comments top 43
save_ferris 1 day ago 1 reply      
I used RescueTime to kick my social media habit and learn to code 5 years ago. I was working in the retail industry and found a way to spend 10 hours a week coding on my own for almost two years before I got my first full-time development job, largely due to RescueTime.

I didn't think much of it then, but in retrospect, the impact Joe's product had on my career turned out to be pretty significant. RIP Joe.

therajiv 1 day ago 0 replies      
Wow - literally just saw an article about RescueTime on the frontpage of NYT (https://www.nytimes.com/2017/07/05/your-money/where-does-the...). And installed it 5 minutes ago. Coincidence, if I've ever seen one... RIP.
ludicast 1 day ago 0 replies      
Fuck that's sad.

Never knew him but from his app/site he definitely "made the world a better place".

I don't know how many person-years he had on earth, but he helped people liberate many thousands of their own.

mmaunder 1 day ago 0 replies      
My sincere condolences to the whole RescueTime team. I read about this for the first time here, today. I mentioned it to my wife - we met some of the RT team a long time ago and then lost touch. Her first reaction was total shock and then "omg, Joe was such a nice guy." That's the impression Joe left both of us with. I'm sure his passing has left a huge hole in many people's hearts.
skinnymuch 1 day ago 0 replies      
This is so sad. I always have subscribed to RescueTime during their sales and pay for a year upfront. So even though I never used it as much after the first couple months, it made sense to continue subscribing year after year. He has helped so many people like myself. RIP Joe. I talked to him twice in the early days of RT. Time to find those emails. RIP.
squidbot 23 hours ago 0 replies      
I have a very heavy heart from this news. Joe's wife was my son's favorite teacher (he still talks about her two years later) and I'd met and spoken with Joe several times socially, though he was quite ill through much of the time I knew him. We had some good "geek out" times when my wife and his chatted (both are teachers so they'd talk teach and we'd talk tech.) They were a wonderful, loving, caring and giving couple, and Joe was just a good soul. You are missed Joe.
edshiro 1 day ago 0 replies      
I recently reinstalled RescueTime as I wanted to better track my productivity, since I am studying deep learning and will soon be doing the Udacity self-driving car engineer nanodegree. My condolences to the RescueTime team and Joe's family. Beyond the grave, I would like to sincerely thank him for the awesome product he built.
nstart 1 day ago 1 reply      
This was a shocker to read. I never really thought about the CEO of RescueTime, but the product he made truly fit the mold of "changing lives". I took my habit of random social media visits from 3 hours a day (stuff accumulates terribly) to two planned and focused visits totaling 20 minutes a day.

That, over a period of 3 years, is almost 87 days of my life saved. I pay for the product because of all the good it has given me. It feels really sad to see someone who gave me back so much time having to leave so soon. RIP Joe. If anyone from the RescueTime team is reading this, stay strong. Condolences to all of you as well.

ajohnclark 2 hours ago 0 replies      
Thank you, Joe, for saving me countless hours and from social media addiction. You certainly left a great, great legacy and your family should be very proud.
1123581321 1 day ago 0 replies      
I've been an on-and-off paid user of RT for several years and it has probably saved me thousands of hours. Since the software is somewhat popular, I think he had a great impact on the world. Thanks to the RT team for this thoughtful announcement.
wouterinho 1 day ago 0 replies      
When we started out with our company, RescueTime was one of our "competitors": more or less the same technology, but different use cases (discovery vs. productivity). I would often find Joe having the same issues and posting in the same support tickets as I did.

I have always followed them since and think they actually have a very relevant product in this world of ever increasing interruptions. My condolences.

bfioca 23 hours ago 0 replies      
Thank you all for such a great showing in these comments. It really cements what kind of person Joe was, and how much he meant to us at RescueTime.
petecooper 1 day ago 0 replies      
I was not aware of RescueTime or Joe until I read this.

Now I know what it is, it looks like a very good fit for me. Horrid circumstances for me to find out about it, and I don't know what else to say about this.

Family, friends, colleagues: I am sorry for your loss.

kilroy123 1 day ago 0 replies      
I didn't know Rescue Time was from YC and I've used their product for years.

Sad to hear about this though.

artur_makly 1 day ago 0 replies      
His impact... was immeasurable.


Now, if Apple would only let RT track all our app use...

narrator 1 day ago 0 replies      
Rescue Time has helped me stay focused on work and to quit doing so much news browsing during business hours. Joe's work has saved a lot of people a lot of hours of their lives, and that's something to be proud of.
pocketsquare2 1 day ago 0 replies      
Awesome vision, product and person. I started using RescueTime back in '09 and it has helped me immensely over the years. I would say incalculably but I can actually calculate the impact quite well. :)

RIP and best to his family, friends and team.

psyc 1 day ago 0 replies      
Oh no. I had a few conversations with him years ago when I interviewed there. Super nice guy. My condolences.
guantanamo_bob 1 day ago 0 replies      
Even though I never knew Joe, it's sad to hear of his passing. RescueTime is one of the most eye-opening products I've ever used, and has improved my life considerably the last few years.
seshagiric 1 day ago 0 replies      
It is inspiring to read how Joe continued to be involved and lead despite his illness. RIP.
mhartl 1 day ago 0 replies      
I'm so sorry to hear this. I was in the same YC batch as Joe and the other RescueTime founders back in 2008. Joe will be sorely missed by all who knew him.
rhizome 1 day ago 0 replies      
RT was innovative, or "innovative enough," when I used it years ago, and it really helped me get a handle on my unfocused internet travels. I hope my freeloading didn't affect his passing. RIP.
kayhi 1 day ago 0 replies      
Seems like this would deserve a black bar here on HN
pj_mukh 1 day ago 0 replies      
After using numerous productivity apps I keep coming back to rescuetime. Kudos and RIP.
stanfordkid 1 day ago 0 replies      
RIP -- he seems like a great person.
pritianka 1 day ago 0 replies      
So so sad. I learned a lot from the RT business model while building WakaTime.
mgiannopoulos 1 day ago 0 replies      
RT is a major part of my working environment. From the comments here it's obvious his work has touched many lives. R.I.P.
rexpop 1 day ago 0 replies      
RescueTime is a phenomenal piece of technology, one that helps makers with the hardest problem in art: spending time with your ass in the chair.

If growth necessitates measurement, RescueTime is a hugely important tool in the belt of anyone who hopes to make their contribution to the human race.

This review is maybe slightly tinged by my having just read Pressfield's "The War of Art", but I've been using RescueTime for years, and it has helped me celebrate countless victories against Resistance.

Thanks, Joe & co.

realdlee 1 day ago 0 replies      
Not the first time I heard about RT, but just signed up! RIP Joe.
pmarreck 1 day ago 0 replies      
I was just wondering how I could get more productive and just learned about this product via this (odd, eh?)
arunabh 1 day ago 0 replies      
RIP I have been using RescueTime for almost 2 yrs now
poirier 1 day ago 0 replies      

Amazing application and great mission.

lyime 1 day ago 0 replies      
Really sad to hear. Big fan of his work. RIP.
ryan-allen 1 day ago 0 replies      
Poor guy, Rescue Time is a great product and I use it to keep myself honest. RIP.
maerF0x0 1 day ago 1 reply      
A man who helped us get back our own lives has lost his. RIP good sir.
booop 1 day ago 0 replies      
RIP and condolences to his family, friends and colleagues. RescueTime looks like a product I should've gotten several years ago.

I wish I heard of it earlier from some other news.

jayliew 1 day ago 0 replies      
Deepest condolences :(
arasakik 1 day ago 0 replies      
Rest In Peace Joe.
habeanf 1 day ago 0 replies      
yosito 1 day ago 0 replies      
balls187 1 day ago 0 replies      
chronic6a5 1 day ago 1 reply      
mrlinx 1 day ago 0 replies      
Good at programming competitions does not equal good on the job [video] catonmat.net
372 points by jpn  4 days ago   166 comments top 33
pitt1980 4 days ago 4 replies      
I'm not able to watch the video on my current computer

but it's actually really typical that when something confers an advantage in being selected into a certain pool

the success of those in the pool after selection will negatively correlate with that thing


the really obvious example of this is the hockey birthday thing from Malcolm Gladwell's Outliers

people with earlier birthdays were more likely to make it past each selection stage on the way to becoming an NHL player

but those who were able to be selected in spite of their later birthdays were typically more successful after selection.


The authors contend that the strategy might actually work against a team's success because they found that players born later in the year and drafted later actually had more productive hockey careers.

Deaner said the study showed that men drafted in the second half of the year were about twice as likely to have successful careers in the NHL, reaching benchmarks like 400 games played or 200 points scored, than those born earlier in the year.

"If the team wasn't making this mistake, they probably would have been more successful," he said. "The guys born in the first part of the year are much more likely to be busts."


fatjokes 4 days ago 11 replies      
I'm also a former ICPC world finalist and I'm willing to believe this is the case in a BigCo because: 1) Top ICPC competitors tend to be extremely socially awkward, and may not work well in groups. 2) The vast majority of engineering work is very algorithmically simple, so these folks don't get a chance to "shine".

However, I've noticed that they are popular in prop trading firms, where work tends to be in very small teams or individual. I don't know how their performance correlates to fund performance.

If I were hiring, I'd still prefer to hire at least some top ICPC performers. The hard algorithms are rare---but can make or break your product.

I also think the knowledge learned from programming contests is invaluable. I'd like to be able to discuss bipartite matching or min-cut with my colleagues without eliciting a blank look.

rkunnamp 4 days ago 2 replies      
Just as there is no such person as a 'Good CEO', only 'a good CEO for a particular company at a particular point in time', there is no such person as a 'Good Programmer', only 'a good programmer for a particular job at a particular point in time'.

Context matters a lot. One does not use an AK-47 to kill a mosquito. It is terrible for that job. But that does not make the AK-47 a terrible weapon.

Good at programming competitions does not equal good at any programming job, is a more appropriate sentence.

lr4444lr 4 days ago 3 replies      
In the chapter of his interview in Coders at Work[0], Norvig found that the strongest correlate in the interview process with success at Google was paradoxically to have been given the lowest possible score by one of the interviewers. He surmised that this was because in order for such a person to have even gotten hired at the end of the process, someone trustworthy must have seen so much potential in the prospective hire that he strongly advocated for the person to be offered a position which worked in spite of that other low rating.

Kind of ironic for a company whose product values are so tightly tied to quantitative data.


mcv 4 days ago 2 replies      
I once had an online discussion about the difference between competitive programmers and professional programmers, and which were better. Someone argued that competitive programmers were better, because they had to perform in more extreme circumstances. As he put it: they were sent into the forest with a knife to kill a lion.

Everybody else ran with that metaphor. Someone asked what a lion was doing in a forest; don't they live on the savannah? I asked whether he was sure there was a lion; plenty of times I've been sent to kill a lion and ended up having to kill a goat or an elephant instead. Are we even sure it needs killing?

I think that's the difference between a competitive programmer and a professional programmer. The competitive programmer will be much faster with a solution to the given problem, but the professional programmer will solve a better problem.

NTDF9 4 days ago 0 replies      
I have had a bad experience with an algorithmic topcoder.

The guy has a brilliant mind. But he doesn't understand the big picture or why his code would fail when integrated with big codebases.

He thinks his work is "done" when he has written his tiny little code with some queues and trees. Then he leaves it for someone else to integrate and thus really solve the problem.

The best engineers I found were the ones who took ownership of their projects and had the ethic to dig deep. Not the ones who could solve toy algo problems.

bit_logic 4 days ago 0 replies      
Here's a list of companies that don't do this: https://github.com/poteto/hiring-without-whiteboards

I hope it's the beginning of an industry-wide trend away from these types of interviews.
FTA 4 days ago 0 replies      
Look at programming competitions from an assessment perspective: what are they measuring and is what's being measured good or bad for a job?

Ability to work under short time constraints (probably good), to hack out some solution that will work temporarily (good) but probably isn't solid (bad), to forgo time considering the implications of design and implementation choices (bad), to develop without communication if it's a solo competition (bad) or without communication outside the small group of core developers (bad), to build a solution without getting feedback and refinements from stakeholders (bad), and so on.

kazinator 4 days ago 0 replies      
People who enter programming competitions are looking for some sort of glory: to be stars. When they don't get it from the job, they get bored.

I suspect that the people good at programming competitions could easily perform well on the job, if the motivation were there. I don't think it necessarily has anything to do with short-term versus long-term problem-solving focus.

There are plenty of short-term problems that you have to solve on the job to be effective. You're not doing them in competition against anyone such that if you procrastinate, you will lose, so the motivation vaporizes.

Also, since job interviews are like programming competitions, people who are good at programming competitions figure they can easily get a job anywhere, and to do that repeatedly. They are not motivated into working hard by job security concerns.

js8 4 days ago 1 reply      
I would tend to agree, and while I was never very good at programming competitions, when I saw some of the winning code on Topcoder, I somehow lost interest. If that's the code I would write if I learned to be better at competing, then I should rather spend time learning something else.

On the other hand, something like Project Euler or even Advent of Code is very nice, if you do it at your own pace, or for learning a new language.

0x4d464d48 4 days ago 0 replies      
I haven't done a code competition before but my understanding is that code competitions reward cowboy coding over good engineering practice, i.e. implementing features while paying no heed to maintainability.

When you have that mindset and start dealing with a codebase as monstrous as Google's, it sounds like a recipe for some serious technical debt.

pklausler 4 days ago 1 reply      
As a long-time interviewer, I've learned that a candidate being good at programming competitions means that they're probably good at programming competitions.

It's a weak signal either way for success or failure at interviewing and being able to do the job. Part of the problem, as we've discussed on HN so often recently, is that a programming interview has to waste time with FizzBuzz-style questions just to flag the candidates with great resumes, transcripts, and phone screenings who still actually can't program a computer to solve even a trivial problem.

bjacokes 4 days ago 0 replies      
Here are some of the positive qualities I would expect from a competitive programmer compared to the general CS population early in their career:

- knowledgeable about algorithms and data structures

- good at analyzing correctness and edge cases, even on simple non-algorithmic problems (e.g. FizzBuzz)

- accustomed to working hard and learning new concepts. This attribute is not specific to competitive programmers; for example, I'd expect the same from an open source contributor, but it's higher than for a typical college student.

Some negatives I'd expect, which are fixable over the course of their career:

- over-confidence in code, under-testing

- less skilled in OOP, coding style, version control systems, as well as web development or systems code (unless they have specific previous work in these areas)

- sometimes looking down on gruntwork/rote as beneath them (like the view that pure mathematicians have towards applied math or statistics)

I think that list of positive attributes often outweighs the potential negatives, especially during an internship or in the first year or two of someone's career. After that, I would expect many non-competitive programmers to have picked up on some of those advantages (code correctness, learning new concepts).

I've tried to steer my own interviews away from algorithms (especially DP) and toward problems that are relatively straightforward, while still being complicated enough that someone has to write precise code and identify/fix a few edge cases.

potatolicious 4 days ago 6 replies      
It continues to surprise and frustrate me that as an industry we continue to highly prize proxy signals for engineering skill, when engineering skill is so directly measurable.

Why even bother with measuring things that are N-degrees removed from actual engineering, when you can just get people to engineer things?

I know others on HN have been hammering this point home for years, but until something changes it deserves to be repeated ad nauseam: work samples work samples work samples work samples.

Stop with the trivia questions. Stop with the contrived algorithms questions. Ask people to design systems, ask people to defend their designs, ask people to write real runnable code that directly relates to the work they will be doing at your job, ask people to review a real piece of code written at your company, ask people to critique real design produced at your company. Anything but what we're doing right now.

kinkrtyavimoodh 4 days ago 0 replies      
Adding to the good points here, I think coding competition superstars are also potentially more likely to have a diva complex, while others are more likely to have an impostor complex when they get into a company like Google. The right amount of impostor complex (where it enables you but does not cripple you) is actually very helpful, as it helps you learn and better yourself.
sghiassy 4 days ago 2 replies      

Does being good at programming competitions make you good at interviewing for programming jobs?

Obviously there's some irony in that question. But I also think there's some truth to it as well.

paulcole 4 days ago 0 replies      
Also important to remember that good at programming does not necessarily equal good at job.

Excelling as a communicator, being an empathetic person, and having great interpersonal skills are just as important as (perhaps more important than) how well you can code.

adavidoaiei 4 days ago 0 replies      
There are two types of business software: first, business-to-business software made to run internally at a company, where the business runs on this software and you generally use enterprise frameworks; second, business-to-consumer software, where anyone can make an account and use the application, as with some software-as-a-service products (there are other kinds of business-to-consumer web applications too).

When you allow anyone to make an account and use your application, you sometimes can't rely on enterprise technology, and it's better to write everything from scratch or to customize open source solutions.

What they are testing in algorithm contests:

1) the algorithm is correct across various test sets

2) the performance of the algorithm, i.e. running time in milliseconds, so you should know how to measure an algorithm's time complexity in O(N) notation

3) memory consumed, so you should know how much memory you are allocating for variables (for example, a variable of type byte consumes 8 bits)

I think that programmers who have won prizes in algorithm contests are suitable for business-to-consumer applications, because there they will find a challenge; this kind of programmer will be bored digging into enterprise frameworks.
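The correctness and running-time criteria above can be made concrete with a small sketch (my own illustration in Python, not from the comment): counting basic operations instead of wall-clock milliseconds makes the growth rates easy to compare.

```python
# Compare how two search algorithms scale by counting basic operations,
# which is more repeatable than timing in milliseconds.

def linear_search(xs, target):
    """O(n): examine each element in turn; returns (index, steps)."""
    steps = 0
    for i, x in enumerate(xs):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(xs, target):
    """O(log n) on a sorted list; returns (index, steps)."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid, steps
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

for n in (1_000, 1_000_000):
    xs = list(range(n))
    _, lin = linear_search(xs, n - 1)   # worst case: target is last
    _, log = binary_search(xs, n - 1)
    print(f"n={n}: linear={lin} steps, binary={log} steps")
```

Searching for the last element of a million-item list takes a million linear steps but only around twenty binary ones; judging submissions on exactly this kind of gap is what the contest criteria amount to.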

aqp 3 days ago 0 replies      

It looks like it wasn't "being good at programming competitions" that was negatively correlated with job performance.

It was "participated in programming competitions".

And there are some more "how to interpret machine learning models" caveats in that blog post.

It seems to me the biggest factor in explaining this is that the people who are just below the hiring line but participate in competitions get a bump over the line. Since there are more people just below the line than above it, the "participates" group is bottom-heavy, producing the correlation.

I do a lot of interviews, and it seems to me that lots of people with experience perform below how they "should", because they're not practiced at solving problems from scratch, they work all day on modifying larger systems. Programming competitions would fix that for them, as would most open-source hobby projects.
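The bottom-heavy-group explanation above can be demonstrated with a toy simulation (entirely my own construction; the hiring threshold, bump size, and participation rate are arbitrary assumptions):

```python
# If competition participation adds a fixed bump to a hiring score, hired
# participants only needed true skill above (threshold - bump), so their
# average true skill ends up below that of the other hires.
import random

def simulate(n=100_000, threshold=1.0, bump=0.5, rate=0.2, seed=42):
    rng = random.Random(seed)
    participants, others = [], []
    for _ in range(n):
        skill = rng.gauss(0, 1)                # true on-the-job skill
        competed = rng.random() < rate         # did competitions?
        score = skill + (bump if competed else 0)
        if score >= threshold:                 # passed the hiring bar
            (participants if competed else others).append(skill)
    return (sum(participants) / len(participants),
            sum(others) / len(others))

avg_participant, avg_other = simulate()
print(f"hired competition participants, mean skill: {avg_participant:.2f}")
print(f"hired non-participants,         mean skill: {avg_other:.2f}")
```

Even though participation is independent of skill here, the hired participants come out with a lower mean, reproducing the negative correlation without anyone being worse *for* having competed.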

agounaris 4 days ago 2 replies      
Winning the dunk competition does not make your team be an NBA champion...
AndrewStephens 4 days ago 0 replies      
I don't find this surprising. I played around on Top Coder enough to reach the first division, but the problems presented were in no way related to my day-to-day work. While the people who did very well (I never got anywhere once I hit the top grade) might have had an excellent command of very specific data structures and techniques, none of the code would have been very useful in the real world.

If I was interviewing someone, a career as a competitive programmer would not be a detriment but it would not count for very much overall. We are looking for creative thinkers that can work as a team.

CalChris 4 days ago 0 replies      
The market is pretty good at sorting this out and I'm sure programming competition winners are fairly valued. I've seen more Berkeley, Stanford, MIT, CMU and IIT degrees rise to the top than other schools. I haven't seen any top coders. Maybe they're there and I haven't noticed. But I haven't seen them on resumes that I've culled through.

It's probably something you might list on a first job resume but further on down the road, you'd just list your work and your education.

qdev 3 days ago 0 replies      
My viewpoint is taken from the context as someone who is a reasonably seasoned developer returning to the job market. I never did programming competitions in university, though I surely was someone who fit the profile (computer science guy with a discrete math bent). As part of my recent prep for a job search, I joined one of the programming competition websites and did a couple of contests.

I posit that one of the dangers of spending a lot of time doing programming competitions and becoming very proficient at them is that, perhaps, you can come to believe that "true" programming, some sort of Platonic ideal of programming, is about coming up with the clever insight that solves an algorithmic puzzle.

But, in fact, a fair bit of _commercial_ programming is down and dirty, with databases, and user interfaces, and a lot of the time is really just shuffling data from one place to another, maybe filtering it or combining it with another set of data.

And that's just at the beginning of your career. Later on in your career, success means being able to work at larger scales in a team. That means organizing the code in a way that supports the efficient development of the codebase by individuals like yourself, by your team, by the development group as a whole... And at the architect level, you perhaps are looking at designing the system to support the efficient operation of the entire organization.

So I can easily believe that success at a programming competition does not correlate with long-term success as a software engineer in commercial software development. The two are really very different.

(Btw, I actually found the competitions that I did to be fun, but mentally exhausting. I'd say go ahead and do them, especially if you have an inclination for those types of problems. Just be prepared to use a different mindset for commercial software development.)

WalterBright 4 days ago 2 replies      
All job interviews rely on proxies for whether someone will be good on the job or not, because the only way to know is to actually hire them.

All proxies are inaccurate.

bshanks 4 days ago 1 reply      
I see a lot of comments speculating that there may be something wrong with programming competition winners, but this result might be a statistical curiosity unique to Google. It's possible that for most companies, being good at programming competitions is still positively correlated with job performance. For example, what if most people whom Google hires could have won some programming competitions if they had wanted to (a situation most other companies are probably not in)? In that case, 'won programming competitions' could be a lousy filter for Google, yet a good filter for other companies.
purpleidea 4 days ago 0 replies      
Completely agree with Peter. I'm shit at those competitions (maybe not the worst ever) but I think I'm pretty good at my job. But I think there are also some who are good at both.
x1798DE 3 days ago 0 replies      
In general, I think there are a lot of things like whiteboard interviews and coding competitions where employers would prefer "good at X means good on the job", but are reasonably happy to settle for "bad at X means bad on the job."
samlittlewood 4 days ago 0 replies      
When I am interviewing:

Candidate having competed in programming competitions - big green flag.

But, if success in competitions is, in their view, their biggest asset - amber flag.

I would extend this to most competitive endeavours.

hajderr 4 days ago 0 replies      
Well, if you're not good at programming contests, does that entail being a contemplative/slow programmer?

I'd rather pick an algorithmist and teach him/her to reflect than pick a so-called "reflectionist"/"slow coder" and teach him/her how to solve algorithmic problems.

I'd be interested in knowing the guy's (catonmat) own reflections and experiences too.

twii 4 days ago 0 replies      
Depends also on what job I guess?
bluetwo 4 days ago 0 replies      
williamsmj 4 days ago 2 replies      
The title of the post understates the claim in the link, which is that, "being good at programming competitions correlates negatively with being good on the job".
Still locked out of my AWS account docs.google.com
462 points by colinmeinke  5 days ago   272 comments top 38
stickydink 4 days ago 18 replies      
I've been locked out of my AWS account for almost 3 years now. I have 2FA enabled, tied to a phone number I no longer have access to.

I receive two emails per month. One invoice, for $0.80 (I cannot remember what is running on there, but I guess it must be something). One threatening, "Your AWS Account is about to be suspended". I've been "about to be suspended" for the entire 3 years.

I want to pay the bill, I can't log in.

I emailed, I phoned. Eventually I spoke to somebody, who told me the only way I could access my account was to fill in "some paperwork"[1]. They emailed that to me. It has to be notarized. I called again, they absolutely will not accept this if it is not notarized.

I've explained to them that all I want to do is pay the ~$30 in bills I've accrued for 3 years. I don't mind if I never get access to the account - let me pay the bill and shut the account down, they won't have it. I don't have any free notary access, and I'm not willing to pay more than the AWS bill amount just to be able to pay the bill. I've explained that to them, and they don't seem to care about that either; they'd rather not have the bill paid and continue piling it up.

[1] https://s3.amazonaws.com/AWSCS_CustomerForms/IdentityVerific...

colinmeinke 5 days ago 7 replies      
8 days ago I tried to log in to my Amazon retail account and received an invalid-password error. As it turned out, my account had been closed because it appeared to Amazon that there had been a suspicious login. This is the same account that I use for AWS - hosting websites critical to my business.

Today it appears I am no closer to gaining access back to my AWS account than I was on day 1, even though I have been billed as normal for my services during this time.

This should serve as a warning to anybody else who has an Amazon account that is shared between retail and AWS.

Linked is a list of every event and interaction I have had during the last 8 days with Amazon, via Twitter, email, phone and chat.

colinmeinke 4 days ago 2 replies      
UPDATE: The hold has been removed from my account, and I have access again. Even though I had previously been told my account had been closed, it seems like this wasn't final. This resolution was down to an escalation of my case at Amazon, after a team member contacted me through Twitter and promised to personally look into it.
mirekrusin 4 days ago 8 replies      
Also having a problem with AWS: I can't access my account, and they keep billing me for something there (EC2?) that I want to shut down, but I can't.

I recently moved from Brazil to UK (new address) and changed phone + sim card (Authenticator after restore from backup lost all 2 factor auth entries).

This is the moment when you realise that you're outside of predefined use cases of The Machine and you're fucked. Nobody is here to help you. I've tried, nobody gives a shit at Amazon. They have procedures, you know.

I blame 2FA. I think it's great if you don't have problems, but it's shit if you do, i.e. when you move, change phones, etc. in your life. Something in the process is missing, like a mandatory "next of kin" recovery option when enabling 2FA.

coleca 4 days ago 0 replies      
Some great advice in here about creating multiple accounts with a + sign in the email address. One thing I didn't see mentioned that is a standard best practice for AWS is to create a separate IAM account for your day to day usage and NEVER log into your root account (unless you need to open a billing support ticket) after creating the IAM account.

You will get a new login page and username to log in with and not need an email address specific account.
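A note on the "+ sign" trick mentioned above: it works because Gmail delivers plus-suffixed (and dot-varied) addresses to the same mailbox, while AWS treats each address string as a distinct account. A sketch of the mapping, assuming Gmail's documented behavior (the function name and sample addresses are my own):

```python
# Map a Gmail-style address to the mailbox that actually receives it:
# anything after "+" in the local part is ignored, and so are dots.
def gmail_mailbox(addr):
    local, _, domain = addr.partition("@")
    local = local.split("+", 1)[0].replace(".", "")
    return f"{local.lower()}@{domain.lower()}"

# Distinct strings for AWS sign-up, but one inbox for account emails:
for a in ("jim.gordon@gmail.com",
          "jim.gordon+aws-prod@gmail.com",
          "jim.gordon+aws-staging@gmail.com"):
    print(a, "->", gmail_mailbox(a))
```

All three variants above reach one inbox, so you can give each AWS account its own login email without managing extra mailboxes.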


mabbo 4 days ago 2 replies      
The moral of the story is that your AWS account and your personal amazon buying account should be separate.

As well, if you use Kindle Direct Publishing, are an Amazon Seller, work for Amazon Flex, or use the Amazon Affiliates program, each of these should also be on an independent Amazon account.

This way, problems on one won't affect problems on the other.

steven777400 4 days ago 0 replies      
Stories like this are frustrating and a symptom of the impact very large companies can have on multiple facets of our lives.

One possible approach is to keep accounts separate for personal and each business that you are involved with. For example, you probably have at least a separate personal checking account and business checking account. Likewise, it would make sense to have all accounts used for a given business to only be used for that business.

In addition to providing some safety against automated action, division of accounts provides a nice legal line, wherein if a court order requires you to disclose information, you can simply dump everything on the account without touching any of the other businesses or personal documents.

Stymieing this, of course, is companies (Facebook?) that have a policy of prohibiting a single real person from having distinct accounts.

luckystarr 4 days ago 1 reply      
Send an email to jeff@amazon.com

Jeff Bezos himself said if you are having problems you should mail him directly. Behind this address there is a full team investigating issues, and if it's something they want to handle, it will actually lead to improvements for all customers.

raesene6 4 days ago 1 reply      
Whilst this is more about AWS accounts, the ability of Amazon (and other cloud providers) to lock you out of an account with very limited recourse does present some other problems.

I've got quite a few ebooks I've bought via Amazon Kindle. If Amazon one day decided to delete/lock my account, I would lose access to all that content which I had "bought".

The more data people store in various cloud providers systems, the more the need for some kind of recovery mechanism / dispute resolution process becomes apparent.

Whilst it's relatively easy, in many cases, for more technical users to ensure they have backups of data that they control, less technical users could have a lot of their information tied up in these systems, and loss of it could be quite bad for them.

elliottcarlson 4 days ago 1 reply      
I have yet to receive a human response from Coinbase after contacting them 59 days ago... time to nag them again.
dredmorbius 4 days ago 0 replies      
"Who are you?" is the most expensive question in technology. No matter how you get it wrong, you're fucked.

Letting the wrong person in to an account? You're fucked.

Locking the right person out of an account? You're fucked.

Given that data can't be reversed as charges can, arresting an account may be slightly preferable, but it remains highly disrupting.

I've been through the experience a few times myself, largely with Google. In a fit of pique, the temporary account I created for myself (and through which I negotiated for recovery) was "The Real Slim Shady". Several of my G+ contacts noted that they could be pretty certain this was in fact me, though I'm a little frightened whichever way that works out.

(I did have other profiles through which I could announce my plight.)

I still think that the matter of identification, or rather, the more primary matters of authentication, authorisation, integrity, validation, payment authorisation, ownership, receipt rights, and similar associations, needs to be worked out.

I'm also strongly in favour of a system in which a physical token -- and I think a signet ring with a very-near-field chip and accompanying sensors on mobile, laptop, and desktop devices would be just about perfect -- should be part of that system.

Not an insertable device (as with Yubikey), or something requiring keying in a value (as with RSA fobs). But something which is worn (so: on you at all times), replaceable, destroyable, discardable, but also exceedingly difficult to duplicate or appropriate, or to read without intention on the part of the owner.


maxehmookau 4 days ago 1 reply      
We had exactly the same issue. We solved it by creating a new AWS account and using that to call support.

Once through, being persistent eventually (it took a week or so) saw us regain access to the account.

ordinaryperson 4 days ago 0 replies      
I spent a month and a half locked out of my AWS account due to 2FA issues and being caught in a Mobius strip with AWS support.

IMHO 2 problems with how AWS handles customer support (vs. other co's):

1. Different support rep every time = following the same script with every phone call. I'm sure assigning the first available rep speeds up response times, but if your problem can't be resolved in one phone call, it's like talking to someone with Alzheimer's: you're constantly re-answering the same 15 questions to a new person every time.

2. Customers are not allowed to directly interface with level 2+ support, only the nontechnical level 1 support can do that. Good luck getting them to communicate your technical issue correctly.

For example, every single support rep asked me if I had 2FA disabled for my Amazon retail account (I did). After re-answering this question with every single rep, they'd file tickets with the next level of support... only to have them rejected later because level 2+ said the problem was most likely that 2FA was enabled on my Amazon retail account (it was not). It was nearly impossible to bridge this disconnect.

Customer support is not easy to do well so I hesitate to widely impugn Amazon's efforts, but if you're an AWS customer and you have an issue that's an edge case outside of the scripts these support reps are using prepare to waste weeks or months of your life trying to resolve it.

iddqd 4 days ago 2 replies      
Might sound obvious in hindsight, but _always_ create separate AWS accounts for your different projects.
ikeboy 4 days ago 0 replies      
I've had the same issue with buying and selling accounts - my buying account got locked out, which prevented me from logging in to my selling account, with 50 pending orders at the time. Luckily I emailed jeff@amazon.com and got my main account unlocked within 2 days, and my buying account eventually came back a week or two later.

Seems like they still haven't fixed the underlying issue of bots locking accounts across services.

bryanthompson 4 days ago 0 replies      
I'm one account flag change or one password reset from being in this exact situation, and it's terrifying. I have been an Audible customer since before Amazon bought them. Somewhere in their account aggregation process, I've ended up with at least four distinct logins for amazon that all use the same email address.

One email address... And I use one password for Audible, one for Amazon, one for AWS, and one for Amazon Affiliates. If I do a password reset on any one of those services, my accounts all become bricks. I've made that mistake once and had to frantically call Audible support & climb through the support chains until someone could basically undo my password change.

During the process, they offered to try and deduplicate my accounts, but I think we're going to need a team of senior-level DBAs to sort this shit out.

chrisacky 4 days ago 0 replies      
My recommendation would be to email jbarr[at]amazon.com directly.

He will probably see this thread naturally over the course of the day anyway, if it gets popular....

serpix 4 days ago 0 replies      
Exact same issue. Have tried to resolve it for six months now. I've cancelled the credit card and they can f* themselves.
tjbiddle 4 days ago 1 reply      
I love AWS - but why, oh why, is a retail account linked to it? These need to be entirely separate.
jasonrhaas 4 days ago 0 replies      
Email jeff@amazon.com. This same thing happened to me a few months ago. I tried everything and was banging my head against the wall. After emailing jeff@amazon.com the issue was resolved in less than 24 hours.
Naushad 4 days ago 0 replies      
Expected from the earth's most customer-centric company. Here in India, my first order at Amazon was a fraud perpetrated by Amazon with its JV firm CloudTail. tl;dr: an Amazon-Fulfilled product was sold at a fake discount, and the MRP mentioned on the site was higher than the printed MRP.


akhatri_aus 4 days ago 0 replies      
I'm actually glad it's this way. It's difficult to take over an account.

If you can't keep your passwords in check and use 2FA, you ought to lose the account and make a new one. Some kind of consistency, e.g. 2FA or a password, is needed to keep it secure.

Paul-ish 4 days ago 0 replies      
They tell him to create a new account because they think the old one was compromised? I would be furious. I have bought a number of books on Kindle. If this happened to me, I would essentially lose all of the books I purchased. I'm not sure I would ever use Amazon again.
danjoc 4 days ago 0 replies      
>This should serve as a warning to anybody else who has an Amazon account that is shared between retail and AWS.

Wouldn't Amazon have locked your AWS account for unauthorized login as well? I don't see how the retail activity matters here.

wiradikusuma 4 days ago 0 replies      
Since we're on this topic: is a Google Apps account considered "separate"? E.g. I have jim_gordon@gmail.com and jim@gordon.com.

So I'm thinking of centralizing all non-business to gmail.com and the rest to gordon.com.

likelynew 4 days ago 0 replies      
Before bitcoin caught on with the general public, I became interested in what it takes to mine. To try it, I set up mining software on Heroku. I knew even then that it was not a very nice thing to do, but it was free for me. I wasn't looking for money or anything. Within a day, my account was banned permanently, without any notice. While it deserved to be banned, maxing out the CPU could also be caused by a simple mistake in a legitimate app. I have a strong suspicion that they can read the application code if there is a problem like this.
jasonrhaas 4 days ago 0 replies      
This happened to me as well. Email jeff@amazon.com. No joke, within 24 hours I was good to go. It worked for me a few months ago.
slosh 4 days ago 0 replies      
I was locked out from October till May while still being charged. Finally got a full refund. It's a nice savings plan, lol.
kbullaughey 4 days ago 2 replies      
Any references to how one should go about separating an Amazon retail account and an AWS account, if they are presently the same?
kyledrake 4 days ago 1 reply      
This is such an unfathomable problem to me. When I need someone at my DC, I just call them and they pick up the phone.
Thaxll 4 days ago 0 replies      
Always use MFA.
Crontab 4 days ago 0 replies      
This story reminds me of a friend of mine who has a second Amazon account just for his Kindle. He does that because he read a story where someone lost access to their Kindle books after Amazon closed their account.

Guess this idea applies to AWS as well.

crb002 4 days ago 0 replies      
Time to contact your solicitor chap.
markaius 4 days ago 3 replies      
FRex 4 days ago 0 replies      
I had two experiences getting locked out of accounts.

First was an old pre-Atlassian BitBucket one that just broke due to shenanigans with the Atlassian accounts integration or SOMETHING. But big props to them: I complained and got it fixed super quick. Just how it should be. Solid 4 out of 5 (5 is for companies that manage not to lock my account due to weird mergers; I'm even OK with the weird "don't use FRex, from now on log in with your email: smtsmt@gmail.com" I get on attempting 'FRex' + password).

Second is Twitter. FUCK THEM so hard. Excuse my French but I have no other words for how idiotic this situation is, it'd make a saint mad.

I made a Twitter account using my secondary/side gmail account that is phone-verified and 2FA'd with Google Auth for Android, verified the Twitter account by clicking the link in the email they sent, connected it to my phone-verified YouTube account, sent out the welcoming tweet they propose (something like "Hello Twitter!"; I think it was just a button press or a combo box to pick from, but I might be misremembering), and got banned for (exact wording may vary) 'suspicious/possibly automated activity' (mhmm... these huge botnets of phone-verified, 2FA'd gmail and YT accounts operating out of EU IPs... good job catching me, Twitter).

I could of course act like a good peon and provide them a phone number and be graciously allowed by the Twitter Heavenly Emperors to use my 10-minute-old account. I wrote to their support via some super idiotically hidden panel of theirs on Twitter, while still in my 'locked' account, and... I got an automated (!) email to my gmail (!!) telling me in steps how to just fuck off and enter my phone number (!!!) and to ask for help if I don't have it unlocked after providing a phone number and waiting a few minutes ('fucktastic' was the word of the day that day, seriously, that made my day). I wrote another one, telling them to shove it (in kinder words and with zero profanity, but firmly making it clear I'd not provide my number on an account I did literally nothing on and want to use for YouTube connectivity to a verified channel, created on a 2FA'd and phone-verified gmail account, and verified by clicking the link in the email) and got another bot email and no reply since then (about a month ago). Total human replies: 0. Bot replies: 3+ (see below). And I'm the one running an automated operation here.

And the cherry on top: I still get trending political BS tweets (because that's what's trending where I live every week) sent to the social tab in my gmail, and I can't disable it since my account is locked and throws me to the 'provide a number' screen that only has 'help' (blabbering about how I must be the one in the wrong here, but if I provide a phone number...) and 'log out' available. Good fucking riddance. I truly dodged a bullet by using my alternate gmail!

And all this on a service that has users that are outright bots, Nazis, terrorists (ISIS itself), hacktivists (you can argue some of it is a positive force for change or securing up but it's still highly illegal and often done just for lulz) and the like.

Of course I'm not going to give in to this BS. I can sort of understand Google/YouTube with their stuff, and it actually helped me once by requiring SMS verification when my kinda weak old password got cracked/guessed, but what Twitter did is downright dumb extortion ("gib phon number! gib, gib, don't write support requests! 1st gib!") or them being idiots (what did I do that's suspicious exactly... make a Twitter account in 2017?) and grossly neglecting their users (0 human replies, ever). Twitter fortunately would just be a nice-to-have for my side hobby of YTing, and I have the privilege of saying "fuck no" to them for this and shitting on them at every occasion, but if this were my main gmail it'd do me in for weeks before I recovered all of my stuff.

There are horror stories on YT too; see Millbee (let's plays) or I Hate Everything a.k.a. IHE (critique/shitposting), banned overnight (Millbee for a nip slip in an anime game, despite all the nudity, GONE SEXUAL, and borderline CP on YT going unpunished, and IHE for 'community guidelines' over a video of smashing a film DVD that was later hand-judged as not in violation). Both returned after a social shitstorm, but with no apology, no explanation, nothing. I bet if I were a high-up at some company and had a company account tweet what BS I went through, it'd all suddenly be fixed in a jiffy with no need for my phone number. But what are internet and real-life rank-and-file tech nobodies supposed to do..?

lightedman 4 days ago 2 replies      
Protip for everyone on this site.

Look at a company's stock price. If it's in the triple digits, avoid it, because they can and will screw you over.

the_common_man 4 days ago 1 reply      
If it's critical to your business, why not do your hosting elsewhere? Is it because you are locked in already? I am building a product that will help people migrate across cloud providers, so that's the context...
gorkonsine 5 days ago 4 replies      
This should be a good warning about what you can expect when relying on other companies for service.
Lessons from my first year of live coding on Twitch medium.com
447 points by ingve  4 days ago   101 comments top 21
jzelinskie 4 days ago 2 replies      
I've streamed myself programming on Twitch, and can echo some additional knowledge in addition to what's shared in the article:

Don't expect anyone from Twitch to randomly discover your stream and have any idea what you're doing. Programming anything that isn't a video game on Twitch will be totally unfamiliar to their primary demographics. That said, use Twitter or something else to BRING YOUR OWN AUDIENCE. Be prepared to stream for a few hours or else you will likely never build up traction in your chat.

As this post says, vocalizing your stream of consciousness is vital; think of it like pair programming with the chat. I try to engage the chat without getting totally nerd sniped and ending up off topic.

I think the best way to really kick the tires on Twitch programming content would be to stream podcasts and/or have a joint channel of shared programming content and have many different programmers participating either via a shared account or Twitch Teams[0].

[0]: https://twitchtips.com/twitch-teams/

Steeeve 4 days ago 1 reply      
So my kids watch people streaming non-stop. At first I had no opinion. Then it seemed like they did it way too much and I fought it. Then I started paying attention and found it interesting as well. For them it's gaming, game discovery, and seeing personalities they enjoy regularly.

Something like this... I would really enjoy. Bookmarking this page so I can have a look at all your streams. I would like to do this myself. It would be interesting to be able to share exactly what it is that I try to get done on a daily basis, and useful to show my colleagues and management my interests. I don't know how much they would pay attention, but even being able to point back to a video at a specific time to share something would be useful.

Unfortunately, I don't have a solo office, so figuring out a regular location to stream from will be a challenge, but that's neither here nor there. I can do it from home a few times without bothering anyone to figure out all the logistics before I actually try to do anything real.

noopkat 4 days ago 2 replies      
wow, author here. I didn't post this to Hacker News (I didn't even have an account on here until 30 seconds ago), but thanks for the thoughtful comments everyone! Is it weird if I address some of the questions in here or is that odd etiquette? :)
rctay89 4 days ago 6 replies      
I wonder if you could include all keystrokes like Ctrl-x in the screencast, maybe even a timeline of the keystrokes; I imagined it "running" along the screen like in Guitar Hero. Nevertheless, this is a great effort in distributing latent/implicit knowledge, which I think coding is heavy on; for example, how would a terminal user know that Ctrl-R is reverse search? (Don't get me started on finding out about going back to a previous match...) I remember how I found out about Tab by accident... While being ignorant of these features is not a barrier, it does slow you down and kill the joy.
avitzurel 4 days ago 2 replies      
I started streaming a few months back and I absolutely love it.

Some solid tips in here. OBS is a surprisingly good piece of software, but it can be a resource hog at times.

The hardest thing about it is to keep the schedule and be emotionally available when the stream comes on. I wrote about it here [1].

What I like the most is working through a project in stages on the stream. People can connect with the project and also contribute to it. Working on one-offs tutorial style did not really work for me.

I stream full stack content. From Node.js to Golang and even Devops. [2]

Having a screen that doesn't show the desktop when doing secret things is good. However, as mentioned here, I would definitely recommend a second screen. It changes the way you work a lot.

[1] https://fullstack.network/announcing-my-most-ambitious-strea...

[2] https://www.twitch.tv/kensodev

thinkingemote 4 days ago 4 replies      
There is a game developer on Twitch who uses his VODs (video replays) as a versioning system, as he doesn't use any actual version control. "Wait, what and why did I do this? Lemme look at yesterday's vod."

It surprisingly works quite well, and it's more entertaining for the viewers to see what bugs get introduced and how.

rmccoy6435 4 days ago 0 replies      
I have live streamed some coding before, and I must say most of the people who come into a coding channel are really nice people who ask really insightful questions or offer good solutions. It's like Mob Programming with the internet (or, as the author here says, an "MMOPP"), and it also makes me a better programmer, because before I even think about writing any code I'm thinking about how it will be perceived by someone peering over my shoulder (akin to the pro arguments for TDD).
bleair 4 days ago 1 reply      
Nice writeup. I think the people who are good at twitch streaming are charismatic and/or funny. Being entertaining is awesome, but isn't necessarily the same as being good at reasoning through problems or engineering good software.

As a watcher of such streams, be they game sessions or coding, I wonder how people don't see something and go crazy because they can't actively participate - the author does mention that some people will have contributed pull requests by the end of the session. :)

stale2002 4 days ago 2 replies      
Hey! I do this too on twitch.

I mostly stream myself doing algorithms though. Basically I do Google-style interview questions, and almost run my stream like I'm doing an actual interview.


The reality of the livecoding scene is that most "real life" coding is actually pretty boring to watch.

In order to be actually interesting, you have to be talking and explaining what you are doing the entire time. I am glad that other people out there are having success!

Kiro 4 days ago 3 replies      
I want to do this but I'm afraid of two things:

1. Show how horrible my code is.

2. Accidentally leaking sensitive stuff.

ioddly 4 days ago 1 reply      
This is an interesting topic to me, as I'm giving a talk with some coding in two weeks, and a major concern of mine has been making sure that what I'm doing is interesting and, more importantly, followable. Normal coding for me is just a flurry of vim activity.

Questions for anyone who does this or views these sorts of streams:

Do you find that people can follow what's happening in vim well enough? I've considered just using plain VSCode because I'm concerned jumping around too much as I do normally might be hard to follow.

Do you feel that this might be good interview practice as well, since the process of explaining code as we write it doesn't come naturally to some of us?

Any additional tips to make sure what I'm doing is comprehensible would be appreciated.

nightcracker 4 days ago 3 replies      
If you are programming, especially in a live studio environment, you should really invest in multiple monitors.
gallerdude 4 days ago 1 reply      
I'm really inarticulate, so my biggest fear would be people not being able to understand what I'm saying...
rmason 3 days ago 0 replies      
Been learning the serverless framework and found a couple of videos on YouTube from people live coding on Twitch.tv.

I'm not a gamer and would never have thought to look there for tutorials. Their search isn't very good, and I've found YouTube is the best place to find out who is using Twitch. There are guys with a thousand followers who don't show up in search! Still a very valuable outlet that I bet most coders don't know about.

mb_72 4 days ago 1 reply      
Very interesting. However, as comfortable as I am after 20+ years with some degree of coding in front of others (pair programming, fixing something with the boss breathing down my neck), I couldn't imagine doing this with strangers watching on; for me, at least, it's sharing too intimate or personal an experience. I also have a long-standing habit of using four-letter comments as attention-grabbing and transitory 'to-do' markers, which would likely be in breach of any streaming service's TOS.
alexashka 3 days ago 1 reply      
Suggestion: include a link to your twitch channel, in the first paragraph of this blog post.

It's good advertisement, and it lets people have a look at the results of your streaming. For all we know, your stream could be great, good, bad, or nonexistent. We don't know.

Without that information, it is hard to know what to make of the rest of the blog post.

mdmnk 4 days ago 1 reply      
I do this a lot at twitch.tv/mdmnk (less so recently due to losing a job and moving to a new state, then getting a new job and once again moving to a new state).

It was stressful at first... I had to deal with trolls (and also people I'm about 80% sure were hackers trying to get me to root my system live). After I got the hang of it all, though, it became a lot of fun.

I stream game development which is a hobby of mine (software engineer for the interwebs by day). Being that it is a hobby I am still learning. I've been able to make some internet friends, pick up techniques and learn more about C# thanks to twitch streaming. One day I would absolutely love to transition to full-time independent game development and do it all live on Twitch.

Another thing to note - although I have 14 years of professional experience, I used to get nervous coding in front of people, thanks to streaming on Twitch that went away -- I'm no longer afraid to fail or mistype. My overall confidence has improved. I strongly recommend live coding.

edit: I've also gotten MUCH better at talking through my code because to maintain an audience on twitch you have to talk and explain what you're doing almost the entire time.

ranman 3 days ago 0 replies      
I started the twitch.tv/aws stream and we've seen some success. I like these tips!

One issue I have is the balance between high quality broadcasts that don't have the opportunity for frequent audience input - or lower quality broadcasts that are more interactive.

pacaro 4 days ago 1 reply      
It would appear that the author is trying to post as noopkat and has been declared dead (new account?)
dsjoerg 4 days ago 0 replies      
Thanks for this! I've been thinking about doing the same thing, and this is super helpful.
manbearpigg 4 days ago 3 replies      
liveedu.tv (formerly LiveCoding) has a coding audience orders of magnitude larger than that on Twitch at this moment.
How To Go Viral By Using Fake Reddit Likes hack-pr.com
425 points by scribu  1 day ago   187 comments top 35
jawns 1 day ago 6 replies      
The entire stunt appeals to people's sense of moral outrage over businesses buying influence in the form of political donations. The reason people find it morally outrageous is because it corrupts the political process: politicians are supposed to represent their constituents, not the whims of the highest corporate bidder. Politicians who engage in this kind of quid pro quo behavior put selfish gain ahead of the good of the community.

Which is why I found it particularly galling that the PR firm relied on people's moral outrage about paying for influence to peddle their message ("tell them you like our initiative and are TIRED of politicians taking legal bribes") -- while doing exactly the same thing: paying for influence, in the form of purchased Reddit upvotes, which corrupts the upvote process and puts selfish gain ahead of the good of the Reddit community.

Normally, when PR firms use "hacking" to describe their techniques, they're talking about novel approaches to getting coverage, sort of like how "life hacks" are novel solutions to life's problems. But in this case, the firm is using "hacking" very literally -- infiltrating and taking control of a system by illicit means. They are black hats, and we should view them not only as morally bankrupt but also very dangerous.

I'm expecting that any day now they'll run a follow-up post, "How we hacked the U.S. media to help an anonymous powerful Russian client sway the presidential election."

0x00000000 1 day ago 6 replies      
People get extremely defensive on Reddit if you insinuate that this is common. But it really doesn't take a whole lot of skepticism to see through the more blatant ones.

Reddit is still a really great site when you unsubscribe from all default subs and any sub that has gone "critical shill" at about 100k or more subs.

illys 1 day ago 2 replies      
Is this article for real?

On a topic where one would expect citizens chasing the public good, we find marketers and advertisers working for a wealthy businessman, running a convictionless campaign to make him famous!

And the advertisers are so proud of it that they give all the details of their Reddit cheating and, worse, all the details of the absence of political conviction of their wanna-be-politician client.

Maybe the story is real, but I cannot believe the advertisers are dumb enough to be the ones writing this article.

I would sooner suspect someone related to Fiverr.com is behind it... [edit: or an enemy/competitor of the politician]

paultopia 1 day ago 9 replies      
Didn't they just massively throw their client under the bus? Not hard to find the guy's name, and now everyone knows:

- his big political stunt wasn't even his own idea, and

- he paid people a ton of money to fraudulently promote it.

What a way to burn your clients...

flashman 1 day ago 2 replies      
Look, I give them credit for coming clean to the public. And a lot of people use Reddit to promote their business, band or other brand (though they do it honestly, not by purchasing a boost). But the more well-known the technique of buying upvotes becomes, the worse the site will be for myself and other users.

Early paid upvotes are the seed for later organic upvotes. You don't even need to spend $200 to get them.

Simulacra 1 hour ago 0 replies      
I don't know if this is a hack, per se. I work in media and PR, and this is just one of those things you do. Pump up the issue, get eyeballs on the campaign, find a way to jazz the reporters, and off to the races. What may have made this fly is that the idea was already in the minds of the public, and the media. It's a LOT easier when that happens.
Haydos585x2 1 day ago 4 replies      
This was an interesting read. I'm not sure it's the best idea as a blog post, because I'm sure Reddit staff will catch on and then keep a much closer eye on this firm. I feel like journalists will be the same: if I received 10 emails about these guys, I'd be a bit skeptical that there was any actual interest.

As an aside, I wonder if they're using the same tactics here.

ricksharp 1 day ago 4 replies      
Dear Reddit, Maybe this idea would help slow down this type of abuse:

It seems like it would be easy enough and cheap enough to build a honeypot to identify accounts used for the purchased Reddit upvotes.

For example, Reddit could set up some honeypot posts to track paid upvote accounts.

They then go and pay these upvoters to upvote the honeypot post and identify the accounts used. (It would be helpful if the post was hidden so other people don't find it accidentally. In fact, it is possible to just use a tracking redirect page given only to the paid upvoters and use any post as the upvote "job" so it would be hard to identify by the upvoters.)

Then Reddit could ghost those identified accounts. Simply ignore their votes in the system, but don't tell the account owners, so the owners continue using the accounts without realizing the problem.

This would make it very difficult for the account owners to know which of their accounts were compromised.

Then, on any new posts where these upvoter accounts make up a majority of the votes, other accounts can be found. The other accounts that also upvoted such an article could represent other paid upvote accounts.

Track those other accounts and how often they appear beside the ghosted paid accounts, and voila, you have found more paid upvoters.

Keep doing this and it makes the paid upvoters ineffective because although they can work the system, their work is only being used to find other paid upvote accounts and also clients who are paying for paid upvotes.

After a time period, the clients could be sent a warning:

It has been detected that you are using paid upvote services which are against Reddit TOS. Please contact customer service so we can work together to remedy the problem. Failure to do so may cause your account to be banned and all your posts removed from Reddit. Have a good day.

Of course Reddit doesn't have to do this, and really anyone could do the same process to build a list of paid upvoter accounts and a list of articles and clients that use those services...

So what do you think: would this put a dent in the upvoters' effectiveness?
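(A rough sketch of the co-voting step described above; the data structures, names, and thresholds here are invented for illustration and are not anything Reddit actually runs.)

```python
from collections import Counter

def find_suspect_accounts(honeypot_voters, post_votes, min_overlap=3):
    """Flag accounts that repeatedly vote alongside known paid voters.

    honeypot_voters: accounts caught upvoting honeypot posts
    post_votes: post id -> set of accounts that upvoted it
    """
    co_votes = Counter()
    for voters in post_votes.values():
        paid = voters & honeypot_voters
        # only consider posts where ghosted paid voters are the majority
        if paid and len(paid) >= len(voters) / 2:
            for account in voters - honeypot_voters:
                co_votes[account] += 1
    # accounts seen beside paid voters on several posts become suspects too
    return {a for a, n in co_votes.items() if n >= min_overlap}

post_votes = {
    "post1": {"paid1", "paid2", "sock1"},
    "post2": {"paid1", "paid3", "sock1"},
    "post3": {"paid2", "paid3", "sock1", "organic1"},
    "post4": {"organic1", "organic2", "organic3"},
}
suspects = find_suspect_accounts({"paid1", "paid2", "paid3"}, post_votes)
# suspects -> {"sock1"}; the organic voters are left alone
```

Each round, the newly found suspects would be added back into the ghosted set, which is the iterative part of the scheme.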

minimaxir 1 day ago 3 replies      
> This gave the campaign the boost we needed and it was all the direct result of one thing: hustle .

Deliberately breaking the rules that exist for a good reason isn't "hustle." It's just cheating.

scotchio 1 day ago 5 replies      
Speaking of fake Reddit stuff...

Reddit has a SERIOUS political astroturfing problem.

Some would argue it swayed the US election. Some would argue Reddit is bought and sold.

The r/popular and r/all experience is completely different. When commenting, you don't even know if you're talking to a real person or not.

Does anyone know a forum similar to this or Reddit where it's ALL verified accounts?

imron 1 day ago 4 replies      
> these are the types of things we do several times a day now

And this type of marketing posing as news, pushed to the front page by bots and fake accounts is precisely why /r/politics is now a shitbed.

Thanks Hack-PR.

gehsty 1 day ago 0 replies      
Maybe this is all just another 'viral' advertisement for a guy selling upvotes on Fiverr?
soared 19 hours ago 0 replies      
Post was deleted; here's a cached link: mirror


visarga 1 day ago 0 replies      
The OP is using techniques that used to work on the wild wild web 10-15 years ago. I thought that by now everything had been normalized, or at least that serious people didn't use spamming techniques to launch a business.

If all these bought upvotes come from new accounts, or from the same few IP ranges, or have a low ratio of comments to upvotes, or interact only among themselves and not with the larger community, then Reddit can detect them and turn them into ghost accounts.

Reddit needs to open up a Kaggle challenge for detecting rented upvotes and other abuses, use the data it has already shared with the AI community (the reddit dataset) to detect such attempts as they happen.
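(The signals listed above could be combined into a crude rule-based baseline before any learned model; the field names and thresholds below are made up for illustration, not a real detection system.)

```python
def looks_rented(account):
    """Score an account against a few crude heuristics; thresholds are invented."""
    score = 0
    if account["age_days"] < 7:  # brand-new account
        score += 1
    if account["comments"] < account["upvotes_cast"] / 50:  # votes a lot, never talks
        score += 1
    if account["shared_ip_accounts"] > 10:  # many accounts behind one IP range
        score += 1
    return score >= 2

flagged = looks_rented(
    {"age_days": 2, "comments": 0, "upvotes_cast": 500, "shared_ip_accounts": 40}
)
# flagged -> True
```

A real system would learn such weights from labeled data rather than hard-code them, which is exactly what an open challenge on the existing Reddit dataset could provide.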

rmc 1 day ago 0 replies      
The title of the article is "How we hacked reddit...", this submission currently says "How to go viral by using fake reddit likes", and is more accurate. They didn't hack reddit, they bought upvotes.
JonDav 1 day ago 1 reply      
oDot 1 day ago 1 reply      
Comments here are missing one crucial thing -- it's a shame that success on Reddit depends so much on initial upvotes.
joelthelion 1 day ago 0 replies      
Fake likes only explain part of this initiative's success. This would never have worked with an idea that doesn't appeal to redditors.
lsmarigo 1 day ago 1 reply      
Everyone does this, including the reddit founders themselves in the early days of the site (https://motherboard.vice.com/en_us/article/z4444w/how-reddit...).
seoseokho 16 hours ago 1 reply      
Anybody have a copy of this? link is 404 now
danso 1 day ago 0 replies      
I don't get the impression that there's any substantial vote monitoring, so it surprises me that it even cost money to do this kind of astroturfing. How hard would it be to set up and maintain a dozen Reddit accounts and spread them over a VPN service? 10 minutes of initial setup, and not more than a minute a day of occasional innocuous activity on those accounts. When a campaign rolls out, have the accounts work in concert.

Sure, it might not be as 100% successful as Fiverr (though I imagine it's fairly easy for Reddit to ad-hoc identify voting blocs if something was known to be bought). But you could employ additional optimization techniques, such as the one used by most high-karma users (e.g. Gallowboob): if a post fails to hit critical upvote mass, then delete and resubmit later in the day.

To give you an idea of how things seem to be relatively unmonitored until users flag it, there's the story of Unidan:


And as a more recent, obscure example, there was the mystery of why the mod of r/evilbuildings had something like 499 of the 500 most upvoted posts in his own subreddit. The math was so laughably in favor of manipulation but a Reddit admin, using whatever shit tools they have to investigate this, acquitted the mod:


Follow up:


The details of how this mod was able to boost his own posts without being called out for vote manipulation are too banal to explain in detail (basically, he would shadow-delete other popular posts so that his would get picked up by the Reddit front page, and then undelete the popular posts before anyone noticed). But the fact that a Reddit admin (i.e. a paid employee) thought that the evilbuildings mod always having the top post in his own forum for 6 months straight was just a coincidence, and/or because that mod was just an amazing content submitter, speaks volumes about how uncreative the Reddit admins might be in detecting fraud.

Edit: if you are interested in subreddit drama details, here's a thread that combines the evilbuildings drama and Gallowboob: https://www.reddit.com/r/SubredditDrama/comments/6d3syc/evil...

If this is the kind of effort users put toward imaginary points (though arguably raising karma is part of Gallowboob's professional work), I'm nervous to think about the schemes that PR firms will construct when they realize the easy return on investment offered by Reddit popularity.

rnprince 1 day ago 2 replies      
If you're into this kind of thing, I enjoyed reading "Trust Me I'm Lying" by Ryan Holiday.
RileyJames 1 day ago 1 reply      
It seems that everyone is aware that likes, follows, upvotes, etc can all be bought, and therefore these numbers are manipulated regularly. But does anyone care to see the problem solved?
blackice 1 day ago 0 replies      
Reddit should really invest in proxy/VPN/bot detection, because I'm willing to bet the people on Fiverr are using large proxy networks to achieve this.
meant2be 1 day ago 0 replies      
What would be the proper way of gaining traction on Reddit? Is that even possible anymore? I mean, if the game is already rigged, what chance do honest businesses stand in this environment? I don't have an account on Reddit (been there for what, 7 years now?) and I always wondered how somebody goes viral and gets traction; now this stuff makes me think everything is basically done and paid for.
dchuk 1 day ago 2 replies      
So I'm working on a side project that basically has an HN/Reddit interface. One monetization idea I had for down the road is basically a legitimate means to boost certain posts for certain periods of time, giving them prominence on the site in a clearly labeled area for such purposes.

Is this something people would be interested in?

ameister14 1 day ago 0 replies      
While I understand people finding this distasteful, it's exactly the kind of rule-breaking that they should be doing. Cheating? Airbnb broke Craigslist's rules to good effect, among others.

It's naughty without being outright evil. When did that become a bad thing on HN?

Doubletough 23 hours ago 0 replies      
Looks like they've been shamed into submission and have pulled the article. It was getting hammered with comments earlier. Well deserved ones.
silimike 1 day ago 1 reply      
This story brought to you by Fiverr.com
visarga 1 day ago 1 reply      
What I'm worried about is that the Reddit database is used by AI for learning dialogue, and this kind of spamming just pollutes the dataset.
known 1 day ago 0 replies      

"Media does not spread free opinion; It generates opinion" --Oswald,1918 https://en.wikipedia.org/wiki/Decline_of_the_West

paulpauper 1 day ago 0 replies      
How about all the times this failed?
HearMeRoar 1 day ago 1 reply      
>How we hacked Reddit

Really? Hacked?

notananthem 1 day ago 0 replies      
That is the least hacky and also least efficient way to do that, and also make yourself look like a total goober.
logicallee 1 day ago 1 reply      
A lot of people don't seem to realize that being the top link on r/politics is a public good that's available to everyone. Just because someone pays $200 to make some politician's publicity stunt that nobody cares about be the top link there (I mean really, nobody cared - the idea of a law forcing politicians to walk around wearing the logos of their top ten donors is beyond silly), doesn't mean that everyone else can't also be the top link there at the same time, with other publicity stunts nobody cares about!

The great thing about being a top link is everyone can do it at the same time. It doesn't corrupt the process at all or waste anyone's time. Everyone can benefit from it and it doesn't make things worse for anyone.

For example imagine if all the top links on hacker news were just corporate advertisements disguised as stories. Would it be a worse place or cause any of us harm? Of course not.

Hire a former SoundClouder docs.google.com
324 points by FHMS  4 days ago   111 comments top 53
jaxelsson 0 minutes ago 0 replies      
If anyone is interested in the future of Network Automation and Orchestration, Itential is a start up in Atlanta looking to make big waves. Check us out. http://www.itential.com/careers/
kfk 4 days ago 2 replies      
OK, completely out of topic, but it's interesting to notice how broadly they use the term "data science". This goes to confirm my thinking that data science is becoming more and more an "all around data". The various facets of data science I am seeing in this spreadsheet:

- BI
- Data visualization
- Data engineers
- Statistician
- Warehouse
- SQL
- Python-fu
- Marketing

I look at it and I think it's great proof that data science is not only PhD territory; there are so many ways to bring value to the field.

ILIKEPONIES 4 days ago 0 replies      
One of the founders of Underdog.io (https://underdog.io) here. Almost goes without saying, but we'd be happy to help any SoundClouders looking for something in NYC or SF (our two main markets). We work with ~350 of the best tech companies in those locations.


It takes 60 seconds to apply and we can get you fast-tracked to an upcoming batch. Can also email us with questions -- support[at]underdog[dot]io.

emeraldd 4 days ago 5 replies      
Two questions...

1. This is a somewhat odd spreadsheet to find semi-randomly posted to HN. Can anyone speak to its provenance?

2. Is something, not so public, going on at SoundCloud, or is this just normal turnover for a company of their size?

georgecalm 4 days ago 1 reply      
The top of the spreadsheet says, "For companies currently hiring, please add your details to the second tab", but I'm seeing the spreadsheet in read-only mode. Can the creator of this spreadsheet comment on how companies can edit it, please (or whether it's permanently locked now)?
Jdam 4 days ago 2 replies      
As I'm living in Berlin, where most of the layoff is happening, and knowing that SoundCloud is known for good talent, I'm wondering what impact this will have on the job market here. I track the number of weekly recruiter requests I get on LinkedIn and such; curious whether this will have an impact.
tnbeatty 4 days ago 0 replies      
Definitely a shame - we're big fans of SoundCloud at Iris.

Please add IrisVR to the list. We're building AR / VR collaboration tools for 3D design pros, so lots of interesting challenges here for the right designers and engineers.

We're hiring a DevOps engineer (kubernetes, google, terraform, golang) and a Software Engineer for our desktop app (web/cloud technologies, electron, javascript, node) and would love to talk with anyone in NYC or Boston.

Terretta 4 days ago 0 replies      
If anyone is looking to maintain an entire team (especially in the areas listed in my profile, but any good dev+cloud team actually), I will hire you as an entire team.

Unfortunately, not many names on this list in NYC which is where we'd prefer, but I can also do L.A., Phoenix/Tempe/Chandler, S.F., San Diego, Charlotte, or other options.

Of course, happy to hire individuals as well.

My username at gmail.

gangstertim1 4 days ago 0 replies      
Please add Squarespace! We're hiring all kinds of technical and non-technical talent in NYC and we're working on building a more beautiful web. Interested parties can reach out directly to our lead tech recruiter Kelly Jeanes (kjeanes@squarespace.com) or apply here: http://grnh.se/dn27gt1

Here's all of your columns:

Company: Squarespace

Role: Creative Developer / SRE Manager / Android Eng / Frontend Eng / Product Backend Eng / Analytics Eng / App Infrastructure Eng / Data Pipelines Eng / Core Services Eng / Security Eng / SRE Eng / Recruiters / PMs / Designers

Location: NYC

Remote: No

All job listings: http://grnh.se/dn27gt1

Contact Kelly (kjeanes@squarespace.com)

wc- 4 days ago 0 replies      
Copying my post from the who's hiring thread to here. Always been a fan of SC and clearly there were a ton of talented people working there. If you are anywhere along the spectrum of data scientist to data engineer and interested in python and go, shoot me an email.

Short term contract-based jobs are available if you are just looking for something to generate a little income while you explore the market.

Job posting:

Exigent Capital | Chicago | Data Scientists, Data Engineers | Full-time, Contract, Part-time | Onsite or Remote

Market Making / HFT group focused on cryptocurrency exchanges. Looking for quant / data scientists to find new edges in the market and talented Go/Python engineers to expand the trading platform.

Contact wes+hnsc ||at|| exigentcapital.com

_kyran 4 days ago 0 replies      
If anyone from SoundCloud is passionate about podcasts get in touch at hello at zencast dot fm (https://zencast.fm)

I'm currently in Berlin if anyone would like to grab coffee.

strife25 4 days ago 0 replies      
You can add Sprout Social in Chicago to this list (assuming OP has write access).

Sprout Social | Chicago | ONSITE | https://sproutsocial.com/careers/open-positions

We are hiring people in all engineering and product roles. We are looking for Python engineers, React.js engineers, SREs, QA, Mobile (iOS, Android, and React Native), etc.

Also, if you would like to add our resident recruitment lead to the list on the "recruiters" tab, here is their contact info (and info for the columns in the sheet):

Amy Wolcott | amy@sproutsocial.com | Sprout Social | Talent Lead

DustinOfDenver 4 days ago 0 replies      
Wow... based on this list - Companies might consider hiring in Denver/Boulder... because SF looks like a mad house.
markivraknatap 4 days ago 0 replies      
Adobe Ad Cloud is hiring !

Want to help us build the best data engineering platform in the industry, one that handles billions of ad events every day, including a vast distributed system that makes 250,000+ decisions per second? Join us to build the first end-to-end platform for managing advertising across traditional TV and digital formats, simplifying what has been a complex and fragmented process for the world's biggest brands. We're hiring for senior & lead dev and QE roles.




adamstober 3 days ago 0 replies      
We'll triage at http://www.layoff-aid.com starting with SF, where we launched last week.

We've been building a solid list of hiring companies for several months already. Our mission is to help people affected by layoffs and we're now ready with a network of local companies specifically hiring SF tech talent affected by layoffs.

May expand to NYC pretty soon given demand. NYC/SF/Boston hiring companies can sign up now to get people from the next SoundCloud, Etsy, Twitter, etc.

taekyunTTD 4 days ago 0 replies      
Very sorry to hear about the news SoundClouders. I was an avid user and loved the site, especially how you could comment on specific parts of the songs.

We've got a bunch of engineering positions (everything from entry level to Sr level) at The Trade Desk in a variety of different locations such as London, Sydney, San Francisco, New York, Boulder, San Jose, Aliso Viejo, Ventura and Bellevue.

Here's a list of our open jobs: https://www.thetradedesk.com/join-us/open-positions

Please feel free to email me directly as well if you'd just like to hear more. taek.yun@thetradedesk.com

md224 4 days ago 1 reply      
Context? Were there layoffs at SC?
foklepoint 4 days ago 0 replies      
If anyone on this list is interested in working at Reddit, we're hiring a ton of people in Engineering and other departments in SF! Check out our jobs page: https://about.reddit.com/careers/#jobs-16253

If anything seems interesting, you can email me at saurabh.sharma at reddit.com

From an engineer's perspective, it's a wonderful opportunity to be part of a company that's only now building out most of their tech stack. We get quite a lot of traffic and it's very interesting to work at this scale.

nunez 4 days ago 0 replies      
If any of you on that list are interested in hacking on docker, kubernetes/docker swarm, aws/azure/google cloud or infrastructure provisioning tools (terraform, cloudformation, cm tools like chef and puppet) and making scalable and testable infrastructure a reality at really large companies, email me at carlos@contino.io.

I've been working with some amazing folks on some really interesting projects, and we have clients all over the world. We are located in NYC, London and Melbourne but hire from anywhere (I live in Dallas.)

jobso4me 3 days ago 0 replies      
If any back-end and mobile engineers from SoundCloud are looking to make a transition into healthcare, do get in touch: jobs[at]outcomes4me[dot]com, or visit our landing page at outcomes4me.com to see our open positions.

Short term contract-based jobs can also be arranged if you are just looking for something to generate a little income while you explore the market.

Fzzr 4 days ago 3 replies      
Interesting, was this set up by someone from soundcloud or independently?
FHMS 4 days ago 0 replies      
Hi, Markus, Founder of DataRevenue here - we can't open the original sheet for editing as that would put the employees data at risk. So here's another sheet for companies to post and edit directly - https://docs.google.com/spreadsheets/d/1JDx5acZPvdmmSeMyEr77...
microtherion 4 days ago 0 replies      
Seeing so many presumably youngish and modern companies listed, I was struck by how many of them offered no remote work. Is remote work an idea whose time still hasn't come for many companies, or is it actually declining? (Yahoo! was one of the companies listed, and they famously abolished remote work some years ago.)
MattHeard 4 days ago 1 reply      
The spreadsheet columns are not matching the right companies.

For example, the link for Modomoto points to jobs at Oberlo. The link for Modomoto jobs is under Zalando.

philo_employee 4 days ago 0 replies      
We're building out the future of TV at Philo in SF. There are lots of hard/fun engineering challenges to take on all across the stack. We're hiring core engineering members who will get to launch our product into the market and grow our user base by an order of magnitude.

If you're curious to learn more please reach out to me at maxg at (company name) dot com

davidmckayv 4 days ago 0 replies      
CTO of PolicyGenius (NYC) here. We'd love to have former SoundClouders come join. Not just for engineering, we have a bunch of openings across the business. You can also email recruiting@policygenius.com.


akashizzle 4 days ago 0 replies      
Please add Gigster to the list of companies hiring (www.gigster.com). We're hiring for Sales, Customer Success, Engineering, Product and Ops. Open roles are here - https://jobs.lever.co/gigster.

Roles are in SF and in some cases remote.

joshrotenberg 4 days ago 0 replies      
Capital One is hiring for all kinds of roles in SF, New York, McLean and Richmond, VA, and other places as well: http://rolp.co/oZ3Nb and/or you can email me with questions at josh.rotenberg at capitalone.com
mrs233 4 days ago 0 replies      
CB Insights is hiring across the board in Engineering, Marketing, BD, Research and more in the NYC office: https://www.cbinsights.com/jobs or you can email me directly at: mchang at cbinsights.com
The_Sponge 4 days ago 0 replies      
Credit Karma | San Francisco, Charlotte, LA | Full Time, ONSITE | https://creditkarma.com/careers

If you're a Scala engineer in SF who knows Finagle, we're hiring. Plus, pretty much everything else.

dna_polymerase 4 days ago 0 replies      
Sad for Berlin.

But what great times we live in actually, a community caring for people they don't even know.

seertaak 4 days ago 0 replies      
C++ devs - zenAud.io GmbH wants to meet you. We make the world's first AU/VST hosting and MIDI capable sequenced live looper.



steeve 4 days ago 0 replies      
Can you add Zenly please?

We work with Go, Go Mobile, C++, gRPC, Maps, UE4/Unity/Vulkan, distributed systems, low level systems development, Spark, iOS, Android.

We're located in Paris, France.

Reach out to me directly at steeve at zen dot ly.

DROSA 4 days ago 0 replies      
You can ADD LADDERS in NYC to the list of companies willing to hire! JavaScript/React front end, Java/Clojure backend! Also looking for a Technical Product Manager and account managers.

I am an INTERNAL technical recruiter, so please feel free to reach out: Drosa@theladders.com

Douglas Rosa 646-307-7522

maltelandwehr 4 days ago 0 replies      
We (Searchmetrics, http://www.searchmetrics.com/ ) are happy to hire Data Scientists, Frontend Developers, and Backend Developers in Berlin :-)
sramanan 4 days ago 0 replies      
Please add MakerSights, Inc to the list of companies hiring. (https://www.makersights.com)

- Hiring engineering, UI/product design (in SF, will consider remote).

alexellisuk 4 days ago 1 reply      
Most of these people are listed as still working at SoundCloud - what is the "explain it like I'm five" about this document? Did all of the employees just get laid off and I missed that news?
anotherhue 4 days ago 0 replies      
Jet.com are hiring in Dublin for all you Berliners


edmack 4 days ago 0 replies      
Please add to the list of companies hiring :)

SketchDeck, Full stack engineer, Sunnyvale California, No remote, Join our little dev team to grow our design marketplace - happy to talk: david@sketchdeck.com

jeffmanu 4 days ago 0 replies      
Is there a way to add my company to the list of recruiters, or to export it as a .xls/.csv?

GrowingStartup.com would love to work with some of these people. I'm glad to see how supportive everyone is.

ArlenBales 4 days ago 1 reply      
Do the people listed know that their email addresses, possibly personal, are listed on a public spreadsheet with massive visibility?
rdslw 4 days ago 2 replies      
BTW what are notice periods for Berlin and London based SoundClouders?

For SF it's 2 weeks I assume?

acobster 4 days ago 0 replies      
Wow, I had no idea Phil Collins worked for SoundCloud! :D
knowaveragejoe 4 days ago 0 replies      
What a shame, SoundCloud is/was a great service.
retox 4 days ago 0 replies      
Gib job.
robbomacrae 4 days ago 2 replies      
You can add SoundHound (the confusingly similarly named and colored startup with not unrelated lines of work) to the list of companies willing to hire!

I'll copy paste my usual Who's Hiring message here:

SoundHound | All roles available in Santa Clara/San Francisco. Engineering roles only in Toronto. NLP only in Sacramento/Baltimore | ONSITE - http://soundhound.com/careers

I'm an NLU / Data Engineer at SH. We've just raised $75 Million from NVIDIA, Samsung, KP and others to take on Amazon and Google in AI with our "Collective AI" Houndify platform. Our open Houndify platform has the world's fastest speech recognition and most sophisticated natural language understanding. We've had a lot of interest from partners and there are a LOT of really interesting projects being worked on, requiring complex problem solvers who can work well independently. Things have come a long way since our leaked demo video took top spot on Reddit a year ago!

https://www.reddit.com/r/videos/comments/38fdyl/this_is_insa...

https://www.houndify.com/

http://app.jobvite.com/m?3uCiQhw0

If you have any questions you'd like to ask an engineer here, just email me: rob at (company name) dot com. I respond to all emails but please no agents!

markovp 4 days ago 1 reply      
You recently wrote:

> It is hard to determine which is needed more, as a home has costs, loosing a job leads usually loosing a home.

It sucks to lose a job. These folks lost their jobs suddenly, and without having done anything wrong.

Simultaneously, the industry struggles to find qualified talent. Somebody put two and two together, and realized there's a quick, easy, and inexpensive way to slightly help reduce the pain of a layoff.

I'm sure these folks would've found jobs anyway, but this might help a few find it faster.

Frankly, I find it very disturbing that you understand the pain of job loss, but mock people who are reducing the pain. It says a lot about you; but those people are good people. Better than you'll ever be.

carlsverre 4 days ago 0 replies      
MemSQL here! We have open positions for a number of roles! You can see the full list here: http://www.memsql.com/jobs/

Some highlights from the list:

* Javascript engineers who want to work on React.js based single page browser applications. We are pushing the boundaries with what you can do in the browser. From running the entire binary protocol on the client to rendering an amazing user experience, everyone is able to find something they love to do at MemSQL.

* SREs with experience in both cloud and on-premises environments - come work with a team of experienced SREs managing infrastructure spanning multiple international locations.

* Software engineers with a focus on systems - MemSQL delivers an advanced, enterprise-ready database to customers worldwide. Come work on hard problems like code generation, query optimization, vectorized execution, and more.

Please add MemSQL to the list, and for people who want to learn more feel free to email me directly or apply on our website. carl at (company name) dot com. I will respond to all direct (non-agent) communication.

expertentipp 4 days ago 1 reply      
bitL 4 days ago 1 reply      
I am pretty sure all ex-soundclouders are going to be super happy when low-ball offers start coming to them. They should consider their next job a temporary one and immediately start searching for another, better one...

EDIT: for downvoters, you probably never worked for Berlin startups, right?

kodfodrasz 4 days ago 8 replies      
I may be out of touch with current trends, but this is very disturbing to me.

Putting this data together strikes me as somewhat desperate, to be honest. Can't these people find jobs themselves? Must they be sold as a bundle, or what is the idea behind it?

I believe the startup bubble is starting to slowly drain, and we'll see more and more of this.

JSavage-Toptal 4 days ago 1 reply      
Take a moment to check out Toptal opportunities! https://www.toptal.com/careers

We're a 100% remote company that allows people to work from anywhere in the world. Our people make up the top 3% of freelance talent. The people at Toptal are truly some of the smartest, most interesting people you will come across in your career.

We currently have a number of open Core Team roles in addition to freelance opportunities: Client Experience roles, Back-end Developer, Content Strategist, Sales Recruiter, Client Partner, Engineering Manager, Desktop Developer, Front-end Developer.
Linus: I no longer feel like I can trust init to do the sane thing lkml.org
348 points by cnst  2 days ago   335 comments top 20
floatingatoll 2 days ago 2 replies      
Is there any useful comment anywhere on this post that talks about the technical meat of Linus' post here? He explicitly goes out of his way not to go into a long-form argument about systemd, and then HN dumps something like 150 points of comment karma into systemd.

EDIT: Here are links to comments discussing the actual content of his email, in case anyone else came here for that and left wanting.


https://news.ycombinator.com/item?id=14733973 -> https://plus.google.com/+TheodoreTso/posts/EJrEuxjR65J

EDIT: I read all the comments on this post, and I am disgusted to report that (as of this EDIT, nitpickers) the 2 above-linked comments are the sum total of actual replies to the content Linus was trying to discuss. I hope LKML had more self-control than HN.

EDIT: 14733558, it was really excellent of you to try and defuse things, too.

dredmorbius 2 days ago 1 reply      
Ted Ts'o, the Linux ext2/3/4 filesystem developer, posted this two days ago at G+ as well. Interesting discussion, particularly about 1) systemd's bad programming taste and complexity, and 2) Lennart Poettering's inability to admit when he is wrong.

"For me a lot of "good taste" is about adding complexity when it's needed, and avoiding it when it's not needed. And if you do have complexity, making sure you have the tools so you can debug things when they break. And for me one of the things that I don't like about systemd is that it has added a lot of complexity, and when it breaks, trying to debug it can be almost impossible."


"Heck, I don't even I want to file bug reports, just so I can get abusive messages from Lennart. At least when Linus flames me, it's because I did something wrong which hurts users and which I d*mned well should have known better, given my years of experience in the community. Lennart just flames you because you're wrong, and he's right. By definition."


"The high bit, in my opinion, is "not being able to admit you are wrong". If someone (such as Lennart) is always right, then it's impossible to have a technical discussion in a post-mortem to avoid similar problems in the future. In a company, if there are personnel issues like that, you can escalate to the person's manager, or use other mechanisms. In the open source world, all you can do is route around the damage. Whether you call the inability for someone to admit that he or she has contributed to the problem a "lie" or just that they were "mistaken" is really beside the point as far as I'm concerned."


throw2016 2 days ago 3 replies      
Making too many assumptions about the user environment is not smart. Then you start putting constraints and becoming defensive when these assumptions are questioned. This is becoming a pattern for systemd as is blaming everyone else. [1]

Those who need the complexity or some features should adopt the technical debt, there is zero rationale to enable it by default and impose it on everyone.

For instance, if you have too many network interfaces and naming is a problem, then switch on predictable network names (which, btw, are anything but predictable for human beings) and let the 97% who don't carry on.

Are 12-digit random names really more predictable or better for human beings than eth0 or wlan0? If there is a problem, predictable network names are not the solution.

Similarly if you need binary logging and an audit trail then turn on binary logging but don't impose it on everyone else.

Its time to move beyond silly accusations of 'hate' etc every time there is a discussion about systemd because at the moment it just serves to deflect criticism and avoid accountability for questionable decisions.

[1] https://github.com/systemd/systemd/issues/2407

zdw 2 days ago 1 reply      
systemd is a mess. A while ago I tried writing a script for CD ripping that ran when udev detected a music CD in the optical drive, ripped the CD, then ejected the disk.

This kept failing. It turned out to be systemd having a hardcoded timeout of something like 30 seconds on any process launched by udev, that couldn't be changed without recompiling it from source, or having it fork off some other process group that lacked this limitation.

From what I can tell, systemd has some of the worst sort of developer myopia, where whole swaths of legitimate use cases are actively destroyed.

theonemind 2 days ago 2 replies      
Linus has the proverbial moral authority to bootstrap an init system and the technical skill to pull it off. Amusingly, this strikes me as something like the chairman of the Federal Reserve hinting that he thinks interest rates are too high. Hopefully, the systemd developers will take notice and clean up some of their act. (Not to insinuate anything, but they have widely known problems in playing nice with other projects, taking feedback, etc.)
owaislone 2 days ago 0 replies      
They [systemd] shouldn't piss him off too much. He will sit down for a week, come out with his own init system which will take over the world. /s but not entirely :)
ausjke 2 days ago 6 replies      
Like what happened to git in the past, will Linus write a new init himself this time?
gjvc 2 days ago 0 replies      
The quote here in the title is missing the vital context -- the full quote is

"And yes, a large part of this may be that I no longer feel like I can trust 'init' to do the sane thing. You all presumably know why."

dsfyu404ed 2 days ago 0 replies      
This reminds me of a relevant joke.

Systemd walks into a bar. It shoots the bar owner and proclaims itself to be the new owner. It then turns the bar into a farm, brewery and distillery, opens a casino, a freight rail line and an investment bank. Oh, and there's an init system thrown in there somewhere too.

testcross 2 days ago 5 replies      
The internals of systemd are maybe not the best. But the UI is definitely easier to grasp than what existed previously.
qb45 2 days ago 0 replies      
Just wanted to say that the issue is about automatically copying rlimits from the init process to all processes executing setuid binaries - apparently Linus thinks that init could possibly apply some rlimits to itself for its own purposes which wouldn't necessarily be suitable for copying to any other random process.

IMO this idea with copying is in itself even less sane than init using rlimits on PID 1.
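For readers unfamiliar with rlimits: they are per-process resource limits that children inherit across fork/exec, which is why limits applied to PID 1 propagate down, and why copying init's limits onto setuid processes is the contested behavior here. A quick look from Python, using the Unix-only `resource` module (the specific limit chosen is just for illustration):

```python
import os
import resource

# Each limit is a (soft, hard) pair; a process may raise its soft limit
# up to the hard limit, and children inherit both across fork/exec.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"pid {os.getpid()}: open-file limit soft={soft} hard={hard}")

# Lowering our own soft limit affects this process and every child we
# spawn afterwards -- the inheritance the discussion above hinges on.
# Guard against an unlimited soft value before taking min().
new_soft = 256 if soft == resource.RLIM_INFINITY else min(soft, 256)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

Run as root (or as init), the same mechanism would silently constrain everything below it, which is the scenario being debated.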

eleitl 2 days ago 2 replies      
Gee, such a surprise. Why didn't you speak out about it earlier, then?
jwilk 2 days ago 0 replies      
For those who can't stand lkml.org's UI, here's the mail on spinics.net:


yuhong 2 days ago 0 replies      
I wonder why Poettering chose Kay Sievers to work on systemd in the first place.
Rjevski 2 days ago 10 replies      
I would assume this is about systemd. Personally my experience with it has been great - the occasional screw-ups (they did happen - something related to dbus made all systemctl commands fail until a reboot) were nothing compared to the time I gained not having to worry about prehistoric initscripts.

So while it isn't perfect, it's IMO a step in the right direction, if only by lowering the barrier to entry to actually using and managing the thing.

If you are worried about stability then you shouldn't rely on a single machine anyway. Personally I manage systemd screw-ups (though I haven't had one in production yet - all of them were on my personal machines while monkeying around with Archlinux) just like any other hardware failure - by making sure my app/service stays available even if I yank the power cord from the machine.

TheAceOfHearts 2 days ago 15 replies      
Here's something I don't understand... Why do so many people hate systemd but not macOS's launchd [0]? Don't they serve similar purposes? I've been digging into macOS for a few months now, and I'm pretty happy with their defaults.

EDIT: I don't know why I'm being downvoted for asking a genuine question. I'm not a linux expert, so I'd love to hear your criticisms.

[0] http://www.launchd.info

stephenr 2 days ago 4 replies      
I fucking hope not.

Linus is clearly very smart, but user-facing tools is not what he's good at. Git is a perfect example of this.

Mercurial and Git were created ridiculously close to each other, and their main features/use model are quite similar, and yet Mercurial is literally years ahead in terms of usability. The results of a given command are, 99% of the time, what you would intuitively expect.

Now here's what I would love to see: Linus initiate a fork/rewrite/whatever of the init part of systemd, that keeps the concept of non-executing service definition files. They don't have to be compatible with systemd but that wouldn't necessarily be a bad thing either.

Edit: actually, aiming for compatibility probably means having to reimplement systemd weirdness, so probably is a bad thing as I think about it.

teddyh 2 days ago 1 reply      
This is taking a quote out of context and trying to start the old usual systemd flamewar. Flagged.
Tharkun 2 days ago 0 replies      
Systemd is a steaming turd. However, it's one we have to live with, and there doesn't seem to be anything better on the horizon.

A good book on the subject might make things slightly better. However, it doesn't seem like anyone is interested in writing one :-(

cperciva 2 days ago 1 reply      
I'd say that I no longer feel like I can trust 'git' to do the sane thing either... but who am I kidding? I never trusted it to do the sane thing.
API Security Checklist for developers github.com
314 points by eslamsalem  3 days ago   69 comments top 12
tptacek 3 days ago 8 replies      
There's some OK stuff here, but the list on the whole isn't very coherent. If this is a guide specifically for "APIs" that are driven almost entirely from browser JavaScript SPAs, it makes sense. Otherwise, a lot of these recommendations are a little weak; for instance, most of the HTTP option headers this list recommends won't be honored by typical HTTP clients.

Further, the list succumbs to the cardinal sin of software security advice: "validate input so you don't have X, Y, and Z vulnerabilities". Simply describing X, Y, and Z vulnerabilities provides the same level of advice for developers (that is to say: not much). What developers really need is advice about how to structure their programs to foreclose on the possibility of having those bugs. For instance: rather than sprinkling authentication checks on every endpoint, have the handlers of all endpoints inherit from a base class that performs the check automatically. Stuff like that.
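A minimal sketch of the base-class pattern tptacek describes; the class and method names are hypothetical, not from any real framework, and the hard-coded token stands in for a real credential lookup:

```python
# Sketch: every endpoint handler inherits the auth check instead of
# re-implementing it per endpoint. All names here are illustrative.

class AuthError(Exception):
    pass

class BaseHandler:
    def handle(self, request):
        # The check runs for every subclass automatically; a handler
        # cannot forget it the way a per-endpoint check can be forgotten.
        self.authenticate(request)
        return self.process(request)

    def authenticate(self, request):
        token = request.get("headers", {}).get("Authorization", "")
        if token != "Bearer secret-token":  # stand-in for a real lookup
            raise AuthError("unauthenticated")

    def process(self, request):
        # Subclasses implement only the endpoint logic.
        raise NotImplementedError

class OrdersHandler(BaseHandler):
    def process(self, request):
        return {"orders": []}
```

Calling `OrdersHandler().handle(...)` without a valid header raises before `process` ever runs, which is the structural guarantee the comment is arguing for.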

Finally: don't use JWT. JWT terrifies me, and it terrifies all the crypto engineers I know. As a security standard, it is a series of own-goals foreseeable even 10 years ago based on the history of crypto standard vulnerabilities. Almost every application I've seen that uses JWT would be better off with simple bearer tokens.

JWT might be the one case in all of practical computing where you might be better off rolling your own crypto token standard than adopting the existing standard.
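As a sketch of the simple bearer-token alternative: the server stores an opaque random token and looks it up on each request, so there are no claims to parse and no signature algorithm to mis-verify. This is an illustration under assumed names, not a drop-in design; a real deployment would persist tokens and add expiry:

```python
import hmac
import secrets

# Server-side store mapping token -> user id. In production this would
# be a database or cache; a dict keeps the sketch self-contained.
_tokens = {}

def issue_token(user_id):
    # 256 bits of randomness. The token carries no claims and means
    # nothing off the server, unlike a JWT.
    token = secrets.token_urlsafe(32)
    _tokens[token] = user_id
    return token

def check_token(presented):
    # Compare in constant time against each stored token to avoid
    # leaking prefix matches through timing.
    for token, user_id in _tokens.items():
        if hmac.compare_digest(token, presented):
            return user_id
    return None
```

Revocation is just deleting the dict entry, which is one of the things stateless JWTs make awkward.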

Kiro 3 days ago 1 reply      
> Don't use Basic Auth

Why not? If it's an API meant to be consumed by a server I don't see what the problem is.
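For context on why the checklist warns against it: Basic Auth is just a base64-encoded `user:password` pair replayed on every request, so without TLS it is effectively plaintext. A quick stdlib illustration (helper names are mine):

```python
import base64

def basic_auth_header(user, password):
    # This is the entirety of Basic Auth: base64 is an encoding, not
    # encryption, so anyone who observes the header has the password.
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {creds}"

def decode_basic_auth(header):
    # Trivially reversible by any observer on an unencrypted connection.
    creds = base64.b64decode(header.split(" ", 1)[1]).decode()
    return tuple(creds.split(":", 1))
```

Over HTTPS between two servers, as Kiro suggests, this can be reasonable; the risk is running it without transport encryption, or with long-lived credentials that are painful to rotate compared to revocable tokens.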

tiffanyh 3 days ago 0 replies      
I don't bookmark many links but here's [1] a good one for all to keep on a similar topic.

It's a SO article on security for web transactions.

[1] https://stackoverflow.com/questions/549/the-definitive-guide...

moxious 3 days ago 2 replies      
No amount of checklisting and best practices substitutes for hiring someone smart to break your stuff and tell you how they did it. You can check all the boxes and still get pwned.

You can learn and run automated tools for 6 months and end up knowing 1/3rd of what a great pentester knows.

If you want to know you can resist an attack from an adversary, you need an adversary. If you want to know that you followed best practices so as to achieve CYA when something bad happens, that's a different story.

But honestly the security picture is so depressing. Most people are saved only because they don't have an active or competent adversary. The defender must get 1,000 things right, the attacker only needs you to mess up one thing.

And then, even when the defender gets everything right, a user inside the organization clicks a bad PDF and now your API is taking fully authenticated requests from an attacker. Good luck with that.

Security, what a situation.

bodhi 3 days ago 4 replies      
What are peoples thoughts on using TLS client certificates for authentication?

Given we're talking about APIs, we avoid many of the UX problems, but it feels like taking on a different set of problems than just using a bearer token. It does provide baked in solutions for things like revocation and expiry though.

drdaeman 3 days ago 0 replies      
> Always try to exchange for code not tokens (don't allow response_type=token).

There is absolutely nothing wrong with the implicit flow if the application (including in-browser ones) is requesting the token for itself (and not for some server or any third party). In the case of a standalone app, that would just be an extra meaningless step.

There is a slight difference in the presence or absence of a refresh token, though, but that would make the implicit flow more secure (because, if standard-compliant, there won't be any refresh tokens at all), not less.

In the case of a browser, the token would end up in the browser's history, but given that a) if the browser itself is compromised, the game is already over, and b) it's not possible for other parties to access the history (besides some guesswork that doesn't work for tokens), paired with the fact that c) such tokens should be short-lived, it's not a big deal.

> User own resource id should be avoided. Use /me/orders instead of /user/654321/orders

This has absolutely nothing to do with security. TBH, I don't see any issue if /me/ were a redirect or an alias for /user/654321/. That may make perfect sense if conceptual purity is desirable ("for each object there is one and only one URL - the canonical one"), with its pros and cons.

> Don't use auto increment id's use UUID instead.

Similarly, that barely has anything to do with security. One just has to understand that sequential IDs are trivially enumerable, with the obvious consequence that API consumers can enumerate all the resources or, at the very least, estimate their cardinality.

And as for security, it should probably have said UUIDv4, because if one accidentally uses e.g. UUIDv1, the IDs lose their unguessability.
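
The UUIDv1 vs. UUIDv4 point is easy to demonstrate with Python's standard library: v1 is built from a timestamp and node identifier (guessable), while v4 is random:

```python
import uuid

u1 = uuid.uuid1()  # time + node (often the MAC address): predictable
u4 = uuid.uuid4()  # 122 random bits: neither enumerable nor guessable

# The version field tells them apart.
assert u1.version == 1 and u4.version == 4

# uuid1 even exposes the embedded 60-bit timestamp directly:
print(u1.time)
```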

philip1209 3 days ago 5 replies      
> User own resource id should be avoided. Use /me/orders instead of /user/654321/orders

Can somebody explain this?

ikeboy 3 days ago 0 replies      
So I'm developing a simple SaaS with little to no private info, where failure isn't critical.

For the initial release I built a page that uses HTML buttons and basic JavaScript to GET pages, passes a key as a parameter, and uses web.py on the backend.

It seems like it would be a lot of work to implement the suggestions here. At what point does it make sense?

EGreg 3 days ago 1 reply      
There is a lot more you can do.

For example, you can sign session IDs or API tokens when you issue them. That way you can check them and refuse requests that present invalid tokens without doing any I/O.
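
A sketch of the idea, assuming an HMAC over the session ID (key handling here is simplified for illustration; a real deployment would load the key from configuration):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side signing key; illustrative

def issue(session_id: str) -> str:
    # Append an HMAC tag so validity can be checked without any I/O.
    tag = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify(token: str):
    # Split off the tag and recompute; a forged or tampered token
    # can be rejected before touching the session store.
    session_id, _, tag = token.rpartition(".")
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels.
    return session_id if hmac.compare_digest(tag, expected) else None
```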

baybal2 3 days ago 0 replies      
The guy forgets the main thing here: length, type and range checks!

I'm finding issues like API servers hanging/crashing due to overly long or malformed headers all the time when I work on front-end projects.

Programming in a language with automatic range and type checks does not mean that you can forego vigilance even with the most mundane overflow scenarios: lots of stuff is being handled outside of the "safe" realm or by outside libraries.
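
A minimal illustration of the kind of length, type, and range checking meant here, with hypothetical field names:

```python
MAX_NAME_LEN = 64

def validate_order(payload: dict) -> list:
    """Return a list of validation errors; an empty list means OK."""
    errors = []

    name = payload.get("name")
    if not isinstance(name, str):
        errors.append("name must be a string")
    elif not (1 <= len(name) <= MAX_NAME_LEN):
        errors.append("name length out of range")

    qty = payload.get("quantity")
    # bool is a subclass of int in Python, so reject it explicitly.
    if not isinstance(qty, int) or isinstance(qty, bool):
        errors.append("quantity must be an integer")
    elif not (1 <= qty <= 10_000):
        errors.append("quantity out of range")

    return errors
```

Rejecting malformed input at the edge like this is what keeps overly long or weird payloads from reaching deeper code.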

ViktorasM 3 days ago 1 reply      
Not a security topic, but POST is not necessarily "create" and PUT is not necessarily "update".
kpcyrd 3 days ago 0 replies      
I've filed a pull request to include a CSP technique I've started adding to some of my APIs:


Judge rules Utah law banning undercover farm filming is unconstitutional ksl.com
257 points by SwellJoe  4 days ago   84 comments top 13
burntrelish1273 4 days ago 2 replies      
Ag-gag laws are legitimized, crooked censorship.

The meat processing industry is still extremely insulated, but let's hope this gets chipped away to reveal their secrets. This is one of the reasons undocumented immigrant labor is exploited: they generally won't say anything.



See also: SHowing Animals Respect and Kindness (SHARK) to see what happens to those who use drones to legally film live caged-animal shoots and other embarrassing/horrible human-animal interactions. http://sharkonline.org/

westnortheast 4 days ago 1 reply      
This is great news for those who work to improve conditions for farm animals. Collecting this footage is already risky as it is (violent retribution is not unheard of), but it remains a crucial tool in convincing people that factory farming (i.e., more than ~95% of animal agriculture in the US) continues to treat animals brutally and inhumanely.
downandout 4 days ago 5 replies      
Isn't trespassing, regardless of what you do once you are on private property without permission, already illegal? The video would just be evidence of the crime the person doing the filming committed by being there in the first place. It seems silly that there would need to be a separate law for this.
alexjray 4 days ago 0 replies      
What is everyone talking about, this is good; free speech is a crucial part of our democracy.

You can put up no-trespassing signs; that's legitimate. Encroaching on free speech, however, is not.

paulddraper 3 days ago 1 reply      
I'm a Utahn, have lived next to several farms/butcher operations, and have slaughtered my own animals.

I don't understand how trespassing on other people's property could be anything but illegal.

Yes, yes, I've heard the rhetoric that the industry has bad guys and bad guys don't have rights. (So stop using encryption, all you bad people!)

But that can't be the real reason, right? The KSL article is pretty brief. Anyone know the reason it was ruled that trespassing isn't trespassing if it's on a farm?

lighthazard 4 days ago 3 replies      
Can anyone explain the justification for not recording? I can only see positive things come out of it.
nickhalfasleep 4 days ago 2 replies      
Besides the welfare of the animals, this is also important to the food safety and health of society to have clear insight into how food is produced.
briandear 3 days ago 1 reply      
Yet it is interesting that the undercover Planned Parenthood video guys where prosecuted. I am curious if the same sort of activists that applaud this decision also support anti-Planned Parenthood filmers.

The cynic in me would offer that very few activists actually care about free speech only that their speech is free. It might seem that we are all hypocrites to some degree.

gweinberg 3 days ago 1 reply      
It seems strange to me that agriculture should be treated differently than any other industry.
microtherion 4 days ago 1 reply      
divbit_m 4 days ago 0 replies      
Why not just put cameras in farm equipment and tell everyone straight up they are being filmed? No need for creeping around
arthur_trudeau 4 days ago 3 replies      
Reporting on legal decisions is garbage when it doesn't even include the name of the case, or better a pointer to the opinion, so I can read the ruling myself.

But it seems tenuous to me to claim that the state can criminalize trespassing, but not trespassing for the purposes of X, or conspiring to trespass to commit X, regardless of X.

averagewall 4 days ago 5 replies      
How is it ever legal to break into a business's property and secretly video the operations? You surely can't sneak into a factory and record their processes to sell to China. Is it only allowed when it's being used for protesting, not for stealing IP? Or only on farms but not in factories? What about trespass law? Non-disclosure agreements for employees? Are those invalid?
Minimal PDF brendanzagaeski.appspot.com
359 points by ingve  3 days ago   99 comments top 14
aidos 3 days ago 3 replies      
I've spent much of the last year down in the internals of pdfs. I recommend looking inside a PDF to see what's going on. PDF gets a hard time but once you've figured out the basics, it's actually pretty readable.

Some top tips: if you decompress the streams first, you'll get something you can read and edit with a text editor:

 mutool clean -d -i in.pdf out.pdf
If you mess with the PDF by hand, you can run it through mutool again to fix up the object positions.

Text isn't flowed/laid out like HTML. Every glyph is more or less manually positioned.

Text is generally done with subset fonts. As a result, characters end up being mapped to \1, \2, etc., so you can't normally just search for strings, but you can often (though not always easily) find the characters from the Unicode map.
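
To see just how readable an uncompressed PDF is, here's a sketch that writes a minimal one-page PDF by hand, computing the cross-reference (xref) table of byte offsets. This is illustrative only; real documents need content streams, fonts, and so on:

```python
def minimal_pdf() -> bytes:
    # Three objects: document catalog, page tree, and one blank page.
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for i, body in enumerate(objects, start=1):
        offsets.append(len(out))          # byte offset of this object
        out += b"%d 0 obj\n%s\nendobj\n" % (i, body)
    xref_pos = len(out)
    out += b"xref\n0 %d\n" % (len(objects) + 1)
    out += b"0000000000 65535 f \n"       # the mandatory free entry
    for off in offsets:
        out += b"%010d 00000 n \n" % off  # fixed-width decimal offsets
    out += b"trailer\n<< /Size %d /Root 1 0 R >>\n" % (len(objects) + 1)
    out += b"startxref\n%d\n%%%%EOF\n" % xref_pos
    return bytes(out)
```

The fixed-width decimal offsets in the xref table are exactly why hand-editing a PDF breaks it and why `mutool clean` has to fix up the positions afterwards.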

j_s 3 days ago 2 replies      
See also on the same site: Hand-coded PDF tutorial | https://brendanzagaeski.appspot.com/0005.html

If you need more, the "free" (trade for your email) e-book from Syncfusion PDF Succinctly demonstrates manipulation barely one level of abstraction higher (not calculating any offsets manually): https://www.syncfusion.com/resources/techportal/details/eboo...

"With the help of a utility program called pdftk[1] from PDF Labs, we'll build a PDF document from scratch, learning how to position elements, select fonts, draw vector graphics, and create interactive tables of contents along the way."

[1] https://www.pdflabs.com/tools/pdftk-server/

tptacek 3 days ago 3 replies      
The biggest complexity (and security) problem with PDF is that it's also effectively an archive format, in which more or less every display file format conceived of before ~2007 can be embedded.
ekr 3 days ago 1 reply      
See also klange's resume: https://github.com/klange/resume. A resume PDF that's also a valid ISO 9660, bootable ToaruOS image.
ad_hominem 2 days ago 1 reply      
I've encountered .pdf files which internally embed a proprietary Adobe extension called XFA[1]. I think they are created using Adobe's LiveCycle product.

They are a real pain because they render fine in Adobe Acrobat, but most other PDF renderers (including browser built-in ones) can't render them. Instead they render a blob of interstitial "loading..." text that is also embedded in the PDF (which the XFA rendering would then overwrite). It was a pain to me personally because I had to figure out a way to do programmatic form-filling of some fillable form XFAs, and most PDF libraries don't work with them (they expect traditional AcroForms fillable forms).

But in reading the XFA specification I found it interesting it had its own JavaScript interpreter (including supporting XHR requests as part of some internet-integrated form-filling feature) and another proprietary scripting language called FormCalc. I guess it opened my eyes to PDFs being a container format and the kinds of things they allow you to embed.

[1]: https://en.wikipedia.org/wiki/XFA

kazinator 3 days ago 0 replies      
Plain text ... but with hard offsets ... encoded as decimal integers. Yikes!
fizixer 3 days ago 4 replies      
This is good but Postscript is even better. Someday I'll learn it and see what I can do with it.
amenghra 3 days ago 0 replies      
If you like this you might enjoy this repo: https://github.com/mathiasbynens/small
cmurf 2 days ago 0 replies      
I'm frustrated by governments using PDF for fill-in forms, and yet open source tools are very weak in this area.

This is not better than paper and pencil, in terms of accessibility. And we need to do better somehow.

mp3geek 3 days ago 2 replies      
Would be nice if browsers supported saving pages directly as PDF using their own PDF libraries.
eru 3 days ago 2 replies      
> Most PDF files do not look readable in a text editor. Compression, encryption, and embedded images are largely to blame. After removing these three components, one can more easily see that PDF is a human-readable document description language.

Of course, PDF is intentionally weird: it was a move by Adobe because other companies were getting too good at handling PostScript.

Embedding custom compression inside your format is seldom worth it: .ps.gz is usually smaller than pdf.

Noctem 3 days ago 0 replies      
This page was helpful to me a couple years ago while crafting the tiny PDF used for testing in Homebrew. https://github.com/Homebrew/legacy-homebrew/pull/36606
jimjimjim 3 days ago 0 replies      
PDF is like c++

it's used everywhere because you can do everything with it.

This also leads to the problem where you can do anything with it.

so each industry is kind of coming up with their own subset of pdf that applies some restrictions in the hopes of making them verifiable.

the downside is that these subsets slowly start bloating until they allow everything anyway.

i'm looking at you, PDF/A. grr.

ilaksh 3 days ago 1 reply      
PDF is literally the worst possible format for document exchange because it has the most unnecessary complexity of all document formats, which makes it the hardest to access. But popularity and merit are two totally different things.
Photos of Mosul 2017 petapixel.com
360 points by wfunction  4 days ago   130 comments top 22
aphextron 4 days ago 4 replies      
The recent drone footage from Mosul is some of the most mind blowing, surreal stuff imaginable. God help anyone trapped in that city.


andmarios 4 days ago 3 replies      
These shots capture the harshness of human conflict and how we (human beings) change in the face of it. Thank you.

I disagree with people saying these are good shots. In the collection another user posted ( http://www.kainoalittle.com/mosul ) there are shots that are better technically and artistically. Yet the fact that the photographer chose these shots for the article speaks volumes. These aren't photographs to make you feel better; these are photographs to make you understand (even a tiny bit) what real war looks like, and it is ugly.

superflyguy 4 days ago 1 reply      
"But the soldiers had fed me and given me a seat in their Humvees, and the refugees had tolerated my presence on some of the worst days of their lives. They very rightly expected that I would tell their story."

You could have explained a little about the death of newspapers and other media outlets, and how much existing footage/photography there is of conflict in Iraq (to say nothing of Afghanistan, Syria, etc.). They should have been under no illusions as to the likelihood of any particular photographer getting their work paid for and published. It's like the Simpsons joke about space missions: the first guy landed on the moon and it's the story of the century. The 12th lunar mission, though? Meh, what else is on. Trying to get a Western, especially an American, audience excited about another batch of photographs of fighting in the Middle East seems like something of a tall order. Even outside of the news, photography has died a death. Anyone can take an adequate photograph nowadays with cheap, simple equipment, so few people are willing to pay a professional.

skywhopper 4 days ago 1 reply      
These are great pictures, and they make me realize that while we hear a lot (though not enough) about these conflicts around the world, it's usually just accompanied by a photo or two or a few seconds of video capturing a firefight or an explosion. But such things are cliche at this point and don't hit home. Seeing real soldiers taking a break between firefights, and real families fleeing their homes with their children and a few belongings, really makes the reality of the situation much more graspable. I think we need to see a lot more of these candid photos (and video too) of the real people whose lives are being devastated by these conflicts.
notacoward 4 days ago 1 reply      
I'd never heard of the Free Burma Rangers before, and these photos helped me learn. I guess that means the author has succeeded in his admirable mission to tell others' stories.
e40 4 days ago 0 replies      
A consistently excellent place to view photos of this sort, though not exclusively about war, etc, is the Atlantic "In Focus" page:


I visit them once a week and go through the photos. Quite amazing.

arcticbull 4 days ago 4 replies      
I would love to donate to this person, and I couldn't find a way. Any suggestions?
tomxor 4 days ago 2 replies      
Good shots Kainoa, I really wish I could read the story that goes with them.

I can see it being hard as a news photographer of any sort these days, as papers are going through a very hard transition that may see them completely vanish at the other side of it.

Have you considered finding some writers on Medium? Their new concept is to be the platform without the paper, readers select writers more organically and writers get paid more directly, photographers don't fit directly into this model yet but you could at least sell to writers there. Maybe medium will come up with a way to have multiple contributors to articles that photographers could be a part of too.

LeoNatan25 4 days ago 0 replies      
Some really good photos. War is tragic, but it sure allows for people to capture amazing photography. What does this say about us?
thekid314 3 days ago 0 replies      
For photos by a pro check these out: www.asmaawaguih.com/albums/liberating-mosul/
jokoon 3 days ago 1 reply      
It's weird, because I've read several times over that violence in the world is decreasing on average, so it would seem the Middle East is the last violent place. Still quite violent...
tomxor 4 days ago 0 replies      
There seems to be a few more photos in the collection on his site too:


anonu 3 days ago 0 replies      
Curious why the photographer's pictures didn't sell. Is there an oversupply of such material? Is the source questionable?
HearMeRoar 4 days ago 8 replies      
Why would no one buy these amazing photographs? Either OP is bad at finding a buyer or photography is one tough job.
NicoJuicy 4 days ago 1 reply      
I suppose you mostly tried US papers? I see a better audience for this in Europe
wfunction 4 days ago 1 reply      
Note: Just to clarify in case anyone misinterprets, these are not my photos! I merely copied the title from the page. If there's a better way to title pages like these, please let me know, since I was confused as to what title I should put while avoiding it being misleading.
hajderr 4 days ago 1 reply      
I hope this thread doesn't get hijacked with political discussions. I'll spare my comments at least. Let the story be told through the images...
zdkl 4 days ago 2 replies      
Wildly off topic: this site redirected my mobile chrome to some advertisement with a screen blocking scammy alert and closed all other tabs. What the hell?
onetokeoverthe 4 days ago 1 reply      
His website won't load. No wonder they're not sold.
JCDenton2052 4 days ago 0 replies      
Very nice, but could have used higher resolution.
Nanite 4 days ago 1 reply      
They're decent pictures, but not "GREAT" pictures. Even in warzones photojournalism is a highly competitive field. Editors at Reuters, AP and other news agencies get tons of sets offered every day.

Whining about it in an extremely passive aggressive blog post really isn't going to help his career along.

smallhands 4 days ago 0 replies      
As a foolish man who has not experienced war, these pictures remind me of Call of Duty a lot! I kind of wonder what magic advanced VR can do in the training of soldiers.
Battle for the Internet battleforthenet.com
312 points by anigbrowl  8 hours ago   149 comments top 20
drucik 3 hours ago 4 replies      
I don't get why I see arguments like 'Oh, why would it matter, it's not neutral anyway' or 'it won't change anything' and no one tries to explain why allowing an end of net neutrality would be bad. I would say the reason why net neutrality is important is the following:

'On paper' the end of net neutrality will mean that big companies like Google or Facebook (which, according to the website, do not support net neutrality [why would they, right?]) will pay the ISPs for priority connection to their service, and ISPs will be able to create 2 payment plans for their customers - throttled network and high-speed, super unthrottled network for some premium money. And some people are fine with that - 'it's their service' or 'I only use email so I don't care' or other things like that.

But we are living in a capitalist world and things aren't that nice. If it is not illegal to slow down connections 'just because', I bet in some (probably short) time companies will start abusing it to protect their markets and their profits. I'd expect under-the-table payments, so company F or B will be favored by a given ISP, and you can forget about startups trying to shake up the giants.

d3sandoval 1 hour ago 0 replies      
If your internet browser were a hearing aid, the information coming in would be sound - whether that's your husband or wife asking you to do the dishes, a ring at your doorbell, or even an advertisement on the radio.

now imagine if that hearing aid wasn't neutral in how it handled sound. imagine if, when the advertisement played on the radio, it would be louder than all other sounds around. at that time, you might miss an important call, maybe your wife just said "I love you", or perhaps there's a fire in the other room that you are now not aware of, because clorox wipes demanded your full attention.

without net neutrality, we lose the ability to choose our own inputs. our provider, our hearing aid, gets to choose for us. this could mean slower video downloads for some, if they're using a competitor's streaming service for instance, but it could also mean the loss of vital information that the provider isn't even aware exists.

By rejecting Title II recommendations, the FCC will introduce a whole new set of prioritization problems, where consumers no longer have the ability to decide which information is most important to them. and, if the provider goes so far as to block access to some information entirely, which it very well could without Title II protections, consumers would be at risk of missing vital information - like a fire in the house or their husband saying "I love you"

pedrocr 4 hours ago 5 replies      
I fully support the net neutrality argument, it seems like a no brainer to me. However I find it interesting that companies like Netflix and Amazon who heavily differentiate in which devices you can have which video quality[1] will then argue that ISPs shouldn't be able to differentiate which services should have which transport quality.

The situation seems completely analogous to me. I'm paying my ISP for a connection and it thinks it should be able to restrict which services I use on top of it. I'm paying a content provider for some shows/movies and it thinks it should be able to restrict which device I use to view them.

The argument for regulation also seems the same. ISPs don't have effective competition because physical infrastructure is a natural monopoly. Content providers also don't have effective competition because content access is also a natural monopoly because of network effects (right now there are 2-3 relevant players worldwide).

[1] Both of them heavily restrict which devices can access 4K content. Both of them make it very hard to have HD from non-standard devices. Netflix even makes it hard to get 1080p on anything that isn't the absolute mainstream (impossible on Linux for example).

shortnamed 5 hours ago 8 replies      
love the blatant americentrism in the site:

"This is a battle for the Internet's future." - just American internet's future

"Team Cable want to control & tax the Internet" - they will be able to control the global system in which the US is just a part of?

"If we lose net neutrality, the Internet will never be the same" - American internet, others will be fine

eriknstr 2 hours ago 3 replies      
Very recently I bought an iPhone and a subscription that includes 4G service. With this subscription I have 6 GB of traffic per month anywhere in EU, BUT any traffic to Spotify is unmetered, and I don't know quite how to feel about this. On one side it's great having unlimited access to all the music in Spotify at any time and any place within the whole of EU, but on the other side I worry that I am helping damage net neutrality.

Now Spotify, like Netflix and YouTube and a lot of other big streaming services, almost certainly has edge servers placed topologically near to the cell towers. I think this is probably ok. In order to provide streaming services to a lot of people you are going to need lots of servers and bandwidth no matter what, and when you do you might as well work with the ISPs to reduce the cost of bandwidth as much as possible by placing out servers at the edges. So IMO Spotify is in a different market entirely from anyone who hasn't got millions or billions of dollars to spend, and if you have that money it should be no more difficult for you to place edge servers at the ISPs than it was for them.

But the unmetered bandwidth deal might be harmful to net neutrality, maybe?

_nedR 3 hours ago 1 reply      
Where were the protests, blackouts, outrage and calls for action from these companies (Google, Amazon, Netflix) when the Internet privacy bill was being repealed? I'll tell you where they were - In line outside Comcast and Verizon, eagerly waiting to buy our browsing histories.

We had their back the last time the net neutrality issue came around (let's be honest, their business depends on a neutral net). But they didn't do the same for us. Screw them.

franciscop 4 hours ago 2 replies      
As a foreigner who deeply cares about the web, what can I do to help? For good or for bad, USA decisions on the Internet spread widely around the world. "Benign" example: the mess before UTF-8; malign example: the DRM and copyright fight.

Note: besides spreading the word, that is. I don't know many Americans.

superasn 5 hours ago 1 reply      
This is great. I think the letter textarea should also be empty.

Instead there can be a small wizard with questions like "why is net neutrality important to you", etc with a guideline on what to write.

This way each letter will be a differently expressed opinion instead of every person sending the same thing and may create more impact.

agentgt 2 hours ago 2 replies      
I have often thought the government should provide an alternative option for critical service just like they do with the mail and now health insurance (ignoring current politics).

That is, I think the net neutrality issue could be mitigated or made a non-issue if there were, say, a US ISP that operates anywhere there are telephone poles and public towers, analogous to the United States Postal Service (USPS).

Just like the roads (postal service), the government pseudo-owns the telephone poles and airwaves (FCC), so they should be able to force their way in.

I realize this is not as free market as people would like but I would like to see the USPS experiment attempted some more particularly in highly leverage-able industries.

Pigo 2 hours ago 0 replies      
It's very disheartening that this is a battle that doesn't seem to end. They are just going to keep bringing proposals in hopes that one time there won't be enough noise to scare politicians, or worse, that the politicians are already in their pocket, just waiting for the opposition level to be at a minimum. The inevitability vibe is growing.
bluesign 2 hours ago 0 replies      
Why not lower the barrier to entry for new ISPs by forcing incumbents to share infrastructure for a fee, and then allow them to tier/price as much as they want?
kuon 2 hours ago 1 reply      
I'm fully in favor of net neutrality, but I am not against premium plans for some content.

For example, let's say I have a 50/20 Mbps connection. I should be able to browse the entire internet at that speed. But if I want to pay extra to have, say, 100 Mbps with QoS only from Netflix, I am not against that kind of service.

joekrill 1 hour ago 1 reply      
Is this form broken for anyone else? I'm getting CORS errors when it tries to submit to https://queue.fightforthefuture.org/action. That seems like a pretty big blunder, so I'm guessing maybe the site is just under heavy load?
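
For context, a cross-origin POST like this only succeeds if the server answers the browser's preflight request and sends permissive CORS headers. A minimal sketch using Python's standard library (the handler and endpoint here are illustrative, not the actual site's stack):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ActionHandler(BaseHTTPRequestHandler):
    def _cors(self):
        # Without these response headers the browser reports a CORS
        # error and the page never sees the response.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        # The browser's preflight for non-simple cross-origin POSTs.
        self.send_response(204)
        self._cors()
        self.end_headers()

    def do_POST(self):
        # Drain the body, then respond with the CORS headers attached.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self._cors()
        self.end_headers()
        self.wfile.write(b"ok")
```

A missing or misconfigured `Access-Control-Allow-Origin` under heavy load (e.g. an error page from a proxy without the headers) would produce exactly the symptom described.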
AndyMcConachie 2 hours ago 4 replies      
Just to be clear, this has nothing to do with the Internet, and everything to do with the USA. Most Internet users can't be affected by stupid actions of the FCC.

I guess I'm just a little annoyed that Americans think their Internet experience somehow represents 'the' Internet experience.

Flemlord 1 hour ago 0 replies      
reddit (US Alexa rank = 4) is showing a popup to all users that sends them to www.battleforthenet.com. It is about time a major player started leveraging their platform. Why aren't Google, HN, etc. doing this too?
sexydefinesher 4 hours ago 0 replies      
*the American internet

Meanwhile the EU already has laws for Net Neutrality (though zero-rating is still allowed).

_eht 5 hours ago 4 replies      
All I can find are arguments for neutrality; it seems like a very vocal crowd full of businesses that currently make a lot of money from people on the internet (Reddit, Facebook, et al.).

Anyone want to share resources or their pro-prioritization stance?

dep_b 2 hours ago 0 replies      
Interestingly all this kind of stuff seems to happen in the 1984th year since Jesus died.
throwaway2048 3 hours ago 0 replies      
strange most of the top level comments are arguing against either net neutrality, or this campaign.

On a site that is otherwise extremely strongly for net neutrality.

Nothing suspicious about that...

sharemywin 2 hours ago 0 replies      
I'm all about the net neutrality.

But while we're at it how about some hardware neutrality.

And some data portability and control over who sees my information.

And maybe an API neutrality.

And how about letting the municipalities offer free wifi.

Toward a Reasonably Secure Laptop qubes-os.org
324 points by doener  1 day ago   93 comments top 11
HugoDaniel 1 day ago 2 replies      
"Finally, we are going to require that Qubes-certified hardware does not have any built-in USB-connected microphones (e.g. as part of a USB-connected built-in camera) that cannot be easily physically disabled by the user, e.g. via a convenient mechanical switch. However, it should be noted that the majority of laptops on the market that we have seen satisfy this condition out of the box, because their built-in microphones are typically connected to the internal audio device, which itself is a PCIe type of device. This is important, because such PCIe audio devices are by default assigned to Qubes (trusted) dom0 and exposed through our carefully designed protocol only to select AppVMs when the user explicitly chooses to do so."

This made me download Qubes. Amazing project that seems to care.

x86insecure 1 day ago 5 replies      
There are things we can do to help get us out of this Intel ME rut.

* Let AMD know that open-sourcing/disabling PSP is important to you [1].

* Contribute to RISC-V. You can buy a RISC-V SoC today [2]. Does your favorite compiler have a RISC-V backend?

[1] https://www.reddit.com/r/linux/comments/5xvn4i/update_corebo...

[2] https://www.sifive.com/products/hifive1/

cyphar 1 day ago 0 replies      
> Another important requirement we're introducing today is that Qubes-certified hardware should run only open-source boot firmware (aka the BIOS), such as coreboot.

I recently flashed coreboot on my X220 (and it worked surprisingly enough). However, I couldn't find any solid guides on how to set up TianoCore (UEFI) as a payload -- does Qubes require Trusted Boot to be supported on their platforms (I would hope so)? And if so, is there any documentation on how to set up TianoCore as a payload (the documentation is _sparse_ at best, with weird references to VBOOT2 and U-Boot)?

Otherwise I'm not sure how a vendor could fulfill both sets of requirements.

d33 1 day ago 10 replies      
If I read that right, they're allowing Intel ME, which sounds like a sad compromise to me. Given that it's a pretty big complex black box that one can't easily disable, would you agree that x86 is doomed when it comes to security? If that's the case, is there any hope we could have a CPU with competitive capabilities? (For example, is there an i7 alternative for ARM?)

What could one do to make it possible to have ME-less x86 in the future?

Taek 1 day ago 3 replies      
Is this something we could achieve with a corporate alliance? I know a lot of tech companies would like to give their employees secure laptops. I also know that there are large costs associated with making hardware, especially if you are talking about dropping ME.

A dozen companies with 1,000 employees each and a budget of $2,500 per employee gets you $30 million, which is surely enough to get a decent, Qubes-secure laptop with no ME. You aren't going to be designing your own chips at that point, but you could grab POWER8, SPARC, or ARM.

Are there companies that would reasonably be willing to throw in a few million to fund a secure laptop? I imagine at least a few. And maybe we could get a Google or someone to put in $10m plus.
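The funding math in the comment above can be checked directly. A quick sketch using the commenter's hypothetical figures (none of these are real commitments):

```typescript
// All figures are the commenter's hypothetical numbers.
const companies = 12;
const employeesPerCompany = 1000;
const budgetPerEmployee = 2500; // USD

const totalFund = companies * employeesPerCompany * budgetPerEmployee;
console.log(totalFund); // 30000000 -- the "$30 million" cited above
```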

ashleysmithgpu 1 day ago 5 replies      
Looks like Qubes makes you pay to get certified: https://puri.sm/posts/ "The costs involved, requiring a supplementary technical consulting contract with Qubes/ITL (as per their new Commercial Hardware Goals proposal document), are not financially justifiable for us."
Aissen 1 day ago 1 reply      
> The vendor will also have to be willing to freeze the configuration of the laptop for at least one year.

This is one of the most important points. The speed at which laptop vendors are releasing new SKUs is staggering. I know the whole supply chain is to blame, but apart from a few models, the number of different SKUs is way too high.

notacissp 21 hours ago 0 replies      
This article helped me get up and running with Qubes:


digi_owl 21 hours ago 1 reply      
Once more I get the impression that computer-security people are off in a different universe, where a computer at the bottom of the ocean is a "reasonable" way to do computing.
listic 20 hours ago 0 replies      
Looks like even Purism is not interested in certifying compatibility with Qubes anymore. That's sad.
awinter-py 22 hours ago 0 replies      
It's a shame that chromebook's boot verification isn't easily extensible to open source.
Snap falls to IPO price usatoday.com
277 points by prostoalex  1 day ago   237 comments top 21
habosa 1 day ago 7 replies      
$17.00 was the IPO price but only for investors with access. Your average investor with an eTrade account saw a price of $24.00+ when the market opened that morning, and it hit almost $27.00 that day. So those folks have seen a 30%+ drop since IPO.

Given that Snap paid out billions in IPO bonuses to executives and other employees, it's turned out to be a pretty big wealth transfer from retail investors to Snap employees.

skywhopper 1 day ago 4 replies      
As skeptical as I am that Snap will ever be a moneymaker, this data point is not meaningful in any way. Facebook traded below (often _well_ below) its IPO price for the first 15 months on the market.
cft 1 day ago 1 reply      
Financially it does not matter for the founders that much anymore. Significant wealth has already been transferred and cashed out:


beager 1 day ago 11 replies      
What does $SNAP need to do to deliver on the hype? Is there anything that can make $SNAP a good investment for anyone other than the parties involved in trading the IPO?

I tend to be bearish on $SNAP in general, but I'm interested in the discussion. How do they right the ship and boost back up to that $25-30 range? What's their play?

mmanfrin 1 day ago 1 reply      
Pardon my language but: no fucking shit. $20bil was unbelievably overpriced.

This is the one major tech stock that I simply do not get. ~$125 a user is insane.

stevenj 1 day ago 0 replies      
I think Snap's stock price will continue to fall (to under $10) and then at some point it'll be acquired, possibly by an asian company or investment group.
JumpCrisscross 1 day ago 1 reply      
Lock-up expires 29 August [1]. On the other hand, they have $3.2bn of cash and short-term investments (as of 31 March) while burning about $600MM of cash from their operations (FYE 31 December 2016) a year. That's 5 years to get it right.

EDIT: insider lock-up at T+150 comes in at the end of July.

[1] http://www.nasdaq.com/markets/ipos/company/snap-inc-899497-8...
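The runway estimate above can be verified from the figures quoted in the comment ($3.2bn cash and short-term investments vs. roughly $600MM of annual operating cash burn):

```typescript
// Figures as quoted in the comment above; both are approximations.
const cashOnHand = 3_200_000_000; // cash + short-term investments, as of 31 March
const annualBurn = 600_000_000;   // operating cash burn, FYE 31 December 2016

const runwayYears = cashOnHand / annualBurn;
console.log(runwayYears.toFixed(1)); // "5.3" -- roughly the "5 years" cited
```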

chollida1 1 day ago 0 replies      
Not sure this is really newsworthy.

They've been heavily shorted for a while now and their puts are expensive. But really 1 Billion new shares could flood the market in just a few more weeks.

To put it in perspective I believe Twitter, the other poster child for giving away options, had about half that amount.

Even the IPO underwriters are starting to crack. Credit Suisse used the stock drop to keep an "outperform" rating on the stock while lowering its target from $30 to $25.

IMHO Snap will do just fine for the next year. They are big enough that companies will carve out a piece of their marketing budgets for Snap. It won't be until a year later, when they have enough data on how well their Snap advertising is working, that they'll decide if it's worth it or not.

zitterbewegung 1 day ago 1 reply      
Following Snapchat's stock-price fluctuations is somewhat interesting. To be honest, though, I'm getting fatigued by this. I remember when Facebook IPOed and we were getting similar posts. Whether the money Snap raised in its IPO makes it more competitive is something we'll have to wait a few quarters to see, though.
andirk 1 day ago 0 replies      
With no knowledge other than using the app and from general bar talk, I saw this as a huge flop. I wish I shorted it. It may have potential, but I don't see what that potential is when their features are so incredibly easy to mimic. And no, SnapChat, I don't care what Puff Daddy did with Beyonce or whatever it is their news feed tells me.
bsaul 1 day ago 0 replies      
I wonder if the community realizes how bad these kinds of overhyped companies make us look to the general audience, with founders cashing out shortly after the stock goes public and everybody realizing the valuations were simply absurd.

I don't think we'll have to wait long before we see traditional investment funds and banks unwilling to take part in that game anymore.

dfrey 1 day ago 6 replies      
Snapchat baffles me. It's just another instant-messaging platform, except that they came up with the idea of messages that delete themselves after a time period. The problem is that this killer feature is easily subverted by taking a picture of your screen. So basically, they have provided an instant-messaging platform with one extra useless feature.
havella 1 day ago 0 replies      
As an exercise for the reader, it would be interesting to test the performance of a buy-and-hold strategy on stocks a) falling below IPO price, or b) falling below 50% of IPO price. A second filter that could be applied is the time span between the first day of trading and such an event.
Rjevski 1 day ago 2 replies      
I can't understand what they are doing to not make any profits. They are selling ads. Server resources to exchange the pictures are a minor cost, so where is all that ad money going?

If they stopped wasting money on development time making their UX even worse, or stupid stuff like Spectacles, or this: https://www.recode.net/2017/6/17/15824222/snapchat-ferris-wh... - maybe they would actually be making profits right now.

smpetrey 1 day ago 1 reply      
For your consideration, not a single share purchased is a voting share. SNAP's shares should be closer to $13.
opendomain 20 hours ago 0 replies      
It is much below the IPO price now.

Currently 15.62, but trending lower.

This is before the lockup period ends, when new stocks usually drop.

gigatexal 1 day ago 0 replies      
maybe twitter should buy them
nicolrx 1 day ago 0 replies      
All my friends are turning off their Snapchat account to keep using Instagram that does the same + better photo features.
mmmpop 1 day ago 1 reply      
Invest in the things you love and use most! What could go wrong?!
korzun 1 day ago 2 replies      
I think it is safe to say that the IPO was a total scam. The company was never profitable, and numbers never made any sense.

I have a bridge in Brooklyn up for grabs (cheap) if you still think the valuation was based on ridiculous data points such as active users, etc.

People already lined their pockets, and you will be reading another P.R. piece on how great a businessman Evan is within the next couple of months.

Spend all of the funding on aggressive marketing to get the numbers up pre-IPO, file for an IPO and cash out. Rinse & repeat.

jdavid 1 day ago 1 reply      
Based on revenue and flat growth, the company is worth about $4 a share. If it gets to that price I'll think about buying in, but only if I think the company is going to turn it around.

I've made good money waiting for the time to be right before buying in. This stock is worthless above $8 a share.

How to See What the Internet Knows About You nytimes.com
307 points by alexkavon  4 days ago   100 comments top 18
owlninja 4 days ago 11 replies      
These companies know "more about your life than you do", yet the targeted ads continue to be pretty useless. The ad on this site was an Amazon ad for a pool filter I bought on Amazon about a week ago. It should know that I don't need these for like another year now. The other was for a cruise, which I just got back from last week. Maybe some after-sun lotion or something vehicle-related for the fact I drove 6 hours each way? The ads I always see just never seem that smart given the terabytes of data that everyone supposedly has on me. They really just seem to reflect my search history.
jasonrhaas 4 days ago 5 replies      
I recently got tired of all the creepy targeted advertising, so I hit the nuclear option. Turned off all targeted ads on all social networks, cleared all cookies, all search histories, and installed the Disconnect Chrome extension. Also disabled all images in my email to stop those email trackers. I've always used an ad-blocker but I don't think that stops the data collection aspects of it.

I've always known this is happening and previously I didn't care because you figure if you are going to get advertised to, why not make it relevant, right? Well, it's more complex than that, here are the problems I have with this kind of data tracking:

1. Data can be used against you in ways you might not understand or have considered.

2. It's a waste of money and resources. Think of all the extra bandwidth, server space, and human time spent on just trying to sell people more shit.

3. I don't like the idea of people making money off my data without my knowledge.

4. Targeted ads might distract me and cause me to waste extra time online.

Bottom line -- I have a moral objection to the whole idea of it, and will do everything I can do stop it.

j_s 4 days ago 1 reply      
Michael Bazzell is an expert on (both sides of) Open Source Intelligence (OSINT: data collected from publicly available sources).


He maintains a virtual machine "pre-configured for online investigators" and a podcast.


komali2 4 days ago 2 replies      
>start with this neat and medium-scary site, which our friends at Gizmodo flagged, that shows you everything your browser knows about you the second you open it. (clickclickclick.click)

"Subject has entered the website. Subject has 4 cores..." followed by a bunch of notes about my mouse moving around, or me making the window inactive.

Hardly terrifying.

esnard 4 days ago 1 reply      
Have I been pwned[0] is missing from this list, and is an awesome tool to check whether some of your information has leaked in the past.

Passwords are, in my opinion, way more valuable than browser history.

[0]: https://haveibeenpwned.com/

losteric 4 days ago 0 replies      
Is the issue ad targeting, or data collection?

For me it's the latter. This is what Google thinks I like: http://imgur.com/a/g8Q7l - reasonably accurate, and I don't care if it's public or used for targeting non-intrusive ads. I like those topics, I buy things, go ahead and show me your products.

My objection is over the data used to learn those interests. I don't want Google tracking my search and page views across the internet, I don't want my credit card company tracking my purchase history, and I certainly don't want my ISP/phone provider tracking my "metadata".

I feel like this article glossed over that aspect. Google's "Ad Personalization" page controls what Google uses, not what it knows. In this age of state, corporate, and foreign surveillance/hacking, I'm far more concerned about the latter.

commenter98456 4 days ago 4 replies      
Why is targeted advertising acceptable? It is not OK for a seller to stalk potential buyers.

I get that everyone has accepted this as a "fact of life" but consider how TV advertising was done. The advertisement is tailored to the show you are watching and the time of day.

In other words, if I'm on a tech blog, advertise to me tech products, if I'm reading a cooking recipe, advertise kitchenware,grocery delivery and the like.

As a consumer, I am more likely to remember an advertised product if the AD is within the context of the web page I'm visiting.

Target pages, not users! Even without being as precise as I mentioned above, statistical approximations can be made, much like with TV ads (for example, "ycombinator visitors are likely to buy artisanal cookware compared to people visiting nytimes").

The problem at the end of the day is that users in general don't like to be tracked. Some may sacrifice privacy for convenience, but most would prefer if the sacrifice weren't necessary.

I think better advertising solutions are what is needed. It is unfortunate how much money is instead spent on technology to stalk users, as if using a complex computer system makes it more acceptable or ethical.

telesilla 4 days ago 0 replies      
Anecdote: I'm having dinner last night with a couple at their house and I'm advising one of them about getting an appliance for her partner - so we browse some options on her cellphone together and find something suitable.

Perhaps 30 minutes later, her partner was on his cellphone and gets an ad for the same appliance, he let us know because it was something he had been wanting. They were both on the same IP address via the router. While they initially had a good laugh, it was supposed to be a surprise.

It really did annoy me and when I explained what was happening they found it equally amusing and frightening. I am pretty sure both of them went to bed that night worried that the other was going to be getting ads for things they might be looking at on the internet.

jcoffland 4 days ago 1 reply      
Most of what the article complains about can be solved with an ad blocker. The rest by staying off of Facebook.
dublinben 4 days ago 0 replies      
This site also goes into a lot of detail about the digital tracks you leave online, and how to minimize them: https://myshadow.org/
whatsmyhandle 4 days ago 1 reply      
I generally consider targeted ads to be relatively harmless, with one major recent exception:

I'm starting to shop for an engagement ring, but since I live with my GF I have to do online research in Incognito mode so ads don't ruin what I hope to be a surprise someday.

File this under "things peeps didn't have to worry about 10-15 years ago"

pavement 4 days ago 0 replies      

 Please sign in to see what the internet knows about you.
Uh, actually, that's what I tell the internet about myself, for the most part.

It doesn't really impress me when I approach the internet and introduce myself by name. Of course the internet is going to remember me if I permit my cookies, request parameters, and IP address to linger.

Much more interesting is when the internet makes my identity without my having volunteered anything.

Oh well.

mirimir 4 days ago 0 replies      
Adversaries only know what you let them. But this article does little more than rehash well-known clickbait. And it offers little useful coaching.
squarefoot 4 days ago 0 replies      
" The ad on this site was an Amazon ad for a pool filter I bought on Amazon about a week ago. It should know that I don't need these for like another year now. The other was for a cruise which I just got back from last week. "

Advertising isn't just about what you really need, but also about what you will need more, or again, or earlier, because the commercial impressed the product need in your head. Neuromarketing is a science nowadays, and is pretty scary to me.

Some interesting content on the topic here: http://www.neurosciencemarketing.com/blog/home

mungoid 4 days ago 2 replies      
It seems so strange to me that there is such an ever increasing push to get people to click ads when I can't tell you a single person I know that ever actually clicks one on purpose. If I see an ad for something I just search it instead because I don't trust ads..

Except for one time years ago, when I clicked an ad on my own site for a drawing tablet and got my account suspended for cheating, even though I actually bought the tablet. I decided then I'm never gonna do ads on any of my sites.

I think the advertisers care more about manipulating companies into buying ads than about actually increasing those companies' sales.

callesgg 4 days ago 0 replies      
I find it interesting that the internet is talked about as a monolithic entity. Rather than as a bunch of different services and communities.
jszymborski 3 days ago 0 replies      
Here's my "completely not ergonomic but entirely satisfying if silent protest is your thing" solution:

- NoScript [0], First defence. Breaks 30% of sites, but you can selectively enable local scripts to ameliorate that.

- Self-Destructing Cookies [1], We all need cookies for logging in to things and etc..., but once we've closed the tab, we shouldn't have to keep those cookies with us. This means you have to log-in more often, but I prefer that. Prevents people from seeing your email/facebook/etc... when you're still logged in.

- uBlock Origin [2], You'll occasionally need to allow scripts you don't like because they're required for the proper presentation of the website. In those cases, you can depend on uBlock to get rid of awful trackers.

- HTTPSEverywhere [3], Because snoopers aren't always outside your network.

- DuckDuckGo as default search engine [4] and not-Google Chrome as your default browser, At the very least, you're not supporting the worst privacy offender on the net.

- Non-ad-supported/privacy-oriented service. In the paid category (recommended), I suggest mailbox.org, due to their track record and founding principles. In the free(-ish) category, ProtonMail.ch and tutanota.com.

- Clear Cookies/History on Browser Exit

- Forbid Third-Party Cookies

- And because I believe in magical faeries, DoNotTrack set to True

[0] https://addons.mozilla.org/en-US/firefox/addon/noscript/?src...

[1] https://addons.mozilla.org/en-US/firefox/addon/self-destruct...

[2] https://addons.mozilla.org/en-US/firefox/addon/ublock-origin...

[3] https://addons.mozilla.org/en-US/firefox/addon/https-everywh...

[4] https://duckduckgo.com

Jimmie_Rustle 4 days ago 2 replies      
Why is this on HN lol
Soti A $1B firm built in a basement bbc.com
383 points by ghosh  2 days ago   202 comments top 14
mabbo 2 days ago 12 replies      
> One problem Mr Rodrigues says the company has faced, is struggling to recruit enough good computer programmers.

Well yeah, he's trying to run a software business in Mississauga. As a developer who used to commute to the 'Saug every day: very few of us want to work there. No decent food, no after work fun, just suburb hell for miles around.

He's competing with the banks and Amazon downtown in Toronto proper. He's also competing with Google Waterloo 30 minutes in the other direction for people not interested in living in the city. And he's going to be competing with them on pay as well as location.

Edit: woo, this may be my most controversial comment ever based on the point swings. I welcome counterpoints! Discussion is always good.

yitchelle 2 days ago 2 replies      
"I don't think they realised that they were talking to just one guy in a basement, so when the person asked to speak to someone in sales I came back on the phone with a slightly different tone."

This takes the "Fake it till you make it" mantra perfectly!

2manyredirects 2 days ago 0 replies      
I met somebody the other day who works for a tiny start-up and said their MD frequently hires a couple of extra actors to make the office look busier when they have client meetings.
jliptzin 2 days ago 1 reply      
Yet another surprising aspect of this story is that someone born in Pakistan and now living in Canada is named Carl Rodrigues.
zachruss92 2 days ago 4 replies      
This is a very inspiring story and the quintessential manifestation of JFDI. It takes some serious guts to quit your job and say "I'm going to build a software product", with no ideas to start with. I hope Carl can keep growing Soti, or even start something else once he hands the reins to his managers.
plg 2 days ago 1 reply      
I don't get it --- why would a supermarket chain be interested in software that allows one to control a mobile phone from a computer?
cyberferret 2 days ago 2 replies      
Amazing story. Well done Carl. I don't think I would have had the strength of character to turn down an acquisition offer from Microsoft like he did.

Good to see a company with turnover in the $80M/yr range and still being 100% owned by him and his wife.

majani 2 days ago 2 replies      
Just to balance the hagiography out, the company has mostly bad reviews on Glassdoor:


sixQuarks 1 day ago 1 reply      
This company, while impressive, was not built in a basement. It merely started in a basement, which is way different.

That's like saying "Apple - A $700B firm built in a garage"

I'd like to know what the actual largest single-founder business is that's being run out of a basement

_nedR 1 day ago 0 replies      
I am still trying to wrap my head around the idea of a Roman Catholic Pakistani-born Canadian named Rodrigues.
sakshyamshah 1 day ago 0 replies      
In Nepal, with 100k USD, one can rent decent office space, offer workplace benefits and recreation, and employ 15+ professionals (both tech and non-tech) for two years. Yes, you can literally trade the salary of one engineer for an entire company's budget.
pavement 1 day ago 0 replies      
tl;dr: He [...] spent a number of years working as a consultant, before launching Soti in 2001. [B]ack in 2001 he [...] started to try to dream up something. [...] After a month of working[...], Mr Rodrigues had come up with [an] idea - a software system that allowed the user to control his or her mobile phone from their laptop. [M]ost people have never heard of the firm - because it sells its mobile technology software systems to companies instead of consumers
Siecje 1 day ago 0 replies      
What is the use case for controlling your phone from your computer? To enforce company policy on company phones?

Or Tech support? Like the Amazon Fire phone?

digitalshankar 2 days ago 0 replies      
Soti ONE platform, explained by Carl Rodrigues:


Chrome DevTools: Find unused CSS and JS code with the Coverage tab developers.google.com
240 points by HearMeRoar  3 hours ago   71 comments top 23
umaar 48 minutes ago 4 replies      
If you're interested in staying up to date with Chrome DevTools, I run this project called Dev Tips: https://umaar.com/dev-tips/

It contains around 150 tips which I display as short, animated gifs, so you don't have to read much text to learn how a particular feature works.

cjCamel 1 hour ago 2 replies      
From the same link, being able to take a full page screenshot (as in, below the fold) is also very excellent. I notice from the YouTube page description there is a further shortcut:

 1. Open the Command Menu with Command+Shift+P (Mac) or Control+Shift+P (Windows, Linux, Chrome OS). 2. Start typing "Screenshots" and select "Capture full size screenshots".
I needed this literally yesterday, when I used MS Paint to cut and paste a screen together like a total mug.

TekMol 1 hour ago 4 replies      
So I recorded my site for a while. Then sorted by unused bytes. What was on top?

Google's own analytics.js

err4nt 1 hour ago 0 replies      
Interesting tool, but even more interesting results. I just tried it on a simple, one-page website I built recently and there is not a single line of _code_ that's unused, yet it's still showing me 182 lines unused.

Things it seems to consider unused: `style` tags, if your CSS rule is on more than one line - the lines for the selector and closing tag.

There should be 0 unused lines since there are 0 unused rules, and the opening and closing `style` tags are DEFINITELY being used, so until these false results get weeded out it will be noisy to try to use this to track down real unused lines.

wiradikusuma 40 minutes ago 0 replies      
How do I exclude "chrome-extension://" and "extensions::" from the list? I can't do anything with them anyway, so it's just clutter.
genieyclo 51 minutes ago 0 replies      
Is there an easy way to filter out extensions from the Coverage tab, besides opening it in incognito mode?
orliesaurus 2 hours ago 3 replies      
Chrome DevTools was the first reason I started using Chrome. I wonder if HN has any better alternatives to suggest? I'm curious to see what I could be missing out on!
dethos 22 minutes ago 0 replies      
Is there anything similar for Firefox? On the developer tools or as an external extension?
hacksonx 1 hour ago 1 reply      
"{ Version 57.0.2987.98 (64-bit)

 Updates are disabled by your administrator

Guess I will only be able to comment on these when I get home. The full screen screenshot feature is going to be a welcomed addition. I will especially have to teach it to the BA's since they always want to take screenshots to show to business when design is finished but test is still acting up.

laurencei 2 hours ago 2 replies      
My vague recollection of the Google event where this was first announced (was it late 2016 or early 2017?) was that it would "record" your site usage for as long as you were "recording" - and give the report at the end.

But this now sounds like a coverage tool for a single page?

Does anyone know if it can record over multiple pages and/or application usage (such as an SPA)?

KevanM 2 hours ago 2 replies      
A single-page solution for a site-wide issue.
mrskitch 1 hour ago 0 replies      
I wrote a tool to automate this (right now it's just JavaScript coverage) here: https://github.com/joelgriffith/navalia. Here's a walk through on the doing so: https://codeburst.io/capturing-unused-application-code-2b759...
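For anyone post-processing this kind of data themselves: Puppeteer's Coverage API returns, per script, the source text plus the byte ranges that actually executed, and the "unused bytes" number the Coverage tab shows falls out of that directly. A minimal sketch — the `{ url, text, ranges }` shape matches Puppeteer's documented CoverageEntry, and treating ranges as non-overlapping (Puppeteer reports them merged) is an assumption:

```typescript
interface CoverageEntry {
  url: string;
  text: string; // full source of the script or stylesheet
  ranges: { start: number; end: number }[]; // byte offsets that were executed
}

// Bytes never executed in one script, assuming ranges don't overlap.
function unusedBytes(entry: CoverageEntry): number {
  const used = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
  return entry.text.length - used;
}

// Page-wide unused percentage across all recorded scripts.
function unusedPercent(entries: CoverageEntry[]): number {
  const total = entries.reduce((sum, e) => sum + e.text.length, 0);
  const unused = entries.reduce((sum, e) => sum + unusedBytes(e), 0);
  return total === 0 ? 0 : (100 * unused) / total;
}
```

Feeding this the entries from a scripted walk through several pages (e.g. between `startJSCoverage` and `stopJSCoverage` calls) would approximate the multi-page report people ask about elsewhere in this thread.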
TekMol 1 hour ago 0 replies      
In the CSS file view, isn't it impractical that it marks whitespace as unused? That makes it much harder to find rules that are unused.
indescions_2017 1 hour ago 0 replies      
I like this, and it's addictive ;) Any way to automatically generate output that consists of the 100% essential code subset?

As suspected: a typical medium.com page contains approx 75% extra code. Most egregious offenders seem to be content loader scripts like embedly, fonts, unity, youtube, etc.

On the other hand, besides net load performance, I'm not really worried about the "coverage" metric. Compiling Unreal Engine via Emscripten to build tappy dodo may result in 80%+ unused code, but near-native runtime is worth the tradeoff.

Try, for example: http://webassembly.org/demo/

i_live_ther3 2 hours ago 2 replies      
What happened to shipping everything in a single file and letting cache magic happen?
rypskar 2 hours ago 1 reply      
Excellent timing. I had given up on finding a good tool for coverage of JS and CSS, and was right now using audits in Chrome to try to find unused CSS, plus searching through the code to find unused JS on our landing page. Even if it is hard for a tool to find everything that is unused on a page, it will show what is used, so we know what we don't have to check in the code.
TekMol 1 hour ago 0 replies      
Would be super useful if it recorded over multiple pageviews. To find unused CSS+JS and to measure the coverage of tests.

But it seems to silently forget what happened on the first page as soon as you go to the second page.

geniium 1 hour ago 0 replies      
Very nice! The Coverage feature is something I have been waiting for since a long time!
arthurwinter 2 hours ago 2 replies      
It'd be awesome if there was a button to download a file with the code that's used, and the code that's unused, instead of just having a diff. Hint hint :)
foodie_ 2 hours ago 1 reply      
Hurray! Now they just need to make it part of an analytics program so we can let the users tell us what code is never running!
wmkthpn 2 hours ago 2 replies      
Can this be useful when someone uses webpack?
_pmf_ 1 hour ago 0 replies      
When your developer experience depends on how much free time Chrome developers have ...
ajohnclark 2 hours ago 0 replies      
       cached 12 July 2017 15:11:01 GMT