hacker news with inline top comments    .. more ..    11 Apr 2017 Best
Snowden: NSA just lost control of its Top Secret arsenal of digital weapons twitter.com
644 points by Yrlec  2 days ago   295 comments top 32
cyphunk 2 days ago 4 replies      
A good time to remember the official US Intelligence Community statement and policy/lie on 0days, as given post-heartbleed:

 When Federal agencies discover a new vulnerability in commercial and open source software (a so-called "zero day" vulnerability, because the developers of the vulnerable software have had zero days to fix it), it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose.


spydum 2 days ago 1 reply      
Why is everybody posting/curious about the language of the blog post and not the contents of the file?

I've looked through some of the contents. Some look incredibly old, but others target odd things: lots of cPanel. My only guess is they take the low-hanging fruit to build "jump box" type systems?

Some odd examples: ElegantEagle/toffeehammer, which focuses on cgiecho for RCE. The thing is, a CVE for this case was released maybe a month ago: http://www.cvedetails.com/cve/CVE-2017-5613/

So if this dump was from 2013, why did the CVE recently pop up? Or is that coincidence?

sillysaurus3 2 days ago 9 replies      
It's pretty fascinating to read the Shadow Brokers' posts. They have to write something, since they can't just say "I work for Russia and we're reminding America that they're not invulnerable." So they have to come up with all sorts of contrived reasons for why they're doing this, complete with broken English (to fool stylometry detection) that walks the fine line between believable and preposterous. Someone spent a lot of work making it look so terrible.
tenaciousJk 2 days ago 0 replies      
He goes on to further state:

"Quick review of the #ShadowBrokers leak of Top Secret NSA tools reveals it's nowhere near the full library, but there's still so much here that NSA should be able to instantly identify where this set came from and how they lost it. If they can't, it's a scandal."

itchyjunk 2 days ago 1 reply      
Asking a president to do x, y, or z by making this type of public statement probably implies it's geared towards the immediate readers and not some leader who might read it.

The security agencies have probably made a lot of enemies over the years, so it's not clear who benefits from this, either financially or as an ego boost.

The internet is definitely bigger than what most people might have predicted 20 years ago, so it's not really surprising to see as much or even more power struggle there than on real-world battlefields.

Since every side has propaganda to peddle, I personally can draw no reasonable or coherent conclusions about what kinds of decisions are shaping the world I live in. But I am nonetheless curious to see how this all plays out in the coming years.

There is a related post on HN about this. [0]


[0] https://news.ycombinator.com/item?id=14066596

iandanforth 2 days ago 1 reply      
Can someone remind me why Snowden would be in a position to comment on whether this release comprises a full or partial set of hacking tools? Specifically, does this imply that his cache of data included a list of these tools, or was his day-to-day job such that he would normally have been in contact with this toolset?
hl5 2 days ago 2 replies      
Obviously, Perl is the NSA's top language choice due to its built-in support for obfuscation and job security.
akud 2 days ago 2 replies      
The content reads pretty clearly like a native English speaker imitating immature hacker-speak. It comes across as if it were written by a script kiddie; that may be intentional.
r721 2 days ago 1 reply      
Nicholas Weaver: "Overall, though, it looks like the auction file from Shadow Brokers is mostly a bust, better stuff in the free file."


the grugq: "Calling it now: the first ShadowBrokers dump was an expensive signal. This latest one was not (expensive, that is.)"


theocean154 2 days ago 4 replies      
Looking through some of the code and some of the docs, these look old. In the absence of a lot of time, or with some docs missing, I'm not sure how usable these things are.
mcintyre1994 2 days ago 1 reply      
From the Medium post linked (https://medium.com/@shadowbrokerss/dont-forget-your-base-867...)

- Dont care if you swapped wives with Mr Putin, double down on it, Putin is not just my firend he is my BFF.

- Dont care if the election was hacked or rigged, celebrate it so what if I did, what are you going to do about it.

This has got to be a fake group trying to discredit Trump, right? I don't like him or what he's doing, but surely his supporters don't subscribe to at least the latter view there?

strictnein 1 day ago 0 replies      
> "NSA just lost control of its Top Secret arsenal of digital weapons"

This is just inaccurate, or at least purposefully misleading. The NSA did not just lose control of its "Top Secret arsenal of digital weapons".

They "lost control" of mainly a bunch of old exploits whose release won't matter much, because anyone still running this old junk isn't going to update their servers because of this news.

codezero 2 days ago 1 reply      
A lot of the scripts appear to have been written by the same person, or is that just me reading into it? They have a distinct comment style in both Python and Perl.

Also, a lot of the tools appear to instruct people to paste various things into them. I find it unlikely that a single person wrote all the tooling for the NSA, but who knows.

fixxer 2 days ago 1 reply      
I don't know anything about the value of this crap, but I do find it interesting to grep through it looking at the IPs (which I presume are compromised machines from which they initiate attacks). See `./bin/pyside/targets.py`
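Pulling candidate dotted quads out of a directory tree for a quick look is straightforward. A minimal sketch of my own (not part of the dump's tooling; the naive regex will also match version strings that happen to look like addresses):

```python
import re
from pathlib import Path

# Naive IPv4 pattern; out-of-range octets are filtered afterwards
IP_RE = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b")

def extract_ips(root):
    """Walk a directory tree and collect dotted-quad strings
    whose octets are all in 0..255, de-duplicated and sorted."""
    found = set()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for m in IP_RE.finditer(text):
            if all(0 <= int(octet) <= 255 for octet in m.groups()):
                found.add(m.group(0))
    return sorted(found)
```

From there, the list can be fed to whois or passive-DNS lookups to see what the hosts were.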
remarkEon 2 days ago 3 replies      
I haven't read enough of the broken English to take a gander at the native language of the authors of that... manifesto. Anyone have a good guess? There are some pretty common mistakes throughout ("peoples" for "people", "Americans' having" for "Americans have").
Animats 2 days ago 2 replies      
This stuff looks old. There are versions for Solaris and SCO Unix.
jasonhansel 2 days ago 1 reply      
I wonder what this is for: https://github.com/x0rz/EQGRP/blob/master/Linux/bin/strangeF...

It looks like it's searching for files/directories with unusual names (like ". ") that system administrators wouldn't normally notice.
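That behavior (flagging names like ". " that blend into a casual `ls`) is easy to approximate. A minimal sketch of the idea, not a reconstruction of the actual tool:

```python
import os
import re

# Names that render like '.' or '..' in a directory listing,
# or that contain trailing whitespace or control characters
SUSPICIOUS = re.compile(r"^\.{1,3}\s+$|\s$|[\x00-\x1f]")

def find_strange(root):
    """Yield paths whose basename would be easy to overlook."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if SUSPICIOUS.search(name):
                yield os.path.join(dirpath, name)
```

An attacker's dropbox named ". " sorts right next to the real "." entry, which is presumably the point.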

znfi 1 day ago 0 replies      
I have a bit of a hard time understanding why so many people think this was written by Russians. Obviously the grammar is not correct, but it would seem very strange to read any significance into that; it seems more plausible that it was done in an attempt to hide the author's identity. (My spontaneous feeling was that it was written by Jar Jar Binks, not Russians, for whatever that's worth.)

I'm not from the US and haven't followed the news from there recently, but from what little I have seen, much of the actual content of the message does seem to reflect the feelings of Trump's "base"? Or would people more familiar with US politics say this is incorrect?

i336_ 22 hours ago 0 replies      
Excuse me while I just...


Not because I'm especially interested in the tools (although, granted, I have not had a look at any of them yet), but because I always wished this could be given to everyone.

Also, for a moment there, I was concerned that 7z was insecure and that the passphrase had been brute-forced. Apparently not! Very nice.

jorblumesea 2 days ago 2 replies      
Is there any doubt that the Shadow Brokers are Russian and working for Russian interests? The timing of the releases, the international events involving both countries, and the pointed messaging are far too suspicious to be considered coincidental.
eps 2 days ago 1 reply      
Likely a response to the Syrian airbase tomahawking from a couple of days ago?

Russians are known for what they themselves call "asymmetrical answers", so this seems to fit the pattern.

0x38B 2 days ago 0 replies      
Like others are saying, there's a mismatch between the overall sentence structure and progression, which strikes me as more native, and the mistakes. I don't buy the verb misconjugation especially; a Russian ESL learner at that level would get that right more often than not.

Source: many conversations with Russians learning English (and near-native Russian myself)

hl5 2 days ago 0 replies      
Regardless of the source, full disclosure works. Whoever is responsible for releasing this material is also improving computer security for everyone. Thank you.
zengid 2 days ago 0 replies      
All of this spy vs spy intrigue makes my head hurt
mavdi 2 days ago 5 replies      
Given the latest world events, I've personally come to realise that security agencies play an important role in keeping us safe, whether from external entities or from ourselves.

This is a disaster, in my (current) opinion. We tend to dismiss the work the likes of the NSA do, without thinking much about what would happen if they didn't do it. Snowden categorically dismissing anything the NSA does just means he's a deluded idealist, much like I used to be.

shitgoose 2 days ago 0 replies      
shadowbrokerss remind me of this guy:


100% American from Georgia, who sometimes loses his Russian accent and slips into perfect English :)

Harken 2 days ago 1 reply      
"We voted for you, comrade. Here is old malware from deepnet kiddy porn site post for to confuse."

Could be Russia pissed about the puppet twitching without permission, or could be Bannon (via Cambridge Analytica?) pissed about the puppet twitching without permission.

Twitch, puppet, twitch!

elastic_church 2 days ago 1 reply      
ShadowBroker's blog posts always crack me up
theocean154 2 days ago 1 reply      
ElegantEagle. nice
oculusthrift 2 days ago 4 replies      
Remember that thousands of paid Russians were used to disrupt our election on sites like Reddit. I wouldn't be surprised if a few leaked onto this site, especially via green accounts.
lngnmn 1 day ago 0 replies      
Looks like bullshit. It does not match the Vault 7 leak, which is supposed to be from the very same NSA.

It is the Russians. A classic example of the Dunning-Kruger effect: in a generally low-IQ, primitive, criminalized cultural environment, they truly believe that what is enough to fool everyone around them, including their bosses (who are supposed to be really smart), will surely fool everyone else.

This is the phenomenon of the negative selection of a cancer-like corrupted society (which has run for three decades already) at work. They are literally decades behind the technological progress and culture of modern civilization.

They simply have no idea what level of intelligence and sophistication can be found in places with decades of consistent high-IQ selection, like companies staffed with the top 5% of MIT/Stanford/Caltech/Berkeley graduates, or what that kind of organization can do (think of Apple, Google, etc.).

A high-tech US govt agency would never have had such crap in its folders. They are not a bunch of overconfident Russian punks, disconnected from reality and self-deluded by their own primitive propaganda.

The reference D compiler is now open source dlang.org
557 points by jacques_chirac  3 days ago   301 comments top 26
iamNumber4 3 days ago 7 replies      
Good news indeed.

Switched to D 4 years ago and have never looked back. I wager that you can sit a C++/Java/C# veteran down and say "write some D code; here's the manual, have fun," and within a few hours they will be comfortable with the language and be fairly competent D programmers. There's very little FUD surrounding switching to yet another language with D.

D's only issue is that it lacks general adoption, which I'm willing to assert is only because it's not at the forefront of the cool kids' language-of-the-week, which is a good thing; new does not always mean improved. D has a historical nod to the languages of the past: it's trying to improve on the strengths of C/C++, smooth out the rough edges, and adopt more modern programming concepts. Especially in trying to be ABI compatible, it's a passing of the torch from the old guard to the new.

Regardless of your thoughts on D, my opinion is that I'm sold on it; it's here to stay. In 10 years D will still be in use, whereas the fad languages will just be footnotes in computer science history: nice experiments that brought in new ideas but were just too far out on the fringes, limiting themselves to the "thing/fad" of that language.

jordigh 3 days ago 2 replies      
Walter, thank you so much for finally doing this! I am so happy that Symantec finally listened. It must have been really frustrating to have to wait so long for this to happen. I have really been enjoying D and I love all the innovation in it. I'm really looking forward to seeing the reference compiler packaged for free operating systems.

Thanks again, this news makes me very happy!

tombert 3 days ago 2 replies      
Honestly, since I'm slightly psychotic about these things, this is kind of huge to me. Part of the reason I never learned D was that the compiler was partly proprietary.

Now I have no excuse to avoid learning the language, and that should be fun.

WalterBright 3 days ago 4 replies      
And best of all, it's the Boost license!

Here it is:


vram22 3 days ago 1 reply      
Good to hear the news, and congrats to all involved.

Since I see some comments in this thread asking what D can be used for, or why people should use D, I'm putting below an Ask HN thread that I started some months ago. It got some interesting replies:

Ask HN: What are you using D (language) for?


JoshTriplett 3 days ago 2 replies      
Interesting change! Before, people had a choice between the proprietary Digital Mars D (dmd) compiler and the GCC-based GDC compiler. And apparently, since the last time I looked, there's also the "LDC" compiler, which uses the already-open dmd frontend but replaces the proprietary backend with LLVM.

I wonder how releasing the dmd backend as Open Source will change the balance between the various compilers, and what people will favor going forward?

brakmic 3 days ago 4 replies      
Please don't get me wrong, as I don't want to start a flame war here, but why do they call D a "systems programming language" when it uses a GC? Or is it optional? I'm just reading through the docs; they do have a command line option to disable the GC, but anyway... this GC thing is, imho, a no-go when it comes to systems programming. It reminds me of Go, which started as a "systems programming language" too but later shifted to a more realistic "networking stack".


bluecat 3 days ago 0 replies      
Something I always thought was cool about dlang was that you can talk to the creator of the programming language on the forums. I don't write much D code as of now, but I always visit the forums everyday for the focused technical discussions. Anyways, congrats on the big news!
saosebastiao 3 days ago 2 replies      
This was something that always rubbed me the wrong way about the language, and it was an impediment to adoption for me (for D, but also Shen and a few others). In this era, there is no excuse for a closed-source reference compiler (I couldn't care less whether it's the reference compiler or not; I just won't use a closed one). I'm surprised it took this long; it seems like D has lost most of its relevance by now... relevance it could have kept with a little more adoption. I wonder if it can recover.
softinio 3 days ago 4 replies      
What's special about D? Why should I learn it?
jacquesm 3 days ago 1 reply      
That is excellent news :)

Congratulations Walter, now let's see D take over the world.

petre 2 days ago 0 replies      
Is there support for BigFloat in D/phobos or any auxiliary library? I was playing around with D sequences and wrote a D program that calculates a Fibonacci sequence (iterative) with overflow detection that upgrades reals to BigInts. I wanted to also use Binet's formula which requires sqrt(5) but it only works up to n=96 or so due to floating point precision loss.
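The precision ceiling is easy to reproduce in any language with IEEE floats. A sketch in Python (whose floats are 64-bit doubles rather than D's 80-bit reals, so the closed form typically breaks down in the low 70s instead of around n=96):

```python
import math

def fib_iter(n):
    # Exact: Python ints are arbitrary precision, so no overflow
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    # Binet's closed form; limited by float precision in sqrt(5) and phi
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))

# Find the first n where the closed form drifts off the exact value
n = 0
while fib_binet(n) == fib_iter(n):
    n += 1
print("Binet's formula first disagrees at n =", n)
```

Past roughly fib(78) the exact value no longer even fits in a double's 53-bit mantissa, so a BigFloat (or exact integer) path is unavoidable for large n.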
zerr 3 days ago 4 replies      
Anybody worked on performance critical stuff in D? How good is its GC?
xtreak_1 2 days ago 0 replies      
Thanks a lot! I am also consistently amazed that the forum performs like any other day even though the story is at the top of HN.
Samathy 3 days ago 0 replies      
Amazing! D has really been exciting me for the past couple of years. It has great potential.

Hopefully a fully FOSS compiler will bring it right into the mainstream.

petre 3 days ago 1 reply      
This is great news. I was using LDC because the DMD backend was proprietary. Thank you Walter, Andrei and whoever made this possible.
noway421 2 days ago 0 replies      
It's really surprising that to this day there are languages in use whose reference implementations are closed source. The optimizations and collaboration that become possible when it's open are invaluable.
virmundi 3 days ago 1 reply      
Have there been any new books out there for learning D? I have one that still references the Collection Wars (Phobos vs. Native). Once I saw that, I put the book back on the shelf and stuck with Java.
snackai 3 days ago 0 replies      
This is big. I've heard from many people that this hinders adoption.
tbrock 3 days ago 0 replies      
Is anyone else surprised that it wasn't open source before?
herickson123 3 days ago 0 replies      
I wanted to play around with D using the DMD compiler, but it's unfortunate that I have to install VS2013 and the Windows SDK to get 64-bit support on Windows. I've installed VS in the past and found it to be a bloated piece of software; that's not something I'm willing to do again.
imode 2 days ago 0 replies      
What is it like to 'bootstrap' D? I know that in many languages you can forgo the standard library and 'bootstrap' yourself on small platforms (C being the main example).
spicyponey 2 days ago 0 replies      
Tremendous effort. Congrats.
nassyweazy 2 days ago 0 replies      
This is awesome news!
joshsyn 3 days ago 4 replies      
Please get rid of GC :(

I want to have smart pointers instead

sgt 3 days ago 0 replies      
Off topic question; are you related to https://en.wikipedia.org/wiki/Jacques_Chirac ?
The Utter Uselessness of Job Interviews nytimes.com
538 points by tomek_zemla  1 day ago   397 comments top 49
schmit 1 day ago 11 replies      
I find it quite problematic that researchers get to talk about their own research and present it as facts without anyone taking a critical look.

Over time I've become more skeptical about this kind of psychology research (as more studies fail to replicate), and, as is often the case, the sample size here is quite small (76 students, split across 3 groups) for predicting something as noisy as GPA. It is unclear to me that one would be able to detect reasonable effects.

Furthermore, some claims that make it into the piece are at odds with the data:

> Strikingly, not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they got to know the interviewee slightly higher on average than those who conducted honest interviews.

Yet Table 3 in the paper shows that there is no statistical evidence for this claim, as the effects are swamped by the variance.

My point is not that this article is wrong; verifying or debunking the claims would take much more time than my quick glance. But that ought to be the responsibility of the newspaper, not of individual readers.

Politicians don't get to write about the successes of their own policies. While there is a difference between researchers and politicians, I think we ought to be a bit more critical.
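The sample-size worry can be made concrete with a back-of-the-envelope power calculation. A minimal sketch using a normal approximation to the two-sample t-test; the alpha level and the Cohen-style effect sizes are illustrative assumptions of mine, not figures taken from the paper:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(d, n_per_group, z_alpha=1.959964):
    """Approximate power of a two-sided two-sample z-test at alpha=0.05
    for standardized effect size d (Cohen's d) with n per group."""
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    return (1.0 - normal_cdf(z_alpha - ncp)) + normal_cdf(-z_alpha - ncp)

# ~76 students split across 3 groups -> roughly 25 per group
for d in (0.2, 0.5, 0.8):  # Cohen's small / medium / large
    print(f"d={d}: power ~ {two_sample_power(d, 25):.2f}")
```

With ~25 per group, power is near 0.1 for small effects and around 0.4 for medium ones, so only large between-group differences would reliably show up, which supports the comment's skepticism.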

gatsby 1 day ago 11 replies      
Laszlo Bock (former SVP of People at Google) did a great job summarizing decades of research around structured interviewing in his book 'Work Rules!'

For a quick reference, the two defining criteria for a structured interview are:

1.) They use one or several consistent set(s) of questions, and

2.) There are clear criteria for assessing responses

That second point is really important. You can't just ask candidates the same sets of questions and call the process structured: you need to understand what a "one-star" response vs. a "five-star" response actually looks or sounds like. Training and calibrating all of the interviewers in a large company around a similar rating system is nightmarish, so most companies don't bother.

The book also outlines that pairing a work sample with a structured interview is one of the most accurate methods of hiring.

If anyone is interested in some in-depth structured interview questions or work sample ideas, feel free to email me. I've spent the last few years working on a company in the interviewing space and would love to chat.

uncensored 1 day ago 14 replies      
How do you know that the person you'll marry won't cheat on you and won't leave you in hard times? If you apply the methods we use today for interviews, you'll end up with a 50/50 chance at best: a coin toss.

Yet why do some marriages last forever (till death do us apart) while others fail miserably or crumble even after 20 years?

The search for the global optimum cannot be performed by asking a set of questions. I argue that it cannot be done consciously; it's a gut/instinct thing. If you have a mechanical approach, anyone can game the system and get the job, because humans can be like chameleons, presenting themselves as the right candidate, and they can study for the interview. The only way, IMO, is to have that third eye or whatever you call it... instinct, gut feeling, etc.

The problem with this conclusion is that instinct and sexism/racism are often conflated.

No good answer.

ravitation 1 day ago 1 reply      
I have some major issues with their conclusions... and the title of the article (which is mostly nonsensical clickbait).

The real conclusion should be that "unstructured interviews provide a variable that decreases the average accuracy of predicting GPAs, when combined with (one) other predictive variable(s) (only previous GPA)."

This conclusion seems logical. When combined with an objective predictive measure of a person's ability to maintain a certain GPA (that person's historical ability to maintain a certain GPA), a subjective interview decreases predictive accuracy when predicting specifically a person's ability to maintain a certain GPA.

To then go on to conclude that interviews then provide little, to negative, value in predicting something enormously more subjective (and more complicated), like job performance, is absurd - and borderline bad science.

There are numerous (more subjective) attributes that an unstructured interview does help gauge, from culture fit to general social skills to one's ability to handle stressful social situations. I'd hypothesize all of these are probably better measured in an unstructured (or structured) interview than in most (any) other way. To recommend the complete abandonment of unstructured interviews (which is done in the final sentence of the actual paper) is ridiculous.

hobls 1 day ago 4 replies      
I've been a programmer for a bit over ten years. I've worked at scrappy little startups, midsized companies, now for a tech giant for a few years. The engineers I work with at the tech giant are consistently better engineers than my other coworkers have been, and I credit the very structured interview process. We're trained to ask specific questions, look for specific types of answers, and each interviewer is evaluating different criteria.

Also, it is quite often not the technical questions that end up making us decide not to hire someone. That's just one area. I know it's the part that sticks out, and candidates give a lot of weight to it in their memory of the interview, but you shouldn't just assume it was that your whiteboard code wasn't quite good enough. I actually don't think that's the most common thing we give a "no hire" for.

Spooky23 1 day ago 1 reply      
It's the whole process that's useless.

Remember a few years ago when this forum was drowning in advice about how to find and hire "10x" people? 98% of employees were useless in the face of the 10xer.

The reality is, most of the time screening for general aptitude, self-motivation and appropriate education is good enough.

I've probably built a dozen teams where 75% of the people were random people who were there before or freed up from some other project. They all worked out. IMO, you're better off hiring for "smart and gets things done" and purging the people who don't work out.

tptacek 1 day ago 2 replies      
Take a second to read about the experiments this author conducted. They included:

Dummy candidates mixed in with the interview flow that gave randomized answers to questions (interviews were structured to somewhat blind this process), and interviewers lost no confidence from those interviews.

Interviewers, when told about the ruse, were then asked to rank between no interview, randomized interview, and honest interview. They chose a ranking (1) honest, (2) randomized, (3) no interview. Think about that: they'd prefer a randomized interview to make a prediction with over no interview at all.

Of course, the correct ranking is probably (1) no interview, (2) a tie between randomized and honest. At least the randomized interview is honest about its nature.

ordinaryperson 1 day ago 0 replies      
The problem is that GPA itself isn't necessarily a valid data point. It's less fallible than "gut instinct", as the author here seems eager to claim, but personality type can be more important than the ability to memorize facts.

I'd rather hire a programmer who knew less and could get along with others than a master dev who's a total a-hole.

rb2k_ 1 day ago 3 replies      
It should probably have been titled "The Utter Uselessness of Unstructured Job Interviews", because that's the kind of interview the author criticizes.

In my personal experience, structured interviews can be very helpful in determining a candidate's abilities.

numinary1 1 day ago 1 reply      
This discussion misses an important element, the skill of the interviewer. It is unsurprising that unskilled interviewers' assessments are poor predictors of future performance. It would be interesting to measure the accuracy of interviewers who have had years of experience interviewing, hiring, and managing people.

Here's how I think it works. Skilled interviewers are biased toward rejecting candidates based on any negative impression. Structured interviewing has the same effect. It's the precision versus recall tradeoff. For this use case only precision matters. Extremely low recall is fine.

Also, in the GPA prediction example, the interviewer is penalized for predicting a low GPA for a person who performed well. But in hiring, there is no penalty for failing to hire someone who would have performed adequately.

(Yes, I understand there is an implicit assumption in my argument that candidates are not in short supply, but that's usually true, certainly at Google)

dkarapetyan 1 day ago 0 replies      
> The key psychological insight here is that people have no trouble turning any information into a coherent narrative. This is true when, as in the case of my friend, the information (i.e., her tardiness) is incorrect. And this is true, as in our experiments, when the information is random. People can't help seeing signals, even in noise.

People see patterns where there are none. I think this is fundamentally why humans fail at statistics: if every fiber of your being wants to see patterns, then you will see patterns. It's probably why people hallucinate in sensory deprivation tanks as well; the brain will make up patterns just so it can continue to see them.

The paragraph right after follows up with the statistical failure that pattern seeking leads to:

> They most often ranked no interview last. In other words, a majority felt they would rather base their predictions on an interview they knew to be random than to have to base their predictions on background information alone.

So people would rather do busy work in order to continue to satisfy established pattern seeking habits than figure out a better way.

redthrow 1 day ago 1 reply      
Matt Mullenweg advocates audition/tryouts instead of job interviews.


santoshalper 1 day ago 0 replies      
As someone who has worked at every level of IT (startup to Fortune 500 executive), hired thousands of people, and personally interviewed hundreds of candidates of all levels of experience, the conclusion I have come to is that interviews are almost entirely worthless.
akhilcacharya 1 day ago 2 replies      
It's interesting, because I'd argue that the best companies in tech have interviews that are very structured and predictable.
bahmboo 1 day ago 0 replies      
When interviewing for smaller orgs you do have to answer the question: can I work with this person every day? That's subjective and arguably harder to predict than technical performance, and oftentimes more important.
vonnik 1 day ago 1 reply      
The traditional recruiting and hiring process is broken. I say this as a former technical recruiter. I wrote about the problems of recruiting for a closed-source startup here:


I mention closed-source for a reason. For technical hiring, there is nothing better than open source. Open-source projects allow engineers and their potential employers to collaborate in depth over time. The company can experience whether the engineer is competent, reliable and friendly. The engineer can judge the team's merits in the same way. And they can both decide whether the fit is right.

Closed-source and/or non-engineering jobs are the opposite. You get a resume, a Github repo if you're lucky, and a half-day's worth of interviews and tests. Then you roll the dice on that imperfect information.

This is one reason why a lot of recruiting and hiring happens through the networks of people that a company can tap into. It may seem corrupt or nepotistic, but the advantage of those referrals is that someone with more information than you is willing to stake their reputation on a candidate's performance.

Large companies with lots of historical data have the opportunity to train algorithms to learn how job applications and long-term performance/flight risk/etc. actually correlate. From what I can tell, most haven't.

m-j-fox 1 day ago 2 replies      
Known useless indicators:

* Resumes

* Skills tests (hacker rank)

* Whiteboard interviews

* Unstructured interviews

* Employee referrals

No wonder headhunters have such a good business. Not that they're more discriminating, but they can pretend to be the solution to an intractable problem.

Boothroid 1 day ago 3 replies      
But surely structured interviews just test a candidate's ability to improvise plausible stories? Whether those stories are truthful is a different matter.
daenz 1 day ago 2 replies      
As someone with many interviews coming up in the near future, this scares me. It's easy to get in a self-conscious feedback loop when you know every behavior, response, and gesture is being fed into a fundamentally irrational character-judging process.

The best interview I've ever been on was one for a young startup. They gave essentially a homework problem, a day to solve it, and then in the interview we talked about the problem and my solution. The worst interview I've been on was sitting in front of multiple engineers as each one threw out a random CS question (from seemingly the entire space of CS) and asked me to talk intelligently about it. When I seemed unsure of myself, they glanced around nervously and disapprovingly.

Interviews are the worst. I've spent my time trying to bolster my OSS projects, so that I can point to them as evidence of my competence, but I can't help but prepare for the worst anyways.

exabrial 1 day ago 1 reply      
They're useless if it's an attempt to prove you know more than them about some algorithm that's been implemented 150 times (every job interview in California). I'd rather work with someone pleasant, hard working, and concerned with everyone's well being.
douglasjsellers 1 day ago 0 replies      
All that this article says is that past performance, in terms of GPA, is the best predictor of future performance - rather than a 30 minute interview predicting future performance. This seems like a basic truism to me and the main lesson that tech hiring processes can take away from this article.

In my experience (having hired > 100 engineers) one of the basic problems that tech hiring, as a whole, has is that it misunderstands the point of a technical interview. Organizations and hiring managers see the interview process as a way of improving the brand of the engineering organization - "We have super high standards and to prove this our interview process is really hard - therefore if you think you meet these standards you should apply". This leads to the current interviewing trends of super academic/puzzle/esoteric technology based interviews. Applicants leave those interviews saying that it was super hard reinforcing the brand messaging (classic marketing).

Rather, in my experience, the best results come from viewing the hiring/interviewing process for what it is - an attempt to predict future performance (and specifically performance at your organization) using a variety of techniques, of which interviewing is one. In this context, of attempting to predict future performance, interviews are not a great tool - better to look at specific past performance.

Past performance is always the best predictor of future performance and the point of a technical interview, in my mind, is to critically inspect that past performance to understand how closely it relates to the future performance that your organization needs.

inopinatus 1 day ago 3 replies      
Articles like this - and the comments that follow - always overlook the primary value of job interviews, which to me is answering the question: "Do I want to work for this company?"
woodandsteel 1 day ago 0 replies      
My basic problem with interviewing is you are observing behavior in one sort of situation, and on that basis trying to predict behavior in a very different sort of situation, namely job performance, which is actually a whole bundle of different types of situations.

It seems like it would be much better to instead put the job prospect in situations that model the sorts that would come up at work.

cricfan 1 day ago 2 replies      
I wonder if we can extrapolate to marriages and to how arranged marriages (at least in India) have a higher success rate. Usually the parents on either side decide on a match based on family background, financial stability, education background, etc., rather than letting the to-be-married decide.
tempodox 18 hours ago 0 replies      
> So great is people's confidence in their ability to glean valuable information from a face to face conversation that they feel they can do so even if they know they are not being dealt with squarely. But they are wrong.

If people utterly refuse to learn from proven mistakes, then all hope is lost. Einstein was right, human stupidity is infinite.

damagednoob 1 day ago 1 reply      
> In one experiment, we had student subjects interview other students and then predict their grade point averages for the following semester.

Not sure how using inexperienced interviewers proves anything. Would have been more interesting to have lecturers interview the students.

andrewstuart 1 day ago 1 reply      
I am a recruiter. Recently, I started working with a new employer. We could not get anyone through their interview process. Eventually I asked the HR person to clarify precisely what was being asked in these interviews.

She said that, essentially, the interviews were ad-hoc, with the interviewer just coming up with whatever questions they thought relevant based on the resume - often asking the candidate to go through their career history.

I explained that the only effective approach I have found with recruiting is to have a set of pre-defined questions, and each question is specifically designed to give insight into how the candidate meets the pre-defined job requirements. Very much like software development, where test cases are related to software requirements.

I explained also that it is not critical to stick precisely to these questions, but that should mostly be the case - interviews are human interactions and some flexibility is required depending on circumstance.

The HR person then explained this to the hiring managers at the company, and worked with the hiring managers to define interview questions that give insight into the job requirements.

The next two people interviewed got the jobs, after months of no one getting through the interviews.

In the early days of software development, the business was often dissatisfied with software delivered because it simply did not meet the requirements of the business. So the software development process matured and came up with the idea of tests that can be mapped back to the requirements via a requirements traceability matrix. Thus the business has a requirement, the developers write code to meet the requirement, and a test is designed to verify that the software meets the defined requirement.

Recruiting currently has no such general understanding in place of the relationship between job position requirements and definition of quantifiable questions that identify to what extent a given job candidate meets a requirement.

Once you get your head around the idea that recruiting should be very similar to software development in this regard, then it is easy to see that ad-hoc interviews do nothing to verify in any organised way to what extent a candidate meets the requirements of a given job opening.
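As an illustration of the traceability idea above (all requirement names, prompts, and functions here are hypothetical, not from any real recruiting system), the mapping from job requirements to interview questions can be sketched as plain data, so every score traces back to a requirement:

```javascript
// Each interview question is tied to exactly one job requirement,
// analogous to a test case mapped to a software requirement.
const questionBank = [
  { requirement: 'api-design', prompt: 'Walk me through versioning a public API.' },
  { requirement: 'debugging', prompt: 'Describe a production bug you isolated.' },
];

// Aggregate interviewer scores (e.g. 1-5) per requirement, so gaps in
// coverage and weak areas are visible instead of a single gut-feel verdict.
function scoreByRequirement(scores) {
  const totals = {};
  for (const { requirement, score } of scores) {
    (totals[requirement] = totals[requirement] || []).push(score);
  }
  const averages = {};
  for (const [req, s] of Object.entries(totals)) {
    averages[req] = s.reduce((a, b) => a + b, 0) / s.length;
  }
  return averages;
}
```

The point of the sketch is only that ad-hoc questions have nowhere to record their result, while requirement-keyed questions produce a per-requirement picture.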

throwaway71958 1 day ago 1 reply      
One thing interviews can't select for: creativity and motivation. And in tech those two criteria are the most vital, especially motivation. I can easily fill in the skills gap in someone who's motivated. I can't do anything with someone who doesn't give a shit, even if they're the second coming of Albert Einstein. So folks, please, don't apply for jobs you don't really care about. Save yourself and your prospective employer time, aggravation, and the opportunity cost.
cordite 1 day ago 0 replies      
I'm sure it has been discussed before, but what factors push people away from say two-day internships? Is it because these things are so short that they too can eventually be gamed? Or is it because you need to have staff of the same specialty dedicating their resources to a potentially short-lived investment?
kasey_junk 1 day ago 0 replies      
Structured interviews are better than unstructured ones, but in my experience they are really a Trojan horse for the idea that interviews in all forms are largely worthless (as predictors for good hiring).

Once you start collecting data on your hiring pipeline, work-sample hiring becomes so obviously better that it makes little sense to spend the time doing the hard work of building a good structured interview process.

evervevdww221 1 day ago 1 reply      
We had 2 slackers on the team. One jumped directly to Google.

The other jumped around for a few years, got laid off by some company, and recently joined FB.

sgt101 1 day ago 0 replies      
Odd - I was trained to do competence-based interviews 20 years ago; apparently this is now rediscovered knowledge or something!
dlwdlw 1 day ago 0 replies      
The issue with silicon valley interviews is that it's leading a new paradigm of management styles that deal with knowledge and creative work.

This shift MUST be accepted by everybody or ostracism is risked. (Like trump supporters)

But paradigm shifts take time and the majority of managers still want cogs. But instead of filtering for cogs they have to dress the filters up as filtering for an "I give smart people freedom" team, and the convoluted mental gymnastics needed for this creates shitty interview processes.

All "well this technique worked for us" stories are mostly not useful because they are just N=1 stories about managers using their preferred filters.

The issue isn't with the filters themselves (all sorts exist) but with a culture that obligates everyone to put on false facades.

mck- 1 day ago 0 replies      
If you care about your company's culture, a person's humbleness, art of concise debating, etc - more important parameters than sheer GPA or coding skills imho - you can never do away with in-person interviews.

Calling them "utterly useless" is utter click-bait.

donovanm 1 day ago 0 replies      
It certainly does feel like job interviews are a coin flip to me after having done many interviews.
WalterBright 1 day ago 0 replies      
More accurately, the article is about the use of unstructured free-form interviews.
thomastjeffery 1 day ago 0 replies      
> Alternatively, you can use interviews to test job-related skills, rather than idly chatting or asking personal questions.

Alternatively? How is this not the focus of an interview?

Sure, confidence and social skills are important, but obviously they cannot predict a person's actual ability.

DrNuke 1 day ago 0 replies      
For some time now we have been moving towards word of mouth and peer recommendations as the preferred way to hire new personnel. Cold applications are for outsiders and as such a very different market, with all the strings and the bulls.it attached.
jasonthevillain 1 day ago 0 replies      
Well, the worst interview I've ever had was one where the interviewer wouldn't deviate from the script, even after he recognized that the questions made no sense for someone with my background (I was self-taught, and had never managed my own memory or written a sort algorithm; they were also irrelevant to the position in question).

It was painfully awkward.

It was also a fantastic way to accidentally discriminate against women and older candidates.

I'm not saying anyone should conduct an interview completely by the seat of their pants, but please don't encourage this foolish consistency.

JacksonGariety 1 day ago 0 replies      
Does it bother anyone else that the example given in the article (showing up 25 minutes late) is judging interviewing by its worst rather than its best?
Zigurd 1 day ago 1 reply      
It's not surprising. Employee selection is basically voodoo. Outcomes don't get fed back into redesign of the process, and the process is far more based in tradition than data. When the process gets challenged it's ripe to fall apart.
geebee 12 hours ago 0 replies      
I'm coming late to this discussion, but there is one point I'd like to make. Our "interviews" in the world of software engineering are immensely different from "interviews" in the standard sense of the word.

I've worked in different fields, and I talk to people who work in other fields. Most of those fields work in the way described in this article - interviews are question and answer sessions, where people are evaluated by a number of highly subjective criteria. "Tell me about your fundraising experience?" "How do you deal with difficult clients or coworkers?" That kind of thing.

Software interviews are exams. They're not "more like" exams, they are flat out exams. There is very little banter. The closest I've come in google and Netflix interviews has been the more open-ended system design style question they often put in there, but even that has an academic test quality to it.

It's pretty much 5 hours of technical exam. "How do you find all matching subtrees in a binary tree" might be a question - and you really are expected to get it written at the whiteboard. "Find all permutations of a set". "Find all square sub matrices in an N×M matrix." The "top" companies are good at modifying the question so that you must know how to do this but can't just regurgitate it.

Alternatively, you may do a "take home" exam. Most recently, I did a mini rails project. I actually liked my result, I kind of enjoyed writing it. However, it was a no-hire, one reason given was that my routing was non-standard. True. I hadn't really thought about it, it was a take-home, so I mainly focused on the UI and code, and just chucked in a couple of named routes for demo and testing purposes. The other reason was that there was some duplicate code (I disagreed and had a reason for this, but there is no chance to defend your code, you write it and send it in, and they say "no hire").

I have no idea if it was a real piece of crap and they were just being nice. It had 100% test coverage and git for version control, and implemented a few features. Unfortunately, like I said, I never got a chance to defend the code.

Our processes in high tech are badly broken. I'm probably done interviewing, my next job will have to be one that doesn't involve a software interview. The routing and duplicate code, along with a google interview, pretty much sealed the deal for me.

My advice to people is (this isn't my idea) be an X that programs, not an X programmer. Coding is an amazing tool for a job, but avoid making it your job. For instance, I actually know a fundraiser who does a lot of data science, and he's a rock star in his field, but I guarantee you nobody asks him to reverse a binary tree in an interview!

Best of luck out there. Our interviewing processes are their own special version of horridness, just not the uselessness described here.

RichardHeart 1 day ago 2 replies      
You hear lots of these stories about stupid interviews. You rarely hear the stories about the horrible, terrible employees that weren't weeded out, got hired and did great harm to the company and their coworkers.

Interviews can be good and bad; I'd venture to say that many a horrible hire has been avoided by any interview at all. Thus, don't make perfect the enemy of good, and try to improve on good.

The set of potential bad hires is vast compared to the good hires, and that ratio is only remedied by good filtering before and during the interview.

dba7dba 1 day ago 0 replies      
I believe a lot of places hire a candidate through a consensus, meaning some members in the team accept or reject a potential candidate. When enough accept the candidate, the hiring is done.

Is there any company that tracks who rejects a particular candidate during an interview process, and how often that negative feedback turned out to be true? I guess with the turnover rate at today's tech places, such tracking of an interviewer's record is not really possible?

I always wonder about this.

peterwwillis 1 day ago 0 replies      
No interview will tell you the future, so to my mind, the only thing the interview can tell you immediately is how much someone knows and whether their personality will mesh with the company's culture.

In order to ascertain this, I propose job-hiring hackathons. Have the company hold a mini-hackathon, once every 2 weeks or once a month, where all job applicants must show up and work on projects (corporate employees' presence can be optional). Just watch them complete the projects and hire the best candidates.

nameisu 1 day ago 1 reply      
I have an interview with Apple for a mechanical engineer position. I will report back once it's done.
ppidugu 1 day ago 0 replies      
Such a careless claim. Writing articles like this and throwing them in people's faces just gives everyone a chance to debunk NY Times articles.
erikpukinskis 1 day ago 2 replies      
Once we get to the point where most people have several jobs with separate contracts, interviews become superfluous because you can just hire someone for a few hours at a time and then fire them. The only reason that doesn't work today is we're still clinging to the idea that you only work for one entity at a time. Never mind that most people already manage at least a couple bosses within the same company.
nebabyte 1 day ago 0 replies      
> not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they got to know the interviewee slightly higher on average

Yeah well, when you're asking questions of someone who looks thoughtful only very briefly and then answers almost immediately, it sounds like you might have more reason to think you know them than you would with someone actually considering their answer.

That might introduce a confound or two that the study then proceeds to completely ignore and even conclude past, lest someone accidentally draw other conclusions.

Semantic UI semantic-ui.com
683 points by jhund  1 day ago   208 comments top 53
jameslk 23 hours ago 10 replies      
I've always found it ironic that this library calls itself "Semantic UI" but doesn't follow the practice of semantic HTML/classes[0]. W3C suggests[1] that classes should be used for semantic roles (e.g. "warning", "news", "footer"), rather than for display ("left", "angle", "small" -- examples taken from Semantic UI's docs). So instead of giving a button the class of "button" it would be better to give it a class such as "download-book." The benefit of this is when it comes time to redesign parts of a site, you only have to touch your stylesheets instead of manipulating both the stylesheets and HTML. That is, so we don't fall into the old habits of what amounts to using <b> <font> <blink> tags.

0. https://css-tricks.com/semantic-class-names/

1. https://www.w3.org/QA/Tips/goodclassnames
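To make the contrast concrete, here's a minimal sketch of the two naming styles being discussed (the `download-book` class and the rule shown are invented for illustration; `small`, `left`, and `button` mirror the Semantic UI style named above):

```html
<!-- Presentational: the class names describe how it looks -->
<button class="small left button">Download</button>

<!-- Semantic: the class describes the role; the stylesheet decides the look -->
<button class="download-book">Download</button>
<style>
  /* Restyling later means touching only this rule, not the markup */
  .download-book { font-size: 0.9em; float: left; }
</style>
```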

jwr 22 hours ago 5 replies      
I use Semantic UI in production on https://partsbox.io/ and can list some upsides and downsides.

On the positive side:

* very complete, with good form styling, and lots of widgets you will use often, which is especially important for larger apps,

* the default theme is mature and has good usability, without the crazy "oh, how flat and invisible our UI is!" look.

* the class naming plays well with React (I use ClojureScript and Rum) and looks good in your code,

On the negative side:

* the CSS is huge and there is little you can do to trim it down,

* the JavaScript code is not Google Closure-ready, so it's a drag compared to my ClojureScript codebase: large and unwieldy,

* there is a jQuery dependency, so I have to pull that in, too,

* the build system is well, strange, let's put it that way. I'm used to typing "make" and getting things built, while this thing here insists on a) painting pretty pictures in the console window, b) crapping node_modules in a directory up from the one I'm building in, c) requires interactive feedback. I still haven't found a way to automatically build Semantic UI from a zip/tarball, and others seem to struggle with it, too.

Overall, I'm happy with the choice and it has been serving me well.

jlukic 23 hours ago 2 replies      
For people who are curious about theming, here is classic GitHub done entirely in Semantic UI: http://semantic-org.github.io/example-github/

(Click the small paint icon in the top menu to swap themes to see in native SUI)

I did a meteor dev night where I talked about some of the ideas behind Semantic UI, which might clear up some of the linguistic origins for the library and its ideas about language: https://www.youtube.com/watch?v=86PbLfUyFtA

And if anyone wants to dig really deep, there are a few podcasts as well: https://changelog.com/podcast/106 and https://changelog.com/podcast/164

TomFrost 23 hours ago 9 replies      
Semantic recently adopted my team's React adaptation as their official React port. It's lighter weight, eliminates jQuery, and all components are standard React components that can be extended or dropped in as-is.


xiaohanyu 23 hours ago 5 replies      
Hi, guys,

We have spent hundreds of hours building a new website with Semantic-UI, for Semantic-UI: http://semantic-ui-forest.com/.

Semantic-UI is my favourite front-end CSS framework. I have built several websites with Semantic-UI, and I love it; it feels delightful to develop with.

But compared with Bootstrap, the ecosystem of Semantic-UI is small, so we built semantic-ui-forest for you: http://semantic-ui-forest.com/posts/2017-04-05-introducing-s... .

On this website, we have ported 16 themes from bootswatch(bootstrap) to Semantic-UI (http://semantic-ui-forest.com/themes), and also, we have ported 18 official bootstrap examples (https://getbootstrap.com/getting-started/#examples) and reimplemented them in Semantic-UI (http://semantic-ui-forest.com/templates/).

Not advertising; we just think this may be helpful for people who are interested in Semantic-UI and want to give it a try.

sheeshkebab 1 day ago 6 replies      
This doesn't work well on mobile - on iOS at least... scrolling is funny, flickering screens, jerky inputs. Loading feels slow too - and I'm on wifi.
tbabb 10 hours ago 0 replies      
> Design Beautiful Websites Quicker

* "...More Quickly".

Quicker is an adjective, used to describe nouns. You could say "design quicker websites", "quicker" in that case describing an aspect of the website. If you wanted to describe the manner in which you will do the designing, you have to use the adverb "quickly"-- "design websites quickly". Adding the adverb "more" to modify the adverb "quickly" is the proper way to make it comparative.

franciscop 23 hours ago 1 reply      
When I started Picnic CSS[1] there were few CSS libraries out there and the ones that were available were severely lacking. They had neither :hover nor :active states, no transitions, etc.

Now with new libraries or modern versions of those, including Semantic UI, I wonder whether it's time to stop supporting it and switch to one of those. They are still different but with somewhat similar principles (at least compared to others) such as the grid: <div class="flex two"><div></div><div></div></div>.

What I want to say is: kudos. As I see jlukic answering some questions, how do you find the time/sponsorship to keep working on it? Is it a personal project, a company project, funded through some external medium, etc.? I see there's a donate button; do people contribute there a lot?

[1] https://picnicss.com/

malloryerik 1 day ago 6 replies      
Might also check out Ant Design: https://ant.design/

It's integrated with React and there's a separate mobile UI for it. Ant is Chinese, with docs translated into English. Like China, it's huge ^^ I've just been fooling around with it today for the first time in create-react-app and it seems good so far. Haven't tried it on mobile.

JusticeJuice 1 day ago 1 reply      
I've done a few projects with Semantic UI. I think it's great for desktop-based business applications. It looks slick and has great animations. Plays nice with heaps of frameworks; I was using meteor.js.

However, don't use it on mobile - it will destroy performance.

tomelders 19 hours ago 7 replies      
Please stop with these things. They're never fit for purpose, and now there's another thing that looks - to non technical people - like a panacea for all development woes. Designers will never follow your constraints. Managers will never understand why this hasn't magically reduced our estimates by 90%. And yet again, it's just "developers being difficult" because there's a bunch of guys in India who say they CAN work with this for half the price.

This sort of stuff is worse than useless.

aphextron 1 day ago 0 replies      
32,000+ stars is insane, how have I not heard of this? Does anyone have production experience with it?
nkkollaw 20 hours ago 1 reply      
It looks great.

However, I've used it in the past and the CSS size is _HUGE_, with no way to reduce it. We're talking about > 500KB of CSS (in my case, at least). The JavaScript is extremely bloated as well.

Honestly, being that heavy I wonder how anyone can use it. If your site is to be viewed by mobile users, adding 500KB just to style a few elements is unacceptable.

I'd much rather go with Bootstrap. It has the added benefit of having the majority of front-end devs know it, and you can buy or use a theme for free and make it look great.

notliketherest 1 day ago 0 replies      
I love Semantic UI React for my team's internal tools. So easy to drop in and use without having to think about css.
ssijak 16 hours ago 0 replies      
Why is bootstrap 4 taking so long to get to a final version? All that waiting is pushing me towards other libraries. But I, being primarily a backend engineer, want a library with a large community, because I am not so skilled with frontend UI and want to be able to find help easily.
constantlm 20 hours ago 0 replies      
I recently dropped Bootstrap early in a project and switched to Semantic. I've been using it for a few months - so far it seems fantastic and much more "natural" to work with than Bootstrap. The gigantic set of components, and integration with both EmberJS and React make it even more amazing.
flukus 23 hours ago 0 replies      
Wouldn't a semantic UI have things like a <menu> tag that was up to the browser to render?
dandare 21 hours ago 0 replies      
Sidenote: the https://en.bem.info/ website (mentioned in the first paragraph of text about Semantic UI) totally irritates me. Would you be so kind as to explain in a single sentence what the purpose of your website/platform/framework is?
jv22222 3 hours ago 0 replies      
There's a pretty bad bug on that website. When you open it in Safari on iPhone 6 it jitters badly as you scroll the page down.
taeric 1 day ago 0 replies      
I alternate between thinking this sort of thing is merely misguided, or merely a waste of time.

I want to like it, a lot. But I can't help feeling that this ship sailed years ago.

Simple UIs that are easy to interpret are a thing of the nineties. We left them because we evidently didn't realize what we had. Also, people like flashy things. A lot.

cknight 21 hours ago 1 reply      
I chose Semantic UI for my project: https://suitocracy.com if anyone wants to see another live example, it also uses the default theme.

Like others, I was somewhat concerned about the bloat - over half of my front page's total file size. But at about 250KB all up, I realised this was only around a tenth of what the average website throws at people these days. https://www.wired.com/2016/04/average-webpage-now-size-origi...

keehun 14 hours ago 1 reply      
Am I the only one that passionately dislikes the menus that require clicking on the hamburger icon? I'm okay with it in phone apps when used tastefully, but it seems like too many websites are adopting it now for no good reason. This trend is especially evident among the online Wordpress/HTML template communities and creators...
nwmcsween 8 hours ago 0 replies      
I guess it's as good a time as any to plug my project s.css[1]; it tries to be the exact opposite of semantic ui. Class names are simply abbreviated properties, such as .di-bl { display: block; }. It isn't meant to be used as a framework, but as something that other frameworks can build upon (I will soon be releasing something that builds on it).

[1] github.com/nwmcsween/s.css
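A rough sketch of what that naming scheme looks like in practice (the abbreviations below are extrapolated from the `.di-bl` example in the comment above, not taken from the actual repo):

```css
/* Each class is an abbreviated property-value pair, nothing more */
.di-bl { display: block; }
.di-in { display: inline; }
.fl-le { float: left; }
.po-ab { position: absolute; }
```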

inputcoffee 1 day ago 2 replies      
What is the best way to think of this? Is this like Twitter Bootstrap and Zurb Foundation, or is it something else entirely?
cyberferret 22 hours ago 1 reply      
I've been a Bootstrap user for years on all my web apps, but thinking that perhaps instead of re-learning things for v4, I look at expending a similar amount of time and effort to learn something new.

I came across Semantic-UI last year and remember being impressed by it, but for some reason it just slipped my mind until I saw this post today. It seems it could work for another small project that I am thinking of starting.

Just to clarify - No reliance on jQuery with this framework, right? Has anyone else worked with Semantic-UI using Umbrella.js and/or Intercooler.js ??

ludbek 16 hours ago 0 replies      
I have been using Semantic UI for a while now. Overall I love this framework. It has lots of essential components. I highly recommend it to lean startups that don't have enough expertise to design and develop their own UI components.

But I do hate it for having weak and restrictive responsive queries.

Too 12 hours ago 1 reply      
> Intuitive javascript: $('select.dropdown') .dropdown('set selected', ['meteor', 'ember']);

Please no...just use React, Vue, Angular or some other sane data binding framework already. Don't mix logic and presentation. Your javascript code should never know about CSS classes, and ids and preferably not DOM-states either.
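A minimal sketch of the alternative being advocated: the selection lives in application state and the view is derived from it, rather than pushing state into the DOM and reading it back through class-based selectors. (Plain functions stand in here for a React/Vue render; the names are illustrative.)

```javascript
// Selection lives in app state, not in the DOM.
const state = { selected: ['meteor', 'ember'] };

// Pure view derivation: the (virtual) DOM is computed from state,
// so no code ever needs to know about CSS classes or DOM-states.
function renderOptions(allOptions, selected) {
  return allOptions.map((name) => ({
    value: name,
    selected: selected.includes(name),
  }));
}

const view = renderOptions(['meteor', 'ember', 'react'], state.selected);
// view[0] and view[1] are selected; view[2] ('react') is not
```

Changing the selection then means updating `state.selected` and re-deriving, instead of issuing an imperative `.dropdown('set selected', ...)` call against a selector.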

debacle 1 day ago 1 reply      
Seems like a next evolution of Bootstrap components. The trick with this type of stuff is always in how it plays with other frameworks. Can I drop into jQuery if I need to, and still interact easily with controls? Are there some obscene DOM skeletons in the closet that are going to bite me in the ass later?
symboltoproc 9 hours ago 0 replies      
I have worked with Javascript for quite some time now and I must say:

$('select.dropdown').dropdown('set selected', ['meteor', 'ember']);

Is the most unintuitive Javascript I've ever seen.

vinayakkulkarni 20 hours ago 1 reply      
Just FYI,

https://www.zomato.com/ - one of the biggest in its industry - uses Semantic-UI :)

Love the framework, and Jack + all contributors' effort on it :)

Mizza 23 hours ago 0 replies      
Semantic has replaced Bootstrap as my go-to web framework. I find it more natural, and the default components are nicer. I think it needs a larger theme ecosystem and more consistent documentation, but I appreciate all the work that has gone into it.
nwmcsween 10 hours ago 0 replies      
There seems to be some sort of impedance mismatch. CSS is _for_ developers: give me .di-bl { display: block; } and make it easy to understand just by looking at the markup, instead of having to dig into other files.
tmikaeld 21 hours ago 0 replies      
My company has been using SUI in production the past 3 years and it's been absolutely great, sure it is big, but that translates into flexibility and speed of development as well as having a production-ready framework that we know can handle anything thrown at it.

I've seen some mentions of jQuery, I don't think that's a bad thing at all - the framework uses the plugin system so fully that without jQuery, I'm sure the framework would be even bigger and less flexible. The added advantage is that other jQuery plugins work without adding anything.

tabeth 23 hours ago 3 replies      
Is it possible these days to have a fully interactive mobile application with just HTML and CSS? Have CSS animations gotten good enough? I'm talking things like pure CSS accordions, modals/pop-ups, tooltips, etc.

Semantic UI is something I personally use for a few projects, but I really wish some of this stuff didn't require so much javascript and was more encapsulated, like Tachyons [1]. The main problem I've encountered when using Semantic UI is that it becomes difficult to change the prebuilt components significantly.

[1] http://tachyons.io/

wishinghand 22 hours ago 0 replies      
I love the style and components of Semantic UI, but it's really heavy in terms of CSS file size, even once minified. I'd recommend running UnCSS or something similar on it before deployment.
Finbarr 1 day ago 1 reply      
We used Semantic UI for Startup School (https://startupschool.org) and it has been awesome. Really happy with the choice.
dmoreno 20 hours ago 0 replies      
I love semantic UI. I'm using it now with my new project (serverboards.io) and it really was a huge time saver.

I would prefer it using Sass, but there is a 'port' (https://github.com/doabit/semantic-ui-sass/tree/master/app/a...)

baby 15 hours ago 0 replies      
I use it for small projects/pages just because it looks so good :) http://cryptologie.net/links

but I found it harder to get into compared to bootstrap/foundation.

aecorredor 13 hours ago 0 replies      
Does anyone else feel that the documentation does not clearly explain how to create responsive layouts? I see the visual examples, but no clear code like in bootstrap's docs.
ndarilek 7 hours ago 0 replies      
As a blind web developer, I want to like Semantic. My usual mode of developing HTML, once it's at the "I need to make this look good" stage, is "show it to my girlfriend and ask her various questions." She says things like "I wish X were a bit larger," or "Y should be blue," and pulling that off in Bootstrap is challenging. I can drop down to lower-level CSS, but have no clue how my changes interact with Bootstrap's defaults, or indeed if they take effect at all. I mean, I can tweak font sizes and hex codes, but at the end of the day they're all numbers, when what I want to do is say "No really, make this thing larger relative to these other things," not "make it 125%, with this hex code I scraped out of some color list and hope looks nice."

But, gods, buttons as divs. Maybe they're easier to style, but if I had a dollar for every time I couldn't use someone's site because they used a div as a button, then didn't do the several other things that <button/> gives you for free that make all the accessibility difference, well, I'd not worry about money ever again.

I'm glad to see that the homepage example at least uses <button/>, but then the rendering of the example isn't keyboard-focusable or actionable. Then, when I look at the actual code they're rendering, it's back to divs. So they're not even rendering their example code.

Can I use Semantic with the actual HTML elements that the divs are meant to style, so I can use the CSS class names some folks hate and derive their benefits to me, but still get the accessibility benefits of the tags? I'd read their docs and check, but I don't know if they're linked from the main page. I see links to 1.X/0.X docs, but I can't find a link to 2.X docs. There's a "Menu" link which may pop up more links, but I can't seem to trigger this with Enter. I seriously spent 10-15 minutes on this page looking for docs using only my keyboard, before deciding that I really had better ways to spend my day.

I hate to advise people to avoid projects because I'm not so arrogant as to think my language/stack/framework/whatever is anything other than my favorite, and I do want to like this one, but every time I look at it the accessibility story is disappointing, and given that it's a framework, that means other sites will likely inherit disappointing accessibility stories too.

And now it's back to drinking, which seems to be the only fix for this[1].

1. Not really, but damn am I tired of a) fighting the same battles again and again and b) answering the same questions about said battles again and again. All of this stuff is exhaustively documented by folks who are smarter than I am, so it isn't obscure, nor is it something I need to (or am even highly qualified to) answer.

chenshuiluke 20 hours ago 0 replies      
Semantic UI is really great! I suck at frontend design and it really helps me to make decent looking websites :)
mark_l_watson 15 hours ago 0 replies      
I have been using bootstrap exclusively for years. I will give this a try on a small throwaway project. I am concerned by the apparently large size of CSS and JS, based on other comments here.
jff 12 hours ago 0 replies      
All this and it still looks like Yet Another Bootstrap website. Guess that's the modern meaning of 'beautiful website'.
karimdag 23 hours ago 0 replies      
Personally I have chosen Semantic UI as my go-to CSS framework over Bootstrap. While Bootstrap performs better on mobile, SUI is way nicer/cleaner; it therefore eliminates the need to customize anything, which I think is one of the reasons someone would use a CSS framework in the first place.
daurnimator 21 hours ago 0 replies      
Anyone able to help explain to me how to use this with e.g. a simple static site?

i.e. hand written HTML (perhaps compiled from markdown) with no JS?

The manuals for Semantic UI seem to jump straight into integrations with other frontend frameworks and build tools; but I don't want to use them.

kbr 23 hours ago 0 replies      
Checked it out, and it looks quite nice! Congrats on making such a nice tool. I'm a fellow CSS library author here, of Wing[1].

Everything seems fine, but as others have said, the scrolling is jumpy. Might want to fix that :)

1. http://usewing.ml

xyproto 18 hours ago 0 replies      
Sounds great in theory, but the dropdown box on the front page is a list where only half the height of the letters are shown, instead of a proper dropdown box.
macca321 17 hours ago 0 replies      
I'd like to find a framework like this that comes with platform-neutral (handlebars or similar) templates for each component
kuon 20 hours ago 0 replies      
I am starting a new project, and I am considering Semantic UI and Grommet. Does anybody have experience with Grommet?
voidhawk 17 hours ago 0 replies      
Anyone else find the pages jitter when scrolling? At least on Safari (iPhone).
zeeshanu 21 hours ago 1 reply      
The interface looks good, but it is like a nightmare to remember every single class.
5_minutes 23 hours ago 0 replies      
I'm fine with Bootstrap though... another day, another framework
rfw1z 22 hours ago 0 replies      
What makes the Internet so exciting is the direct opposite of this.
Color Night Vision (2016) [video] kottke.org
613 points by Tomte  3 days ago   151 comments top 34
qume 2 days ago 0 replies      
There is much discussion here regarding quantum efficiency (QE). Keep in mind that figures for sensors are generally _peak_ QE for a given colour filter array element. These can be quite high like 60-70%.

But - this is an 'area under the graph' issue. While it may peak at 60%, it can also fall off quickly and be much less efficient as the wavelength moves away from the peak for say red/green/blue.

From what I can tell from the tacky promo videos, the sensor is very sensitive for each colour over a wide range of wavelengths, probably from ultraviolet right up to 1200nm. That's a lot more photons being measured in any case, but especially at night.

Their use of the word 'broadband' sums it up. It's more sensitive over a much larger range of frequencies.

I also wouldn't be surprised if they are using a colour filter array with not only R/G/B but perhaps R/G/B/none or even R/IR/G/B/none. The no-filter pixels bring in the high broadband sensitivity, with the other pixels providing colour; you don't need nearly as many of those.

Edit - one remarkable thing for me is based on the rough size of the sensor and the depth of field in the videos, this isn't using a lens much more than about f/2.4. You'd think it would be f/1.4 or thereabouts to get way more light but there is far too much DoF for that.
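The 'area under the graph' point can be sketched numerically. The two QE curves below are made-up illustrative shapes (a narrow 60%-peak response vs. a flat broadband one), not the real sensor's response:

```javascript
// Trapezoidal integration of a quantum-efficiency curve over wavelength (nm).
function integrate(qe, from, to, step) {
  let area = 0;
  for (let w = from; w < to; w += step) {
    area += step * (qe(w) + qe(w + step)) / 2;
  }
  return area;
}

// Hypothetical narrow-band pixel: peaks at 60% QE near 550 nm, falls off fast.
const peaked = w => 0.6 * Math.exp(-((w - 550) ** 2) / (2 * 60 ** 2));

// Hypothetical broadband pixel: a modest 40% QE held from 400 nm to 1200 nm.
const broadband = w => (w >= 400 && w <= 1200 ? 0.4 : 0);

const narrow = integrate(peaked, 400, 1200, 1);
const wide = integrate(broadband, 400, 1200, 1);
console.log((wide / narrow).toFixed(1)); // broadband collects a few times more photons
```

Under flat illumination the broadband pixel integrates several times more photons than the peaked one, which is the whole point of the 'broadband' marketing term.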

amluto 3 days ago 5 replies      
It would be interesting to see how this compares to theoretical limits. At a given brightness and collecting area, you get (with lossless optics) a certain number of photons per pixel per unit time. Unless your sensor does extraordinarily unlikely quantum stuff, at best it counts photons with some noise. The unavoidable limit is "shot noise": the number of photons in a given time is Poisson distributed, giving you noise according to the Poisson distribution.

At nonzero temperature, you have the further problem that your sensor has thermally excited electrons, which aren't necessarily a problem AFAIK. More importantly, the sensor glows. If the sensor registers many of its own emitted photons, you get lots of thermal noise.

Good low noise amplifiers for RF that are well matched to their antennas can avoid amplifying their own thermal emissions. I don't know how well CCDs can do at this.

Given that this is a military device, I'd assume the sensor is chilled.
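The shot-noise floor itself is simple arithmetic: with Poisson arrivals the noise on N photons is √N, so SNR grows only as the square root of the photon count. The counts below are arbitrary illustrative numbers:

```javascript
// Shot-noise-limited SNR: mean of N photons, standard deviation sqrt(N).
const snr = photons => photons / Math.sqrt(photons); // equals sqrt(photons)

// A dim night scene vs. a daylight scene, per pixel per exposure (made-up numbers).
console.log(snr(100));   // 10: grainy but usable
console.log(snr(10000)); // 100: clean image
// Collecting 100x more photons only buys a 10x improvement in SNR.
```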

joshvm 3 days ago 1 reply      
Better video from their website comparing to other cameras:


Anecdotal evidence on the internet suggests it's around 6k, but that seems far too low.

rl3 2 days ago 3 replies      
One would think with all the money the military throws into imaging technology that they would already have this.

For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face. Could even have specially made flashbangs that transmit an expected detonation timestamp to the goggles so they know to drop frames or otherwise aggressively filter the image.

Add some active hearing protection with sensitivity that far exceeds human hearing (obviously with tons of filtering/processing), and you're talking a soldier with truly superhuman senses.

That's not to mention active acoustic or EM mapping techniques so the user can see through walls. I mean, USSOCOM is already fast-tracking an "Iron Man" suit, so I don't see why they wouldn't want to replicate Batman's vision while they're at it.
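The real-time compositing described above reduces to per-pixel alpha blending with a temperature gate. A toy sketch, where the 33% opacity and the above-threshold gating are the comment's own figures and every pixel value is made up:

```javascript
// Blend a thermal (MWIR) overlay onto an image-intensified base pixel at fixed opacity,
// but only where the thermal layer is above a temperature threshold.
function blendPixel(base, overlay, temperature, threshold, opacity) {
  if (temperature < threshold) return base; // cold region: base layer only
  return Math.round(base * (1 - opacity) + overlay * opacity);
}

// 8-bit grayscale example values (hypothetical)
console.log(blendPixel(60, 255, 36, 30, 0.33)); // warm target: blended to ~124
console.log(blendPixel(60, 255, 10, 30, 0.33)); // cold background: stays 60
```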

telesilla 2 days ago 3 replies      
Can someone wake me up in the future? When we have digital eyes, and we can walk around at night as if it were day except the stars would be glittering. Sometimes, I'm so sad to know I'll not live to know these things and I'm incredibly envious of future generations.
akurilin 3 days ago 3 replies      
You can get somewhat close to that with a Sony a7s these days: https://vimeo.com/105690274
Silhouette 3 days ago 0 replies      
They list a lot of potentially useful applications on the product's own web site. I wonder how long it will take for this sort of technology to be commercially viable for things like night vision driving aids. High-end executive cars have started to include night vision cameras now, but they're typically monochrome, small-screen affairs. I would think that projecting an image of this sort of clarity onto some sort of large windscreen HUD would be a huge benefit to road safety at night. Of course, if actually useful self-driving cars have taken over long before it's cost-effective to include a camera like this in regular vehicles, it's less interesting from that particular point of view.
colordrops 3 days ago 3 replies      
Two thoughts come to mind:

1. It would be nice to see a split screen against a normal view of the scene as it would be seen by the typical naked eye.

2. Our light pollution must SUCK for nocturnal animals that see well at night.

jacquesm 3 days ago 1 reply      
That really is incredible. I wonder how they keep the noise level down and if the imaging hardware has to be chilled and if so how far down. Pity there is no image of the camera (and its support system), I'm really curious how large the whole package is. It could be anything from hand-held to 'umbilical to a truck' sized.

Watch when the camera tilts upwards and you see all the stars.

19eightyfour 3 days ago 2 replies      
That is beautiful.

If they can increase the dynamic range to bring detail to the highlights it is basically perfect.

I've never seen a valley look like that with a blue sky above with stars in it. Truly incredible.

The 5M ISO rating is pretty funny. 1/40 f1.2 ISO 5M.

cameldrv 3 days ago 0 replies      
They say it's hybrid IR-visible. I wonder if the trick is to use IR as the luma and then chroma-subsample by having giant pixels to catch lots of photons.
floatboth 3 days ago 1 reply      
"an effective ISO rating of 5,000,000"

Holy shit, my Canon 600D is pretty bad at 2500, goes to crap at 3200, and 6400 is an absolute noise mess
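For a sense of scale: ISO is a doubling scale, one stop per factor of two, so the gap to a 5,000,000 rating can be counted in stops:

```javascript
// Stops between two ISO ratings: each stop doubles the rating.
const stops = (from, to) => Math.log2(to / from);

console.log(stops(6400, 5000000).toFixed(1)); // ~9.6 stops beyond ISO 6400
console.log(stops(100, 5000000).toFixed(1));  // ~15.6 stops beyond base ISO 100
```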

dreamcompiler 3 days ago 2 replies      
This is an amazing device. I've taken night photos that look like frames of this movie on my digital camera, but they require a 60-second exposure and a tripod, and they're -- still frames.
caublestone 3 days ago 2 replies      
My brother in law experimented with this camera a few years back on family portraits. The camera picks up a lot of "dark" details. Skin displays pale and veins are very defined. My nieces called it "the vampire camera".
fpoling 2 days ago 1 reply      
There are far-infrared cameras that capture thermal radiation in the 9-15 μm band. They nicely allow you to see in complete darkness. They do not use CCDs but rather microbolometers.

But they are expensive. 640x480 can cost over 10,000 USD, and cameras with smaller resolution, like those used in high-end cars, still cost over a thousand USD.

teh_klev 3 days ago 0 replies      
Direct link to manufacturer or supplier:


batbomb 3 days ago 0 replies      
So maybe Peltier on the sensor, heat sink attached to body, body hermetically sealed. Sensors probably tested for best noise quality (probably a really low yield on that).
drenvuk 3 days ago 1 reply      
This is incredibly cool. You can even see how other sources of light actually have an effect on the environment as if they were their own suns.
peteretep 2 days ago 0 replies      
Put one of these on a drone and you'll break a lot of people's assumptions about their privacy
kator 2 days ago 0 replies      
Cieplak 2 days ago 0 replies      
I wonder what the sensor is made of. I would bet on there being a fair bit of Germanium in there.

PS: probably wrong about that, silicon's band gap is more suited to optical spectrum, even though germanium has more electron mobility. I'm speculating now that they're using avalanche photodiodes.


ChuckMcM 2 days ago 0 replies      
Here is the manufacturer's web site: https://www.x20.org/color-night-vision/

There is a 'shoot out' video on that page which compares themselves to other night vision technologies. Pretty impressive demo.

copperx 3 days ago 0 replies      
I've dreamed of such a camera for decades. I thought the technology was at least 10+ years away. This is what science fiction is made of.
bsenftner 1 day ago 0 replies      
Has no one considered a neural net post processor which has been trained on daylight views? Seems like an obvious method, given Hacker News...
lutusp 3 days ago 3 replies      
Someone should contact this company and volunteer to redesign their website (https://www.x20.org). They should also be told that "complimentary" and "complementary" don't mean the same thing.

They have a great product, unfortunately presented on a terrible website.

AnimalMuppet 3 days ago 0 replies      
It occurs to me that this technology could do absolutely amazing things as the imager for a space telescope...
jsjohnst 2 days ago 0 replies      
Here's the manufacturer website on it: https://www.x20.org/color-night-vision/
breatheoften 3 days ago 0 replies      
Is that Red Rocks (just outside of Las Vegas)? There are a lot of man-made light sources there that really scatter light pretty far and in a lot of directions (the Luxor spotlight comes to mind). I wonder if that could have an effect on this camera's performance.
jbrambleDC 3 days ago 1 reply      
I want to know what this means for observational astronomy. Can we put this in the eyepiece of a telescope and discern features in nebulae that otherwise look like gray blobs to unaided vision?
interfixus 2 days ago 2 replies      
Why is the night sky blue? Is that really scattered starlight?
nnain 2 days ago 0 replies      
What a quandary: We see military weapons technology put to terrible use all the time, and yet, so much technology shows up in (US) military use first.
samstave 3 days ago 3 replies      
ELI5 what an iso of 5MM means?
faragon 2 days ago 0 replies      
Is this real? :-O
egypturnash 3 days ago 0 replies      
Their website is a thing of beauty. It's straight out of the Timecube school of design. https://www.x20.org/color-night-vision/
New Features Coming in PostgreSQL 10 rhaas.blogspot.com
512 points by ioltas  2 days ago   134 comments top 26
avar 2 days ago 2 replies      
This bit about ICU support v.s. glibc:

 > [...] Furthermore, at least on Red Hat, glibc regularly whacks
 > around the behavior of OS-native collations in minor releases,
 > which effectively corrupts PostgreSQL's indexes, since the index
 > order might no longer match the (revised) collation order. To
 > me, changing the behavior of a widely-used system call in a
 > maintenance release seems about as friendly as locking a family
 > of angry racoons in someone's car, but the glibc maintainers
 > evidently don't agree.
is a reference to the PostgreSQL devs wanting to make their index order a function of strxfrm() calls and to not have it change when glibc updates, whereas some on the glibc list think it should only be used for feeding it to the likes of strcmp() in the same process:

 > The only thing that matters about strxfrm output is its strcmp
 > ordering. If that changes, it's either a bug fix or a bug
 > (either in the code or in the locale data). If the string
 > contents change but the ordering doesn't, then it's an
 > implementation detail that is allowed to change.
-- https://sourceware.org/ml/libc-alpha/2015-09/msg00197.html
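The underlying issue is easy to demonstrate from JavaScript, where `Intl.Collator` is ICU-backed: linguistic collation order differs from raw code-point order, so any ordering built on collation rules is only valid while those rules stay put. A small sketch:

```javascript
const words = ['apple', 'Banana'];

// Raw code-point order: every uppercase letter sorts before any lowercase one.
console.log([...words].sort());                                // [ 'Banana', 'apple' ]

// Linguistic (ICU-backed) order: dictionary ordering, case handled sensibly.
console.log([...words].sort(new Intl.Collator('en').compare)); // [ 'apple', 'Banana' ]
```

PostgreSQL's problem is the same picture one level down: a btree index is effectively a pre-sorted array, and a glibc update can silently change what the comparator would have said.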

fiatjaf 2 days ago 3 replies      
Ok, I'm not a database manager for enormous projects, so these changes may be great, but I don't understand them and don't care about them. Postgres is already the most awesome thing on Earth to me.

Still, if my opinion counts I think SELF-UPDATING MATERIALIZED VIEWS should be the next priority.

jacques_chester 2 days ago 2 replies      
I deeply appreciate the great care that Postgres committers take in writing their merge messages.

I think of it as a sign of respect for future developers to take the time to write a clear account of what has happened.

qaq 2 days ago 0 replies      
Even a single feature from the list would make 10 an amazing release, all of them together is just unbelievable. Very happy we are using PG :)
iEchoic 2 days ago 1 reply      
I'm so excited for table partitioning. I use table inheritance in several places in my current project, but have felt the pain of foreign key constraints not applying to inherited children. Reading about table partitioning, I'm realizing that this is a much better fit for my use case.

Postgres continues to amaze me with the speed at which they introduce the right features into such a heavily-used and production-critical product. Thanks Postgres team!

lazzlazzlazz 2 days ago 3 replies      
How is Postgres so consistently the best open-source DB project from features to documentation? It's unreal.
jordanthoms 2 days ago 0 replies      
Will DDL replication for the logical replication be landing in 10 or later?

We have some use cases where logical replication would be very helpful, but keeping the schema in sync manually seems like a pain - will there be a documented workaround if DDL replication doesn't make it in?

nickpeterson 2 days ago 5 replies      
Can anyone recommend a decently up-to-date book on Postgres administration? Or are the docs really the only way? I've used SQL Server for years but would likely choose Postgres for an independent project if I intended to commercialize it. That said, I don't use it at work so it's hard to get in-depth experience.
djcj88 2 days ago 2 replies      
I did read the article, but I can't find any mention of addressing the "Write amplification" issue as described by Uber when they moved away from postgres. https://eng.uber.com/mysql-migration/ I had heard talk on Software Engineering Daily that this new major revision was supposed to address that.

Is this issue resolved by the new "Logical replication" feature? It doesn't seem directly related, but it seems like maybe that is what he is referring to in this blog post?

Normal_gaussian 2 days ago 4 replies      
Extended Statistics! I was following the replication changes, but have just discovered the extended statistics and am more excited about them.

The directory renaming at the bottom of the post is interesting - I wonder if many other projects have to do things like this?

api 2 days ago 2 replies      
The feature I'd really love is master selection with Raft or similar and automatic query redirection to the master for all write queries (and maybe for reads with a query keyword).

That would make it very easy and robust to cluster pg without requiring a big complicated (a.k.a. high admin overhead and failure prone) stack with lots of secondary tools.

This kind of fire and forget cluster is really the killer feature of things like MongoDB and RethinkDB. Yes people with really huge deployments might want something more tunable, but that's only like 1% of the market.

Of course those NoSQL databases also offer eventual and other weaker but more scalable consistency modes, but like highly tuned manual deployment these too are features for the 1% of the market that actually needs that kind of scale.

A fire and forget cluster-able fully consistent SQL database would be nirvana for most of the market.

smac8 2 days ago 1 reply      
Wow, so awesome. I do hope at some point we can see some language improvements to PL/pgSQL. More basic data structures could go a long way in making that language really useful, and I still consider views/stored procedures a superior paradigm to client-side SQL logic.
elvinyung 2 days ago 1 reply      
Dumb question: does declarative partitioning pave the way for native sharding in Postgres? I'm not super super familiar, but it seems like along with some other features coming in Postgres 10, like parallel queries and logical replication, that this is eventually the goal.
acdha 2 days ago 2 replies      
What's the ops experience for a replicated setup like these days? i.e. assuming you want basic fault-tolerance at non-exotic size / activity levels, how much of a job is someone acquiring if, say, there are reasons they can't just use AWS RDS?
StreamBright 2 days ago 0 replies      
For analytical loads the following is going to be great:

 While PostgreSQL 9.6 offers parallel query, this feature has been significantly improved in PostgreSQL 10, with new features like Parallel Bitmap Heap Scan, Parallel Index Scan, and others. Speedups of 2-4x are common with parallel query, and these enhancements should allow those speedups to happen for a wider variety of queries.

hodgesrm 2 days ago 1 reply      
Impressive feature list. Glad to see logical replication is finally making it in.
hartator 2 days ago 3 replies      
I am considering more and more a move back from MongoDB to PostgreSQL. I will miss being schemaless so much, though. Migrations - particularly Rails migrations - left a bad taste in my mouth. Has anyone made the move recently, and what are their feelings?
knv 2 days ago 0 replies      
Any recommendations for best practices on scaling PostgreSQL? Really appreciate it.
mark_l_watson 2 days ago 3 replies      
I know that several RDF data stores use PostgreSQL as a backend data store. With new features like better XML support, as well as older features for storing hierarchical data, I am wishing for a plugin or extension for handling RDF with limited (not RDFS or OWL) SPARQL query support. I almost always have PostgreSQL available, and for RDF applications it would be very nice to not have to run a separate service.

I tend to view PostgreSQL as a "Swiss Army knife" and having native RDF support would reinforce that.

ams6110 2 days ago 1 reply      
A question on this statement in the SCRAM authentication description: "stealing the hashed password from the database or sniffing it on the wire is equivalent to stealing the password itself"

How is that the case? That's exactly the thing that hashed passwords prevent. Of course, if it's just an MD5 hash that's feasibly vulnerable to brute-forcing today, but it's still not "equivalent" to having the clear-text password.

bladecatcher 2 days ago 1 reply      
This is great because I couldn't go to production with earlier releases of logical decoding. Now we don't have to depend on a third party add on!
mozumder 2 days ago 1 reply      
I could use a count of the number of file I/Os that each query takes, in order to optimize my queries further...
awinter-py 2 days ago 0 replies      
fascinating that the road to improving the expr evaluator is better opcode dispatch and jit -- same tradeoffs every programming language project is looking at right now.
qxmat 2 days ago 0 replies      
DECLARE @please VARCHAR(3) = '???';
MR4D 2 days ago 0 replies      
You guys are awesome - keep up the good work!
awinter-py 2 days ago 0 replies      
the join speedup for provably unique operands sounds awesome
React v15.5.0 facebook.github.io
448 points by shahzeb  3 days ago   202 comments top 22
acemarke 3 days ago 1 reply      
For those who are interested in some of the details of the work that's going on, Lin Clark's recent talk on "A Cartoon Intro to Fiber" at ReactConf 2017 is excellent [0]. There's a number of other existing writeups and resources on how Fiber works [1] as well. The roadmap for 15.5 and 16.0 migration is at [2], and the follow-up issue discussing the plan for the "addons" packages is at [3].

I'll also toss out my usual reminder that I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Specifically intended to be a great starting point for anyone trying to learn the ecosystem, as well as a solid source of good info on more advanced topics. Finally, the Reactiflux chat channels on Discord are a great place to hang out, ask questions, and learn. The invite link is at https://www.reactiflux.com .

[0] https://www.youtube.com/watch?v=ZCuYPiUIONs

[1] https://github.com/markerikson/react-redux-links/blob/master...

[2] https://github.com/facebook/react/issues/8854

[3] https://github.com/facebook/react/issues/9207

TheAceOfHearts 3 days ago 3 replies      
React team is doing an amazing job. I remember when it was first announced, I thought Facebook was crazy. "JSX? That sounds like a bad joke!" I don't think I've ever been so wrong. After hearing so much about React, I eventually tried it out and I realized that JSX wasn't a big deal at all, and in fact it was actually pretty awesome.

Their migration strategy is great for larger actively developed applications. Since Facebook is actually using React, they must have a migration strategy in place for breaking changes. Since breaking anything has such a big impact on the parent company, it makes me feel like I can trust em.

Heck, most of the items in this list of changes won't surprise anyone that's been following the project. Now there's less magic (e.g. React.createClass with its autobinding and mixins), and less React-specific code in your app (e.g. react-addons-update has no reason to live as a React addon when it can clearly live as a small standalone lib).

STRML 3 days ago 6 replies      
It's a big deal to deprecate `createClass` and `propTypes`.

PropTypes' deprecation is not difficult to handle, but the removal of createClass means one of two things for library maintainers:

(1). They'll depend on the `create-class` shim package, or,

(2). They must now depend on an entire babel toolchain to ensure that their classes can run in ES5 environments, which is the de-facto environment that npm modules export for.

I'm concerned about (2). While we are probably due for another major shift in what npm modules export and what our new minimum browser compatibility is, the simple truth is that most authors expect to be able to skip babel transcompilation on their node_modules. So either all React component authors get on the Babel train, or they start shipping ES6 `main` entries. Either way is a little bit painful.

It's progress, no doubt, but there will be some stumbles along the way.

ggregoire 3 days ago 7 replies      
For those still using propTypes, I'd recommend taking a look at Flow as a replacement.


amk_ 2 days ago 1 reply      
The breakup of the React package into a bunch of smaller modules really puts packages that treat React as a peer dependency in a pickle. I have a component module using createClass that works fine and exports a transpiled bundle in package.json. I guess now we'll have to switch to create-react-class, or maintain some kind of "backports" release series for people that are still using older React versions but want bugfixes.

Anyone have experience with this sort of thing?

nodesocket 3 days ago 3 replies      
Big news seems to be removal of `React.createClass()` in favor of:

 class HelloWorld extends React.Component { }
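One concrete migration pain behind this change: `createClass` auto-bound methods to the instance, while ES6 class methods are not bound, so a handler passed as a callback loses `this` unless bound explicitly. A React-free sketch of the underlying JavaScript behavior:

```javascript
class Counter {
  constructor() {
    this.count = 0;
    // Without this line, calling `increment` through a detached reference
    // (the way an event system holds an onClick handler) would throw,
    // because `this` would be undefined inside the method.
    this.increment = this.increment.bind(this);
  }
  increment() {
    this.count += 1;
  }
}

const c = new Counter();
const handler = c.increment; // detached, as an event system would hold it
handler();
console.log(c.count); // 1; works only because of the constructor bind
```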

hueller 3 days ago 0 replies      
This is a good move. Modernization with sensible deprecation and scope re-evaluation with downsizing when more powerful alternatives exist. Too often codebases get bigger when they should really get smaller.
smdz 2 days ago 0 replies      
I absolutely love how the React+TypeScript setup handles PropTypes elegantly. And then you get the amazing intellisense automatically.

  interface State {}

  interface ISomeComponentProps {
    title: string;
    tooltip?: string;
    ....
  }

  export class SomeComponent extends React.Component<ISomeComponentProps, State> {
    ....
  }

uranian 2 days ago 2 replies      
What is it with the JavaScript landscape that keeps forcing developers to do things differently, with the penalty that your app stops working if you don't comply?

I mean, creating a new type of brush for painters is ok, but I don't see the need for forcing them to redo their old paintings with the new type of brush in order to keep them visible..

IMHO CoffeeScript and some other to-JavaScript transpilers are still a much better language than the entire Babel ES5/ES6/ES7 thing. But for some reason my free choice here is in jeopardy. The community has apparently chosen Babel and is now happily annihilating things that are not compatible with it.

In my opinion this is not only irresponsible, but very arrogant as well.

Although I do understand and can write higher-order components, I still write and use small mixins in projects because they work for me. I also use createClass because I enjoy the autobinding and don't like the possibility of forgetting to call super.

Now I need to explain to my superiors why this warning is shown in the console, making me look stupid for using deprecated stuff. And I need to convince them why I need to spend weeks rewriting large parts of the codebase because the community thinks the way I write is stupid. Or I can of course stick to the current React version and wait until one of the dependencies breaks.

It would be really great if library upgrades very, very rarely break things. Imagine if all the authors of the 60+ npm libs I use in my apps are starting to break things this way, for me there is no intellectual excuse to justify that.

PudgePacket 3 days ago 1 reply      
Why do React and other JS libraries emit warnings with console.error when browsers support console.warn?
sergiotapia 3 days ago 1 reply      
Awesome changelog with great migration instructions. Bravo to the React team!

Going to set aside some hours on Saturday to upgrade our React version.

I recently started to go in with functional components where I don't need life-cycle events such as componentDidMount. Does anyone know if React is planning to make optimizations for code structured in this way?

whitefish 3 days ago 3 replies      
I'd like to see React support shadow-dom and web components. Not holding my breath however, since Facebook considers web components to be a "competing technology".

Unlike real web components, React components are brittle since React does not have the equivalent of Shadow DOM.

Rapzid 3 days ago 3 replies      
Fiber is what I'm really waiting for. Not much official chatter about it, but looks like a 16 release?

They just removed some addons in master that many third party packages rely on, including material-ui. Hopefully these other popular packages can be ready to go with the changes when the fiber release hits.

baron816 3 days ago 3 replies      
I hate that the React team prefers ES6 classes. This is what I do:

  function App(params) {
    const component = new React.Component(params);

    component.lifeCycleMethod = function() {...};
    component.render = function() {...};

    function privateMethod() {...}

    return component;
  }

xamuel 3 days ago 2 replies      
Happy to see propTypes getting shelved. Too many people stubbornly use propTypes even in Typescript projects. Hopefully this change will usher in the final stamping out of that.
bsimpson 3 days ago 0 replies      
Of course, you'd need to use super appropriately, but I wonder if anyone's taken a stab at porting React mixins to ES2015 mixins:
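For reference, what people usually mean by "ES2015 mixins" is the subclass-factory pattern; a minimal sketch with hypothetical names, not tied to React's API:

```javascript
// A subclass factory: the mixin takes a base class and returns an
// extended class, so mixins compose via plain inheritance chains.
const Loggable = (Base) => class extends Base {
  log(msg) {
    return `[${this.constructor.name}] ${msg}`;
  }
};

class Component {}          // stand-in base, not React.Component
class Widget extends Loggable(Component) {}

const w = new Widget();
w.log("mounted");           // "[Widget] mounted"
```

Multiple mixins then stack naturally, e.g. `class X extends A(B(Base)) {}`, which is how the pattern sidesteps the multiple-inheritance problems of classic mixins.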


Drdrdrq 2 days ago 1 reply      
Just curious: did Facebook change the license? AFAIK they can revoke permission to use from 3rd parties. Am I mistaken? If not, isn't this a huge risk for startups?
iLemming 2 days ago 0 replies      
I wonder how these changes would affect Clojurescript libs built on top of React, e.g.: Om.Next and Reagent
ksherlock 3 days ago 2 replies      
So... create-react-class is an unrelated node module. react-create-class (the correct one, I guess) is completely empty, other than the package.json.
aswanson 2 days ago 2 replies      
I cannot keep up. I just started learning react/apollo/graphql and I'm already out of date.
revelation 3 days ago 3 replies      
I still remember the times when warnings were actual likely mistakes in your code, not "we're adding some more churn, update your stuff until we churn more".

If you want people to always ignore warnings, this is how you go about it.

lngnmn 2 days ago 1 reply      
Why do people always end up with J2EE-like bloatware? There must be some pattern, something social. Perhaps it has something to do with the elitism of being a framework ninja, a local guru who has memorized all the meaningless nuances and can recite the mantras, so one can call oneself an expert.

The next step would be certification, of course. Certified expert in this particular mess of hundreds of dependencies and half a dozen tools like Babel.

Let's say there is a law that any over-hyped project eventually ends up somewhere between OO PHP and J2EE. Otherwise, how else would one become an expert front-end developer?

Google's responsive design looks like the last tiny island of sanity.

Farmers look for ways to circumvent tractor software locks npr.org
420 points by pak  1 day ago   322 comments top 29
Sytten 1 day ago 2 replies      
My dad is a farmer and I can assure you that this is a real problem. Every piece of equipment now has its own proprietary, closed-source and, most of the time, incompatible software. Plus, many of them don't get any updates after the product launch. When you are in a rush to plant or harvest you just can't afford to wait for an authorized dealer. And if they fail, good luck trying to find a replacement that is not 100x overpriced because it has been discontinued one year after you bought it. I tried repairing a GPS system once and it required a special serial cable + software which cost more than $100 just to update the driver...
jaclaz 17 hours ago 3 replies      
I have the feeling that somehow the actual need has been put aside for philosophical (or Open Source, etc.) reasoning (nice, but not the original issue).

More or less what the good farmers are asking for (which is not about the code, the kernel or whatever; they are not "hackers", just as the authorized JD technicians are not computer experts or programmers or software engineers) is just access to the "database" of part serial numbers of the machine.

Loosely, the way it works (simplified) is a database where the (say) pressure sensor #42 has been registered (authorized) in the operating system as having serial number #0123456789. When the sensor breaks and has been replaced with a new (original or verified third-party) sensor, you need to update the database, telling it that the sensor with serial #0123456789 has been replaced with sensor serial number #2223334445, and (of course it depends on the specific part) possibly run a "self-test" program to verify that the sensor works properly and maybe tune/regulate it.
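That workflow can be caricatured in a few lines; everything below is hypothetical and only illustrates the shape of the "database" the farmers want write access to, not any real John Deere system.

```javascript
// Hypothetical parts registry: each slot maps to the serial number
// currently authorized for it. Replacing a part means updating the
// mapping and running the part's self-test.
const registry = new Map([
  ["pressureSensor#42", "0123456789"],
]);

function replacePart(slot, newSerial, selfTest) {
  registry.set(slot, newSerial);  // authorize the replacement part
  return selfTest(newSerial);     // verify/tune the new part
}

const ok = replacePart("pressureSensor#42", "2223334445",
                       (serial) => serial.length === 10);
```

The point of the comment above is that only the `replacePart` step is gated behind an authorized dealer, not anything resembling source-code access.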

The farmers do not want the source code, they don't want to modify it, they don't want to "hack" anything, they simply want to be able to replace a part and have the thingy work.

Going back to software, let's talk about, say, Windows 7 (yeah, I know that all the rage is about Windows 10 nowadays) and its activation. Imagine that instead of having one month to activate a new install (either through the internet, through the automated phone call in case that doesn't work, or through a support phone call for particular cases where the previous two options fail), activation was:

1) Immediately mandatory (i.e. the OS wouldn't work until activated)

2) ONLY available through a local visit by an MS agent (9 to 5, Monday to Friday) at a cost of (say) US$ 100.00/hour + US$ 1.00/mile

nottorp 20 hours ago 1 reply      
I don't get why all this chat is about software, licensing and software safety.

The way I read it, a farmer can't change even, say, a brake pad (or whatever tractors use) without authorization from John Deere. I very strongly doubt that they want to mess with the software, they just want to perform minor maintenance themselves.

TaylorAlexander 1 day ago 8 replies      
I think we'd all be better off if basically everything was open source, by way of eliminating intellectual property protections provided by governments.

As an alternate solution, those of us with engineering skills can work to create an open source economy with open source factories, computers, and products.

This would never be a problem nor would it be likely to happen if genuinely competitive options existed for farmers that were not locked down.

Another way we in this community can help is by helping smaller businesses learn the value of open source and get them using and creating it.

I believe with a sufficiently open source base in our economy, we can make great headway into eliminating material poverty.

I write a little about this on my personal site, here:


throwaway_jddev 1 day ago 18 replies      
Hey all, I worked on software for John Deere. This is a throwaway account for obvious reasons. Opinions expressed here are MY OWN. I no longer work for John Deere or am associated with them in any way.

I was part of one of the many teams that work on this software. Specifically, I was part of John Deere's ISG division, also known as the Intelligent Solutions Group. The ISG division was (at the time) responsible for tying together various software built by OEMs, for building the central UI within the cabin, and for building various debugging and build tools. The team I was on consisted of about 8 very senior engineers, and I think there were around 20 total engineers working for ISG at the time (though I saw, and knew, only a handful of them). Now, when I say OEM integration, I mean suppliers and other John Deere divisions with their own teams mirroring ours. All told, I would estimate that John Deere has somewhere between 150-300 engineers working full-time on their codebase for their tractors.

Let me disabuse you of any myths. I have worked in software for 20 years. I have worked in large enterprises, and scrappy startups. This software is by FAR the largest, most complex codebase I have ever interacted with. Submission of any new code was seriously considered and reviewed before it entered production (sometimes to a pedantic degree), after which JD put all new code through tens of thousands of hours of testing on production equipment. Production and release cycles take on the order of months to ensure that we don't kill people.

These are not riding lawnmowers. They are 30-ton combines and 20-ton tractors tilling fields, with massive horsepower behind them. They have a real potential to end people's lives in the event of failure, and these tractors do (in testing) fail in spectacular ways. If a team of hundreds of engineers struggles with this codebase internally, Joe Farmer isn't going to have a fucking clue how to repair their software correctly.

Now should you, in theory, have the right to modify equipment you own? Sure. Absolutely. Hell, John Deere tractors run on open source software. But trust me on this, locking this down is a very good idea.

If you have the drive to make open source tractor software AND can make absolutely certain no-one ever dies from code you write, then go do it. Just keep in mind that the engineers that work on this shit really care about keeping people safe.

kccqzy 23 hours ago 1 reply      
This really reminds me of how Richard Stallman started GNU. It was because he couldn't modify the software on a printer he used.
tim333 1 day ago 0 replies      
Previous discussion https://news.ycombinator.com/item?id=13925994 (177 comments)
intrasight 23 hours ago 1 reply      
While this is a fascinating question in the context of tractors, it gets even more interesting in the context of cars, houses, personal electronics, light bulbs. The First Sale Doctrine is being eroded by DRM.
userbinator 1 day ago 1 reply      
Every time I read or hear about new developments in creating safer/more secure software, I am reminded of scenarios like this. Companies could use formally verified crypto and such to provably and completely lock out users by destroying all means of circumvention. In that sense, I think these secure technologies are like nuclear weapons --- extremely powerful, too powerful. Society in general seems to rely on some insecurity to maintain its freedom; so I believe anyone who advocates for more secure systems should also carefully consider all the negative effects which will appear if their vision comes true, and whether they are, however indirectly, locking themselves out.
mabbo 1 day ago 1 reply      
It's not just the farmer paying for this: it's everyone who eats food.

The time wasted is lost productivity. The extra fees just for a software unlock is lost money. The farmer has to either charge more, or go out of business sooner. Either way, the cost of food rises.

swanson 1 day ago 3 replies      
It seems so unbelievable to me that there are enough people that are a) John Deere equipment owners/renters and b) capable of debugging and patching issues in a C++ codebase for these stories to keep appearing.

Debates on the virtues of open source aside, is this actually the solution? Or is it a symptom of, say, poor quality software releases? or service visits that are too costly? or overloaded dealers who can't handle harvest-time support loads? I just don't believe that allowing people to tinker with the software is going to be the magic answer that these folks seem to think it is.

itchyjunk 1 day ago 5 replies      
"farmers could damage the machines, like bypassing pollution emissions controls to get more horsepower."

Isn't this the problem with warranties? People could try to mod it, end up damaging it and try to get it replaced with warranties.

I also don't fully understand this software. Is it just completely vendor locked? That sounds really unreasonable. It should allow for at least basic debugging and trouble shooting.

Is this software locking only in large $100k + harvester type equipment? Does the vendor have other reasonable explanation of doing this?

I wonder if the software designer for this equipments would have reasonable arguments for such locks or if this is just profit driven decision.

ivanhoe 17 hours ago 0 replies      
Couldn't they organize and sue the manufacturer for their losses due to tractor malfunctions in critical periods and being prevented from servicing them promptly? I understand it's a bit naive, but you don't solve this by hacking around the problem; you solve it by attacking the problem through the institutions of the system.
andrewchambers 1 day ago 6 replies      
I don't like the idea that we have to pass laws to force companies to make a better product. Why can't a company take the initiative and grab all the customers who value this?
arca_vorago 1 day ago 2 replies      
I wonder, is there a GPL tractor software project out there yet?
peter_retief 19 hours ago 0 replies      
This seems to be a regular way to lock in customers, I think of ink jet cartridges and even 3d printer refills use similar tricks. I wonder if there is a case to be made in making this illegal or optional
kvncombo 13 hours ago 0 replies      
The big boys have obviously cornered the market. Is there any farming equipment company that provides more open access for maintenance and repairs? If not, why not? It seems there is an opportunity there.
squarefoot 19 hours ago 0 replies      
SmartTV and other appliances are closed too, so that users must purchase a new one when for example codecs become obsolete. Sadly, the closed source model is not just being used where there are safety concerns involved.
cmurf 1 day ago 0 replies      
Among the most successful cars were those with straightforward replacement parts, a de facto standard, and reasonably well-documented and available maintenance manuals.

Tesla wants control over this, by literally renting out the maintenance manual, and remotely disabling the car if repairs or parts aren't authorized. I don't expect the Model 3 market will appreciate this business model. It will be a much more price-sensitive market compared to the early models, which have been relatively inelastic on repairs and resale.

Consider the x86 computer market, if every component had signed firmware, and the main system verified this signature in case of component replacement, and would fail to function at all if signature verification failed. What a pain...

Consider voting machines, proprietary hardware, expensive to design, maintain, audit, and go obsolete in as few as 1/2 dozen uses. Compare that to pencil and paper.

The older I get the more Darth Vader I become: "Don't be too proud of this technological terror you've constructed." (Let's say the Force is common sense in this metaphor.)

intrasight 23 hours ago 1 reply      
The jet engines in a modern airplane offer some analogies here. You can think of an engine as PaaS (Propulsion-as-a-Service). A tractor is HaaS (Harvesting-as-a-Service). Our technologies have reached a level of complexity where they must be offered as a service. Cars will soon be Mobility-as-a-Service. Putting on my economist hat, I'd say that it is the ultimate manifestation of "comparative advantage". If JD abuses their monopoly position, then fix that through the courts and legislation, or by buying a competitor's product. Don't try to hack the terms of service. But JD and other "product" vendors need to make it clear that they are in fact selling a service.
andai 18 hours ago 0 replies      
Can someone please explain what software has to do with repairing a tractor?

Edit: it looks like the physical components themselves are DRMed? Wtf?

douche 1 day ago 2 replies      
There's going to be a huge market in used pre-DRM heavy machinery. The purely mechanical/hydraulic versions of this stuff are virtually indestructible, with a little maintenance.
shawn-butler 1 day ago 0 replies      
The Motherboard/Vice article that NPR is blatantly ripping off here is much better, in my opinion.


tbyehl 1 day ago 5 replies      
I'm starting to wonder if these articles are driven by a PR firm paid by John Deere's competition. They're always about John Deere and only John Deere. Aside from the Motherboard / Vice article, they never provide any specifics about the maintenance or repair operations that farms are prevented from doing on their own.

With the Vice article, 2 of the 3 things they mention are modifying the tractors to operate in ways the manufacturer did not intend which could result in damage.

watertom 1 day ago 0 replies      
What's ironic is most of these farmers are republicans and voted in the people who enable this crap.
arkis22 23 hours ago 1 reply      
Everybody likes to feel taken advantage of.

If I was a business owner or engineer that built systems this complex and you asked me to not lock it down, I'd call you freaking crazy.

These are very expensive and complex machines, and you want my competition or some farmer who has no idea what he's doing to access and modify it?

No thank you.

Google keeps proprietary code. And that's for auto complete...

soheil 23 hours ago 4 replies      
Imagine, not so long in the future, if self-driving cars were forced to reveal their code because of a right-to-repair bill: 1. who, without a vast depth of knowledge in C++, etc., would be able to go anywhere near it? 2. Even if they did, is it in their best interest, or anyone else's, to tinker with the code and make the car take undesirable actions?

Maybe buying a tractor should be replaced with leasing tractors, if they never want you to fully own everything in it. I think very soon there will be more and more of a need for a new way to determine what products are allowed to be sold partially with a secret OEM key.

notliketherest 1 day ago 9 replies      
When we buy a piece of software, we own and "physically" posses a binary which we feel we can rightly take apart, modify, and mess with it because we view it as analogous to owning a toaster or stereo in the physical world. It's in our home, we can touch it, and we in essence control it.

Now the same is never said for software as a service. We buy subscriptions to services all the time but don't demand the ability to modify or control the software. It's defined in our agreement. Now it seems to me that the companies that sell these tractors have decided to pursue a model by which their software is more or less SaaS (providing encrypted updates over the air). Why is it that these farmers believe they have a right to modify that software?

What is this colored fiber in my chicken? stackexchange.com
394 points by kurmouk  22 hours ago   293 comments top 11
Hexcles 16 hours ago 15 replies      
Animal welfare aside, I find people in North America really in favour of chicken breast, much more than other parts, say chicken thigh. Yet I myself think chicken thigh tastes much better, especially with the skin (yet again it is usually skinless in supermarkets here, unfortunately). Is it because of nutrition (percentage of fat/protein etc.)? On top of that, chicken feet are considered unacceptable by many...
ncr100 21 hours ago 7 replies      
Fake meat cannot come soon enough - poor bird was encouraged to grow in an unhealthy manner resulting in dead tissue inside it while it was still alive. I wonder if it was painful for the bird having this tough dead tissue at the core of its breasts.
pvaldes 18 hours ago 1 reply      
This is not a problem caused by antibiotics. It's a problem of genetic nature that happens because broilers are inbred to grow big and fast. The same birds on true free range with plenty of food would face exactly the same problem, with or without antibiotics. They are too heavy and often tend to have cardiac diseases, but they live short lives and are delicious, so they are the most successful bird on the planet.

On the other hand, we are a paradoxical species. Able to feel horrified by this, while happily petting our distorted-faced bulldogs, Persian cats, caesarean-born bull terriers, extra-dwarfed toy Yorkshires, ponies and toy mini pigs, without any trace of moral conflict...

bambax 21 hours ago 7 replies      
Brown meat is much, much more delicious; why anyone would prefer white meat is beyond me.

And the irony is that, in buying "heavy breasted chicken", customers pay for something they can't consume (assuming chicken is priced by the pound in the US).

hellofunk 16 hours ago 1 reply      
People just eat way too much chicken. The number of chickens consumed every year in most western nations is astounding. And those poor birds, the way they are packed to the point of not even being able to walk while they are raised; it's really a lot more disgusting than the final product shown in this article.
Devagamster 21 hours ago 5 replies      
This is kinda terrifying. I can't put my finger on why exactly but dang.
nscalf 1 hour ago 0 replies      
Certainly one of the weirder things I've seen on HN.
pillowkusis 10 hours ago 0 replies      
If this disgusts you for moral/guilt reasons, there is a good solution: eat less meat! You don't have to go full vegan.

I think people employ black and white thinking about this -- either I eat all meat or I become vegetarian/vegan and eat no meat. That's hard, and you're likely to fail (hard cutting to vegan would be quite difficult, regression rates after a few months are sky high).

Instead, simply resolve to eat less meat. Make Thursdays meatless. Opt for the vegetarian option when eating out. Slowly reduce your meat consumption to a level you're happy with. Learn about meat alternatives (which have gotten really, really good in the last few years).

If everyone in the US took one day a week to have no meat, the whole industry would change dramatically. Baby steps!

codr4life 18 hours ago 2 replies      
andrewclunn 16 hours ago 2 replies      
Your concern for animal welfare will never be enough for the animal rights people (just read the other comments here).
enibundo 16 hours ago 1 reply      
I have a solution (downvotes are welcome): eat an organic and mostly vegetarian diet.
Great Barrier Reef at 'terminal stage' theguardian.com
344 points by mjfern  1 day ago   172 comments top 19
crawrey 22 hours ago 5 replies      
I grew up on the coast of the Great Barrier Reef in North Queensland and I can say that the current state of the reef is almost unrecognisable to what it was 20-odd years ago.

While a large part of this damage has been caused by rising sea temperatures, another large component is due to the run-off from agriculture, refineries and mining. The latter are directly contributed by the local population.

The region is currently in an economic recession and many of the mines and refineries have either slowed or ceased operation. Anecdotally, the people whose (un)employment is affected by these industries are either unaware of or indifferent to the damage that the industries are having on this sensitive ecosystem. Instead they are consumed by how they are going to make ends meet.

In this environment, it is unthinkable to allow Adani to expand their Carmichael mine and further exacerbate the situation. Add to this that a former Adani board member has been appointed to evaluate the environmental impacts of the expansion. Adani is the biggest direct threat to Australia, both environmentally and economically, and they are in talks with the government to be provided with a $1 billion taxpayer-funded railway line. Adani and the Carmichael mine expansion are riddled with corruption.

The issue of the reef and climate-change in general is a fairly untouched issue in Australian politics. I'm not sure that we are going to get anywhere without foreign intervention.

If you are interested, I do urge you to read some material on Adani and the Carmichael coal mine expansion and perhaps donate to a "StopAdani" cause.

Wish us luck.

spodek 17 hours ago 7 replies      
Many posts here about how sad and disgusted people are. Not much about people taking personal responsibility.

What do people think the carrying capacity of the planet means? Sustaining more humans means sacrificing other life that competes for our resources. It means pollution rising until it doesn't quite kill us but is well above the levels of a pristine, clean environment.

Nobody wants to live near the carrying capacity because approaching it means sacrificing anything that doesn't keep us alive.

Every round trip flight across the country you take contributes almost one year's allotment from the Paris agreement for one person -- https://co2.myclimate.org/en/portfolios?calculation_id=71970.... Flying first class and you're well over it. Eating meat contributes a lot too. Having more kids in western cultures contributes significantly.

Who among us, reading this, hasn't gone over their annual limit in just a few hours of flying, not to mention their regular life otherwise? How many have blown past their Paris limits already this year?

Some would say the damage was done by past generations. Okay, well what beautiful part of nature will our behavior destroy years from now? People keep posting to HN that since we can't change that a lot will happen, it doesn't matter any more, we should just enjoy ourselves, but there are different degrees of destruction.

Alternatively, we can fly less, drive less, eat much less meat, and have fewer kids. We don't need to wait for legislation. In fact, it's the fastest way to get legislation, since politicians follow voters.

In my experience, acting on all those things improved my life tremendously (including not flying for a year+ http://www.inc.com/joshua-spodek/365-days-without-flying.htm...), more than almost anything else. I'm more fit, enjoy my neighborhood and neighbors, and spend less money and there's nothing special about me.

Luminarys 23 hours ago 1 reply      
Although I've never seen the Great Barrier Reef, I was recently in Belize and snorkeled around its Barrier Reef. It was painfully obvious that the reefs were extensively bleached(though still quite beautiful). It's quite shocking to think that 30 years ago the reefs looked completely different from now and in another 30 years may not even be around. I hope that in the future we won't be reduced to having to point to pictures in a book if people want to witness the beauty of nature but this seems increasingly inevitable. What a shame.
jozzas 22 hours ago 0 replies      
There are some excellent scientists and programs attempting to improve water quality (particularly catchments that flow into the reef) and the crown of thorns starfish.

Unfortunately these are badly underfunded, not coordinated at a national level as they should be, and do not address the biggest threat - climate change. The reef is doomed unless something is done about CO2 emissions. It's probably already too late.

The loss of the GBR will see a collapse of tourism industries, and entire ecosystems will die off. There are going to be huge impacts in the next 15-20 years to come out of this.

A lot of low lying countries in the pacific will get the triple whammy of increased cyclone activity, rising sea levels and a loss of reefs and the fish populations that they subsist on. There are huge humanitarian disasters ahead.

H4CK3RM4N 23 hours ago 4 replies      
Sadly I can't remember the last time we had any real action to protect the reef, and our current government is all too willing to put businesses above the environment.
ohashi 23 hours ago 1 reply      
I saw a bleaching event in Thailand last year, it really is depressing to see. This year, the same sites have seemingly recovered and I'm really happy about that. But seeing the pictures of completely bleached white corals in the article and knowing that's probably the future for a lot of these reefs breaks my heart. Coral reefs are magical places and we're destroying them for future generations, maybe even the current one.
huckyaus 21 hours ago 1 reply      
My cousin Sam[0] runs the environmental side of things at GetUp and is putting a lot of time and effort into raising awareness about the reef. They're currently fundraising[1] for a targeted campaign in 12 Liberal electorates with the aim of encouraging MPs to break their silence and listen to their constituents on the issues surrounding Adani and the GBR.

Full disclosure: GetUp is a politically partisan organisation with strong left leanings. But I think the work they're doing around these issues is rooted more in common sense than politics.

Is anyone else involved in any grassroots-level efforts to save the reef? I'd be interested to hear about them.

[0] https://twitter.com/samregester

[1] https://www.getup.org.au/campaigns/great-barrier-reef--3/the...

Red_Tarsius 23 hours ago 2 replies      
This is what keeps me up at night. If we don't find an efficient way to extract CO2 and methane from the atmosphere, I fear that mankind might go extinct.
orschiro 23 hours ago 1 reply      
The sad fact is that even these devastating developments do not make us change systemically to the extent required to counteract them.
psynapse 21 hours ago 0 replies      
This really saddens me.

I spent a week or so on Lady Elliot Island more than a decade ago, snorkelling every day. Because it is a protected area, the fauna are unafraid of people. I would dive down into these cavernous bowls of coral and be surrounded by schools of fish, rays, turtles. It was amazing.

I live in Europe now, but I always hoped to take my children there to see it one day. Seems there won't be much to see.

infradig 20 hours ago 1 reply      
The real culprit for coral bleaching? A swift fall in mean sea level during a major El Nino event. See:http://www.biogeosciences.net/14/817/2017/
zipwitch 14 hours ago 0 replies      
This is merely the inevitable outcome of what we've known has been coming for a while, even if many haven't wanted to acknowledge that the corals were effectively already dead.http://www.nytimes.com/2012/07/14/opinion/a-world-without-co...
rodionos 22 hours ago 0 replies      
> Coral bleaches when the water it's in is too warm for too long. The coral polyps get stressed and spit out the algae that live inside them. Without the colourful algae, the coral flesh becomes transparent, revealing the stark white skeleton beneath.
josscrowcroft 16 hours ago 1 reply      
Is improving water quality the decided-upon method for preventing (or even reversing) bleaching of coral reefs?

It seems like that ship has sailed, and now more technological advances are required.

Speaking from zero expertise or experience in marine biology, is it not possible to manufacture massive quantities of synthetic coral that somehow corrects for the changing water quality to enable coral life to flourish?

ozy 11 hours ago 0 replies      
So ... when do we shoot some rockets high up and spray something reflecting sunlight, changing earths albedo? Plan B?
Slobbinson 19 hours ago 1 reply      
Pauline Hanson & Malcolm Roberts will pose in front of the coral display at the Townsville Aquarium and tell us it's all a beat-up.
good_vibes 21 hours ago 6 replies      
What can we do? Serious question.
hoodoof 23 hours ago 4 replies      
harry8 15 hours ago 1 reply      
I've been hearing the reef is at death's door for 30 years. When I go, it looks amazing. Still. Maybe this scare campaign is different to all the other ones. Maybe this time as they cry wolf there really is a wolf. Maybe... If so it's a perfect example of the environmental movement destroying their own credibility so that not many in Australia, among those who love the reef, care what they're saying this week. People need to call bullshit on bullshit whether that bullshit is in the service of great justice or not. It's still bullshit and it still trashes credibility. Outrage fatigue is far worse when you know you've been had.
Ask HN: Do you still use browser bookmarks?
415 points by ethanpil  3 days ago   430 comments top 258
Houshalter 3 days ago 11 replies      
Of course, and I'm surprised many people don't. Chrome handles bookmarks well, automatically syncing them between different machines you are signed in on. I used to have them nicely organized into different folders but now it's a bit of a mess... It's especially useful to deal with tab explosion. Control+D and you can just save all your tabs in a single folder (and never look at them again.)

The biggest problem is linkrot. As a rough estimate, 13% of links die every year, and it's quite possibly much higher than that (https://www.gwern.net/Archiving%20URLs). Without the glorious Web Archive, bookmarks would be unusable. And I wonder how many people know about the Web Archive. Youtube-dl may also be useful if you want to preserve music or videos (despite the name, it works on almost every site I've tried it on, including audio sites.) Someday I intend to script something up to automatically scrape all my bookmarks and make a local copy, but it seems complex.
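For what it's worth, a local-archive script like the one imagined above can start from Chrome's own bookmark store, which is a plain JSON file (on Linux typically `~/.config/google-chrome/Default/Bookmarks`). A minimal sketch in Python — the file path and the idea of feeding the URLs to a fetcher afterwards are my assumptions, not anything the commenter described:

```python
import json

def walk_chrome_bookmarks(node):
    """Recursively yield (name, url) pairs from one node of Chrome's
    Bookmarks JSON tree; each node's "type" is either "url" or "folder"."""
    if node.get("type") == "url":
        yield node.get("name", ""), node["url"]
    for child in node.get("children", []):
        yield from walk_chrome_bookmarks(child)

def all_bookmarks(bookmarks_json_text):
    """Collect every bookmark under every root ("bookmark_bar", "other", ...)."""
    data = json.loads(bookmarks_json_text)
    found = []
    for root in data.get("roots", {}).values():
        if isinstance(root, dict):  # roots can also hold plain metadata values
            found.extend(walk_chrome_bookmarks(root))
    return found
```

Feeding the resulting URL list to `wget --page-requisites` or the Wayback Machine's Save Page Now is the obvious next step — and is where the "seems complex" part (logins, JavaScript-rendered pages) actually lives.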

Cyph0n 2 days ago 11 replies      
I have a ton of bookmarks, but I use them passively. From my experience, Firefox is the undisputed king of making sure anything you type in the address bar will be instantly checked against your bookmark collection.

For instance, maybe I'm looking for a PostgreSQL tutorial. I start typing "postgres" and one of the bookmarks I forgot about from several months back appears. This approach has ended up saving me a lot of time over the years. Another cool thing is when a bookmark pops up while I'm searching and brings back memories. If the site is still up, I get a free trip down memory lane :)

My collection is at least 9 years old now. I've been maintaining the same Firefox database over the years by migrating it manually from version to version. Now it's seamless thanks to Firefox Sync. I get my bookmarks on my PC, laptop, and my phone. I have an Xmarks account as a backup, and for cases when I prefer to use Chrome.

jcrites 3 days ago 6 replies      
I don't use browser bookmarks but I do use bookmarks through pinboard.in: https://pinboard.in/u:jcrites

With a paid feature called an archival account, Pinboard stores an actual copy of each bookmarked article, kind of like your own private Wayback Machine. It provides full text search over these articles.

I frequently save articles that I read so that I can refer to them later. It doesn't happen often, but once in a while I will desire to access an article that I read a few months or years later, and I find Pinboard well worth the value for making it possible for me to actually identify the article and retrieve its content regardless of whether the original link is still around.

I find this especially useful because it is my habit to collect citations for various facts. When I find myself making a claim in conversation, I really want to be able to access the original source where I learned about the fact, and provide the evidence to back it up. Or to review the source to confirm that my memory of it is accurate. Or sometimes I want to share a useful article explaining some topic with a colleague or friend.

I do occasionally use browser bookmarks as a sort of clipboard or working set, for 5-10 links at a time. I use Google Chrome and it syncs bookmarks between my devices.

ikawe 3 days ago 5 replies      
I probably have 500 bookmarks. I never click on them though.

Instead I (ab)use bookmarks as a way to increase the weight of URLs in chrome's navigation bar autocomplete/suggestion algorithm.

e.g. If you find that you're going to a site's homepage and clicking through three times, then once you get to the actual page you want, bookmark it. You can even give it a more memorable name, like "standup hangout", and then watch it autocomplete from the address bar the next time you start to type the URL.

kusmi 2 days ago 4 replies      
I used to. I now use Zotero to save whole pages to WebDAV; from there, a bunch of scripts peel the ads off, scrape the text, convert to PDF, store it in a CMS, and index it for full-text search on Solr. I also hooked up Dropbox to do the same for one-click archiving from mobile. Since Dropbox and the WebDAV share are shared between my partners and me, it's a convenient way to build a knowledge base. I'm experimenting with hooking up Telegram and Slack as well, to integrate everything into a no-hassle user experience. The real pain in the ass is passing the URL itself, consistently, without insisting users use another third-party app.

*Forgot to mention the best part: the backend pools these full-text documents, cleans and parses them for NLP, then generates meaningful tags and organizes documents into an auto-generated folder hierarchy based on word2vec/doc2vec and content clusters. The whole thing runs on a dedicated server with two GTX 1070 video cards for the NLP work, which is training and re-evaluating constantly as new content pours in.

Altogether it was 2-3 years of work.

threepipeproblm 2 days ago 0 replies      
At some point, it occurred to me that almost all of the bookmarks in my ever-expanding collection really represented "to do items" more than "reference items".

As others have said, most things can easily be searched as needed. But I was using bookmarks as placeholders, saying "I wanted to read x later", in most cases... sometimes other things.

So I started treating bookmarks as various categories of todos. I do have a reference folder, but it has less than a hundred items. I often use those only passively -- i.e. when typing into the address bar, the starred link will come up first.

All the other links are sorted into categories such as "files to download", "new articles", "new buyables" and so forth.

Now that I think of Bookmarks as deferred work, it has changed a lot of habits. My total number of bookmarks has slowly dropped, and I tend to handle more stuff as it comes, or not at all -- or at least to be more conscious of bookmarks as a cost.

An unexpected benefit has been a feeling of mental satisfaction, after closing a lot of semi-forgotten, open loops. I now think a big unorganized pile of bookmarks can represent a real liability, whereas if you actually go through all those links and delete the weaker ones you get a concentrated pile of goodness. You hit a point where you'd rather read your remaining bookmarks than most news feeds.

mr_spothawk 2 days ago 3 replies      
I have tons of bookmarks. Pro-tip: make a bookmark, edit the bookmark, and set the title to "" (the empty string). Then you have its favicon as your site launcher.


sometimes I make use of the "open all bookmarks in this folder" feature.

other times I use the bookmark to (as somebody else mentioned already) weight consideration of sites I'm interested in getting results from.

aside: at Hack Reactor, I worked with some folks on the beginnings of a Chrome extension to grab your bookmarks, analyze the content of each site, and suggest new bookmarks when you open a new tab. the suggestions part was already working by the time I came around. then I got a job and it pretty much fell off the priority list... heh.

INTPenis 2 days ago 2 replies      
No, and it worries me. I normally have a great memory; I speak several languages, and several computer languages too. I was raised in the era before search engines, when bookmarks were important.

But these days it worries me to say that I just visit the same three websites over and over. Aggregation websites with links and content.

Sometimes I find myself staring at the URL bar, unable to think of anything to do because I've already visited my three websites.

Of course, besides those three aggregators there are sites like Google and Stack Exchange that I visit indirectly, and any blogs, forums and such that I might find through Google.

bm98 3 days ago 9 replies      
I'm a little surprised that the majority of the answers here are Yes!

I help my parents and my kids work with bookmarks, but I have none myself, and I was beginning to think that bookmarks were primarily used by non-technical people. I guess I was wrong!

Everything I need is a simple URL (like, my bank: usaa.com - why would I bookmark that?) or a quick Google search away. If I come across a deep link that's so important that I want to keep it, I email myself the link along with maybe a short description, and it will be searchable forever.

My lack of bookmarks fits with the rest of my "online personality". I have 14,183 threads in my work email inbox and I do not file emails into folders like most of my colleagues do. I do not have the desire or the time to manage email folders or browser bookmarks.

Also, the fact that I browse in a "clean" browser instance in SELinux that saves no history from instance to instance probably contributes to my lack of bookmark use.

jacquesm 3 days ago 3 replies      
I do, but I've also come to rely on a plug-in called 'scrapbook'. It allows you to cut a snippet from a webpage and save it along with the url of the original.

Very handy, and it also protects somewhat against linkrot.

I've tied it to a hotkey to copy any bit that is highlighted to the currently open scrapbook. (shift-ctrl-b) without further notifications or interaction other than the keystroke. Super quick and it doesn't get in the way of continued reading.

Existenceblinks 3 days ago 2 replies      
My bank's URL is hard to remember, and searching for it on Google risks falling victim to fake sites. So anything fake-able goes in my bookmarks.

Well, I've bookmarked a ton of URLs and rarely revisit them :( It's like having a camera: take photos and forget them forever. It's a tool to help you forget things, not remember them, sadly!

morganvachon 3 days ago 1 reply      
I use them in three ways: My most used bookmarks live on my bookmarks bar in Firefox with the text removed, so they are just icons of the favicon.gif from the server, screenshot example here[1]. The lesser used ones live in the "Folders" folder under a tree style arrangement. The third method is via the "ReadLater" folder which contains links I didn't have time to fully read right away, and acts as a sort of manual version of Pocket or similar apps.

[1] http://storage7.static.itmages.com/i/17/0408/h_1491614673_38...

ams6110 3 days ago 0 replies      
I have a home.html file that is my browser default page. It has all the links I use regularly, organized in a few columns that I think make sense, but more honestly I use it mainly by muscle memory. It also has input fields for a couple of different search engines.

It's very simple, no javascript and just a tiny bit of CSS.

Any time I want to update it, add a link, etc. I just use a text editor.
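A hand-maintained start page like this stays refreshingly small. For anyone who'd rather regenerate it than hand-edit, here's a rough Python sketch that renders a dict of links into the same kind of plain, JavaScript-free page — the column layout and styling are my own guesses, not the commenter's actual file:

```python
def render_home_page(sections):
    """Render {column_title: {link_label: url}} into a minimal start page:
    a few columns of links, no JavaScript, a tiny bit of CSS."""
    columns = []
    for title, links in sections.items():
        items = "".join(
            '<li><a href="{}">{}</a></li>'.format(url, label)
            for label, url in links.items()
        )
        columns.append("<div><h2>{}</h2><ul>{}</ul></div>".format(title, items))
    return (
        "<!doctype html><html><head><meta charset='utf-8'><title>home</title>"
        "<style>body{display:flex;gap:2em;font-family:sans-serif}</style>"
        "</head><body>" + "".join(columns) + "</body></html>"
    )

# Write the file once, then set it as the browser's home page.
page = render_home_page({"News": {"HN": "https://news.ycombinator.com"}})
```

The generated file can then simply be opened with `file:///.../home.html` as the browser's default page.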

David_R 11 hours ago 0 replies      
Mac users...take a look at URL Manager Pro.

Macintosh-only stand-alone local database for URLs; costs $25; url-manager.com. I've been using it for years... I have tens of thousands of bookmarks... it's where I keep all my research... it has saved me many times. All data lives in one local file (or Dropbox)... easily portable; no annual fee.

Click on the icon in the menu bar to save the current web page Title and URL; optional field for Notes permits you to add keywords or large excerpts from the site or article.

Organize the database/outline by making folders and sub-folders. Fast database search.

Mac OS X app: customizable toolbar, standard Window menu and Fonts panel, font smoothing, sheet and drawer windows, Cocoa status item, and support for Dropbox. Includes a Yosemite Share extension and a Spotlight importer.

Import and export: URL Manager Pro can import from and export to Safari, Firefox, Chrome, OmniWeb, Camino, iCab, Mozilla, and Netscape. It lets you 'harvest' bookmarks by importing XML (XBEL), HTML, and text files of bookmarks and URLs. Export the entire database, or any one folder, to HTML or text format.

JohnBooty 2 days ago 0 replies      
Hells to the yes.

To take things a step further, I'm not entirely sure how I'd function without them.

(I'm sure I'd find a way, but it would be an adjustment and a loss)

Firefox's fuzzy searching in the URL bar makes bookmarks awesome. My "workflow":

1. Bookmark anything I might need later by clicking the bookmark button. It presents a little tooltip-like popup that lets me edit the title and tags if I want to.

2. Sometimes I edit the title/tags and sometimes I don't, based on a quick judgement of whether the default will allow me to find the article later. Suppose the article title is "MySQL Adds Froitz-Based Blammo Filtering." Well, that should suffice. But if the title is merely "10 Awesome New MySQL Features" then I might want to edit the title/tags to mention something about "Froitz-Based Blammo Filtering" if that's what I'm interested in. [1]

3. Then I usually never use the bookmark ever again.

4. BUT, sometimes I do. And Firefox's fuzzy match implementation lets me type "mysql froitz" and get a match on this bookmark 100% of the time. Chrome's matching is stupider & I'm not sure about Safari. Safari makes adding bookmarks less convenient than FF or Chrome so I assume finding them is harder. (Maybe it's not, I don't know)

I don't know about Firefox's bookmarking performance characteristics. But I know that I've been adding lots of bookmarks forever, and it "just works" and feels instant. The fact that I've never had to think about it beyond that point is a compliment of the highest order. That's one of the many reasons why I remain a dedicated Firefox supporter.


[1] This is just a theoretical example, of course. MySQL does not actually receive new features, awesome or otherwise.

interfixus 2 days ago 1 reply      
Of course I do. Some of them are neatly stacked in labeled folders, some just higgledy-piggledy in the great unsorted. I have my bookmark history on hand going back to before the turn of the century. A lot of those links have died, obviously, but it's a neat historical record of my foci, foibles, and obsessions over the years.

My data belongs either offline or on server space I control myself. There's nothing especially secret about it, but like my email (going back more than twenty years), I wouldn't dream of storing data like that online outside my own control.

The bookmarking, by the way, used to take place in Firefox. The ongoing self-immolation of that once-mighty browser recently sent me to the Pale Moon camp, and it's like coming home. I couldn't be happier, running various Linuxes on the household machinery. The Chrome/Chromium world hegemony is one of those sad, scary things I shall never understand.

sleavey 3 days ago 3 replies      
I use the bookmark toolbar in Firefox, but I delete the text and leave the favicons so that I can fit ~50 bookmarks in one row. I also have folders containing bookmarks for particular categories, like "Work", "Stuff to watch", etc.
hueller 3 days ago 1 reply      
I use pinboard.

As for native bookmarks, I don't like that browsers have kind of black-boxed them and require individual proprietary cloud sync for these things (I realize Firefox has a self-hosted option, but it's kind of outdated, and last I checked the documentation was spotty. Even then, it's Firefox-only).

I know there's also the Netscape Bookmark Format, which is kind of sketchy, but at least it's something. I tried writing something that exported on close (I'd sync the files myself) and imported on open, but it was pretty hacky (edit: also, browsers' exports are often quite different, so there was some normalization involved that was fragile). There should be a way to set up an endpoint that natively syncs this stuff with an open protocol, so that all your bookmarks on all clients look the same. If you don't like that service, export someplace else and change your endpoint. Browsers should just be boxes for structured content.
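The normalization step the comment calls fragile can at least be made explicit. A sketch of the idea, assuming only the lowest common denominator of the Netscape Bookmark Format — the `<A HREF="...">` entries every major browser's export contains. Per-browser attributes and folder nesting vary, which is exactly the fragility being described:

```python
import json
import re

# Every browser's Netscape-format export wraps each bookmark
# in an anchor like <A HREF="...">title</A>, with varying extra attributes.
ANCHOR = re.compile(r'<A[^>]*HREF="([^"]*)"[^>]*>([^<]*)</A>', re.IGNORECASE)

def normalize_export(export_html):
    """Flatten a bookmarks export into canonical, sorted JSON so exports
    from different browsers can be diffed and merged deterministically."""
    entries = [
        {"url": url, "title": title.strip()}
        for url, title in ANCHOR.findall(export_html)
    ]
    entries.sort(key=lambda e: (e["url"], e["title"]))
    return json.dumps(entries, indent=2, sort_keys=True)
```

This deliberately drops folder structure and timestamps — the parts that differ most between browsers — keeping only what round-trips cleanly.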

rmason 2 days ago 0 replies      
I have thousands of bookmarks. One thing I've wanted Google to do for the longest time since I started using their browser was to let me limit searches to my own bookmarks.

I've got a fair degree of organization with folders and sub-folders, but I still spend way too much time trying to locate a specific bookmark. I've learned to edit the title, because often you're bookmarking something called 'home' or a cryptic GitHub path.
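Until the browser grows that feature, a one-time export plus a few lines of scripting gets close. A rough sketch, assuming a Netscape-format bookmarks.html export from Chrome's bookmark manager; searching both titles and URLs helps precisely with those bookmarks titled 'home':

```python
import re

# Match the <A HREF="...">title</A> entries in a Netscape-format export.
ANCHOR = re.compile(r'<A[^>]*HREF="([^"]*)"[^>]*>([^<]*)</A>', re.IGNORECASE)

def search_bookmarks(export_html, query):
    """Case-insensitive substring search over the titles and URLs in an
    exported bookmarks file; returns matching (title, url) pairs."""
    q = query.lower()
    return [
        (title.strip(), url)
        for url, title in ANCHOR.findall(export_html)
        if q in title.lower() or q in url.lower()
    ]
```

It's no replacement for the browser doing this natively, but combined with a weekly export it scratches the "search only my bookmarks" itch.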

JanecekPetr 2 days ago 1 reply      
In addition to what everyone has said already, I have two other uses:

1) I have a set of bookmarks specialized for search. Chrome can do this without bookmarks, but Firefox needs them. I'm talking about bookmarks like this: https://www.google.com/search?q=%s&tbm=isch

Note the %s in the middle; that's where queries go. When you save this as a bookmark and add a keyword to it ("gg" in my case), you can then search images on Google like this:

- Alt-d (jump to url bar)

- gg fluffy kittens

I have a few dozen of these: Google, Google Images, Google Translate, Google Maps, local maps, English Wikipedia, Czech Wikipedia, various dictionaries, whois, Wolfram Alpha, grammar check, YouTube, Maven search... you get the idea.

2) A huge curated collection of bookmarks to Java libraries. Something similar to all those awesome-java collections that are lately popping up, but more complete, in my browser, indexed for search and neatly grouped into like a hundred folders.
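The %s mechanism in point 1) is easy to demonstrate outside the browser. A small Python sketch of what Firefox effectively does when a keyword bookmark fires — the keyword table here is a made-up example, not the commenter's actual set:

```python
from urllib.parse import quote_plus

# Hypothetical keyword -> URL-template table, mirroring Firefox keyword bookmarks.
KEYWORDS = {
    "gg": "https://www.google.com/search?q=%s&tbm=isch",  # Google image search
    "w":  "https://en.wikipedia.org/wiki/Special:Search?search=%s",
}

def expand(address_bar_input):
    """Replace %s in the bookmark's URL with the URL-encoded query,
    the way Firefox expands a keyword bookmark typed in the address bar."""
    keyword, _, query = address_bar_input.partition(" ")
    template = KEYWORDS.get(keyword)
    if template is None:
        return address_bar_input  # no keyword: treat input as a normal URL/search
    return template.replace("%s", quote_plus(query))
```

So typing "gg fluffy kittens" expands to a Google Images search URL with the query encoded, and anything without a registered keyword passes through unchanged.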

abhinickz 19 hours ago 0 replies      
I use Chrome's new "Bookmark Manager" extension: https://chrome.google.com/webstore/detail/bookmark-manager/g...

You can access, search, import, and export bookmarks from the "chrome://bookmarks/" URL, and pressing Ctrl+D gives you the option to save to a folder instantly.

Sometimes when I don't remember a bookmark's folder location (for example, https://news.ycombinator.com), I simply type something like 'news' or 'ycom' and Chrome shows me some predictions, which are a combination of bookmarks, Google searches, and history, with different icons or text.

Currently, typing 'ycom' shows me five options:
1. https://news.ycombinator.com/ with a star icon.
2. https://news.ycombinator.com/news?p=2 with a history icon.
3. ycombinator (Google search).
4-5. More history links.

And if I don't remember anything, I just type some related words into Google to find the link!

I also use the Google Keep extension (https://chrome.google.com/webstore/detail/google-keep-chrome...) to organize bookmarks easily with labels and colors.

aurelian15 3 days ago 0 replies      
I configured my web browser to clear my browsing history whenever I close it, and I mainly use bookmarks for fast auto-completion when typing in the address bar. With respect to organisation, I generally don't bother. I just use the "star" button to mark websites as favourites, and I synchronise bookmarks across devices using Firefox Sync.
theknarf 2 days ago 0 replies      
Bookmarks are where links go to die. So yes, I do "use" bookmarks, but I never revisit them. What I do instead is either keep the tabs open or save the links as notes in a note-taking app. I feel the note-taking app makes it easier to organize stuff into "projects", as that's how I usually work.
pmoriarty 3 days ago 1 reply      
I have thousands of bookmarks, and gave up putting them in to folders years ago. Now I just tag them with every relevant keyword that I can think of when I make the bookmark, and search them that way.

Firefox's bookmark manager is very primitive, though, and I've long been meaning to migrate my bookmarks over to org-mode in emacs, where I have much more powerful searching, metadata, editing, linking, commenting, restructuring, and navigating options.

double051 3 days ago 1 reply      
Definitely! I still keep the bookmarks bar visible on Chrome and Firefox to have quick access to my favorite and most visited pages. All of the links have abbreviated names to fit more on the bar. #1 is Hacker News, of course.

I also still 'star' interesting links and categorize them into folders. Very handy to have Chrome sync the bookmarks across all of my machines.

ravenstine 3 days ago 2 replies      
I do but only in the sense that I use it as a sort of bucket that I throw things in and almost never look at again. Basically, no.
chamakits 3 days ago 0 replies      
For my personal use? No.

For work, absolutely. I have a couple of top-level directories on my bookmarks bar:

  KeyLinks
  InterestingTech
  PrevWork
  CurrWork-<2-3 words describing topic of work>-<Date started>

Under KeyLinks I keep, well... key links: the wiki entry on how to set up dev environments, the holiday calendar, the Jira dashboard showing my team's sprints, the company roadmap, etc. Pretty much just links that I'll have to refer to periodically.

Under InterestingTech are articles or things of interest I stumble upon day to day but don't have time to look into right now... This one is honestly a bit of a bottomless pit at this point...

Under CurrWork-* I keep all the links related to the work I'm doing right now: wiki entries, StackOverflow links I used to fix something, Jira tickets, Jenkins jobs, internal web-app links, code review links, etc. You name it. If it's in any way related to my current work and it's a site, it's there.

And when I'm done with the current 'CurrWork-*', I remove the leading 'CurrWork-' and move it to the bottom of PrevWork.

I have an awful memory, but this, in combination with an emacs org-mode file for each 'CurrWork' iteration, lets me refer back to things I've worked on in the past when people ask. After they give me a minute or two to get my bearings, of course.

lucb1e 3 days ago 0 replies      
I do occasionally, though usually Firefox's "awesomebar" will get me there anyway, so there is not often a need.

My girlfriend does make extensive use of it for all sorts of things.

I think my mom uses it as well. My brother and dad I'm not sure about. Not sure what that says for a confidence interval, but many people still do. Then again, I'm sure there must be clusters of people (when clustering by who knows who) that never learned it's there, or who choose not to use it.

aerovistae 2 days ago 0 replies      
My chrome bookmarks are one of the 3 pillars of my cloud identity, along with my gmail and my dropbox. You could just say my google account and my dropbox.

I have hundreds of bookmarks, covering dozens of categories of research and reading. One of the largest subcategories includes hundreds of references that I may or may not need for future projects, including software (stackoverflow questions, tutorials, bug solutions, framework and API references, optimization articles, in-depth guide articles, and so on), woodworking, economic/governmental/civic/legal research, fitness, electrical engineering and general circuitry/wiring, real estate, recipes, piano repair, audio production, and so on. These are all intended to be kept until needed, most likely indefinitely.

Then there's a category for more temporary things that I need in the moment and am unlikely to need again, including news/blog articles I haven't gotten around to reading yet, solutions for bugs that I need to fix, torrents I haven't gotten around to downloading, and collections of references for small, specific projects that I won't need again afterwards.

So basically I use Chrome bookmarks as my personal address book for "things on the vast internet which I wish to return to eventually."

For major things I use daily, like YouTube or Gmail or Facebook, I don't bother bookmarking them; for those I just use the address bar's semi-intelligent autosuggest: Ctrl+L to go to the address bar, then I just type "g" and hit enter, or "y" and hit enter, or "f", etc. The only websites I need to type out beyond 2 letters are twitter/twitch.

I guess this may sound odd, but Chrome has begun to feel like a natural part of my mind. The bookmarks and my gmail are an extension of my memory. My interactions with the net are an extension of my thought processes. I have seen other people make similar remarks about their phones.

slowkow 2 days ago 1 reply      
I use Diigo. The free version lets you cache the page, annotate it with highlights, and tag your bookmarks. The extension for Chrome works very well. They also launched some PDF annotation features, but I haven't tried those.


skraelingjar 2 days ago 0 replies      
I still use bookmarks, but rarely go to them.

All of my bookmarks are resources, something for me to read or use at a later time. Some are for things I want to learn, some are for things I knew but have lost to time, and others are just.. out there. Like this: https://apps.fcc.gov/oetcf/eas/reports/GenericSearch.cfm I have no idea why I bookmarked that (or when).

Another example: this week I decided to learn Rust. I was listening to a podcast where the host mentioned rustbyexample.com, and when I visited the site I realized past me had bookmarked it, figuring I would decide to learn Rust at some point and it would be a nice resource.

Maybe something that would look at my recent history and say, "Hey, X has been in your bookmarks for months and it's related to all the Y pages you've been visiting."

None of them are organized, I'd pay for something to automatically organize them.

aswerty 2 days ago 0 replies      
I'm a big fan of bookmarking, but I found the browser features didn't fit my needs all that well. So I built my own browser extension, which I really like. It hasn't taken off at all, and development has kind of stopped for the time being (other work has put it on the back burner), so it's still only available on Chrome.

Using it, I hit Ctrl+M (the shortcut to open it) and then have my top 20 sites key-bound. So HN is Ctrl+M -> h. All my other bookmarks can be accessed via a search feature, which you can Tab or "/" to on opening the extension. I hate lists/folders, so my bookmarks are all hidden away behind the search function. The extension is built for either the mouse or the keyboard, so I have a lot of flexibility in how I interact with it.

The site for it is: www.devmark.io

juki 2 days ago 0 replies      
I use Emacs / Org Mode for bookmarks. I use a few different browser profiles, and almost always want to open a given bookmark in a specific profile. Having all bookmarks in one place is much easier than figuring out which browser I need and then finding the bookmark in it. Plus this way I can use other Org Mode features with them too (adding arbitrary notes/tasks to them, todo keywords for a reading list, refiling/archiving, etc.)

Basically I just add the properties URL and BROWSER to any entry I want to be a bookmark. I have the numpad key 4 bound to a command that opens the url (works either in the org file buffer itself, or from an agenda view). I also have the numpad key 1 bound to an agenda search for the tag :bm: (searching for a property is too slow), so I can easily get a list of all bookmarks, which can then be filtered by tags, category, top-level headline and regexes.

j0e1 3 days ago 0 replies      
Oh yes! I use them in FF and have organized them in folders with tags for hassle-free retrieval across devices.

I find them extremely useful for tutorials/learning new stuff which I know I need/want to learn but just don't have the time at the moment. Whether or not I end up coming back to them is a discussion for another day ;)

Most of my bookmarks are via HN.

vbezhenar 2 days ago 0 replies      
First of all, I use the Reading List, which is a kind of bookmark in Safari. If there's an interesting webpage but I don't have the time or mood to read it: click and close. If I've answered something and want to check it later: click and close. Once a week I breeze through them and delete, so they won't stockpile into a mountain.

Second is Favourites (like the bookmarks bar), which I can access from a blank page. I save webpages that I visit often: news, important forums, etc. Also webpages that I'm currently using for work (e.g. the Postgres documentation, if I'm working with it right now).

The rest is a by-topic list of webpages that I might use later. I don't use it that often, but sometimes it comes in handy.

tempestn 3 days ago 0 replies      
I use browser bookmarks frequently, but have few of them. I tend to use them primarily for utilities and other sites that I visit somewhat regularly. Basically the two main cases are 1) sites I visit so often that it's handy to get there with a single click as opposed to a couple of characters in the address bar, and 2) sites with strange urls and/or ones that I access repeatedly but infrequently, so might not remember where to find.

For anything I want to remember for later, or keep as reference material, I clip it to Evernote instead. I find that much more useful, as when you're looking for a piece of reference material you're more likely to remember some keywords from it than its title or where you filed the bookmark. It also means you can easily reference it even if the page goes missing or changes in the future.

alexdumitru 3 days ago 0 replies      
I do, but I've always found them pretty hard to use, because I forget what exactly I bookmarked and in what folder.
navs 2 days ago 1 reply      
Personally, I've stopped bookmarking everything I find somewhat interesting. Now if I do find something and it will be used in the next week/month, it's often part of an existing project or idea, and so it gets thrown in a text file that's versioned.

I started doing this after accumulating a huge index of bookmarks spread across saved.io, Evernote, Google Bookmarks, iCloud, Firefox, Opera, txt files, Google Spaces and the other dozen or so bookmarking/collaborative knowledge sharing platforms showcased on Product Hunt.

I'm surprised there's no digital equivalent to the Hoarders TV show. I suppose thousands of bookmarks are less impressive than a garage full of old newspapers and rats.

guilhas 15 hours ago 0 replies      

= Or =

Zim wiki with copy-pasted URLs, via "Copy URL + Title" (Chrome) or "Multiple tab-handler" (Firefox).

unholiness 1 day ago 0 replies      
I'm surprised no one's mentioned the chrome feature that's mostly replaced bookmarks for me.

Just like aliasing commands in the terminal, you can alias web pages in Chrome's address bar. So when I type "je" in my omnibox, it has an autocomplete option "Jenkins", and pressing enter takes me to the URL I set for the Jenkins home page.

This feature is poorly named ("search engines"), and yes, it is extensible to putting extra strings at the end of that URL (which could be registered as a search term within that site), but I've been using it for years, and 99% of my use is simply mapping arbitrary strings to arbitrary URLs. It works amazingly well for that. No mouse movement or digging into bookmark folders required.


nevatiaritika 2 days ago 0 replies      
I use the bookmarks bar to neatly organize my frequent links, and it kind of triggers my OCD when a link is misplaced in the wrong folder. Of course, some folders I then never visit again, but some I visit very, very frequently.

Also, given my habit of reading/skimming articles and often hopping from one URL to another, I use OneTab. It's super efficient for collecting links on one page: https://chrome.google.com/webstore/detail/onetab/chphlpgkkbo... On the negative side, my work PC has over 800 URLs saved and my home PC about 1500+.

AceyMan 3 days ago 2 replies      
I treat URLs like any other document: I click+drag the favicon off the address bar and drop it in the target folder in explorer (file browser in Windows), which creates a dot-url shortcut.

Why keep resources in a unique silo? You wouldn't keep all your PDF/Word/RTF/&c. files in an "<ext> manager app", so why do URLs have to be kept in one?

Also, this way they all get backed up since I keep all work docs on my network drive.

I'm surprised no one else follows this pattern; I've never seen anyone else use it, nor have I won over any converts via its sheer awesome factor <shrug>.

FYI, works in FF and Chrome, but not Opera. (Bummer, because I like Opera generally, and it's my default Android browser.)

zengid 3 days ago 0 replies      
I used to, but now I dump everything into Pocket. I would only say it's useful because it satisfies my need to hoard interesting information.
cyberferret 2 days ago 1 reply      
All the time in Chrome. I have a fairly rigid structure in my Bookmarks folders, where I categorise all my hobby and professional interests. I like that it is synchronised across all my devices too.

I used to use Pocket a lot to do similar things, but categorising, and browsing the saved links was a little too cumbersome.

Plus I like that I can search just within my plethora of bookmarks if I want to reference something I know I saved a year ago. [0]

[0] - https://www.lifehacker.com.au/2015/02/quickly-search-just-ch...

Mikhail_Edoshin 2 days ago 0 replies      
Yes, I use them a lot, but find the organizing tools pretty lacking. E.g. I program in Python and keep a bookmark for many Python modules in alphabetic subfolders so that I can quickly jump to the docs. It's rather boring to maintain this setup. I also dance tango and I'd love to bookmark many Youtube videos but here the tools are really primitive: there's lots of ways to organize this (by type of video, dance style, by principal figures, by dancers -- sometimes more than one couple -- by music maybe) and no easy way to do anything other than a silo of "all things tango".
jdbernard 3 days ago 0 replies      
Absolutely. I trust Firefox Sync more than third-party services. I have a fairly comprehensive hierarchical structure. I only bookmark things I really care about, but even so I have hundreds of bookmarks in tens of folders. It's useful because these sites show up immediately in the address bar as I start typing. API docs, for example: I just start typing the library/API name and the address bar autocompletes to the doc index, b/c that's what I've bookmarked. One step shorter than bouncing through Google.

I also bookmark articles that I think I'll reference in the future, or that support or contradict something I believe strongly.

m-p-3 3 days ago 1 reply      
I still do, but I find Google Chrome bookmarking system to be a bit too simplistic.

I mean, Google is usually strong on that front, with labels in Gmail and Keep, but for some reason they never implemented them for bookmarks. Labels would make more sense than folders IMO.

kk_cz 17 hours ago 0 replies      
Yes, but only via the bookmark toolbar. If you create folders there, each acts like a pull-down menu that is always on top of your browser, and adding and categorizing a new bookmark is as easy as dragging the site's URL into the matching folder. I haven't seen the "regular" bookmark manager or "add new bookmark" dialog in ages.

My 5-10 most-used links are simple one-click buttons; the rest are sorted into folders.

Most of these aren't content websites, but rather different webapps, or links into webapps (like a direct link to an intranet form that gets used maybe 2-3x per month).

ernsheong 3 days ago 0 replies      
I am currently developing PageDash, a personal web archive app. The key difference is that I want to preserve the page exactly as I saw it, with the help of a browser extension.

The reason is that I love saving pages I encounter. I use Evernote Web Clipper a lot, but it frequently fails to keep the styling intact. Secondly, a lot of archivers can't save pages behind an authentication layer.

To be notified when it launches, let me know here! https://goo.gl/forms/X1IBqaA03kekR2Db2

abrkn 2 days ago 0 replies      
I have hundreds of them in Chrome, ported over from all kinds of browsers and services over the years. I never use them. They just sit in the bookmarks toolbar and annoy me.
khedoros1 3 days ago 0 replies      
Yes. Anything essential goes in the bookmark toolbar (mostly thinking about internal sites at work). I save a number of keyword searches (like "yt" for youtube, "wp" for Wikipedia).

For personal machines, I've got about 5 machine+OS combinations, with 2-3 browsers on each that I use for various things. I chose not to set up sync accounts in any of the browsers (I've already got too many damn accounts to manage, thank you!). So I sometimes save a bookmark if I'm in the middle of a long series of pages about something, as a sometimes-completely-literal "bookmark".

Myrmornis 3 days ago 0 replies      
No, but I use Pocket. I put links to technical stuff in there that I intend to read, but I rarely look at it. Maybe I'll start remembering to after this thread. I noticed a while back that Pocket allows you to dump them in a text format; I was intending to do that and store them in a git repo or my gdrive, so that I'd be more confident I'd have them for the rest of my life. I don't really know where Chrome's bookmarks are kept, which makes me less inclined to use them, but that's almost certainly lazy/ignorant of me.
stevewillows 3 days ago 0 replies      
In my Bookmarks bar I have 'General', which breaks down into about fifteen categories. Each of those is broken down into several folders; it's very organized. I go through it once a year or so to clear out links I'm certain I won't need in the future (usually project ideas).

I use Bookmark Box to sync with other browsers by way of Dropbox. It's not perfect, but it works.

For the rest of the Bookmarks bar I have my most common links -- a few spreadsheets (in Drive), some web apps, and a folder for forums I frequent. I also have a bookmarklet for Pepperplate, which I use on a regular basis.

rwanyoike 1 day ago 0 replies      
Yes I do, I use bookmarks to save time. I have them organized in different folders, with a few for "temporary" bookmarks that I clear out regularly. I try to limit my bookmarks to landing pages, online tools and references - stuff I know I'll revisit, while I send article/news bookmarks to Pocket or Feedly (RSS).

A problem comes up when searching for bookmarks that don't have keywords in their titles. I use a WebExtension [0] to update my bookmarks with website descriptions, increasing the odds of finding them.

[0]: https://github.com/rwanyoike/bookmark-refresh

dorfsmay 2 days ago 0 replies      
I use wikis, because links only make sense in a given context. At work I add noteworthy links to the local wiki, and for personal use I keep a series of text files of notes on particular subjects, where I add links.

I do, however, use bookmarks on my laptop to point to locally installed documentation, such as the full Python library docs, so that I can access them when offline (e.g. on a train).

mud_dauber 2 days ago 0 replies      
My bookmarks bar has my top-30 list (mail, feedly, HN, ...) I tried a folder system but found the amount of overhead to be WAY too cumbersome.

I capture stuff to read in Pocket. If I eventually find the link to be valuable (news: almost never; how-tos: much more often), I move it into a Google Keep "PostIt".

The value-add is that I can add pics, notes, links to Dropbox docs, etc in the same PostIt, and organize them as I see fit.

mrmondo 1 day ago 0 replies      
More heavily than ever. I have every bookmark in my bookmarks toolbar, all in folders such as 'home' and 'work' for local URLs, 'checkout' for things I've found but not researched, 'git' for URLs to my GitLab/GitHub projects, etc. Pretty much every fancy bookmark management service or replacement has massively disappointed me: overly complicated, or requires running background services (like Xmarks). The only reliable one is the built-in Firefox Sync; I then use a plugin that exports bookmarks to HTML on quit, into a directory inside my Dropbox.
taude 2 days ago 0 replies      
Installing the Quick Tabs [1] Google Chrome plugin has completely changed my use of browser-based Bookmarks: with cmd-e an intelligent search box pops up giving me instant access to my history or bookmarks folder.

[1] https://github.com/babyman/quick-tabs-chrome-extension

chrsstrm 3 days ago 0 replies      
Literally thousands...

Organized in ~150 folders all with subfolders. Ditched the bookmark services when Chrome started syncing data across devices. There are three features I would love:

 1. The search box in the manager doing a full-text search of the content on the bookmarked page (as it was when bookmarked, not updated), instead of just the title.
 2. The ability to search by URL with regex.
 3. Showing the date I bookmarked the page.
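The regex-over-URLs search on that wish list can be approximated outside the browser: Chrome keeps its bookmarks in a JSON file named "Bookmarks" in the profile directory (path varies by OS). Here is a rough sketch over a small synthetic snippet of that format; the sample entries are invented, and a real file has more fields:

```python
import json, re

# Synthetic stand-in for Chrome's "Bookmarks" JSON file.
sample = json.loads("""
{"roots": {"bookmark_bar": {"children": [
    {"type": "url", "name": "HN", "url": "https://news.ycombinator.com/"},
    {"type": "url", "name": "Py docs", "url": "https://docs.python.org/3/"},
    {"type": "folder", "name": "Work", "children": [
        {"type": "url", "name": "CI", "url": "https://ci.example.com/builds"}
    ]}
]}}}
""")

def search(node, pattern):
    """Recursively yield (name, url) entries whose URL matches the regex."""
    if node.get("type") == "url" and re.search(pattern, node["url"]):
        yield node["name"], node["url"]
    for child in node.get("children", []):
        yield from search(child, pattern)

for root in sample["roots"].values():
    for name, url in search(root, r"example\.com"):
        print(name, url)  # prints: CI https://ci.example.com/builds
```

The same traversal would work on a real profile by loading the actual Bookmarks file instead of the sample string.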

richardw 2 days ago 0 replies      
Yes, lots, in Chrome. I have folders directly in the shortcuts bar, with e.g. "Money", "News", "Proj" and current projects usually have their own folders. One I use a lot is "Topics", which has many subfolders for e.g. "Analytics". I use the Other Bookmarks list for things I use regularly but not often (e.g. once a month).

I definitely would like some improvements. My "Topics" folder is huge and I don't really need it loaded each time the browser loads. Just save it in the topic and let me find it later. Also, if Chrome has my shortcuts, why doesn't Google highlight those in search results? And maybe auto-link the saved shortcuts to the terms I used when finding them in the first place. There's a lot of meta data in that action - search-search-search, save. Google knows quite a bit of my thought process (via keywords and sequence), so use that.

hkjayakumar 3 days ago 0 replies      
Yes, I do. As a university student, it's really useful to be able to view different course webpages, schedules, important dates, etc - all links that I would access frequently (almost every day)

Apart from that, I also use browser bookmarks for links I want to (or have to) view in the near future. It acts as a constant reminder since it's always visually present.

I use Pocket for articles/links I can afford to view during my free time.

jbmorgado 2 days ago 0 replies      
I do bookmark them, but I end up almost never using them.

The only thing that actually kind of works for me is to bookmark stuff in "sessions" and then open all the tabs the next time I want to work on something. For instance, I was trying something very specific involving deep learning at my job; then I had to do some actual work to prepare an article, and I put that DL project aside. So I made a bookmark folder with all the open tabs and closed the window. When I got back to that DL experiment, I opened all the tabs again, and that kind of worked for me. But this is not really a reference system, it's just a "sessions" system.

As for the traditional role of bookmarks, I don't think they will ever work for me without a single main thing: Full text search.

Every few months I try to clean up the mess my bookmarks have become, since I can't find what I need, and a few months later everything is a mess again.

The tag or folder system simply doesn't work for me. I keep too much stuff to check later as ideas, and then I can't really find it, because I have this "check later" folder with dozens of separate ideas and no way to find that one idea I had.

The solution seems to be some kind of full-text search, where I can describe in a fuzzy way what I was doing, something like "python, maps, names, germany", and get back to that post I remember where they were doing some analysis of the last names of people in Germany in different regions, whose name I, of course, don't remember anymore.

I reckon it's a very specific problem that only makes sense for people who think the same way I do, but I'm also quite sure there are a lot of us like that, and that this is the kind of solution that would at least help us use the bookmark system a bit more.
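That fuzzy lookup can be sketched as a toy keyword-overlap search over saved page text. Everything here (the page texts, URLs, and scoring) is invented for illustration; a real tool would index the actual content of each bookmarked page:

```python
# Toy full-text bookmark search: rank saved pages by how many query
# words appear in their (pre-fetched) text. Sample data is made up.
pages = {
    "https://example.com/german-surnames":
        "analysis of last names of people in germany by region, with python maps",
    "https://example.com/pasta":
        "a recipe for carbonara",
}

def search(query):
    """Return URLs ranked by the number of matching query words."""
    words = set(query.lower().split())
    scored = []
    for url, text in pages.items():
        hits = sum(1 for w in words if w in text.lower())
        if hits:
            scored.append((hits, url))
    return [url for hits, url in sorted(scored, reverse=True)]

print(search("python maps names germany"))
# prints ['https://example.com/german-surnames']
```

Even this crude word-overlap scoring recovers the page from a vague memory of its topic, which is exactly the case folder hierarchies fail on.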

mspaulding06 2 days ago 0 replies      
Currently I use a variety of techniques for managing content I would like to revisit on the web. I use bookmarks mostly for often-visited websites, and I always use syncing if possible with Chrome and other browsers that support it (Brave does now!). I've also discovered some browser plugins that really help with this. OneTab is absolutely indispensable: it stores all of your currently open tabs so that you don't have to keep them open. That's great when I've got several tabs open on a single subject that I want to come back to later. I've also started to use Pocket for most blog posts and random things that I want to read some time in the future but can't right now. The nice thing is that it is accessible from all my devices, so I can put links into Pocket from my phone and then go to them from my desktop computer.
sacado2 2 days ago 0 replies      
Yes, but only for

- tabs I haven't read yet, when I need to restart my browser for some reason and want to be sure the tabs won't be lost; in this case, those bookmarks are disposed of as soon as the browser has restarted

- content I'm pretty sure I'll want to come back to before long

I only use the bookmark bar, so I have to limit what I save. When it gets too big, I clean it up.

AJRF 2 days ago 0 replies      
I do for sure. I do this thing where I save a bookmark without its title, so it just shows a little favicon on my bookmark bar, which is very nice and clean.

I also have folders for Work, Blogs, and one for improving myself as a developer. I love browser bookmarks; I'm not exactly a power user, but I would miss them very much if they were taken away.

vojant 2 days ago 0 replies      
Not anymore.

I just google everything when I need to find something. In the past I used bookmarks to track blogs I follow, but these days there is too much content, so I just google/HN-search when I need something. I tried going back to bookmarking stuff to save for later, but I never got around to revisiting it.

superasn 3 days ago 0 replies      
I do, especially because Chrome syncs them everywhere, including my mobile phone, laptop, and desktop.

It's also useful to bookmark in the browser because the address bar gives priority to your bookmarks over auto-complete and history, so it's much easier to access those sites too.

P.S. I organize them by folder, so it's most likely design -> landing pages -> dark -> bookmark or personal -> finance -> bookmark, etc.

vorg 2 days ago 0 replies      
I use the bookmark ribbon in Chrome as a "to visit soon, or return to" list. Stuff I would normally look through the history for.

My most desired feature in Chrome is being able to right-click a link and add it to my bookmarks. Presently, I have to open the link in a new tab/window (using right-click, then T or W) then go to the tab/window, click on the bookmark star, then close the page (i.e. before it finishes loading). If I want to avoid loading a page I don't want to look at right now, I'll right-click on the link, then E to copy the link to the clipboard, then go to a new tab, bookmark it, right-click on the new blank bookmark link, then E to open the editor dialog, type in some suitable title, tab to the address text box, paste in the URL, then click Close. Either way, it just isn't simple.

tomfitz 2 days ago 0 replies      

I use Google Keep to store URLs, typically with some note, for example:

* "Specialized Sirrus bike rear derailleur. Model number: DO20. URL: https://www.amazon.co.uk/dp/B0047D192E/ "

* "2015-03-01: Visited doctor. They referred me to physio, and told me to read http://www.arthritisresearchuk.org/arthritis-information/con... for exercises/stretches to relieve pain."

Google Keep supports tagging and search, so I can usually find things. For things I want to read later, I either put it in Pocket or use Google Keeps' reminder functionality.

Chrome integration looks decent (save web pages as an image), but Firefox integration is lacking.

gkya 2 days ago 0 replies      
I use them quite a bit. They are the only completion source I allow for firefox, so when I type something other than a URL on the URL/search bar, I either hit the down arrow and select a matching bookmark, or hit enter and run a search.

Structurally my bookmarks are an ever-growing list; they all go into the bookmarks menu in Firefox. I occasionally tag them too. Most bookmarks are part of my "online library": I keep them in case I ever want to send a link to something I liked to someone, use them in an article, or maybe read them again. I have a separate read-it-later list in an Org-mode file.

Some of the bookmarks are shortcuts, mostly to different dictionaries on WordReference, to the Collins English dictionary, and to various websites I browse often, like Reddit, HN, my school's site, and my own website, which I check every so often when I upload something new.

iand 3 days ago 0 replies      
Yes. I use the bookmark toolbars in ff and chrome with icons and no text for common pages (like this http://imgur.com/a/uZBB8).

My only other use is for groups of pages that I'm referring to or want to come back to as part of a project. I usually delete them after a few weeks.

For long term bookmarks I use pinboard.in

susam 2 days ago 0 replies      
I don't use browser bookmarks.

I save my bookmarks in a text file, commit it and push it to a remote Git repo. I have this Git repo cloned on every system I use. Since the Vim editor is part of my daily workflow, visiting one of the URLs in the text file is a simple matter of pressing `gx` while the cursor is on a URL.

This is useful to me because I have this repo cloned on every system I use for various reasons, e.g. it contains my daily notes, productivity scripts, etc. So it makes sense to keep all my bookmarks also in this repo. Also having the bookmarks in a text file provides me the flexibility to add arbitrary notes/comments for each URL I save. The fact that I don't have to use the mouse and I can use Vim search or motion commands to find a bookmark is a bonus.
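A minimal sketch of this plain-text workflow (the filename and the inline-comment annotation style are assumptions, not the commenter's actual layout):

```shell
# Append bookmarks as annotated lines in a plain text file that lives
# in a Git repo cloned on every machine.
echo 'https://news.ycombinator.com/  # tech news, check daily' >> bookmarks.txt
echo 'https://docs.python.org/3/     # stdlib reference'      >> bookmarks.txt

# Finding a bookmark is then just a text search:
grep -i 'python' bookmarks.txt
```

Committing and pushing the file (`git add bookmarks.txt && git commit && git push`) is what makes it available everywhere; inside Vim, `gx` on a URL line opens it in the browser, as described above.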

smonff 2 days ago 0 replies      
All my bookmark collections inside browsers always end up turning into a horrible stack of junk: I don't know how to get rid of the old stuff. You know something that interested you at some point won't be interesting later, but you never know...

With the intelligent address bars of modern browsers, you can search and find most of the recent stuff that you used, and sometimes even very old stuff.

I don't use bookmarks anymore, and I feel like the bookmark bar is most of the time a useless distraction.

If there is something I really want to keep, I post it to a public Shaarli [1] instance, where I force myself to use tags, a description, and an informative title.

[1] https://github.com/shaarli/Shaarli

Edit: removed markdown

mastax 3 days ago 0 replies      
Bookmarks manager from Chrome is quite good, I think: https://chrome.google.com/webstore/detail/bookmark-manager/g...
rdpollard 3 days ago 0 replies      
I use bookmarks to keep track of the hundreds of client-specific subdomains on a site I manage for work. I start typing the name of the client in Chrome's search bar and I've got instant access to the URL. I can't think of a better way to handle that (though I'm open to suggestions if you're using something better).
pritambarhate 2 days ago 0 replies      
I use bookmarks a lot and using Chrome I sync them on multiple machines. Yet, I find that bookmarks management is a neglected feature in Chrome. I have hierarchies of bookmarks, and while creating a new bookmark it's very hard to find the appropriate folder, especially when I remember the name of the folder vaguely.

If any Chrome Developer is listening:

It would be amazing if there were some form of autocomplete to specify the folder for the bookmark. Right now on Mac, finding the folder in the drop-down is very hard: to find a folder, you need to type fast. I almost never find the right folder if the folder name contains a space, because after the space it starts matching from the first letter of folder names again if you pause briefly before typing the next letter.

theonemind 3 days ago 1 reply      
I use Firefox. It's easy to bookmark things by clicking the star. I almost never pick them from menus, but you can limit awesomebar searches to bookmarks by typing "*", so I can find, say, all of the interesting GitHub projects I've ever bookmarked by typing "github *".
dpcan 3 days ago 0 replies      
Yes, and I sync them with Chrome on my phone.

The Bookmarks Bar really has the only ones I regularly use though. Wish it was 2 rows.

fela 2 days ago 0 replies      
I stopped using bookmarks after I realized I wasn't using them, thanks to a combination of:

1. Autocompletion: for any website I use regularly I just write a substring of the url or Title (Firefox does this especially well). This covers probably 70% of my browsing.

2. Google. This might take slightly longer when I want to find a specific article I read some time ago, but it still seems less effort than having to bother with bookmarks, in my experience: either you have a very long list of unsorted bookmarks, in which it's hard to search, or you have to spend time sorting them into sub-folders.

Now that I think of it, the following would be a very useful Google feature: +1 a URL so that it becomes much more likely to bubble to the top in future searches.

hellofunk 2 days ago 0 replies      
Unfortunately yes. And they are a mess. I have different bookmarks in Safari and Chrome, and on desktop and mobile. I have them synced between devices but the UI for navigating them is completely different and this doesn't help me so much.

I have so many Chrome bookmark folders that I don't know where anything is. The only way to find one is to just search in the Bookmark Manager. It sucks.

It also doesn't help that my preferred browser differs from device to device.

I hope you are asking this question because you want to do something about this State of Affairs. I would gladly enjoy a good service that solves this problem in some innovative way that my brain cannot come up with.

rvern 2 days ago 0 replies      
Smart bookmarks! Bookmarklets! RSS bookmarks! Awesomebar fuzzy matching! Along with bookmark keywords and bookmark syncing! Firefox's implementation of bookmarks is right next to Wikipedia and ad blockers among the crowning inventions of the World Wide Web.
pasbesoin 3 days ago 0 replies      
Years (and years) ago, there was PowerMarks by Kaylon. It was great. Cross-browser, pretty good automated, over-rideable indexing -- space-separated words/symbols, very quick to maintain, with fuzzy matching. Rapid, "instantaneous", incremental search against thousands of bookmarks.

It's gone, now, and I've never seen its equivalent.

These days, I use an extension that saves a local copy of the page. As others have mentioned: Linkrot.

But it's not nearly as quick or convenient to return to a page as it was in PowerMarks. Although, the extension I use does have search -- manually triggered, and thereupon taking some time to initially build the index.

But I end up saving more "read later" stuff in it, as opposed to just reference links. So it ends up being a bit noisier, and size means I end up with multiple stores having multiple indexes.

zmix 2 days ago 0 replies      
Absolutely! With Google you will always be searching for the needle in the haystack, but with bookmarks you're searching for the mouse in the haystack. And with the history set to "not expire", that mouse may even grow to the size of a dog.

I stopped categorizing my bookmarks into folders a long time ago, however. They just end up in a single folder. I do love tags, though, which I use for important stuff that I want to distinguish from other important stuff.

girishso 1 day ago 0 replies      
After accumulating thousands of bookmarks on different services, I decided to build http://tweetd.com. It indexes the links you tweet.

Edit: Although, I realized that just full-text searching through bookmarks won't pop the most relevant links to the top.

mtrycz 2 days ago 0 replies      
I have something very simple, A folder called FFR = For Future Reference, where I'll keep the most interesting stuff. Trusting Trust (and Overcoming Trusting Trust), Windows' NSA_KEY, and the like. Most are in the folder with no further hierarchy, but some are categorized into Security, DIY, UI/UX, Gift Ideas, etc.

I also have bookmarks at the root level for things that I will Definitely See Tomorrow, which I never erase, because hey, They could be important.

Since it's the weekend, have this extremely educational video about languages https://www.destroyallsoftware.com/talks/wat

joveian 2 days ago 0 replies      
I use bookmarks in two basic ways. One: I have Firefox customized to have two rows of header, and on the right half of the top row (which has tabs on the left) I have favicon-only bookmarks of sites I look at frequently (like hckrnews.com), so I can open them with one click. I have eighteen such bookmarks, plus a link to browser preferences. I also have seven folders of bookmarks, identified by just the folder icon, one or two characters of text, or a single emoji character. Three of these contain links to my favorite articles (Firefox is bad at scrolling in bookmark folders :( ). One has links I occasionally want to use but not often, and one is supposed to have things I want to go back and look at somewhat quickly, but not quickly enough to be worth a top-level link (I need to clean it out, though; I've collected too much that I'm not going to go back to). I'll sometimes create temporary folders about a particular topic.

I bookmark most pages I view as unsorted bookmarks (especially helpful for news sites that have essentially no way to ever find old articles) and then ones I am more interested in I add to another folder that I occasionally divide into smaller folders (to avoid needing to scroll) and put all of these smaller folders ordered chronologically in a folder to the right of the tabs. I usually search the bookmarks first when looking for something, but I don't tag and too often neither the title nor url contain the right keywords for me to find it.

I would really love a more unified bookmark/history system along the lines of Vivaldi's calendar history, but being able to create icons that will flag the current page (to be able to look through just the more interesting history) and other icons that would cause the current page to be saved to a particular folder as a bookmark. Then at most one click would reproduce my current system other than occasional reorganization. Since I can't predict in advance most of what I want to refer to again, I want it to take as little time as possible to bookmark things. I liked the star in Firefox better when it didn't pop up the folder selection unless you clicked it twice.

kijin 2 days ago 0 replies      
Yes, every day, as part of a two-tier system.

I use browser bookmarks for pages I visit every day, or for pages that I intend to view again in the near future. An icon on a toolbar right on top of the browser is much easier to access than a link stored in a third-party app or website.

Of course I could just keep all those pages open in background tabs all the time, but I don't like clutter. Having too many open tabs also consumes a nontrivial amount of CPU and RAM. Bookmarks are also safer in case the browser crashes and fails to restore all the open tabs.

I use Pinboard for pages that I might view again at some time in the future, for research or some other purpose. The archive feature is very useful for this.

snlacks 3 days ago 0 replies      
I use bookmarks, but rarely for clicking from the bar. Chrome and Edge promote bookmarked sites in the nav bar suggestions when I'm typing. I usually use descriptive names of the content so I can find stuff I liked or go to often by typing a couple letters.
hashhar 3 days ago 0 replies      
Absolutely yes. It serves two primary purposes for me:

1. Archival. If I like something and will need to refer to it/revisit it later (more than a month, say) I will bookmark it.

2. Frequently used pages sit neatly on my bookmarks bar so that I can get to the websites I want quickly just by glancing at their favicons.


I primarily organize in 3 levels.

Top level: This is where frequently used stuff goes. I have configured FF to only show favicons for these so they take little space. eg. HN, GitHub, Outlook, Reddit and Bugzilla.

Second level: This is where things go for archival. I have bookmark folders at the top level that represent a category. eg. Books, Movies, Tech, Coding. Each of those can be further categorised. An example is my Tech folder is broken up into Articles, Blogs, Podcasts, Material (projects, GH repos etc.).

The void: This is the final level of organisation, a catch-all folder called Sort-These-Out for all the stuff I'm too lazy to organise (or which isn't well defined right now, or things I'll get back to on another machine maybe (Linux vs Windows)). It currently has 13 bookmarks. Not bad.

PS: Did you know you can send tabs across Firefox instances on different machines by right clicking and hitting "Send Tab to Device"? The best thing ever.

EDIT: Forgot these two features.

1. Keyword search. Kind of like the bang query syntax from DuckDuckGo you can set up a keyword to search a single website by creating a bookmark. So I can go 'gh mycoolrepo' for searching on GitHub.

2. Tags. Firefox allows you to tag bookmarks. It helps me a lot when, for example, I want to find all bookmarks related to vim (but don't necessarily have vim in the page title). I'm working on an autotagger that integrates into Firefox to save me from having to tag them myself.

[1]: http://www.wikihow.com/Use-Firefox-Keywords (See method 2 for an easier variant.)

Sebatyne 2 days ago 0 replies      
I stopped using browser bookmarks in favor of a web app (the bookmark manager of OfficeJS, https://www.officejs.com/), integrated directly into any browser by updating the default search engine. With the bookmarks synchronized on a WebDAV server, after logging into the app I can access them from any browser on any device. All my searches in the browser bar then go through my bookmarks first, and I'm redirected to a real search engine if no match has been found.
Steven_Bukal 2 days ago 0 replies      
I have lots of bookmarks, mostly for a few purposes:

1 - Pages I want to autocomplete so I don't have to remember and type the full address or verify that I'm on the true site for my bank and not a phishing site

2 - Content to do something about in the future. Stuff to read later, stuff to download to my local machine, etc.

3 - Resources that I want to remember exist and be able to find. For example, I've got a page saved that produces blank graphics in whatever dimensions you want for use in stuff like web design. Forgetting what it is called, I could look it up in my bookmarks pretty quickly instead of having to open photoshop and create such graphics manually

__jal 3 days ago 0 replies      
I do, for frequent access stuff. Work-related things, personal apps that run in various places, frequently visited sites. The trick is to keep the number low, otherwise I'll never use them because they're impossible to navigate.

For reference material, I built something sort of vaguely like pinboard.in into a home-brew app that I run for myself. It handles search, a modified form of tagging, and a timeline-like view, and I get to it with a JS bookmark (tada) that lives in-browser and sends selected text as a search.

(The app itself is a ridiculous mess, having grown as a sort of cancer in a different app I wrote for myself that now does several unrelated things. Maybe someday I'll pick that crap back apart into something releasable.)

csydas 2 days ago 0 replies      
I do, and I maintain a set for our company's support team for when we hire newbies. We have a pretty standard "load-out" of commonly used pages and sites, both internal and external, which come up in support calls for the product. A newbie might not have use for every single link, but having a curated list of "this will be useful at some point; just keep in mind that it's there should you run out of ideas" really helps them get past the initial hurdle of learning the ins and outs of the product and the other elements that support it.
blakesterz 3 days ago 0 replies      
The only part I use is the bookmark toolbar, which I use HEAVILY. Just counted, I have 30 in my toolbar. I never use any other bookmarks now though. I still have all my old bookmarks in backups going back to the late 90s though. Fun to look at every once in a while.
kxyvr 2 days ago 1 reply      
I have hundreds of bookmarks stored across dozens of folders based on topic. I've been burned in the past with Google changing their search algorithm and not being able to find material easily, so I just bookmark everything I want to refer to later now. To that end, I primarily use Firefox and periodically archive them using the "Import and Backup" option from the bookmarks folder. That works alright as it produces an HTML file with the entries, but I'd like something more program independent. Does anyone know a good utility for offline archiving of bookmarks in a mostly browser independent way?
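Firefox's "Import and Backup" export uses the old Netscape bookmark HTML format, which is simple enough to convert to something program-independent yourself. A minimal sketch (not an existing utility; the sample data is made up) that turns such an export into JSON using only the standard library:

```python
import json
from html.parser import HTMLParser

class BookmarkExport(HTMLParser):
    """Pulls (url, title) pairs out of a Netscape-format bookmark export."""
    def __init__(self):
        super().__init__()
        self.bookmarks = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        # Each bookmark is a <DT><A HREF="...">title</A> entry.
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        # Text immediately inside an <A> tag is the bookmark title.
        if self._href and data.strip():
            self.bookmarks.append({"url": self._href, "title": data.strip()})
            self._href = None

def parse_export(html_text):
    p = BookmarkExport()
    p.feed(html_text)
    return p.bookmarks

# Demo on a one-line export fragment; a real run would read the exported file.
sample = '<DT><A HREF="https://news.ycombinator.com/" ADD_DATE="1491900000">Hacker News</A>'
print(json.dumps(parse_export(sample)))
```

The resulting JSON is trivially reloadable from any language, which sidesteps the browser-dependence problem.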
scriptkiddy 2 days ago 0 replies      
I do.

I never have to worry about them going away and I can organize them into folders any way I like. Plus, they can be exported, imported, and shared. I use Firefox, so accessing the bookmarks is as simple as using a drop-down menu. I actually use the bookmark toolbar for my most frequently visited sites. This way, when I want to go to HN, for instance, I just click a single button and I'm there.

I've looked at other bookmarking software/services, and I still find that plain old browser bookmarks fit every use case.

davidp670 3 days ago 1 reply      
I stopped using Chrome bookmarks b/c they got too messy, but now I use Bookmark OS, which I really like. It's kinda like Mac OS X but for bookmarks in the browser: https://bookmarkos.com
Merem 2 days ago 0 replies      
Of course I do. Just checked everything and my bookmarks number just above 1000. The ones I use the most and websites I need in the immediate future are organized in the bookmarks toolbar (I'm using Firefox). Apart from that, they are put into separate folders on various topics, as well as a list of "random" links which I can't put anywhere else. They are useful to me in the sense that I don't need to remember all those 1000+ links, and it's the fastest way to access a website (faster than typing).
a3n 3 days ago 0 replies      
I do, but only for frequent things, and I'll clean that out periodically.

For longer term bookmarks I use pinboard.

I use a middle-ground for a few things: I may bookmark, say, news sites in pinboard under the "news" category. Every tag and combination of tags on pinboard has an RSS feed; I bookmark the "news" tag's RSS feed in Firefox, and everything tagged shows up.

The RSS is not for the content of the target sites, it's for what goes in and out of the news tag. So I might add another news site to my pinboard news tag, and voilà, it shows up in my Firefox RSS bookmark. Delete something from the pinboard tag and it's gone in Firefox.

mirimir 3 days ago 0 replies      
I use bookmarks in Firefox in three ways. Sites that I use frequently go in the toolbar. Sites that I use rarely go in folders in the toolbar. Sites that I just want to remember go in "other bookmarks", and later I search for them.
Globz 2 days ago 0 replies      
Yes I still do. At this moment I have 3413 bookmarks across different folders: coding, work, recipes, gaming, etc.

I am currently running Bookmark Checker (chrome extension) and did set the parameters to "error connect" and at this very moment it is reporting : "Bookmark check status: Total bookmarks : 2238 of 3413 error connect: 2117"

so many dead links :(

I did not know about pinboard and I am really tempted to give it a try so I can do a full HTML archive without the fear of losing 2000+ bookmarks again 5 years from now.

btb 2 days ago 0 replies      
Only the bookmarks bar at the top of the browser.

For most sites I use keyboard shortcuts + the autocomplete in Chrome, i.e. to visit Hacker News: Ctrl+L, then "news.y", and hit enter.

nafizh 2 days ago 0 replies      
Surprised no one has mentioned Pocket. I use the Pocket Chrome extension. Compared to the bookmark system, using it is a breeze and much cleaner. More importantly, I can also find things again later despite my poor memory.
madiathomas 2 days ago 0 replies      
I have seven folders of bookmarks, each for a different topic/subject. Whenever I come across a new link which I will need to refer to in future, I store it so that I can open it from the bookmark. If I am not going to need the bookmark or am no longer interested in a certain subject, I delete the bookmark or the whole folder. Some of the bookmarks have been there since 2010 because they are for tools I still use.

I use Chrome. I like the fact that the bookmarks are synced to my Android phone and work computer. That way they are available whenever I want to use a computer.

nebyoolae 3 days ago 0 replies      
I do still use bookmarks, but only for places I go a lot, and I sync them via Chrome. Pocket is a godsend for the "cool links" that I check out when I have time and then usually archive away, never to look at again.
nsarafa 3 days ago 0 replies      
Ironically, I just cleared out my Chrome bookmarks today. I found it far too difficult trying to find the correct folder hidden in a long list of old/dead folders/links. After I purged, I stumbled upon the Bookmark Manager browser extension, which makes the process of adding a bookmark much easier since you can type to search (https://chrome.google.com/webstore/detail/bookmark-manager/g...)
comboy 3 days ago 1 reply      
The thread is already pretty long and it looks like I'm the first one to mention https://google.com/save - works quite well.
psiegmann 2 days ago 0 replies      
I'm quite happy with a set of project-specific bookmarks to get people up to speed quicker. We have web-{dev/acc/prd}, cms-{acc/prd}, jira, confluence, buildsystem, log-{dev/acc/prd}, etc.

We maintain the bookmarks in YAML and generate the HTML to import into Firefox/Chrome/IE. Script: https://github.com/psiegman/bookmark-generator
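The core of the idea can be sketched in a few lines (this is not the linked script; JSON stands in for YAML here to stay stdlib-only, and the folder and link names are made up): keep bookmarks as plain data and emit the Netscape bookmark HTML that browsers can import.

```python
def to_bookmark_html(folders):
    """Render {folder: {name: url}} as an importable Netscape bookmark file."""
    lines = ['<!DOCTYPE NETSCAPE-Bookmark-file-1>',
             '<TITLE>Bookmarks</TITLE>',
             '<H1>Bookmarks</H1>',
             '<DL><p>']
    for folder, links in folders.items():
        lines.append(f'  <DT><H3>{folder}</H3>')
        lines.append('  <DL><p>')
        for name, url in links.items():
            lines.append(f'    <DT><A HREF="{url}">{name}</A>')
        lines.append('  </DL><p>')
    lines.append('</DL><p>')
    return '\n'.join(lines)

# Demo: one folder of hypothetical project links.
folders = {"dev": {"jira": "https://jira.example.com",
                   "build": "https://ci.example.com"}}
print(to_bookmark_html(folders))
```

The nice property is that the data file stays diffable and reviewable in version control, and the generated HTML imports the same way into every browser.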

smnscu 2 days ago 0 replies      
I'm a diehard fan of classic bookmarks. I tried Pinboard, Pocket, and other services, but for me browser bookmarks with some form of organization work best. I like and use all of Chrome's shortcuts and nifty features for bookmarks, and even if the browsers seem to be going in a different direction (see Chrome's "smart" bookmarks), as the saying goes, they will have to pry them from my cold, dead hands.

(At the moment I have 519 bookmarks in 73 folders)

savethefuture 3 days ago 2 replies      
I do, but I have them exported and uploaded to my server daily so I can keep them in sync. I don't organize them; I just use search and find. They're all relevant links I wish to look at or read at a later date.
harijoe 2 days ago 0 replies      
I tried to address this problem some months ago with a Chrome extension. Feel free to try it and provide feedback: https://chrome.google.com/webstore/detail/oh-hi-mark/fcmdkga...
ungzd 3 days ago 0 replies      
Yes, but in a single folder (maintaining a tree structure is a pain), and I rarely access it. Del.icio.us was very convenient; it seems it still exists, but it seems they deleted all the old data and may close again soon.
ronreiter 2 days ago 0 replies      
A reading list is not bookmarks. And of course we do use bookmarks, especially those of us who work in companies that require frequent access to several systems.
AldousHaxley 3 days ago 0 replies      
YES! So many things to read, and I hate having a million tabs open at once. Even if I don't get around to something until months later, bookmarks are an indispensable tool.
dingdingdang 3 days ago 0 replies      
Yes, extensively - I have them arranged in the Firefox bookmark bar along with drop-down folders for categories like "search", "news", "projects", etc. For everything that needs remembering in a more tertiary sense, I bookmark without folders but use tags. FF's sync system, similar to Chrome's, which synchronizes to other computers and phones while keeping the data encrypted in the cloud, makes bookmarks a lot less volatile than they used to be.
tarboreus 3 days ago 0 replies      
I just keep links in easily searchable text files. When I need something I can just search for it. Emacs orgmode allows for nice links, you can open the page straight from the text file.
petercooper 3 days ago 0 replies      
No, I created a simple Ruby script that stores them in a text file and lets me easily search them at the command line. Syncs through Dropbox so I have it on all my machines :)
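The rough shape of such a tool, sketched in Python rather than Ruby (the tab-separated format and file handling here are assumptions, not the commenter's script): append "url<TAB>description" lines to a text file, then search it case-insensitively from the command line.

```python
import os
import tempfile
from pathlib import Path

def add(store: Path, url: str, description: str) -> None:
    """Append one bookmark as a tab-separated line."""
    with open(store, "a", encoding="utf-8") as f:
        f.write(f"{url}\t{description}\n")

def search(store: Path, term: str) -> list[str]:
    """Return every stored line containing the term, case-insensitively."""
    term = term.lower()
    return [line for line in store.read_text(encoding="utf-8").splitlines()
            if term in line.lower()]

# Demo against a throwaway file; a real version would point at a file
# inside Dropbox so it syncs across machines.
fd, path = tempfile.mkstemp()
os.close(fd)
store = Path(path)
add(store, "https://news.ycombinator.com", "Hacker News front page")
add(store, "https://lobste.rs", "Lobsters, another link aggregator")
print(search(store, "hacker"))
os.unlink(path)
```

Because the store is plain text, grep, Dropbox sync, and version control all work on it for free.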
kakarot 3 days ago 0 replies      
I use a single line of favicons across my bookmarks bar and remove all text from them. They are organized by color in a rainbow-like fashion.

It looks beautiful and works well. I just have to maintain a mental map of what general color a website's icon is and while my mouse is gravitating in that direction I'm mentally retrieving the actual icon. It's a great visual memory exercise in the beginning but eventually you wonder how you did it any other way.

svartkonst 2 days ago 0 replies      
I do, semi-organized into folders, mostly for archival purposes. If I come across something, a product or library or guide or whatever, that I want to save, I bookmark it.

I don't use the bookmark tabs, and I'm not regularly using what I have in my bookmarks; they're more for safekeeping, and to remind myself about things.

Plus it's fun to take a look through the bookmarks and rediscover things.

DavideNL 1 day ago 0 replies      
Yes... and also I recently discovered Bookmacster (macOS), which locally syncs bookmarks between Safari, Firefox, Chromium, etc. (without having to upload all your stuff to a cloud). Very handy!
IE6 2 days ago 0 replies      
Yes but not like I used to. When I was younger and had time I would bookmark things, organize them, and use them to navigate to sites of interest. Now I simply use bookmarks as a dumping ground for 'something I need to see but later because I am tired now and not using the internet for anything serious'.
seltzered_ 2 days ago 0 replies      
No, I don't use browser bookmarks or bar shortcuts. For me at least, I feel like those needs have been replaced by:

- pinboard. Been using it for many years

- DuckDuckGo's !bang search to quickly access Pinboard bookmarks / maps / etc.

- the browser URL bar's own autocomplete

- this may also be because, until recently, I used different browsers on mobile (Firefox) vs desktop (Safari)

damat 2 days ago 0 replies      
I'm not just using bookmarks but have even pushed them to a more advanced level with my own extension for Chrome: https://chrome.google.com/webstore/detail/quick-startpage/dg...
alphydan 3 days ago 0 replies      
I need to access 3 pages and 7 google drive folders almost every day for work. Those are the only browser bookmarks I have because they save me 20 - 30 clicks/day.
frik 3 days ago 0 replies      
Yes, and with the bookmark bar enabled.

@browser developers: don't remove or hide the bookmark feature. Allow me to bookmark the same link in multiple folders. Don't nag me with your cloud sync (no thanks), but add a feature to sync to a private cloud like ownCloud/Nextcloud. Don't remove advanced features, and don't simplify things without fully understanding them. RSS support in the bookmark bar is pretty useful.

jhwhite 3 days ago 1 reply      
I do but I'm very slowly moving away from them in some instances.

If I come across articles I like, I save them to Instapaper instead of bookmarking.

For work... I've pretty much created my own wiki of bookmarks using OneNote. My employer uses SharePoint, and some pages won't display or work correctly in Chrome, so I use IE for the work intranet. So instead of bookmarks, I have a notebook with tags for easy searching, and I can add a good description of each site.

sriku 2 days ago 0 replies      
I use but don't rely on bookmarks, as I usually want to add some information when saving a reference. My tool of choice is Zotero [1], which I started using in my researcher days and never looked back. If you organize your references into collections, Zotero can make some nice summaries for you.

[1]: https://www.zotero.org

vermooten 2 days ago 0 replies      
I've still got 100s of bookmarks from the late 90s, still in their original sub-folders. Most are dead now, which is a shame.
mehdix 2 days ago 0 replies      
Oh, yes I do use them a lot. I store my bookmarks flat, with no structure. In Chrome, I add keywords to the title upon bookmarking and later I do keyword-based searches. In fact, I developed my own Chrome extension to search bookmarks: https://goo.gl/paiU3o
ivm 2 days ago 0 replies      
No, I run a local MoinMoin instance with database in Dropbox and arrange different topics in pages there, including links.
Grue3 2 days ago 0 replies      
Yes, I use the star in Firefox URL bar (yeah, I know, they moved it recently for some reason) to mark the websites I'd want to revisit and add a bunch of tags to them. Then, if I forget about something, I can always search by tag. These are filed as "Unsorted bookmarks". I pretty much never use Bookmarks menu, because searching by tag is more efficient.
sigi45 2 days ago 1 reply      
Yes. I hide my bookmark bar on tabs and only see it on a new tab.

I have in my bookmark bar the most used sites. I have a few folders for topics and for work bookmarks.

Bookmarks help me to close a tab. It gives me the feeling that I still can read it, but I don't have to do so now. Sometimes, depending on the content, I pocket it instead of using a bookmark.

Moto7451 3 days ago 0 replies      
Yup. I use Safari on my Mac and everything syncs nicely between my devices care of iCloud. I use folders within the bookmark bar to organize things.
jakub_g 2 days ago 0 replies      
I use bookmarks at work, mostly as a big jar of deeplinks to wiki pages, dashboards etc etc - I do not organize them nicely into subfolders, just rely on my memory on how I named them and parts of URL. The more often I use the page, the shorter the keyword. I use CTRL-L and bookmark name to open pages all the time.
Huhty 3 days ago 0 replies      
Yes, I have Chrome synced between all devices and PCs.
ramigb 2 days ago 0 replies      
Yep, I also built a chrome extension to manage bookmarks ...


etiam 2 days ago 0 replies      
Yes. In a "folder" hierarchy in the built-in Firefox bookmarks manager. I often wish for a better interface to move around in the tree though (e.g. filter for a bookmark or folder and see what's stored close to it), and some sort of aliases for multiple classifications would be handy sometimes.
astrikos 2 days ago 0 replies      
Right now I use pocket, but I want to try stash!

I will definitely write a short review, but I need 10 people to view the link to help me access it first: https://stash.ai/landing?source=f520deef

markatkinson 2 days ago 0 replies      
Yeah, it turns out my bookmarks are a graveyard for things I'll never read. The Android HN app I use lets me mark articles to read later, and most of the time it works offline, so I tend to use that more. When I find myself on a plane with no reception, I dip into my list of offline HN articles.
Jayakumark 2 days ago 0 replies      
I have more than 150,000 links in Pinboard. I bookmark every new site that I like when I come across it. I wanted to start something similar to Product Hunt from those, but never got to it. Maybe someday I'd make it into something like a Yahoo directory, but that day never comes.
nicky0 2 days ago 0 replies      
I use bookmarks for mundane services I use semi-regularly: online banking, government services, electric, gas, water company, insurance company and so on.

Also admin stuff like webhost control panel, bugtracker, iTunes Connect etc.

Arranged in favourites bar in folders by category.

Saved articles go in pinboard.in however.

steverandy 3 days ago 0 replies      
I use a browser called Colibri (https://colibri.opqr.co/).

It has something called Links, where all URLs that you added are sorted by date. You can save a URL quickly with keyboard shortcut (CMD+D).

I also organize the links that I frequently visit by topics in the Lists section.

DanBC 3 days ago 0 replies      

I make sure I use a descriptive sentence when I save them.

They're useful to me because the people creating the pages don't know about SEO and Google fucking sucks at giving me the pages I need unless I use weird contorted search phrases or remember the exact name of the document.

I have 12 icons in my bookmark toolbar that I use daily. I have a few that I don't use very often.

jiiam 2 days ago 0 replies      
Yep. When I'm doing a somewhat specialized research I bookmark interesting results and add a tag for future reference. Usually the time after which they are forgotten is ~1 week, because they either served their purpose or became irrelevant, but sometimes I still use some of them.
justaaron 2 days ago 0 replies      
of course I "still" use browser bookmarks. Bookmarks, back/forward buttons, it's amazing but you don't actually need to kluge more poop on top of browser behavior to make it usable! Believe it or not, they made it right the first time.
wazoox 2 days ago 0 replies      
I use the same set of bookmarks in Firefox, migrating and evolving since 1997 and Netscape 1.0 on IRIX. Some are surprisingly durable. In any case, even with URL rot they are useful as reminders of pages I want to keep as references.
wtbob 3 days ago 2 replies      
Yes, I use them. I prefer them to any online service because they are completely under my own control. I do wish that I could securely sync them, but ever since Firefox completely broke the security of their Sync system, there's nothing I can rely on to safely sync for me. It's not a huge deal.
scarface74 3 days ago 0 replies      
Yes. But, except for work related URLs, I rarely go back and use them.

If it is an interesting web site with good articles, I subscribe to the RSS feed.

My bookmarks stay synced between my iPhone and Chrome on Windows using Apple's iCloud Chrome plug in. It stays synced between Chrome on different computers using my Chrome account.

goodJobWalrus 3 days ago 1 reply      
I do, but I consciously keep only a small number of them (ideally not much more than 100) and regularly purge.
jesus92gz-spain 2 days ago 0 replies      
I do. I have my Chrome browsers in sync, categorised in folders. I also have "Read later" bookmarks, as I sometimes find interesting websites or news I cannot read in full because I'm busy or for whatever other reason.
alkonaut 2 days ago 0 replies      
No. Autocomplete in the URL field only.

I never save anything for later, I either read it or forget it. I only regularly visit a few dozen sites, so usually the site is completed in the URL field after 1 character (such as "n" to load HN).

tjbiddle 3 days ago 0 replies      
Not really - L to get to the address bar, and then autocomplete handles the rest as I start typing for 99% of use-cases. However I know I used bookmarks semi-recently when I was working on a project where I was regularly using websites that I don't normally use.
rdiddly 3 days ago 0 replies      
I use bookmarks, I just don't keep them in the browser anymore. I have individual ones scattered throughout my filesystem tree by topic or function, mixed in with documents and whatever other files. Much better having just one hierarchy to search through for stuff.
jasonkostempski 3 days ago 0 replies      
I used to use and painfully maintain them for reference materials, but they proved less useful than just re-Googling, so I stopped. Re-Googling isn't great either; I'd like an easy-to-use PKB, but I wouldn't want it built directly into my browser.
LocalMan 1 day ago 0 replies      
I rely on Chrome and Firefox Bookmarks. But I have too many (thousands) and find that Xmarks doesn't help all that much.
ehnto 3 days ago 0 replies      
I use bookmarklets to perform tasks on sites to make them more readable. Actual bookmarked websites are less common, but I have a few. Normally it is for short-term "I will forget this otherwise" sites that get removed when I no longer need them.
trojanh 2 days ago 0 replies      
Since in today's age there is no unified platform, bookmarks don't make sense to me. I use Pocket, an alternative which does bookmarking smartly. It stores the webpages offline on my mobile, so it's very handy.
sametmax 2 days ago 0 replies      
Yes. Stuff to read later, stuff I want to share, resources I might come back to, quick grouped access to tools I use regularly (but not frequently), links related to each of my dev missions, etc.
nol13 3 days ago 0 replies      
Very very rarely, but have a few.

Mostly just browser history, or I'll DDG it again as a fallback. It doesn't work as well in Chrome (or I'm doing it wrong), but the FF awesome bar seems to pull up the links I need within a few keystrokes the majority of the time.

nhumrich 3 days ago 0 replies      
I love Firefox's keywords for bookmarks. I can type `gh` and be taken to GitHub, or `dh` for dockerhub, etc. Chrome can only do this for searching, not generic bookmarks. It basically is like a shell alias for all my favorite websites.
squiggy22 2 days ago 0 replies      
I wish Google would create a separate index of all the stuff I bookmark and provide it as a subset of the Google search experience. I too find myself Google-searching for info I've previously browsed.
candeira 2 days ago 0 replies      
Yes, but very few of them:

Bookmarks bar: bookmarklets for pinboard, ffound, whatfont, etc. Plus bookmarks for Toggl and certain other work-related services.

Bookmarks proper: one folder per client, with links to documentation, issue tracker, etc.

c_r_w 2 days ago 0 replies      
Chrome, synced. 99.99% of my bookmark clicks go to the Bookmarks Bar.

Mostly I save bookmarks to close a tab, doubtful I will ever look at them again. Mostly those are for tech research.

I also use "open tabs on other devices" extensively.

smdz 2 days ago 0 replies      
I use it, but not in its original way.

I would bookmark a link in Chrome just because it automatically shows up (in type ahead) when I search for similar keywords in the address bar. I have too many bookmarks to categorize and remember.

pjc50 2 days ago 0 replies      
Yes, in small quantities and not synced. They're for sites I visit regularly, or essential intranet pages at work.

Stuff I want archived for reference or I want to read later goes to Pinboard.in.

Kiro 2 days ago 0 replies      
No, I just save links as plain text in my Evernote. This means I can add comments and other meta data very easily and I have everything stored in one place without having to rely on the browser.
ertucetin 2 days ago 0 replies      
Also, I use Diigo; it's a very cool and intuitive tool, so I highly recommend it: https://www.diigo.com
nottorp 2 days ago 0 replies      
Of course I use bookmarks. Not for sites I visit regularly, the browser takes care of that, but for reference articles I'll need later. I just use per-subject folders, nothing fancy.
robertlf 3 days ago 1 reply      
I've always lamented the fact that the major browsers don't make it easy to see how old your bookmarks are and provide a way to highlight and delete ones that you haven't clicked on in awhile.
grafoo 2 days ago 1 reply      
The thing that always bugs me is how to use bookmarks when working with multiple browsers. The various bookmark service platforms never fully scratched the one itch I was feeling: simply save a bookmark and let me add some tags to it.

Right now the only browser-based bookmark I have is a bookmarklet that takes me to my own bookmark store (see https://github.com/grafoo/webdmp if you're interested).

astrostl 1 day ago 0 replies      
Yes, but only for regularly-visited things. The rest is on Pinboard.
josho 3 days ago 0 replies      
I used Stache to save a copy of the site and a thumbnail. It was a pretty nifty app, but is pretty much end of life due to insufficient sales.

There is an opportunity to do something better than bookmarks, but not likely as a business.

roystonvassey 2 days ago 0 replies      
Since I find most of the useful content I read either on HN or Reddit, I tend to use in-built mechanisms such as the like/upvote/save options to bookmark things I like.
spectistcles 3 days ago 1 reply      
I use them in Chrome all the time. I have thousands; I basically search them as a kind of personal Google, for those "Oh, I remember reading an article about that once, let me find it" moments.
vortico 3 days ago 0 replies      
Yes, and in vimperator they're really easy to use. Press "A" to bookmark, "A" again to remove, and "t" (tabopen) to search for a page in your bookmarks.
pensatoio 2 days ago 0 replies      
Of course. I believe just about any technically capable person uses bookmarks. I've never met a programmer who didn't, and such is the primary audience of this site.
steiger 15 hours ago 0 replies      
I never really did.
ge96 2 days ago 0 replies      
Yeah, just because I've been lazy and haven't finished my Chrome extension that I've been working on, off and on, to deal partially with this. I research random crap and like to store that information on one of my servers. I've got the basic read/write down. I'm having a problem with the window disappearing when it's not focused; this is intended behavior, not something I can get around. So I have to work on using a background process/page and cookies (I have yet to use cookies).
kome 2 days ago 0 replies      
I use Pinboard (for free) to manage more than 4000 bookmarks. And I use it often, it's my personal search engine. It's great. But it can be improved a lot.
PixZxZxA 2 days ago 0 replies      
I bookmark things that I visit frequently (HN, Todoist etc) and save things to Pinboard that I want to read later or save for other reasons.
wsc981 3 days ago 0 replies      
I use bookmarks. Mainly to keep autocomplete of important URLs intact after clearing browser history.

And also to keep track of important endpoints when I work for a new client (I am freelancer).

wakkaflokka 2 days ago 0 replies      
On this topic, does anybody have a good recommendation for a real-time Google Chrome-to-Pinboard bookmark sync service/extension?
tluyben2 3 days ago 0 replies      
I use them a lot; there are a lot of obscure searches I do for which I bookmark the result with keywords that let me find it in one go instead of doing the search mambo in Google again.
rurban 2 days ago 0 replies      
Sure. Chrome syncs them and does autocompletion. Some shortcuts are also used as icons on the bar, but autocompletion is the most important feature.
neurobot 3 days ago 0 replies      
I still use bookmarks. I create folders inside folders (folderception).

Also, I use MozBackup to back up my profile (all of it, including configuration, bookmarks, history, etc.).

I use Firefox as my primary browser.

midhunsezhi 2 days ago 0 replies      
I use them very rarely. Pocket has become my preferred source for storing, managing and sharing my links now.
Veratyr 3 days ago 0 replies      
I use mine as a queue for things I intend to look at later.

What I really wish for is a way to save all the important aspects of a page for future viewing and organise it in a particular way.

gcr 2 days ago 0 replies      
Sort of.

I use Safari's reading list extensively.

I also keep snippets of things I want to keep inside my emacs org-mode folder so it's instantly accessible.

continuational 2 days ago 0 replies      
I use them to make sure I can find the site again via Chrome's autocompletion. I don't organize them and I never open the bookmarks view.
tetraodonpuffer 3 days ago 0 replies      
Only the toolbar, for quick access to the sites I use the most, and those are kept as just the site icon so I can have many. For everything else I want to keep, I use pinboard.
windlessstorm 3 days ago 0 replies      
I email myself the interesting and important links with an added note. Gmail has powerful search to find any link I am looking for; no problems so far.
vkorsunov 2 days ago 0 replies      
We created Bubblehunt (https://bubblehunt.com), a search platform where you can create your own search system for bookmarks, links, and any other resources.

The service automatically indexes pages, gets relevant results from your information space, and deletes duplicates and non-active URLs.

This is an alpha version, and it would be awesome if you gave us feedback and ideas about what we need to improve.

asdfasdf45 3 days ago 0 replies      
Evernote web clipper (for Chrome)!

It's bookmarks on steroids, saved for offline, taggable (no assumption of organizing data in a tree), and synced.

Probably the only useful Evernote feature.

fariz_ 2 days ago 0 replies      
Procrastes 3 days ago 0 replies      
I do. I have several top-level folders (Daily, Reference, Demo, and Personal), then a few links on the bookmark bar for Production, Staging, and Tickets.
nickbauman 2 days ago 0 replies      
I use them for workflow markers. Things I do everyday, like review pull requests, check specifications, access dashboards.
daledavies 2 days ago 0 replies      
Yes, excellent for research and saving stuff for later. I do tend to purge after a year or so though because bit rot usually sets in.
steel88 2 days ago 1 reply      
Sure, I use Papaly for everything, great service.
8note 3 days ago 0 replies      
I use them for work to keep track of all the different systems' UIs I need to use, but otherwise no: the address/search bar does better.
taranw85 3 days ago 0 replies      
I use a website called Mochimarks. It lets you set reminder dates on bookmarks. I mostly use that to check up on threads, products, or blogs.
swrobel 3 days ago 0 replies      
Not for what seems like an eternity. Autocomplete from my history has replaced them for me. Actually, I do on mobile, just on quickstarter screens.
bgrohman 2 days ago 0 replies      
Yes. I use multiple browsers, too, so I built a bookmark manager web app for personal use with grouping, tagging, and search.
Safety1stClyde 3 days ago 0 replies      
I have a web server on my home computer, so I make a "bookmarks" page on there which I can use to visit web sites I want to go to.
meddlepal 2 days ago 0 replies      
Not really no. Even the stuff I do bookmark I do so more as a "I might need this six months from now" kinda thing.
pacomerh 2 days ago 0 replies      
Sure, they sync through devices, many levels of folder nesting, easy access!, drag drop, pretty raw, basic & handy.
weitzj 2 days ago 0 replies      
Yes. I use the bookmarks favorites bar, create a folder per project and synchronize across all devices via xmarks.
taklya 2 days ago 0 replies      
I do use bookmarks but now I use Refind which allows me to store the bookmarks with tags and socially.
th3reverend 3 days ago 0 replies      
I bookmark for:

1. Work; internal websites can't be found on Google and I can never remember them.

2. To clean up open tabs related to a task that I have to postpone; I bookmark them en masse and come back to them later; discard when done.

3. I have a dozen or so websites I visit daily; right-click the folder of bookmarks and open them all at once.

hsivonen 2 days ago 0 replies      
I have some. I don't organize them. I just use them to make rare things not fall off the Awesomebar search in Firefox.
weslly 2 days ago 0 replies      
More than I would like to.

I have a pinboard account but always end up just dragging links to the bookmarks toolbar.

qerim 2 days ago 0 replies      
I used to manage my bookmarks in Chrome; however, after some 'Sync' incident, I lost a few of them.

I now use [Papaly](http://papaly.com). It is really well made. I have my bookmarks on different boards, and you can share your bookmark boards with the community if you wish.

tehabe 2 days ago 0 replies      
I bookmark a lot of sites but I rarely go back and use them, or at least it feels that way.
Avshalom 3 days ago 0 replies      
I have thousands, maybe tens of thousands. The library window never closes. I really don't organize them.
pcr0 3 days ago 0 replies      
I stopped using them in favor of Pocket.
kkanojia 2 days ago 0 replies      
I use bookmarks for my regular links and pocket for one time links i want to go back later and read.
paullth 2 days ago 0 replies      
Yeah 1000s of them, all organised into hierarchical subject based folders. Very useful to me
pavanky 2 days ago 0 replies      
I have frequently used websites in my bookmark bar. There are about 20 of them. That is about it.
tobeportable 2 days ago 0 replies      
Not in the browser, just markdown files structured like those github *-awesome repos.
faragon 3 days ago 1 reply      
Only for short-term. For things over a month of age: key words + web search is faster, at least for me.
seajones 2 days ago 0 replies      
Simply put, a bit. Not much, I can find most stuff again just by searching
butz 2 days ago 0 replies      
Yes, who's asking? Is one of mainstream browsers planning to ditch bookmarks?
digitalpacman 1 day ago 0 replies      
Uh yeah. Bookmark bar is the best thing ever.
make3 2 days ago 0 replies      
I use Pocket instead.. it's amazing with an e-reader like Kobo..
zitterbewegung 3 days ago 0 replies      
At work, yeah. Everywhere else I memorize URLs or use search / keep a tab open.
hrez 2 days ago 0 replies      
Yes and xmarks.com plugin for cross-browser sync and backup/history.
ecesena 2 days ago 0 replies      
I only have 3 or 4, HN is one of them, and I use them mostly on my iPhone/Mac (with the active bar); when I open a new tab I can open those sites with a single tap, pretty convenient. Besides this, no, I've never organized them.
xylon 2 days ago 0 replies      
Of course I use bookmarks; how else could I remember websites?
j_s 3 days ago 0 replies      
Personally I use the QupZilla browser because private browsing automatically starts separate sessions per-process. Before I throw them all away I collect all the urls in a big text file using Windows UI Automation... it's messy but just barely better than nothing.

Never thought about the following (search vs. bookmarks/history) until the HN discussion last week, though I have always typed in google.com before searching just because browser search money seems like a bad incentive:

There is a reason for that: as a rule, browsers don't really want you to use history. They want you to search and find things multiple times because search royalties are part of their business model.

A couple of full-text-of-every-page-visited Chrome add-ons, essentially a single-computer version of the https://pinboard.in/ $25/yr hosted "archiving and full-text bookmark search" subscription (unfortunately for me I don't like Google/Chrome/anti-privacy enough to use it as my main browser):

https://github.com/lengstrom/falcon "Chrome extension for full text history search"

http://fetching.io/ "your own personal Google -- a search engine for all the web pages you've seen"

https://worldbrain.io/ "Full-Text Search the Pages you Visited and Bookmarked"

https://addons.mozilla.org/en-US/firefox/addon/recoll-indexe... "copies the web pages you visit to the Recoll web indexing queue"

Source: Vivaldi browser v1.8 released, with calendar-style browsing history | https://news.ycombinator.com/item?id=13984122 (last week)

Also mentioned: Tree Style Tabs Firefox add-on "shows my tabs in the context I opened them from" | https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...

GraphiTabs Chrome add-on | https://chrome.google.com/webstore/detail/graphitabs/dcfclem...

Edit: Added intro w/ my own anecdata.

Another idea: custom browsers per-site-you-use, per HN user megous: https://news.ycombinator.com/item?id=13226170

For each use case that is not a free browsing I create an electron app, that never executes any code from the web or uses any external style

tscs37 2 days ago 0 replies      
Shaarli + Wallabag. So no.

I usually try to tag my bookmarks but it rarely happens.

lohengramm 2 days ago 0 replies      
I constantly use the bookmarks bar (Firefox).
skdotdan 2 days ago 0 replies      
I bookmark webpages all the time but then never find them again.
bhauer 3 days ago 0 replies      
All the time, using folders in the bookmarks bar as drop-down menus.
bootload 3 days ago 0 replies      
yes, HN itself. I don't bother organising them; search is provided. The articles posted by myself and others are as good as it gets. Moderated/insightful comments are a bonus.
senorjazz 2 days ago 0 replies      
I bookmark everything but go back and read nothing :(
sidcool 3 days ago 0 replies      
Yes, Chrome syncs my bookmarks across devices. Pretty nifty.
iamacynic 3 days ago 0 replies      
yes. the 50 links i have to use over and over every day managing a business are all on the bookmarks bar.

for example: i have a bookmark that shows me every invoice issued in the past 30 days.

KevanM 2 days ago 0 replies      
yes, I have a limited set organised into what I'm doing at work.

The only personal ones I have are news websites and a lunchtime reading folder.

known 3 days ago 0 replies      
I always keep TextPad open and copy all interesting URLs into it
philippz 2 days ago 0 replies      
Definitely. But more often I use Pocket instead
nullsynapse 3 days ago 0 replies      
Yes, but I use Alfred and Chrome Bookmarks to search them.
smrtinsert 3 days ago 0 replies      
yes. synced to accounts, for reference material that required complex searches to arrive at - or material I only browse seldom, such as fitness plans.
scelerat 2 days ago 1 reply      
I miss delicio.us.
vasili111 3 days ago 0 replies      
I use Chrome and miss opera 12 bookmarks.
blizkreeg 2 days ago 0 replies      
Pocket is how I bookmark now.
kevinwang 3 days ago 0 replies      
yes, i use them extensively. They're a godsend for the homepages of all my college classes.
flurdy 2 days ago 0 replies      
No. Not for many years
jdiscar 3 days ago 0 replies      
I thought about this a lot... so this'll be long. I thought of how bookmarks were used and came up with:

- Things you want easy access to, but have annoying URLs, like your company's wiki page (Solved by Favorites/Bookmarks Bar or Dashboard)

- Things you want to finish looking at later (Solved by Read Later / Reminder)

- Things you want to keep track of, like blogs (Solved by Read Later / Reminder)

- Things you want to be able to find later (Solved by Full Text Search and Tags)

- Something you might want to see again, but not anytime soon (Solved by Personal Archive)

- Something you simply liked or are favoriting (Solved by Personal Archive)

- Note taking / Research (Solved by Tags and Boards)

- Idea inspiration (Solved by Tags and Boards)

- Things you want to show other people (Solved by Social)

- Things you want to get for yourself (Solved by Wishlist)

- Things you want other people to get for you (Solved by Wishlist)

My main problem with using bookmarks was that I rarely went back to them. Normal bookmarks are essentially a personal archive and google search usually finds things much better.

I realized there were a lot of bookmarks I'd like to go back to, I'd just forget about them. Maybe I'd like to read something when I got home from work, or maybe I wanted to check back in a week for an update (or release date), or I wanted to keep a list of items to show someone later (usually funny videos or gifs.) It was pretty difficult to do that no matter how I organized my folders or tagged things.

I eventually built my own bookmark site (https://www.mochimarks.com/landing) with all the features I wanted. The main features (apart from the expected tagging/full text search/browser integration/notes/etc...) were settable/automatic reminders, wishlists, and recommendations. Wishlists let you rank bookmarks. Recommendations could be new stuff from friends or the app could recommend that you look at stuff you liked that you hadn't visited in a while.

After having my app for a while, I've found I use bookmarks a lot more. I mostly use reminders and have a few things pop up to check each day. Reminders are killer for me. But when I'm bored I like to sort my wishlists. I don't use tags much... I really only use #Programming, #Interesting (usually really good articles), #Funny, #Music, #Blog, and #ArtBlog. I'll use the recommendation features to check on my blogs and to share links with my friends. I use Read Later a lot, but rarely actually go back and read things later. But when I do, I'm really glad the feature is there.
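The reminder/tag combination described above fits in a tiny data model. This is an illustrative toy sketch with hypothetical names, not how Mochimarks is actually built:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Set

# Toy bookmark model: tags for retrieval, an optional reminder date.
@dataclass
class Bookmark:
    url: str
    title: str
    tags: Set[str] = field(default_factory=set)
    remind_on: Optional[date] = None

def due_reminders(bookmarks: List[Bookmark], today: date) -> List[Bookmark]:
    # Everything whose reminder date has arrived or passed.
    return [b for b in bookmarks if b.remind_on and b.remind_on <= today]

def by_tag(bookmarks: List[Bookmark], tag: str) -> List[Bookmark]:
    return [b for b in bookmarks if tag in b.tags]

bms = [
    Bookmark("https://example.com/a", "Read later", {"Programming"},
             remind_on=date(2017, 4, 10)),
    Bookmark("https://example.com/b", "Funny video", {"Funny"}),
]
assert [b.title for b in due_reminders(bms, date(2017, 4, 11))] == ["Read later"]
assert [b.url for b in by_tag(bms, "Funny")] == ["https://example.com/b"]
```

The point of the sketch is that reminders are just a filter over a date field, which is why they compose cleanly with tags and search.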

pmkary 2 days ago 0 replies      
nope, but I use things like top-sites and Opera's startpage
exabrial 3 days ago 0 replies      
Yes. Mainly the toolbar
draw_down 3 days ago 0 replies      
Yes, of course!
shurcooL 2 days ago 0 replies      
It's great timing for this question for me. I've recently made a change in how I use bookmarks, and I've become very curious about how other people deal with them.

Some history. I've used bookmarks like anyone else since before the IE6 days. When Chrome 1.0 came out, I switched to it and have been using it as my primary browser since. When Chrome added the ability to sync (bookmarks and other things), I started using that.

So for the last 5+ years, I've had all my bookmarks synced between my main computers and mobile devices.

There were 3 stages of how I used bookmarks.

First stage was me trying to organize things into folders, based on content. It seemed to make sense, but didn't really scale well. I ended up not liking my bookmarks after a while because I never actually used existing ones, only added new ones.

The problem with organizing by folders is that they're exclusive. If I run into a new blog I want to bookmark, it would normally go under Blogs. But if it's game related, I have a Game Dev folder that has Blogs inside that.

I feel like labels would work better, since then you can just apply multiple labels to bookmarks and be able to find them more logically.

Eventually, I gave up on that, but realized that I mostly cared about bookmarking things "just in case" and so that they'd show up in Chrome's omnibar when I type or search for things.

So I changed my "add a bookmark" strategy to a simpler one. I created a top-level folder called Stream (inspired by Photo Stream from Apple devices), and it would be just a single place to dump all bookmarks, based on time. Latest ones always end up on the bottom. No trying to organize by content, because organizing by "when this bookmark was added" was actually more meaningful and helpful, but primarily easier.

That worked for a while, but even so, over the last few years I realized I didn't like my bookmark situation. I had hundreds of bookmarks from last few years, and I had forgotten about most of them. It felt like baggage, mental overhead.

So, just a few weeks ago, I set a goal to go through all my bookmarks and delete them. For any bookmark I couldn't delete, I added it to a text file and just organized that in a free-form way.

I ended up removing 90% of useless bookmarks. They were either 404, no longer useful or relevant, out of date, or easily findable via Google when I need to look that topic up.

The 10% remaining were high quality things that I actually cared enough to want to keep in a .txt file for now.

So, I went from http://instantshare.virtivia.com:27080/12tdxyh7suc7h.html from last few years, to just http://instantshare.virtivia.com:27080/1f2drzhc3w3hk.txt.

Feeling good about that so far. I'll put the .txt file with my other .txt files for now, and see if there's anything more I wanna do with it later. But for now, it works well enough, and I'm feeling a huge sense of relief from no longer having those bookmarks in my browser.

As a bonus, I now feel better about being able to switch browser I use, and not have to worry about importing/exporting bookmarks. I just don't want to have my bookmarks tied so tightly with the browser I use, it makes sense to keep them externally.

I really like the observation someone here made about bookmarks usually being used as "TODO" items. Articles to read, interesting blog posts to consider going through, etc. I think that really makes sense why it feels bad to have so many unused bookmarks accumulating.

nunez 2 days ago 0 replies      
no. haven't in years.
geggam 2 days ago 0 replies      
delicio.us / delicious.com

back in my day...

psyc 3 days ago 0 replies      
Never did.
jelder 3 days ago 1 reply      
No, and I judge pretty harshly anyone who does. A few shortcuts/bookmarklets on the bookmark bar is acceptable.
A quick look at the Ikea Trådfri lighting platform mjg59.dreamwidth.org
396 points by dankohn1  2 days ago   134 comments top 22
bsamuels 2 days ago 5 replies      
I don't get everyone's gripe about the lack of HTTPS as long as there's firmware signing.

HTTPS as a protocol ages extremely fast, trust anchors always change, and there's no guarantee that today's state of the art won't be completely incompatible with TLS implementations in 5 years.

For HTTPS to work properly on an embedded device, it needs to have an up to date OpenSSL library and updated certificate anchors. These are always packed as part of the firmware image itself, so any update to certificate anchors or OpenSSL would require an entire new image to be deployed.

This isn't a problem for websites because updating is usually as simple as apt-get upgrade, but this is a massive problem for embedded devices because publishing a new firmware image usually means pumping a few hundred hours of QA time into the image, then back and forthing with your manufacturer to get them to use the new image on newly minted units.

This isn't even considering the changes in OpenSSL over time. Many older routers simply cannot use HTTPS for updates because newer versions of OpenSSL simply won't fit on the flash.

Then there's the question of how end-of-life will be handled. People often use products long after they've gone EOL. You have to ask yourself what happens when someone plugs in a lighting unit that has an old firmware version on it, and the unit can't communicate with the update server because the product was EOL'd 5 years ago. There won't be a transition firmware for such an old product, and the people who know how to roll the firmware images probably don't even work there any more. That user is now SOL unless you have a method for manually updating firmware.
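The verify-before-parse flow being debated here can be sketched in a few lines. Note this is illustrative only: a pinned SHA-256 digest stands in for a real asymmetric signature check (which is what actual firmware signing uses), and none of the names come from any vendor's code.

```python
import hashlib

def verify_firmware(blob: bytes, expected_sha256_hex: str) -> bool:
    # Verify the untrusted download BEFORE any parsing of its contents.
    # A real updater would check an RSA/Ed25519 signature over the image
    # rather than compare against a pinned hash, but the control flow is
    # the same.
    return hashlib.sha256(blob).hexdigest() == expected_sha256_hex

image = b"\x7fFWIMG-v1.2-example"           # stand-in firmware bytes
pinned = hashlib.sha256(image).hexdigest()  # stand-in for a vendor-published digest

assert verify_firmware(image, pinned)             # intact image passes
assert not verify_firmware(image + b"!", pinned)  # any tampering fails
```

With this check in place, plain-HTTP transport only lets an attacker deny the update, not substitute a malicious image, which is the trade-off the parent comment is defending.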

matt_wulfeck 2 days ago 1 reply      
> It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on two single UDP ports.

This is excellent. I can't even say the same thing about my AT&T fiber gateway. It listens on two random ports with no way to turn it off (and also you can't use your gigabit internet without the AT&T gateway in front). I don't know what it is, but I'm sure it's probably insecure.

okket 2 days ago 4 replies      
> That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal.

Why? What security benefit do you gain by using HTTPS when you already check the signature/hash of the firmware file?

floatboth 1 day ago 0 replies      
CoAP server on LAN? This is excellent. This is exactly how I set up my DIY ESP8266 "smart" devices. More LAN of Things please, not "Internet".
jjuhl 1 day ago 0 replies      
Using 'pool.ntp.org' is not cool. Ikea should get a Vendor Zone - http://www.pool.ntp.org/en/vendors.html#vendor-zone
robert_foss 2 days ago 2 replies      
Thanks Matthew.

It's nice to see that some serious vendors actually do things mostly right.

fnord123 1 day ago 0 replies      
Also of interest is this teardown of the Koppla USB power supply:


It's also a pretty darn good piece of kit.

patrickmn 2 days ago 2 replies      
> The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent.

For what it's worth, hacking is a big part of Swedish culture.

wingerlang 1 day ago 6 replies      
Trådfri literally translated is "threadless" but it can probably be interpreted as cordless as well.

I've never heard anyone say "trådfri" before. The normal word would be "sladdlös", at least where I'm from.

microcolonel 2 days ago 2 replies      
Might get me a few of these and write some client libraries. I think it would be swell to hook it up to ambient light sensors to set the lights exactly when it's time to replace sunlight in a given room.

And it looks like they've separated the concerns somewhat properly, so the lightbulbs can be somewhat separate from firmware updates and the suchlike. Big improvement over some folks....[0]

[0]: https://twitter.com/internetofshit/status/849667478385037317

tostitos1979 1 day ago 1 reply      
I watched the videos and this lighting system seems very nice. Why does the gateway need an internet connection though? For firmware updates and supporting the mobile app? Since it says local only, I assume the mobile device has to be on the same LAN?

If so, I guess they are saying that the API between the app and the gateway is currently closed (according to the IKEA website) but they are working to change that. So what is speaking CoAP? The gateway?

Matthias247 1 day ago 0 replies      
Interesting to see CoAP deployed to an embedded device. I had already wondered whether it would end up another never-widely-deployed standard. Also interesting that they use it on the gateway and not on the (more constrained) lightbulbs. I guess the always-on gateway would also have been powerful enough to run HTTP[S], which would have made 3rd-party integration easier.
chvid 1 day ago 0 replies      
Side question: Is there a way to make Trådfri control an arbitrary 220 V device?
vanviegen 1 day ago 0 replies      
Okay, so it's basically Philips Hue, but without the API (for now), and with a lot less to offer in terms of hardware variety. In particular: color bulbs?

Prices seem to be only slightly lower than comparable Philips products.

afashglaksnhb 2 days ago 2 replies      
Is this a closed platform? Or can one integrate with one's own/third party solutions?
Harley78 18 hours ago 0 replies      
There is a lot of development information about communicating with the Ikea Trådfri gateway here:


Developers there are trying to reverse engineer it for open source home automation software.

zAy0LfpBZLC8mAC 1 day ago 1 reply      
> It's also local only, with no cloud support.

Is that actually true? Or is this just confusing "non-local" with "cloud"?

If it's speaking IP, how would it even distinguish "local" packet from "non-local" packets? What prevents you from talking to your device at home using IP connectivity elsewhere on the planet?
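One common heuristic (no claim that the Trådfri gateway actually does this) is to treat RFC 1918 private, link-local, and loopback source addresses as "local". Python's stdlib makes the check trivial, though source addresses are spoofable, so in practice the real boundary is usually NAT or simply not being reachable from the WAN:

```python
import ipaddress

def looks_local(addr: str) -> bool:
    # RFC 1918 private ranges, link-local, and loopback all count as "local".
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_link_local or ip.is_loopback

assert looks_local("192.168.1.20")   # typical home LAN address
assert looks_local("127.0.0.1")      # loopback
assert not looks_local("8.8.8.8")    # public internet
```

A device that only answers packets passing such a check (and never opens outbound cloud connections) is "local only" in the practical sense, even though nothing in IP itself enforces it.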

oflannabhra 2 days ago 0 replies      
The EFR32 chips they are using are Thread-capable. It will be interesting to see if IKEA migrates to Thread as the network layer and dotdot as the application layer. Their onboarding method, CoAP + DTLS, sure seems to indicate that would be possible.
redsummer 1 day ago 1 reply      
Will the lights work with Home Assistant on pi - home-assistant.io - without the gateway? With perhaps a zigbee hat or USB dongle?
api 1 day ago 0 replies      
Local only with no cloud support. Hallelujah.
PhasmaFelis 1 day ago 2 replies      
I was surprised too, but I guess a furniture company might not have the same pressure to "move fast and break things" (scare quotes intended) as established tech companies. They don't have a culture of rushing products to market as fast as possible.
Nginx reaches 33.3% web server market share while Apache falls below 50% w3techs.com
332 points by MarionG  14 hours ago   158 comments top 21
p49k 12 hours ago 7 replies      
This is kind of a weird statistic to try to analyze. So many uses of nginx are just the act of putting an Apache/IIS/etc site behind nginx, so technically, both servers still have market share but you only see nginx. It's just that nginx makes it so easy to do certain things, like supporting modern HTTPS, that you might as well add it to your stack rather than replace something.
shanemhansen 10 hours ago 3 replies      
Apache has historically been a giant swiss army knife that will do just about everything you could want, from redirect databases to cgi to php interpreters to crazy auth setups. It did all that while still being a reasonably good workhorse for static file serving (when properly tuned and using the right worker model, event rather than threaded or process).

Nginx seems to have a different model. It does support a number of features but from what I can see it focuses on composing functionality with HTTP rather than adding more plugins.

Nginx seems to do a great job at being a load balancer and cdn-lite, and it seems like that's what the market wants out of a web server.

jstanley 13 hours ago 3 replies      
Netcraft's web server survey shows nginx at only 20%, and shows Apache dropping below 50% way back in August 2013. That's a big difference compared to w3techs and both sources should be taken with a pinch of salt.

It's the 3rd chart on: https://news.netcraft.com/archives/2017/03/24/march-2017-web...

Web server market share depends a lot on which sites you're looking at: are you checking the top X million sites or checking every site you can possibly find out about? And also how you're deduplicating them: is every blogspot blog counted separately?

Disclaimer: I work at Netcraft (but not on the survey).

jimjag 10 hours ago 4 replies      
nginx is creating a name and market for itself as a reverse proxy, even though there are better solutions for reverse-proxies out there, everything from HAProxy to Apache Traffic Server to even Apache httpd. But this is an important market to have. Why? Because it allows for the perception that the "web runs on nginx" simply because all you see are the nginx web proxies and nothing behind that.

So what are the servers behind nginx? 9 times out of 10 it is Apache httpd, and numerous instances of it at that. So for each single nginx server "seen" in these surveys, there are unknown multiples of Apache httpd behind the scenes doing the real work.

But all that messes up the popular, if incorrect, narrative that Apache httpd is dying and nginx is gobbling up instances. It's all about marketing baby, for a product that really isn't truly "open source" but more so open core. And people buy it hook, line, and sinker.

boznz 7 hours ago 2 replies      
Can also be re-written as "Apache Still the dominant web platform for the internet despite the upstarts.."

I don't consciously know either server; I just like the way sites can spin facts differently.

Nux 10 hours ago 1 reply      
A lot of Apache work load is now behind Nginx or Haproxy, I wouldn't say those numbers are entirely truthful.

Consider how Plesk panels nowadays go with an Nginx proxy by default, but Apache in the backend; cPanel will probably follow soon, and people have already been doing this manually for a while too.

Apache is still there, just not in as much plain sight as it used to.
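The proxy-in-front pattern described above is only a few lines of nginx config. A hypothetical minimal sketch, where the hostname, port, and certificate paths are all placeholders:

```nginx
# Hypothetical minimal vhost: nginx terminates TLS and proxies everything
# to an Apache backend bound to a high local port.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # Apache httpd listens here
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

In a survey that fingerprints the `Server` header, a setup like this counts as nginx even though Apache is still doing the application work behind it.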

jwildeboer 10 hours ago 0 replies      
Or: 83.3% of web servers are Open Source. How does that sound? :-)
Neil44 13 hours ago 2 replies      
I saw a recent version of cPanel that used nginx as a proxy in front of Apache, with Apache on a high port. That might be responsible for a lot of the new nginx seen out there.
tigershark 1 hour ago 0 replies      
I am really surprised no one brought up the performance topic. Now, I have been out of web programming for half a decade or more, but if my memory is not completely gone I remember that nginx was about an order of magnitude faster than Apache under heavy load. Is it still the same nowadays?
oblio 13 hours ago 1 reply      
IIS will probably continue its nose dive as Microsoft pushes forward with .NET Core. I'm guessing people using .NET Core are more likely to use Kestrel + Nginx as a proxy.
patrickmn 13 hours ago 6 replies      
Nginx is the web server equivalent of programming languages with fibers (in a good way.)

Is there anything that competes/a "next Nginx"?

jimjag 12 hours ago 4 replies      
How much of nginx's growth is, do you think, due to it being "better" than Apache httpd (which it isn't, BTW; Apache 2.4 is easily as fast and scalable as nginx), compared to either (1) the aggressive sales and marketing of NGINX the company or (2) nginx fronting Apache httpd and thus "hiding" the growth of Apache httpd usage? But there are lots of Apache httpd haters, for some reason, and so they LOVE promoting the FUD. And yeah, I am an admitted Apache fanboy so feel free to ignore my viewpoint if it shatters your world-view :)
njharman 9 hours ago 2 replies      
Huh, I'm surprised Apache is still so high? Are there huge hosting sites that use it or something? Wordpress?

I haven't (and I know no one who has) reached for Apache over NGINX in a decade.

wyqydsyq 3 hours ago 1 reply      
"Just to put that growth rate in perspective: this is 70 times the number of sites that switch to Node.js, another fast-growing web server."

I find it hard to believe this could be accurate considering the vast majority of Node.js deployments are also utilizing Nginx as a reverse-proxy in front of it. I think a large portion of nginx's uptake is actually due to Node.js' popularity.

astrostl 7 hours ago 0 replies      
Come for the performance promises, stay for the configuration.
agentPrefect 9 hours ago 0 replies      
So I reckon Docker & PHP7/HHVM have contributed for sure. So much easier deploying Nginx - not to mention just plain nicer.
petters 10 hours ago 0 replies      
Another measurement put Google at 13% in 2010. https://m.theregister.co.uk/2010/01/29/google_web_server/

Also, by amount of traffic it's another story (YouTube).

nirav72 3 hours ago 1 reply      
Looks like Nginx has the highest usage in Russia. Why is that?
atemerev 12 hours ago 3 replies      
edpichler 11 hours ago 1 reply      
This is like (or worse than) comparing oranges and apples. Nginx and Apache are completely different.
keymone 12 hours ago 8 replies      
Calling it. Nginx won because its configs are not XML. Apache should have learned by now.
Does it scale? Who cares (2011) jacquesmattheij.com
422 points by ne01  2 days ago   269 comments top 31
timewarrior 2 days ago 30 replies      
Couldn't agree with this article more.

I built the biggest social network to come out of India from 2006-2009. It was like Twitter but over text messaging. At its peak it had 50M+ users and sent 1B+ text messages in a day.

When I started, the app was on a single machine. I didn't know a lot about databases and scaling. Didn't even know what database indexes were or what their benefits were.

Just built the basic product over a weekend and launched. Timeline after that whenever the web server exhausted all the JVM threads trying to serve requests:

1. 1 month - 20k users - learnt about indexes and created indexes.

2. 3 months - 500k users - Realized MyISAM is a bad fit for mutable tables. Converted the tables to InnoDB. Increased the number of JVM threads in Tomcat.

3. 9 months - 5M users - Realized that the default MySQL config is for a desktop and allocates just 64MB RAM to the database. Set up the MySQL configs. 2 application servers now.

4. 18 months - 15M users - Tuned MySQL even more. Optimized JDBC connector to cache MySQL prepared statements.

5. 36 months - 45M users - Split database by having different tables on different machines.

I had no idea or previous experience about any of these issues. However I always had enough notice to fix issues. Worked really hard, learnt along the way and was always able to find a way to scale the service.

I know of absolutely no service which failed because it couldn't scale. First focus on building what people love. If people love your product, they will put up with the growing pains (e.g. Twitter used to be down a lot!).

Because of my previous experience, I can now build and launch a highly scalable service at launch. However the reason I do this is that it is faster for me to do it - not because I am building it for scale.

Launch as soon as you can. Iterate as fast as you can. Time is the only currency you have which can't be earned and only spent. Spend it wisely.

Edited: formatting
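The MySQL tuning in steps 2-4 above boils down to config changes of this shape. This is a hedged sketch: the option names are MySQL 5.x-era InnoDB settings, and every value is illustrative rather than what the parent actually used.

```ini
# Hypothetical my.cnf fragment; sizes depend entirely on available RAM
# and workload.
[mysqld]
innodb_buffer_pool_size        = 8G    # the shipped default was tiny (tens of MB)
innodb_log_file_size           = 512M
innodb_flush_log_at_trx_commit = 2     # trade a little durability for throughput
max_connections                = 500
```

The broader point stands: each of these is a one-line fix discoverable under load, which is why "does it scale?" rarely needs answering on day one.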

shadowmint 1 day ago 4 replies      
I care.

It's easy to brush off scaling concerns as not important, but I've had personal experience where it's mattered, and if you want a high profile example, look at twitter.

Yes, premature optimization is a bad thing, and so is over engineering; but that's easy to say if you have the experience to make the right initial choices that mean you have a meaningful path forward to scale when you do need it.

For example, let's say you build a typical business app and push something out quickly that doesn't, say, log when it fails, or provide an auto-update mechanism, or have any remote access. Now you have it deployed at 50 locations and it's 'not working' for some reason. Not only do you physically have to go out to see what's wrong, you have to organize a reinstall at 50 locations. Bad right? yes. It's very bad. (<---- Personal experience)

Or, you do a similar ruby or python app when your domain is something that involves bulk processing massive loads of data. It works fine and you have a great 'platform' until you have 3 users, and then it starts to slow down for everyone; and it turns out, you need a dedicated server for each customer because doing your business logic in a slow language works when you only need to do 10 items a second, not 10000. Bad right? yes. Very. Bad. (<---- Personal experience)

It's not premature optimization to not pick stupid technology choices for your domain, or ship prototypes.

...but sometimes you don't have someone on the team with the experience to realize that, and the push from management is to just get it out, and not worry about the details; but trust me, if you have someone who is sticking their neck out and go, hey wait, this isn't going to scale...

Maybe you should listen to what they have to say, not quote platitudes.

Ecommerce is probably one of those things where the domain is well enough known you can get away with it; heck, just throw away all your rubbish and use an off-the-shelf solution if you hit a problem; but I'm going to suggest that the majority of people aren't building that sort of platform, because its largely a solved problem.

salman89 1 day ago 2 replies      
I agree with the general premise of avoiding premature optimizations, but designing systems that scale is important for several reasons:

- Startups grow exponentially, if you're playing catchup as you're growing you are focusing on keeping the lights on and hanging on for the ride. Important for a growing company to focus on vision.

- Software that scales in traffic is easier to scale in engineering effort. For example, harder for a 100 engineers to work on a single monolith vs 10 services.

- Service infrastructure cost is high on the list of cash burn. Scalable systems are efficient, allow startups to live longer.

- If the product you are selling directly correlates to computing power, important to make sure you are selling something that can be done profitably. For example, if you are selling video processing as a service, you absolutely need to validate that you can do this at scale in a profitable manner.

I also don't agree with the premise that speed of development and scalable systems are always in contention. After a certain point, scalable systems go hand and hand with your ability to execute quickly.

beefsack 1 day ago 4 replies      
Taking a completely blasé approach to efficiency is potentially as dangerous as becoming hyper-focused on it.

Not all businesses become roaring successes, and those who achieve moderate success often don't get the resources to fix deep-seated performance or architectural issues (either via engineering and/or throwing hardware at it). Eventually these technical woes can completely halt momentum, and I've seen it even drown some businesses who just aren't able to dig themselves out of the hole they find themselves in.

People always seem to be arguing for extremes, but the most sensible approach for most tends to be somewhere in the middle.

devduderino 2 days ago 7 replies      
I care because it usually goes like this: Product manager > "Niche SaaS app {x} will never need to support more than 10-20 users"

Two weeks after launch > "We have 10k users and counting, why didn't you architect this for scale?"

Always assume you underestimated the scope of the project.

hopfog 1 day ago 6 replies      
I'm in the unfortunate position where this question actually matters from day 1. I learnt the hard way a few days ago when I hit the bottleneck (about 50-100 concurrent users) and I'm not sure how to proceed.

It's a multiplayer drawing site built with Node.js/socket.io. I'm already on the biggest Heroku dyno my budget can allow, and it's too big a task to rewrite the back end to support load balancing (and I wouldn't know where to start). Bear in mind that this is a side project I'm not making any money off.

I had a lot of new features planned but now I've put development on hold. It's not fun to work on something you can't allow to get popular since it would kill it.

didibus 1 day ago 1 reply      
The reason scale isn't so important today is because most DBs can actually scale vertically to really high numbers. The tipping point is high enough that if you have this problem, you probably can also afford to fix it.

What matters though is performance and availability. No matter what scale you work at, you can't be slow, that will drive people away. You also can't be unavailable. This means that you might have to handle traffic spikes.

Depending on your offering, you probably also want to be secure and reliable. Losing customer data or leaking it will drive customers away too.

So, I'd mostly agree, in 2016, scale isn't a big problem. Better to focus on functionality, performance, security, reliability and availability. These things will impact all your customers, even when you only have one. They'll also be much harder to fix.

Where scale matters is at big companies. When you already have a lot of customers, your first version of any new feature or product must already work at scale. Amazon couldn't have launched a non-scalable Prime Now or Echo; Google can't launch a non-scalable chat service, etc.

dlwdlw 1 day ago 0 replies      
Flexibility vs efficiency. Agile vs high momentum.

As a rule of thumb, start-ups need to be more agile as they are mostly exploring new territory, trying to create new value or re-scope valueless things into valuable things.

Larger companies operate at a scale where minor efficiency improvements can mean millions of dollars and thus require more people to do the same thing, but better. Individualistic thinking on new directions to go is not needed nor appreciated.

Of course there are exceptions. The question boils down to whether or not the ladder is leaning against the right wall before you charge up it.

In rare circumstances you can do both. Either the problem is trivial, or the problem becomes trivial because you have a super expert. 10x programmers who habitually write efficient code without needing to think too much have more bandwidth for things like strategy and direction. The car they drive is more agile, accelerates faster, has a higher max speed, etc., but even this can't move mountains. The problem an individual can solve, no matter the level of genius, is still small in scope compared to the power of movements and collective action and intention.

The most powerful skill is to seed these movements and direct them.

Abstractly, this is what VCs look for in founders and also a reason why very smart and technical people feel short-changed that they are not appreciated for their 10x skills. (Making 500k instead of millions/billions) They may have 10x skills, but there are whole orders of magnitude they can be blind to.

addicted 1 day ago 1 reply      
Healthcare.gov was a site that failed and suffered due to scaling issues.

However, it was an anomaly: unlike the products this article's readers would be building, that site had an immediate audience of millions of users from the get-go.

Also, the fact that it took only a few weeks to rewrite it to handle the load, at which point it became extremely successful, strengthens the original article's point.

By the time scalability becomes a problem, you will have enough resources to tackle the scalability problem.

Jach 1 day ago 0 replies      
No one else bothered by "End-to-end tracking of customers" as the primary concern? Ok then.

On the subject of scaling, I think it's good to have an idea in your head about a path to scalability. One server, using PHP and MySQL? Ok. Just be aware you might have to load balance either or both the server and DB in the future, and that's assuming you've gotten the low hanging fruit of making them faster on their own. But as this thread's top comment illustrates, learning that stuff on the fly isn't too hard. So maybe it's better to make sure you're going with technology you sort of know has had big successes elsewhere (like Java, PHP, or MySQL) and even if you're not quite sure how you might scale it beyond the defaults you know others have solved that problem and you can learn later if/when needed.

tyingq 2 days ago 3 replies      
He makes the valid point that performance for each individual user, like page load time, does matter. Just that building for an audience size you don't yet have is mostly wasted time.

Seems reasonable. I wonder, though, if PHP feels like an anchor to the average Facebook developer. I realize they architected around it, but it must have some effect on recruiting, retention, etc. I use PHP myself, and don't hate it, but the stigma is there.

sbuttgereit 1 day ago 0 replies      
The overall premise of the blog is exactly correct; though I would say some areas you probably need to consider more than others.

Spending a lot of time figuring out what exact microservice/sharding/etc strategy you need to serve a zillion visits a day and building it before you've even got customer/visitor one is overkill out of the gate. But that shouldn't mean you shouldn't think about how you'll scale over the short term or medium term at all.

When I approach scaling, I'll tend to spend much more time on the data retention strategy than anything else: databases (or other stores), being stateful, means that it's a harder problem to deal with later than earlier as compared to the stateless parts of the system. Even so, I'm typically not developing the data services for the Unicorn I wish the client will become, I'm just putting a lot more thought into optimizing the data services I am building so it won't hit the breaking point as early as it might if I were designing for functionality alone. I do expect there to be a breaking point and a need to change direction at some point in these early stage designs. But in that short to medium term period, the simpler designs are regularly easier to maintain than the fully "scalable" approaches that might be tried otherwise, and rarely do those companies ever need anything more.

uptownfunk 2 days ago 1 reply      
I like the overall idea here. Focus on building something quality first then worry about scaling later. Most servers can handle a decent amount of traffic. Seems like common sense to me. I guess some people can get too hung up on engineering to make their site scale before actually deploying or innovating on the product. Wonder if people have encountered this in the workplace before?
xyzzy4 2 days ago 2 replies      
Ok, but please don't do things like nested array searches with bad runtime when you can use hashmaps instead. I hate seeing code or using programs that are extremely unoptimized.
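As a minimal illustration of the point (Python here; the data is invented), replacing a nested linear search with a hash-based lookup turns O(n*m) work into roughly O(n + m):

```python
# Which ids in `wanted` also appear in `records`? Two equivalent answers
# with very different runtimes.

def slow_matches(records, wanted):
    # O(n * m): `w in records` is a linear scan of the whole list each time
    return [w for w in wanted if w in records]

def fast_matches(records, wanted):
    # O(n + m): build a hash set once; each membership test is O(1) on average
    record_set = set(records)
    return [w for w in wanted if w in record_set]

records = list(range(10_000))
wanted = [5, 9_999, 20_000]
assert slow_matches(records, wanted) == fast_matches(records, wanted) == [5, 9_999]
```

Both return the same result; only the second stays fast as the inputs grow.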
nomercy400 1 day ago 0 replies      
Once worked at a startup where we expected 'some activity' in our webshop at product launch and didn't think about scaling for it. Well, 'some activity' turned out to be 450 Mbit/s for five hours, which our unscalable application/webshop didn't handle very well. It became overloaded in the first minute, and it took us more than an hour to get remote access again. It's one of those things we did better for our next big event (major sharding; basically we replicated the application 32 times on the largest VM instance we could get. It was needed, and it survived).
the_arun 1 day ago 0 replies      
I agree with this article, but only for launching new products. If you already have a product serving millions of customers, you'd better worry about scale whenever you change anything.
iveqy 1 day ago 0 replies      
Having worked on an app that we just threw more hardware at, to the point where three months of optimization work lowered the Azure subscription cost by my whole annual salary:

I believe performance does matter. We were a 4-person team and could have added a fifth if we had a cheaper design.

mannykannot 1 day ago 0 replies      
There is a similar argument with regard to making code reusable. I have seen inordinately complex code come from a desire to make it reusable, even if the prospects for it being reused were slim to nonexistent.
janwillemb 1 day ago 0 replies      
In general: don't fix a non-existing problem. You don't know beforehand what the problems of the future look like. Fancy technology X of today is technical debt in 10 years. So do invest in solving technical debt along the way in products you keep.
shoefly 1 day ago 0 replies      
Evolution is a beautiful thing.

I once worked for a monolith who decided to invent a new way of programming. They built something massive and ready to scale even larger. And then they discovered no one wanted the product.

andromeda__ 1 day ago 0 replies      
Fundamentally disagree with the ethos of this article. What about ambition? What happened to that?

> You won't make the cover of Time Magazine and you won't be ordering a private jet but that Ferrari is a definite possibility, if that's the thing you are hurting for. (college education for your kids is probably a better idea ;) ).

I'd like to be on the cover of Fortune, or as Russ Hanneman might say, "I wanna make a fuckton of money all at once".

I don't see any problem with being ambitious or wanting that private jet.

shanecleveland 1 day ago 0 replies      
Came across a service last week with a free trial and paid plan. How to upgrade? Contact by email! Why spend time and resources on a payment process if you don't have any paying customers yet?

Obviously not right for everyone, and I'm not saying it doesn't have its own challenges, but the core product deserves the most attention early on.

amelius 1 day ago 0 replies      
A better title would be: scalability is a luxury problem.
seajones 1 day ago 0 replies      
I do agree, but being at the "we need to scale up asap" stage right now makes it harder to. There's a balance to be struck. Maybe an approach of "who cares" at the POC and MVP stages, then keeping scale more and more in mind at each stage after that, would be best.
tianlins 1 day ago 0 replies      
It really depends on the type of growth. Organic growth is driven by the quality of the product, so scaling issues come later. But most venture-backed startup services, such as O2O, need to quickly dominate the market by throwing cash at user acquisition, so scaling is an issue from day one.
ninjakeyboard 1 day ago 0 replies      
I agree BUT it's not that hard to ensure your app scales. It's more about using appropriate tools for the job.
innocentoldguy 1 day ago 0 replies      
I agree with this article; however, there are considerations that can be made early on, to ensure an easy path for future scalability, that don't waste any time or money during the project's nascency. For example, if I know I want my app to scale at some point in the future, I may opt to build it in Elixir, or I may choose to use a Riak cluster rather than MySQL.
namanyayg 1 day ago 0 replies      
Can we have (2011) in the title?
partycoder 1 day ago 0 replies      
While a point the article tries to make (fix your leaky funnel before acquiring users) is true, I disagree with the article. If your application is converting well, scalability problems are not acceptable.

I have seen applications that convert very well but were limited by scalability problems. That meant that the business had to hold off on marketing and user acquisition, missed its financial targets, and that cascaded into breached contracts. The phrase that nobody wants to hear in that situation is "who cares about scalability".

Now, if you did not have a lot of problems scaling in your particular case, that just means it was not an obstacle for you. e.g: you had good intuition around performance/scalability, or the problem was coincidentally a good fit for your technological choices.

Unfortunately not everyone has a good intuition about scalability, not everyone is risk averse and not everyone is good at picking a good technology for their use case. So I disagree with this article in the sense that it is not in the best interest of a random reader to not care about scalability.

debt 1 day ago 0 replies      
i concur. it's fun to dream, but sadly, statistically most of us will never have to worry about scaling! so save yourself the energy and don't worry about it.
cabaalis 2 days ago 7 replies      
I liked the article and agree with its premise. But as a side question, why do developers use so many parenthetical expressions?

Those ideas (like this one, which happens to add nothing) are often either throwaway statements (like this one) or are by themselves complete thoughts that should be a separate sentence. (I see this so often in posts written by devs.)

A girl was found living among monkeys in an Indian forest washingtonpost.com
360 points by mrb  3 days ago   158 comments top 23
sandworm101 3 days ago 9 replies      
The story is too good. The girl, the monkeys defending her, the policeman ... all Disney-level stuff, but where are the non-Disney facts? A real story always has dark sides. This one is too perfect. I'm not saying that it is all fake, rather that I don't think we are getting the entire story. I wouldn't be surprised if we eventually learn that this girl was only living with the monkeys for a very short while, and that her issues are more long-standing. Perhaps the truth is that she was a disabled girl found amongst monkeys and the story has been elaborated from those simple facts.

>>> "She behaves like an ape and screams loudly if doctors try to reach out to her."

Like an ape or like a monkey? She was raised by monkeys but acts like an ape? A lay person perhaps wouldn't know the difference but by now someone with knowledge would be on site. I have been around several disabled children. The screaming and fear of being looked at or touched is not uncommon. No mention of how she reacts to being clothed? I'm no expert on feral children but I would expect that after eight years of being naked one would not be happy about clothing and that would deserve some mention ... unless of course clothing is nothing new to her.

I want to see her feet, specifically her toes. If she really hasn't ever worn shoes then her toes will show it.


kumarm 3 days ago 4 replies      
Hope her integration into society is handled carefully. So far she has been treated like an animal in a zoo by humans too (check the photos of groups of people looking at her):



pmoriarty 3 days ago 2 replies      
This reminds me of the story of Kaspar Hauser[1] (which was made in to a movie by Werner Herzog[2]) and of the fascinating book Seeing Voices by Oliver Sacks.[3]

In his book, Sacks investigates various cases of children growing up without language, how they cope (or don't cope) with it, how they finally acquire language (if they do), and how differently they see the world in both the pre-linguistic and post-linguistic states. Hauser was one of the most famous cases of this sort, Helen Keller[4] was another.

Reading this book inspired me to learn sign language, which I expected to be radically different from spoken and written language, and more powerful in many ways, as you can physically describe things in ways that has little parallel to spoken and written languages.

[1] - https://en.wikipedia.org/wiki/Kaspar_hauser

[2] - https://en.wikipedia.org/wiki/The_Enigma_of_Kaspar_Hauser

[3] - https://www.amazon.com/Seeing-Voices-Oliver-Sacks/dp/0375704...

[4] - https://en.wikipedia.org/wiki/Helen_keller

jacquesm 3 days ago 2 replies      
The monkeys seem to have been doing a better job at parenting than the people here. Note how the text below one of the pictures says she's frightened of people and the picture right above it has a whole bunch of (all male cast) busybodies crowding into a little room with her in it.
DanielleMolloy 2 days ago 2 replies      
This is the darker (and probably more truthful) variant of the story: https://www.theguardian.com/world/2017/apr/08/indian-girl-fo...

" 'In India, people do not prefer a female child and she is mentally not sound,' DK Singh said. 'So all the more [evidence] she was left there.' "

malandrew 3 days ago 0 replies      
Would have been interesting to have Jane Goodall involved. She could have left the child integrated but used the circumstance to bridge the communication divide between us and other primates because this girl surely knows things we never will.
faitswulff 3 days ago 3 replies      
I read somewhere that reintegration with human society mostly fails for feral children. Is it really a rescue if she dies at a young age, alone?
narrator 3 days ago 0 replies      
Now the battle begins to shape her story such that it can be used to reconfirm one of a number of different competing narratives about man's relationship with nature, nature vs nurture, theories about language acquisition, the "critical period" and early childhood development. Did I miss any?
srean 1 day ago 0 replies      
Editors note:

 New information has been reported since publication of this story that raise significant doubts about the veracity of the initial accounts on which it was based. The story relied on reports by the Associated Press and the New Indian Express quoting local officials who came upon her, and a video interview with the physician who treated her. These versions of what happened to her are now being questioned by other officials quoted in the Guardian and the Hindustan Times. While the girl appears to have been abandoned near the forest in question, according to these new reports, these officials do not believe she had been living among monkeys. The original headline has been changed, and you can read about the new developments here.
Whoah! An editor cautioning against sensationalism; I don't get to see that often.

dmix 3 days ago 0 replies      
Anyone know what kind of monkeys they were? I can't find any mention of it in this article or the original referenced source.
baron816 3 days ago 1 reply      
It's possible she could end up like this unfortunately: https://en.wikipedia.org/wiki/Genie_(feral_child)
throw2016 3 days ago 0 replies      
The story, if true, is discomforting; the mind ponders, and it does not completely add up.

We know the 'facts' but we also don't. This is exactly the kind of story that needs fact checking, but for that you need experienced people on the ground, and confirmation will take time, which the attention span of the news cycle will not allow.

The worst is turning it into some kind of circus. Hope that now with the global attention the Indian authorities will immediately retrieve her from the current facilities with people clearly not trained for this, and get her the kind of specialized care and sensitivity she needs.

popol12 3 days ago 2 replies      
How ethical is it to force her to leave the monkeys to become a "normal" human ?
achow 1 day ago 0 replies      
Mowgli girl found in January, cop says was clothed, no monkeys.

"There were no monkeys. She was not naked, and she wasn't using her hands to walk. I don't know how these stories are being spread."


smdz 2 days ago 2 replies      
The first expression I had was: What rights do we humans have to take her back from her family (monkeys in this case) and her home (the forest)? Just because she is our kind, should we impose our culture, our values, our ways (and our governments) on her?

But then - this feels more like a creative story. From the videos it looks like she might have been in the forest only for some time and needs rehab, but I am no expert here.

zaroth 2 days ago 0 replies      
These types of junk-news stories seem to make their rounds on the Internet for several weeks before finally evaporating into the ether.

What's interesting is that in the past they would seem to manage to stay off the HN front page.

Now it seems like I see these stories start circulating on Outbrain or the other click bait networks and I think, well, that'll be on HN in a week or so!

These stories are usually large part fake news, or reality tweaked or skewed with some angle to make it almost irresistible to read about. I personally have no use for these types of stories on HN but certainly understand they are created with a very compelling hook to want to share them.

slitaz 2 days ago 1 reply      
I feel it is a badly-written article.

If the girl managed to survive for so many years, she should have been left with the troop of primates and observed. This sudden change will probably be worse than any other, less brutal, change in her environment.

abrkn 2 days ago 0 replies      
An observation that adds nothing to the story/discussion: DK Singh. Donkey Kong.
mythrwy 2 days ago 0 replies      
It's finally happened.

Washington Post has completed the transition into a full blown supermarket tabloid.

johnb777 3 days ago 1 reply      
kazinator 2 days ago 1 reply      
> Numerous stories of feral children ...

"Feral children?" How amusing; is that an actual phrase?

It evokes a domesticated species of rug-rat, bred in the wild.

bingomad123 3 days ago 0 replies      
It is common in India for family members to put their autistic/badly born kids into a cage and display them in a circus. I remember a family showing three of their kids in a circus as "animals" just because the babies were autistic and had tail-like features.
aaron695 2 days ago 1 reply      
Is no one else here disturbed that an intellectually handicapped girl who was abandoned by the system has been turned into a dancing monkey for HN's amusement?

Surely the discussion here should be more about what a horrific system exists in parts of India that handicapped people are turned into stories.

Do I really need to spell it out it's an abandoned handicapped girl found near monkeys????

The doctor says when she was brought in she was near starving (video)? Were the monkeys looking after her or not?

This is a common fairy tale, seriously people, what is wrong with you that you can't see the real story here. It's about poverty, people not dealing with mental illness and broken systems???

The fact doctors even allowed her to be filmed for your amusement shows they are not very well trained.

Fact Check now available in Google Search and News blog.google
299 points by fouadmatin  3 days ago   249 comments top 53
jawns 3 days ago 13 replies      
I'm a former journalist, and one of the mistakes I often see people make is to either give too much or not enough credence to whether the facts in a news story (or op-ed) are true.

Obviously, if you disregard objective facts because they defy your assumptions or hurt your argument, you're deluding yourself.

But an argument that uses objectively true and verifiable facts may nevertheless be invalid (i.e. it's possible that the premises might be true but the conclusion false). Similarly, a news story might be entirely factual but still biased. And in software terms, your unit tests might be fine, but your integration tests still fail.

So here's what I tell people:

Fact checking is like spell check. You know what's great about spell check? It can tell me that I've misspeled two words in this sentance. But it will knot alert me too homophones. And even if my spell checker also checks grammar, I might construct a sentence that is entirely grammatical but lets the bathtub build my dark tonsils rapidly, and it will appear error-free.

Similarly, you can write an article in which all of the factual assertions are true but irrelevant to the point at hand. Or you can write an article in which the facts are true, but they're cherry-picked to support a particular bias. And some assertions are particularly hard to fact-check because even the means of verifying them is disputed.

So while fact checking can be useful, it can also be misused, and we need to keep in mind its limitations.

In the end, what will serve you best is not some fact checking website, but the ability to read critically, think critically, factor in potential bias, and scrutinize the tickled wombat's postage.

endymi0n 3 days ago 6 replies      
The problems aren't facts. The problems are what completely distorted pictures of reality you can implicitly paint with completely solid and true facts.

If 45 states that "the National Debt in my first month went down by $12 billion vs a $200 billion increase in Obama first mo.", that's absolutely and objectively true, except that Obama inherited the financial meltdown of the Bush era while Trump inherited years of hard financial consolidation (and any legislation has a lag of at least a year before it trickles down into any kind of reporting at government scale).

Fact-checking won't change a thing about spin-doctoring. At least not in the positive sense.

pawn 3 days ago 3 replies      
I think this has huge potential for abuse. Let's say politifact or snopes or both happen to be biased. Let's say they both lean left or both lean right. Now an entire side of the aisle will always be presented by Google as false. I know that's how most people perceive it anyway, but how's it going to look for Google when they're taking a side? Also, I have to wonder whether this will flag things as false until one of those other sites confirms it, or does it default to neutral?
provost 3 days ago 2 replies      
I want to think about this both optimistically and pessimistically.

It's a great start and I hope it leads to improvement, but this has the same psychological effect as reading a click-bait headline (fake news in itself) unless readers dive deeper. And just as with Wikipedia, the "fact check" sites could be gamed or contain inaccurate information themselves. Users never ask about the primary sources; they instead just read the headline at face value.

My pessimistic expectation is that this inevitably will result in something like:

Chocolate is good for you. - Fact Check: Mostly True

Chocolate is bad for you. - Fact Check: Mostly True

Edit: Words

sergiotapia 3 days ago 3 replies      
Snopes and Politifact are not fact-checking websites.

>Snopes' main political fact-checker is a writer named Kim Lacapria. Before writing for Snopes, Lacapria wrote for Inquisitr, a blog that oddly enough is known for publishing fake quotes and even downright hoaxes as much as anything else.

>While at Inquisitr, the future fact-checker consistently displayed clear partisanship. She described herself as openly left-leaning and a liberal. She trashed the Tea Party as "teahadists." She called Bill Clinton "one of our greatest presidents."


I think fact checking should be non-partisan, don't you?

allemagne 3 days ago 1 reply      
I think that politifact, snopes, and most fact-checking websites I'm aware of are great and everyone should use them as sources of reason and skepticism in a larger sea of information and misinformation.

But they are not authorities on the truth.

Google is not qualified to decide who is an authoritative decider of truth. But as the de facto gateway to the internet, it really looks like they are now doing exactly that. I am deeply uncomfortable with this.

throwaway71958 3 days ago 0 replies      
This is incomplete: they need to also include the political affiliations of owners of "fact check" sites, and perhaps also FEC disclosure for donations above threshold, and sources of financial support. I.e. this site comes from PolitiFact, but its owner is a liberal and he took a bunch of money from Pierre Omidyar who also donated heavily to the Clinton Global Initiative. Puts the fact checks in a more "factual" light, IMO. Fact check on the fact check: http://www.politifact.com/truth-o-meter/article/2016/jan/14/...

Things have gotten hyper-partisan to the extreme in the past year or so, so you sometimes see things that are factually true rated as "mostly false" if they do not align with the narrative of the (typically liberal) owners.

pcmonk 3 days ago 1 reply      
What I wish they would do is use their fancy AI to put in a link to the original source. Tracking down original sources is extremely tedious, but it generally gives you the clearest idea of what's actually going on.
artursapek 3 days ago 0 replies      
I see Google having good intentions here, but I fall back to my previous sentiment on trying to assign "true/false" for all political stories and discussions.


tabeth 3 days ago 1 reply      
Fact checking is irrelevant. What's necessary is education. Just as spellcheck will not let you magically compose elegant prose, fact check is not going to prevent people from being misled. Notice how both of these "problems" have the same solution. In fact, fact checking can be counterproductive, as people now sprinkle their articles with irrelevant facts.

Education is the solution to all social problems.

DanBC 3 days ago 2 replies      
I'd be interested to see how it copes with UK newspapers.

sweetishfish 3 days ago 2 replies      
Who fact checks the fact checkers?
scottmsul 3 days ago 2 replies      
A better idea would be to look for disagreements. Given a news article or claim, are there any sources out there which disagree? Then the user could browse both claims and decide for himself.
ksk 3 days ago 1 reply      
Are we in the twilight zone? An advertising company fact checking political discourse? Would google apply the same fact check to their own company?

"Does Google dodge taxes"

civilian 3 days ago 0 replies      
So I mean, this is just a metadata tag. Anyone can make one. I'm looking forward to Breitbart & HuffPo abusing this...

I think it would be interesting to collect a list of websites that disagree on a claim review.
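For concreteness, the metadata tag in question appears to be schema.org's ClaimReview type, which publishers embed as JSON-LD. Here is a sketch of such a tag built as a Python dict; the property names follow schema.org, but the claim, rating, URL, and organization name are all invented:

```python
import json

# Hypothetical ClaimReview markup; property names follow schema.org,
# everything else is an invented example.
claim_review = {
    "@context": "http://schema.org",
    "@type": "ClaimReview",
    "url": "http://example.com/fact-checks/chocolate",  # hypothetical fact-check page
    "claimReviewed": "Chocolate is good for you",       # the claim being checked
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 3,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly True",                 # the label readers see
    },
}

# This JSON-LD string is what a publisher would drop into a <script> tag.
print(json.dumps(claim_review, indent=2))
```

Nothing in the markup itself stops a site from emitting any rating it likes, which is exactly the abuse surface being discussed; presumably Google decides whose tags it actually surfaces.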

pdimitar 1 day ago 0 replies      
"There's only one truth and that is Google".

Haha, no. Keep trying though.

Plus, as journalists in this thread have said, you might stick to the facts 100% (and I doubt Google will resist the temptation to abuse this in the future, but let's leave that aside for now), yet your conclusion or subliminal message at the end might be entirely untrue and misguided.

Sorry, Google. You need to wait for the planet's collective IQ to drop by several tens of points first. It's not your time to dominate the news yet.

gthtjtkt 3 days ago 0 replies      
Snopes and Politifact are abject failures. Nothing but glorified bloggers who have declared themselves the arbiters of truth.

Even Rachel Maddow has called them out on numerous occasions, and she was rooting for the same candidate as them: http://www.msnbc.com/rachel-maddow-show/watch/politifact-fai...

smsm42 3 days ago 0 replies      
Reading the article, it looks like news publishers can now claim that their articles were fact-checked, or that a certain article is a fact check of another one, using special markup. They also say the fact checks should adhere to certain guidelines, but I don't see how it would be possible for them to enforce any of those guidelines. It looks like just a self-labelling feature, with all the abuse potential inherent in that.
throw2016 3 days ago 0 replies      
'Fact checking' should be limited to blatantly false news items fabricated and posted for online ad clicks, i.e. 'Obama to move to Canada to help Trudeau run country' or 'Trump applies for UK citizenship to free UK citizens from Brussels despots'. These should be relatively easy to identify and classify.

There is a wide gap between the fabrications above and news and journalism as we know it, full of opinion, bias, agendas, propaganda and maybe some facts twisted to suit a narrative.

The latter takes human-level AI to sift through, and even then detecting bias, leanings or manipulation depends on one's background, world view, specialization, knowledge level, understanding of how the media works, and a well-informed big-picture sense of the state of the world.

This is impossible to classify for bias, falsehood or manipulation and will need readers to use their judgment. Trying to 'control' this is like trying to control news, favouring media aligned to your world view and discrediting those whose views you disagree with. It is for all purposes propaganda as we understand the term. Calling it fact checking is sophistry.

forgotpwtomain 3 days ago 1 reply      
This is a bad slippery slope: it suggests that a 'little sponsored banner' (which Google chooses) can waive the need for diligent thought.
orangepenguin 3 days ago 0 replies      
There is obviously a lot of debate on whether or not fact checking is accurate and useful. I think simply presenting a fact check will help people think more critically about headlines they see every day. Like "Mythbusters Science". It's not perfect, but it helps people to think.

Relevant: https://xkcd.com/397/

josefresco 3 days ago 0 replies      
What if I told you (cue the Morpheus meme), that people consuming the "fake news" don't care that it's fake? It's called confirmation bias and winning. Education isn't going to solve this issue, you can't forcibly educate people nor can you change their core "values" and their determination to be "right".

The only "education" that I can envision working is quantifying the real-world-impact of their votes on the personal level. Ex: Your health insurance was cancelled? The representative you voted for caused that. This unfortunately is normally executed with a partisan goal, however should be applied as a public service to all Americans.

oldgun 3 days ago 0 replies      
Besides the political debates, does anyone else think this 'ClaimReview' schema put to use by Google is one step towards the application of the Semantic Web? There might be something more than just a 'new app by Google' here.
debt 3 days ago 0 replies      
this is just gonna create a pavlovian response akin to "ah okay this is fact-checked i'll read" which'll just compound the problem. it presumes that google's fact-checking algorithms and methodology are sound.
okreallywtf 3 days ago 0 replies      
I'm amazed at how much cynicism I'm seeing here about this. People just keep repeating what can be boiled down to the same premise: complete objective truth basically doesn't exist. Truth is messy, tricky, subjective business. This is not new; this is just how the world is. Truth and understanding are best-effort and always have been, so why is a tool that attempts to combat some of the most egregious falsehoods even remotely a bad thing? Nobody should claim that it's bulletproof, but I'm not seeing anyone really do that. The problem is that some of us never deal in absolutes; we see nuance in everything (climate science, economics, political science), but there are others who do deal in absolutes and make a killing doing so. Sitting around having the same debate over and over about facts and truth doesn't do anything to tackle the problem.

My rule of thumb is that generally there is safety in numbers. Don't trust any single source, and don't trust something that doesn't have a chain of reasoning behind it. I trust all kinds of scientific statements that I don't have the qualifications or time to vet myself, but we have to do our best, and that often means doing a meta-analysis of how a conclusion was reached and how many other people/groups (who themselves have qualifications and links to other entities with similar qualifications) the statements are linked to.

Fake news isn't 100 levels deep; it's usually one level with no real supporting information. When people (like Trump) categorically denounce someone else's statement they often provide no real information of their own. Similarly, when refuting a fact check, most people don't dig into it and refute something in its chain of reasoning; they just say "well, that is just not true!" and leave it at that.

We don't need to fundamentally fix the nature of truth, but we do need to be able to combat the worst cases of misinformation, and any tool that helps do that is great. Continuing to have the same philosophical debate about truth is fine from an academic standpoint, but from a practical standpoint it is sometimes not helpful. I feel similarly about climate change: it's great to acknowledge nuance, but what good is that if we're trending towards pogroms and a totalitarian dictatorship (to be hyperbolic, maybe)?

narrowrail 3 days ago 0 replies      
Who will fact check the fact checkers?

Well, perhaps these trusted sources should implement a system similar to Quora/StackExchange but for opposing arguments?

Lots of comments call into question the biases of sites like Snopes/Politifact/etc., and allowing some sort of adversarial response would help counter claims about 'leftists wanting to control our minds.'

Maybe it's just a widget at the bottom of a fact check post leading to a StackExchange'd subdomain. A wiki or subreddit could work as well. Anyone looking for a side project?

balozi 3 days ago 0 replies      
One likely outcome of this is that Google Search and News will now be perceived as partisan by the hoi polloi. Same reason the old media gatekeepers fell by the wayside.
ronjouch 3 days ago 0 replies      
> https://blog.google/products/search/fact-check-now-available...

Didn't know Google has its own top-level domain. Previous HN discussion: https://news.ycombinator.com/item?id=12609551

pklausler 3 days ago 1 reply      
I really wish that major legitimate institutions of journalism (i.e., the ones that require multiple independent sources, publish corrections and retractions, &c.) would stop pussyfooting around and use nice simple accurate words like "lies" when they're reporting on somebody who's blatantly lying. False equivalency and cowardice are going to get us all killed.
pcl 3 days ago 0 replies      
The blog title is "Fact Check now available in Google Search and News around the world". I think that the extra bit at the end is worthy of inclusion, as I expect this to become a point of contention over the years.

I would not be surprised if different governments take issue with Google adding any sort of editorial commentary, even if it's algorithmically determined etc.

return0 3 days ago 0 replies      
It's a witch hunt. Science (rather, life sciences) has a similar problem. There are just enough (statistically significant) facts to push many agendas. Peer review weeds out some stuff, but that doesn't stop a lot of wrong conclusions being pushed to the public.

Maybe a better solution is adversarial opinionated journalism, rather than this proposed fact-ism.

MrZongle2 3 days ago 1 reply      
So what takes place when the inevitable happens, and an employee decides that an existing "fact check" (conducted by a third party, Google hastens to add) is philosophically inconvenient and thus removes it?

Also, FTA: "Only publishers that are algorithmically determined to be an authoritative source of information will qualify for inclusion."

What's the algorithm? Who wrote it?

dragonwriter 3 days ago 0 replies      
Original title is "Fact Check now in Google Search and News"; the different capitalization vs. the current HN headline ("Fact Check Now...") is significant: the new feature "Fact Check" is now available in Google Search and News, rather than a feature called "Fact Check Now" being discussed in those services.
losteverything 3 days ago 1 reply      
Billy Jack was rated M.

This is just another new rating system.

As long as they don't prevent me from reading false things, I can live with it.

Keep it my choice.

mark_l_watson 3 days ago 0 replies      
I don't like this, at all. People need to rely on their own reasoning skills and critical judgement and not let centralized authorities have a large effect on what people can read. I like systems to be decentralized and this seems to be the opposite.
takeda 3 days ago 0 replies      
I know a person who eats those "alternative facts" like candy. When I tried to prove one of them wrong, I pulled out a website to do a fact check and his response was: "you trust Snopes?" so I have doubts this will help much, but I would like to be wrong.
coryfklein 3 days ago 0 replies      
Pretty neat! Unfortunately doesn't help when searching for "obama wiretap trump tower".


Mithaldu 3 days ago 0 replies      
As is very often the case when Google says "everywhere", they don't remotely mean everywhere and should instead say "in the USA". My country's edition of Google News has no fact check at all.
westurner 3 days ago 0 replies      
So, publishers can voluntarily add https://schema.org/ClaimReview markup as RDFa, JSON-LD, or Microdata.
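For anyone curious what that markup looks like in practice, here is a minimal sketch of the JSON-LD variant, built from Python for easy validation. Only the property names come from schema.org's ClaimReview type; every value below (the claim, the publisher, the rating scale) is invented for illustration.

```python
import json

# A hypothetical fact-check record; all values are made up for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2017-04-07",
    "url": "https://factcheck.example.com/moon-cheese",
    "claimReviewed": "The moon is made of cheese",
    "author": {"@type": "Organization", "name": "Example Fact Checks"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",
    },
}

# A publisher would embed this in a <script type="application/ld+json"> tag.
markup = json.dumps(claim_review, indent=2)
print(markup)
```

Since this is voluntary self-labelling, nothing in the markup itself proves the review was done well, which is exactly the abuse potential other comments here point out.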
DrScump 3 days ago 0 replies      
It's interesting timing that just today, for the first time in a couple of weeks, my Facebook feed has fake news clickbait ads again.

Unless both Kevin Spacey and Burt Reynolds are, in fact, dead. Again.

ArchReaper 3 days ago 2 replies      
Anyone have an alt link? 'blog.google' does not resolve for me.
thr0waway1239 3 days ago 0 replies      
Factual Unbiased Checks for Knowledge Upkeep by Google.
xster 3 days ago 0 replies      
The fact that this came from CFR/Hillary's State Department's Jigsaw is very troubling.
sova 3 days ago 0 replies      
Hurrah for Google! Now if only Facebook and SocialNetworkGiants(tm) would follow suit!
retox 3 days ago 1 reply      
I don't trust Google to tell me the sky is blue.
codydh 3 days ago 1 reply      
I tried a slew of recent statements that are objectively false but that a certain politician in the United States has tried to say are true. Google returned fact checks for exactly 0 of the queries I tried.
keebEz 3 days ago 1 reply      
A fact has no truth value. Truth only comes from reason, and reason only exists in each person's head. This is reducing the demand for reason, and thus destroying truth.
ffef 3 days ago 0 replies      
A great step in the right direction, and kudos for using Schema to help battle 'fake news'.
gokusaaaan 3 days ago 0 replies      
Who fact-checks the fact checkers?
SJacPhoto 3 days ago 0 replies      
And who controls the fact-check facts?
isaac_is_goat 3 days ago 0 replies      
Snopes and Politifact? Really? smh
huula 3 days ago 0 replies      
snowpanda 3 days ago 2 replies      
Snopes and Politifact, they can't be serious. Not that I expected them to pick a neutral source, nor am I surprised that Silicon Valley's Google picked 2 leftist "fact" sources. This is a stupid idea, everyone has a bias. This isn't to help people, this is to influence how people see things.
Deis to Join Microsoft deis.com
394 points by gabrtv  13 hours ago   90 comments top 18
dankohn1 12 hours ago 4 replies      
Congrats to Gabe and the whole Deis team on the acquisition.

For folks not familiar with Helm, it's basically apt-get for Kubernetes, but with the ability to deploy complex multi-tier systems. It has now graduated out of the Kubernetes incubator.

And their Workflow product (also open source), is basically the smallest piece of software that lets you run Heroku buildpacks on top of Kubernetes. So, you can get a 12-factor PaaS workflow, and still have the full Kubernetes API underneath if and when you need it.

Update: And I left out my all-time favorite piece of marketing collateral, their Children's Illustrated Guide to Kubernetes (available both as children's book and video): https://deis.com/blog/2016/kubernetes-illustrated-guide/

(Disclosure: I'm the executive director of CNCF and Gabe has been a super valuable member.)

rbanffy 4 hours ago 2 replies      
Every time someone is acquired by Microsoft I can't avoid feeling sad for them.

It's true they'll get a decent amount of money, that, from now on, they have infinitely deep pockets, that they'll have some of the best keyboards and mice, but it's also true their wiki will end up in Sharepoint and their e-mails in Exchange.

ridruejo 12 hours ago 0 replies      
The industry is consolidating around the Kubernetes ecosystem. This acquisition is an example of many others that will follow as the major players want to build up their offerings and expertise.
jasonmccay 17 minutes ago 0 replies      
If Deis was that valuable, one could assume that EY agreed to this arrangement because they are low on cash and needed the money.

How often is a company, in effect, acquired twice?

RRRA 4 hours ago 3 replies      
I'm really curious how people administer their K8S clusters for installation, upgrades, etc.

I'm very familiar with Docker, which we've been using for over 2 years. But now we're trying to get k8s running, with either kargo, kubeadm, deb packages, etc.: they all failed with different bugs on different sets of clouds / settings. (Trying to stick to running it on Ubuntu xenial.)

Not sure if it's because 1.6.* just came out of the oven when I started...?

Thanks to Minikube, I understand how powerful k8s can be, and actually find kubectl quite simple to use, but I'm confused by how fragile and complex installation and setup seems to be. I'm unsure how someone is supposed to maintain this system considering how (overly?) modular it is and the bugs I've encountered. Knowing that docker has a LOT of bugs, and k8s builds on top of it, I'm a bit scared. And there is no clean documentation on how to install it, with sections for all your choices, in a generic/agnostic way (deb+rpm distros, cloud integration or simple abstract VMs, ...)

What is your workflow? :)

edude03 13 hours ago 6 replies      
Wait what? Didn't the Deis team just join EngineYard last year? Furthermore, why would Microsoft want Deis? They haven't really shown an interest in Kubernetes thus far.
alpb 13 hours ago 0 replies      
Congratulations Gabriel! Having worked in the containers space in Microsoft Azure before, my opinion is that this is a great move by Microsoft. In past years, I've seen the company struggle to find great talent in the OSS/Linux stack. Simply put, there are a lot of areas Microsoft could expand into, but there is not enough talent. Deis will definitely bring a ton of expertise in open source software and community to Microsoft. Now that Kubernetes is a big part of Azure's container service, Deis brings a lot of fresh blood to Microsoft. I hope it works out great for both companies (and the open source Kubernetes community).
pvsnp 13 hours ago 6 replies      
I'm somewhat surprised. Why would Microsoft put weight into Deis vs. using something like Kubernetes or Mesos? I haven't kept up with Deis's growth and I'm obviously very happy for them, but I'm curious what the gain is. Based on HN posts and other devops forums, Kubernetes seems to have gained a lot of momentum recently.
luhn 12 hours ago 1 reply      
Deis has changed a lot since I last looked at them. They've dropped their original PaaS and developed an ecosystem on top of Kubernetes. Microsoft has been showing a lot of interest in Docker, so can see why this acquisition would make sense for them.
devy 13 hours ago 1 reply      
Does this mean a commitment of Microsoft on k8s ecosystems or simply a talent acquisition or both?
briandear 6 hours ago 0 replies      
I really hope Microsoft doesn't hurt this. For example, there are great docs for AWS and Google, but given the way they've ruined Skype, I really hope they don't turn this into some kind of Azure-focused system while dropping support for AWS, etc. Congrats to the awesome Deis team -- let's just hope that Microsoft doesn't just run it into the ground when it comes to non-Azure platforms.
briandear 6 hours ago 0 replies      
"Microsoft has a storied history of building tools and technologies that work for developers."

I'm not sure how I feel about that statement.

kayoone 8 hours ago 0 replies      
Love Deis, we have used V1 and V2 (with kubernetes) at my current job with success, but also had weird stability and reliability issues from time to time.
xena 12 hours ago 0 replies      
Congrats on the acquisition, I'm just sad I can't be there to congratulate you.
Sevrene 13 hours ago 0 replies      
What!? I never would have expected this, quite a surprise. I hope it works out for them.
ksikka 13 hours ago 0 replies      
The Deis team was extremely helpful over IRC when I was building on top of their PaaS. Great team and culture, Microsoft is lucky to have them.
AlexB138 12 hours ago 0 replies      
Well, that's definitely a way to buy yourself into the Kubernetes ecosystems. Congratulations to the Deis team!
WrtCdEvrydy 12 hours ago 0 replies      
Next week, Flynn to Join Apple.

Let the PaaS arms race begin.

The BEAM Book A Description of the Erlang RTS and the Virtual Machine BEAM github.com
330 points by weatherlight  3 days ago   14 comments top 9
rdtsc 3 days ago 0 replies      
There is even a separately implemented BEAM VM for running directly on Xen hypervisor:


An impressively done book on BEAM instruction sets:


There is even a handy dandy online instruction set completion search:


qaq 3 days ago 0 replies      
To anyone who wants to learn about the BEAM, I would highly recommend Hitchhiker's Tour of the BEAM by Robert Virding: https://www.youtube.com/watch?v=_Pwlvy3zz9M
qohen 3 days ago 1 reply      
There's a bit of back-story here, the gist of which is: the book initially was supposed to come out from O'Reilly and Associates, but periodically the release date would get pushed back by 3 months or so.

Then they cancelled it.

It was then picked up by Pragmatic Programmers, but they wound up cancelling it too.

In any case, Erik told me a couple of weeks ago at Erlang & Elixir Factory that he'd be looking to get it out himself online by -- or around, I forget -- summertime, so it's nice to see he did this now, even if it means I lose the betting pool (I kid).

tombert 3 days ago 1 reply      
This is incredibly cool. BEAM is one of the most fascinating bits of tech to me, particularly its garbage collector.

You've given me a bit of reading material for my train-ride for the next few days, so thank you!

defined 3 days ago 0 replies      
FWIW, there's a description of most of Erlang internal data structures here: https://edfine.io/blog/2016/06/28/erlang-data-representation...
jarrettch 3 days ago 0 replies      
This is amazing. I've started to learn Elixir recently, and as a result my interest in Erlang and BEAM has been piqued. I'm sure a lot of this will be over my head for now, but looking forward to digging in.
jfaucett 3 days ago 0 replies      
This is awesome. Unlike the JVM, it's really hard to find anything describing how the Erlang VM works in detail. I tried reading through the source a while back, but it was still hard to get a gist of what was going on because of the sheer size of the project.

This will be a very valuable resource. Thanks so much to Erik!

dkroy 3 days ago 0 replies      
Not to be confused with Apache Beam, which is looking to become a survivor in the big data space with its abstractions over whatever is the hottest new streaming or batch processing technology.
alexott 3 days ago 0 replies      
Thank you! Did you think about using gitbook service?
Best Practices for Applying Deep Learning to Novel Applications arxiv.org
290 points by mindcrime  2 days ago   17 comments top 6
AndrewKemendo 2 days ago 3 replies      
It's good general advice, but frankly I think it doesn't address some of the major pitfalls - namely the top one: personnel.

In the very beginning it's stated:

"In this report I assume you are (or have access to) a subject matter expert for your application."

In my experience this is where it goes off the rails for most of the crowd that she is addressing. Not because they don't have someone, but because who they have isn't really a "subject matter expert."

It's a muddy term anyway, especially in the field of Machine Learning. Excluding for a moment the hucksters and bald-faced liars, within ML there is WIDE variance in competence, domain specificity and application-specific capability within the field.

The biggest capability gap that I have encountered when working with fantastic ML folks is that the ones that understand the mechanisms/algorithms/approaches best, are actually pretty terrible at delivering production code. That's not for lack of capability, it's simply because the bulk of their time has been spent in research - so they approach things very differently than application focused engineers. This is extremely relevant in this case because this is an application specific paper.

There are a plethora of mine-fields in applications of ML, some of which are outlined here from a systems approach, but the majority of which are personnel issues in my experience, and "culture" issues - not to be confused with "culture fit" problems that exist elsewhere.

gwern 2 days ago 0 replies      
Looks like good advice to me. Like a more DL-focused version of "A Few Useful Things to Know about Machine Learning" http://www.datascienceassn.org/sites/default/files/A%20Few%2... , Domingos 2012
comicjk 2 days ago 0 replies      
> Lets say you want to improve on a complex process where the physics is highly approximated (i.e., a spherical cow situation); you have a choice to input the data into a deep network that will (hopefully) output the desired result or you can train the network to find the correction in the approximate result. The latter method will almost certainly outperform the former.

This aligns with my experience (computational chem PhD). When applying a strong, general-purpose mathematical patch to an existing model, use as much of the existing model as possible. Otherwise the patch will have a hard time fitting, and maybe be worse than what you started with. Philosophically, this also comports with my thinking (it's the modeling equivalent of Chesterton's Fence https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence).
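The "learn the correction, not the raw quantity" advice can be demonstrated even with a toy learner standing in for the deep network. In this invented example, the "physics model" has a smooth systematic bias; fitting the residual recovers the truth almost exactly, while fitting the raw quantity with the same weak learner does much worse:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
approx = np.sin(5 * x)           # stand-in for an approximate physics model
truth = approx + 0.3 * x + 0.1   # "reality" = model output + smooth bias

def linear_fit(x, y):
    # A deliberately weak "learner": an ordinary least-squares line
    A = np.hstack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Direct: learn truth from x.  Correction: learn (truth - approx), add it back.
err_direct = np.abs(linear_fit(x, truth) - truth).max()
err_correction = np.abs(approx + linear_fit(x, truth - approx) - truth).max()
```

The residual here happens to lie exactly in the learner's hypothesis space, which is the point: corrections to an already-decent model are typically smaller and smoother targets than the raw quantity itself.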

DrNuke 2 days ago 0 replies      
Physics is deterministic in states and always tends towards an equilibrium, so novel results from DL may still fit some continuum math model without being stable or even real. Domain expertise, on the other hand, helps prepare data for ML algos in such a way that results will come (or not) within the boundaries of reality and hopefully stability. I am trying both approaches for some materials science goals of mine and am curious to see what happens, now that powerful hardware is cheap enough on the cloud to put some ideas to work. Side point is all this was just impossible five years ago, so I am grateful and excited to have this opportunity.
bluetwo 2 days ago 0 replies      
Was kind of hoping for some examples of novel applications no one has thought of. :-)
lngnmn 1 day ago 0 replies      
Surprisingly good and sane paper, without all that hipster's bullshit.

The emphasis on the quality of the training data and, most importantly, on the evaluation and careful choice of heuristics on which to build the model, is what makes the paper sane.

There is no shortage of disconnected-from-reality models based on dogmas, while, it seems, there is an acute shortage of models properly reflecting particular aspects of reality.

Data and proper, reality-supported heuristics (domain knowledge) are the main factors of success. Technical details and particular frameworks are of the least importance.

This, BTW, is why it is almost impossible to compete with megacorps - they have the data (all your social networks) and they have the resources, including domain experts, without whom designing and evaluating a model is a hopeless task.

File Format Posters github.com
314 points by dcschelt  3 days ago   45 comments top 15
digikata 3 days ago 1 reply      
Reminds me of the MPEG-2 transport stream poster: http://in.tek.com/poster/mpeg-poster-dvb

If he runs out of file formats, he could move on to protocols...

barsonme 3 days ago 4 replies      
edit: I just noticed the author has a link to order prints from him/her, that's definitely the more polite option: http://www.redbubble.com/people/ange4771

It also seems to be less expensive than options like Office Depot, too.

Does anybody have any suggestions on how to get these printed as full-sized posters?

woliveirajr 3 days ago 0 replies      
> https://github.com/corkami/pics/blob/master/binary/CryptoMod...

This one is great. Nothing like using crypto so wrong that it becomes useless.

jwcrux 3 days ago 1 reply      
I'm a big fan of these posters! I even made something similar to show the format of the Tor consensus [0]

[0] http://jordan-wright.com/blog/images/blog/how_tor_works/cons...

chillingeffect 3 days ago 4 replies      
back before 2000, it really was important to know file formats. we didn't use libraries. we looked up the formats in books and implemented fresh code every time. I prided myself on having memorized most of the .wav header, enough that i didn't need a reference. Then, I learned .fig. Then, I worked on understanding .jpg.

Nowadays, with widespread APIs, the file formats' significance is almost irrelevant! In theory, only a single person in the world needs to know any file format. Everyone else can use a library they've written.

my how the world changes :)
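For the nostalgic (or curious), the canonical 44-byte PCM .wav header mentioned above really can be written from memory with a handful of struct calls. A sketch, assuming 16-bit mono PCM:

```python
import struct

def wav_header(n_samples: int, sample_rate: int = 44100) -> bytes:
    """Build the canonical 44-byte RIFF/WAVE header for 16-bit mono PCM."""
    data_size = n_samples * 2  # 2 bytes per 16-bit sample, 1 channel
    return (
        b"RIFF" + struct.pack("<I", 36 + data_size) + b"WAVE"
        + b"fmt " + struct.pack(
            "<IHHIIHH",
            16,               # fmt chunk size for plain PCM
            1,                # audio format: 1 = PCM
            1,                # channels
            sample_rate,      # samples per second
            sample_rate * 2,  # byte rate = rate * channels * bytes/sample
            2,                # block align = channels * bytes/sample
            16,               # bits per sample
        )
        + b"data" + struct.pack("<I", data_size)
    )

header = wav_header(44100)  # header for one second of mono audio
```

The actual PCM samples, packed little-endian, just follow the header byte for byte.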

jug 3 days ago 1 reply      
What! I always thought .SWF was for "ShockWave Flash", not Small Web Format. Ha, a bit late to learn though.
NuSkooler 3 days ago 0 replies      
This is excellent, thanks a lot for sharing!
westmeal 3 days ago 0 replies      
Thank you so much. I need to write a program that creates png files from arbitrary data so this will certainly come in handy!
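Since the parent mentions generating PNG files from arbitrary data: the format is simple enough that a minimal encoder fits in a few lines using only the standard library. A sketch, assuming 8-bit grayscale pixels and no filtering (filter byte 0 on every scanline):

```python
import struct
import zlib

def png_chunk(chunk_type: bytes, data: bytes) -> bytes:
    # Every PNG chunk: 4-byte big-endian length, 4-byte type, payload,
    # then a CRC-32 computed over type + payload
    return (struct.pack(">I", len(data)) + chunk_type + data
            + struct.pack(">I", zlib.crc32(chunk_type + data)))

def grayscale_png(rows: list) -> bytes:
    """Encode rows of 0-255 integers as an 8-bit grayscale PNG."""
    height, width = len(rows), len(rows[0])
    # IHDR payload: width, height, bit depth 8, color type 0 (grayscale),
    # compression 0, filter method 0, no interlacing
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    # Each scanline is prefixed with filter byte 0 ("None"), then the
    # whole image is zlib-compressed into a single IDAT chunk
    raw = b"".join(b"\x00" + bytes(row) for row in rows)
    return (b"\x89PNG\r\n\x1a\n"
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"IDAT", zlib.compress(raw))
            + png_chunk(b"IEND", b""))

png = grayscale_png([[0, 128, 255], [255, 128, 0]])  # a 3x2 test image
```

Write the returned bytes to a `.png` file and any viewer should open it; color images work the same way with a different color type and wider scanlines.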
dluan 3 days ago 0 replies      
It would be awesome to have a file poster of itself. For when one day we run out of electricity and hand-translate bits.
rinon 2 days ago 0 replies      
We have two of these prints up in our office. I highly recommend them, even if just as decoration.
40acres 3 days ago 2 replies      
Great stuff but the font is comical.
oever 3 days ago 0 replies      
Awesome! Where can I buy the book?
ardivekar 2 days ago 0 replies      
> gif.png

This made me chuckle.

anjc 3 days ago 0 replies      
Very cool
billdybas 3 days ago 0 replies      
Wow! These are pretty cool.
High prevalence of diabetes among people exposed to organophosphates in India biomedcentral.com
261 points by aethertap  1 day ago   94 comments top 14
yomly 1 day ago 9 replies      
This headline explains my general aversion to "chemicals". This, despite the fact that everything is a chemical and that we are all little chemical machines.

The human physiology is unfathomably complex, and the advent of synthetic chemistry has meant that we are now exposed to new molecules which have arisen tens or hundreds of thousands of years too early for our bodies to evolve to accommodate them. Our exposure to these chemicals is also incredibly opaque: even eating "clean" by eating fruit and veg exposes us to a multitude of chemicals that come along the pipeline, including fertilisers, pesticides and preservatives.

Nature is exquisitely sensitive to chemistry - I recall reading that natural systems have evolved to exploit and dispatch behaviour based on the isotopic composition of carbon-based molecules: naturally synthesised molecules also have a different isotopic profile to artificially synthesised molecules. For the record, Carbon-13 represents ~1% of the natural isotopic abundance.

If something as granular as the isotopic distribution of elements is important to physiological systems, how can we be so complacent as to constantly pile chemicals into every aspect of our lives?

Businesses will wantonly and irresponsibly use any method to increase their bottom lines and it falls to regulators to moderate this behaviour. As an example, I recall McDonald's doping their chip oil with a known toxic organic chemical to lower the rate of thermal decomposition of their oil. This is something they could as easily avoid by replacing their oil more often, but this is costly: they instead defer this cost onto our health by exposing us to unnecessarily dangerous chemicals.

In my opinion the FDA's (or indeed global regulators') thresholds for the use of chemicals is not stringent enough - humans are living longer, how do we know that prolonged exposure to any of these individual chemicals (let alone the cocktail of all of them) over a 50-100 year period are worth the risk?

For another anecdote of irresponsible chemical usage: the onset of lung cancer through smoking underwent a stepwise increase after the tobacco industry started using phosphate fertilisers to increase their crop yield. A side effect of the fertilisers was to enrich the soil in radium, which decays down to Polonium-210, an alpha source of Russian-assassination fame. Studies have been done characterising the sievert profile of tobacco leaves, highlighting the risk, but no action has been taken to make the tobacco industry mitigate this.

indogooner 1 day ago 0 replies      
From the conclusion: "Hence, rather than searching for other chemical alternatives, promotion and development of traditional self-sustainable, nature-based agricultural practices would be the right approach to feed this world."

Living off organic produce is not possible (at least in India as of now). The farmers do not earn much and have debts to pay off. The only way they know of to save crops is using subsidised fertilizers provided by the government. Over the years, indiscriminate use of pesticides has increased. In fact, availability of Urea was a poll issue in the National Election in some parts of India. [1]

[1] http://www.financialexpress.com/opinion/neem-coated-urea-why...

SCAQTony 1 day ago 0 replies      
...and 36 of them are in use within the United States.

Emphasis below on chlorpyrifos which Trump's EPA took off the EPA's banned list. EPA bulletin written before Trump took office:

"...Thirty-six of them [organophosphates] are presently registered for use in the United States, and all can potentially cause acute and subacute toxicity. Organophosphates are used in agriculture,homes, gardens and veterinary practices; however, in the past decade, several notable OPs have been discontinued for use, including parathion, which is no longer registered for any use, and chlorpyrifos, which is no longer registered for home use. ..."


firasd 1 day ago 3 replies      
The mechanism they pinpointed is illustrated in "Figure 7":

OPs (star) enter the human digestive system via food and are metabolized into acetic acid (trapezoid) by the gut microbiota (oval). Subsequently, acetic acid was absorbed by the intestinal cells and the majority of them were transported to the liver through the periportal vein. Eventually, acetic acid was converted into glucose (hexagon) by gluconeogenesis in the intestine and liver and thus accounts for glucose intolerance.

So the pesticide is being eventually converted into glucose, which has the same effect as if you were eating too much sugar/carbs.

tudorw 1 day ago 0 replies      
Farmers working with sheep dip chemicals have been studied:






There is also 'Genetic variation in susceptibility to chronic effects of organophosphate exposure': http://www.hse.gov.uk/research/rrpdf/rr408.pdf

Gulf War Syndrome has also been studied as a possible effect of close-range exposure to organophosphates in pesticides and insect repellents.

eni 14 hours ago 0 replies      
The original title of the article: "Gut microbial degradation of organophosphate insecticides-induces glucose intolerance via gluconeogenesis"

Why is the title on HN edited to make this about "India"? Is this finding not applicable to people elsewhere? Or did other parts of the world stop using organophosphates?

spencermountain 1 day ago 2 replies      
to summarize, crop pesticides are converted to glucose internally, causing diabetes.

a pretty big plot twist for a deleterious problem in global health, and a find that resembles a 21st-century Silent Spring.

notadoc 1 day ago 1 reply      
The primary reason most people I know eat organic is to avoid pesticides and herbicides.
curtis 1 day ago 0 replies      
I wonder if the people in India most likely to be exposed to organophosphates are also the people most likely to be living almost entirely off of rice. Did the study do a sufficiently good job of eliminating obvious confounding factors?
mtw 1 day ago 0 replies      
It's not just diabetes: exposure to pesticides increases the risk of suicide, lymphoma, ALS, and congenital anomalies, and reduces fertility. There is a solid case for choosing organic foods. See a summary of the risks here: http://outcomereference.com/causes/77
porker 1 day ago 2 replies      
Misread the title as "...linked to gluten intolerance". Was hoping it would shine a light on why that's increasing.
jlebrech 18 hours ago 0 replies      
And of course the rise of the standard American diet in the third world has nothing to do with it?
salesguy222 1 day ago 5 replies      
Neato, how do we avoid exposure to that pesticide?
mtdewcmu 1 day ago 0 replies      
I was always suspicious of the diet/lifestyle explanation for diabetes. It's conveniently unfalsifiable, and it's obnoxiously paternalistic and moralizing.
Textbook manifesto (2016) greenteapress.com
355 points by Tomte  1 day ago   154 comments top 38
lvh 1 day ago 6 replies      
In many European countries, this problem was resolved by what I feel is mostly student pressure. Our student union (for lack of better word) owned printing equipment and worked with most professors to do exactly what's suggested in this article: most professors wrote their own books (not 140 pages, though). Most of my textbooks were between 2 and 7 EUR, which I'm led to believe is approximately at cost. Occasionally, a particular textbook was "recommended", but there would always be ample library copies available, and often you wouldn't _really_ need it. I'd have about 4-6 courses per semester, so I'd spend maybe 25-30 EUR on our own textbooks. Occasionally I'd have to shell out for a traditional textbook, and that would utterly dominate that semester's materials budget.

The future's already here, it's just not evenly distributed.

pcmonk 1 day ago 7 replies      
A lot of people tend to harp on the "textbooks are too expensive" issue, and I think this correctly identifies one of the problems: textbook price is not an issue to many professors. Unfortunately, there's no actual solution to that presented.

> If you can't find one, write one. It's not that hard.

I've used three or four textbooks written by my professor, and I can't say the quality was all that great. Considering that the set of professors who currently choose to write their own textbooks probably skews toward professors who are good at writing textbooks, I'm not super high on this plan.

> Students: You should go on strike. If your textbook costs more than $50, don't buy it. If it has more than 500 pages, don't read it. There's just no excuse for bad books.

Many students already do this. It's not uncommon for students to not buy a single textbook in a semester. In fact, the professors that do care about textbook price generally make textbooks optional. It turns out that's a lot easier than writing your own textbook and somehow selling it for cheap.

ziikutv 1 day ago 0 replies      
It's funny. This was my exact shower thought this morning.

I find books overly verbose, and too formal. I do not think there is a need to dumb-down technical content. I also disagree with having a page limit as it would likely lead to omission of topics that might be otherwise useful.

I think the publishing industry has to change or be weeded out by self-publishers and video makers. I have learned many topics of my courses through Slide decks and Youtube videos to avoid reading.

My plan was to start re-learning everything from uni and write informal tutorial-style blog posts about the topics, perhaps compiling them into an open-source book. I'll keep you guys posted so you folks can blindly upvote my fancy submission titles =D

Addendum: I'd really like to plug Brian Douglas' YouTube playlists on Control Theory. AMAZING. Got an A thanks to him.

TheCowboy 1 day ago 2 replies      
I agree with the main point that students should read and understand textbooks, but I disagree with some of the assumptions and points.

1. Many textbooks are written to be understood, but they vary a lot by field and class level. Generally, I think lower level textbooks best meet Downey's standards.

As you get into what is junior/senior (300-400 level) classes, there is not always a neat textbook available.

2. I disagree with 10 pages per week per course. I think the expectations of what students can read per week are too low. I attended a couple of different schools, and one has a reputation for high expectations of students; most students tend to rise to the challenge. I think most professors don't expect enough, and what a degree represents is watered down.

I do feel strongly that busywork and pointless readings should be avoided. Pages per week should not be some sort of metric for learning, but 10 well-written information-rich pages a week per course is not usually going to be a challenge.

Nationally, most students don't even read much of what is assigned, so telling students to not read a book if it has 500 pages won't change the status quo.

3. The idea that writing a textbook is easy is crazy. Even if you ignore the other requirements put upon professors, it is time-consuming to do it right. Even short niche books, think O'Reilly type stuff, take time to produce.

esfandia 1 day ago 0 replies      
I just implemented the reading quiz idea this term, and I thought that it went really well. Give the students a manageable chunk of reading material (in my case, the material came from various sources, not a single textbook), and give them an offline quiz to test their reading comprehension. The quizzes were graded by a TA, but the weight was quite small; small enough not to matter if they cheated (and cheating won't help them in the final exam. Aside: is it cheating if they didn't read the material but just hunted for the answers by skimming?) but enough to provide extra incentive to read.

In class we go over the answers to the quiz. I don't post the answers (the TA will have provided them the feedback they need when grading); rather we make it an interactive session. I answer questions the students have, we go over examples, I supplement the reading with slides if need be. Effectively, a flipped classroom.

This was done more out of necessity: first time teaching the course, no proper textbook (and in a quickly changing tech landscape for the topic at hand), lack of confidence in my own understanding of the material (I also tried gathering student questions beforehand so I could investigate them offline and come to class prepared to answer them), but now I think I'm going to stick to this way of teaching this course in the next installment next year.

TeMPOraL 23 hours ago 0 replies      
Here's a trick we used when I was a student: we had an FTP server shared by students of all years of our program, and we put there a copy of every required and recommended textbook, as well as slides from the lectures and every other piece of material that was relevant to our classes.

Honestly, I think this kind of setup is something universities should provide for their students. We live in the 21st century; it's not that much work to provide PDFs (with restricted access, if needed, because copyright blah blah).

hackermailman 14 hours ago 0 replies      
The two biggest universities in my city have some sort of publishing agreement under which they can print and bind the relevant textbook material to give to students for free, and when possible they use open textbooks https://open.bccampus.ca/, though there is a government grant paying professors to maintain the texts.

The best thing about the open textbook site is other professors and TAs review the books like this Precalculus example https://open.bccampus.ca/find-open-textbooks/?uuid=2fdb8a19-...

manaskarekar 1 day ago 1 reply      
Here's the list of free books from the website: http://greenteapress.com/wp/
harry8 1 day ago 1 reply      
Richard Feynman's adventures in textbooks 50 years ago, as told in "Surely You're Joking", are still instructive. It's pretty embarrassing that it is still so bad. More power to Downey; support him. Perfect is the enemy of good and the ally of the status quo, which is horrible.
ez_psychedelic 1 day ago 0 replies      
I recommend "The No Bullshit Guide to Math and Physics". It is about 300 pages, but it goes along with what this article is about. The book takes such a different approach (combining math with engineering and physics principles) as to give validity to the math you're reading. Also, it is written in a casual tone. Highly recommend it.
jimmaswell 1 day ago 3 replies      
Textbooks are flat out unnecessary. The notes on the board should be enough to understand the material, and the teacher can either write their own problem sets or copy them from somewhere and put them online. There's just no excuse to require a textbook for a class - it means the teacher is unable to communicate the material effectively and needs the students to read it on their own, can't be bothered to write or copy homework sets, or is forcing students to buy the professor's own book out of greed, none of which should be seen as acceptable. If the department makes you have one, just don't use it (happened in a few of my classes). For classes that need some out of class readings like history or English, there's no excuse to make students buy books when the body of freely available, uncopyrighted work out there on the internet is so easy to access. Good example: a history class I had a few semesters ago where the primary documents were a simple downloadable .doc.

I've had lots of classes that worked like this, particularly my Calculus I/II classes where there was a textbook but homework from it was just suggested, not collected, and the lectures were entirely sufficient to understand the material and do well on the exams.

Beyond being a pointless scam, I'd go as far as to say textbooks make professors worse than they would be otherwise by letting professors use them as a crutch.

Nutomic 11 hours ago 0 replies      
I studied Computer Science in Germany, and I didn't use a single book during my entire bachelor's. The way it worked was that professors always put the slides from their lectures online, so we could reference them as sources. Sometimes additional references or texts were available online.

In addition to lectures, we had weekly classes where we applied the concepts from the lecture in practice. Exercises for these classes were also available online.

All of this material was free for students, and created by the professors and instructors specifically for the course.

banjodude321 1 day ago 1 reply      
"Learning from Data" is a reasonable example of the type of textbook the author is asking for.

There is something to be said about the value of "reference" books, however. Maybe reference books shouldn't be used in classes, but there can be great value in a 1000 page book that has a complete discussion of everything you'd expect.

dharness 1 day ago 4 replies      
While I agree with the content of this in general, I find books to be burdensome and would not like a course centered around them. I think a series of well crafted video lectures are a better medium for some people, myself included.

I also find that I can learn everything I need to in my Software Engineering program via a series of pointed google searches much quicker than reading a text. Most courses have 1 or more $100 books which are "required" but I haven't bought them in years.

What I /would/ like, is sample problems with solutions ;)

sbuttgereit 1 day ago 0 replies      
At the school I went to, many of the classes had no formal textbooks.

You bought a 50-100 page packet of not terribly dense text/examples; these were photocopies on plain old letter paper, stapled together and pre-punched for three-ring binders. You'd buy one of those per class per semester, and they were developed in-house. That was it. Naturally, it wasn't always this way, but certainly for the basic classes it was exactly this way.

Note this wasn't any sort of engineering field and what they were teaching didn't have a lot of authors writing standard issue coursework to begin with, but it was great material that was very focused to the classes they were teaching. I still hold on to them, too: very concise and a nice reference if I ever need to brush up.

rocqua 1 day ago 1 reply      
For high-level math courses, the best I've seen is a reader written by the professor, combined with an optional 1000 page tome.

The reader goes with the lectures, and is focused on the actual material of the course. The reader, combined with your notes, basically covers the lectures. Meanwhile, if you need another take on the material, or some wider context, the 1000 page tome is always there. This works especially well if the reader points to equivalent chapters in the tome.

smoyer 18 hours ago 0 replies      
"Before long, the students learn that they shouldn't even try. The result is a 1000-page doorstop."

My oldest two are through college and learned that they shouldn't just purchase the list of books dictated by their classes. As stated by the article, many times the textbooks were not required to pass the test. If I had to estimate, I'd say they spent half as much as their fellow students on textbooks.

benhill70 1 day ago 0 replies      
As a student in my forties, I have been appalled at the price and quality of many of my textbooks. The $300 worth of textbooks in one class could have been replaced with a few YouTube videos. The publishers know this cash cow is coming to an end due to piracy and online textbook rentals. Now they are charging less for textbooks but gouging students on the mandatory online components.
andrepd 1 day ago 0 replies      
Much of this article comes across as basically complaining: "these books are long and these books are hard". Why, it's true that sometimes this is a valid criticism, but what about when the subject matter really is long and hard to understand? What then?
ziikutv 1 day ago 1 reply      
Professors often do not have the choice to pick a book. They have to teach from one recommended by the department.

More politics in the education industry.

andrewwharton 1 day ago 0 replies      
I'd like to see this philosophy applied to some of the open content out there already like the OpenStax textbooks [0]. For example, the Prealgebra text is 1152 pages in the PDF format.

I think there would be a huge amount of value in distilling these down to chapters which are 10-15 pages each instead of 100-150 pages each. Of course you would lose a lot of detail, but they could serve as a summary of 'this is the important stuff you need to know'. The expanded textbooks would serve as reference material if you want to go into more detail.

[0] https://openstax.org/

sitkack 1 day ago 0 replies      
Allen B. Downey needs to be a MacArthur Fellow.

Most commenters here should re-read the article and internalize the body of work created.

tedmiston 1 day ago 1 reply      
> Students: You should go on strike. If your textbook costs more than $50, don't buy it. If it has more than 500 pages, don't read it. There's just no excuse for bad books.

This is bad advice.

You need the book for reference or at least will do better with the book most of the time. If you want to stick one to the publisher, buy used.

Textbooks need a "microservices revolution". And not with these crappy interactive DRM-ridden e-textbooks with exercise codes... the experience with most of those is markedly worse than print books. We need more open content like webpages and journal articles. O'Reilly does it best. Textbooks authors should follow / adapt their model.

ivan_ah 1 day ago 0 replies      
I like the suggested price point of under $50. Perhaps I'd go even lower and require < $40. This is enough money to keep self-published authors motivated to maintain their books and write new ones, and also affordable enough for most students.

This is the approach I've been following with my MATH&PHYS and LA books, and I will continue to use with future titles.

I guess the ideal case for students would be OER, but then when everybody owns the book, nobody is particularly invested in maintaining and improving it...

Bioeye 1 day ago 0 replies      
I've taken a class from Allen and used his books. In the context of his classes they are very good, and the short readings can be useful, but taken as a reference, like many other textbooks are, they don't do as much.
whodywop 1 day ago 0 replies      
I believe the earliest textbooks contained a gloss in the margin written by students which clarified the main text. It contained translations, notes, references, etc. This helped to circumvent the curse of knowledge whereby most authors have zero memory of their early misconceptions of the subject (a major reason why most textbooks are rubbish).

I think this ought to be reintroduced by major publishers -- new editions to contain copious annotations garnered from students who field-tested the previous edition, explaining how they conquered the parts that they found hard.

larrydag 15 hours ago 2 replies      
I'm thinking of writing an ebook that I can use for teaching a course. I would like to see examples of well-written textbooks. Are there good examples?
jdeisenberg 1 day ago 1 reply      
I agree that textbook costs are exorbitant, and I use open source, online, or very low cost books when teaching at the community college level. I've been using the interactive version of the "Think Like a Computer Scientist" book when teaching the introductory programming course. The students still don't read the material, at least not before the lecture.
adamnemecek 1 day ago 0 replies      
Also all CS books (ok, maybe not all but the vast majority) need to ship with code. To quote Linus, "talk is cheap. Show me the code".
forkLding 1 day ago 0 replies      
I think this is needed. Most courses have one or two textbooks, which, compounded over a full course load, is a lot of pages. However, the full textbook is also never used; only several chapters are usually recommended reading, really defeating the point of buying the whole book.
dmitripopov 1 day ago 0 replies      
Back in my student days, there were really extensive textbooks that none of us read, and short brochures on the subject published by the university that were the real source of knowledge and of how to apply it in practice.
nabla9 1 day ago 0 replies      
Many teachers use chapters from several books and their own material.

It should be possible to buy student textbooks by chapter and print your own book. Most cities with a college have a few high-quality printing services.

sghiassy 1 day ago 0 replies      
Disagree: many students have different learning patterns. Textbooks are only one way to teach, and there are many different ways to learn.
teekee 1 day ago 3 replies      
"If you can't find one, write one."

What would be the best way to write a free book please? Any pointers? Experience?

itchyjunk 1 day ago 3 replies      
I was just thinking about asking HN about free books that will get me started in Python. This seems to have quite a few [0]. Has HN read any of these, or does it recommend anything in particular?

(I went through Learn Python the Hard Way a few years back and have been slacking off)


[0] http://greenteapress.com/wp/

innocentoldguy 1 day ago 0 replies      
I completely agree with the author's comment on shorter books. My biggest problem with instructional books in general is that they're filled with too much fluff. It isn't that I can't read 50 pages a week. I just don't want to, when the usable content could have been written in a page or less. While anecdotes and metaphors are great for inflating page count and price, they do little to help me understand a concept, and just become busywork, which I cannot abide.
danielbigham 1 day ago 0 replies      
Amen. This author's thesis sounds pretty darn good to me.
mncharity 22 hours ago 0 replies      
> Choose books your students can read and understand.

A noble and audacious goal.

> If you can't find one,

Realistically assessed.

> write one. It's not that hard.


Ok, I can see how this could be either plausible, or utterly absurd, depending on the domain.

> check whether they understand.

Err, does this mean "I think they didn't do too badly on the midterm"? Or daily quizzes and clicker questions? Or a grad student, with a focus on the field's education research, dedicated to running concept inventories and stats?

At least in college introductory science education, "check whether they understand" is hard, an area of active research, and historically, a cesspit of professorial self-deception.

> Its not that hard.

Let's draw a proton. With marker on whiteboard, as a circle (not hard). With an illustration app, as an arbitrarily-sized hard sphere with physically-bogus lighting (not hard). With code, as a gradient, based on the proton mass density curve (not hard, but did eat some hours).

Let's draw atomic nuclei. As balls of red and blue marbles (not hard). As gradients, post-processing from recently published ab initio density functional plots, when available (hard). Background: light nuclei are lumpy.

Ok, so let's aim lower.

Let's draw the Sun. As an arbitrarily colored circle (not hard). What about as a circle, with a color at least vaguely realistic? Demonstrably hard, as it's so rare. You likely can't ask your first-tier astronomy graduate student to do it.[1] Or almost any of the professorial authors of the many introductory astronomy textbooks.

Ok, so let's aim lower.

Last week I was reading an AP Chemistry curriculum standard. Towards the top, "atoms are conserved". Great. Later on, "atoms are neutral"[when charged, they're instead "ions"]. Okaaaaay. So are there any atoms on the right side of H + light -> H+ + e- ? The old "molecules aren't made of atoms, they're made from atoms" school. Two historical threads of definition. Left for students to reconcile, because that's obviously where the burden should lie. And this wasn't Pearson trash content, this was a curriculum spec (albeit a poor one). So what do you tell your kids to make it safe for them to take standardized exams?

"[N]ot that hard." I know wizzy education-focused MIT and Harvard professors who work really hard to raise some small bit of intro physics and biology content from wretched, to very-slightly-less-wretched.

Perhaps for some domains "not that hard" is true. And it helps if the objective is "no worse than the rest of the crap out there". And if "check whether they understand" means "ask a few clicker questions, and give a random quiz" instead of "systematically run formative misconception checks". But, wow. It so doesn't match the areas I'm most familiar with.

Perhaps the manifesto is missing some scope-of-applicability predicate?

[1] http://www.clarifyscience.info/part/MHjx6 "Scientific expertise is not broadly distributed - an underappreciated obstacle to creating better content"

A man wouldn't leave an overbooked United flight, so he was dragged off washingtonpost.com
437 points by dankohn1  11 hours ago   465 comments top 45
vwcx 11 hours ago 10 replies      
The real thing to fear here is the normalization of violence.

Good perspective in the WP comments:

"The truly shocking thing here is that violence - with the real possibility of serious injury - was viewed as appropriate in a situation that was purely logistical. The airline wanted seats for its own employees. This was not an emergency - such as a terrorist attack or a drunk passenger endangering people. The lack of judgment is stunning. There is no way that violence was justified."

manacit 11 hours ago 11 replies      
Someone on a different message board put this situation very well: by the letter of the law, United was correct; morally, they were not. Their 'contract of carriage' allows them to IDB (Involuntarily Deny Boarding) passengers due to overselling, and bump people off at will. Depending on how delayed the passenger would be in reaching their final destination, the airline would owe compensation (up to a max of $1350) for the trouble.

Unfortunately, the way this played out was pretty terrible. My hope would be that events like this could move United (and other airlines) to having more transparent overbooking policies and compensating people fairly, but that's not likely.
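The $1350 figure comes from the US DOT's involuntary-denied-boarding rules, which scale the payout with the length of the delay. A minimal sketch of that calculation, assuming the domestic thresholds and caps as I recall them circa 2017 (no payout under 1 hour of delay; 200% of the one-way fare capped at $675 for 1-2 hours; 400% capped at $1350 beyond that) — verify against the actual regulation before relying on the numbers:

```python
def idb_compensation(one_way_fare, delay_hours):
    """Approximate 2017-era US DOT domestic involuntary-denied-boarding payout.

    Assumed rules (hedged recollection, not the regulation text):
    arrive within 1 hour of the original flight -> nothing owed;
    1-2 hours late -> 200% of the one-way fare, capped at $675;
    more than 2 hours late -> 400% of the fare, capped at $1350.
    """
    if delay_hours < 1:
        return 0.0
    if delay_hours <= 2:
        return min(2 * one_way_fare, 675.0)
    return min(4 * one_way_fare, 1350.0)

print(idb_compensation(200, 0.5))  # 0.0
print(idb_compensation(200, 1.5))  # 400.0
print(idb_compensation(500, 3))    # 1350.0
```

Note the cap binds quickly: on any one-way fare above ~$338, a long delay already maxes out at $1350, which is why the compensation can feel disconnected from the actual cost to the bumped passenger.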

jbeales 10 hours ago 0 replies      
For comparison, Delta had a huge number of overbooked flights over the weekend due to weather cancellations. They handled it by raising the compensation for giving up your seat until they had the right number of passengers for the flights.

As a result they got some great press on a weekend when they might have had horrible press: https://www.forbes.com/sites/laurabegleybloom/2017/04/09/why...
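The mechanism Delta used — raise the voucher offer until enough passengers accept — is essentially an ascending-price auction over seats. A toy sketch of the idea (the passenger acceptance thresholds, offer schedule, and cap here are all hypothetical, not any airline's actual procedure):

```python
def find_volunteers(passengers, seats_needed, start=200, step=100, cap=1350):
    """Raise the voucher offer until enough passengers volunteer.

    `passengers` maps a passenger ID to the minimum voucher that
    passenger would accept (a stand-in for their private willingness
    to be bumped). Returns (final_offer, volunteers), or (None, [])
    if the cap is exhausted before enough people accept.
    """
    offer = start
    while offer <= cap:
        volunteers = [p for p, threshold in passengers.items() if threshold <= offer]
        if len(volunteers) >= seats_needed:
            return offer, volunteers[:seats_needed]
        offer += step  # nobody forced off; just sweeten the deal
    return None, []

# Hypothetical flight: four seats needed, varied reservation prices.
pax = {"A": 300, "B": 450, "C": 500, "D": 800, "E": 1200}
print(find_volunteers(pax, seats_needed=4))  # (800, ['A', 'B', 'C', 'D'])
```

The point of the sketch is the failure mode: the process only breaks down when the cap is hit before enough volunteers emerge, which is exactly where an airline has to choose between raising the cap and using force.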

pbiggar 11 hours ago 3 replies      
Buried at the bottom:

> "The airline eventually cleared everyone from the plane, Bridges said, and did not let them back on until the man was removed a second time in a stretcher."

My reading is they beat him so badly they had to put him in a stretcher. Is that right?

ryandrake 10 hours ago 1 reply      
My predictions:

1. There will be no personal consequences for any of the belligerents involved in this, including the airline employees and police officers. Nobody is going to get fired.

2. The victim will receive no compensation for his injuries, instead will be further victimized in the courts.

3. There will be no business cost to United. They are already known as one of the most awful airlines in the world, and people still voluntarily choose to do business with them. People will continue to fly with them despite them now having earned the "Beats Its Customers" badge.

adekok 11 hours ago 3 replies      
Legalities and contracts aside, I don't see why an airline would allow someone to board, and then remove them. That just can't end well.
bello 11 hours ago 5 replies      
They messed up and overbooked the flight, sure. But why on earth would they forcefully drag people out of the plane, while they could just find volunteers?

They could offer cash/miles to whoever volunteered, increasing the offer until someone accepted. I've seen other airlines do this on several occasions. They couldn't have handled it worse than they did.

AngeloAnolin 10 hours ago 1 reply      
It has been mentioned a lot in the comments that United was correct as per their Contract of Carriage - Involuntary Denied Boarding.

But what's being missed here is that the person who was dragged off wasn't denied boarding in the first place. He was already seated, with a seat number assigned to him; he was on the plane and, by all means and precedents, had already boarded. You cannot deny boarding to someone whom you have already allowed onto the plane and seated. I assume this will be a feast day for the lawyers of the person forcibly removed, as all the angles are certainly working in their favor.

smdz 11 hours ago 1 reply      
Law is intended, at a fundamental level, to reflect and enforce the moral and ethical standards of a civilised society.

But when law becomes a reason to induce immoral and/or unethical (but legal) behavior - the civilization collapses.

Good luck to us all!

milesf 10 hours ago 1 reply      
This was such a simple problem for United to fix. They offered vouchers for people to volunteer. No one accepted. So keep raising the offer until four people accept. Problem solved.

Instead they chose violence.

I will never fly United Airlines again.

username223 11 hours ago 0 replies      
What really gets me is the PR flack smirking and waving his middle finger:

> [Charlie] Hobart said in the statement. "We apologize for the overbook situation."

In other words, "sorry about selling more seats than there were on the plane, but yeah, we had that guy beaten and dragged off. Deal with it."

justin66 9 hours ago 1 reply      
From a purely customer service conflict resolution standpoint, the fact that the guy claimed to be a doctor gave whoever was handling the situation for United a very easy out and a way to move on to the next person. I wonder why they didn't take it.
PuffinBlue 9 hours ago 1 reply      
I feel like this is going to make it into PR textbooks as a perfect example of how not to manage a developing situation.

Right from the pre-boarding request for volunteers yielding no results, to then letting passengers board, to 'randomly' selecting 4 people to 'volunteer' after failing to increase the incentive bounty above $800, to calling in the cops, to patronising responses on Twitter, to refusing to comment when major news outlets get in touch and on to the CEO failing to apologise for the harm to the guy but instead for having to 're-accommodate these customers'.

Nothing about this was handled well but you wouldn't expect it to be because it required a series of damaging institutional/cultural practices to be in place already to let the situation develop in the first place - so the response was always going to be sub-par.

nbanks 10 hours ago 0 replies      
From the doctor's perspective, refusing to leave was probably the best way to publicize United's bad customer service. It reminds me of the country singer who wrote "United Breaks Guitars" seven years ago: youtu.be/5YGc4zOqozo
Twirrim 11 hours ago 4 replies      
Today must be a fun day to be on United's PR team.
huangc10 11 hours ago 2 replies      
This is just a guess... United needed to get a couple of pilots on the flight from Chicago to Louisville, or else another flight would be delayed/canceled.

That's the only reason I can think of that an employee would take precedence over a passenger.

Oras 11 hours ago 0 replies      
I think the best reaction should come from consumers: boycott United Airlines for a while, and this will teach them a good lesson in customer service and behaviour.
JohnLeTigre 7 hours ago 1 reply      
Some observations

- Surely there is a difference between overbooking and diminishing the number of available seats after the fact.

- Also, I would like to see how their "random computer pick" complies with their legal obligation of having IDB priority criteria.

- That was excessive force without any doubt. I mean, they had to clean up the blood afterwards.

- The doctor had to meet patients the next day; he had to uphold his Hippocratic oath. I'm not sure this constitutes a "refusal" to vacate the plane in the legal sense.

DiabloD3 11 hours ago 0 replies      
Some versions of this story state the man that was beaten was a doctor (a specialist) that needed to be somewhere to see patients in an emergency situation.

So, yeah, I'd hate to be United's CEO at the moment, this is now too big to sweep under the rug and blindly quote IDB and other such laws.

darth_mastah 5 hours ago 1 reply      
Horrifying. Watching videos like that makes you think: "is it safe to go to the U.S. for holidays"?
tty7 10 hours ago 2 replies      
I think it's interesting that this has been completely removed from the front page
abandonliberty 9 hours ago 0 replies      
Based on my experience with United this is corporate culture, rather than 'one bad decision'. A culture that focuses on maximizing profits per transaction with little left for human dignity or compassion - let alone relationship building.

On the bright side, "United breaks guitars" is a really catchy tune. https://www.youtube.com/watch?v=5YGc4zOqozo

tdb7893 11 hours ago 2 replies      
Airlines operate in very uncertain and variable environments (weather isn't always what is predicted, mechanical issues are pretty common, and pilots can get sick, and other stuff) and having planes or pilots on call everywhere is prohibitively expensive, so it's really not surprising that they have to do stuff like this sometimes. Airlines need to be more up front about their policies, but there doesn't seem to be a good way to fix it without increasing prices, which they can't really do because people flying are generally so price sensitive.
heifetz 10 hours ago 0 replies      
Militarization of our society. Airports are like warzones, with warzone-like police and military and security processes. Whenever there is a security issue at the airport or on the plane, it's taken to the extreme. Where is the common sense in this case? Have the airlines lost their minds? If something like this happens again in the future, I hope other passengers will rise up and prevent security from doing this.
panzagl 10 hours ago 1 reply      
So the questions I still have: What's the legal difference between 'Denying Boarding' and kicking someone off?

Can you legally kick someone off an airplane 'just because' (i.e. not oversold/security/safety)?

Who did the removing - law enforcement or United employees?

What's the legality of private employees assaulting someone?

This is all about the level of outrage though - whether the guy should sue for civil damages or should file criminal complaints.

blizkreeg 11 hours ago 1 reply      
The depraved part here is that none of United's employees (flight attendants and/or pilots) on board the aircraft stopped law enforcement or whoever it was from forcibly ejecting this guy.

The law may be on United's side but United's employees on board could have been more human about it and not let this happen once the passenger refused to be booted off the plane. Since when is use of force the norm?

tyingq 10 hours ago 0 replies      
Betting this doesn't end well for United.

It's not 100% clear that they exhausted other reasonable options. They don't say if the 4 employees were a flight crew that NEEDED to be somewhere. There's no indication if there were other employees (not in uniform) that could have been deplaned first. There are other flights, on Sundays, that leave after this one (one at 9pm).

The judgement to go with violence after offering $400, then $800 is odd as well. Surely someone would give up a seat after the next one or two bumps in the offer. The onus is on United, since they didn't identify the issue until after passengers were boarded.

HenryBemis 10 hours ago 0 replies      
United sucks! I've been lucky/unlucky in my life to fly with them ONCE and they are THE WORST!! I prefer flying some shady cheap company; at least then they "meet my expectations", while I've never heard anyone ever say a good thing about United.

I know my input offers nothing to anyone here, apart from the "poor" United employee that will have to go through all our "hate" and think "oops, this is a forum for people with brains and skills and they are trashing us big time.. now HERE is a pack of skilled people that would never fly with us OR work for us!!"

mattsfrey 10 hours ago 6 replies      
Sure, it was a dumb/unethical thing for the airline to do, and it should have been handled differently, etc. I'm wondering why nobody is commenting on the absurd behavior of the passenger, however - insisting on forcing police to physically drag him from the plane. Once it was decided and the guys in uniforms with guns showed up, what sane person is honestly going to do that? The decision to remove the passenger we can all rightly agree was wrong, but the graphic nature of how events unfolded at that point is really on the passenger.
dragonwriter 9 hours ago 0 replies      
This should never happen; regardless of how you feel about the practice of overbooking or bumping passengers for staff, if there aren't seats available, a pre-sold ticket shouldn't be convertible into a boarding pass, and the passenger should never have been able to get onto the plane or even into the secured area of the airport (and doing this is the only sane and efficient thing to do if airlines are going to allow checked baggage at all, given positive bag matching rules.)
grizzles 10 hours ago 0 replies      
Oh man. If I was United's competition, I'd be looking to do a major ad buy right now. I'd never let them live this down. First ad: A United plane filled with Punching Bags instead of passengers.
cozzyd 11 hours ago 0 replies      
United could have had someone drive the passengers in a van or something at least...
Myrmornis 11 hours ago 1 reply      
So many American police officers are violent thugs under a very thin veneer.
yellowapple 7 hours ago 0 replies      
Is there some reason why overbooking isn't classified as fraud?
tutufan 8 hours ago 0 replies      
"The beatings will continue until morale improves."

(since no one else has said it yet)

woogiewonka 10 hours ago 0 replies      
I knew there was a reason I've been avoiding United like the plague.
ArtDev 11 hours ago 2 replies      
I hope he sues the pants off them.

There is a racist angle here too. Why target the Chinese guy?

someone1222 11 hours ago 1 reply      
Everyone should have left the plane and stopped flying United in the future.

These things could be so simple and self-correcting if everyone acted instead of screaming oh my god.

losteverything 10 hours ago 0 replies      
" I had lasagna"

Sometimes you want news to be fake news.

Myrmornis 10 hours ago 0 replies      
Did they arrest him before removing him? Surely that would be the right thing to do. If they do not have a reason for arresting him then they can't just beat people up to act as "security" for the airline. If they had arrested him and read him his rights I'm sure he'd have understood that the gravity of the situation was such that he really had to stand up and leave.
ed_balls 11 hours ago 1 reply      
Why not use the jump seats?
duglauk 10 hours ago 0 replies      
A lawsuit is waiting for you, United
dudul 11 hours ago 0 replies      
Well, the good thing is that they may not have to worry about overbooking for a while now :)
pfortuny 11 hours ago 2 replies      
The fact is that you agree to lots of things when boarding a plane (buying a ticket). Not knowing the law is not an excuse. So...

It is annoying and terrible, but it is what it is.

rhino369 11 hours ago 19 replies      
The outrage mob is pushing this pretty hard.

Booting people off a plane is pretty shitty behavior for an airline. It's bad customer service and they should have dealt with it earlier than after boarding.

But if they have to do it, the passenger shouldn't be allowed to just refuse.

This guy did. And he had to get dragged off the plane. I don't really see an alternative, other than just letting anyone willing to scream stay and then booting off another customer with dignity.

This is shitty service by United, escalated unreasonably by the passenger.

ReactXP A library for building cross-platform apps microsoft.github.io
332 points by nthtran  2 days ago   116 comments top 26
vmarsy 2 days ago 5 replies      
> ReSub

> The Skype team initially adopted the Flux principles, but we found it to be cumbersome. It requires the introduction of a bunch of new classes (dispatchers, action creators, and dispatch events), and program flow becomes difficult to follow and debug. Over time, we abandoned Flux and created a simpler model for stores. It leverages a new language feature in TypeScript (annotations) to automatically create subscriptions between components and stores. This eliminates most of the code involved in subscribing and unsubscribing. This pattern, which we refer to as ReSub, is independent of ReactXP, but they work well together.

That's interesting; I wonder how this differs from Redux and the others.

I also wonder how navigation is handled - is it easy to add React Navigation into the mix?

Clicking on Next while on https://microsoft.github.io/reactxp/docs/animations navigates to a 404

hoodoof 2 days ago 4 replies      
Despite the explanation that XP stands for "cross platform", I still think that this is misnamed, and many will assume it relates to one of Microsoft's biggest-ever brand names, one with global recognition.

At first glance I decided not to read the article because I thought it irrelevant due to the XP.

roryisok 2 days ago 2 replies      
I would really love it if MS brought out some kind of .NET Core Electron alternative. XAML is nice enough to build UIs with, .NET Core works across macOS and Linux, and the whole package wouldn't need an entire copy of Chromium for each install
nathan_f77 2 days ago 2 replies      
I've been developing a cross-platform app with React Native, and react-native-web has been working really well. It took almost no effort to get my React Native app working in a browser. I might be wrong, but it looks like ReactXP is just an alternative to react-native-web, with some additional abstractions and conventions for stores, etc.

I'm not sure I want to switch to ReactXP. I really like being able to use the React Native Animated API on the web, and I can also use wrapper libraries such as react-native-animatable.

migueloller 2 days ago 0 replies      
I'm a bit skeptical this is needed at all. I would've much preferred if Microsoft contributed to the already popular React Native for Web [1].

React Native already supports iOS, Android, and UWP. To add browser support all you need is something like React Native for Web. I made a small presentation a few months ago that shows this. Here's the source code: [2]. Take a look at the web folder.

Libraries like React Navigation [3] have also been built to support any platform that runs React code. It looks like Microsoft built yet another navigation library [4].

Also, check out React Primitives [5]. It aims to define a set of primitives that work on any platform that can be used to build more complex components. This is highly experimental, but I'm liking the direction where it's going, a unified React interface for any platform.

In addition, ReactVR is a great example of how React Native primitives can be extended to new emerging platforms [6].

Finally, React Native for macOS [7] answers the question that many have here about building native apps for macOS without relying on Electron.

[1] https://github.com/necolas/react-native-web

[2] https://github.com/migueloller/HelloWorldApp

[3] https://github.com/react-community/react-navigation

[4] https://microsoft.github.io/reactxp/docs/components/navigato...

[5] https://github.com/lelandrichardson/react-primitives

[6] https://github.com/facebook/react-vr

[7] https://github.com/ptmt/react-native-macos

Kiro 2 days ago 2 replies      
Am I the only one who thinks this is a big deal? This basically means you can finally share your React code across platforms. It has always felt off that React and React Native are so similar, yet you can't use the same code for web and apps.
jfilter 2 days ago 4 replies      
>With React and React Native, your web app can share most of its logic with your iOS and Android apps, but the view layer needs to be implemented separately for each platform.

As far as I understand, you don't need to do this in react-native. Only when you want to use some special features. Or am I missing something?

i336_ 2 days ago 0 replies      
Ahem. I nearly thought Microsoft had partnered with ReactOS for a minute there :)

But XP is EOL, so "XP" is never going to be used in anything now, thinking about it.

I can wish...

josteink 2 days ago 2 replies      
So Microsoft now has two cross-platform application frameworks on offer:

- Xamarin with .NET. Mobile apps only.

- ReactXP, which also supports regular web applications

It will be interesting to see if these two end up competing, or if one will be ditched in favour of the other.

uncensored 2 days ago 0 replies      
With RN's and Expo's out of the box support for Android/iOS, I find the missing piece is native support for Responsive Grid layout. Till then, I've derived one based on the new support in RN v0.42 for relative dimensions. I've taken the liberty of correcting the mental model for Grid to eliminate the decoherence that results from using both an absolute column count together with relative sizing (!) by letting the developer specify grid column width as a percentage of screen size and allowing the specifying of the width of a given column in the layout as a multiple of that percentage. This way the developer doesn't have to divide the screen size in pixels (assuming they know that for the screen they're testing on, which is not always the case) by some arbitrary number of grid columns in order to get the width they desire per grid column (indirect route.) They can instead use visual intuition about relative sizes to define the column width directly as a percentage of screen width. I also found RTL support (for Hebrew/Arabic apps) generally lacking in RN, so I added RTL layout support to it.


hdhzy 2 days ago 1 reply      
Is there a screenshot somewhere to see what it looks like, or am I missing something?
roryisok 2 days ago 2 replies      
I'm missing something. How is this different from React Native, which is already supported on iOS, Android, web and UWP?
d0100 2 days ago 6 replies      
I really want React for desktop apps. Currently we are running into some performance issues with large datasets and spreadsheets, and being able to offer a performant desktop app would be ideal.

Right now we're considering using .NET and ReoGrid, since Qt is too expensive and we only target Windows anyways.

enobrev 2 days ago 0 replies      
> ReactXP currently supports the following platforms: web (React JS), iOS (React Native), Android (React Native) and Windows UWP (React Native). Windows UWP is still a work in progress, and some components and APIs are not yet complete.

Seems interesting. Hope they add linux support.

skdotdan 2 days ago 1 reply      
If they add Linux and Mac support, this will be HUGE.
idibidiart 2 days ago 0 replies      
It's not just different platforms but different screen sizes (and RTL layouts) that are the biggest challenges I've found in developing React Native apps. To solve those challenges, I'd like to share something I've been working on: a responsive grid for RN with RTL layout support. It has reduced the time it takes to build relative-size and responsive layouts by a factor of 10, easily. It's based on previous similar work but with some radical changes and a few functional and usability enhancements. Looking for testers!


srikz 2 days ago 0 replies      
Interesting to see if this will co-exist with Xamarin or will target different types of apps.
mwcampbell 2 days ago 1 reply      
See also https://microsoft.github.io/reactxp/blog/2017/04/06/introduc...

It came out of the Skype team. But it doesn't look like they're currently using it in their universal Windows app, which is built directly on XAML.

nonsince 2 days ago 0 replies      
This name is _really_ confusing in the presence of ReactOS
DenisM 2 days ago 0 replies      
I can't find any instructions on how to try any of the samples: supported host platforms, prerequisites, downloading the toolkit, building the samples, etc.
matt_lo 2 days ago 0 replies      
At first when I landed, I thought this was another Facebook tech site since the template was reused from the old site of Jest and React Native.
Roritharr 2 days ago 2 replies      
Having macOS in there would have made it perfect.
skynode 2 days ago 0 replies      
This is simply awesome. Nothing more to add.
alekratz 2 days ago 0 replies      
At first I thought this was somehow related to the ReactOS project...
debt 2 days ago 0 replies      
microsoft owns 5% of facebook
wsgeek 2 days ago 0 replies      
Embrace.... extend.... extinguish.

Be very careful when an OS vendor makes a move like this.

Ultima VI filfre.net
263 points by doppp  3 days ago   105 comments top 22
gavanwoolery 3 days ago 5 replies      
Reminded me about another bit of info on Ultima 6 / Warren Spector:

Spector cites an amusing anecdote from Ultima 6's in-house testing: "on Ultima VI, which is kind of where I realized that all this improvisational stuff could really be magical. It was unplanned, kind of a bug. There was one puzzle where the Avatar and his party came up on one side of a portcullis and there was a lever on the other side of the portcullis that you had to flip to raise the portcullis and keep on making progress. I watched one of our testers, a guy named Mark Schaefgen, playing in that area. And he didn't have the telekinesis spell, which was the way to get past that portcullis. I was sitting there rubbing my hands together going oh ho ho, he's screwed, he can't do it.

He had a character in his party named Sherry the Mouse. You can probably see where this is going. The portcullis was simulated, and here the air quotes are around "simulated", simulated enough that there was a gap at the bottom that was too small for a human to get through, but not too small for Sherry. He sent Sherry the Mouse under the portcullis, over to the lever, she flipped the lever, and then the rest of the party went through. And I fell on the floor. At that moment I just said to myself, this is what games should do. We should start planning this, not having it happen as a bug. That was where I realized this was really powerful."

It was things like this that make the Ultima series stick in my head to this day. :)

scott_s 3 days ago 4 replies      
This paragraph resonated with me:

The complexity of the world model was such that Ultima VI became the first installment that would let the player get a job to earn money in lieu of the standard CRPG approach of killing monsters and taking their loot. You can buy a sack of grain from a local farmer, take the grain to a mill and grind it into flour, then sell the flour to a baker or sneak into his bakery at night to bake your own bread using his oven. Even by the standards of today, the living world inside Ultima VI is a remarkable achievement - not to mention a godsend to those of us bored with killing monsters; you can be very successful in Ultima VI whilst doing very little killing at all.

I got into western RPGs only recently - I played only JRPGs on the SNES and then later consoles. My first western RPG was Mass Effect 2, and since then I played ME3, Dragon Age: Inquisition and Skyrim. When playing Skyrim, I realized that the wolf pelts I was accumulating by killing wolves as I walked the countryside could be smithed into leather armor! That leather armor would fetch considerably more money when sold than wolf pelts.

My first thought: I found a cheat to more money! My second thought: I found a business.

santaclaus 3 days ago 1 reply      
The sequel, VII, and VII part 2, are singular achievements in terms of world building. The level of detail that went into NPC schedules, interactions, etc. - down to the fact that you can do mundane tasks with no bearing on the actual game, like baking bread - was pretty damn cool. The recent Witcher game might come close, but I'm still jonesing for some RPGs on VII's level.
phodo 3 days ago 2 replies      
Amazing game. While on vacation last week, I was eating at a restaurant overlooking the deep blue ocean and the background music playing was the Ultima theme song. I seemed to be the only one who recognized it, and right there and then I proudly basked in a glorious solitary moment of radiant geekiness and nostalgia as I thought of Shamino, Iolo and all the rest of the characters that made up that amazing place called Britannia.
sbierwagen 3 days ago 1 reply      

 The creepy poster of a pole-dancing centaur hanging on the Avatar's wall back on Earth has provoked much comment over the years
Someone dug up the original art of that poster: http://ultimacodex.com/2015/10/remember-that-centaur-poster-...

bmurphy1976 3 days ago 0 replies      
Oh man I loved this game. This game may have single handedly set me on my career in software development. I'd played many many games before, but this one really opened my eyes to the possibilities that computers offered.

I just finished a recent playthrough, no more than six months ago! The game holds up really well. There are obvious shortcomings compared to modern games, it could be a hard slog for younger generations who are used to a more polished product, but if you are looking for a good bit of nostalgia U6 is hard to beat.

For comparison I also tried re-playing Bard's Tale 3 recently. I wasted many hours of my childhood with that game. Frankly, I'm amazed at how poor and awful a game it was, and I just couldn't stick with it.

smacktoward 3 days ago 1 reply      
Since these articles on gaming history by Jimmy Maher consistently get voted up to the front page of HN, it may be worth mentioning that he has a Patreon where you can support his work here:


godmodus 3 days ago 2 replies      
This is a strange way to implement a text editor.
reiichiroh 7 hours ago 0 replies      
I don't know if this is true but a friend who worked at Origin regaled me with the story that the U6 box art's avatar model was Starr Long when he had a head of full, luscious hair.

Any Ex-Origin able to confirm?

outworlder 3 days ago 0 replies      
>On the evening of February 9, 1990, with the project now in the final frenzy of testing, bug-swatting, and final-touch-adding, he left Origin's offices to talk to some colleagues having a smoke just outside. When he opened the security door to return, a piece of the door's apparatus (in fact, an eight-pound chunk of steel) fell off and smacked him in the head, opening up an ugly gash and knocking him out cold. His panicked colleagues, who at first thought he might be dead, rushed him to the emergency room. Once he had had his head stitched up, he set back to work.

Hah. That's how you are able to kill Lord British in Ultima VII. I had never understood the reference, until now.

elif 3 days ago 2 replies      
the latest game in this series, Shroud of the Avatar (still in pre-release) is having a free play weekend this weekend. Even though it's not "released" yet, it's a full game, very playable and enjoyable.


cocktailpeanuts 3 days ago 2 replies      
Who here came thinking it's a new modern Vim alternative?
syncsynchalt 3 days ago 0 replies      
This was my jam for all of middle school. Thanks for posting this article!
nsxwolf 3 days ago 2 replies      
The projection they used is just super weird.
jdright 3 days ago 6 replies      
Best game series ever, with Ultima VI, VII, VIII and Online possibly being the best games ever.
lokedhs 2 days ago 1 reply      
I remember looking at the Ultima games at the time as something interesting that I'd like to spend time on, but I never got into them. I guess it's because I was always into faster gaming experiences.

I never thought that I would really be able to enjoy any RPGs, but recently I've started playing them. I'm currently working my way through Tales of Zestiria and having a great time with it.

I would like to give the Ultima games a try. Which one should I start with? I'd like one that is somewhat easy to get into.

bertlequant 3 days ago 0 replies      
How I miss my shard over 56k
dewiz 2 days ago 0 replies      
I remember finding a casino on one of Ultima 7's islands; I made so much gold out of the roulette that it became a problem stocking it and carrying it around Britannia.
beders 3 days ago 0 replies      
This was an awesome awesome game. Unimaginable how I played that for so many hours on such a tiny tiny screen :)
m3kw9 3 days ago 0 replies      
Use glass sword on lord British
WebYourMind 3 days ago 0 replies      
This brings back so many memories! Awesome Game!
artur_makly 3 days ago 0 replies      
The top game of my teenage life.
The presence of microplastics in commercial salts from different countries nature.com
218 points by r721  1 day ago   55 comments top 10
jknoepfler 1 day ago 3 replies      
The conclusion of the article:

"The results of this study did not show a significant load of MPs larger than 149 μm in salts originating from 8 different countries and, therefore, negligible health risks associated with the consumption of salts. The increasing trend of plastic use and disposal [50], however, might lead to the gradual accumulation of MPs in the oceans and lakes and, therefore, in products from the aquatic environments. This should necessitate the regular quantification and characterization of MPs in various sea products."

Is interesting to me, because I feel like this is where environmental politics properly starts. Should the US federal government (insert your home gov't, or the EU, or whatever) fund regular monitoring for microplastic levels? Maybe! It's a hard cost/benefit question that involves weighing priorities and careful thinking. But that's the kind of question we should be asking when it comes to environmental politics, not "should the EPA exist," or "is climate change real?"

abeppu 1 day ago 2 replies      
Does it seem really low that they only extracted 72 particles? In their methods section they mention using 1kg of salt from each of the 17 brands.

Skimming FDA guidelines for defects in food, I see "action levels" like "average of 2 or more rodent hairs per 50g" for ground pepper, or "average of 1 or more whole insects per 50 grams" for cornmeal. 72 particles in 17 kg of salt sounds really shockingly clean given all the stuff in the oceans.
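
To put the counts in perspective, a quick back-of-the-envelope on the worst brand in the study (about 10 particles per kg); the ~5 g/day salt intake is my assumed round figure, not a number from the paper:

```typescript
// Worst-case brand in the study: ~10 microplastic particles per kg of salt.
const particlesPerKg = 10;
// Assumed dietary salt intake of ~5 g/day (a round figure, not from the paper).
const saltKgPerDay = 0.005;
const particlesPerYear = particlesPerKg * saltKgPerDay * 365;
console.log(particlesPerYear.toFixed(1)); // roughly 18 particles per year, worst case
```

Even under the worst sampled brand, that's on the order of a couple of particles a month from salt alone, which is consistent with the paper's "negligible health risk" conclusion.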

mirimir 1 day ago 2 replies      
I guess that's reassuring.

But I wonder how much salt is mined. Only one sample was plastic-free. But maybe it's mostly commingled. If one were really interested, one could look at plastic content vs Cl-36 (nuclear fallout) level.

vmarsy 1 day ago 2 replies      
> The abundance of MPs per salt sample ranged from 0 per kg in the salt sample # France-F (i.e. Country of origin: France, brand F) to 10 in the salt sample # Portugal-N

I guess I'll keep buying the French Gros Sel de Guérande from World Market; plus, it tastes really good when sprinkled on meat :)

JoeAltmaier 7 hours ago 1 reply      
Let me understand: this substance in the salt came from the sea, where it arrived via drains that carried flushed facial scrubs. So this stuff that is made to be rubbed on faces by the hundreds of thousands or millions is present in salt by ones and twos per pound? Did I read that right?

This is significant how?

rodionos 22 hours ago 0 replies      
Based on the chart at the bottom: salt from Australia and Portugal has the highest content of plastics.
bricss 1 day ago 1 reply      
You would probably be surprised how much plastic you can find inside the guts of a shrimp or any other sea creature.
sengork 1 day ago 1 reply      
Testing Himalayan pink salt within this study would have been good for comparison (sea vs. non-sea sources of salt).
craigds 1 day ago 0 replies      
Crazy X axes on those graphs.
martyvis 1 day ago 1 reply      
From the introduction: "Microplastics might be of health concern since they have been shown to carry hazardous chemicals and microorganisms." So whether it is a concern is still a "might". We all eat, drink and breathe "chemicals" every day. Everything from dihydrogen monoxide (http://dhmo.org ) to Julius Caesar's urine (http://redneckmath.blogspot.com.au/2011/09/drinking-caesars-... ) to things like particulate carbon, lead, virii, bacteria and more things that actually are known to be bad. Our bodies are amazingly good at filtering or otherwise ignoring such attacks. Any idea whether we will ever know if microplastics actually are bad, or just a visible distraction?
The new contribution workflow for Gnome csorianognome.wordpress.com
282 points by bpierre  2 days ago   71 comments top 14
rukittenme 2 days ago 2 replies      
Flatpak, Builder, Ubuntu contributing to Gnome. 2017 is a great year for Gnome. I think these improved devtools will really help adoption. There are tons of developers (like myself) who don't make apps for Linux because it's too much effort. The easier it is, the fewer excuses I have.
hashhar 2 days ago 2 replies      
I'm so happy. The last time I tried setting up a GNOME dev environment I ran into two major issues.

Using jhbuild to download dependencies doesn't work if your network blocks git:// URLs. I had to hunt a lot to learn to tell jhbuild to use https:// or ssh:// URLs instead. It still broke when updating a dependency and only worked for toplevel explicit dependencies.

The second was the extremely long process of getting the dev environment set up compared to the ease with which Mozilla does it.

Both of those issues look fixed now and make the ecosystem all the more inviting for me to hack on. I even think you guys bested Mozilla on that second point.

I'll try it out and report back.

PS: Is this tied in some way to Ubuntu or can other systems get in on the fun as well? Arch Linux does have a gnome-unstable repo so I can grab the latest GNOME Builder.

josteink 1 day ago 0 replies      
From six hours of distro-specific bootstrapping to a five-minute generic flow? That's just extremely impressive.

Even as a Fedora and Gnome user I've never even considered contributing to Gnome. I've just assumed (so far rightly) that it's just too big and heavy to work with.

Things like this may certainly change that. Great job!

Gudin 2 days ago 1 reply      

 There are no requirements to start development. It's an advantage if you know a bit of object oriented programming and git.
Reading this sounds so encouraging. I love open source, but as a junior dev, I feel like it's super hard to contribute and everyone expects that you already know all the advanced-level stuff.

xfs 2 days ago 1 reply      
I'm not quite sure what's going on with this website as it is flooded with ads.


I've been quite tolerant of ads myself without using an ad blocker for a long time but recently this kind of stuff with the whole page being clogged up by loading ads is getting out of hand.

msl09 2 days ago 2 replies      
The most impressive fact about this is that it actually works as advertised, though it took a little longer to build nautilus, more like 15 minutes.

This is so cool that I'm actually tempted to use it for non-gnome apps.

gyger 2 days ago 0 replies      
Awesome work done by the community and the Flatpak/Builder guys
tmsldd 1 day ago 0 replies      
Thanks guys!! Especially to Alex and Christian. The Gnome project just got one more contributor ;)
tux1968 2 days ago 2 replies      
A minor nitpick to be sure, but it's odd that you must close down all other copies of your application before your development copy will launch. There is probably a way to work around this, but the limitation is hard enough to get around that it is mentioned in the Builder instructions.

This seems like an unnecessary limitation of the Gnome environment. It would be nice to be able to compare your existing release to your changed copy and run them side by side.

uiri 2 days ago 3 replies      
All with a UI and integrated, no terminal required.

What if I just want to git clone something and run make (or similar) ? I don't see any documentation for that workflow at all. My usual development workflow is Emacs + git + commandline tools. Perhaps I'm simply not the target audience.

gue5t 2 days ago 2 replies      
Is it possible to use this to contribute to Gnome Shell or GTK+? Or is "contribution" limited to applications?
stuaxo 1 day ago 0 replies      
This is great. Every few years I've got it into my head I want to contribute, and each time getting everything working on Ubuntu has flummoxed me after a few hours.
haddr 2 days ago 2 replies      
I'm curious what the trick is to building the whole of Nautilus in under 5 minutes. I mean, how does this new build tool accomplish that?
chris_wot 1 day ago 1 reply      
Hmmm... I wonder if I could get LibreOffice to build like that...

Actually, I wonder if anyone has ever tried to edit, compile and build LibreOffice in Builder? I know it can be done in KDevelop...

Typing the technical interview aphyr.com
319 points by zorpner  13 hours ago   61 comments top 20
menssen 18 minutes ago 0 replies      

This is a polemic against people who don't know Haskell. Previous iterations were the same, except with Clojure/Lisp.

Aphyr/Kyle is a genius, and one of my favorite people on the internet. But this series is the WRONG way to attack the code interview, which deserves to be attacked, BUT.

Some of us face real challenges about how to find common ground with interviewees. That you know you're smarter than those questions is not a reason to discount them.

I will sacrifice ALL my HN points to say this is bullshit. Bullshit written in decent prose, but still bullshit.

sushisource 7 hours ago 2 replies      
So where can I buy my "Summon a linked list from the void" T-shirt?
strictnein 9 hours ago 0 replies      

 You're defining the natural numbers by hand? Why? Haskell is for mathematicians, you explain. We always define our terms.
That's just beautiful
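For anyone who hasn't read the piece: the gag is that the naturals get defined from scratch, Peano-style, at Haskell's type level. The same construction can be sketched at the value level in ordinary Python (class and function names here are mine, not the article's):

```python
# Peano naturals built from nothing: Z is zero, S wraps a predecessor.
class Z:
    def __repr__(self):
        return "Z"

class S:
    def __init__(self, pred):
        self.pred = pred

    def __repr__(self):
        return f"S({self.pred!r})"

def to_int(n):
    """Count the layers of S to recover an ordinary integer."""
    count = 0
    while isinstance(n, S):
        count += 1
        n = n.pred
    return count

def add(a, b):
    """Addition by structural recursion: (S a) + b = S (a + b)."""
    return b if isinstance(a, Z) else S(add(a.pred, b))
```

The article does all of this during type checking, so the "program" has computed its answer before it ever runs; the Python version at least shows why defining your terms takes so few lines.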

munin 9 hours ago 1 reply      
This is awesome. I didn't even really see what was happening until it was too late.

If you enjoyed that you might enjoy the structure of the proof of the complexity of type inference for λ-calculus: http://www.cs.brandeis.edu/~mairson/Papers/jfp02.pdf . They construct arbitrary boolean circuits from simple types and evaluate the circuits through type checking.

jasondebo 3 hours ago 0 replies      
"One queen per row, in every legal position, for every configuration. You imagine what their startup's about-us page would look like."

the coup de grâce!!!

Wazzymandias 1 hour ago 2 replies      
I honestly have no idea what's going on, but his writing is humorous.
occultist_throw 9 hours ago 4 replies      
How.. beautiful.

Like I said earlier in the "half-dead chicken thread", the occult is a quiet and powerful force in computer science and related areas. With things like neural networks and learning functions, we're approaching the ultimate.

In this case, it was only a handful of lines of a functional language that could solve the N-Queens problem.. Of course, mixed with a bit of Lovecraftian lore and Norse magic.

It's only a careful look beneath the stolid atheism and antitheism that the tech community likes to front.. All is not as it appears just beneath the digital waves, is it?

mrkgnao 10 hours ago 0 replies      
> Summon a linked list from the void. It floats to the surface of the screen: a timeless structure, expressed a thousand ways, but always beautiful.

Coincidentally, I had Bach's E minor organ sonata on in the background, so...

> You sigh contentedly.

I did, I started smiling from ear to ear as soon as I read the now-classic "Summon a ______ from the void". :)

Also, TIL: https://en.wikipedia.org/wiki/Sei%C3%B0r
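For the uninitiated, the "timeless structure" really is that small. A cons-cell sketch in Python (names mine), since the article's Haskell version lives at the type level:

```python
# A linked list summoned from the void: each Cons holds a value
# and a reference to the rest of the list (None marks the end).
class Cons:
    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

def to_list(cell):
    """Walk the chain of tails and collect the heads."""
    out = []
    while cell is not None:
        out.append(cell.head)
        cell = cell.tail
    return out
```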

eli_gottlieb 6 hours ago 0 replies      

 You smile kindly. Haskell is a dynamically-typed, interpreted language.
Thou shalt not suffer a witch to live!

azeirah 7 hours ago 0 replies      
> Seize two meaningless constants from the void, and imbue them with meaning.

Holy shit

tel 5 hours ago 0 replies      
This is the Old type art. It's tragic to see such unkind typing.
carterschonwald 4 hours ago 1 reply      
This is decent prolog code ;)
hifumi 8 hours ago 1 reply      
For the N queens on an NxN chess board, wouldn't you put them in a Fibonacci spiral? Of course I can draw the board and explain it, but I have no idea how to show that Fibonacci formula modified for a chess board. Do you think that would be enough?
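For comparison, the answer an interviewer conventionally expects is nothing geometric, just column-by-column backtracking. A sketch in Python (this is the standard algorithm, not the article's type-level version):

```python
def n_queens(n):
    """Return all solutions as tuples; index = row, value = column."""
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # A placement is safe if no earlier queen shares the
            # column or either diagonal with the candidate square.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(cols + [col])

    place([])
    return solutions
```

A Fibonacci spiral doesn't give valid placements in general; the classic 8x8 board, for instance, has 92 solutions, all found by search.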
danpalmer 9 hours ago 0 replies      
There was a great talk at Haskell Exchange 2016 along similar lines - defining a lot of complex functionality at the type level to derive correct functionality. https://skillsmatter.com/skillscasts/8893-is-a-type-a-lifebu...
pfarnsworth 37 minutes ago 0 replies      
This is his third one on interviewing and definitely the funniest. I wonder if he actually has been interviewing and doing this?
uyoakaoma 1 hour ago 0 replies      
This was a funny article :)
borp_borp 2 hours ago 1 reply      
EliRivers 9 hours ago 3 replies      
Sad to say, this is exactly the sort of magic that, as the conclusion suggests, gets people not hired. Even on this very enlightened forum people argue in favour of less knowledgeable candidates (even with all else being equal).
skybrian 8 hours ago 4 replies      
It's odd. There are people who complain on Hacker News about interviewers who ask about algorithmic complexity: "when are we ever going to need this?"

This is about wasting an interview demonstrating a semi-obscure technique that's fascinating but mostly useless, and it gets widely praised.

Seems like it's just that fantasizing about turning the tables on an interviewer is fun, never mind whether it makes sense or not.

The brain doubles up by simultaneously making two memories of events bbc.com
263 points by akbarnama  1 day ago   66 comments top 15
trishume 13 hours ago 6 replies      
Hypothesis: Deja Vu happens when the long term memory isn't suppressed properly during the first few days.

I often find that when I experience Deja Vu, I think a thing I actually did the day before happened a long time ago. It feels like I have two memories: one of a recent event and one a vague recollection of something from long ago. This sounds like exactly what one would expect if two memories are recorded, one suppressed for a few days and one forgotten in a few days, but the suppression fails.

Does anyone else have Deja Vu like this?

shahbaby 21 hours ago 1 reply      
Further supports the notion that understanding the neocortex is the key to understanding intelligence.

"It is immature or silent for the first several days after formation," Prof Tonegawa said.

What's going on during those first several days? It's probably re-arranging its model of the world to account for those new memories.

"The idea you need the cortex for memories I'm comfortable with, but the fact it's so early is a surprise."

The neocortex is constantly making predictions about the future, so it makes sense that it has some short term memory of its own to make those predictions from.

tannerc 12 hours ago 0 replies      
We've known this for some time, but this is additional evidence for the notion that our brain processes and prioritizes which information gets stored in short-term vs. long-term memory.

This also explains deja vu, though possibly not in the way many on this thread have tried to explain.

I wrote about this about two years ago; the leading theory of deja vu is that your brain processes and stores an experience in long-term memory before it's had a chance to properly store it in short-term memory. By the time the short-term memory has caught up, the event feels already lived, because it's already been processed and stored in long-term memory.

At least, that's one theory.

amelius 14 hours ago 3 replies      
Just hypothesizing but could it be that the brain is structured as two adversarial networks, which train each other e.g. during sleep?
willvarfar 21 hours ago 5 replies      
> Researchers then used light beamed into the brain to control the activity of individual neurons - they could literally switch memories on or off.

So can this be extended so that movie-plot-like memories can be wiped, or false memories implanted? How did they identify the memories to be targeted and where they were stored?

There's lots of exciting work to reverse-engineer and extract rules from software neural nets. Can the same be possible in hardware nets too, or will attempting to measure it interfere and distort it?

GeeJay 20 hours ago 1 reply      
Helpful for 'Memento', obsoletes 'Inside Out'.
lsh 22 hours ago 1 reply      
"The experiments had to be performed on mice, but are thought to apply to human brains too."

I look forward to human trials.

smitherfield 22 hours ago 1 reply      
Not much real-world import, but this fixes a few aspects of the movie "Memento" I had considered plot holes. :)

(How does he remember what "remember Sammy Jankis" means? How does he remember he has short-term memory loss?)

Dowwie 17 hours ago 1 reply      
is there a package I can install to upgrade mine, because this one seems to favor high availability over consistency
heisenbit 11 hours ago 0 replies      
This should have a major impact on learning strategies for long-term retention. Short-term retention may not be any indicator of long-term retention, as different mechanisms are at work there. But short-term memory, at least initially, will hide any long-term memory, so how can we measure and optimize what we remember in the long run?
dmitripopov 16 hours ago 2 replies      
Tons of "Improve your memory" books are irrelevant from this moment. No surprise that none of them actually worked for anyone except via the placebo effect.
bertlequant 17 hours ago 1 reply      
2nd copy is on tape though
lutusp 22 hours ago 0 replies      
This is another welcome step on our journey between psychology and neuroscience.
psyc 16 hours ago 1 reply      
How not to create traffic jams: Don't let people park for free economist.com
207 points by uyoakaoma  2 days ago   261 comments top 29
toast0 2 days ago 19 replies      
Certainly, if you make it less practical to arrive by personal car, fewer people will travel to places by car, and then you'll have less traffic. Unless this comes with a massive improvement in other means of transport, you'll probably have fewer visitors as well.

The problem with alternatives to personal cars is that personal cars have many desirable properties:

a) near zero latency to take a trip: if there's not a cab at my curb, I have to request it and wait -- or request it early and hope I'm ready when they are. Buses and trains are usually not waiting for me at the station.

b) proportional penalty for leaving late: if you miss a bus or a train by 10 seconds, you have to wait for the next one, which can be an hour. If you leave a couple seconds late in a car, you'll probably arrive a couple seconds late (around rush hour, it gets worse of course). If you don't make it to a requested cab in time, they may leave, and you have to wait for another one to come.

c) availability: a personal car generally provides the same service during the day, at night, and can be used in rain and mild snow (heavy snow, if properly equipped). Busses, trains, and even cabs have less availability at night.

d) flexibility/directness: a personal car can drive to almost everywhere, and can generally take a fairly direct route. Trains only go where there is track, and busses only go where there is a route. Cabs don't always pick up and drop off where you want to go. In case of an urgent change in circumstances, you can change your destination at will in a personal car or cab, but may not be able to easily redirect to where you're going in a bus.

mrbabbage 2 days ago 3 replies      
This article didn't touch on the interaction between free parking and public transit, and I've seen a few comments here talk about the dearth of good public transit options as justification for free parking. I believe it's worth pointing out that free parking causes bad public transit:

- free parking siphons away would-be bus and train customers, which deprives the transit authority of revenue (leading to less frequent service, older vehicles, etc.) and also reduces the political impetus to deliver high-quality transit.

- subsidized parking leads to lots of drivers circulating looking for an open spot, causing congestion and pollution -- the article mentioned that 53% of SF residential parking permit holders spent more than five minutes looking for parking at the end of their most recent trip. This congestion makes surface buses and trams run slower and with greater schedule uncertainty, making transit even more unattractive.

Obviously I would like great public transit in America yesterday, but I don't think the current state of transit is good reason to preserve subsidized parking. Preserving subsidized parking is going to keep transit in America as unattractive as it is now.

wanderr 2 days ago 4 replies      
Why does everyone think that the solution to traffic problems is to first make driving even more terrible and then oh yeah maybe get around to improving or even providing public transit? Why can't we swap the order, especially when it takes years to implement public transit improvements?

Also, we need to resist the temptation to assume that low utilization of a half assed public transit solution means that nobody wants it. In my small town, we had a bus line that had almost no utilization despite going between two desirable locations; an area with tons of apartments oriented towards students and the university itself. The city was talking about killing the line but someone convinced them to try making the bus run more frequently, and continue running later into the day as a trial. The line went from having the lowest ridership to the highest. Not every improvement will be that dramatic of course, but I think often times public transit is underutilized because it's not meeting the needs of the population.

Mz 2 days ago 0 replies      
I would really, desperately like to agree with this article, but I hit this point and may not read further because this is a completely clueless statement:

"If they do not also change their parking policies, such efforts amount to little more than window-dressing. There is a one-word answer to why the streets of Los Angeles look so different from those of London, and why neither city resembles Tokyo: parking."

I wanted to be an urban planner and I have done a fair amount of related reading. Los Angeles sprawled before it became known for being so car oriented, back when people took the tram and walked everywhere.

It sprawled because it was built in the desert. The fact that water has to be imported to the area means that you had to develop large tracts of land in order for the financing to make any sense. It has to do with how much it cost to develop the necessary underlying infrastructure.

Its layout is somewhat unique due to the environment and circumstances in which it was built, the way that Venice is unique for being built in a swamp. If you don't study the history of the place, you can't understand how it came to be the way it is.

I would love to see the U.S. become less car centered. I would love it if we stopped whoring our cities out to the cult of the car and built more walkable communities and provided better public transit. But arguing for some particular approach and basing that argument on completely made up facts without understanding the history behind the places used as examples does not in any way impress me.

mrfusion 2 days ago 0 replies      
Titlegore. I'm having trouble parsing all those negatives in the headline.
bhauer 2 days ago 1 reply      
Putting aside the awkward title, the article is confirming what we instinctively would expect: increasing the burden of commuting by car will reduce the number of people who commute by car. The cost of parking is an example burden.

Obviously life needs to balance many things, and increasing the cost of parking in an attempt to shift commuters to alternative forms is also going to decrease the number of people who want to commute to the destination at all.

I live in the Los Angeles area and significantly prefer living in and visiting cities that have free parking (e.g., the southern beach cities). I actively avoid cities such as Santa Monica that have costly and insufficient parking--and importantly, that means I don't spend any money at Santa Monica retail businesses. Alternative transit options are not appealing. Light rail, while expanding in this metropolitan area, is far too inefficient. Uber and Lyft are an additional cost friction that gets factored into any decision (e.g., where to go for a dinner out?). Plus I don't want to have to deal with an app to go somewhere.

All that said, self-driving cars may be a big game changer for my lifestyle. I would be more likely to visit Santa Monica if my car could find a suitable parking structure and park itself. To my mind the cost of parking would be less of a nuisance if I didn't even have to think about the parking process.

kevinburke 2 days ago 1 reply      
A concrete thing you can do about this is email your local City Council or Supervisor and ask them to reduce parking requirements for new buildings. Send them this article.

Many local municipalities require absurd amounts of parking per new housing unit, and many Baby Boomers show up to meetings to complain about how there's no parking and thus a new project should be denied.

tutufan 2 days ago 2 replies      
This analysis seems to ignore the benefits of free parking, both for drivers and for the businesses they're driving to. Personally, in cities where I have a car, if ample parking (day or night) is not available near a business, I patronize someone else. I do prefer public transportation, but there are only a few cities in the US where that actually works.
noddingham 2 days ago 2 replies      
Two things I'm seeing reading the comments:

Anyone referencing European public transportation should realize that over half the countries in Europe are smaller than the state of Iowa, and none of them save Russia & Turkey are larger than Texas. You're comparing apples to oranges.

Second, not one of the comments below mentioned the Auto lobby. If your beef is with the number of personal vehicles on the road vs. public transportation, you should probably look there first. Tell your billionaire friends to start outspending the $61 million spent in 2016 by Automotive[1] industry lobbyists (compared to the $1.4 million for Misc Transport[2]) and maybe you'll start seeing a difference.

[1] https://www.opensecrets.org/lobby/indusclient.php?id=M02&yea...

[2] https://www.opensecrets.org/lobby/clientsum.php?id=D00004700...

seangrogg 2 days ago 0 replies      
Chances are this would have the desired effect for me: I would simply not shop during those times. I enjoy purchasing items in a store and going to the theatres, but I hate paying for parking or dealing with public transit. So I would just stop patronizing stores and movie theatres and instead use Amazon and Netflix more than I do now.
Paul-ish 2 days ago 2 replies      
This article paints an overly rosy picture of self driving cars. A significant cause of traffic on our roads is single occupancy vehicles. Opening up the gates to zero occupancy vehicles could cause the number of cars on the road to skyrocket, because you are no longer bounded by the number of people.
droithomme 2 days ago 1 reply      
First, the double negative is simply terrible. Title should have been "How to create traffic jams: Let people park for free" so their point would be more clear.

Second, I'm no fan of cars or Apple, but if Cupertino doesn't want their largest employer there who is paying more taxes than everyone else combined, they should just kick them out. If they do want them there, and there's traffic problems, they should use the tax money to build gigantic boulevards so the Apple employees can drive wherever they want. They are taxpayers and their taxes pay for roads like everyone else. The city takes charge of building roads so they should do their job and build roads, or public transport, to serve the people paying taxes, especially those paying the most.

CodeWriter23 2 days ago 0 replies      
I think if two key driving skills were taught, traffic would be greatly improved.

The first, efficient merging, which means, no, you don't stop in your lane to move to the right, you speed up a little and leverage the accordion effect of the slower traffic on the right to find a spot big enough for your car to fit in and then merge. More efficient utilization of the space on the roadway, while keeping the lane you're leaving and the one you're entering moving at their respective speeds.

Second, speed up when you get out in front of the jam. This is well studied and found to mitigate the traffic jam behind, when enough drivers do this.
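Both effects fall out of even toy traffic models. Here is a sketch of the standard Nagel-Schreckenberg cellular automaton in Python (parameter names are mine); with the random-braking probability p above zero, phantom jams appear and drift backwards, and the cars that clear the front of a jam are exactly the ones free to accelerate away:

```python
import random

def step(road, vmax=5, p=0.0, seed=None):
    """One parallel update of the Nagel-Schreckenberg model on a ring road.

    road: list where -1 means an empty cell, otherwise the car's speed.
    """
    rng = random.Random(seed)
    n = len(road)
    new = [-1] * n
    for i, v in enumerate(road):
        if v < 0:
            continue
        # Distance to the next occupied cell ahead (periodic boundary).
        gap = 1
        while road[(i + gap) % n] < 0:
            gap += 1
        v = min(v + 1, vmax, gap - 1)   # accelerate, but never collide
        if v > 0 and rng.random() < p:  # random braking seeds phantom jams
            v -= 1
        new[(i + v) % n] = v
    return new
```

With p = 0 a lone car smoothly accelerates to vmax; raise p on a dense road and stop-and-go waves emerge with no bottleneck at all, which is why drivers accelerating out of the jam front genuinely helps dissolve it.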

mankash666 2 days ago 0 replies      
There already exist considerable barriers to car ownership:
1. Cost of the car, which in itself is a perpetually depreciating asset.
2. Insurance
3. Recurring service fees
4. Non-open market for parts and replacements
5. Price gouging of insurance if found liable in an accident

In my honest opinion, any design that doesn't attack the root cause is an improper one. Please make public transit clean, safe, affordable, and sufficiently widespread in coverage instead of increasing barriers to car ownership and ridership.

Jerry2 2 days ago 0 replies      
>How not to create traffic jams: Don't let people park for free

This title is horrible because it uses a double negation. Here's one without it:

>How to create traffic jams: Let people park for free

Isn't that much less confusing?

johan_larson 2 days ago 0 replies      
Here in Toronto we don't have per-unit parking requirements. Developers are constantly putting up towers with less than one parking space per unit.

It surprised the hell out of me when I bought a condo apartment. No parking spot included.

daodedickinson 2 days ago 0 replies      
I avoid places that don't have free parking almost as assiduously as I avoid places that don't have public restrooms. It's a sign that a place is not for people like me.
post_break 2 days ago 2 replies      
"Ohh. Ok. I didn't realize we were doing trick questions. What's the safest way to go skiing? Don't ski!"

Basically the argument here is to stop traffic jams from happening just make it so annoying that people won't drive. Well here in Texas you literally can't get anywhere without driving. Buses don't really run, there isn't public transportation to utilize, and you're going to tell me on a 100°F day to ride a bike 15 miles?

yarri 2 days ago 1 reply      
The rise of punitive solutions is real. I was involved in discussions with local municipalities placing (private) local schools under restrictions for not providing sufficient carpool coverage -- levying fines based on the percent of families carpooling.

Would the inverse of these punitive solutions, ie., encouraging carpool / ridesharing, not also work? It always amazes me how relatively unutilized the HOV lanes are.

matt_wulfeck 2 days ago 0 replies      
It seems to me that you have two choices when dealing with traffic:

1. Make the location less desirable.

2. Make the transportation more efficient.

Obviously nobody wants to do the 1st option, but the 2nd option is more difficult so they stick to the 1st. It seems a little backwards to me, like the goal is simply having fewer cars regardless of secondary or tertiary negative impact.

yazr 2 days ago 1 reply      
Quadruple negative it is!
chrismcb 1 day ago 1 reply      
Is there free parking in New York City? I thought parking rates there were astronomical AND the city has a decent public transportation system, and yet traffic is so poor... So there may be more to it than free parking.
mcguire 2 days ago 0 replies      
That's funny, I thought congestion charges were how to prevent traffic jams.

Could someone parse out 'The Grand Tour' chart for me? I have no idea what it is trying to present.

crispytx 2 days ago 0 replies      
This is a stupid article. You already have to pay to park when you go downtown.
xyzzy4 2 days ago 0 replies      
Honestly we should aim to replace free parking with free Ubers.
jankotek 2 days ago 1 reply      
Allow remote work; all that transportation is irrelevant anyway.
oculusthrift 2 days ago 0 replies      
sounds like a regressive tax that keeps poor people from visiting public places.
__m 2 days ago 0 replies      
You would think that traffic jams are a deterrent themselves, a fair one.
paulsutter 2 days ago 1 reply      
It's 2017 and we're debating about parking? Shouldn't we be remarking how quaint that the Economist can't see what's coming?
       cached 11 April 2017 04:11:01 GMT