hacker news with inline top comments - 10 Apr 2017 (Best)
1
Twitter refuses US order to disclose owner of anti-Trump account reuters.com
828 points by anigbrowl  3 days ago   482 comments top 28
2
hyolyeng 3 days ago 8 replies      
This is truly terrifying. The fact that the US government will pursue this kind of action, potentially exposing and punishing critics of the government, seems like how dictatorships, autocracies, and totalitarian regimes start.

"I disapprove of what you say, but I will defend to the death your right to say it." If we believe in the free America, this should be what we should all fight for, if we want to keep America for the reason it became great in the first place.

3
throwaway2048 3 days ago 5 replies      
It's disturbing how many posters here are directly equating banning users who actively post racist, hateful bullshit with handing over the user info of somebody who opposes the president.

Because they did one they should do the other? What?

4
ultimoo 3 days ago 3 replies      
https://twitter.com/ALT_uscis/status/850100381560578052

"well now on CNN! and we gained 17000 followers in less than 30 minutes. Thank you CBP/Trump"

I just learnt about the Streisand Effect this afternoon -- https://en.wikipedia.org/wiki/Streisand_effect

5
colanderman 3 days ago 5 replies      
And while apologists may bring up the Yiannopoulos case as an example of hypocrisy, let's keep in mind that Twitter is free to censor speech as it sees fit. The US government is explicitly not.
6
Neliquat 3 days ago 4 replies      
Not a fan of Twitter, but kudos for doing the right thing here, whatever their motivations. Free speech is being attacked by both the far right and the far left these days, it seems. Let's keep this thread on the rails a bit and thank someone for taking the high road, no matter how expected it should be.
7
incompatible 3 days ago 1 reply      
If the government can demand records for "an investigation to ensure compliance with duties, taxes and fines and other customs and immigration matters", they can demand records for anyone else with similarly vague justifications. It's fortunate that there are organizations like Twitter willing to take a stand against it, I guess many others would just hand over the data.
8
thinkloop 3 days ago 1 reply      
On the one hand we live in a big-brother dystopian society where the government is plugged into the internet backbone, has backdoors into all software and hardware, and knows everything being said by anyone. On the other hand, they can't find a Twitter account's email, they whine to Apple about a locked iPhone, online banking by regular people on virus-ridden machines (PCs) never gets hacked, and we only get celeb nudie drops once a decade. Are we really losing the security war that badly?
9
jquery 3 days ago 0 replies      
While I am totally on Twitter's side in this case, anyone who thinks Twitter is some sort of believer in free speech hasn't been paying attention the past couple of years. Free speech isn't a core value at Twitter; it's a shield to protect their business interests.
10
codydh 3 days ago 1 reply      
I think it sometimes gets lost that there's a difference between being harshly critical of another group's opinions, and threatening or harming another group. There are multiple ways in which they're different, and they're not both free speech (IMO).
11
dingo_bat 3 days ago 0 replies      
In other news, Twitter has been banning trump-supporting accounts left and right, without much uproar.
12
israrkhan 3 days ago 1 reply      
Waiting for a Trump tweet bashing Twitter...
14
mrmondo 3 days ago 1 reply      
Massive respect to twitter for standing up for privacy like this.
15
sergiotapia 3 days ago 2 replies      
Speaking as a Latino, it's funny that they defend this account but ban stuff like @PolNewsNetwork1.

Also funny that I have to clarify that I'm a "minority" or I'll be attacked as a racist xenomorph misogynist accountant bioslug edgelord.

Let's not paint this as Twitter defending civil liberties.

16
aetherson 3 days ago 0 replies      
Good for Twitter.
17
ultim8k 3 days ago 0 replies      
The law and the right thing are not always the same. I personally prefer to do the right thing and $#it the law, especially if I had the power to do what Twitter does. Law is decided by stupid people (e.g. Trump) for stupid people (his voters).
18
stefek99 3 days ago 0 replies      
My thinking in each and every case like this:

- official statement: "no no no"
- behind closed doors: "here is the data"

19
hysterix 2 days ago 0 replies      
The comments in this thread are just as pathetic as I've come to expect from ycombinator as of late.

Just search for the term "hate speech" and you'll see this pervasive cancer trying to erode the very fabric of our free society.

If anyone uses the term hate speech unironically, I'd like you to take a long walk off a short pier.

20
musgrove 3 days ago 0 replies      
Then all they'll do is find a reason to have it subpoenaed.
21
agrona 3 days ago 0 replies      
This is great. Although part of me has a desire for the person to be outed so Trump can lose yet another horrendously baseless first amendment lawsuit.

Hopefully a lot more visibly, this time.

22
known 3 days ago 0 replies      
"Never do anything against conscience even if the state demands it." --Einstein
23
dumbasswebsite 3 days ago 2 replies      
24
nodesocket 3 days ago 1 reply      
25
ryanmarsh 3 days ago 1 reply      
26
darkhorn 3 days ago 2 replies      
I've said it before: Trump acts like Erdoğan.
27
ryanmarsh 3 days ago 13 replies      
Would HN be so proud of Twitter if they had refused to disclose the owner of an alt right account?
28
Sunset 3 days ago 0 replies      
Just start holding C_Os in contempt in solitary. See how long twitter's resistance lasts.
2
New York City bans employers from asking potential workers about past salary nytimes.com
711 points by mendelk  3 days ago   513 comments top 59
1
pmoriarty 3 days ago 18 replies      
I never answer questions about my past or expected salary, not to employers and not to recruiters.

Most employers don't ask, and the few that have (perhaps by having a part of an employment form ask for previous salary) have never made my leaving that information out an issue.

Most recruiters, if they even ask, respect my decision not to talk about it, but I've been pressed hard on this by a handful of recruiters, and have had this be a deal breaker for a couple of them. One recruiting firm admitted that they were paid by the employers to get this information. I wasn't getting paid to give this information out, however, and it's worth more to me to keep it private as I'm placed at a disadvantage in negotiations if I name a number first.

It's still a seller's market for IT talent, and there are plenty of other fish in the sea, so if some recruiters can't accept that I won't name a number, it's their loss.

It's great that NYC is taking the lead on this, and I really hope the rest of the US follows suit.

2
showmethemoney 2 days ago 3 replies      
Once upon a time I interviewed for a role in NYC. An employee that I spoke to said they paid pretty well, and I could expect about 120. The HR person wanted my previous salary, and I refused. Eventually they said their range was 130-150. I said it wasn't gonna work cause I was looking for something more like 220. They said okay we can do that no problem. My previous salary was 110.
3
garethsprice 2 days ago 0 replies      
Lots of people here talking from their own experience as highly skilled, in-demand professionals.

However, helping friends apply to jobs in other industries - specifically medical - I saw that most of the applications involved filling out an automated form that required prior salary information to complete.

There's no advantage to an employee from being forced to disclose this information and it perpetuates compensation discrepancies by gender/race/guts to ask. Very glad to see this made illegal.

Now, if they were really serious about fixing pay discrepancies, they'd make it mandatory to post salary ranges with job listings.

4
jimparkins 3 days ago 11 replies      
Have a google for "can I lie to an employer about past salary" - it really, really messes with people. People feel super uncertain about how to approach this situation, throwing any confidence they have during the negotiation out the window.

Even now I hesitate to write this, as a million people will come out and say "never lie - what if they find out?"

More than banning the question, there needs to be acceptance that if someone asks you, you are totally free to make up any damn number you like. Seriously. It's a sales situation. It should not be like you're under oath on the stand, which is how most people view it.

5
mdb333 2 days ago 1 reply      
It will be interesting to see how this affects the hiring markets. Out in SF/etc it came up in just about every discussion I had last time I was looking for work usually as part of the first phase. No point interviewing candidates that wouldn't accept the job. It's pretty much a risk mgmt exercise from the hiring side. Similarly, I always asked what the compensation range they're targeting is as I don't want to waste my time either.

I wonder if this ban addresses background checks covering the same information, because some companies do ask for this data from previous employers although not all provide it. Without protection there this ban seems fairly limited.

Anyhow, I don't agree with all advice to never disclose current/previous salary. In some scenarios certainly it makes sense, but in others it is the opposite. You want to justify a higher market value and set the expectation that you're unlikely to be interested unless they're willing to compensate at $X or higher. Of course it's different in terms of leverage if you're employed currently or not. Recruiters and interviewers will waste tons of your time if you don't get on the same page quickly. Lack of transparency around your compensation expectations will exacerbate this issue. Whether that means you tell them what you're making or what you'd like to make doesn't really matter, but you better do at least one of the two.

6
pbasista 2 days ago 1 reply      
My past salary is irrelevant information for my potential future employer. If they ask about it, my response would be: "Why would you like to know?" Any answer to this question is bad. If they do not back off and stop asking at this point, then I bail out.

The point is that if I want, I can completely change my way of life by switching to a job which pays 50 % of my current salary. Or 400 % of my current salary. It does not matter. What matters is that it is solely my decision and none of my potential future employer's business.

If they want to know my current salary, it is a red flag. I do not care about them knowing it, but there is a high risk that they will use that information to try to make an offer which they think that I ought to consider good. They can offer e.g. my current salary + their negotiating margin and think "hey, we have offered you more than you have now, so you ought to be happy". While in reality, the only person who can responsibly decide whether I am happy about it or not is me.

Note that I am not criticizing companies which want to hire for cheap. This is all right. But they need to do it transparently, from the beginning. They should say it clearly and upfront: for this position, our budget is somewhere in this range ... are you interested or not? This is a fair way to go.

7
paulcole 3 days ago 10 replies      
Good. Not enough people realize the best way to answer this question is with a straight-up lie.
8
lend000 2 days ago 6 replies      
There's a case to be made for an employee protection preventing an employer from firing an employee for refusing to produce pay stubs from a past employer. However, I've said it before -- preventing a question from being asked is state overreach and constitutes a violation of the first amendment, in my opinion.
9
dkrich 2 days ago 1 reply      
This is the #1 rule I always tell people who are interviewing for a new job. Never tell a potential employer your past job salary, especially so if you are unhappy with that salary.

When my girlfriend was interviewing for a new job two years ago we talked about this because the recruiter was very demanding about knowing what her current salary was, and I told her to stay firm on it because the salary in her current job was, frankly, shit. In the end she got a 50% pay increase over her previous job and then six months later got promoted with a pay increase that effectively doubled her salary from her previous job. Which brings me to another point: a lot of people will justify disclosing the amount by reasoning that they can always ask for a raise after they get hired and the important thing is to get a foot in the door. The problem is that anchoring is a very real thing. If you start at $50k instead of $75k, every raise you get at that company for the rest of your career will be based off that first salary. If you stay at a company for 10-15 years, that is an enormous difference that could be well into the six figures.
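To put rough numbers on that anchoring effect (a purely hypothetical sketch: identical flat 3% annual raises and nothing else changing), the gap compounds like this:

    # Hypothetical illustration of salary anchoring: two people in the same
    # job, differing only in starting salary, each getting a flat 3% raise/year.
    def total_earnings(start, years, raise_pct=0.03):
        salary, total = start, 0.0
        for _ in range(years):
            total += salary
            salary *= 1 + raise_pct  # every raise compounds on the anchored base
        return total

    gap = total_earnings(75_000, 15) - total_earnings(50_000, 15)
    print(f"Cumulative 15-year gap: ${gap:,.0f}")  # roughly $465,000

With those illustrative assumptions the cumulative difference over 15 years is indeed well into the six figures.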

Bottom line, don't disclose salary history to employers. You'll seldom find an employer who will tell you what your colleagues in the same job make. Why do you want to show your hand?

As for this law, I'm actually mildly opposed to it. I don't think that the government should have a hand in determining salary beyond minimum wage, because that is an agreement made between two consenting parties in private industry. If you are a more experienced negotiator and are willing to ask for more money than your counterparts, why shouldn't you be at an advantage? There's no law that says you have to disclose it and the rest is up to you.

10
ransom1538 3 days ago 7 replies      
So people on HN are pro employers posting their employee salaries publicly but against previous companies asking what their salaries are. WTF. I must be missing something here.

I personally believe your salary is your business. Period. Getting salary information is bad; forcing employees to divulge salaries from a position of power is disgusting.

Here is me being publicly quartered on HN for pushing back on forcing employees publicly posting salaries: https://news.ycombinator.com/item?id=12805814

11
Taylor_OD 3 days ago 1 reply      
I'm not sure how effective this will be. When I was recruiting, it got to the point where I would never ask salaries; I would just say, "I assume you're currently making between XXk and XXk?" and 9 times out of 10 I was in the right range. 1 time out of 10 they would say no and correct me. I just took an educated guess based on knowing the market. Any good recruiter should be able to do the same.
12
aglavine 2 days ago 0 replies      
I advocate for all salary data being public.

Getting a better salary and better working conditions cannot depend on hiding a figure. It is a weak position, to say the least.

13
ElijahLynn 2 days ago 1 reply      
That is fantastic news!

I recently interviewed with a prominent Drupal company, Forum One, and was shocked that they not only asked for my previous salary, but previous 3 salaries and also wanted me to verify them with pay stubs! I told them no and the interview stalled after that. That was a sad day, I really wanted to work with them but what they asked for was unacceptable.

They were not in New York but I welcome this law everywhere.

14
southphillyman 3 days ago 3 replies      
They are trying to pass this in Philadelphia in May. Comcast and the local Chamber of Commerce are suing to stop it citing that it's a Freedom of Speech violation.

Besides being able to freely low ball candidates who started behind the 8 ball (women/minorities/people who didn't go to elite schools), is there a real argument for companies HAVING to know your previous salary?

15
mbroshi 3 days ago 3 replies      
The analogy is not perfect, but people like to know how much a house or car sold for in the past, or how much a stock traded for in the past, etc. Seems like useful data.

Now, I'm not advocating for or against this particular question, but I sure do hope there is data collected and studied on the effect this has in NY before anyone jumps to conclusions. I feel it's too easy to have a knee-jerk reaction on this one.

There is no way to know whether this helps/hurts/is neutral for any particular class of people without studying its effects.

16
lr4444lr 2 days ago 0 replies      
A lot of the analogies and reasoning I'm reading here is faulty. Nothing stops HR depts. from sharing information which to a first degree of approximation would give them an idea of the market rate for the positions they're filling. Companies are already free to rescind offers if they find out that you've lied about aspects of your past work history. It's part of what at-will employment is. This is about interfering with the negotiation of the individual job seeker. If I can't ask for your salary history, then I might miss out on the competitive advantage you as a candidate have in that you're willing to work for less over potentially more qualified people as you build your skill-set and expertise. The right way to help those who are taken advantage of is education about how to bargain and what information they need and need not share, not legislation.
17
arunitc 2 days ago 1 reply      
Here in India, you are not only asked your previous salary, you have to provide your last payslip (sometimes the last three) when joining. Some companies have a policy of NOT giving more than a 30% hike over your previous salary - you need top management approval for such a hike.
18
georgeecollins 2 days ago 0 replies      
I have gotten out of discussing past salary by saying that I believe salary history perpetuates wage discrimination. I am a white man, so I think this only makes me seem idealistic rather than disgruntled. But I do honestly believe this to be true.

Smart people who negotiate your salary do not want to get anywhere near a conversation that has the phrase "wage discrimination" in it for any reason, even hypothetical. They do not want to argue about what is or isn't wage discrimination, even if they disagree. So it can end the discussion with both parties feeling they are being high minded by avoiding the topic.

19
crazy1van 2 days ago 2 replies      
Laws like this one, which restrict businesses from asking questions that individuals could ask, strike me as very strange.

I could ask my buddy how much he made at his last job.

I couldn't ask my buddy how much he made at his last job if I'm considering hiring him to work at my company.

Why is this information legal to acquire when I'm wearing my business hat vs when I'm wearing my friend hat?

20
Ultramax 3 days ago 4 replies      
Seems like an ethics test. Will you tell the truth about your previous low salary or lie to make sure you don't get lowballed?

Then for the employer, will you decrease the salary to match previous low rates or wages? Or will you pay him/her the market rate regardless?

Personally, I have always been asked how much I made at previous places. I prefer to give a range rather than a specific number.

21
gdulli 3 days ago 2 replies      
I'll have to find some way to bring it up myself. A higher than usual salary history is a good tangible signal to future employers and I'm not going to give it up.
22
riskable 2 days ago 0 replies      
Whenever I'm asked for my current or former salary I tell the recruiter/HR person to first tell me the salary of all the peers I'll be working with. If it's a contracting company I also ask how much they'll be paid for my position.

If they don't provide those money details why should I?

23
retube 2 days ago 1 reply      
This will be tricky for banks and hedge funds etc. These firms typically, in London at any rate:

- never quote a salary (or even a range) for a job opening

- following many interviews, any offer is always based on prior "comp", of which you will have to provide 3 years of info (base, bonus, deferred awards, retention awards, benefits, etc.) plus, of course, documentary evidence to support it. HR departments start squealing if the uptick is greater than a 20% increase, although some people do manage higher (anything over 30% is almost unheard of, except for a handful of big producers)

- they go through your background with a fine-tooth comb and check EVERYTHING you supplied on your CV and in the screening questionnaires you have to complete. They employ specialist third-party agencies to do the research on you.

If you do not comply with this you simply won't get hired. It is universal and practiced everywhere in finance, I've never heard of anyone not being subjected to this.

24
highdesertmuse 2 days ago 1 reply      
Mandating that employers are forbidden to ask about previous salary seems like a pathway to more government intrusion into a complex process. Part of job seeking involves learning how to negotiate with a potential employer. I've side-stepped many an inappropriate or premature question about salary with "Are you making an offer?", followed by agreement to disclose the "details" if and when such an offer is forthcoming, at which point I know far more about what the position involves, am in a much better negotiating position, and can explain, if necessary, why the pay scale was lower. If that doesn't satisfy a recruiter or potential employer, I tend to think they're not very serious about me as a candidate, and I haven't disclosed personal information that might limit my future job search.
25
droopyEyelids 3 days ago 1 reply      
This is a beautiful worker protection, and it makes me feel optimistic about our country's future.
26
csneeky 2 days ago 0 replies      
I know one employer this will impact: Goldman Sachs. Not only do they demand you tell them what your current salary is during the interview process, they force you during a rigorous background check before your first day to prove it.
27
loufe 3 days ago 1 reply      
"Closing the gender pay gay is important" Come on NYT....
28
user5994461 2 days ago 0 replies      
"It will take $xxx k base salary to leave my current company" -- The ultimate answer to life and the universe and everything.
29
somethingwitty1 2 days ago 0 replies      
I'm a bit skeptical that this will have a real impact. It may stop employers from asking, but it does not stop them from low-balling someone based on race/gender, etc. HR departments already calculate this into their offers.

Don't get me wrong, I think this will help some folks and is a great step forward. People who know their value and what the employer pays will surely benefit (or great negotiators). But for the people this is touted to help, they can still very easily be kept in the lower-end of the pay scale.

It seems to me that the only way to close the pay gap is to have employers release what they pay. That in itself is a tough thing to write legislation for. In my mind, it would need to take into account experience, actual role/leveling, etc. A lot of it is subjective and easy to manipulate to help the employer. And of course, if we just release a straight list of all employees' salaries (with the details needed to calculate where you fall), you may run into privacy concerns.

30
taternuts 2 days ago 0 replies      
Personally I think it's good when they ask. I no longer answer that question with the truth, but as if they asked "what do you think you're actually worth?" - and you can find out pretty quick by their response/reaction whether or not they are going to try and lowball you.
31
fmsf 2 days ago 0 replies      
My experience with this was in London: a twitter.com recruiter contacted me a couple of years ago. At the time I was searching for a job, so I replied that I was interested. We had the usual HR/recruiter call. It ended with them asking how much my previous employer was paying me. I said that was confidential and I would not share it. Their reply was "this information is mandatory and we cannot continue the process if you will not provide it." I said I wouldn't and it ended there... Two months later their recruiter contacted me again saying it was OK for me not to share and to provide expectations instead.
32
losteverything 2 days ago 0 replies      
For me it's not the salary (I've been out of tech and would not expect much negotiating power), but how do I quantify primo health insurance, 31+ days off a year, 5 months of sick leave, and a 1/2-mile commute?

I really don't know how to assign a number to all that.

33
paul7986 3 days ago 0 replies      
Totally avoid being asked this question by setting the salary you seek in stone before going to the interview. Especially if you're working with a recruiter!

If they suddenly ask this question then they are trying to renege on their promise, and for me it's time to go.

34
pm24601 2 days ago 0 replies      
> Underlying the bill is the notion that employers anchor the salaries they offer to potential employees based on their current or previous salary; if an employee had faced pay discrimination at a previous job, in other words, the employer's subsequent lower-than-market-value offer would effectively perpetuate the discrimination.

> "Being underpaid once should not condemn one to a lifetime of inequity," said New York City Public Advocate Letitia James.

... Or if a person does not know how to negotiate.

35
funkyy 2 days ago 0 replies      
I was asked this before. When I refused, they said it was mandatory. I gave my number as 30% more; there is no way for them to confirm it. I think this is an unfair practice, so if they are forcing you to play the game, why would you be fair? I got the job, and as a junior I was making the same money as a senior, minus the benefits.
36
infodroid 2 days ago 0 replies      
As far as I know, there's nothing to stop employers from exchanging salary information directly with previous employers under quid-pro-quo agreements.

And there's nothing to stop employers from buying/renting this information from third parties, such as recruitment agencies or brokers in personal data.

All of these are arguably more reliable sources of salary information than asking the candidate directly.

37
ajeet_dhaliwal 3 days ago 0 replies      
Well done New York City! London needs this desperately, most recruiters here won't let you proceed without it, unless you apply directly.
38
jondubois 2 days ago 1 reply      
I think this law is silly.

If you don't like this question, then you can either choose not to answer it or use it to your advantage by 'rounding up' your numbers - This should actually help you with your negotiations as an employee.

In the unlikely event that they ask for proof, you can always tell them that your personal finances are a private matter between you and your accountant.

39
inputjoker 3 days ago 0 replies      
It should also ban potential employees from disclosing the pay of previous employment. Otherwise it would soon become the norm that companies specifically won't ask about it, but will only hire the candidates who disclose it themselves, which pushes candidates to disclose their previous income on their own for a better shot at getting hired.
40
rdiddly 2 days ago 0 replies      
Wow, this is yuge. Sure I can refuse to answer the question, but nothing beats a law that makes the question futile and unlikely to be asked in the first place.

Granted employers will just scrounge around on the internet for your salary info instead, which is slightly creepier, but at least they have to work for it.

Can't wait for other states to follow suit.

41
pklausler 2 days ago 0 replies      
It's a sellers' market. If you're looking, just state your requirements and negotiate from there (or not). If you're not looking, then your current compensation is your current job's best offer, and you might as well share that information in case somebody wants to beat it.
42
donovanm 2 days ago 0 replies      
In the interviews I've had so far the 'how much are you making' question comes up at the beginning of the process almost everywhere. It's an obvious lowball play, anywhere that is super aggressive about asking for this is a giant red flag to me.
43
nfriedly 2 days ago 1 reply      
When IBM hired me, the only question I was asked about salary was "how much would you like?" I picked a number and he said "I think I can do that, let me check" followed the next day by "yes".

Maybe I left money on the table, but I'm happy, and I've gotten good raises since.

44
kelukelugames 3 days ago 0 replies      
I dislike employers who take advantage of employees.

I dislike New York City's solution too. There are other ways to empower employees and not asking about past salaries won't eliminate the pay gap. But I'm not an expert so maybe this is a step in the right direction and can result in lasting changes.

45
gesman 2 days ago 0 replies      
Instead of salary, it may be worth disclosing a ballpark of total compensation including shares/options vesting, bonuses, etc., without breaking it down.

After all, the "desired salary" question gets answered sooner or later, but in reality the total compensation is more important.

46
tinythrowaway 2 days ago 0 replies      
Throwaway for obv reasons. This is fantastic news. But for those of you wondering what to do if you are asked this - it's pretty simple: you lie. This is a negotiation. If you ask a car salesman how much the dealership paid for a car, you think they're going to tell you the truth?

"What if they find out?" They won't. How could they? There ARE laws, very clear and absolute ones, about what a past employer can share about you. Your salary history is most certainly protected by them.

"What if they ask for a paystub?" Ask yourself if you know what you're getting into here. This is not a company that is going to treat its employees well.

47
aphextron 3 days ago 0 replies      
I never understood this one. I have always told recruiters straight up that I'm not going to disclose that information. Nobody involved with making the hiring decision is going to be asking you how much you made at your last job.
48
rodionos 2 days ago 0 replies      
What effect is this going to have if the state agency wages are already online: https://catalog.data.gov/dataset?tags=salary
49
dudul 2 days ago 0 replies      
I interviewed at a big phone/ISP company a few months ago in MA.

They would not make me an offer unless I shared my past W2 with them for them to make sure my past salary was what I claimed it was.

Needless to say, I laughed at them and walked away.

50
bingomad123 2 days ago 0 replies      
Personally I have found disclosing salary to be far more beneficial in negotiations. I guess those who wish to disclose their salary still can, right? Or has the law banned employees from disclosing it?
51
Overtonwindow 2 days ago 1 reply      
I once worked for a large trade association in Georgia. The Director had a general policy that anyone who refused to provide expected salary was not even considered, at all, no exceptions.
52
dingdongding 2 days ago 0 replies      
So I should not be divulging my current salary to recruiters? Interesting.
53
parasight 2 days ago 0 replies      
Is it legal to lie about the past salary in the US?
54
awinter-py 2 days ago 0 replies      
We should make it illegal to ask about college degrees. That would erode the signaling value of a degree and bring down the price / paper value of a diploma.
55
ballenf 3 days ago 1 reply      
Would have rather seen a bill also legitimize complete and total lying in response to the question. Answers or claims regarding past salary may not be verified or used as the basis of any employment action.
56
xyzzy4 3 days ago 5 replies      
This is bad for the free market and freedom of speech. I hope it gets challenged on First Amendment grounds.
57
thoreauway 3 days ago 0 replies      
When does this take effect?
58
mrcactu5 3 days ago 1 reply      
Umm... laws like this don't last very long. Employers quickly develop workarounds. I will be interested to see what hiring managers try (or fail at).
59
rodionos 2 days ago 3 replies      
Don't have to ask. The data is open:

https://gist.github.com/rodionos/b77080e028e3b680b2c1b5091ba...

Source: https://catalog.data.gov/dataset/civil-list-2014

> The Civil List reports the agency code (DPT), first initial and last name (NAME), agency name (ADDRESS), title code (TTL #), pay class (PC), and salary (SAL-RATE) of individuals who were employed by the City of New York at any given time during the indicated year.

3
Snowden: NSA just lost control of its Top Secret arsenal of digital weapons twitter.com
640 points by Yrlec  1 day ago   294 comments top 31
1
cyphunk 1 day ago 4 replies      
A good time to remember the official US Intelligence Community statement and policy/lie on 0days, as given post-heartbleed:

 When Federal agencies discover a new vulnerability in commercial and open source software (a so-called "Zero day" vulnerability, because the developers of the vulnerable software have had zero days to fix it), it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose.
https://icontherecord.tumblr.com/post/82416436703/statement-...

https://news.ycombinator.com/item?id=7575802

2
spydum 1 day ago 1 reply      
Why is everybody posting/curious about the language of the blog post and not the contents of the file?

I've looked through some of the contents. Some look incredibly old, but others target odd things... lots of cPanel. My only guess is they take the low-hanging fruit to build "jump box" type systems?

Some odd examples: ElegantEagle/toffeehammer focuses on cgiecho for RCE. The thing is, a CVE was just released for this maybe a month ago: http://www.cvedetails.com/cve/CVE-2017-5613/

So if this dump was from 2013, why did the CVE only recently pop up? Or is that a coincidence?

3
sillysaurus3 1 day ago 9 replies      
It's pretty fascinating to read the Shadow Brokers' posts. They have to write something, since they can't just say "I work for Russia and we're reminding America that they're not invulnerable." So they have to come up with all sorts of contrived reasons for why they're doing this, complete with broken English to fool stylometry detection, walking the fine line between believable and preposterous. Someone spent a lot of work getting it to look so terrible.
4
tenaciousJk 1 day ago 0 replies      
He goes on to further state:

"Quick review of the #ShadowBrokers leak of Top Secret NSA tools reveals it's nowhere near the full library, but there's still so much here that NSA should be able to instantly identify where this set came from and how they lost it. If they can't, it's a scandal."

5
itchyjunk 1 day ago 1 reply      
Asking a president to do x, y, or z by making this type of public statement probably implies it's geared towards the immediate readers and not some leader who might read it.

The security agencies might have made a lot of enemies over the years, so it's not clear who benefits from this, either financially or as an ego boost.

The internet is definitely bigger than what most people might have predicted 20 years ago. So it's not really surprising to see as much or even more power struggle here than on real-world battlefields.

Since every side has propaganda to peddle, I personally can draw no reasonable or coherent conclusions on what kind of decisions are shaping the world I live in. But I am nonetheless curious to see how this all plays out in the coming years.

There is a related post on HN about this. [0]

---------------------------------

[0] https://news.ycombinator.com/item?id=14066596

6
iandanforth 1 day ago 1 reply      
Can someone remind me why Snowden would be in a position to comment on whether this release comprises a full or partial set of hacking tools? Specifically, does this imply that his cache of data included a list of these tools, or was his day-to-day job one such that he would have normally been in contact with this toolset?
7
hl5 1 day ago 2 replies      
Obviously, Perl is the NSA's top language choice due to its built-in support for obfuscation and job security.
8
akud 1 day ago 2 replies      
The content reads pretty clearly like a native English speaker imitating immature hacker-speak. It comes across as if it were written by a script-kiddy; that may be intentional.
9
r721 1 day ago 1 reply      
Nicholas Weaver: "Overall, though, it looks like the auction file from Shadow Brokers is mostly a bust, better stuff in the free file."

https://twitter.com/ncweaver/status/850797548717481984

the grugq: "Calling it now: the first ShadowBrokers dump was an expensive signal. This latest one was not (expensive, that is.)"

https://twitter.com/thegrugq/status/850825305845399552

10
theocean154 1 day ago 4 replies      
Looking through some of the code and some of the docs, these look old. In absence of a lot of time or some missing docs, not sure how usable these things are.
11
mcintyre1994 1 day ago 1 reply      
From the Medium post linked (https://medium.com/@shadowbrokerss/dont-forget-your-base-867...)

- Dont care if you swapped wives with Mr Putin, double down on it, Putin is not just my firend he is my BFF.

- Dont care if the election was hacked or rigged, celebrate it so what if I did, what are you going to do about it.

This has got to be a fake group trying to discredit Trump right? I don't like him or what he's doing, but surely surely his supporters don't subscribe to at least the latter view there?

12
strictnein 1 day ago 0 replies      
> "NSA just lost control of its Top Secret arsenal of digital weapons"

This is just inaccurate, or at least purposefully misleading. The NSA did not just lose control of its "Top Secret arsenal of digital weapons".

They "lost control" of mainly a bunch of old exploits whose release will not matter because anyone who is running this old junk won't be updating their servers because of this news.

13
codezero 1 day ago 1 reply      
A lot of the scripts appear to have been written by the same person, or is that just me reading into it? They have a distinct comment style in both Python and Perl.

Also, a lot of the tools appear to instruct people to paste various things in to them. I find it unlikely that a single person wrote all the tooling for the NSA, but, who knows.

15
i336_ 9 hours ago 0 replies      
Excuse me while I just...

ALLL RIIIIGHT!!

Not because I'm especially interested in the tools (although, granted, I have not had a look at any of them yet), but because I always wished this could be given to everyone.

Also, for a moment there, I was concerned 7z was insecure and that the passphrase had been bruteforced. Apparently not! Very nice.

16
remarkEon 1 day ago 3 replies      
I haven't read enough broken English to take a gander at what the native language is for the authors of that... manifesto. Anyone have a good guess? There are some pretty common mistakes throughout ("peoples" for people, "Americans' having" for "Americans have").
17
znfi 1 day ago 0 replies      
I have a bit of a hard time understanding why so many people think this is written by Russians. Obviously the grammar is not correct, but it would seem very strange to think this has any significance, and it seems more plausible that it was done in an attempt to hide the authors identity. (My spontaneous feeling was that it was written by Jar Jar Binks, and not Russians, for whatever that's worth.)

I'm not from the US and have not followed the news from there recently, but from what little I have seen, much of the actual content of the message does seem to reflect the feelings of Trump's "base"? Or would people more familiar with US politics say this is incorrect?

18
Animats 1 day ago 2 replies      
This stuff looks old. There are versions for Solaris and SCO Unix.
19
jasonhansel 1 day ago 1 reply      
I wonder what this is for: https://github.com/x0rz/EQGRP/blob/master/Linux/bin/strangeF...

It looks like it's searching for files/directories with unusual names (like ". ") that system administrators wouldn't normally notice.
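Roughly, the idea it seems to implement looks like the following (a minimal hypothetical Python sketch of my reading of it, not the actual tool): walk a tree and flag names that blend into normal directory listings.

    # Hypothetical sketch: flag file/directory names that visually blend in
    # (whitespace-only, leading/trailing spaces, names made only of dots and
    # spaces), which admins tend to overlook when skimming `ls` output.
    import os, sys

    def looks_suspicious(name):
        return (name != name.strip()          # leading/trailing whitespace, e.g. ". "
                or name.strip() == ""         # nothing but whitespace
                or set(name) <= {".", " "})   # only dots and spaces, e.g. "..."

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if looks_suspicious(name):
                print(repr(os.path.join(dirpath, name)))  # repr exposes the whitespace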

20
jorblumesea 1 day ago 2 replies      
Is there any doubt the Shadow Brokers are Russian and working for Russian interests? The timing of releases, international events concerning both countries and pointed measures are far too suspicious to be considered circumstantial.
21
eps 1 day ago 1 reply      
Likely a response to the Syrian airbase tomahawking from a couple of days ago?

Russians are known for what they themselves call "asymmetrical answers", so this seems to fit the pattern.

22
0x38B 1 day ago 0 replies      
Like others are saying, there's a mismatch between the overall sentence structure and progression - which strikes me as more native - and the mistakes. I don't buy the verb misconjugation especially, a Russian ESL learner at that level would get that right more often than not.

Source: many conversations with Russians learning English (also near-native Russian)

23
hl5 1 day ago 0 replies      
Regardless of the source, full disclosure works. Whoever is responsible for releasing this material is also improving computer security for everyone. Thank you.
24
fixxer 1 day ago 1 reply      
I don't know anything about the value of this crap, but I do find it interesting to grep through looking at the IPs (which I presume are compromised machines from which they are initiating attacks). See `./bin/pyside/targets.py`
25
zengid 1 day ago 0 replies      
All of this spy vs spy intrigue makes my head hurt
26
mavdi 1 day ago 5 replies      
Given the latest world events, I've personally come to realise that security agencies play an important role in keeping us safe, from external entities or from ourselves.

This is a disaster, in my (current) opinion. We tend to dismiss the work the likes of the NSA do, not thinking much about what would happen if they didn't do it. Snowden categorically dismissing anything the NSA does just means he's a deluded idealist, much like I used to be.

27
Harken 1 day ago 1 reply      
"We voted for you, comrade. Here is old malware from deepnet kiddy porn site post for to confuse."

Could be Russia pissed about the puppet twitching without permission, or could be Bannon (via Cambridge Analytica?) pissed about the puppet twitching without permission.

Twitch, puppet, twitch!

28
theocean154 1 day ago 1 reply      
ElegantEagle. nice
29
elastic_church 1 day ago 1 reply      
ShadowBroker's blog posts always crack me up
30
shitgoose 1 day ago 0 replies      
shadowbrokerss remind me of this guy:

https://www.youtube.com/user/FPSRussia

100% American from Georgia, sometimes loses Russian accent and slips into perfect English:)

31
lngnmn 1 day ago 0 replies      
Looks like bullshit. It does not match the vault7 leak, which is supposed to be from the very same NSA.

It is Russians. The classic example of Dunning Kruger effect. In a generally low IQ environment and primitive criminalized cultural environment they truly believe that what is enough to fool everyone around them, including the bosses (who are supposed to be really smart), will surely fool everyone else.

This is the phenomenon of negative selection in a cancer-like corrupted society (which has been running for three decades already) at work. They are literally decades behind the technological progress and culture of modern civilization.

They simply have no idea what level of intelligence and sophistication can be found in places with decades of consistent high-IQ-based selection, like companies staffed with the top 5% of MIT/Stanford/Caltech/Berkeley graduates, and what that kind of organization can do (think of Apple, Google, etc.).

A high-tech US govt agency would never have had such crap in their folders. They are not a bunch of disconnected-from-reality, overconfident Russian punks, self-deluded by their own primitive propaganda.

4
The reference D compiler is now open source dlang.org
556 points by jacques_chirac  2 days ago   298 comments top 26
1
iamNumber4 2 days ago 7 replies      
Good news indeed.

Switched to D 4 years ago, and have never looked back. I wager that you can sit down a C++/Java/C# veteran and say "write some D code; here's the manual, have fun." They will, within a few hours, be comfortable with the language and be fairly competent D programmers. Very little FUD surrounds switching to yet another language with D.

D's only issue is that it does not have general adoption, which I'm willing to assert is only because it's not at the forefront of the cool kids' language of the week. Which is a good thing. New does not always mean improved. D has a historical nod to languages of the past, and is trying to improve on the strengths of C/C++, smooth out the rough edges, and adopt more modern programming concepts. Especially with trying to be ABI compatible, it's a passing of the torch from the old guard to the new.

Regardless of your thoughts on D, my opinion is I'm sold on D; it's here to stay. In 10 years D will still be in use, whereas the fad languages will just be footnotes in Computer Science history as nice experiments that brought in new ideas but were just too far out on the fringes, limiting themselves to the "thing/fad" of that language.

2
jordigh 2 days ago 2 replies      
Walter, thank you so much for finally doing this! I am so happy that Symantec finally listened. It must have been really frustrating to have to wait so long for this to happen. I have really been enjoying D and I love all the innovation in it. I'm really looking forward to seeing the reference compiler packaged for free operating systems.

Thanks again, this news makes me very happy!

3
tombert 2 days ago 2 replies      
Honestly, since I'm slightly psychotic about these things, this is kind of huge to me. Part of the reason I never learned D was because the compiler was partly proprietary.

Now I have no excuse to avoid learning the language, and that should be fun.

4
WalterBright 2 days ago 4 replies      
And best of all, it's the Boost license!

Here it is:

https://github.com/dlang/dmd/pull/6680

5
vram22 2 days ago 1 reply      
Good to hear the news, and congrats to all involved.

Since I see some comments in this thread asking what D can be used for, or why people should use D, I'm putting below an Ask HN thread that I started some months ago. It got some interesting replies:

Ask HN: What are you using D (language) for?

https://news.ycombinator.com/item?id=12193828

6
JoshTriplett 2 days ago 2 replies      
Interesting change! Before, people had a choice between the proprietary Digital Mars D (dmd) compiler, or the GCC-based GDC compiler. And apparently, since the last time I looked, also the "LDC" compiler that used the already-open dmd frontend but replaced the proprietary backend with LLVM.

I wonder how releasing the dmd backend as Open Source will change the balance between the various compilers, and what people will favor going forward?

7
brakmic 2 days ago 4 replies      
Please don't get me wrong, as I don't want to start a flame here, but why do they call D a "systems programming language" when it uses a GC? Or is it optional? I'm just reading through the docs. They do have a command line option to disable the GC but anyway...this GC thing is, imho, a no-go when it comes to systems programming. It reminds me of Go that started as a "systems programming language" too but later switched to a more realistic "networking stack".

Regards,

8
bluecat 2 days ago 0 replies      
Something I always thought was cool about dlang was that you can talk to the creator of the programming language on the forums. I don't write much D code as of now, but I always visit the forums everyday for the focused technical discussions. Anyways, congrats on the big news!
9
saosebastiao 2 days ago 2 replies      
This was something that always rubbed me the wrong way about the language, and it was an impediment for adoption for me (for D, but also Shen and a few others). In this era, there is no excuse for a closed source reference compiler (I could care less if it's not a reference compiler, I just won't use it). I'm surprised it took this long to do this, it seems like D has lost most of its relevance by now...relevance it could have kept with a little more adoption. I wonder if it can recover.
10
softinio 2 days ago 4 replies      
What's special about D? Why should I learn it?
11
jacquesm 2 days ago 1 reply      
That is excellent news :)

Congratulations Walter, now let's see D take over the world.

12
petre 1 day ago 0 replies      
Is there support for BigFloat in D/Phobos or any auxiliary library? I was playing around with D sequences and wrote a D program that calculates a Fibonacci sequence (iterative) with overflow detection that upgrades reals to BigInts. I wanted to also use Binet's formula, which requires sqrt(5), but it only works up to n=96 or so due to floating-point precision loss.
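Not an answer on BigFloat, but here is a quick sketch of the precision-loss point in Python (its ints are arbitrary precision and its floats are 64-bit doubles, so the cutoff lands lower than the n=96 you see with D's 80-bit real):

    # Sketch: find where Binet's closed form, computed with 64-bit floats,
    # first disagrees with the exact iterative Fibonacci (big-integer math).
    import math

    def fib_exact(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    PHI = (1 + math.sqrt(5)) / 2

    def fib_binet(n):
        return round(PHI ** n / math.sqrt(5))

    n = 1
    while n < 200 and fib_binet(n) == fib_exact(n):
        n += 1
    print("Binet first diverges at n =", n)  # typically somewhere in the low 70s

The same comparison should translate to D directly, using BigInt (I believe it lives in std.bigint) for the exact side against the real-based Binet value.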
13
xtreak_1 2 days ago 0 replies      
Thanks a lot! I am also consistently amazed at the performance of the forum, running like any other day even though the story is at the top of HN.
14
zerr 2 days ago 4 replies      
Anybody worked on performance critical stuff in D? How good is its GC?
15
Samathy 2 days ago 0 replies      
Amazing! D has really been exciting me for the past couple of years. It has great potential.

Hopefully a fully FOSS compiler will bring it right into the mainstream.

16
petre 2 days ago 1 reply      
This is great news. I was using LDC because the DMD backend was proprietary. Thank you Walter, Andrei and whoever made this possible.
17
noway421 2 days ago 0 replies      
It's really surprising that to this day there are languages in use which have their reference implementations closed source. The optimizations and collaboration that become possible when it's open are invaluable.
18
virmundi 2 days ago 1 reply      
Have there been any new books out there to learn D? I have one that still references the Collection Wars (Phobos vs Native). Once I saw that, I put the book back on the shelf and stuck with Java.
19
snackai 2 days ago 0 replies      
This is big. I've heard from many people that this hindered adoption.
20
tbrock 2 days ago 0 replies      
Is anyone else surprised that it wasn't before?
21
imode 2 days ago 0 replies      
what is it like to 'bootstrap' D? I know in many languages you can forego the standard library and 'bootstrap' yourself on small platforms (C being the main example).
22
herickson123 2 days ago 0 replies      
I wanted to play around with D using the DMD compiler but it's unfortunate I have to install VS2013 and the Windows SDK to work with 64-bit support in Windows. I've installed VS in the past and found it to be a bloated piece of software I'm not willing to do again.
23
spicyponey 2 days ago 0 replies      
Tremendous effort. Congrats.
24
nassyweazy 2 days ago 0 replies      
This is awesome news!
25
joshsyn 2 days ago 4 replies      
Please get rid of GC :(

I want to have smart pointers instead

26
sgt 2 days ago 0 replies      
Off topic question; are you related to https://en.wikipedia.org/wiki/Jacques_Chirac ?
5
The Utter Uselessness of Job Interviews nytimes.com
523 points by tomek_zemla  22 hours ago   380 comments top 48
1
schmit 19 hours ago 10 replies      
I find it quite problematic that researchers get to talk about their own research and present it as facts without anyone taking a critical look.

Over time I've become more skeptical about this kind of psychology research (as more studies fail to replicate) and, as is often the case, here the sample size is quite small (76 students, split across 3 groups), predicting something as noisy as GPA. It is unclear to me that one would be able to detect reasonable effects.

Furthermore, some claims that make it into the piece are at odds with the data:

> Strikingly, not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they got to know the interviewee slightly higher on average than those who conducted honest interviews.

While Table 3 in the paper shows that there is no statistical evidence for this claim as the effects are swamped by the variance.

My point is not that this article is wrong; verifying/debunking the claims would take much more time than my quick glance. But that ought to be the responsibility of the newspaper, and not individual readers.

Politicians dont get to write about the successes of their own policies. While there is a difference between researchers and politicians, I think we ought to be a bit more critical.

2
gatsby 21 hours ago 11 replies      
Laszlo Bock (former SVP of People at Google) did a great job summarizing decades of research around structured interviewing in his book 'Work Rules!'

For a quick reference, the two defining criteria for a structured interview are:

1.) They use one or several consistent set(s) of questions, and

2.) There are clear criteria for assessing responses

That second point is really important. You can't only ask candidates the same sets of questions and have a structured process: you need to understand what a "one-star" response vs. a "five-star" response actually looks or sounds like. Training and calibrating all of the interviewers in a large company around a similar rating system is nightmarish, so most companies don't bother.

The book also outlines that pairing a work sample with a structured interview is one of the most accurate methods of hiring.

If anyone is interested in some in-depth structured interview questions or work sample ideas, feel free to email me. I've spent the last few years working on a company in the interviewing space and would love to chat.

3
uncensored 20 hours ago 14 replies      
How do you know that the person you'll be marrying won't cheat on you and won't leave you in hard times? If you apply the methods we use today for interviews you'll end up with a 50/50 chance at best, a coin toss.

Yet why do some marriages last forever (till death do us apart) while others fail miserably or crumble even after 20 years?

The search for the global optimum cannot be performed by asking a set of questions. I argue that it cannot be done consciously. It's a gut/instinct thing. If you have a mechanical approach, anyone can game the system and get a job, because humans can be like chameleons, presenting themselves as the right candidate, and they can study for the interview. The only way, IMO, is to have that third eye or whatever you call it... instinct, gut feeling, etc.

The problem with this conclusion is that instinct and sexism/racism are often conflated.

No good answer.

4
ravitation 20 hours ago 1 reply      
I have some major issues with their conclusions... and the title of the article (which is mostly nonsensical clickbait).

The real conclusion should be that "unstructured interviews provide a variable that decreases the average accuracy of predicting GPAs, when combined with (one) other predictive variable(s) (only previous GPA)."

This conclusion seems logical. When combined with an objective predictive measure of a person's ability to maintain a certain GPA (that person's historical ability to maintain a certain GPA), a subjective interview decreases predictive accuracy when predicting specifically a person's ability to maintain a certain GPA.

To then go on to conclude that interviews then provide little, to negative, value in predicting something enormously more subjective (and more complicated), like job performance, is absurd - and borderline bad science.

There are numerous (more subjective) attributes that an unstructured interview does help gauge, from culture fit to general social skills to one's ability to handle stressful social situations. I'd hypothesize all of these are probably better measured in an unstructured (or structured) interview than in most (any) other way. To recommend the complete abandonment of unstructured interviews (which is done in the final sentence of the actual paper) is ridiculous.

5
Spooky23 19 hours ago 1 reply      
It's the whole process that's useless.

Remember a few years ago when this forum was drowning in posts about how to find and hire "10x" people? 98% of employees were useless in the face of the 10xer.

The reality is, most of the time screening for general aptitude, self-motivation and appropriate education is good enough.

I've probably built a dozen teams where 75% of the people were random people who were there before or freed up from some other project. They all work out. IMO, you're better off hiring for "smart and gets things done" and purging the people who don't work out.

6
tptacek 20 hours ago 1 reply      
Take a second to read about the experiments this author conducted. They included:

Dummy candidates mixed in with the interview flow that gave randomized answers to questions (interviews were structured to somewhat blind this process), and interviewers lost no confidence from those interviews.

Interviewers, when told about the ruse, were then asked to rank between no interview, randomized interview, and honest interview. They chose a ranking (1) honest, (2) randomized, (3) no interview. Think about that: they'd prefer a randomized interview to make a prediction with over no interview at all.

Of course, the correct ranking is probably (1) no interview, (2) a tie between randomized and honest. At least the randomized interview is honest about its nature.

7
ordinaryperson 20 hours ago 0 replies      
The problem is GPA itself isn't necessarily a valid data point. It's less fallible than "gut instinct", as the author here seems eager to claim, but personality type can be more important than the ability to memorize facts.

I'd rather hire a programmer who knew less and could get along with others than a master dev who's a total a-hole.

8
rb2k_ 21 hours ago 3 replies      
It should probably have been titled "The Utter Uselessness of Unstructured Job Interviews", because that's the kind of interview the author criticizes.

In my personal experience, structured interviews can be very helpful in determining a candidate's abilities.

9
hobls 21 hours ago 4 replies      
I've been a programmer for a bit over ten years. I've worked at scrappy little startups, midsized companies, now for a tech giant for a few years. The engineers I work with at the tech giant are consistently better engineers than my other coworkers have been, and I credit the very structured interview process. We're trained to ask specific questions, look for specific types of answers, and each interviewer is evaluating different criteria.

Also, it is quite often not the technical questions that end up making us decide not to hire someone. That's just one area. I know it's the part that sticks out, and candidates give a lot of weight to it in their memory of the interview, but you shouldn't just assume it was that your whiteboard code wasn't quite good enough. I actually don't think that's the most common thing we give a "no hire" for.

10
numinary1 20 hours ago 1 reply      
This discussion misses an important element, the skill of the interviewer. It is unsurprising that unskilled interviewers' assessments are poor predictors of future performance. It would be interesting to measure the accuracy of interviewers who have had years of experience interviewing, hiring, and managing people.

Here's how I think it works. Skilled interviewers are biased toward rejecting candidates based on any negative impression. Structured interviewing has the same effect. It's the precision versus recall tradeoff. For this use case only precision matters. Extremely low recall is fine.
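To put rough numbers on that tradeoff, here is a small illustrative Python calculation (the figures are made up, not from any study): an aggressive filter keeps precision high while sacrificing recall, which is exactly the asymmetry described above.

    # Hypothetical numbers for one hiring round: "good" = someone who would
    # actually have performed well if hired.
    hired_good, hired_bad = 8, 2            # true positives, false positives
    rejected_good, rejected_bad = 40, 150   # false negatives, true negatives

    precision = hired_good / (hired_good + hired_bad)      # 0.80
    recall = hired_good / (hired_good + rejected_good)     # ~0.17

    # Aggressive rejection throws away many people who would have worked out
    # (low recall), but almost everyone who gets through is a good hire
    # (high precision) -- the only side of the tradeoff the employer pays for,
    # as long as candidates aren't scarce.
    print(f"precision={precision:.2f}, recall={recall:.2f}")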

Also, in the GPA prediction example, the interviewer is penalized for predicting a low GPA for a person who performed well. But in hiring, there is no penalty for failing to hire someone who would have performed adequately.

(Yes, I understand there is an implicit assumption in my argument that candidates are not in short supply, but that's usually true, certainly at Google)

11
dkarapetyan 20 hours ago 0 replies      
> The key psychological insight here is that people have no trouble turning any information into a coherent narrative. This is true when, as in the case of my friend, the information (i.e., her tardiness) is incorrect. And this is true, as in our experiments, when the information is random. People can't help seeing signals, even in noise.

People see patterns where there are none. I think this is fundamentally why humans fail at statistics. If every fiber of your being wants to see patterns then you will see patterns. Probably why people hallucinate when in sensory deprivation tanks as well. The brain will make up patterns just so it can continue to see them.

The paragraph right after follows up with the statistical failure that pattern-seeking leads to:

> They most often ranked no interview last. In other words, a majority felt they would rather base their predictions on an interview they knew to be random than to have to base their predictions on background information alone.

So people would rather do busy work in order to continue to satisfy established pattern seeking habits than figure out a better way.

12
redthrow 20 hours ago 1 reply      
Matt Mullenweg advocates audition/tryouts instead of job interviews.

https://hbr.org/2014/04/the-ceo-of-automattic-on-holding-aud...

13
santoshalper 19 hours ago 0 replies      
As someone who has worked at every level of IT (startup to Fortune 500 executive), hired thousands of people, and personally interviewed hundreds of candidates of all levels of experience, the conclusion I have come to is that interviews are almost entirely worthless.
14
bahmboo 17 hours ago 0 replies      
When interviewing for smaller orgs you do have to answer the question: can I work with this person everyday? Subjective and arguably harder to predict than technical performance, and oftentimes more important.
15
vonnik 18 hours ago 1 reply      
The traditional recruiting and hiring process is broken. I say this as a former technical recruiter. I wrote about the problems of recruiting for a closed-source startup here:

https://www.linkedin.com/in/chrisvnicholson/recent-activity/...

I mention closed-source for a reason. For technical hiring, there is nothing better than open source. Open-source projects allow engineers and their potential employers to collaborate in depth over time. The company can experience whether the engineer is competent, reliable and friendly. The engineer can judge the team's merits in the same way. And they can both decide whether the fit is right.

Closed-source and/or non-engineering jobs are the opposite. You get a resume, a Github repo if you're lucky, and a half-day's worth of interviews and tests. Then you roll the dice on that imperfect information.

This is one reason why a lot of recruiting and hiring happens through the networks of people that a company can tap into. It may seem corrupt or nepotistic, but the advantage of those referrals is that someone with more information than you is willing to stake their reputation on a candidate's performance.

Large companies with lots of historical data have the opportunity to train algorithms to learn how job applications and long-term performance/flight risk/etc. actually correlate. From what I can tell, most haven't.

16
akhilcacharya 21 hours ago 2 replies      
It's interesting, because I'd argue that the best companies in tech have interviews that are very structured and predictable.
17
Boothroid 21 hours ago 3 replies      
But surely structured interviews just test a candidate's ability to improvise plausible stories? Whether they are truthful or not is a different matter..
18
exabrial 17 hours ago 0 replies      
They're useless if it's an attempt to prove you know more than them about some algorithm that's been implemented 150 times (every job interview in California). I'd rather work with someone pleasant, hard working, and concerned with everyone's well being.
19
daenz 20 hours ago 1 reply      
As someone with many interviews coming up in the near future, this scares me. It's easy to get in a self-conscious feedback loop when you know every behavior, response, and gesture is being fed into a fundamentally irrational character-judging process.

The best interview I've ever been on was one for a young startup. They gave essentially a homework problem, a day to solve it, and then in the interview we talked about the problem and my solution. The worst interview I've been on was sitting in front of multiple engineers as each one threw out a random CS question (from seemingly the entire space of CS) and asked me to talk intelligently about it. When I seemed unsure of myself, they glanced around nervously and disapprovingly.

Interviews are the worst. I've spent my time trying to bolster my OSS projects, so that I can point to them as evidence of my competence, but I can't help but prepare for the worst anyways.

20
m-j-fox 20 hours ago 2 replies      
Known useless indicators:

* Resumes

* Skills tests (hacker rank)

* Whiteboard interviews

* Unstructured interviews

* Employee referrals

No wonder headhunters have such a good business. Not that they're more discriminating, but they can pretend to be the solution to an intractable problem.

21
tempodox 5 hours ago 0 replies      
> So great is people's confidence in their ability to glean valuable information from a face to face conversation that they feel they can do so even if they know they are not being dealt with squarely. But they are wrong.

If people utterly refuse to learn from proven mistakes, then all hope is lost. Einstein was right, human stupidity is infinite.

22
woodandsteel 15 hours ago 0 replies      
My basic problem with interviewing is you are observing behavior in one sort of situation, and on that basis trying to predict behavior in a very different sort of situation, namely job performance, which is actually a whole bundle of different types of situations.

It seems like it would be much better to instead put the job prospect in situations that model the sorts that would come up at work.

23
douglasjsellers 18 hours ago 0 replies      
All that this article says is that past performance, in terms of GPA, is the best predictor of future performance - rather than a 30 minute interview predicting future performance. This seems like a basic truism to me, and the main lesson that tech hiring processes can take away from this article.

In my experience (having hired > 100 engineers) one of the basic problems that tech hiring, as a whole, has is that it misunderstands the point of a technical interview. Organizations and hiring managers see the interview process as a way of improving the brand of the engineering organization - "We have super high standards and to prove this our interview process is really hard - therefore if you think you meet these standards you should apply". This leads to the current interviewing trends of super academic/puzzle/esoteric technology based interviews. Applicants leave those interviews saying that it was super hard reinforcing the brand messaging (classic marketing).

Rather, in my experience, the best results come from viewing the hiring/interviewing process for what it is - an attempt to predict future performance (and specifically performance at your organization) using a variety of techniques, of which interviewing is one. In this context, of attempting to predict future performance, interviews are not a great tool - better to look at specific past performance.

Past performance is always the best predictor of future performance and the point of a technical interview, in my mind, is to critically inspect that past performance to understand how closely it relates to the future performance that your organization needs.

24
inopinatus 21 hours ago 3 replies      
Articles like this - and the comments that follow - always overlook the primary value of job interviews, which to me is answering the question: "Do I want to work for this company?"
25
andrewstuart 15 hours ago 1 reply      
I am a recruiter. Recently, I started working with a new employer. We could not get anyone through their interview process. Eventually I asked the HR person to clarify precisely what was being asked in these interviews.

She said that, essentially, the interviews were ad-hoc, with the interviewer just coming up with whatever questions they thought relevant based on the resume - often asking the candidate to go through their career history.

I explained that the only effective approach I have found with recruiting is to have a set of pre-defined questions, and each question is specifically designed to give insight into how the candidate meets the pre-defined job requirements. Very much like software development, where test cases are related to software requirements.

I explained also that it is not critical to stick precisely to these questions, but that should mostly be the case - interviews are human interactions and some flexibility is required depending on circumstance.

The HR person then explained this to the hiring managers at the company, and worked with the hiring managers to define interview questions that give insight into the job requirements.

The next two people interviewed got the jobs, after months of no one getting through the interviews.

In the early days of software development, the business was often dissatisfied with software delivered because it simply did not meet the requirements of the business. So the software development process matured and came up with the idea of tests that can be mapped back to the requirements via a requirements traceability matrix. Thus the business has a requirement, the developers write code to meet the requirement, and a test is designed to verify that the software meets the defined requirement.

Recruiting currently has no such general understanding in place of the relationship between job position requirements and definition of quantifiable questions that identify to what extent a given job candidate meets a requirement.

Once you get your head around the idea that recruiting should be very similar to software development in this regard, then it is easy to see that ad-hoc interviews do nothing to verify in any organised way to what extent a candidate meets the requirements of a given job opening.
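As a rough illustration of that mapping (hypothetical requirements and questions, not from any real hiring system), the traceability idea might look something like this in code:

    # Hypothetical job requirements mapped to the structured questions
    # that probe them -- a miniature requirements traceability matrix.
    requirements = {
        "REQ-1: designs relational schemas": [
            "Walk me through a schema you designed and the tradeoffs you made.",
        ],
        "REQ-2: handles production incidents": [
            "Describe a production outage you handled end to end.",
            "How did you confirm the root cause?",
        ],
    }

    def coverage(scores):
        """Given per-question ratings (1-5), report whether each requirement
        was actually assessed and its average rating."""
        report = {}
        for req, questions in requirements.items():
            rated = [scores[q] for q in questions if q in scores]
            report[req] = sum(rated) / len(rated) if rated else None
        return report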

26
damagednoob 20 hours ago 1 reply      
> In one experiment, we had student subjects interview other students and then predict their grade point averages for the following semester.

Not sure how using inexperienced interviewers proves anything. Would have been more interesting to have lecturers interview the students.

27
throwaway71958 20 hours ago 1 reply      
One thing interviews can't select for: creativity and motivation. And in tech those two criteria are the most vital, especially motivation. I can easily fill in the skills gap in someone who's motivated. I can't do anything with someone who doesn't give a shit, even if they're the second coming of Albert Einstein. So folks, please, don't apply for jobs you don't really care about. Save yourself and your prospective employer time, aggravation, and the opportunity cost.
28
cordite 15 hours ago 0 replies      
I'm sure it has been discussed before, but what factors push people away from say two-day internships? Is it because these things are so short that they too can eventually be gamed? Or is it because you need to have staff of the same specialty dedicating their resources to a potentially short-lived investment?
29
dlwdlw 13 hours ago 0 replies      
The issue with Silicon Valley interviews is that it's leading a new paradigm of management styles that deal with knowledge and creative work.

This shift MUST be accepted by everybody or ostracism is risked. (Like Trump supporters.)

But paradigm shifts take time and the majority of managers still want cogs. But instead of filtering for cogs they have to dress the filters up as filtering for an "I give smart people freedom" team, and the convoluted mental gymnastics needed for this creates shitty interview processes.

All "well this technique worked for us" stories are mostly not useful because they are just N=1 stories about managers using their preferred filters.

The issue isn't with the filters themselves (all sorts exist) but with a culture that obligates everyone to put on false facades.

People who just want to be paid and can work well need to pretend to be passionate. Managers who want well-paid cogs need to pretend to promote individualistic thinking etc...

30
kasey_junk 20 hours ago 0 replies      
Structured interviews are better than unstructured ones, but in my experience they are really a Trojan horse for the idea that interviews in all forms are largely worthless (as predictors for good hiring).

Once you start collecting data on your hiring pipeline work sample hiring becomes so much obviously better that it makes little sense to spend the time to do the hard work of making a good structured interview process.

31
evervevdww221 19 hours ago 1 reply      
We had 2 slackers on the team. One jumped directly to Google.

The other jumped around for a few years, got laid off by some company, recently joined FB.

32
sgt101 19 hours ago 0 replies      
Odd - I was trained to do competence based interviews 20 years ago, apparently this is now rediscovered knowledge or something!
33
mck- 17 hours ago 0 replies      
If you care about your company's culture, a person's humbleness, art of concise debating, etc - more important parameters than sheer GPA or coding skills imho - you can never do away with in-person interviews.

Calling them "utterly useless" is utter click-bait.

34
donovanm 18 hours ago 0 replies      
It certainly does feel like job interviews are a coin flip to me after having done many interviews.
35
jasonthevillain 12 hours ago 0 replies      
Well, the worst interview I've ever had was one where the interviewer wouldn't deviate from the script, even after he recognized that the questions made no sense for someone with my background (I was self-taught, and had never managed my own memory or written a sort algorithm. They were also irrelevant to the position in question).

It was painfully awkward.

It was also a fantastic way to accidentally discriminate against women and older candidates.

I'm not saying anyone should conduct an interview completely by the seat of their pants, but please don't encourage this foolish consistency.

36
WalterBright 17 hours ago 0 replies      
More accurately, the article is about the use of unstructured free-form interviews.
37
cricfan 20 hours ago 2 replies      
I wonder if we can extrapolate to marriages and to how arranged marriages (at least in India) have a higher success rate. Usually the parents on either side decide on a match based on family background, financial stability, education background, etc., rather than letting the to-be married decide.
38
thomastjeffery 19 hours ago 0 replies      
> Alternatively, you can use interviews to test job-related skills, rather than idly chatting or asking personal questions.

Alternatively? How is this not the focus of an interview?

Sure, confidence and social skills are important, but obviously they cannot predict a person's actual ability.

39
DrNuke 21 hours ago 0 replies      
For some time now we have been moving towards word of mouth and peer recommendations as the preferred way to hire new personnel. Cold applications are for outsiders and as such a very different market, with all the strings and the bulls.it attached.
40
JacksonGariety 15 hours ago 0 replies      
Does it bother anyone else that the example given in the article (showing up 25 minutes late) is judging interviewing by its worst rather than its best?
41
Zigurd 20 hours ago 1 reply      
It's not surprising. Employee selection is basically voodoo. Outcomes don't get fed back into redesign of the process, and the process is far more based in tradition than data. When the process gets challenged it's ripe to fall apart.
42
RichardHeart 16 hours ago 2 replies      
You hear lots of these stories about stupid interviews. You rarely hear the stories about the horrible, terrible employees that weren't weeded out, got hired and did great harm to the company and their coworkers.

Interviews can be good and bad; I'd venture to say that many a horrible hire has been avoided by having any interview at all. Thus, don't make perfect the enemy of good, and try to improve on good.

The set of potential bad hires is vast compared to the good hires, and that ratio is only remedied by good filtering before and during the interview.

43
dba7dba 16 hours ago 0 replies      
I believe a lot of places hire a candidate through a consensus, meaning some members in the team accept or reject a potential candidate. When enough accept the candidate, the hiring is done.

Is there any company that tracks who rejects a particular candidate during an interview process, and how often that negative feedback turned out to be true? I guess with the turnover rate at today's tech places, such tracking of an interviewer's record is not really possible?

I always wonder about this.

44
peterwwillis 18 hours ago 0 replies      
No interview will tell you the future, so to my mind, the only thing the interview can tell you immediately is how much someone knows and whether their personality will mesh with the company's culture.

In order to ascertain this, I propose job-hiring hackathons. Have the company hold a mini-hackathon, once every 2 weeks or once a month, where all job applicants must show up and work on projects (corporate employees' presence can be optional). Just watch them complete the projects and hire the best candidates.

45
nameisu 17 hours ago 1 reply      
i have an interview with apple for a mechanical engineer. i will report back once its done.
46
ppidugu 16 hours ago 0 replies      
Such careless claims, writing articles like this and throwing them in people's faces, just give everyone a chance to debunk NY Times articles.
47
erikpukinskis 21 hours ago 2 replies      
Once we get to the point where most people have several jobs with separate contracts, interviews become superfluous because you can just hire someone for a few hours at a time and then fire them. The only reason that doesn't work today is we're still clinging to the idea that you only work for one entity at a time. Never mind that most people already manage at least a couple bosses within the same company.
48
nebabyte 16 hours ago 0 replies      
> not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they got to know the interviewee slightly higher on average

Yeah well, when you're asking questions of someone who looks thoughtful very briefly, then answers almost immediately after, it sounds like you might have more reason to think you know them better than the one actually considering their answer

Might be introducing a confound or two that you then proceed to completely ignore and even conclude past lest someone accidentally draw other conclusions

6
Color Night Vision (2016) [video] kottke.org
606 points by Tomte  2 days ago   150 comments top 34
1
qume 2 days ago 0 replies      
There is much discussion here regarding quantum efficiency (QE). Keep in mind that figures for sensors are generally _peak_ QE for a given colour filter array element. These can be quite high like 60-70%.

But - this is an 'area under the graph' issue. While it may peak at 60%, it can also fall off quickly and be much less efficient as the wavelength moves away from the peak for say red/green/blue.

From what I can tell from the tacky promo videos, the sensor is very sensitive for each colour over a wide range of wavelengths, probably from ultraviolet right up to 1200nm. That's a lot more photons being measured in any case, but especially at night.

Their use of the word 'broadband' sums it up. It's more sensitive over a much larger range of frequencies.

I also wouldn't be surprised if they are using a colour filter array with not only R/G/B but perhaps R/G/B/none or even R/IR/G/B/none. The no-filter bit brings in the high broadband sensitivity, with the other pixels providing colour - you don't need nearly as many of those.

Edit - one remarkable thing for me: based on the rough size of the sensor and the depth of field in the videos, this isn't using a lens much more than about f/2.4. You'd think it would be f/1.4 or thereabouts to get way more light, but there is far too much DoF for that.

2
amluto 2 days ago 5 replies      
It would be interesting to see how this compares to theoretical limits. At a given brightness and collecting area, you get (with lossless optics) a certain number of photons per pixel per unit time. Unless your sensor does extraordinarily unlikely quantum stuff, at best it counts photons with some noise. The unavoidable limit is "shot noise": the number of photons in a given time is Poisson distributed, giving you noise according to the Poisson distribution.
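As a back-of-the-envelope sketch of that limit (illustrative numbers only): for a Poisson-distributed photon count N, the shot noise is sqrt(N), so the best achievable signal-to-noise ratio is also sqrt(N).

    import math

    # Illustrative photon counts per pixel per frame at a few light levels.
    for photons in (10, 100, 1_000, 10_000):
        shot_noise = math.sqrt(photons)    # std-dev of a Poisson count
        snr = photons / shot_noise         # equals sqrt(photons)
        print(f"{photons:>6} photons/pixel -> best-case SNR ~ {snr:.0f}")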

At nonzero temperature, you have the further problem that your sensor has thermally excited electrons, which aren't necessarily a problem AFAIK. More importantly, the sensor glows. If the sensor registers many of its own emitted photons, you get lots of thermal noise.

Good low noise amplifiers for RF that are well matched to their antennas can avoid amplifying their own thermal emissions. I don't know how well CCDs can do at this.

Given that this is a military device, I'd assume the sensor is chilled.

3
joshvm 2 days ago 1 reply      
Better video from their website comparing to other cameras:

https://www.youtube.com/watch?time_continue=328&v=c_0s06ORTk...

Anecdotal evidence on the internet suggests it's around 6k, but that seems far too low.

4
rl3 2 days ago 3 replies      
One would think with all the money the military throws into imaging technology that they would already have this.

For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face. Could even have specially made flashbangs that transmit an expected detonation timestamp to the goggles so they know to drop frames or otherwise aggressively filter the image.

Add some active hearing protection with sensitivity that far exceeds human hearing (obviously with tons of filtering/processing), and you're talking a soldier with truly superhuman senses.

That's not to mention active acoustic or EM mapping techniques so the user can see through walls. I mean, USSOCOM is already fast-tracking an "Iron Man" suit, so I don't see why they wouldn't want to replicate Batman's vision while they're at it.

5
telesilla 2 days ago 3 replies      
Can someone wake me up in the future? When we have digital eyes, and we can walk around at night as if it were day except the stars would be glittering. Sometimes, I'm so sad to know I'll not live to know these things and I'm incredibly envious of future generations.
6
akurilin 2 days ago 3 replies      
You can get somewhat close to that with a Sony a7s these days: https://vimeo.com/105690274
7
Silhouette 2 days ago 0 replies      
They list a lot of potentially useful applications on the product's own web site. I wonder how long it will take for this sort of technology to be commercially viable for things like night vision driving aids. High-end executive cars have started to include night vision cameras now, but they're typically monochrome, small-screen affairs. I would think that projecting an image of this sort of clarity onto some sort of large windscreen HUD would be a huge benefit to road safety at night. Of course, if actually useful self-driving cars have taken over long before it's cost-effective to include a camera like this in regular vehicles, it's less interesting from that particular point of view.
8
colordrops 2 days ago 3 replies      
Two thoughts come to mind:

1. It would be nice to see a split screen against a normal view of the scene as it would be seen by the typical naked eye.

2. Our light pollution must SUCK for nocturnal animals that see well at night.

9
jacquesm 2 days ago 1 reply      
That really is incredible. I wonder how they keep the noise level down and if the imaging hardware has to be chilled and if so how far down. Pity there is no image of the camera (and its support system), I'm really curious how large the whole package is. It could be anything from hand-held to 'umbilical to a truck' sized.

Watch when the camera tilts upwards and you see all the stars.

10
19eightyfour 2 days ago 2 replies      
That is beautiful.

If they can increase the dynamic range to bring detail to the highlights it is basically perfect.

I've never seen a valley look like that with a blue sky above with stars in it. Truly incredible.

The 5M ISO rating is pretty funny. 1/40 f1.2 ISO 5M.

11
cameldrv 2 days ago 0 replies      
They say it's hybrid IR-visible. I wonder if the trick is to use IR as the luma and then chroma-subsample by having giant pixels to catch lots of photons.
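A toy sketch of that luma/chroma-subsampling idea (made-up arrays, not the actual sensor pipeline): full-resolution IR supplies the spatial detail, while colour is sampled 4x coarser and upsampled back before combining.

    import numpy as np

    h, w = 480, 640
    luma = np.random.rand(h, w)                 # fine spatial detail from IR
    chroma = np.random.rand(h // 4, w // 4, 2)  # coarse colour, bigger photosites

    # Upsample chroma by pixel repetition and stack into a 3-channel frame.
    chroma_full = chroma.repeat(4, axis=0).repeat(4, axis=1)
    frame = np.dstack([luma, chroma_full])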
12
floatboth 2 days ago 1 reply      
"an effective ISO rating of 5,000,000"

Holy shit, my Canon 600D is pretty bad at 2500, goes to crap at 3200, and 6400 is an absolute noise mess

13
dreamcompiler 2 days ago 2 replies      
This is an amazing device. I've taken night photos that look like frames of this movie on my digital camera, but they require a 60-second exposure and a tripod, and they're -- still frames.
14
caublestone 2 days ago 2 replies      
My brother in law experimented with this camera a few years back on family portraits. The camera picks up a lot of "dark" details. Skin displays pale and veins are very defined. My nieces called it "the vampire camera".
15
fpoling 2 days ago 1 reply      
There are far-infrared cameras that capture thermal radiation in the 9-15 µm band. They nicely allow you to see in complete darkness. They do not use CCDs but rather microbolometers.

But they are expensive. A 640x480 one can cost over 10,000 USD, and cameras with smaller resolution like those used in high-end cars still cost over a thousand USD.

16
teh_klev 2 days ago 0 replies      
Direct link to manufacturer or supplier:

https://www.x20.org/color-night-vision/

17
batbomb 2 days ago 0 replies      
So maybe Peltier on the sensor, heat sink attached to body, body hermetically sealed. Sensors probably tested for best noise quality (probably a really low yield on that).
18
drenvuk 2 days ago 1 reply      
This is incredibly cool. You can even see how other sources of light actually have an effect on the environment as if they were their own suns.
19
peteretep 2 days ago 0 replies      
Put one of these on a drone and you'll break a lot of people's assumptions about their privacy
20
kator 2 days ago 0 replies      
21
bsenftner 1 day ago 0 replies      
Has no one considered a neural net post processor which has been trained on daylight views? Seems like an obvious method, given Hacker News...
22
Cieplak 2 days ago 0 replies      
I wonder what the sensor is made of. I would bet on there being a fair bit of Germanium in there.

PS: probably wrong about that, silicon's band gap is more suited to optical spectrum, even though germanium has more electron mobility. I'm speculating now that they're using avalanche photodiodes.

https://en.m.wikipedia.org/wiki/Avalanche_photodiode

23
ChuckMcM 1 day ago 0 replies      
Here is the manufacturer's web site: https://www.x20.org/color-night-vision/

There is a 'shoot out' video on that page which compares themselves to other night vision technologies. Pretty impressive demo.

24
copperx 2 days ago 0 replies      
I've dreamed of such a camera for decades. I thought the technology was at least 10+ years away. This is what science fiction is made of.
25
lutusp 2 days ago 3 replies      
Someone should contact this company and volunteer to redesign their website (https://www.x20.org). They should also be told that "complimentary" and "complementary" don't mean the same thing.

They have a great product, unfortunately presented on a terrible website.

26
jsjohnst 1 day ago 0 replies      
Here's the manufacturer website on it: https://www.x20.org/color-night-vision/
27
AnimalMuppet 2 days ago 0 replies      
It occurs to me that this technology could do absolutely amazing things as the imager for a space telescope...
28
breatheoften 2 days ago 0 replies      
Is that Red Rocks (just outside of Las Vegas)? There are a lot of man-made light sources there that really scatter light pretty far and in a lot of directions (the Luxor spotlight comes to mind). I wonder if that could have an effect on this camera's performance.
29
jbrambleDC 2 days ago 1 reply      
I want to know what this means for observational astronomy. Can we put this in the eyepiece of a telescope and discern features in nebulae that otherwise look like gray blobs to unaided vision?
30
interfixus 2 days ago 2 replies      
Why is the night sky blue? Is that really scattered starlight?
31
nnain 2 days ago 0 replies      
What a quandary: We see military weapons technology put to terrible use all the time, and yet, so much technology shows up in (US) military use first.
32
samstave 2 days ago 3 replies      
ELI5 what an iso of 5MM means?
33
faragon 2 days ago 0 replies      
Is this real? :-O
34
egypturnash 2 days ago 0 replies      
Their website is a thing of beauty. It's straight out of the Timecube school of design. https://www.x20.org/color-night-vision/
7
Machine learning without centralized training data googleblog.com
562 points by nealmueller  3 days ago   99 comments top 27
1
nostrademons 3 days ago 6 replies      
This is one of those announcements that seems unremarkable on read-through but could be industry-changing in a decade. The driving force behind consolidation & monopoly in the tech industry is that bigger firms with more data have an advantage over smaller firms, because they can deliver features (often using machine learning) that users want and small startups or individuals simply cannot implement. This, in theory, provides a way for users to maintain control of their data while granting permission for machine-learning algorithms to inspect it and "phone home" with an improved model, without revealing the individual data. Couple it with a P2P protocol and a good on-device UI platform and you could in theory construct something similar to the WWW, with data stored locally, but with all the convenience features of centralized cloud-based servers.
2
whym 3 days ago 0 replies      
Their papers mentioned in the article:

Federated Learning: Strategies for Improving Communication Efficiency (2016) https://arxiv.org/abs/1610.05492

Federated Optimization: Distributed Machine Learning for On-Device Intelligence (2016) https://arxiv.org/abs/1610.02527

Communication-Efficient Learning of Deep Networks from Decentralized Data (2017) https://arxiv.org/abs/1602.05629

Practical Secure Aggregation for Privacy Preserving Machine Learning (2017) http://eprint.iacr.org/2017/281

3
binalpatel 3 days ago 1 reply      
Reminds me of a talk I saw by Stephen Boyd from Stanford a few years ago: https://www.youtube.com/watch?v=wqy-og_7SLs

(Slides only here: https://www.slideshare.net/0xdata/h2o-world-consensus-optimi...)

At that time I was working at a healthcare startup, and the ramifications of consensus algorithms blew my mind, especially given the constraints of HIPAA. This could be massive within the medical space, being able to train an algorithm with data from everyone, while still preserving privacy.

4
andreyk 3 days ago 1 reply      
The paper: https://arxiv.org/pdf/1602.05629.pdf

The key algorithmic detail: it seems they have each device perform multiple batch updates to the model, and then average all the multi-batch updates. "That is, each client locally takes one step of gradient descent on the current model using its local data, and the server then takes a weighted average of the resulting models. Once the algorithm is written this way, we can add more computation to each client by iterating the local update."

They do some sensible things with model initialization to make sure weight update averaging works, and show in practice this way of doing things requires less communication and gets to the goal faster than a more naive approach. It seems like a fairly straightforward idea from the baseline SGD, so the contribution is mostly in actually doing it.
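A toy sketch of that averaging step (plain NumPy with made-up data, not Google's implementation): each client runs a few local gradient steps from the shared weights, and the server takes a data-size-weighted average of the results.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, steps=5):
        """A few local gradient steps of linear regression on one client's data."""
        w = weights.copy()
        for _ in range(steps):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def federated_averaging(global_w, clients):
        """Weighted average of client models, weights proportional to data size."""
        total = sum(len(y) for _, y in clients)
        parts = [local_update(global_w, X, y) * (len(y) / total) for X, y in clients]
        return np.sum(parts, axis=0)

    # Toy round: three clients, each with its own local dataset.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
    w = np.zeros(3)
    for _ in range(10):                  # ten communication rounds
        w = federated_averaging(w, clients)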

5
itchyjunk 3 days ago 4 replies      
"Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud."

So I assume this would help with privacy in a sense that you can train model on user data without transmitting it to the server. Is this in any way similar to something Apple calls 'Differential Privacy' [0] ?

"The key idea is to use the powerful processors in modern mobile devices to compute higher quality updates than simple gradient steps."

"Careful scheduling ensures training happens only when the device is idle, plugged in, and on a free wireless connection, so there is no impact on the phone's performance."

It's crazy what the phones of near future will be doing while 'idle'.

------------------------

[0] https://www.wired.com/2016/06/apples-differential-privacy-co...

6
sixdimensional 3 days ago 1 reply      
This is fascinating, and makes a lot of sense. There aren't too many companies in the world that could pull something like this off.. amazing work.

Counterpoint: perhaps they don't need your data if they already have the model that describes you!

If the data is like oil, but the algorithm is like gold.. then they still extract the gold without extracting the oil. You're still giving it away in exchange for the use of their service.

For that matter, run the model in reverse, and while you might not get the exact data... we've seen that machine learning has the ability to generate something that simulates the original input...

7
azinman2 3 days ago 1 reply      
This is quite amazing, beyond the homomorphic privacy implications being executed at scale in production -- they're also finding a way to harness billions of phones to do training on all kinds of data. They don't need to pay for huge data centers when they can get users to do it for them. They also can get data that might otherwise have never left the phone in light of encryption trends.
8
TY 2 days ago 0 replies      
This is an amazing development. Google is in a unique position to run this on truly massive scale.

Reading this, I couldn't shake the feeling that I heard all of this somewhere before in a work of fiction.

Then I remembered - here's the relevant clip from "Ex Machina":

https://youtu.be/39MdwJhp4Xc

9
argonaut 3 days ago 3 replies      
This is speculative, but it seems like the privacy aspect is oversold as it may be possible to reverse engineer the input data from the model updates. The point is that the model updates themselves are specific to each user.
10
siliconc0w 3 days ago 2 replies      
While a neat architectural improvement, the cynic in me thinks this is a fig leaf for the voracious inhalation of your digital life they're already doing.
11
sandGorgon 3 days ago 0 replies      
Tangentially related to this - numerai is a crowdsourced hedge fund that uses structure-preserving encryption to be able to distribute its data, while at the same time ensuring that it can be mined.

https://medium.com/numerai/encrypted-data-for-efficient-mark...

Why did they not build something like this ? I'm kind of concerned that my private keyboard data is being distributed without security. The secure aggregation protocol doesn't seem to be doing anything like this.

12
emcq 3 days ago 1 reply      
Even if this only allowed device based training and not privacy advantages it's exciting as a way of compression. Rather than sucking up device upload bandwidth you keep the data local and send the tiny model weight delta!
13
muzakthings 3 days ago 1 reply      
This is literally non-stochastic gradient descent where the batch update simply comes from a single node and a correlated set of examples. Nothing mind-blowing about it.
14
legulere 3 days ago 2 replies      
Where is the security model in this? What stops malicious attackers from uploading updates that are constructed to destroy the model?
15
yeukhon 3 days ago 0 replies      
To be honest, I have thought about this for a long time for distributed computing. If we have a problem which takes a lot of time to compute, but the problem can be computed in small pieces and then combined, then why can't we pay users to subscribe for the computation? This is a major step toward the big goal.
16
holografix 3 days ago 1 reply      
I don't work with ML for my day job but find it exhilaratingly interesting. (true story!)

When I first read this I was thinking: surely we can already do distributed learning, isn't that what, for example, SparkML does?

Is the benefit of this in the outsourcing of training of a large model to a bunch of weak devices?

17
nudpiedo 3 days ago 0 replies      
Where is the difference between that and distributed computing? Apart from the specific usage for ML, I don't see many differences; seti@home was an actual revolution made of actual volunteers (I don't know how many Google users will be aware of that).
18
alex_hirner 3 days ago 0 replies      
I think the implications go even beyond privacy and efficiency. One could estimate each user's contribution to fidelity gains of the model, at least as an average within a batch. I imagine such an attribution being rewarded in money or credibility in the future.
19
orph 3 days ago 0 replies      
Huge implications for distributed self-driving car training and improvement.
20
nialv7 2 days ago 1 reply      
I had exactly this idea about a year ago!

I know ideas without execution aren't worth anything, but I'm just happy to see my vision is in the right direction.

22
Joof 2 days ago 0 replies      
Could we build this into a P2P-like model where there are some supernodes that do the actual aggregation?
23
mehlman 3 days ago 0 replies      
I would argue there is no such thing. The model will, after the update, incorporate your training data as a seen example; clever use of optimization would enable you to partly reconstruct the example.
24
hefeweizen 3 days ago 0 replies      
How similar is this to multi-task learning?
25
yk 3 days ago 2 replies      
Google is building a Google cloud, that is, they try to use the hardware of other people, instead of other people using Google's hardware.
26
exit 3 days ago 0 replies      
i wonder whether this can be used as a blockchain proof of work
27
Svexar 3 days ago 0 replies      
So it's Google Wave for machine learning?
8
New Features Coming in PostgreSQL 10 rhaas.blogspot.com
507 points by ioltas  2 days ago   132 comments top 26
1
avar 2 days ago 2 replies      
This bit about ICU support vs. glibc:

> [...] Furthermore, at least on Red Hat, glibc regularly whacks around the behavior of OS-native collations in minor releases, which effectively corrupts PostgreSQL's indexes, since the index order might no longer match the (revised) collation order. To me, changing the behavior of a widely-used system call in a maintenance release seems about as friendly as locking a family of angry racoons in someone's car, but the glibc maintainers evidently don't agree.

is a reference to the PostgreSQL devs wanting to make their index order a function of strxfrm() calls and to not have it change when glibc updates, whereas some on the glibc list think it should only be used for feeding it to the likes of strcmp() in the same process:

> The only thing that matters about strxfrm output is its strcmp ordering. If that changes, it's either a bug fix or a bug (either in the code or in the locale data). If the string contents change but the ordering doesn't, then it's an implementation detail that is allowed to change.

-- https://sourceware.org/ml/libc-alpha/2015-09/msg00197.html
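For anyone unfamiliar with the call in question, a minimal Python sketch of the pattern PostgreSQL relies on (assumes an en_US.UTF-8 locale is installed; purely illustrative): an index effectively stores precomputed collation keys, so the ordering of those keys has to stay stable across glibc upgrades.

    import locale

    locale.setlocale(locale.LC_COLLATE, "en_US.UTF-8")

    words = ["cote", "côte", "coté", "côté"]

    # What an index effectively stores: precomputed collation sort keys.
    stored_keys = {w: locale.strxfrm(w) for w in words}

    # Sorting by the stored keys must keep agreeing with fresh comparisons;
    # if a glibc update changes strxfrm ordering, an index built with the
    # old keys silently disagrees with the new collation.
    assert sorted(words, key=stored_keys.get) == sorted(words, key=locale.strxfrm)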

2
fiatjaf 1 day ago 3 replies      
Ok, I'm not a database manager for enormous projects, so these changes may be great, but I don't understand them and don't care about them. Postgres is already the most awesome thing on Earth to me.

Still, if my opinion counts I think SELF-UPDATING MATERIALIZED VIEWS should be the next priority.

3
jacques_chester 1 day ago 2 replies      
I deeply appreciate the great care that Postgres committers take in writing their merge messages.

I think of it as a sign of respect for future developers to take the time to write a clear account of what has happened.

4
qaq 2 days ago 0 replies      
Even a single feature from the list would make 10 an amazing release, all of them together is just unbelievable. Very happy we are using PG :)
5
iEchoic 1 day ago 1 reply      
I'm so excited for table partitioning. I use table inheritance in several places in my current project, but have felt the pain of foreign key constraints not applying to inherited children. Reading about table partitioning, I'm realizing that this is a much better fit for my use case.

Postgres continues to amaze me with the speed at which they introduce the right features into such a heavily-used and production-critical product. Thanks Postgres team!

6
lazzlazzlazz 1 day ago 3 replies      
How is Postgres so consistently the best open-source DB project from features to documentation? It's unreal.
7
jordanthoms 1 day ago 0 replies      
Will DDL replication for the logical replication be landing in 10 or later?

We have some use cases where logical replication would be very helpful, but keeping the schema in sync manually seems like a pain - will there be a documented workaround if DDL replication doesn't make it in?

8
nickpeterson 1 day ago 5 replies      
Can anyone recommend a decently up to date book on postgres administration? Or are docs really the only way? I've used SQL Server for years but would likely choose postgres for an independent project if I intended to commercialize it. That said, I don't use it at work so it's hard to get in depth experience.
9
djcj88 1 day ago 2 replies      
I did read the article, but I can't find any mention of addressing the "Write amplification" issue as described by Uber when they moved away from postgres. https://eng.uber.com/mysql-migration/ I had heard talk on Software Engineering Daily that this new major revision was supposed to address that.

Is this issue resolved by the new "Logical replication" feature? It doesn't seem directly related, but it seems like maybe that is what he is referring to in this blog post?

10
Normal_gaussian 2 days ago 4 replies      
Extended Statistics! I was following the replication changes, but have just discovered the extended statistics and am more excited about them.

The directory renaming at the bottom of the post is interesting - I wonder if many other projects have to do things like this?

11
smac8 1 day ago 1 reply      
Wow, so awesome. I do hope at some point we can see some language improvements to PLPGSQL. More basic data structures could go a long way in making that language really useful, and I still consider views/stored procedures a superior paradigm to client side sql logic
12
elvinyung 2 days ago 1 reply      
Dumb question: does declarative partitioning pave the way for native sharding in Postgres? I'm not super super familiar, but it seems like along with some other features coming in Postgres 10, like parallel queries and logical replication, that this is eventually the goal.
13
acdha 1 day ago 2 replies      
What's the ops experience for a replicated setup like these days? i.e. assuming you want basic fault-tolerance at non-exotic size / activity levels, how much of a job is someone acquiring if, say, there are reasons they can't just use AWS RDS?
14
StreamBright 1 day ago 0 replies      
For analytical loads the following is going to be great:

 While PostgreSQL 9.6 offers parallel query, this feature has been significantly improved in PostgreSQL 10, with new features like Parallel Bitmap Heap Scan, Parallel Index Scan, and others. Speedups of 2-4x are common with parallel query, and these enhancements should allow those speedups to happen for a wider variety of queries.

15
hodgesrm 1 day ago 1 reply      
Impressive feature list. Glad to see logical replication is finally making it in.
16
knv 1 day ago 0 replies      
Any recommendations for best practices for scaling PostgreSQL? Really appreciate it.
17
hartator 1 day ago 3 replies      
I am considering more and more a move back from MongoDB to PostgreSQL. I will be missing being schema-less so much though. Migrations - particularly Rails migrations - left a bad taste in my mouth. Has anyone done the move recently, and what are your feelings?
18
mark_l_watson 1 day ago 3 replies      
I know that several RDF data stores use PostgreSQL as a backend data store. With new features like better XML support, as well as older features for storing hierarchical data, I am wishing for a plugin or extension for handling RDF with limited (not RDFS or OWL) SPARQL query support. I almost always have PostgreSQL available, and for RDF applications it would be very nice to not have to run a separate service.

I tend to view PostgreSQL as a "Swiss Army knife" and having native RDF support would reinforce that.

19
ams6110 1 day ago 1 reply      
A question on this statement, in the SCRAM authentication description: "stealing the hashed password from the database or sniffing it on the wire is equivalent to stealing the password itself"

How is that the case? That's exactly the thing that hashed passwords prevent. Of course, if it's just an MD5 hash that's feasibly vulnerable to brute-forcing today, but it's still not "equivalent" to having the clear-text password.
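For the old md5 method it really is close to equivalent, because the login response is derived from the stored hash rather than from the clear-text password, and a captured exchange can be brute-forced offline. A rough sketch of the idea (wire-format details simplified, illustrative values only):

    import hashlib

    md5_hex = lambda b: hashlib.md5(b).hexdigest()

    user, password = "alice", "s3cret"                 # illustrative only

    # What the database stores for md5 auth: md5(password || username).
    stored_hash = md5_hex((password + user).encode())

    # At login the server sends a random salt and expects a response computed
    # from the stored hash alone -- so holding the stored hash is enough to
    # authenticate ("pass the hash") without ever knowing the password.
    # SCRAM is designed to close this gap.
    salt = b"\x8f\x12\xab\x02"
    response = md5_hex(stored_hash.encode() + salt)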

20
bladecatcher 2 days ago 1 reply      
This is great because I couldn't go to production with earlier releases of logical decoding. Now we don't have to depend on a third party add on!
21
awinter-py 1 day ago 0 replies      
fascinating that the road to improving the expr evaluator is better opcode dispatch and jit -- same tradeoffs every programming language project is looking at right now.
22
qxmat 1 day ago 0 replies      
DECLARE @please VARCHAR(3) = '???';
23
MR4D 2 days ago 0 replies      
You guys are awesome - keep up the good work!
24
api 1 day ago 2 replies      
The feature I'd really love is master selection with Raft or similar and automatic query redirection to the master for all write queries (and maybe for reads with a query keyword).

That would make it very easy and robust to cluster pg without requiring a big complicated (a.k.a. high admin overhead and failure prone) stack with lots of secondary tools.

This kind of fire and forget cluster is really the killer feature of things like MongoDB and RethinkDB. Yes people with really huge deployments might want something more tunable, but that's only like 1% of the market.

Of course those NoSQL databases also offer eventual and other weaker but more scalable consistency modes, but like highly tuned manual deployment these too are features for the 1% of the market that actually needs that kind of scale.

A fire and forget cluster-able fully consistent SQL database would be nirvana for most of the market.

25
awinter-py 1 day ago 0 replies      
the join speedup for provably unique operands sounds awesome
26
mozumder 1 day ago 1 reply      
I could use a count of the number of file I/Os that each query takes, in order to optimize my queries further...
9
Lego Macintosh Classic with epaper display jann.is
464 points by andrevoget  3 days ago   91 comments top 22
1
antirez 3 days ago 5 replies      
Is somebody able to explain why certain e-ink displays are so slow to refresh while others are much faster? For instance my Garmin Vivoactive HR e-ink display, that is even capable of displaying 64 colors, is like an LCD display in terms of refresh rate apparently, you can't see the difference easily, while the one that was used to build this project takes a lot of time to even show a single frame (see the Youtube video where the display is presented, following the link Jann provided in the blog post). My best guess is that they use completely different technologies.

EDIT: Vivoactive HR uses a Transreflective LCD actually. This web site explains very well how it works:

http://t17.net/transflectiveTFT/

2
alexandros 3 days ago 2 replies      
Incredible work jayniz -- I guess everyone and their mother is suggesting improvements, are you preparing a new version with the rpi zero W and non-cut lego blocks?

Disclaimer: resin.io founder, we're so happy you chose resin for this awesome project ;)

3
redsummer 3 days ago 2 replies      
I'd love a pi with an e-paper display (larger than the lego Mac) which just booted into Raspbian CLI. Has such a thing been done?
4
aphextron 3 days ago 4 replies      
It looks like there's a business opportunity here for someone to make a really slick browser based LEGO editor that does cost estimates and orders all the correct components for you when you're finished. I'm curious how large the market for such a thing would be.
5
walrus01 3 days ago 3 replies      
The Mac 128k was not from 1988. In that time frame, The Mac plus was the first really usable model with a 20MB HDD.
6
jordache 3 days ago 0 replies      
I don't get it.. is it just using e-ink to display greyscale image? So that's just a screenshot of app chrome and the hello text?
7
mamcx 3 days ago 1 reply      
I wish an e-paper display suitable for use as a monitor (21" at least) existed

* I mean, not a prototype in a galaxy far away

8
TheRealPomax 2 days ago 0 replies      
Now it just needs to accept "something" through the slot to trigger "something" to happen on the screen.
9
rangibaby 3 days ago 4 replies      
Pics need a banana for scale

/E I'm serious though, it is hard to tell how large it is from the pics

10
doomslay 3 days ago 5 replies      
Cutting lego bricks? Eugh. There are specific bricks that would have worked exactly.
11
maaaats 3 days ago 1 reply      
How powerful is this small replica compared to the original hardware?
12
timvdalen 3 days ago 1 reply      
Looks really slick! Is there any way to interact with the system though?

Is the Pi actually running Mac OS or is it just a static image?

13
ohitsdom 3 days ago 1 reply      
Awesome project!

Would anyone else choose a different software solution rather than Docker with resin.io? I love working on projects like this but I've stayed away from Docker so far. Docker plus a third-party service to manage it seems like it could be overkill, but it obviously got the job done.

14
codecamper 3 days ago 1 reply      
That's awesome... but can you really play shufflepuck on eInk? I miss shufflepuck too!
15
NoGravitas 2 days ago 0 replies      
Now this really needs to be running Basilisk II and a System 7 ROM. Hook up a Bluetooth keyboard and mouse, and you're set.
16
AKifer 3 days ago 1 reply      
Probably that's how we will build computers in 20 years.
17
amelius 3 days ago 0 replies      
Does it emulate the Macintosh Classic?
18
mproud 2 days ago 0 replies      
Why would you put an e-ink display on this if it's not going to be used?
19
hilti 3 days ago 0 replies      
Pretty cool. I like it!
20
ge96 3 days ago 0 replies      
nice font
21
ruthtaylor123 3 days ago 1 reply      
Thats a good loooking website. Thumbs up!
22
jlebrech 3 days ago 0 replies      
no mac os?
10
Semantic UI semantic-ui.com
539 points by jhund  11 hours ago   167 comments top 45
1
jameslk 10 hours ago 9 replies      
I've always found it ironic that this library calls itself "Semantic UI" but doesn't follow the practice of semantic HTML/classes[0]. W3C suggests[1] that classes should be used for semantic roles (e.g. "warning", "news", "footer"), rather than for display ("left", "angle", "small" -- examples taken from Semantic UI's docs). So instead of giving a button the class of "button" it would be better to give it a class such as "download-book." The benefit of this is when it comes time to redesign parts of a site, you only have to touch your stylesheets instead of manipulating both the stylesheets and HTML. That is, so we don't fall into the old habits of what amounts to using <b> <font> <blink> tags.

0. https://css-tricks.com/semantic-class-names/

1. https://www.w3.org/QA/Tips/goodclassnames

2
jwr 9 hours ago 5 replies      
I use Semantic UI in production on https://partsbox.io/ and can list some upsides and downsides.

On the positive side:

* very complete, with good form styling, and lots of widgets you will use often, which is especially important for larger apps,

* the default theme is mature and has good usability, without the crazy "oh, how flat and invisible our UI is!" look.

* the class naming plays well with React (I use ClojureScript and Rum) and looks good in your code,

On the negative side:

* the CSS is huge and there is little you can do to trim it down,

* the JavaScript code is not Google Closure-ready, so it's a drag compared to my ClojureScript codebase: large and unwieldy,

* there is a jQuery dependency, so I have to pull that in, too,

* the build system is, well, strange, let's put it that way. I'm used to typing "make" and getting things built, while this thing here insists on a) painting pretty pictures in the console window, b) crapping node_modules in a directory up from the one I'm building in, c) requiring interactive feedback. I still haven't found a way to automatically build Semantic UI from a zip/tarball, and others seem to struggle with it, too.

Overall, I'm happy with the choice and it has been serving me well.

3
jlukic 10 hours ago 2 replies      
For people who are curious about theming, here is classic GitHub done entirely in Semantic UI: http://semantic-org.github.io/example-github/

(Click the small paint icon in the top menu to swap themes to see in native SUI)

I did a meteor dev night where I talked about some of the ideas behind Semantic UI, which might clear up some of the linguistic origins for the library and its ideas about language: https://www.youtube.com/watch?v=86PbLfUyFtA

And if anyone wants to dig really deep, there are a few podcasts as well: https://changelog.com/podcast/106 and https://changelog.com/podcast/164

4
TomFrost 10 hours ago 8 replies      
Semantic recently adopted my team's React adaptation as their official React port. It's lighter weight, eliminates jQuery, and all components are standard React components that can be extended or dropped in as-is.

https://react.semantic-ui.com/

5
xiaohanyu 10 hours ago 5 replies      
Hi, guys,

We have spent hundreds of hours building a new website with Semantic-UI for Semantic-UI: http://semantic-ui-forest.com/.

Semantic-UI is my favourite front-end CSS framework. I have built several websites with it, and I love it; developing with Semantic-UI feels delightful.

But compared with Bootstrap, the ecosystem of Semantic-UI is small, so we built semantic-ui-forest for you: http://semantic-ui-forest.com/posts/2017-04-05-introducing-s... .

On this website, we have ported 16 themes from bootswatch (bootstrap) to Semantic-UI (http://semantic-ui-forest.com/themes), and we have also ported 18 official bootstrap examples (https://getbootstrap.com/getting-started/#examples) and reimplemented them in Semantic-UI (http://semantic-ui-forest.com/templates/).

This is not advertising; we just think it may be helpful for people who are interested in Semantic-UI and want to give it a try.

6
sheeshkebab 11 hours ago 6 replies      
This doesn't work well on mobile - on iOS at least... scrolling is funny, flickering screens, jerky inputs. Loading feels slow too - and I'm on wifi.
7
tomelders 6 hours ago 5 replies      
Please stop with these things. They're never fit for purpose, and now there's another thing that looks - to non technical people - like a panacea for all development woes. Designers will never follow your constraints. Managers will never understand why this hasn't magically reduced our estimates by 90%. And yet again, it's just "developers being difficult" because there's a bunch of guys in India who say they CAN work with this for half the price.

This sort of stuff is worse than useless.

8
franciscop 10 hours ago 1 reply      
When I started Picnic CSS[1] there were few CSS libraries out there and the ones that were available were severely lacking. They didn't have either :hover or :active states, no transitions, etc.

Now with new libraries or modern versions of those, including Semantic UI, I wonder whether it's time to stop supporting it and switch to one of those. They are still different but with somewhat similar principles (at least compared to others) such as the grid: <div class="flex two"><div></div><div></div></div>.

What I want to say is: kudos. As I see jlukic answering some questions, how do you find the time/sponsorship to keep working on it? Is it a personal project, a company project, funded through some external medium, etc.? I see there's a donate button; do people contribute there a lot?

[1] https://picnicss.com/

9
malloryerik 11 hours ago 5 replies      
Might also check out Ant Design: https://ant.design/

It's integrated with React and there's a separate mobile UI for it. Ant is Chinese, with docs translated into English. Like China, it's huge ^^ I've just been fooling around with it today for the first time in create-react-app and it seems good so far. Haven't tried it on mobile.

10
aecorredor 24 minutes ago 0 replies      
Does anyone else feel that the documentation does not clearly explain how to create responsive layouts? I see the visual examples, but no clear code like in bootstrap's docs.
11
JusticeJuice 11 hours ago 1 reply      
I've done a few projects with Semantic UI. I think it's great for desktop based business applications. It looks slick, and has great animations. Plays nice with heaps of frameworks; I was using meteor.js.

However, don't use it on mobile - it will destroy performance.

12
keehun 1 hour ago 1 reply      
Am I the only one that passionately dislikes the menus that require clicking on the hamburger icon? I'm okay with it in phone apps when used tastefully, but it seems like too many websites are adopting it now for no good reason. This trend is especially evident among the online Wordpress/HTML template communities and creators...
13
aphextron 11 hours ago 0 replies      
32,000+ stars is insane, how have I not heard of this? Does anyone have production experience with it?
14
nkkollaw 7 hours ago 1 reply      
It looks great.

However, I've used it in the past and the CSS size is _HUGE_, with no way to reduce it. We're talking about > 500KB of CSS (in my case, at least). The JavaScript is extremely bloated as well.

Honestly, being that heavy I wonder how anyone can use it. If your site is to be viewed by mobile users, adding 500KB just to style a few elements is unacceptable.

I'd much rather go with Bootstrap. It has the added benefit of having the majority of front-end devs know it, and you can buy or use a theme for free and make it look great.

15
notliketherest 11 hours ago 0 replies      
I love Semantic UI React for my team's internal tools. So easy to drop in and use without having to think about CSS.
16
constantlm 7 hours ago 0 replies      
I recently dropped Bootstrap early in a project and switched to Semantic. I've been using it for a few months - so far it seems fantastic and much more "natural" to work with than Bootstrap. The gigantic set of components, and integration with both EmberJS and React make it even more amazing.
17
dandare 8 hours ago 0 replies      
Sidenote: the https://en.bem.info/ website (mentioned in the first paragraph of text about Semantic UI) totally irritates me. Would you be so kind as to explain, in a single sentence, what the purpose of your website/platform/framework is?
18
ludbek 3 hours ago 0 replies      
I have been using Semantic UI for a while now. Overall I love this framework. It has lots of essential components. I highly recommend it to lean startups who don't have enough expertise for designing and developing their own UI components.

But I do hate it for having weak and restrictive responsive queries.

19
flukus 10 hours ago 0 replies      
Wouldn't a semantic UI have things like a <menu> tag that was up to the browser to render?
20
cknight 8 hours ago 0 replies      
I chose Semantic UI for my project, https://suitocracy.com, if anyone wants to see another live example; it also uses the default theme.

Like others, I was somewhat concerned about the bloat - over half of my front page's total file size. But at about 250KB all up, I realised this was only around a tenth of what the average website throws at people these days. https://www.wired.com/2016/04/average-webpage-now-size-origi...

21
taeric 11 hours ago 0 replies      
I alternate between thinking this sort of thing is merely misguided, or merely a waste of time.

I want to like it, a lot. But I can't help feeling that this ship sailed years ago.

Simple UIs that are easy to interpret are a thing of the nineties. We left them because we evidently didn't realize what we had. Also, people like flashy things. A lot.

22
cyberferret 9 hours ago 1 reply      
I've been a Bootstrap user for years on all my web apps, but I'm thinking that perhaps, instead of re-learning things for v4, I should look at expending a similar amount of time and effort to learn something new.

I came across Semantic-UI last year and remember being impressed by it, but for some reason it just slipped my mind until I saw this post today. It seems it could work for another small project that I am thinking of starting.

Just to clarify - No reliance on jQuery with this framework, right? Has anyone else worked with Semantic-UI using Umbrella.js and/or Intercooler.js ??

23
baby 2 hours ago 0 replies      
I use it for small projects/pages just because it looks so good :) http://cryptologie.net/links

but I found it harder to get into compared to bootstrap/foundation.

24
ssijak 3 hours ago 0 replies      
Why is bootstrap 4 taking so long to get to a final version? All that waiting is pushing me towards other libraries. But I, being primarily a backend engineer, want a library that has a large community, because I am not so skilled with frontend UI and want to be able to find help easily.
25
inputcoffee 11 hours ago 2 replies      
What is the best way to think of this? Is this like Twitter Bootstrap and Zurb Foundation, or is it something else entirely?
26
mark_l_watson 2 hours ago 0 replies      
I have been using bootstrap exclusively for years. I will give this a try on a small throwaway project. I am concerned by the apparently large size of CSS and JS, based on other comments here.
27
vinayakkulkarni 7 hours ago 1 reply      
Just FYI,

https://www.zomato.com/ (one of the biggest in its industry) uses Semantic-UI :)

Love the Framework and Jack + all contributors effort in it :)

28
tmikaeld 8 hours ago 0 replies      
My company has been using SUI in production the past 3 years and it's been absolutely great, sure it is big, but that translates into flexibility and speed of development as well as having a production-ready framework that we know can handle anything thrown at it.

I've seen some mentions of jQuery, I don't think that's a bad thing at all - the framework uses the plugin system so fully that without jQuery, I'm sure the framework would be even bigger and less flexible. The added advantage is that other jQuery plugins work without adding anything.

29
debacle 11 hours ago 1 reply      
Seems like a next evolution of Bootstrap components. The trick with this type of stuff is always in how it plays with other frameworks. Can I drop into jQuery if I need to, and still interact easily with controls? Are there some obscene DOM skeletons in the closet that are going to bite me in the ass later?
30
dmoreno 7 hours ago 0 replies      
I love semantic UI. I'm using it now with my new project (serverboards.io) and it really was a huge time saver.

I would prefer it to use Sass, but there is a 'port' (https://github.com/doabit/semantic-ui-sass/tree/master/app/a...)

31
Mizza 10 hours ago 0 replies      
Semantic has replaced Bootstrap as my go-to web framework. I find it more natural, and the default components are nicer. I think it needs a larger theme ecosystem and more consistent documentation, but I appreciate all the work that has gone into it.
32
wishinghand 9 hours ago 0 replies      
I love the style and components of Semantic UI, but it's really heavy in terms of CSS file size, even once minified. I'd recommend running UnCSS or something similar on it before deployment.
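
For anyone who hasn't tried it, uncss's Node API looks roughly like this (file names here are made up; classes that Semantic UI's JS adds at runtime have to be ignored explicitly, so check the uncss docs for the exact options):

  // strip-unused.js -- rough sketch of trimming Semantic UI's CSS with uncss
  var uncss = require('uncss');
  var fs = require('fs');

  // Pages that exercise the Semantic UI classes you actually use
  var files = ['dist/index.html', 'dist/about.html'];

  uncss(files, { ignore: [/\.active/, /\.visible/] }, function (error, output) {
    if (error) throw error;
    // `output` is the stylesheet with unused selectors stripped
    fs.writeFileSync('dist/semantic.trimmed.css', output);
  });
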
33
tabeth 10 hours ago 3 replies      
Is it possible these days to have a fully interactive mobile application with just HTML and CSS? Have CSS animations gotten good enough? I'm talking about things like pure CSS accordions, modals/pop-ups, tooltips, etc.

Semantic UI is something I personally use for a few projects, but I really wish some of this stuff didn't require so much JavaScript and was more encapsulated, like Tachyons [1]. The main problem I've encountered when using Semantic UI is that it becomes difficult to change the prebuilt components significantly.

[1] http://tachyons.io/

34
Finbarr 11 hours ago 1 reply      
We used Semantic UI for Startup School (https://startupschool.org) and it has been awesome. Really happy with the choice.
35
chenshuiluke 7 hours ago 0 replies      
Semantic UI is really great! I suck at frontend design and it really helps me to make decent looking websites :)
36
macca321 4 hours ago 0 replies      
I'd like to find a framework like this that comes with platform-neutral (handlebars or similar) templates for each component
37
daurnimator 8 hours ago 0 replies      
Anyone able to help explain to me how to use this with e.g. a simple static site?

i.e. hand written HTML (perhaps compiled from markdown) with no JS?

The manuals for Semantic UI seem to jump straight into integrations with other frontend frameworks and build tools; but I don't want to use them.

38
xyproto 5 hours ago 0 replies      
Sounds great in theory, but the dropdown box on the front page is a list where only half the height of the letters is shown, instead of a proper dropdown box.
39
karimdag 10 hours ago 0 replies      
Personally I have chosen Semantic UI as my go-to CSS framework over Bootstrap. While Bootstrap performs better on mobile, SUI is way nicer/cleaner; it therefore eliminates the need to customize anything, which I think is one of the reasons someone would use a CSS framework in the first place.
40
kbr 10 hours ago 0 replies      
Checked it out, and it looks quite nice! Congrats on making such a nice tool. I'm a fellow CSS library author here, of Wing[1].

Everything seems fine, but as others have said, the scrolling is jumpy. Might want to fix that :)

1. http://usewing.ml

41
voidhawk 4 hours ago 0 replies      
Anyone else find the pages jitter when scrolling? At least on Safari (iPhone)
42
kuon 7 hours ago 0 replies      
I am starting a new project, and I am considering Semantic UI and Grommet. Does anybody have experience with Grommet?
43
zeeshanu 8 hours ago 1 reply      
The interface looks good but it is like a nightmare to remember every single class.
44
5_minutes 10 hours ago 0 replies      
I'm fine with Bootstrap though... another day, another framework
45
rfw1z 9 hours ago 0 replies      
What makes the Internet so exciting is the direct opposite of this.
11
React v15.5.0 facebook.github.io
443 points by shahzeb  2 days ago   200 comments top 22
1
acemarke 2 days ago 1 reply      
For those who are interested in some of the details of the work that's going on, Lin Clark's recent talk on "A Cartoon Intro to Fiber" at ReactConf 2017 is excellent [0]. There's a number of other existing writeups and resources on how Fiber works [1] as well. The roadmap for 15.5 and 16.0 migration is at [2], and the follow-up issue discussing the plan for the "addons" packages is at [3].

I'll also toss out my usual reminder that I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Specifically intended to be a great starting point for anyone trying to learn the ecosystem, as well as a solid source of good info on more advanced topics. Finally, the Reactiflux chat channels on Discord are a great place to hang out, ask questions, and learn. The invite link is at https://www.reactiflux.com .

[0] https://www.youtube.com/watch?v=ZCuYPiUIONs

[1] https://github.com/markerikson/react-redux-links/blob/master...

[2] https://github.com/facebook/react/issues/8854

[3] https://github.com/facebook/react/issues/9207

2
TheAceOfHearts 2 days ago 3 replies      
React team is doing an amazing job. I remember when it was first announced, I thought Facebook was crazy. "JSX? That sounds like a bad joke!" I don't think I've ever been so wrong. After hearing so much about React, I eventually tried it out and I realized that JSX wasn't a big deal at all, and in fact it was actually pretty awesome.

Their migration strategy is great for larger actively developed applications. Since Facebook is actually using React, they must have a migration strategy in place for breaking changes. Since breaking anything has such a big impact on the parent company, it makes me feel like I can trust em.

Heck, most of the items in this list of changes won't surprise anyone that's been following the project. Now there's less magic (e.g. React.createClass with its autobinding and mixins), and less React-specific code in your app (e.g. react-addons-update has no reason to live as a React addon when it can clearly live as a small standalone lib).

3
STRML 2 days ago 6 replies      
This is a big deal to deprecate `createClass` and `propTypes`.

PropTypes' deprecation is not difficult to handle, but the removal of createClass means one of two things for library maintainers:

(1). They'll depend on the `create-class` shim package, or,

(2). They must now depend on an entire babel toolchain to ensure that their classes can run in ES5 environments, which is the de-facto environment that npm modules export for.

I'm concerned about (2). While we are probably due for another major shift in what npm modules export and what our new minimum browser compatibility is, the simple truth is that most authors expect to be able to skip babel transcompilation on their node_modules. So either all React component authors get on the Babel train, or they start shipping ES6 `main` entries. Either way is a little bit painful.

It's progress, no doubt, but there will be some stumbles along the way.
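
For what it's worth, option (1) is mostly mechanical with the create-react-class shim; something along these lines (component name invented):

  // Before (deprecated in 15.5):
  var Greeting = React.createClass({
    render: function () {
      return <div>Hello {this.props.name}</div>;
    }
  });

  // After, with the shim -- no ES6 class transform required
  // (JSX still has to be compiled, as before):
  var createReactClass = require('create-react-class');

  var Greeting = createReactClass({
    render: function () {
      return <div>Hello {this.props.name}</div>;
    }
  });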

4
ggregoire 2 days ago 7 replies      
For those still using propTypes, I'd recommend to take a look at Flow as replacement.

https://flow.org/en/docs/frameworks/react
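
A minimal sketch of what that replacement looks like (prop names invented; see the linked docs for the full story):

  // @flow
  import React from 'react';

  type Props = {
    title: string,    // required, like PropTypes.string.isRequired
    tooltip?: string, // optional, like PropTypes.string
  };

  const Label = ({ title, tooltip }: Props) => (
    <span title={tooltip}>{title}</span>
  );

  export default Label;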

5
amk_ 2 days ago 1 reply      
The breakup of the React package into a bunch of smaller modules really puts packages that treat React as a peer dependency in a pickle. I have a component module using createClass that works fine and exports a transpiled bundle in package.json. I guess now we'll have to switch to create-react-class, or maintain some kind of "backports" release series for people that are still using older React versions but want bugfixes.

Anyone have experience with this sort of thing?

6
nodesocket 2 days ago 3 replies      
Big news seems to be removal of `React.createClass()` in favor of:

 class HelloWorld extends React.Component { }

7
hueller 2 days ago 0 replies      
This is a good move. Modernization with sensible deprecation and scope re-evaluation with downsizing when more powerful alternatives exist. Too often codebases get bigger when they should really get smaller.
8
smdz 2 days ago 0 replies      
I absolutely love how React+TypeScript setup handles PropTypes elegantly. And then you get the amazing intellisense automatically.

```
interface State {}

interface ISomeComponentProps {
  title: string;
  tooltip?: string;
  // ....
}

@ReduxConnected
export class SomeComponent extends React.Component<ISomeComponentProps, State> {
  // ....
}
```

9
uranian 2 days ago 2 replies      
What is it with the JavaScript landscape that keeps forcing developers to do things differently, with the penalty of your app not working anymore if you don't comply?

I mean, creating a new type of brush for painters is ok, but I don't see the need for forcing them to redo their old paintings with the new type of brush in order to keep them visible..

IMHO CoffeeScript and some other to-JavaScript transpilers are still a much better language than the entire Babel ES5/ES6/ES7 thing. But for some reason my free choice here is in jeopardy. The community has apparently chosen Babel and is now happily annihilating things that are not compatible with it.

In my opinion this is not only irresponsible, but very arrogant as well.

Although I do understand and can write higher order components, I still write and use small mixins in projects because it works for me. I also use createClass because I enjoy the autobinding and don't like the possibility of forgetting to call super.

Now I need to explain to my superiors why this warning is shown in the console, making me look stupid for using deprecated stuff. And I need to convince them why I need to spend weeks rewriting large parts of the codebase because the community thinks the way I write is stupid. Or I can of course stick to the current React version and wait until one of the dependencies breaks.

It would be really great if library upgrades very, very rarely broke things. Imagine if all the authors of the 60+ npm libs I use in my apps started breaking things this way; for me there is no intellectual excuse to justify that.

10
PudgePacket 2 days ago 1 reply      
Why do React and other js libraries emit warnings as console.error when browsers support console.warn?
11
sergiotapia 2 days ago 1 reply      
Awesome changelog with great migration instructions. Bravo to the React team!

Going to set aside some hours on Saturday to upgrade our React version.

I recently started to go in with functional components where I don't need life-cycle events such as componentDidMount. Does anyone know if React is planning to make optimizations for code structured in this way?

12
whitefish 2 days ago 3 replies      
I'd like to see React support shadow-dom and web components. Not holding my breath however, since Facebook considers web components to be a "competing technology".

Unlike real web components, React components are brittle since React does not have the equivalent of Shadow DOM.

13
Rapzid 2 days ago 3 replies      
Fiber is what I'm really waiting for. Not much official chatter about it, but looks like a 16 release?

They just removed some addons in master that many third party packages rely on, including material-ui. Hopefully these other popular packages can be ready to go with the changes when the fiber release hits.

14
baron816 2 days ago 3 replies      
I hate that the React team prefers ES6 classes. This is what I do:

  function App(params) {
    const component = new React.Component(params);

    component.lifeCycleMethod = function() {...};
    component.render = function() {...};

    function privateMethod() {...}

    return component;
  }

15
xamuel 2 days ago 2 replies      
Happy to see propTypes getting shelved. Too many people stubbornly use propTypes even in Typescript projects. Hopefully this change will usher in the final stamping out of that.
16
bsimpson 2 days ago 0 replies      
Of course, you'd need to use super appropriately, but I wonder if anyone's taken a stab at porting React mixins to ES2015 mixins:

http://justinfagnani.com/2015/12/21/real-mixins-with-javascr...
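
The pattern in that post is basically "mixins as class factories", which drops straight onto React components; a rough sketch (names made up):

  // A "mixin" is just a function from superclass to subclass
  const LogsMounting = (Base) => class extends Base {
    componentDidMount() {
      if (super.componentDidMount) super.componentDidMount();
      console.log(this.constructor.name + ' mounted');
    }
  };

  class Widget extends LogsMounting(React.Component) {
    render() {
      return <div>Hello</div>;
    }
  }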

17
Drdrdrq 2 days ago 1 reply      
Just curious: did Facebook change the license? AFAIK they can revoke permission to use from 3rd parties. Am I mistaken? If not, isn't this a huge risk for startups?
18
iLemming 1 day ago 0 replies      
I wonder how these changes would affect Clojurescript libs built on top of React, e.g.: Om.Next and Reagent
19
ksherlock 2 days ago 2 replies      
So... create-react-class is an unrelated node module. react-create-class (the correct one, I guess) is completely empty, other than the package.json.
20
aswanson 2 days ago 2 replies      
I cannot keep up. I just started learning react/apollo/graphql and I'm already out of date.
21
revelation 2 days ago 3 replies      
I still remember the times when warnings were actual likely mistakes in your code, not "we're adding some more churn, update your stuff until we churn more".

If you want people to always ignore warnings, this is how you go about it.

22
lngnmn 2 days ago 1 reply      
Why do people always end up with J2EE-like bloatware? There must be some pattern, something social. It perhaps has something to do with the elitism of being a framework ninja, a local guru who has memorized all the meaningless nuances and can recite the mantras, so one can call oneself an expert.

The next step would be certification, of course. Certified expert in this particular mess of hundreds of dependencies and half-a-dozen tools like Babel.

Let's say that there is a law that any over-hyped project eventually would end up somewhere in the middle between OO-PHP and J2EE. Otherwise how to be an expert front-end developer?

Google's responsive design looks like the last tiny island of sanity.

12
Farmers look for ways to circumvent tractor software locks npr.org
406 points by pak  15 hours ago   291 comments top 29
1
Sytten 11 hours ago 2 replies      
My dad is a farmer and I can assure you that this is a real problem. Every piece of equipment now has its own proprietary, closed-source and, most of the time, incompatible software. Plus, many of them don't get any updates after the product launch. When you are in a rush to plant or harvest you just can't afford to wait for an authorized dealer. And if they fail, good luck trying to find a replacement that is not 100x overpriced because it has been discontinued one year after you bought it. I tried repairing a GPS system once and it required a special serial cable + software which cost more than $100 just to update the driver...
2
jaclaz 4 hours ago 2 replies      
I have the feeling that somehow the actual need has been put aside for philosophical (or Open Source, etc.) reasoning (nice but not the original issue).

More or less what the good farmers are asking for (which is not about the code, the kernel or whatever; they are not "hackers" any more than the authorized JD technicians are computer experts or programmers or software engineers) is just access to the "database" of part serial numbers of the machine.

Loosely, the way it works (simplified) is a database where the (say) pressure sensor #42 has been registered (authorized) in the operating system as having serial number #0123456789. When the sensor breaks, after it has been replaced with a new (original or verified third party) sensor, you need to update the database telling it that the sensor with serial #0123456789 has been replaced with sensor serial number #2223334445 and - of course it depends on the specific part - possibly run a "self-test" program to verify that the sensor works properly and maybe tune/regulate it.

The farmers do not want the source code, they don't want to modify it, they don't want to "hack" anything, they simply want to be able to replace a part and have the thingy work.

Going back to software, let's talk of - say - Windows 7 (yeah I know that all the rage is about Windows 10 nowadays) and its activation, imagine that instead of having one month time to activate a new install either through the internet, the automated phone call in case it doesn't work and a support phone call for particular cases where the previous two options do not work, activation was:

1) Immediately mandatory (i.e. the OS wouldn't work until activated)

2) ONLY available through a local visit by an MS agent (9 to 5, Monday to Friday) at a cost of (say) US$ 100.00/hour + US$ 1.00/mile

3
nottorp 7 hours ago 1 reply      
I don't get why all this chat is about software, licensing and software safety.

The way I read it, a farmer can't change even, say, a brake pad (or whatever tractors use) without authorization from John Deere. I very strongly doubt that they want to mess with the software, they just want to perform minor maintenance themselves.

4
TaylorAlexander 13 hours ago 8 replies      
I think we'd all be better off if basically everything was open source, by way of eliminating intellectual property protections provided by governments.

As an alternate solution, those of us with engineering skills can work to create an open source economy with open source factories, computers, and products.

This would never be a problem nor would it be likely to happen if genuinely competitive options existed for farmers that were not locked down.

Another way we in this community can help is by helping smaller businesses learn the value of open source and get them using and creating it.

I believe with a sufficiently open source base in our economy, we can make great headway into eliminating material poverty.

I write a little about this on my personal site, here:

http://tlalexander.com/machine/

5
kvncombo 14 minutes ago 0 replies      
The big boys have obviously cornered the market. Is there any farming equipment company that provides more open access for maintenance and repairs? If not, why not? It seems there is an opportunity there.
6
throwaway_jddev 11 hours ago 18 replies      
Hey all, I worked on software for John Deere. This is a throwaway account for obvious reasons. Opinions expressed here are MY OWN. I no longer work for John Deere or am associated with them in any way.

I was part of one of the many teams that work on this software. Specifically I was part of John Deere's ISG division, also known as the Intelligent Solutions Group. The ISG division was (at the time) responsible for tying together various software built by OEMs, for building the central UI within the cabin, and for building various debugging and build tools. The team I was on consisted of about 8 very senior engineers, and I think there were around 20 total engineers working for ISG at the time (though I saw and knew only a handful of them). Now, when I say OEM integration, I mean suppliers and other John Deere divisions with their own teams mirroring ours. All told, I would estimate that John Deere has somewhere between 150-300 engineers working full-time on their codebase for their tractors.

Let me disabuse you of any myths. I have worked in software for 20 years. I have worked in large enterprises, and scrappy startups. This software is by FAR the largest, most complex codebase I have ever interacted with. Submission of any new code was seriously considered and reviewed before it entered production (sometimes to a pedantic degree), after which JD put all new code through 10s of thousands of hours of testing on production equipment. Production and release cycles take on the order of months to ensure that we don't kill people.

These are not riding lawnmowers. They are 30-ton combines, and 20-ton tractors tilling fields, with massive horsepower behind them. They have a real potential to end people's lives in the event of failure, and these tractors do (in testing) fail in spectacular ways. If a team of hundreds of engineers struggles with their codebase internally, Joe Farmer isn't going to have a fucking clue how to repair their software correctly.

Now should you, in theory, have the right to modify equipment you own? Sure. Absolutely. Hell, John Deere tractors run on open source software. But trust me on this, locking this down is a very good idea.

If you have the drive to make open source tractor software AND can make absolutely certain no-one ever dies from code you write, then go do it. Just keep in mind that the engineers that work on this shit really care about keeping people safe.

7
intrasight 10 hours ago 1 reply      
While this is a fascinating question in the context of tractors, it gets even more interesting in the context of cars, houses, personal electronics, light bulbs. The First Sale Doctrine is being eroded by DRM.
8
kccqzy 10 hours ago 1 reply      
This really reminds me of how Richard Stallman started GNU. It was because he couldn't modify the software on a printer he used.
9
tim333 13 hours ago 0 replies      
Previous discussion https://news.ycombinator.com/item?id=13925994 (177 comments)
10
userbinator 11 hours ago 1 reply      
Every time I read or hear about new developments in creating safer/more secure software, I am reminded of scenarios like this. Companies could use formally verified crypto and such to provably and completely lock out users by destroying all means of circumvention. In that sense, I think these secure technologies are like nuclear weapons --- extremely powerful, too powerful. Society in general seems to rely on some insecurity to maintain its freedom; so I believe anyone who advocates for more secure systems should also carefully consider all the negative effects which will appear if their vision comes true, and whether they are, however indirectly, locking themselves out.
11
mabbo 12 hours ago 1 reply      
It's not just the farmer paying for this: it's everyone who eats food.

The time wasted is lost productivity. The extra fees just for a software unlock is lost money. The farmer has to either charge more, or go out of business sooner. Either way, the cost of food rises.

12
swanson 11 hours ago 3 replies      
It seems so unbelievable to me that there are enough people that are a) John Deere equipment owners/renters and b) capable of debugging and patching issues in a C++ codebase for these stories to keep appearing.

Debates on the virtues of open source aside, is this actually the solution? Or is it a symptom of, say, poor quality software releases? or service visits that are too costly? or overloaded dealers who can't handle harvest-time support loads? I just don't believe that allowing people to tinker with the software is going to be the magic answer that these folks seem to think it is.

13
ivanhoe 4 hours ago 0 replies      
Couldn't they organize and sue the manufacturer for their losses due to tractor malfunctions in critical periods and for being prevented from servicing them promptly? I understand it's a bit naive, but you don't solve this by hacking around the problem, but by attacking the problem through the institutions of the system.
14
itchyjunk 13 hours ago 5 replies      
"farmers could damage the machines, like bypassing pollution emissions controls to get more horsepower."

Isn't this the problem with warranties? People could try to mod it, end up damaging it, and try to get it replaced under warranty.

I also don't fully understand this software. Is it just completely vendor locked? That sounds really unreasonable. It should allow for at least basic debugging and troubleshooting.

Is this software locking only in large $100k+ harvester-type equipment? Does the vendor have another reasonable explanation for doing this?

I wonder if the software designers for this equipment would have reasonable arguments for such locks, or if this is just a profit-driven decision.

15
peter_retief 6 hours ago 0 replies      
This seems to be a regular way to lock in customers; I think of inkjet cartridges, and even 3D printer refills use similar tricks. I wonder if there is a case to be made for making this illegal or optional.
16
andrewchambers 14 hours ago 6 replies      
I don't like the idea that we have to pass laws to force companies to make a better product. Why can't a company take the initiative and grab all the customers who value this?
17
arca_vorago 13 hours ago 2 replies      
I wonder, is there a GPL tractor software project out there yet?
18
squarefoot 6 hours ago 0 replies      
SmartTV and other appliances are closed too, so that users must purchase a new one when for example codecs become obsolete. Sadly, the closed source model is not just being used where there are safety concerns involved.
19
andai 5 hours ago 0 replies      
Can someone please explain what software has to do with repairing a tractor?

Edit: it looks like the physical components themselves are DRMed? Wtf?

20
intrasight 10 hours ago 1 reply      
The jet engines in a modern airplane have some analogies here. You can think of an engine as a PaaS (Propulsion-as-a-Service). A tractor is HaaS (Harvesting-as-a-Service). Our technologies have reached this level of complexity - they must be offered as a service. Cars will soon be Mobility-as-a-Service. Putting on my economist hat, I'd say that it is the ultimate manifestation of "comparative advantage". If JD abuses their monopoly position, then fix that through the courts and legislation, or by buying a competitor's product. Don't try to hack the terms of service. But JD and other "product" vendors need to make it clear that they are in fact selling a service.
21
cmurf 13 hours ago 0 replies      
Among the most successful cars were those with straightforward replacement parts, a de facto standard, and reasonably well documented and available maintenance manuals.

Tesla wants control over this, by literally renting the maintenance manual, and remotely disabling the car if repairs or parts aren't authorized. I don't expect the model 3 market will appreciate this business model. It will be a much more price sensitive market compared to the early models which has been relatively inelastic for repairs and resale.

Consider the x86 computer market, if every component had signed firmware, and the main system verified this signature in case of component replacement, and would fail to function at all if signature verification failed. What a pain...

Consider voting machines, proprietary hardware, expensive to design, maintain, audit, and go obsolete in as few as 1/2 dozen uses. Compare that to pencil and paper.

The older I get the more Darth Vader I become: "Don't be too proud of this technological terror you've constructed." (Let's say the Force is common sense in this metaphor.)

22
douche 13 hours ago 2 replies      
There's going to be a huge market in used pre-DRM heavy machinery. The purely mechanical/hydraulic versions of this stuff are virtually indestructible, with a little maintenance.
23
shawn-butler 13 hours ago 0 replies      
The motherboard/vice article NPR is blatantly ripping off here is much better in my opinion.

https://motherboard.vice.com/en_us/article/why-american-farm...

24
tbyehl 12 hours ago 5 replies      
I'm starting to wonder if these articles are driven by a PR firm paid by John Deere's competition. They're always about John Deere and only John Deere. Aside from the Motherboard / Vice article, they never provide any specifics about the maintenance or repair operations that farms are prevented from doing on their own.

With the Vice article, 2 of the 3 things they mention are modifying the tractors to operate in ways the manufacturer did not intend which could result in damage.

25
known 12 hours ago 0 replies      
26
watertom 11 hours ago 0 replies      
What's ironic is most of these farmers are republicans and voted in the people who enable this crap.
27
arkis22 10 hours ago 1 reply      
Everybody likes to feel taken advantage of.

If I was a business owner or engineer that built systems this complex and you asked me to not lock it down, I'd call you freaking crazy.

These are very expensive and complex machines, and you want my competition or some farmer who has no idea what he's doing to access and modify it?

No thank you.

Google keeps proprietary code. And that's for auto complete...

28
soheil 10 hours ago 4 replies      
Imagine, not so long in the future, if self-driving cars were forced to reveal their code because of the right-to-repair bill: 1. who, without a vast depth of knowledge in C++, etc., would be able to go anywhere near it? 2. even if they did, is it in their best interest or anyone else's if they tinkered with the code and made the car take undesirable actions?

Maybe buying a tractor should be replaced with leasing tractors, if they never want you to fully own everything in it. I think very soon there will be more and more need for a new way to determine what products are allowed to be sold partially with a secret OEM key.

29
notliketherest 13 hours ago 9 replies      
When we buy a piece of software, we own and "physically" possess a binary which we feel we can rightly take apart, modify, and mess with, because we view it as analogous to owning a toaster or stereo in the physical world. It's in our home, we can touch it, and we in essence control it.

Now the same is never said for software as a service. We buy subscriptions to services all the time but don't demand the ability to modify or control the software. It's defined in our agreement. Now it seems to me that the companies that sell these tractors have decided to pursue a model by which their software is more or less SaaS (providing encrypted updates over the air). Why is it that these farmers believe they have a right to modify that software?

13
Marc Andreessen: Take the Ego Out of Ideas stanford.edu
332 points by allenleein  2 days ago   232 comments top 32
1
6stringmerc 2 days ago 7 replies      
>So if technological change were going to cause elimination of jobs, one presumes we would have seen it by now.

...considering this statement was delivered while US workforce participation is at 30+ year lows, while productivity and technological change have made significant inroads during that time (ex: Macintosh 512k vs. iPhone 7), I think he's missing a large chunk of the, uh, big picture.

Then, contrast one of his well reasoned and very telling thoughts about the future:

>All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room? What if you get to spend that hour playing with your kid or reading the news or watching TV or actually working because you dont have to worry about driving?

Because in the United States, we should be working even while we are getting to work, because we don't work enough? SMDH. To me, the Working Class has plenty of reason to be cynical about this vision of the future..."playing with your kids in the car" time or not.

2
mvpu 2 days ago 7 replies      
"Take the ego out of ideas" is sound advice for investors, not entrepreneurs. Ego is a loaded word, but if you define it, in this context, as an irrational belief that you are right and the world will catch up, then it's essential for every entrepreneur. "New ideas" get no support. You're the only support. You have to strongly believe that the world will get there, do whatever it takes to convince them to get there, and survive long enough to bank on that moment. Without that ego in your idea, you probably won't survive long enough.
3
d--b 2 days ago 0 replies      
I would say more broadly "take the ego out of work".

In tech, we meet so many people who are emotionally attached to their work, who treat their production as 'their baby'. This is a terribly common counterproductive bias. It prevents them from:

- taking criticism productively: people "put their soul" in their work, and then someone tells them it's perhaps not the best way. Do hear them.

- assessing one's position objectively: people who are attached to their work often misconstrue their vision with the reality of the work. They tend to minimize weak points and emphasize strong points.

- delegating your job away: people infatuated with their work have a hard time giving it away. Necessarily, the delegate will screw it up.

That should be rule number 0 of all jobs: Be invested in the mission, not in the solution

4
BjoernKW 2 days ago 1 reply      
> All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room?

This is ridiculous. That's what our supposedly most innovative thinkers can come up with? Turning your car into a living room so we can have even more commuting (with all the wonderful side effects that come with it ...)?

What about eliminating the need to commute in the first place?

5
wyc 2 days ago 1 reply      
Re: tech creates jobs, Tyler Cowen's Average is Over has an interesting passage about automation:

"Keeping an unmanned Predator drone in the air for twenty-four hours requires about 168 workers laboring in the background. A larger drone, such as the Global Hawk surveillance drone, needs about 300 people...an F-16 fighter aircraft requires fewer than 100 people for a single mission."

It's well known that the industrial revolution created countless new jobs that were unimaginable at the time, a sentiment echoed in The Second Machine Age by Brynjolfsson. But how do you pick the winners that will bring the most jobs? Some say disruptive innovation, but it still seems like an open question.

6
pdimitar 21 hours ago 0 replies      
I'll never trust a VC on this topic, sorry. To me, "remove ego from X, Y, or Z" coming from a VC sounds a lot like "...so we have an easier time patenting your work behind your back and kick you out of your own innovation, for life".

Biased by me? Surely. But I haven't seen a benevolent VC in my life, and I've met 10-12 of them. Anecdotal? Of course. None of us knows them all so there you have it, anecdotal evidence.

I can't take this guy for real. Plus, he looks like he's in the rich bubble and "playing with your kids in the car" is a horribly misguided idea. So people should work even in their leisure / warmup time. Sure!

7
davidf18 2 days ago 1 reply      
> "Self-driving cars, for example, could potentially put 5 million people involved in transportation jobs out of work....."

On a work day, NYC subway provides 6 million trips. Think of all of the car drivers it is displacing. And then there are the buses! And that is in NYC alone. Just think of all of the drivers mass transit has already displaced throughout the nation!

Then there is intercity transit: think of all of the drivers displaced by planes, trains, and buses!

Self-driving trucks? Trucks have been displaced by trains, barges, container ships, ....

Cars, even electric ones, create air pollution, which impacts health, as well as greenhouse gases. Electric cars are charged from electric power plants -- most US electricity is generated by carbon-based fuels -- coal and gas.

Using Via, which transports multiple passengers [part of Manhattan, part of Brooklyn, Chicago, Washington DC] (or Uber Pool, for example), at least helps to reduce air pollution and greenhouse gas compared with single-passenger vehicles.

8
blahman2 2 days ago 0 replies      
1 hour commute is fine? No. There were all these visions about how with the advent of the Industrial revolution people would have to work half a day because that's how long it would take them to finish their norm. Instead, they were asked to produce twice as much.

Now we have our 'great' thought leader trying to convince us about the virtues of hard work and a 1 hour commute again.

How about "Put the type of Ego in your ideas that will remove the need for you to have a job in a few years"? Because jobs will be going away, and we don't need an even more hard core rat race in the US.

9
6d6b73 2 days ago 0 replies      
every year in the U.S. on average about 21 million jobs are destroyed and about 24.5 million are created, Andreessen says

FFS.. No. They are not destroyed and created. These jobs are just shifted from one company to another, and most of them are seasonal, or part-time jobs.

10
e2e4 2 days ago 0 replies      
Commuting to work in a self-driving car sounds like a faster horse carriage.

I wonder why telecommuting / virtual presence isn't a big part of his predictions.

11
graycat 2 days ago 0 replies      
So, Andreessen is talking about "ideas" -- hmm ....

His ideas seem to be (A) some large changes in the economy and society from (B) some exploitations of largely existing computer technology to meet some want/need previously unnoticed or infeasible to meet.

But, even for just (A) and (B), there is potentially MUCH more potential in ideas that Andreessen seems to ignore.

An example was Xerox: Copying paper documents was important. The main means was carbon paper. Xerox did quite a lot of engineering research based on some early research, IIRC, at Battelle. The result was one of the biggest business success stories of all time.

Andreessen doesn't discuss research ideas -- how to have them, pursue them, apply them, evaluate them, etc.

12
nadermx 2 days ago 0 replies      
I guess with the growing remote work movement this becomes harder and harder to do, since you spend less time with the peers you can "argue with" mentally; you lack time around them to get a better sense of how they think.
13
omegaworks 2 days ago 0 replies      
Mark Andreessen: Get ready for White Flight 2

So much for the short-lived renaissance of the city. Will millennials still want short commutes when they can pass the time in their cars?

14
bkohlmann 2 days ago 0 replies      
I was fortunate enough to be the one to interview him for this event. He's a remarkable intellect and kept me on my toes the entire time!
15
0xCMP 2 days ago 3 replies      
I do like the idea of almost a rolling office. I've always wanted a sort of vagabond life fueled by tech. There's so much out in the world and so many people. It's a shame that we're often stuck in the same places for such long periods of time.

If I become a remote/work-from-home/smb-owner I'd love to just be in a self-driving car doing stuff on the go and also changing where I am all the time.

16
zackmorris 2 days ago 1 reply      
"Take the Selfishness out of Profit"
17
lappet 2 days ago 0 replies      
> "Most of the good ideas are obvious, Andreessen says. They just might not work right away"

That seems like a gross simplification of the way things usually work. Saying good ideas are obvious sounds a little egoistic, which seems ironic, considering the title

18
debt 2 days ago 5 replies      
he's a media vc. facebook is basically his crown jewel and that's it. facebook/media is cool i guess, but i don't see how he knows much about anything else, such as robotics or ai.

just look at andreessen horowitz investments. many are largely media companies (buzzfeed, stack exchange). they've tried doing finance, which is a much bigger market, but like clinkle clearly imploded and coinbase is probably next (literally, transfers went down the other day, eek). so fb is still all he's got.

he hasn't invested in any big winners yet beyond fb/media. so why should i listen to this guy's advice (unless of course i'm building a media company).

19
ashray5 2 days ago 0 replies      
All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room? What if you get to spend that hour playing with your kid or reading the news or watching TV or actually working because you dont have to worry about driving?

I suppose one can find these answers from people who commute by company shuttles, trains or subways.

20
matt_wulfeck 2 days ago 12 replies      
I'm getting tired of all of the hot air coming from these tech oligarchs. They're so enriched by a tech boom and a decade of easy money that we worship at their feet. Their vision and goal for the future is simply more money for themselves at the expense of others.

"Guys look! An hour long commute is actually a good thing because you can spend it with your kids!" Why is it so hard to spend time with our kids now?!

I know that sounds harsh, but we seriously need to stop the hero worship in SV culture and begin building a society that benefits everyone, not a society that works itself to the bone just to eat the cake of a larger corporation and enrich the early investors. They will just as quickly dilute your quality of life as they will dilute the shares in your company.

21
wonderous 2 days ago 0 replies      
Video & Transcript: Marc Andreessen on Change, Constraints, and Curiosity

https://gist.github.com/anonymous/e40ca4a54cc35379d6052369f8...

22
smallboy 2 days ago 0 replies      
Wasn't this the guy who said India should still be colonized? Not taking advice from him.
23
0xCMP 2 days ago 0 replies      
We can't look/talk to each other at lunch without staring at our phones; why do we think we're going to spend quality time with people in a self-driving car? The car isn't a solution to that problem.
24
julius_set 19 hours ago 0 replies      
Hey billionaires are people too.
25
justinmk 2 days ago 1 reply      
> Marc Andreessen: Take the Ego Out of Ideas

Shouldn't it be:

> Anonymous: Take the Ego Out of Ideas

26
RichardHeart 2 days ago 0 replies      
Whoever wrote the title didn't follow its advice.
27
spectistcles 2 days ago 0 replies      
How about we take the ego out of Marc Andreessen
28
raspasov 2 days ago 0 replies      
Put science into ideas.
29
wonderous 2 days ago 0 replies      
(2016)
30
underwater 2 days ago 0 replies      
Is the attribution of the quote in this headline meant to lend it extra weight? Rather ironic.
31
LordHumungous 2 days ago 1 reply      
32
good_vibes 2 days ago 2 replies      
That picture makes me want to not keep reading but then I remember Netscape.
14
Uber said to use sophisticated software to defraud drivers, passengers arstechnica.com
340 points by dralley  3 days ago   223 comments top 42
1
tyre 3 days ago 17 replies      
I don't see the scandal.

1) Uber's upfront estimate is based on a naive calculation of getting from A -> B. From a software perspective, that makes sense. The consumer hasn't even committed to riding, so let's just toss out a ballpark figure.

2) If the consumer looks at the figure and says, "Yes, that's reasonable for transportation from A -> B", which they indicate by clicking "Request Ride", then they are agreeing to pay that price for the service.

3) The rider can verbally request a different route once in the Uber.

4) The driver is paid based on minutes and miles, via some formula that they've agreed to. The rider is charged based on an up-front calculation, which they can decide if it is worth it or not.

It sounds like the lawsuit is alleging that the rider is being defrauded by being taken on a different route than the one displayed at time of purchase.

I think this is silly because, to my knowledge, everyone taking an Uber is paying for the transportation and not any particular route. I.e. being taken on a specific route isn't what the rider is agreeing to pay for. Also, as noted in (3), the rider is always free to change the route.

Additionally silly because the rider seems to be alleging that they were defrauded by being taken by a more efficient route. There just doesn't seem to be any "harm" in what's happening here. I can understand the case if the user agreed to go from San Francisco down to San Jose, based on a route straight down the 101 highway, then, once they got in, was driven to San Jose through Los Angeles.

2
lithos 3 days ago 5 replies      
Uber has killed so much goodwill and their reputation so well that no one will be surprised at almost any accusation directed at Uber.

I know my first thought was "not surprising", and I imagine others will think the same.

3
Digit-Al 3 days ago 0 replies      
To me it seems the disconnect here is between the fixed fee on one side (the passenger) and the flexible fee on the other side (the driver).

Uber is, sort of, acting as an insurer and underwriting the cost of the journey. The passenger pays a fixed fee for a projection of how much the route will cost and the driver gets paid by how much it actually costs in driving time and distance. If there is some sort of unexpected delay and the journey takes longer then, presumably, the driver will be paid more than the passenger paid so Uber will lose out.

As with all insurers Uber charges a higher initial charge to act as a buffer and minimise the chances of losing money on the journey.

I can't really see any way of getting round this as long as the passenger pays a fixed price and the driver is paid a flexible fee.
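
As a toy illustration of that underwriting gap (all numbers and the formula are invented, purely to show where a spread can appear):

  // Upfront fare: priced off a projected route, padded with a buffer
  const rate = { base: 2.0, perMile: 1.15, perMinute: 0.22 };
  const price = (trip) => rate.base + trip.miles * rate.perMile + trip.minutes * rate.perMinute;

  const projected = { miles: 10, minutes: 25 };      // route shown to the rider
  const actual    = { miles: 8,  minutes: 20 };      // shorter route actually driven

  const riderPays   = price(projected) * 1.10;       // fixed upfront, with a 10% buffer
  const driverEarns = price(actual) * 0.75;          // paid on the actual trip, minus commission

  console.log(riderPays.toFixed(2), driverEarns.toFixed(2));
  // The spread between the two numbers is the discrepancy the lawsuit is about.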

4
mabbo 3 days ago 1 reply      
Both driver and passenger think they know the full truth of the matter for the financial transaction they're agreeing to, but they don't. There's implicit dishonesty in that, and when you combine dishonesty with money we call it 'fraud' usually.

But let's set aside the question of whether it was legal. Was it moral?

Software like this doesn't fall from the sky - management approved it, software teams wrote it and maintain it, and system tests probably exist to validate it works... how do those developers feel okay about this? How do they not feel like they're cheating people out of money? When your Mom hears about it and asks if you were part of it, will you spend 20 minutes giving a long-winded answer about how it was actually not a bad thing? That's a bad sign, man.

I'm reminded of the scene from 'Clerks' discussing Contractors[0]

[0]https://www.youtube.com/watch?v=iQdDRrcAOjA

5
tmh79 3 days ago 3 replies      
I think people misunderstand upfront fares. It's like buying an airplane ticket: the airline charges passengers the appropriate price to fill the plane, and it pays pilots a salary. Pilots who fly more profitable routes don't get paid bonuses because their passengers pay more. Same thing with UPS drivers, who get paid a fixed amount to drive packages around. The concept of "up front fares" seems to be widely practiced in logistics companies, and is probably a part of the transition as ride sharing companies become less like taxis and more like UPS/airlines.
6
ihsw2 3 days ago 0 replies      
The fare discrepancy can extend beyond longer/shorter route calculation -- there is also the issue of surge price disparity between driver and passenger. For example, the user would see 3x surge pricing while the driver would see 2x surge pricing, where the user is charged for 3x but the driver is paid for 2x. This is pure speculation and I have not witnessed this behavior but it's another way things can go wrong in Uber's favor.

Uber might be able to defend itself by saying that the data provided to the driver and passenger differ because of misconfigured caching and stale data being served to either party, but it's a moot point if Ars Technica has concrete and verifiable claims of methodical and programmatic fraud. Personally, I have witnessed being billed for $0 in-app after taking a round trip (effectively zero distance traveled) while the email notification showed the proper billing value, and there may be more instances of this "confusion."

7
basseq 3 days ago 1 reply      
As an Uber user, I'm unaware of this "upfront" pricing model. I thought the price charged was based on the actual time/distance (which, incidentally, they email me on the receipt). I know I can estimate the trip cost, but I thought that was just an estimate.

Am I wrong? What is this "upfront" pricing?

And is the reverse true? E.g., can I commit to some quoted price and then have the driver take some crazy route?

8
chickenbane 3 days ago 1 reply      
A bit off topic, but I was just in New Orleans for a fun trip. I normally use Lyft, but apparently Lyft pickups were not allowed at the airport so I used Uber (my last Uber ride was months ago).

After waiting at least 15 minutes in the pickup spot, my driver cancelled. Annoyed, I requested another Uber ride (which went fine). However, I was shocked to learn that Uber had still charged me a cancellation fee for the first ride and continued to argue it was appropriate when I protested.

I finally resolved it when I continued to press the issue, but I found the whole scenario incredibly customer-hostile. Along with the litany of gross Uber stories, I will continue to prefer Lyft!

9
enknamel 3 days ago 2 replies      
This just sounds like Uber quickly charges you for the worst case, since if they charged you for the best case and things went wrong, Uber would lose. No one can know what route will even be possible given how chaotic traffic and closures can be. The driver then gets paid by whatever route is actually taken. I don't really see this as an issue at all.
10
vinay_ys 3 days ago 0 replies      
UberGo is the most prevalent option in India. In UberGo you are shown a fixed final price at the time of starting the trip. This is supposedly calculated based on the best route you will take from point A to B, where "best" means cheapest cost, trading off shorter/longer routes against the traffic congestion on those routes that costs time.

But once the rider gets into the car and the driver starts Google Maps, it can show a different route due to changing traffic conditions. Or the driver can refuse to follow Google Maps and use his own judgement on which route is better (for him). In either case, Uber should be transparent and show both rider and driver the difference between what was initially calculated and what the trip actually cost. But Uber does not do this. On top of that, they also add another arbitrary/opaque surge multiplier. If they have to commit any fraud at all, they are better off doing that "fraud" by showing a different multiplier to the rider vs. the driver. Consumer protection agencies should insist that Uber at least be transparent and predictable in how it determines the surge multiplier, and that its distance/time metering is accurate.
11
itchyjunk 3 days ago 2 replies      
"27. In the overwhelming majority of transportations, the upfront price is the amount that a User is ultimately charged for the transportation services by the driver.28. When a driver accepts a Users request for transportation, the Users final destination is populated into the drivers application and the driver is providedwith navigation instructions directing him or her to the best route to the Usersdestination"

---------

It seems like the User sees a price X for a ride and accepts it. The driver might see a price X-y if conditions have changed. Doesn't that imply the User agrees to price X and the driver to price X-y? Uber might be able to adjust the price at the end, but can they be sued if both parties agree to it beforehand?

--------------

"36. Had Plaintiff and the Class known the truth about the Uber Defendants deception, they would never have engaged in the transportation or would havedemanded that their compensation be based on the higher fare."

------------

I am curious as to how they reached the conclusion that Uber was intentionally doing this. Did a bunch of drivers coordinate experiments with riders to see if there were price differences? Did they just log out and log back into different accounts to see the price differences?

I am neutral on Uber, so I feel it's natural to question whether Uber is seen as an easy target to go after since they are already in a legal swamp. IANAL, so I would love to read what people familiar with the law have to say.

12
lancewiggs 3 days ago 0 replies      
Uber has deliberately fostered a culture that thinks it's acceptable to rip off people and institutions. We should never support companies with behaviour like this.
13
davidf18 3 days ago 2 replies      
There should be an app where different services (Uber/Lyft/Gett/Via/Arro, which is Yellow Cabs) bid on a ride and the lowest bid gets the ride. That would help to fix this problem.

In NYC I had noticed that Uber was charging as much as Yellow Cab for some of the trips and I was surprised about their algorithm. Now I understand why.

14
legulere 3 days ago 1 reply      
In the thread about Uber retreating from Denmark, people asked why taximeters are sensible regulation. This is the reason. We need a trustable third party that ensures fair transactions. Uber cannot be this because they have their own interests. Regularly checked taximeters can ensure this at least partly.
15
jeffdavis 3 days ago 1 reply      
If there is a sudden traffic jam and the trip takes twice as long, then presumably Uber must pay the extra money to the driver. So are the drivers asking for some kind of "flat rate or variable, whichever is greater" contract?
16
mrow84 3 days ago 0 replies      
Here's an article about this issue from a couple of months ago, for anyone looking for additional information: http://therideshareguy.com/how-to-beat-ubers-upfront-pricing...
17
throwaway-blue 3 days ago 0 replies      
FWIW, Lyft's upfront pricing works the exact same way.

This is a non-story and just good product management. This feature solves the problem of presenting a surge multiplier to the customer. With a surge multiplier the customer has to guess how much it's going to cost. With this they just see the price and figure out if it is worth it or not. Reducing purchase friction and uncertainty increases demand and is good for drivers.

Plus, both Uber and Lyft are assuming risk with upfront pricing. They are guaranteeing a price. Sometimes it will be higher and sometimes it will be lower. The driver is accepting a different payment arrangement based on distance and time.

Both companies are classic middlemen and taking advantage of consumer surplus.

18
anigbrowl 3 days ago 0 replies      
If the allegations in this suit are true, fuck Uber forever. They should go out of business, their assets should be stripped from the investors and redistributed to the users, and Kalanick and a bunch of other people should go to jail for fraud. There is no way to overlook the persistent structural problems displayed by this company. Some things could be matters of opinion (like the values of their corporate culture and so on), but there are multiple instances by now of Uber actively choosing to circumvent laws or deceive people on a systematic rather than an occasional or ad-hoc basis. I've rarely seen such a clear case for revocation of a business license.
19
mtgentry 3 days ago 4 replies      
If they can prove it, this is super shady on Uber's part. Reminds me of Michael Bolton's money making scheme in Office Space.
20
sharemywin 3 days ago 1 reply      
If their business model is based on breaking local laws, why would anyone expect them to deal ethically with anyone?
21
Wissmania 3 days ago 2 replies      
I am not sure about the legality here, but I will say that I see this arrangement as good for both the driver and the passenger.

As a passenger I can know exactly what I'm paying ahead of time, and don't have to worry about my driver intentionally increasing the time/distance of a trip to charge me more.

As a driver, you are compensated on a time/distance basis, which means you don't have to worry as much about special requests/traffic/other issues messing with what you earn.

Uber is the one accepting the risk here, namely the chance that the payout to the driver exceeds the flat rate the passenger paid because of an extra-long trip.

22
elastic_church 3 days ago 0 replies      
All it takes is a tiny, tiny line in the terms of use stating that the fares shown can differ between the client-side apps.

The user isn't paying the driver, they are paying uber. The driver is paid by uber under a separate agreement.

So it honestly doesn't matter.

23
mark212 3 days ago 0 replies      
This is all very interesting, but Uber has pretty robust arbitration and class action waiver clauses in their contracts, both with the users and the drivers. Sadly, this will go to arbitration on an individual basis pretty quickly. I haven't seen Uber lose a motion to compel (except once in S.D.N.Y., but it was quickly reversed on appeal).
24
pnathan 3 days ago 1 reply      
If, at the same time, driver and customer are being shown different prices, as the article alleges, then that is a problem.

If the price is an estimate, and the estimate is revised based upon actual time and distance, then that is within reason.

That said, this problem would be much more tractable if the drivers were employees of Uber. Perhaps that's what should be done? :)

25
ashish10 3 days ago 0 replies      
Hmm... so in my area, Lyft is always 20-30% more expensive than Uber. I don't know how much these Lyft guys are stealing?
26
comments_db 3 days ago 0 replies      
At this rate, soon Uber will be a verb. Unfortunately, associated with all the wrong things.

a la "...just don't uber it..."

27
socrates1998 3 days ago 0 replies      
The issue is with the agreement with Uber's drivers: if Uber is changing the terms of the relationship without letting them know and agree to it, then that's a major violation of the contract, and Uber could see a massive labor lawsuit.
28
JCzynski 3 days ago 0 replies      
This seems entirely appropriate and not fraud. For a flat rate, you're charged based on a somewhat longer, non-ideal route, rather than the optimal route which you'll take if everything goes well.
29
pizzetta 3 days ago 0 replies      
I have to think this is not their mode of operation, but if it is, what the hell, Uber? If they actually do something like this as matter of course, goodbye. That's just unacceptable and very dirty.
30
savanaly 3 days ago 2 replies      
Are we worried that shady behavior that hurts consumers and riders might become the new equilibrium? I don't see how it could be. Whatever the machinations of Uber to artificially alter prices, no matter how sneaky, at the end of the day they'll lose drivers and riders to competitors if their margins drift too far from the economic cost of being the middleman. A driver doesn't need to know in what way he or she is being lied to or manipulated to know that they make less per hour driving for Uber than for [Uber's next best competitor]. Thus, I don't see how there could be an equilibrium where Uber is overcharging and still has a significant portion of the market.
31
AsyncAwait 3 days ago 0 replies      
I have no sympathy for Uber, but it does start to feel like someone is out to get them; the guys just can't seem to catch a break.
32
partycoder 3 days ago 0 replies      
I heard that Uber is more likely to give you surge pricing if you are running out of battery. Might be a rumor.
33
employee8000 3 days ago 3 replies      
No this doesn't happen. Some people may think that Uber charges the rider a different rate than the driver receives but it isn't the case.

A driver I had was sure this happened and asked me that on a fairly expensive trip (I think it was around $80).

I told him this didn't happen and I gave him my personal phone number and the amount of fare I was charged. I told him to check his daily numbers and if he didn't see this charge then to call me immediately. He never did.

34
Macsenour 3 days ago 0 replies      
I'm no lawyer, but I understood "fraud" to be misrepresentation. Who is being defrauded? The passenger pays one price, the company pays the driver and takes a bit of that fee. The company is defrauding the driver by not telling him the full fee the passenger is paying? If they word it as something like "a % of the fee and other fees", I don't see fraud. Not a lawyer, so feel free to correct me.
35
elif 3 days ago 0 replies      
So, to both parties, they are under-promising on an unknowable future event.

That is better than over-promising?

I don't see the issue here.

36
dullgiulio 3 days ago 0 replies      
Updated description for a start-up: "We are like Uber, except for the lawsuits."
37
lloydatkinson 3 days ago 0 replies      
Jesus Christ, Hacker News has a fucking erection for anything anti-Uber. Get over yourselves; they provide a taxi service that actually helps people.
38
Rainymood 3 days ago 0 replies      
We walk a thin line between deception and incentives.
39
awqrre 3 days ago 0 replies      
They probably will blame this on previous executives...
40
cwyers 3 days ago 0 replies      
Ars Technica Uses Sensationalist Headlines And Shallow Understanding Of Subject Matter To Defraud People Into Reading Their Articles
41
bdrool 3 days ago 0 replies      
Oh, come on.

I doubt it's all that sophisticated.

42
rdiddly 3 days ago 1 reply      
One good way to make sure the fare matches the distance would be to install some sort of device that measures mileage. The driver could start the device when the ride starts and turn it off when it's over. It could even calculate and display the fare for both parties!

Of course that kind of transparency wouldn't be possible unless all the vehicles had the device. So you'd probably need a licensing system for them. Which in turn could be overseen by a commission made up of industry reps and local government officials to ensure fairness and local control.

Wild ideas man, wild ideas.

15
Ask HN: Do you still use browser bookmarks?
410 points by ethanpil  2 days ago   424 comments top 257
1
Houshalter 2 days ago 11 replies      
Of course, and I'm surprised many people don't. Chrome handles bookmarks well, automatically syncing them between different machines you are signed in on. I used to have them nicely organized into different folders but now it's a bit of a mess... It's especially useful to deal with tab explosion. Control+D and you can just save all your tabs in a single folder (and never look at them again.)

The biggest problem is linkrot. As a rough estimate, 13% of links die every year, and it's quite possibly much higher than that. (https://www.gwern.net/Archiving%20URLs) Without the glorious web archive, bookmarks would be unusable. And I wonder how many people know about the web archive... Youtube-dl may also be useful if you want to preserve music or videos (despite the name, it works on almost every site I've tried it on, including audio sites). Someday I intend to script something up to automatically scrape all my bookmarks and make a local copy, but it seems complex.
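
For the "scrape all my bookmarks and make a local copy" idea, a minimal sketch might look like the following. It assumes Chrome's Bookmarks JSON lives at the default Linux path (adjust for your OS/profile), uses the third-party requests library, and only does a plain HTTP GET per page; it won't handle JS-heavy pages or media the way youtube-dl would.

    # Minimal sketch: read Chrome's Bookmarks JSON and save a local copy of each page.
    # Path below is the default Linux location -- an assumption, adjust per OS/profile.
    import json, pathlib, re
    import requests  # third-party: pip install requests

    BOOKMARKS_FILE = pathlib.Path.home() / ".config/google-chrome/Default/Bookmarks"
    OUT_DIR = pathlib.Path("bookmark_archive")
    OUT_DIR.mkdir(exist_ok=True)

    def walk(node):
        """Yield (name, url) for every URL bookmark under a bookmarks tree node."""
        if node.get("type") == "url":
            yield node["name"], node["url"]
        for child in node.get("children", []):
            yield from walk(child)

    roots = json.loads(BOOKMARKS_FILE.read_text(encoding="utf-8"))["roots"]
    for root in roots.values():
        if not isinstance(root, dict):
            continue  # skip non-folder metadata entries
        for name, url in walk(root):
            fname = re.sub(r"[^\w.-]+", "_", name)[:80] or "untitled"
            try:
                resp = requests.get(url, timeout=15)
                (OUT_DIR / (fname + ".html")).write_bytes(resp.content)
            except requests.RequestException as exc:
                print(f"skipped {url}: {exc}")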

2
Cyph0n 2 days ago 11 replies      
I have a ton of bookmarks, but I use them passively. From my experience, Firefox is the undisputed king of making sure anything you type in the address bar will be instantly checked against your bookmark collection.

For instance, maybe I'm looking for a PostgreSQL tutorial. I start typing "postgres" and one of the bookmarks I forgot about from several months back appears. This approach has ended up saving me a lot of time over the years. Another cool thing is when a bookmark that brings back memories pops up while I'm searching. If the site is still up, I get a free trip down memory lane :)

My collection is at least 9 years old now. I've been maintaining the same Firefox database over the years by migrating it manually from version to version. Now it's seamless thanks to Firefox Sync. I get my bookmarks on my PC, laptop, and my phone. I have an Xmarks account as a backup, and for cases when I prefer to use Chrome.

3
jcrites 2 days ago 6 replies      
I don't use browser bookmarks but I do use bookmarks through pinboard.in: https://pinboard.in/u:jcrites

With a paid feature called an archival account, Pinboard stores an actual copy of each bookmarked article, kind of like your own private Wayback Machine. It provides full text search over these articles.

I frequently save articles that I read so that I can refer to them later. It doesn't happen often, but once in a while, a few months or years later, I will want to access an article that I read, and I find Pinboard well worth the value for making it possible to actually identify the article and retrieve its content regardless of whether the original link is still around.

I find this especially useful because it is my habit to collect citations for various facts. When I find myself making a claim in conversation, I really want to be able to access the original source where I learned about the fact, and provide the evidence to back it up. Or to review the source to confirm that my memory of it is accurate. Or sometimes I want to share a useful article explaining some topic with a colleague or friend.

I do occasionally use browser bookmarks as a sort of clipboard or working set, for 5-10 links at a time. I use Google Chrome and it syncs bookmarks between my devices.

4
ikawe 2 days ago 5 replies      
I probably have 500 bookmarks. I never click on them though.

Instead I (ab)use bookmarks as a way to increase the weight of URLs in chrome's navigation bar autocomplete/suggestion algorithm.

E.g. if you find that you're going to a site's homepage and clicking three times, then instead, once you get to the actual page you want, bookmark it. You can even give it a more memorable name, like "standup hangout", and then watch it autocomplete from the address bar next time you start to type the URL.

5
kusmi 2 days ago 4 replies      
I used to. I now use Zotero to save whole pages onto WebDAV; from there a bunch of scripts peel the ads off, scrape the text, convert to PDF, store in a CMS and index for full-text search on Solr. I also hooked up Dropbox to do the same for one-click archiving from mobile. Since Dropbox and the WebDAV are shared between my partners and me, it's a convenient way to build a knowledge base. I'm also experimenting with hooking up Telegram and Slack to integrate everything for a no-hassle user end. The real pain in the ass is passing the URL itself, consistently, without insisting users use another third-party app.

*Forgot to mention the best part: the backend pools these full-text documents, cleans and parses them for NLP, then generates meaningful tags and organizes documents into an auto-generated folder hierarchy based on word2vec/doc2vec and content clusters. The whole thing runs on a dedicated server with two GTX 1070 video cards for the NLP work, which is training and re-evaluating constantly as new content pours in.

Altogether it was 2-3 years of work.
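
As a very rough sketch of just the clustering stage the parent describes (not the full Zotero/Solr pipeline), something like gensim's Doc2Vec plus k-means could group already-extracted page text into "folders". The sample documents and cluster count below are placeholders.

    # Rough sketch of doc2vec + clustering for auto-grouping saved pages.
    # Assumes `docs` maps doc_id -> extracted plain text; requires gensim, scikit-learn.
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from gensim.utils import simple_preprocess
    from sklearn.cluster import KMeans

    docs = {
        "postgres-tuning":  "tuning postgresql shared buffers and work_mem settings",
        "pg-replication":   "setting up streaming replication in postgresql clusters",
        "sourdough-basics": "how to maintain a sourdough starter and hydration ratios",
    }

    tagged = [TaggedDocument(words=simple_preprocess(text), tags=[doc_id])
              for doc_id, text in docs.items()]
    model = Doc2Vec(tagged, vector_size=64, min_count=1, epochs=40)

    vectors = [model.dv[doc_id] for doc_id in docs]
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)  # cluster count is a guess

    for doc_id, label in zip(docs, labels):
        print(f"folder_{label}/{doc_id}")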

6
threepipeproblm 2 days ago 0 replies      
At some point, it occurred to me that almost all of the bookmarks in my ever-expanding collection really represented "to do items" more than "reference items".

As others have said, most things can easily be searched as needed. But I was using bookmarks as placeholders, saying "I wanted to read x later", in most cases... sometimes other things.

So I started treating bookmarks as various categories of todos. I do have a reference folder, but it has less than a hundred items. I often use those only passively -- i.e. when typing into the address bar, the starred link will come up first.

All the other links are sorted into categories such as "files to download", "new articles", "new buyables" and so forth.

Now that I think of Bookmarks as deferred work, it has changed a lot of habits. My total number of bookmarks has slowly dropped, and I tend to handle more stuff as it comes, or not at all -- or at least to be more conscious of bookmarks as a cost.

An unexpected benefit has been a feeling of mental satisfaction, after closing a lot of semi-forgotten, open loops. I now think a big unorganized pile of bookmarks can represent a real liability, whereas if you actually go through all those links and delete the weaker ones you get a concentrated pile of goodness. You hit a point where you'd rather read your remaining bookmarks than most news feeds.

7
mr_spothawk 2 days ago 3 replies      
I have tons of bookmarks. Pro-tip: make a bookmark, edit the bookmark, set the title to "" <empty string>. Then you have its favicon as your site launcher.

http://imgur.com/a/mVFYh

Sometimes I make use of the "open all bookmarks in this folder" feature.

other times I use the bookmark to (as somebody else mentioned already) weight consideration of sites I'm interested in getting results from.

aside: at hackreactor, I worked with some folks on the beginnings of a chrome extension to grab your bookmarks, analyze the content of each site, and suggest new bookmarks when you open a new tab. the suggestions part was working already by the time I came around. then I got a job and that pretty much fell out of priority... heh.

8
INTPenis 2 days ago 2 replies      
No, and it worries me. I normally have a great memory; I speak several human and computer languages. I was raised in the era before search engines, when bookmarks were important.

But these days it worries me to say that I just visit the same three websites over and over. Aggregation websites with links and content.

Sometimes I find myself staring at the url bar not being able to think of anything to do because I've visited my three websites already.

Of course besides those three aggregators there are sites like google and stackexchange that I visit indirectly. And any blogs, forum and such that I might find through google.

9
bm98 2 days ago 9 replies      
I'm a little surprised that the majority of the answers here are Yes!

I help my parents and my kids work with bookmarks but I have none myself; and I was beginning to think that bookmarks were primarily used by non-technical people. I guess I was wrong!

Everything I need is a simple URL (like, my bank: usaa.com - why would I bookmark that?) or a quick Google search away. If I come across a deep link that's so important that I want to keep it, I email myself the link along with maybe a short description, and it will be searchable forever.

My lack of bookmarks fits with the rest of my "online personality". I have 14,183 threads in my work email inbox and I do not file emails into folders like most of my colleagues. I do not have the desire or the time to manage email folders or browsing bookmarks.

Also, the fact that I browse in a "clean" browser instance in SELinux that saves no history from instance to instance probably contributes to my lack of bookmark use.

10
abhinickz 6 hours ago 0 replies      
I use Chrome new "Bookmark Manager" Extension:"https://chrome.google.com/webstore/detail/bookmark-manager/g...

You can Access, Search, Import, and Export bookmarks from: "chrome://bookmarks/" URL and Pressing "CTRL+D" give you option to save to folder instantly.

Sometimes when I don't remember the bookmark folder location for example(https://news.ycombinator.com), I simply type something like 'news' or 'ycom' and chrome will show me some predictions which will be combination of Bookmark, Google search, and History with different icons or text.

Currently typing 'ycom' shows me five option1. https://news.ycombinator.com/ with Star Icon.2. https://news.ycombinator.com/news?p=2 with History Icon.3. ycombinator (Google Search)4-5. More History Link.

and If I don't remember anything I just type some random words on Google to get the web link!

I also use Google Keep Extension: 'https://chrome.google.com/webstore/detail/google-keep-chrome...to organize bookmark easily with labels and colors.

11
jacquesm 2 days ago 3 replies      
I do, but I've also come to rely on a plug-in called 'scrapbook'. It allows you to cut a snippet from a webpage and save it along with the url of the original.

Very handy, and it also protects somewhat against linkrot.

I've tied it to a hotkey to copy any bit that is highlighted to the currently open scrapbook. (shift-ctrl-b) without further notifications or interaction other than the keystroke. Super quick and it doesn't get in the way of continued reading.

12
Existenceblinks 2 days ago 2 replies      
My bank's URL is hard to remember, and searching for it on Google risks becoming the victim of a fake site. So anything fake-able is in my bookmarks.

Well, I've bookmarked a ton of URLs and rarely revisit them :( It's like having a camera: you take photos and forget them forever. It's a tool to help you forget things, not to remember them, sadly!

13
guilhas 2 hours ago 0 replies      
https://darkle.github.io/MarkSearch/

= Or =

Zim wiki with copy-pasted URLs, using
"Copy URL + Title" (Chrome)
"Multiple tab-handler" (Firefox)

14
morganvachon 2 days ago 1 reply      
I use them in three ways: My most used bookmarks live on my bookmarks bar in Firefox with the text removed, so they are just icons of the favicon.gif from the server, screenshot example here[1]. The lesser used ones live in the "Folders" folder under a tree style arrangement. The third method is via the "ReadLater" folder which contains links I didn't have time to fully read right away, and acts as a sort of manual version of Pocket or similar apps.

[1] http://storage7.static.itmages.com/i/17/0408/h_1491614673_38...

15
ams6110 2 days ago 0 replies      
I have a home.html file that is my browser default page. It has all the links I use regularly, organized in a few columns that I think make sense, but more honestly I use it mainly by muscle memory. It also has input fields for a couple of different search engines.

It's very simple, no javascript and just a tiny bit of CSS.

Any time I want to update it, add a link, etc. I just use a text editor.

16
JohnBooty 1 day ago 0 replies      
Hells to the yes.

To take things a step further, I'm not entirely sure how I'd function without them.

(I'm sure I'd find a way, but it would be an adjustment and a loss)

Firefox's fuzzy searching in the URL bar makes bookmarks awesome. My "workflow":

1. Bookmark anything I might need later by clicking the bookmark button. It presents a little tooltip-like popup that lets me edit the title and tags if I want to.

2. Sometimes I edit the title/tags and sometimes I don't. I make this call based on a quick judgement call on whether the default will allow me to find the article later. Suppose the article title is "MySQL Adds Froitz-Based Blammo Filtering." Well, that should suffice. But if the title is merely "10 Awesome New MySQL Features" then I might want to edit the title/tags to mention something about "Froitz-Based Blammo Filtering" if that's what I'm interested in. [1]

3. Then I usually never use the bookmark ever again.

4. BUT, sometimes I do. And Firefox's fuzzy match implementation lets me type "mysql froitz" and get a match on this bookmark 100% of the time. Chrome's matching is stupider & I'm not sure about Safari. Safari makes adding bookmarks less convenient than FF or Chrome so I assume finding them is harder. (Maybe it's not, I don't know)

I don't know about Firefox's bookmarking performance characteristics. But I know that I've been adding lots of bookmarks forever and it "just works" and feels instant. The fact that I've never had to think about it beyond that point is a compliment of the highest order. That's one of the many reasons why I remain a dedicated Firefox supporter.

__________

[1] This is just a theoretical example, of course. MySQL does not actually receive new features, awesome or otherwise.

17
interfixus 2 days ago 1 reply      
Of course I do. Some of them neatly stacked in labeled folders, some of them just higgledypiggledy in the great unsorted. I have my bookmark history on hand to way back before the turn of the century. A lot of those links have died, obviously, but it's a neat historical record of my foci, foibles and obsessions over the years.

My data belong either offline or on serverspace I control myself. There's nothing especially secret about it, but like my email (going back more than twenty years), I wouldn't dream of storing data like that online outside my own control.

The bookmarking, by the way, used to take place in Firefox. The ongoing self-immolation of that once mighty browser has recently sent me to the Pale Moon camp. And it's like coming home. I couldn't be happier, running on various Linux'es on the household machinery. The Chrome/Chromium world hegemony is one of those sad, scary things I shall never understand.

18
sleavey 2 days ago 3 replies      
I use the bookmark toolbar in Firefox, but I delete the text and leave the favicons so that I can fit ~50 bookmarks in one row. I also have folders containing bookmarks for particular categories, like "Work", "Stuff to watch", etc.
19
hueller 2 days ago 1 reply      
I use pinboard.

As far as native bookmarks go, I don't like that browsers have kind of black-boxed their bookmarks and require individual proprietary cloud sync for these things (I realize Firefox has a self-hosted option, but it's kind of outdated and last I checked the documentation was spotty. Even then it's only FF).

I know there's also the Netscape Bookmark Format, which is kind of sketch, but at least it's something. I tried writing something that exported on close, I'd sync them myself, then imported on open, but it was pretty hacky (edit: also browsers' exports are often very different, so there was some normalization there that was fragile). There should be a way to set up an endpoint to natively sync this stuff with an open protocol, and then all your bookmarks on all clients look the same. If you don't like that service, export someplace else and change your endpoint. Browsers should just be boxes for structured content.
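
One piece of that export/sync loop, normalizing a Netscape Bookmark Format export (bookmarks.html) into plain JSON, can be sketched with just the standard library. This is a simplification that ignores folders and attributes beyond the URL.

    # Minimal sketch: flatten a bookmarks.html export into {title, url} JSON records.
    # Stdlib only; ignores folder structure, ADD_DATE, icons, etc.
    import json
    import sys
    from html.parser import HTMLParser

    class BookmarkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.bookmarks = []
            self._href = None
            self._title = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._title = []

        def handle_data(self, data):
            if self._href is not None:
                self._title.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                self.bookmarks.append(
                    {"title": "".join(self._title).strip(), "url": self._href})
                self._href = None

    parser = BookmarkExtractor()
    parser.feed(open(sys.argv[1], encoding="utf-8", errors="replace").read())
    print(json.dumps(parser.bookmarks, indent=2))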

20
rmason 2 days ago 0 replies      
I have thousands of bookmarks. One thing I've wanted Google to do for the longest time since I started using their browser was to let me limit searches to my own bookmarks.

I've got a fair degree of organization with folders and sub-folders but still spend way too much time trying to locate a specific bookmark. I've learned to edit the subject line because often you're bookmarking something called 'home' or a cryptic Github path.

21
JanecekPetr 2 days ago 1 reply      
Additionally to what everyone said already, I have two other uses:

1) I have a set of bookmarks specialized for search. Chrome can do this without bookmarks, but Firefox needs them. I'm talking about bookmarks like this: https://www.google.com/search?q=%s&tbm=isch

Note the %s in the middle, that's where queries go. When you save this as a bookmark and add a keyword to it ("gg" in my case), you can then search images on Google like this:

- Alt-d (jump to url bar)

- gg fluffy kittens

I have a few dozen of these: Google, G images, G translate, G maps, local maps, Wikipedia English, Wikipedia Czech, various dictionaries, whois, wolfram alpha, grammar check, YouTube, Maven search... You get the idea.

2) A huge curated collection of bookmarks to Java libraries. Something similar to all those awesome-java collections that are lately popping up, but more complete, in my browser, indexed for search and neatly grouped into like a hundred folders.
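
For anyone unfamiliar with the keyword bookmarks in (1) above, the mechanics are just substituting the query into the %s slot. A tiny sketch follows; the keywords and templates here are generic examples, not the parent's exact set.

    # What a keyword bookmark does under the hood: swap the query into the %s slot.
    from urllib.parse import quote_plus

    KEYWORD_BOOKMARKS = {
        "gg": "https://www.google.com/search?q=%s&tbm=isch",             # Google Images
        "w":  "https://en.wikipedia.org/wiki/Special:Search?search=%s",  # Wikipedia
        "yt": "https://www.youtube.com/results?search_query=%s",         # YouTube
    }

    def expand(command):
        keyword, _, query = command.partition(" ")
        return KEYWORD_BOOKMARKS[keyword].replace("%s", quote_plus(query))

    print(expand("gg fluffy kittens"))
    # -> https://www.google.com/search?q=fluffy+kittens&tbm=isch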

22
kk_cz 4 hours ago 0 replies      
Yes, but only via the bookmark toolbar. If you create folders there, it acts like a pull-down menu that you always have on top of your browser, plus adding and categorizing a new bookmark is as easy as dragging the site's URL into the matching folder. I haven't seen the "regular" bookmark manager or "add new bookmark" dialog in ages.

About 5-10 of the most used links are simple one-click buttons; the rest are sorted into folders.

Most of these aren't content websites, but rather different webapps, or links into webapps (like a direct link to some intranet form that is used maybe 2-3x per month).

23
theknarf 2 days ago 0 replies      
Bookmarks are where links go to die. So yes, I do "use" bookmarks, but never revisit them. What I often do instead is either keep the tabs open or save them as notes in a note-taking app. I feel that the note-taking app makes it easier to organize stuff into "projects", as that's how I usually work.
24
aurelian15 2 days ago 0 replies      
I configured my web browser to clear my browsing history whenever I close it, and I mainly use bookmarks for fast auto-completion when typing in the address bar. With respect to organisation, I generally don't. I just use the "star" button to mark websites as favourites. I synchronise bookmarks across devices using Firefox Sync.
25
pmoriarty 2 days ago 1 reply      
I have thousands of bookmarks, and gave up putting them into folders years ago. Now I just tag them with every relevant keyword that I can think of when I make the bookmark, and search them that way.

Firefox's bookmark manager is very primitive, though, and I've long been meaning to migrate my bookmarks over to org-mode in emacs, where I have much more powerful searching, metadata, editing, linking, commenting, restructuring, and navigating options.

26
double051 2 days ago 1 reply      
Definitely! I still keep the bookmarks bar visible on Chrome and Firefox to have quick access to my favorite and most visited pages. All of the links have abbreviated names to fit more on the bar. #1 is Hacker News, of course.

I also still 'star' interesting links and categorize them into folders. Very handy to have Chrome sync the bookmarks across all of my machines.

27
ravenstine 2 days ago 2 replies      
I do but only in the sense that I use it as a sort of bucket that I throw things in and almost never look at again. Basically, no.
28
chamakits 2 days ago 0 replies      
For my personal use? No

For work, absolutely. I have a couple of top-level directories on my bookmark bar:

KeyLinks

InterestingTech

PrevWork

CurrWork-<2-3 words describing topic of work>-<Date started>

Under KeyLinks I keep well...key links, like the links to the wiki entry on how to setup dev environments, link to the Holiday calendar, link to the Jira dashboard showing my team's sprints, link to the company roadmap, etc. Pretty much just links that I'll have to refer to periodically.

Under InterestingTech are articles or things of interest I stumble upon on my day to day, but that I don't have time right now to look into.... This one is honestly a bit of a bottomless pit at this point...

Under CurrWork-* I keep all the links related to the work I'm doing right now. That means wiki entries related to it, StackOverflow links I had to use to fix something, Jira tickets, Jenkin jobs, internal web-app links, code review link, etc. You name it. If it's in any way related to my current work, and it's a site, it's there.

And when I'm done with the current 'CurrWork-*', I remove the leading 'CurrWork-' and move it to the bottom of PrevWork.

I have an awful memory, but with this, in combination with an emacs org-mode file for each 'CurrWork' iteration, I manage to be able to refer to things I've worked on in the past when people ask. After they give me a minute or two to get my bearings, of course.

29
lucb1e 2 days ago 0 replies      
I do occasionally, though usually Firefox' "awesomebar" will get me there anyway so there is not often a need.

My girlfriend does make extensive use of it for all sorts of things.

I think my mom uses it as well. My brother and dad I'm not sure about. Not sure what that says for a confidence interval, but many people still do. Then again, I'm sure there must be clusters of people (when clustering by who knows who) that never learned it's there, or who choose not to use it.

30
aerovistae 2 days ago 0 replies      
My chrome bookmarks are one of the 3 pillars of my cloud identity, along with my gmail and my dropbox. You could just say my google account and my dropbox.

I have hundreds of bookmarks, covering dozens of categories of research and reading. One of the largest subcategories includes hundreds of references that I may or may not need for future projects including software (stackoverflow questions, tutorials, bug solutions, framework and API references, optimization articles, in-depth guide articles, and so on), woodworking, economic/governmental/civic/legal research, fitness, electrical engineering and general circuity/wiring, real estate, recipes, piano repair, audio production, and so on. These are all intended to be kept until needed, most likely indefinitely.

Then there's a category for more temporary things that I need in the moment and am unlikely to need again, including news/blog articles I haven't gotten around to reading yet, solutions for bugs that I need to fix, torrents I haven't gotten around to downloading, and collections of references for small, specific projects that I won't need again afterwards.

So basically I use Chrome bookmarks as my personal address book for "things on the vast internet which I wish to return to eventually."

For major things I use daily, like YouTube or Gmail or Facebook, I don't bother bookmarking them -- for those I just use the address bar's semi-intelligent autosuggest: Ctrl+L to go to the address bar, then I just type g and hit enter, or y and hit enter, or f, etc. The only website I need to type out beyond 2 letters is twitter/twitch.

I guess this may sound odd, but Chrome has begun to feel like a natural part of my mind. The bookmarks and my gmail are an extension of my memory. My interactions with the net are an extension of my thought processes. I have seen other people make similar remarks about their phones.

31
slowkow 2 days ago 1 reply      
I use diigo. The free version lets you cache the page, annotate the page with highlights, and tag your bookmarks. The extension for Chrome works very well. They also launched some PDF annotation features, but I haven't tried that.

https://www.diigo.com/

32
skraelingjar 2 days ago 0 replies      
I still use bookmarks, but rarely go to them.

All of my bookmarks are resources, something for me to read or use at a later time. Some are for things I want to learn, some are for things I knew but have lost to time, and others are just.. out there. Like this: https://apps.fcc.gov/oetcf/eas/reports/GenericSearch.cfm I have no idea why I bookmarked that (or when).

Another example: this week I decided to learn Rust. I was listening to a podcast and the host mentioned rustbyexample.com, and when I visited the site I realized previous me had bookmarked it, thinking I would decide to learn Rust at some point and it would be a nice resource.

Maybe something that would look at my recent history and say, "Hey, X has been in your bookmarks for months and it's related to all the Y pages you've been visiting."

None of them are organized; I'd pay for something to automatically organize them.

33
aswerty 2 days ago 0 replies      
I'm a big fan of bookmarking but I found the browser features didn't fit my needs all that well. So I built my own browser extension which I really like. Hasn't taken off at all and development has kind of stopped for the time being (other work has put it on the back burner) so it's still just available on Chrome.

Using it I hit Ctrl+M (the shortcut to open it) and then I have my top 20 sites key bound. So HN is Ctrl+M -> h. All my other bookmarks can be accessed via a search feature which you can tab or "/" to get to on opening the extension. I hate lists/folders so my bookmarks are all hidden away behind the search function. The extension is built for either the mouse or keyboard so I have a lot of flexibility in how I interact with it.

The site for it is: www.devmark.io

34
juki 2 days ago 0 replies      
I use Emacs / Org Mode for bookmarks. I use a few different browser profiles, and usually always want to open a bookmark in a specific profile. Having all bookmarks in one place is much easier than figuring out which browser I need and then finding the bookmark in it. Plus this way I can use other Org Mode features with them too (adding any arbitrary notes/tasks to them, todo keywords for a reading list, refiling/archiving, etc.)

Basically I just add the properties URL and BROWSER to any entry I want to be a bookmark. I have the numpad key 4 bound to a command that opens the url (works either in the org file buffer itself, or from an agenda view). I also have the numpad key 1 bound to an agenda search for the tag :bm: (searching for a property is too slow), so I can easily get a list of all bookmarks, which can then be filtered by tags, category, top-level headline and regexes.

35
j0e1 2 days ago 0 replies      
Oh yes! I use them in FF and have organized them in folders with tags for hassle-free retrieval across devices.

I find them extremely useful for tutorials/learning new stuff which I know I need/want to learn but just don't have the time at the moment. Whether or not I end up coming back to them is a discussion for another day ;)

Most of my bookmarks are via HN.

36
unholiness 1 day ago 0 replies      
I'm surprised no one's mentioned the chrome feature that's mostly replaced bookmarks for me.

Just like aliasing commands in the terminal, you can alias web pages in Chrome's address bar. So, when I type "je" in my omnibar it has an autocomplete option "Jenkins", and pressing enter will take me to the URL I set for the Jenkins home page.

This feature is poorly named "search engines", and yes, it is extensible to putting extra strings at the end of that URL (which could be registered as a search term within that site), but I've been using it for years, and 99% of my use is simply mapping arbitrary strings to arbitrary URLs. It works amazingly for that. No mouse movement digging into bookmark folders required.

http://lifehacker.com/5815291/create-short-aliases-for-frequ...

37
vbezhenar 2 days ago 0 replies      
First of all, I use the reading list. It's a kind of bookmark in Safari. If there's an interesting webpage but I don't have the time or mood to read it, click and close. If I answered and want to check it later, click and close. Once a week I breeze through them and delete, so it won't stockpile like a mountain.

Second is Favourites (like the bookmarks bar), which I can access from a blank page. I save webpages that I visit often: news, important forums, etc. Also webpages that I'm currently using for work (e.g. Postgres documentation, if I'm working with it right now).

The rest is just a topic-organized list of webpages that I could use later. I'm not using it that often, but sometimes it might be handy.

38
navs 2 days ago 1 reply      
Personally, I've stopped bookmarking everything I find somewhat interesting. Now if I do find something and it will be used in the next week/month, it's often part of an existing project or idea, and so it gets thrown in a text file that's versioned.

I started doing this after accumulating a huge index of bookmarks spread across saved.io, Evernote, Google Bookmarks, iCloud, Firefox, Opera, txt files, Google Spaces and the other dozen or so bookmarking/collaborative knowledge sharing platforms showcased on Product Hunt.

I'm surprised there's no digital equivalent to the Hoarders TV show. I suppose thousands of bookmarks are less impressive than a garage full of old newspapers and rats.

39
tempestn 2 days ago 0 replies      
I use browser bookmarks frequently, but have few of them. I tend to use them primarily for utilities and other sites that I visit somewhat regularly. Basically the two main cases are 1) sites I visit so often that it's handy to get there with a single click as opposed to a couple of characters in the address bar, and 2) sites with strange urls and/or ones that I access repeatedly but infrequently, so might not remember where to find.

For anything I want to remember for later, or keep as reference material, I clip it to Evernote instead. I find that much more useful, as normally when you're looking for a piece of reference material you're more likely to remember some keywords from it than the title or where you filed the bookmark. It also means you can easily reference it even if the page is missing or changed in the future.

40
alexdumitru 2 days ago 0 replies      
I do, but I've always found it pretty hard to use them, because I forget what exactly I bookmarked and in which folder.
41
steiger 2 hours ago 0 replies      
I never really did.
42
nevatiaritika 2 days ago 0 replies      
I use the bookmarks bar to neatly organize my frequent links. And I kind of have OCD about a link being misplaced in the wrong folder. Of course, there are some folders I never visit again, and some I visit very, very frequently.

Also, for the habit of reading/skimming articles and often hopping from one URL to another, I use OneTab. Super efficient for collecting links on one page: https://chrome.google.com/webstore/detail/onetab/chphlpgkkbo... On the negative side, my work PC has over 800 URLs and my home PC about 1500+.

43
AceyMan 2 days ago 2 replies      
I treat URLs like any other document: I click+drag the favicon off the address bar and drop it in the target folder in explorer (file browser in Windows), which creates a dot-url shortcut.

Why keep resources in a unique silo? You wouldn't keep all your PDFs/Word/rtf/&c in a "<ext> manager app", so why do URLs have to be kept in one?

Also, this way they all get backed up since I keep all work docs on my network drive.

I'm surprised no one else follows this pattern, but I've never seen anyone else use it, nor have I won over any converts via its sheer awesome factor <shrug>.

FYI, works in FF and Chrome, but not Opera. (Bummer, because I like Opera generally, and it's my default Android browser.)

44
Mikhail_Edoshin 2 days ago 0 replies      
Yes, I use them a lot, but find the organizing tools pretty lacking. E.g. I program in Python and keep a bookmark for many Python modules in alphabetic subfolders so that I can quickly jump to the docs. It's rather boring to maintain this setup. I also dance tango and I'd love to bookmark many Youtube videos but here the tools are really primitive: there's lots of ways to organize this (by type of video, dance style, by principal figures, by dancers -- sometimes more than one couple -- by music maybe) and no easy way to do anything other than a silo of "all things tango".
45
cyberferret 2 days ago 1 reply      
All the time in Chrome. I have a fairly rigid structure in my Bookmarks folders, where I categorise all my hobby and professional interests. I like that it is synchronised across all my devices too.

I used to use Pocket a lot to do similar things, but categorising, and browsing the saved links was a little too cumbersome.

Plus I like that I can search just within my plethora of bookmarks if I want to reference something I know I saved a year ago. [0]

[0] - https://www.lifehacker.com.au/2015/02/quickly-search-just-ch...

46
zengid 2 days ago 0 replies      
I used to, but now I dump everything into Pocket. I would only say it's useful because it satisfies my need to hoard interesting information.
47
jdbernard 2 days ago 0 replies      
Absolutely. I trust Firefox Sync more than third-party services. I have a fairly comprehensive hierarchical structure. I only bookmark things I really care about, but even still I have hundreds of bookmarks in tens of folders. It's useful because it makes these sites show up immediately in the address bar as I start typing. API docs, for example: I just start typing the library/API name and the address bar autocompletes the path to the doc index, because that's what I've bookmarked. One step shorter than bouncing through Google.

I also bookmark articles that I think I'll reference in the future, that support or contradict something I believe strongly.

48
rwanyoike 1 day ago 0 replies      
Yes I do, I use bookmarks to save time. I have them organized in different folders, with a few for "temporary" bookmarks that I clear out regularly. I try to limit my bookmarks to landing pages, online tools and references - stuff I know I'll revisit, while I send article/news bookmarks to Pocket or Feedly (RSS).

A problem comes up when searching for bookmarks that don't have keywords in their titles. I use a WebExtension [0] to update my bookmarks with website descriptions, increasing the odds of finding them.

[0]: https://github.com/rwanyoike/bookmark-refresh

49
m-p-3 2 days ago 1 reply      
I still do, but I find Google Chrome's bookmarking system to be a bit too simplistic.

I mean, Google is usually strong on that front, with labels in Gmail and Keep, but for some reason they never implemented that in their bookmarks. It would make more sense than using folders IMO.

50
abrkn 2 days ago 0 replies      
I have hundreds of them in Chrome, ported over from all kinds of browsers and services over the years. I never use them. They just sit in the bookmarks toolbar and annoy me.
51
mrmondo 1 day ago 0 replies      
More heavily than ever. I have every bookmark in my bookmarks toolbar, all in folders such as 'home' and 'work' for local URLs, 'checkout' for things I've found but not researched, 'git' for URLs to my GitLab/GitHub projects, etc. Pretty much every fancy bookmark management service or replacement has massively disappointed me: overly complicated, or requiring background services (like Xmarks). The only reliable one is the built-in Firefox Sync. I also use a plugin that exports bookmarks to HTML on quit, into a directory inside my Dropbox.
52
ernsheong 2 days ago 0 replies      
I am currently developing PageDash, a personal web archive web-based app. The key difference is that I want to preserve the page exactly just as I saw it, with the help of a browser extension.

Reason for this is that I love saving pages that I encounter. I use Evernote Web Clipper a lot, but it frequently fails to keep the styling perfect. Secondly, a lot of archivers can't save pages behind an authentication layer.

To be notified when it launches, let me know here! https://goo.gl/forms/X1IBqaA03kekR2Db2

53
Myrmornis 2 days ago 0 replies      
No, but I use pocket. I put links to technical stuff in there that I intend to read, but I rarely look at it. Maybe I'll start remembering to after this thread. I noticed a while back that pocket allows you to dump them in a text format; I was intending to do that and store in a git repo or my gdrive, so that I would be more confident I'd have them for the rest of my life. I sort of don't really know where chrome's bookmarks are kept, which makes me less inclined to use them, but that's almost certainly lazy/ignorant of me.
54
khedoros1 2 days ago 0 replies      
Yes. Anything essential goes in the bookmark toolbar (mostly thinking about internal sites at work). I save a number of keyword searches (like "yt" for youtube, "wp" for Wikipedia).

For personal machines, I've got about 5 machine+OS combinations, with 2-3 browsers on each that I use for various things. I chose not to set up sync accounts in any of the browsers (I've already got too many damn accounts to manage, thank you!). So I sometimes save a bookmark if I'm in the middle of a long series of pages about something, as a sometimes-completely-literal "bookmark".

55
stevewillows 2 days ago 0 replies      
In my bookmarks bar I have 'General', which breaks down into about fifteen categories. Each of those is broken down into several folders -- it's very organized. I go through it once a year or so to clear out links I'm certain I won't need in the future (usually project ideas).

I use Bookmark Box to sync with other browsers by way of Dropbox. It's not perfect, but it works.

For the rest of the bookmarks bar I have my most common links -- a few spreadsheets (in Drive), some web apps, and a folder for forums I frequent. I also have a bookmarklet for Pepperplate, which I use on a regular basis.

56
dorfsmay 2 days ago 0 replies      
I use wikis, because "links" only make sense in a given context. So at work I add noteworthy links to the local wiki, and for personal use I keep a series of text files of notes on particular subjects, where I add links.

I do, however, use bookmarks on my laptop to point to locally installed documentation, such as the full Python library docs, in order to be able to access them when offline (e.g. on a train).

57
mud_dauber 2 days ago 0 replies      
My bookmarks bar has my top-30 list (mail, feedly, HN, ...) I tried a folder system but found the amount of overhead to be WAY too cumbersome.

I capture stuff to read in Pocket. If I eventually find the link to be valuable (news: almost never; how-tos: much more often), I move it into a Google Keep "PostIt".

The value-add is that I can add pics, notes, links to Dropbox docs, etc in the same PostIt, and organize them as I see fit.

58
taude 2 days ago 0 replies      
Installing the Quick Tabs [1] Google Chrome plugin has completely changed my use of browser-based Bookmarks: with cmd-e an intelligent search box pops up giving me instant access to my history or bookmarks folder.

[1] https://github.com/babyman/quick-tabs-chrome-extension

59
richardw 2 days ago 0 replies      
Yes, lots, in Chrome. I have folders directly in the shortcuts bar, with e.g. "Money", "News", "Proj" and current projects usually have their own folders. One I use a lot is "Topics", which has many subfolders for e.g. "Analytics". I use the Other Bookmarks list for things I use regularly but not often (e.g. once a month).

I definitely would like some improvements. My "Topics" folder is huge and I don't really need it loaded each time the browser loads. Just save it in the topic and let me find it later. Also, if Chrome has my shortcuts, why doesn't Google highlight those in search results? And maybe auto-link the saved shortcuts to the terms I used when finding them in the first place. There's a lot of meta data in that action - search-search-search, save. Google knows quite a bit of my thought process (via keywords and sequence), so use that.

60
chrsstrm 2 days ago 0 replies      
Literally thousands...

Organized in ~150 folders all with subfolders. Ditched the bookmark services when Chrome started syncing data across devices. There are three features I would love:

1. The search box in the manager does a full-text search of the content of the bookmarked page instead of just the title (as of the time it was marked, not updated).
2. The ability to search by URL with regex.
3. Show the date I bookmarked the page.

61
jbmorgado 2 days ago 0 replies      
I do bookmark them, but I end up almost never using them.

The only thing that actually kind of works for me is to bookmark stuff in "sessions" and then open all the tabs the next time I want to work on something. For instance, I was trying something very specific involving deep learning at my job; then I had to do some actual work to prepare an article and I put that DL project aside. So I made a bookmark folder with all the open tabs and closed the window. Now I've gone back to that DL experiment, opened all the tabs again, and that kind of worked for me. But this is not really a reference system, it's just a "sessions" system.

As for the traditional role of bookmarks, I don't think they will ever work for me without a single main thing: Full text search.

Every few months I try to clean the mess my bookmarks have become since I can't find what I need and a few months after everything is a mess again.

The tag or folder system simply doesn't work for me. I keep too much stuff to check later as ideas, and then I can't really find it, because I have this "check later" folder with dozens of separate ideas in it and I can't really just find that one idea I had.

The solution seems to be some kind of full-text search, where I can describe in a fuzzy way what I was doing, something like "python, maps, names, germany", and go back to that post I remember where they were doing some analysis of the "last names of people in germany in different regions" and whose name, of course, I don't remember anymore.

I reckon it's a very specific problem that only makes sense for people who think the same way I do, but I'm also quite sure there are a lot of us like that, and that this is the kind of solution that would at least help us a bit in using the bookmark system.

62
mspaulding06 2 days ago 0 replies      
Currently I use a variety of techniques for managing content I would like to revisit on the web. I use bookmarks mostly for often-visited websites, and I always use syncing if possible with Chrome and other browsers that support it (Brave does now!). I've also discovered some browser plugins that really help with this. OneTab is absolutely indispensable and will store all of your currently open tabs so that you don't have to keep them open. That's great when I've got several tabs open on a single subject that I want to come back to later. I've also started to use Pocket for most blog posts and random things that I want to read some time in the future but can't right now. The nice thing is that it is accessible from all my devices, so I can put links into Pocket from my phone and then go to them from my desktop computer.
63
sacado2 2 days ago 0 replies      
Yes, but only for

- tabs I haven't read yet, when I need to restart my browser for some reason and want to be sure the tabs won't be lost; in this case, those bookmarks are disposed of as soon as the browser has restarted

- content I'm pretty sure I'll want to come back to before long

I only use the bookmark bar, so I have to limit what I save. When it gets too big, I clean it up.

64
hkjayakumar 2 days ago 0 replies      
Yes, I do. As a university student, it's really useful to be able to view different course webpages, schedules, important dates, etc - all links that I would access frequently (almost every day)

Apart from that, I also use browser bookmarks for links I want to (or have to) view in the near future. It acts as a constant reminder since it's always visually present.

I use Pocket for articles/links I can afford to view during my free time.

65
vojant 2 days ago 0 replies      
Not anymore.

I just google everything when I need to find something. In the past I used bookmarks to track blogs I follow, but these days there is too much content, so I just Google or search HN when I need something. I tried going back to bookmarking stuff to save for later, but I just never got around to going back to them.

66
AJRF 2 days ago 0 replies      
I do for sure. I do this thing where I save a bookmark without its title, so it just has a little favicon on my bookmark bar, and it is very nice and clean.

I also have folders for Work, Blogs and one for improving myself as a developer. I love browser bookmarks; I'm not exactly a power user, but I would miss them very much if they were taken away.

67
girishso 1 day ago 0 replies      
After accumulating thousands of bookmarks on different services, I decided to build http://tweetd.com. It indexes the links you tweet.

Edit: I've realized, though, that just full-text searching through bookmarks won't pop the most relevant links to the top.

68
vorg 2 days ago 0 replies      
I use the bookmark ribbon in Chrome as a "to visit soon, or return to" list. Stuff I would normally look through the history for.

My most desired feature in Chrome is being able to right-click a link and add it to my bookmarks. Presently, I have to open the link in a new tab/window (using right-click, then T or W) then go to the tab/window, click on the bookmark star, then close the page (i.e. before it finishes loading). If I want to avoid loading a page I don't want to look at right now, I'll right-click on the link, then E to copy the link to the clipboard, then go to a new tab, bookmark it, right-click on the new blank bookmark link, then E to open the editor dialog, type in some suitable title, tab to the address text box, paste in the URL, then click Close. Either way, it just isn't simple.

69
superasn 2 days ago 0 replies      
I do, especially because Chrome syncs them everywhere, including my mobile phones, laptop and desktop.

It's also useful to bookmark in browser because the address bar gives priority to your bookmarks over auto-complete and history.. So it's much easier to access those sites too.

P.S. I organize them by folder, so it's most likely design -> landing pages -> dark -> bookmark or personal -> finance -> bookmark, etc.

70
susam 2 days ago 0 replies      
I don't use browser bookmarks.

I save my bookmarks in a text file, commit it and push it to a remote Git repo. I have this Git repo cloned on every system I use. Since the Vim editor is part of my daily workflow, visiting one of the URLs in the text file is a simple matter of pressing `gx` while the cursor is on a URL.

This is useful to me because I have this repo cloned on every system I use for various reasons, e.g. it contains my daily notes, productivity scripts, etc. So it makes sense to keep all my bookmarks also in this repo. Also having the bookmarks in a text file provides me the flexibility to add arbitrary notes/comments for each URL I save. The fact that I don't have to use the mouse and I can use Vim search or motion commands to find a bookmark is a bonus.

71
gkya 2 days ago 0 replies      
I use them quite a bit. They are the only completion source I allow for firefox, so when I type something other than a URL on the URL/search bar, I either hit the down arrow and select a matching bookmark, or hit enter and run a search.

Structurally my bookmarks are an ever growing list; they all go into the bookmarks menu in Firefox. I occasionally tag them too. Most bookmarks are part of my "online library": I keep them in case I ever want to send a link to something I liked to someone, use them in an article, or maybe read them again. I have a separate read-it-later list in an Org-mode file.

Some of the bookmarks are shortcuts, mostly to different dictionaries in WordReference, to the Collins English dictionary, and to various websites I browse often, like Reddit, HN, my school's, and my own website, which I check every so often when I upload something new.

72
tomfitz 2 days ago 0 replies      
No.

I use Google Keep to store URLs, typically with some note, for example:

* "Specialized Sirrus bike rear derailleur. Model number: DO20. URL: https://www.amazon.co.uk/dp/B0047D192E/"

* "2015-03-01: Visited doctor. They referred me to physio, and told me to read http://www.arthritisresearchuk.org/arthritis-information/con... for exercises/stretches to relieve pain."

Google Keep supports tagging and search, so I can usually find things. For things I want to read later, I either put them in Pocket or use Google Keep's reminder functionality.

Chrome integration looks decent (save web pages as an image), but Firefox integration is lacking.

73
smonff 2 days ago 0 replies      
All my bookmark collections inside browsers end up turning into a horrible stack of junk: I don't know how to get rid of the old stuff. You know that something which interested you at some point probably won't be interesting later, but you never know...

With the intelligent address bars of today's browsers, you can search for and find most of the recent stuff that you used, and sometimes even very old stuff.

I don't use bookmarks anymore, and I feel like the bookmark bar is most of the time a useless distraction.

If there are things I really want to keep, I post them to a public Shaarli[1] instance, where I force myself to use tags, a description and an informative title.

[1] https://github.com/shaarli/Shaarli

Edit: removed markdown

74
iand 2 days ago 0 replies      
Yes. I use the bookmark toolbars in ff and chrome with icons and no text for common pages (like this http://imgur.com/a/uZBB8).

My only other use is for groups of pages that I'm referring to or want to come back to as part of a project. I usually delete them after a few weeks.

For long term bookmarks I use pinboard.in

75
zmix 1 day ago 0 replies      
Absolutely! With Google you will always be searching for a needle in the haystack. With bookmarks, you're at least searching for a mouse in the haystack. And with the history set to "not expire", that mouse may even grow to the size of a dog.

I stopped categorizing my bookmarks into folders a long time ago, however; they just end up in a single folder. I do love tags, though, which I use for important stuff that I want to distinguish from other important stuff.

76
pritambarhate 2 days ago 0 replies      
I use bookmarks a lot and using Chrome I sync them on multiple machines. Yet, I find that bookmarks management is a neglected feature in Chrome. I have hierarchies of bookmarks, and while creating a new bookmark it's very hard to find the appropriate folder, especially when I remember the name of the folder vaguely.

If any Chrome Developer is listening:

It would be amazing if there were some form of autocomplete for specifying the folder for the bookmark. Right now on Mac, finding the folder in the drop-down is very hard: to find a folder, you have to type fast. I almost never find the right folder if the folder name contains a space, because if you pause briefly before typing the next letter, it starts matching from the first letter of the folder names again.

77
mastax 2 days ago 0 replies      
Bookmarks manager from Chrome is quite good, I think: https://chrome.google.com/webstore/detail/bookmark-manager/g...
78
rdpollard 2 days ago 0 replies      
I use bookmarks to keep track of the hundreds of client-specific subdomains on a site I manage for work. I start typing the name of the client in Chrome's search bar and I've got instant access to the URL. I can't think of a better way to handle that (though I'm open to suggestions if you're using something better).
79
hellofunk 2 days ago 0 replies      
Unfortunately yes. And they are a mess. I have different bookmarks in Safari and Chrome, and on desktop and mobile. I have them synced between devices but the UI for navigating them is completely different and this doesn't help me so much.

I have so many Chrome bookmark folders that I don't know where anything is. The only way to find one is to just search in the Bookmark Manager. It sucks.

It also doesn't help that my preferred browser differs from one device to another.

I hope you are asking this question because you want to do something about this State of Affairs. I would gladly enjoy a good service that solves this problem in some innovative way that my brain cannot come up with.

80
theonemind 2 days ago 1 reply      
I use firefox. It's easy to bookmark things by clicking the star. I almost never pick them from menus, but you can limit awesomebar searches to bookmarks by typing "*", so I can find, say, all of the interesting github projects I've ever bookmarked by typing "github *"
81
fela 2 days ago 0 replies      
I stopped using bookmarks after I realized I wasn't using them, thanks to a combination of:

1. Autocompletion: for any website I use regularly I just write a substring of the url or Title (Firefox does this especially well). This covers probably 70% of my browsing.

2. Google. This might take slightly longer in case I want to find a specific article I read some time ago, but it still seems like less effort than having to bother with bookmarks; in my experience, either you have a very long list of unsorted bookmarks, in which it's hard to search, or you have to spend time sorting them into sub-folders.

Now that I think of it, the following would be a very useful Google feature: +1 a URL so that it becomes much more likely to bubble to the top in future searches.

82
dpcan 2 days ago 0 replies      
Yes, and I sync them with Chrome on my phone.

The Bookmarks Bar really has the only ones I regularly use though. Wish it was 2 rows.

83
rvern 2 days ago 0 replies      
Smart bookmarks! Bookmarklets! RSS bookmarks! Awesomebar fuzzy matching! Along with bookmark keywords and bookmark syncing! Firefox's implementation of bookmarks is right next to Wikipedia and ad blockers among the crowning inventions of the World Wide Web.
84
mtrycz 2 days ago 0 replies      
I have something very simple, A folder called FFR = For Future Reference, where I'll keep the most interesting stuff. Trusting Trust (and Overcoming Trusting Trust), Windows' NSA_KEY, and the like. Most are in the folder with no further hierarchy, but some are categorized into Security, DIY, UI/UX, Gift Ideas, etc.

I also have bookmarks at the root level for things that I will Definitely See Tomorrow, which I never erase, because hey, they could be important.

Since it's the weekend, have this extremely educational video about languages https://www.destroyallsoftware.com/talks/wat

85
pasbesoin 2 days ago 0 replies      
Years (and years) ago, there was PowerMarks by Kaylon. It was great. Cross-browser, pretty good automated, over-rideable indexing -- space-separated words/symbols, very quick to maintain, with fuzzy matching. Rapid, "instantaneous", incremental search against thousands of bookmarks.

It's gone, now, and I've never seen its equivalent.

These days, I use an extension that saves a local copy of the page. As others have mentioned: Linkrot.

But it's not nearly as quick or convenient to return to a page as it was in PowerMarks. Although, the extension I use does have search -- manually triggered, and thereupon taking some time to initially build the index.

But I end up saving more "read later" stuff in it, as opposed to just reference links. So it ends up being a bit noisier, and size means I end up with multiple stores having multiple indexes.

86
joveian 2 days ago 0 replies      
I use bookmarks in two basic ways. One is that I have Firefox customized to have two rows of header and on the right half of the top row (which has tabs on the left) I have favicon only bookmarks of sites I look at frequently (like hckrnews.com) so I can open them with one click. I have eighteen such bookmarks plus a link to browser preferences. I have seven folders of bookmarks, either just with the folder icon, one or two characters of text, or a single emojii character for identification. Three of these contain links to my favorite articles (Firefox is bad at scrolling in bookmark folders :( ). One has links I occasionally want to use but not often and one is supposed to have things I want to go back and look at somewhat quickly but not quickly enough to be worth a top level link (I need to clean it out, though, I've collected too much that I am not going to go back to). I'll sometimes create temporary folders about a particular topic.

I bookmark most pages I view as unsorted bookmarks (especially helpful for news sites that have essentially no way to ever find old articles) and then ones I am more interested in I add to another folder that I occasionally divide into smaller folders (to avoid needing to scroll) and put all of these smaller folders ordered chronologically in a folder to the right of the tabs. I usually search the bookmarks first when looking for something, but I don't tag and too often neither the title nor url contain the right keywords for me to find it.

I would really love a more unified bookmark/history system along the lines of Vivaldi's calendar history, but being able to create icons that will flag the current page (to be able to look through just the more interesting history) and other icons that would cause the current page to be saved to a particular folder as a bookmark. Then at most one click would reproduce my current system other than occasional reorganization. Since I can't predict in advance most of what I want to refer to again, I want it to take as little time as possible to bookmark things. I liked the star in Firefox better when it didn't pop up the folder selection unless you clicked it twice.

87
kijin 2 days ago 0 replies      
Yes, every day, as part of a two-tier system.

I use browser bookmarks for pages I visit every day, or for pages that I intend to view again in the near future. An icon on a toolbar right on top of the browser is much easier to access than a link stored in a third-party app or website.

Of course I could just keep all those pages open in background tabs all the time, but I don't like clutter. Having too many open tabs also consumes a nontrivial amount of CPU and RAM. Bookmarks are also safer in case the browser crashes and fails to restore all the open tabs.

I use Pinboard for pages that I might view again at some time in the future, for research or some other purpose. The archive feature is very useful for this.

88
Sebatyne 2 days ago 0 replies      
I stopped using browser bookmarks in favour of a web app (the bookmark manager of officejs, https://www.officejs.com/), which integrates with any browser by updating the default search engine. With the bookmarks synchronized on a WebDAV server, after logging into the app I can access them from any browser on any device. All my searches in the browser bar then go through my bookmarks first, and it redirects me to a real search engine if no match has been found.
89
scriptkiddy 1 day ago 0 replies      
I do.

I never have to worry about them going away and I can organize them into folders any way I like. Plus, they can be exported, imported, and shared. I use Firefox, so accessing the bookmarks is as simple as using a drop-down menu. I actually use a bookmark toolbar for my most frequently visited sites. This way, when I want to go to HN, for instance, I just click a single button and I'm there.

I've looked at other bookmarking software/services, and I find that plain old browser bookmarks still fit every use case.

90
csydas 2 days ago 0 replies      
I do, and I maintain a set for our company's support team for when we hire newbies. We have a pretty standard "load-out" of commonly used pages and sites, both internal and external, that come up on support calls for the product. A newbie might not have use for every single link, but having a curated list of "this will be useful at some point, just keep in mind that it's there should you run out of ideas" really helps them get past the initial hurdle of learning the ins and outs of the product and the other elements that support it.
91
snlacks 2 days ago 0 replies      
I use bookmarks, but rarely for clicking from the bar. Chrome and Edge promote bookmarked sites in the nav bar suggestions when I'm typing. I usually use descriptive names of the content so I can find stuff I liked or go to often by typing a couple letters.
92
Steven_Bukal 2 days ago 0 replies      
I have lots of bookmarks, mostly for a few purposes:

1 - Pages I want to autocomplete so I don't have to remember and type the full address or verify that I'm on the true site for my bank and not a phishing site

2 - Content to do something about in the future. Stuff to read later, stuff to download to my local machine, etc.

3 - Resources that I want to remember exist and be able to find. For example, I've got a page saved that produces blank graphics in whatever dimensions you want, for use in stuff like web design. If I forget what it is called, I can look it up in my bookmarks pretty quickly instead of having to open Photoshop and create such graphics manually.

93
hashhar 2 days ago 0 replies      
Absolutely yes. It serves two primary purposes for me:

1. Archival. If I like something and will need to refer to it/revisit it later (more than a month, say) I will bookmark it.

2. Frequently used pages sit neatly on my bookmarks bar so that I can get to the websites I want quickly just by glancing at their favicons.

Organisation:

I primarily organize in 3 levels.

Top level: This is where frequently used stuff goes. I have configured FF to only show favicons for these so they take little space. eg. HN, GitHub, Outlook, Reddit and Bugzilla.

Second level: This is where things go for archival. I have bookmark folders at the top level that represent a category. eg. Books, Movies, Tech, Coding. Each of those can be further categorised. An example is my Tech folder is broken up into Articles, Blogs, Podcasts, Material (projects, GH repos etc.).

The void: This is the final level of organisation and is just a catch-all folder called Sort-These-Out, where all the stuff I'm too lazy to organise goes (or which isn't well defined right now, or things I'll get back to on another machine maybe (Linux vs Windows)). It currently has 13 bookmarks. Not bad.

PS: Did you know you can send tabs across Firefox instances on different machines by right clicking and hitting "Send Tab to Device"? The best thing ever.

EDIT: Forgot these two features.

1. Keyword search. Kind of like the bang query syntax from DuckDuckGo you can set up a keyword to search a single website by creating a bookmark. So I can go 'gh mycoolrepo' for searching on GitHub.

2. Tags. Firefox allows you to tag bookmarks. It helps me a lot when, for example, I want to find all bookmarks related to vim (but don't necessarily have vim in the page title). I'm working on an autotagger that integrates into Firefox to save me from having to tag them myself.

[1]: http://www.wikihow.com/Use-Firefox-Keywords (See method 2 for the easier variant.)

94
kxyvr 2 days ago 1 reply      
I have hundreds of bookmarks stored across dozens of folders based on topic. I've been burned in the past with Google changing their search algorithm and not being able to find material easily, so I just bookmark everything I want to refer to later now. To that end, I primarily use Firefox and periodically archive them using the "Import and Backup" option from the bookmarks folder. That works alright as it produces an HTML file with the entries, but I'd like something more program independent. Does anyone know a good utility for offline archiving of bookmarks in a mostly browser independent way?
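
One lightweight option, sketched below on the assumption that the export is the usual "Netscape bookmark file" HTML: a few lines of Python with the standard-library html.parser can flatten it into plain "URL, title" text, which is about as program-independent as it gets. The file name is just an example.

    # bookmarks_to_text.py -- minimal sketch: flatten a browser bookmark export
    # (the "Netscape bookmark file" HTML that Firefox/Chrome produce) into
    # plain "URL <TAB> title" lines. The file name below is an assumption.
    from html.parser import HTMLParser

    class BookmarkExport(HTMLParser):
        def __init__(self):
            super().__init__()
            self._href = None      # href of the <A> tag we are currently inside
            self._title = []       # text chunks collected for that tag
            self.entries = []      # (url, title) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._title = []

        def handle_data(self, data):
            if self._href is not None:
                self._title.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                self.entries.append((self._href, "".join(self._title).strip()))
                self._href = None

    if __name__ == "__main__":
        parser = BookmarkExport()
        with open("bookmarks.html", encoding="utf-8") as f:  # exported file
            parser.feed(f.read())
        for url, title in parser.entries:
            print(f"{url}\t{title}")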
95
Merem 2 days ago 0 replies      
Of course I do. Just checked everything and my bookmarks number just above 1000. The ones I use the most and websites I need in the immediate future are organized in the bookmarks toolbar (I'm using Firefox). Apart from that, they are put into separate folders regarding various topics, as well as a list with "random" links which I can't put anywhere else. They are useful to me in the sense that I don't need to remember all those 1000+ links, as well as it being the fastest way to access a website (it's faster than typing).
96
__jal 2 days ago 0 replies      
I do, for frequent access stuff. Work-related things, personal apps that run in various places, frequently visited sites. The trick is to keep the number low, otherwise I'll never use them because they're impossible to navigate.

For reference material, I built something sort of vaguely like pinboard.in into a home-brew app that I run for myself. It handles search, a modified form of tagging, and a timeline-like view, and I get to it with a JS bookmark (tada) that lives in-browser and sends selected text as a search.

(The app itself is a ridiculous mess, having grown as a sort of cancer in a different app I wrote for myself that now does several unrelated things. Maybe someday I'll pick that crap back apart into something releasable.)

97
Globz 2 days ago 0 replies      
Yes I still do, at this moment I have 3413 bookmarks across different folders, coding, work, recipes, Gaming, etc.

I am currently running Bookmark Checker (chrome extension) and did set the parameters to "error connect" and at this very moment it is reporting : "Bookmark check status: Total bookmarks : 2238 of 3413 error connect: 2117"

so many dead links :(

I did not know about Pinboard and I am really tempted to give it a try, so I can keep a full HTML archive without the fear of losing another 2000+ bookmarks five years from now.

98
blakesterz 2 days ago 0 replies      
The only part I use is the bookmark toolbar, which I use HEAVILY. Just counted, I have 30 in my toolbar. I never use any other bookmarks now though. I still have all my old bookmarks in backups going back to the late 90s though. Fun to look at every once in a while.
99
a3n 2 days ago 0 replies      
I do, but only for frequent things, and I'll clean that out periodically.

For longer term bookmarks I use pinboard.

I use a middle-ground for a few things: I may bookmark, say, news sites in pinboard under the "news" category. Every tag and combination of tags on pinboard has an RSS feed; I bookmark the "news" tag's RSS feed in Firefox, and everything tagged shows up.

The RSS is not for the content of the target sites, it's for what goes in and out of the news tag. So I might add another news site to my pinboard news tag and, voilà, it shows up in my Firefox RSS bookmark. Delete something from the pinboard tag and it's gone in Firefox.

100
davidp670 2 days ago 1 reply      
I stopped using Chrome bookmarks because they got too messy, but now I use Bookmark OS, which I really like. It's kind of like Mac OS X, but for bookmarks in the browser: https://bookmarkos.com
101
btb 2 days ago 0 replies      
Only the bookmarks bar at the top of the browser.

For most sites I use keyboard shortcuts + the autocomplete in Chrome, e.g. to visit Hacker News: Ctrl+L, then "news.y", and hit enter.

102
mirimir 2 days ago 0 replies      
I use bookmarks in Firefox in three ways. Sites that I use frequently go in the toolbar. Sites that I use rarely go in folders in the toolbar. Sites that I just want to remember go in "other bookmarks", and later I search for them.
103
nafizh 2 days ago 0 replies      
Surprised no one mentioned Pocket. I use the Pocket Chrome extension. Compared to the bookmark system, using it is a breeze and much cleaner. More importantly, I can also find things again later despite my poor memory.
104
madiathomas 2 days ago 0 replies      
I have seven folders of bookmarks, each for a different topic/subject. Whenever I come across a new link that I will need to refer to in future, I store it so that I can open it from the bookmark. If I am not going to need the bookmark, or am no longer interested in a certain subject, I delete the bookmark or the whole folder. Some of the bookmarks have been there since 2010 because they are for tools I still use.

I use Chrome. I like the fact that the bookmarks are synced to my Android phone and work computer. That way they are available whichever computer I want to use.

105
harijoe 1 day ago 0 replies      
I tried to address this problem some months ago with a Chrome extension. Feel free to try it and give feedback: https://chrome.google.com/webstore/detail/oh-hi-mark/fcmdkga...
106
nebyoolae 2 days ago 0 replies      
I do still use bookmarks, but only for places I go a lot, and I sync them via Chrome. Pocket is a godsend for the "cool links" that I check out when I have time and then usually archive away, never to look at again.
107
psiegmann 2 days ago 0 replies      
I'm quite happy with a set of project-specific bookmarks to get people up to speed quicker. We have web-{dev/acc/prd}, cms-{acc/prd}, jira, confluence, buildsystem, log-{dev/acc/prd}, etc.

We maintain the bookmarks in YAML and generate the HTML to import into Firefox/Chrome/IE. Script: https://github.com/psiegman/bookmark-generator
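
For illustration only (this is not the linked script), a minimal Python sketch of the same idea might look like the following; the YAML layout and file name are assumptions:

    # yaml_to_bookmarks.py -- minimal sketch of "bookmarks in YAML, HTML out".
    # Assumed YAML layout (not necessarily what the linked generator expects):
    #   Project X:
    #     web-dev: https://dev.example.com
    #     jira: https://jira.example.com/projects/X
    import yaml  # pip install pyyaml

    HEADER = ('<!DOCTYPE NETSCAPE-Bookmark-file-1>\n'
              '<TITLE>Bookmarks</TITLE>\n<H1>Bookmarks</H1>\n')

    def render(tree, depth=0):
        pad = '    ' * depth
        out = [pad + '<DL><p>']
        for name, value in tree.items():
            if isinstance(value, dict):                      # a folder
                out.append(f'{pad}  <DT><H3>{name}</H3>')
                out.append(render(value, depth + 1))
            else:                                            # a plain link
                out.append(f'{pad}  <DT><A HREF="{value}">{name}</A>')
        out.append(pad + '</DL><p>')
        return '\n'.join(out)

    if __name__ == '__main__':
        with open('bookmarks.yaml', encoding='utf-8') as f:  # assumed file name
            tree = yaml.safe_load(f)
        print(HEADER + render(tree))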

108
smnscu 2 days ago 0 replies      
I'm a diehard fan of classic bookmarks. I tried Pinboard, Pocket, and other services, but for me browser bookmarks with some form of organization work best. I like and use all of Chrome's shortcuts and nifty features for bookmarks, and even if browsers seem to be heading in a different direction (see Chrome's "smart" bookmarks), as the saying goes, they will have to pry them from my cold, dead hands.

(At the moment I have 519 bookmarks in 73 folders)

109
nsarafa 2 days ago 0 replies      
Ironically, I just cleared out my chrome bookmarks today. Found it far too difficult trying to find the correct folder hidden in a long list of old/dead folders/links. After I purged, I stumbled upon the chrome bookmarks manager browser extension that makes the process of adding a bookmark much easier as you can type to search (https://chrome.google.com/webstore/detail/bookmark-manager/g...)
110
DavideNL 1 day ago 0 replies      
Yes... and I also recently discovered Bookmacster (macOS), which locally syncs bookmarks between Safari, Firefox, Chromium, etc. (without having to upload all your stuff to a cloud). Very handy!
111
comboy 2 days ago 1 reply      
The thread is already pretty long and it looks like I'm the first one to mention https://google.com/save - works quite well.
112
savethefuture 2 days ago 2 replies      
I do, but I have them exported and uploaded to my server daily so I can keep them in sync. I don't organize them, I just search. They're all relevant links I wish to look at or read at a later date.
113
ungzd 2 days ago 0 replies      
Yes, but in a single folder (maintaining a tree structure is a pain), and I rarely access it. Del.icio.us was very convenient; it seems it still exists, but they deleted all the old data and may close again soon.
114
ronreiter 2 days ago 0 replies      
Reading list is not bookmarks. And of course we do use bookmarks, especially those who work in companies that require frequent access to several systems.
115
AldousHaxley 2 days ago 0 replies      
YES! So many things to read, and I hate having a million tabs open at once. Even if I don't get around to something until months later, bookmarks are an indispensable tool.
116
IE6 1 day ago 0 replies      
Yes but not like I used to. When I was younger and had time I would bookmark things, organize them, and use them to navigate to sites of interest. Now I simply use bookmarks as a dumping ground for 'something I need to see but later because I am tired now and not using the internet for anything serious'.
117
svartkonst 2 days ago 0 replies      
I do, semi-organized into folders, mostly for archival purposes. If I come across something, a product or library or guide or whatever, that I want to save, I bookmark it.

I don't use the bookmark tabs, and I'm not regularly using what I have in my bookmarks; they're more for safekeeping, and to remind myself about things.

Plus it's fun to take a look through the bookmarks and rediscover things.

118
dingdingdang 2 days ago 0 replies      
Yes, extensively. I have them arranged in the Firefox bookmark bar, along with drop-down folders for categories like "search", "news", "projects", etc. For everything that needs remembering in a more tertiary sense I bookmark without folders but use tags. FF's system, similar to Chrome's, can synchronize across other computers and phones while keeping the stuff encrypted in the cloud, which makes bookmarks a lot less volatile in nature than they used to be.
119
tarboreus 2 days ago 0 replies      
I just keep links in easily searchable text files. When I need something, I can just search for it. Emacs org-mode allows for nice links; you can open the page straight from the text file.
120
seltzered_ 2 days ago 0 replies      
No, I don't use browser bookmarks or bar shortcuts. For me at least, I feel like those needs have been replaced by:

- pinboard. Been using it for many years

- DuckDuckGo's !bang searches, to quickly access Pinboard bookmarks / maps / etc.

- the browser URL bar's own autocomplete

- this may also be because, until recently, I used different browsers on mobile (Firefox) vs desktop (Safari)

121
kakarot 2 days ago 0 replies      
I use a single line of favicons across my bookmarks bar and remove all the text from them. They are organized by color in a rainbow-like fashion.

It looks beautiful and works well. I just have to maintain a mental map of what general color a website's icon is and while my mouse is gravitating in that direction I'm mentally retrieving the actual icon. It's a great visual memory exercise in the beginning but eventually you wonder how you did it any other way.

122
petercooper 2 days ago 0 replies      
No, I created a simple Ruby script that stores them in a text file and lets me easily search them at the command line. Syncs through Dropbox so I have it on all my machines :)
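
The script itself isn't shown in the comment; purely as a rough sketch of the idea (a plain-text store plus command-line search over a Dropbox-synced file), something like the following Python would do, with the path and line format being assumptions:

    # bm.py -- rough sketch: append bookmarks to a Dropbox-synced text file
    # and search them back from the command line. Path and format are assumptions.
    import sys
    from pathlib import Path

    STORE = Path.home() / "Dropbox" / "bookmarks.txt"   # synced across machines

    def add(url, *words):
        with STORE.open("a", encoding="utf-8") as f:
            f.write(f"{url} {' '.join(words)}\n")        # one line per bookmark

    def search(*terms):
        for line in STORE.read_text(encoding="utf-8").splitlines():
            if all(t.lower() in line.lower() for t in terms):
                print(line)

    if __name__ == "__main__":
        # usage:  bm.py add <url> [keywords...]   |   bm.py find <terms...>
        cmd, *rest = sys.argv[1:]
        add(*rest) if cmd == "add" else search(*rest)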
123
damat 2 days ago 0 replies      
I'm not just using bookmarks; I've even pushed them to a more advanced level with my own extension for Chrome: https://chrome.google.com/webstore/detail/quick-startpage/dg...
124
vermooten 2 days ago 0 replies      
I've still got 100s of bookmarks from the late 90s, still in their original sub-folders. Most are dead now, which is a shame.
125
mehdix 2 days ago 0 replies      
Oh, yes I do use them a lot. I store my bookmarks flat, with no structure. In Chrome, I add keywords to the title upon bookmarking and later I do keyword-based searches. In fact, I developed my own Chrome extension to search bookmarks: https://goo.gl/paiU3o
126
frik 2 days ago 0 replies      
Yes and bookmark-bar enabled.

@browser developer: don't remove or hide the bookmark feature. Allow me to bookmark the same link in multiple folders. Don't nag me with your cloud sync (no thanks), but add a feature to sync to a private cloud like Owncloud/nextcloud. Don't remove advanced features, don't simplify things without fully understanding the features. RSS support in bookmark-bar is pretty useful.

127
sriku 2 days ago 0 replies      
I use bookmarks but don't rely on them, as I usually want to add some information when saving a reference. My tool of choice is Zotero [1], which I started using during my researcher days and never looked back. If you organize your references into collections, Zotero can make some nice summaries for you.

[1]: https://www.zotero.org

128
ivm 2 days ago 0 replies      
No, I run a local MoinMoin instance with database in Dropbox and arrange different topics in pages there, including links.
129
alphydan 2 days ago 0 replies      
I need to access 3 pages and 7 google drive folders almost every day for work. Those are the only browser bookmarks I have because they save me 20 - 30 clicks/day.
130
jhwhite 2 days ago 1 reply      
I do but I'm very slowly moving away from them in some instances.

If I come across articles I like, I save them to Instapaper instead of bookmarking them.

For work... I've pretty much created my own wiki of bookmarks using OneNote. My employer uses SharePoint, and some pages won't display or work correctly in Chrome, so I use IE for the work intranet. So instead of bookmarks I have a notebook, and I put tags in the notebook for easy searching and can add a good description of each site.

131
sigi45 2 days ago 1 reply      
Yes. I hide my bookmark bar on tabs and only see it on a new tab.

My bookmark bar holds my most used sites. I have a few folders for topics and for work bookmarks.

Bookmarks help me close a tab: they give me the feeling that I can still read it but don't have to do so now. Sometimes, depending on the content, I put it in Pocket instead of using a bookmark for it.

132
Grue3 2 days ago 0 replies      
Yes, I use the star in Firefox URL bar (yeah, I know, they moved it recently for some reason) to mark the websites I'd want to revisit and add a bunch of tags to them. Then, if I forget about something, I can always search by tag. These are filed as "Unsorted bookmarks". I pretty much never use Bookmarks menu, because searching by tag is more efficient.
133
Moto7451 2 days ago 0 replies      
Yup. I use Safari on my Mac and everything syncs nicely between my devices care of iCloud. I use folders within the bookmark bar to organize things.
134
jakub_g 2 days ago 0 replies      
I use bookmarks at work, mostly as a big jar of deeplinks to wiki pages, dashboards etc etc - I do not organize them nicely into subfolders, just rely on my memory on how I named them and parts of URL. The more often I use the page, the shorter the keyword. I use CTRL-L and bookmark name to open pages all the time.
135
ramigb 2 days ago 0 replies      
Yep, I also built a chrome extension to manage bookmarks ...

https://chrome.google.com/webstore/detail/bookmarks%2B%2B/li...

136
astrikos 2 days ago 0 replies      
Right now I use pocket, but I want to try stash!

I will definitely write a short review, but I need 10 people to view the link to help me access it first: https://stash.ai/landing?source=f520deef

137
etiam 2 days ago 0 replies      
Yes. In a "folder" hierarchy in the built-in Firefox bookmarks manager.I often wish for a better interface to move around in the tree though (e.g. filter for a bookmark or folder and see what's stored close to it) and some sort of aliases for multiple classifications would be handy sometimes.
138
Jayakumark 1 day ago 0 replies      
I have more than 150,000 links in Pinboard. I bookmark every new site that I like when I come across it. I wanted to start something similar to Product Hunt from those, but never got to it. Maybe someday I'll make it into something like a Yahoo directory, but that day never seems to come.
139
Huhty 2 days ago 0 replies      
Yes, I have Chrome synced between all devices and PCs.
140
nicky0 2 days ago 0 replies      
I use bookmarks for mundane services I use semi-regularly: online banking, government services, electric, gas, water company, insurance company and so on.

Also admin stuff like webhost control panel, bugtracker, iTunes Connect etc.

Arranged in favourites bar in folders by category.

Saved articles go in pinboard.in however.

141
markatkinson 2 days ago 0 replies      
Yeah, it turns out my bookmarks are a graveyard for things I'll never read. The Android HN app I use lets me mark articles to read later, and most of the time it works offline, so I tend to use that more. When I find myself on a plane with no reception I dip into my list of offline HN articles.
142
jiiam 2 days ago 0 replies      
Yep. When I'm doing a somewhat specialized research I bookmark interesting results and add a tag for future reference. Usually the time after which they are forgotten is ~1 week, because they either served their purpose or became irrelevant, but sometimes I still use some of them.
143
steverandy 2 days ago 0 replies      
I use a browser called Colibri (https://colibri.opqr.co/).

It has something called Links, where all URLs that you added are sorted by date. You can save a URL quickly with keyboard shortcut (CMD+D).

I also organize the links that I frequently visit by topics in the Lists section.

144
LocalMan 1 day ago 0 replies      
I rely on Chrome and Firefox Bookmarks. But I have too many (thousands) and find that Xmarks doesn't help all that much.
145
DanBC 2 days ago 0 replies      
Yes.

I make sure I use a descriptive sentence when I save them.

They're useful to me because the people creating the pages don't know about SEO and Google fucking sucks at giving me the pages I need unless I use weird contorted search phrases or remember the exact name of the document.

I have 12 icons in my bookmark toolbar that I use daily. I have a few that I don't use very often.

146
wazoox 2 days ago 0 replies      
I use the same set of bookmarks in Firefox, migrating and evolving since 1997 and Netscape 1.0 on IRIX. Some are surprisingly durable. In any case, even with URL rot they are useful as reminders of pages I want to keep as references.
147
justaaron 2 days ago 0 replies      
of course I "still" use browser bookmarks. Bookmarks, back/forward buttons, it's amazing but you don't actually need to kluge more poop on top of browser behavior to make it usable! Believe it or not, they made it right the first time.
148
astrostl 21 hours ago 0 replies      
Yes, but only for regularly-visited things. The rest is on Pinboard.
149
wtbob 2 days ago 2 replies      
Yes, I use them. I prefer them to any online service because they are completely under my own control. I do wish that I could securely sync them, but ever since Firefox completely broke the security of their Sync system, there's nothing I can rely on to safely sync for me. It's not a huge deal
150
alkonaut 2 days ago 0 replies      
No. Autocomplete in the URL field only.

I never save anything for later, I either read it or forget it. I only regularly visit a few dozen sites, so usually the site is completed in the URL field after 1 character (such as "n" to load HN).

151
scarface74 2 days ago 0 replies      
Yes. But, except for work related URLs, I rarely go back and use them.

If it is an interesting web site with good articles, I subscribe to the RSS feed.

My bookmarks stay synced between my iPhone and Chrome on Windows using Apple's iCloud Chrome plug in. It stays synced between Chrome on different computers using my Chrome account.

152
jesus92gz-spain 2 days ago 0 replies      
I do. I have my Chrome browsers in sync, categorised in folders. I also have "Read later" bookmarks, as I sometimes find interesting websites or news I cannot read in full because I'm busy or for whatever other reason.
153
goodJobWalrus 2 days ago 1 reply      
I do, but I consciously keep only a small number of them (ideally not much more than 100) and regularly purge.
154
sametmax 2 days ago 0 replies      
Yes. Stuff to read later, stuff I want to share, resources I might come back to, quick grouped access to tools I use regularly (but not frequently), links related to each of my dev missions, etc.
155
tjbiddle 2 days ago 0 replies      
Not really - L to get to the address bar, and then autocomplete handles the rest as I start typing for 99% of use-cases. However I know I used bookmarks semi-recently when I was working on a project where I was regularly using websites that I don't normally use.
156
rdiddly 2 days ago 0 replies      
I use bookmarks, I just don't keep them in the browser anymore. I have individual ones scattered throughout my filesystem tree by topic or function, mixed in with documents and whatever other files. Much better having just one hierarchy to search through for stuff.
157
jasonkostempski 2 days ago 0 replies      
I used to use and painfully maintain them for reference materials, but they proved to be less useful than just re-Googling, so I stopped. Re-Googling isn't great either; I'd like an easy-to-use PKB, but I wouldn't want it to be built directly into my browser.
158
trojanh 2 days ago 0 replies      
Since there is no unified platform in today's age, bookmarks don't make sense to me. I use Pocket as an alternative, which handles bookmarking smartly: it stores the webpages offline on my mobile, so it's very handy.
159
ehnto 2 days ago 0 replies      
I use bookmarklets to perform tasks on sites to make them more readable. Actual bookmarked websites is less common but I have a few. Normally it is for short term "I will forget this otherwise" sites that get removed when I no longer need them.
160
squiggy22 2 days ago 0 replies      
I wish Google would create a separate index of all the stuff I bookmark, and provide it as a subset of the Google search experience. I too find myself Google searching for info I've previously browsed.
161
nol13 2 days ago 0 replies      
Very very rarely, but have a few.

Mostly just browser history, or I'll DDG it again as a fall-back. It doesn't work as well in Chrome (or I'm doing it wrong), but the FF awesome bar seems to pull up the links I need within a few keystrokes the majority of the time.

162
nhumrich 2 days ago 0 replies      
I love Firefox's keywords for bookmarks. I can type `gh` and be taken to GitHub, or `dh` for dockerhub, etc. Chrome can only do this for searching, not generic bookmarks. It basically is like a shell alias for all my favorite websites.
163
pjc50 2 days ago 0 replies      
Yes, in small quantities and not synced. They're for sites I visit regularly, or essential intranet pages at work.

Stuff I want archived for reference or I want to read later goes to Pinboard.in.

164
candeira 2 days ago 0 replies      
Yes, but very few of them:

Bookmarks bar: bookmarklets for pinboard, ffound, whatfont, etc. Plus bookmarks for Toggl and certain other work-related services.

Bookmarks proper: one folder per client, with links to documentation, issue tracker, etc.

165
c_r_w 2 days ago 0 replies      
Chrome, synced. 99.99% of my bookmark clicks go to the Bookmarks Bar.

Mostly I save bookmarks just to be able to close a tab, doubtful I will ever look at them again; those are mostly for tech research.

I also use "open tabs on other devices" extensively.

166
smdz 2 days ago 0 replies      
I use it, but not in its original way.

I would bookmark a link in Chrome just because it automatically shows up (in type ahead) when I search for similar keywords in the address bar. I have too many bookmarks to categorize and remember.

167
ertucetin 2 days ago 0 replies      
I also use Diigo; it's a very cool and intuitive tool, so I highly recommend it: https://www.diigo.com
168
Kiro 2 days ago 0 replies      
No, I just save links as plain text in my Evernote. This means I can add comments and other meta data very easily and I have everything stored in one place without having to rely on the browser.
169
nottorp 2 days ago 0 replies      
Of course I use bookmarks. Not for sites I visit regularly, the browser takes care of that, but for reference articles I'll need later. I just use per-subject folders, nothing fancy.
170
grafoo 2 days ago 1 reply      
The thing that always bugs me is how to use bookmarks when working with multiple browsers. The various bookmark service platforms never fully scratched the one itch I was feeling: simply save a bookmark and let me add some tags to it.

Right now the only browser-based bookmark I have is a bookmarklet that takes me to my own bookmark store (see https://github.com/grafoo/webdmp if you're interested).

171
robertlf 2 days ago 1 reply      
I've always lamented the fact that the major browsers don't make it easy to see how old your bookmarks are, or provide a way to highlight and delete ones that you haven't clicked on in a while.
172
roystonvassey 2 days ago 0 replies      
Since I find most of the useful content I read either on HN or Reddit, I tend to use in-built mechanisms such as the like/upvote/save options to bookmark things I like.
173
josho 2 days ago 0 replies      
I used Stache to save a copy of the site and a thumbnail. It was a pretty nifty app, but it is pretty much end-of-life due to insufficient sales.

There is an opportunity to do something better than bookmarks, but not likely as a business.

174
midhunsezhi 1 day ago 0 replies      
I use them very rarely. Pocket has become my preferred source for storing, managing and sharing my links now.
175
spectistcles 2 days ago 1 reply      
I use them in Chrome all the time; I have thousands. I basically search them as a kind of personal Google... for those "Oh, I remember reading an article about that once, let me find it" moments.
176
ge96 2 days ago 0 replies      
Yeah, only because I've been lazy and haven't finished the Chrome extension I've been working on off and on to deal partially with this. I research random crap and like to store that information on one of my servers. I've got the basic read/write down. I'm having a problem with the stupid window disappearing when it's not focused; this is intended behavior and not something I'm going to get around, so I have to work with a background process/page and cookies (I have yet to use cookies).
177
PixZxZxA 2 days ago 0 replies      
I bookmark things that I visit frequently (HN, Todoist etc) and save things to Pinboard that I want to read later or save for other reasons.
178
wakkaflokka 2 days ago 0 replies      
On this topic, does anybody have a good recommendation for a real-time Google Chrome-to-Pinboard bookmark sync service/extension?
179
vortico 2 days ago 0 replies      
Yes, and in vimperator they're really easy to use. Press "A" to bookmark, "A" again to remove, and "t" (tabopen) to search for a page in your bookmarks.
180
pensatoio 2 days ago 0 replies      
Of course. I believe just about any technically capable person uses bookmarks. I've never met a programmer who didn't and such is the primary audience of this site.
181
kome 2 days ago 0 replies      
I use Pinboard (for free) to manage more than 4000 bookmarks. And I use it often, it's my personal search engine. It's great. But it can be improved a lot.
182
rurban 2 days ago 0 replies      
Sure. Chrome syncs them and does autocompletion. Some shortcuts are also used as icons on the bar, but autocompletion is the most important feature.
183
wsc981 2 days ago 0 replies      
I use bookmarks. Mainly to keep autocomplete of important URLs intact after clearing browser history.

And also to keep track of important endpoints when I work for a new client (I am freelancer).

184
gcr 2 days ago 0 replies      
Sort of.

I use Safari's reading list extensively.

I also keep snippets of things I want to keep inside my emacs org-mode folder so it's instantly accessible.

185
tluyben2 2 days ago 0 replies      
I use them a lot; there are a lot of obscure searches I do whose results I bookmark, with keywords that let me find them in one go instead of doing the search mambo in Google again.
186
neurobot 2 days ago 0 replies      
Still use bookmarks. I create folders inside folders (folderception).

Also, I use MozBackup to back up my profile (all of it, including configuration, bookmarks, history, etc).

I use Firefox as my primary browser.

187
vkorsunov 1 day ago 0 replies      
We created Bubblehunt (https://bubblehunt.com). It is a search platform where you can create your own search system for bookmarks, links and any other resources.

The service automatically indexes pages, returns relevant results from your information space, and removes duplicates and inactive URLs.

This is an alpha version, and it would be awesome if you gave feedback and ideas on what we need to improve.

188
continuational 2 days ago 0 replies      
I use them to make sure I can find the site again via Chrome's autocompletion. I don't organize them and I never open the bookmarks view.
189
Veratyr 2 days ago 0 replies      
I use mine as a queue for things I intend to look at later.

What I really wish for is a way to save all the important aspects of a page for future viewing and organise it in a particular way.

190
fariz_ 2 days ago 0 replies      
191
windlessstorm 2 days ago 0 replies      
I email myself the interesting and important links with an added note. Gmail has a powerful search option for going through any link I am looking for; no problems so far.
192
tetraodonpuffer 2 days ago 0 replies      
Only the toolbar, for quick access to the sites I use the most; those are kept as just the site icon so I can fit many. For everything else I want to keep, I use Pinboard.
193
nickbauman 2 days ago 0 replies      
I use them for workflow markers. Things I do everyday, like review pull requests, check specifications, access dashboards.
194
asdfasdf45 2 days ago 0 replies      
Evernote web clipper (for Chrome)!

It's bookmarks on steroids, saved for offline, taggable (no assumption of organizing data in a tree), and synced.

Probably the only useful Evernote feature.

195
Procrastes 2 days ago 0 replies      
I do. I have several top-level folders (Daily, Reference, Demo and Personal), then a few links on the bookmark bar for Production, Staging and Tickets.
196
daledavies 2 days ago 0 replies      
Yes, excellent for research and saving stuff for later. I do tend to purge after a year or so though because bit rot usually sets in.
197
steel88 2 days ago 1 reply      
Sure, I use Papaly for everything, great service.
198
8note 2 days ago 0 replies      
I use them for work to keep track of all the different systems' UIs I need to use, but otherwise no: the address/search bar does better.
199
taranw85 2 days ago 0 replies      
I use a website called Mochimarks. It lets you set reminder dates on bookmarks. I mostly use that to check up on threads, products, or blogs.
200
swrobel 2 days ago 0 replies      
Not for what seems like an eternity. Autocomplete from my history has replaced them for me. Actually, I do on mobile, just on quickstarter screens.
201
weitzj 2 days ago 0 replies      
Yes. I use the bookmarks favorites bar, create a folder per project and synchronize across all devices via xmarks.
202
taklya 2 days ago 0 replies      
I do use bookmarks but now I use Refind which allows me to store the bookmarks with tags and socially.
203
pacomerh 2 days ago 0 replies      
Sure, they sync through devices, many levels of folder nesting, easy access!, drag drop, pretty raw, basic & handy.
204
bgrohman 2 days ago 0 replies      
Yes. I use multiple browsers, too, so I built a bookmark manager web app for personal use with grouping, tagging, and search.
205
meddlepal 2 days ago 0 replies      
Not really, no. Even the stuff I do bookmark is more of an "I might need this six months from now" kind of thing.
206
Safety1stClyde 2 days ago 0 replies      
I have a web server on my home computer, so I make a "bookmarks" page on there which I can use to visit web sites I want to go to.
207
th3reverend 2 days ago 0 replies      
I bookmark for:

1. Work; internal websites can't be found on Google and I can never remember them.

2. To clean up open tabs related to a task that I have to postpone; I bookmark them en masse, come back to them later, and discard when done.

3. A dozen or so websites I visit daily; I right-click the folder of bookmarks and open them all at once.

208
hsivonen 2 days ago 0 replies      
I have some. I don't organize them. I just use them to keep rare things from falling off the Awesomebar search in Firefox.
209
qerim 2 days ago 0 replies      
I used to manage my bookmarks in Chrome, however after some 'Sync' incident I lost a few of them.

I now use Papaly (http://papaly.com). It is really well made. I have my bookmarks on different boards, and you can share your bookmark boards with the community if you wish.

210
weslly 2 days ago 0 replies      
More than I would like to.

I have a pinboard account but always end up just dragging links to the bookmarks toolbar.

211
tehabe 2 days ago 0 replies      
I bookmark a lot of sites, but I rarely go back and use them, or at least it feels that way.
212
digitalpacman 1 day ago 0 replies      
Uh yeah. Bookmark bar is the best thing ever.
213
Avshalom 2 days ago 0 replies      
I have thousands, maybe tens of thousands. The library window never closes. I really don't organize them.
214
pcr0 2 days ago 0 replies      
I stopped using them in favor of Pocket.
215
kkanojia 2 days ago 0 replies      
I use bookmarks for my regular links, and Pocket for one-time links I want to go back to and read later.
216
paullth 2 days ago 0 replies      
Yeah 1000s of them, all organised into hierarchical subject based folders. Very useful to me
217
tobeportable 2 days ago 0 replies      
Not in the browser, just markdown files structured like those GitHub awesome-* repos.
218
pavanky 2 days ago 0 replies      
I have my frequently used websites in my bookmark bar. There are about 20 of them. That is about it.
219
seajones 2 days ago 0 replies      
Simply put, a bit. Not much, I can find most stuff again just by searching
220
faragon 2 days ago 1 reply      
Only for short-term. For things over a month of age: key words + web search is faster, at least for me.
221
butz 2 days ago 0 replies      
Yes, who's asking? Is one of mainstream browsers planning to ditch bookmarks?
222
make3 2 days ago 0 replies      
I use Pocket instead... it's amazing with an e-reader like the Kobo.
223
lohengramm 1 day ago 0 replies      
I constantly use the bookmarks bar (Firefox).
224
ecesena 2 days ago 0 replies      
I only have 3 or 4, HN is one of them, and I use them mostly on my iPhone/Mac (with the active bar): when I open a new tab I can open those sites with a single tap, pretty convenient. Besides this, no, I've never organized them.
225
hrez 2 days ago 0 replies      
Yes and xmarks.com plugin for cross-browser sync and backup/history.
226
zitterbewegung 2 days ago 0 replies      
At work, yeah. Everywhere else I memorize URLs, or use search / keep a tab open.
227
xylon 2 days ago 0 replies      
Of course I use bookmarks. How else could I remember websites?
228
tscs37 2 days ago 0 replies      
Shaarli + Wallabag. So no.

I usually try to tag my bookmarks but it rarely happens.

229
j_s 2 days ago 0 replies      
Personally I use the QupZilla browser because private browsing automatically starts separate sessions per-process. Before I throw them all away I collect all the urls in a big text file using Windows UI Automation... it's messy but just barely better than nothing.

Never thought about the following (search vs. bookmarks/history) until the HN discussion last week, though I have always typed in google.com before searching just because browser search money seems like a bad incentive:

"There is a reason for that: as a rule, browsers don't really want you to use history. They want you to search and find things multiple times because search royalties are part of their business model."

A couple of full-text-of-every-page-visited Chrome add-ons, roughly equivalent to a single-computer version of https://pinboard.in/'s $25/yr hosted "archiving and full-text bookmark search" subscription (unfortunately for me, I don't like Google/Chrome/anti-privacy enough to use it as my main browser):

https://github.com/lengstrom/falcon "Chrome extension for full text history search"

http://fetching.io/ "your own personal Google -- a search engine for all the web pages you've seen"

https://worldbrain.io/ "Full-Text Search the Pages you Visited and Bookmarked"

https://addons.mozilla.org/en-US/firefox/addon/recoll-indexe... "copies the web pages you visit to the Recoll web indexing queue"

Source: Vivaldi browser v1.8 released, with calendar-style browsing history | https://news.ycombinator.com/item?id=13984122 (last week)

Also mentioned: Tree Style Tabs Firefox add-on "shows my tabs in the context I opened them from" | https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...

GraphiTabs Chrome add-on | https://chrome.google.com/webstore/detail/graphitabs/dcfclem...

Edit: Added intro w/ my own anecdata.

Another idea: custom browsers per-site-you-use, per HN user megous: https://news.ycombinator.com/item?id=13226170

For each use case that is not a free browsing I create an electron app, that never executes any code from the web or uses any external style

230
skdotdan 2 days ago 0 replies      
I bookmark webpages all the time but then never find them again.
231
senorjazz 2 days ago 0 replies      
I bookmark everything but go back and read nothing :(
232
bootload 2 days ago 0 replies      
Yes, HN itself. I don't bother organising them, search is provided. The articles posted by myself and others are as good as it gets. Moderated/insightful comments are a bonus.
233
bhauer 2 days ago 0 replies      
All the time, using folders in the bookmarks bar as drop-down menus.
234
sidcool 2 days ago 0 replies      
Yes, Chrome syncs my bookmarks across devices. Pretty nifty.
235
KevanM 2 days ago 0 replies      
yes, I have a limited set organised into what I'm doing at work.

The only personal ones I have are news websites and a lunchtime reading folder.

236
iamacynic 2 days ago 0 replies      
yes. the 50 links i have to use over and over every day managing a business are all on the bookmarks bar.

for example: i have a bookmark that shows me every invoice issued in the past 30 days.

237
philippz 2 days ago 0 replies      
Definitely. But more often i use Pocket instead
238
known 2 days ago 0 replies      
I always keep textpad open and copy all interesting urls
239
scelerat 2 days ago 1 reply      
I miss delicio.us.
240
nullsynapse 2 days ago 0 replies      
Yes, but I use Alfred and Chrome Bookmarks to search them.
241
smrtinsert 2 days ago 0 replies      
yes. synced to accounts, for reference material that required complex searches to arrive at - or material I only browse seldomly, such as fitness plans.
242
vasili111 2 days ago 0 replies      
I use Chrome and miss opera 12 bookmarks.
243
blizkreeg 1 day ago 0 replies      
Pocket is how I bookmark now.
244
kevinwang 2 days ago 0 replies      
yes, i use them extensively. They're a godsend for the homepages of all my college classes.
245
flurdy 2 days ago 0 replies      
No. Not for many years
246
pmkary 2 days ago 0 replies      
nope, but I use things like top-sites and Opera's startpage
247
jdiscar 2 days ago 0 replies      
I thought about this a lot... so this'll be long. I thought of how bookmarks were used and came up with:

- Things you want easy access to, but have annoying URLs, like your company's wiki page (Solved by Favorites/Bookmarks Bar or Dashboard)

- Things you want to finish looking at later (Solved by Read Later / Reminder)

- Things you want to keep track of, like blogs (Solved by Read Later / Reminder)

- Things you want to be able to find later (Solved by Full Text Search and Tags)

- Something you might want to see again, but not anytime soon (Solved by Personal Archive)

- Something you simply liked or are favoriting (Solved by Personal Archive)

- Note taking / Research (Solved by Tags and Boards)

- Idea inspiration (Solved by Tags and Boards)

- Things you want to show other people (Solved by Social)

- Things you want to get for yourself (Solved by Wishlist)

- Things you want other people to get for you (Solved by Wishlist)

My main problem with using bookmarks was that I rarely went back to them. Normal bookmarks are essentially a personal archive and google search usually finds things much better.

I realized there were a lot of bookmarks I'd like to go back to, I'd just forget about them. Maybe I'd like to read something when I got home from work, or maybe I wanted to check back in a week for an update (or release date), or I wanted to keep a list of items to show someone later (usually funny videos or gifs.) It was pretty difficult to do that no matter how I organized my folders or tagged things.

I eventually built my own bookmark site (https://www.mochimarks.com/landing) with all the features I wanted. The main features (apart from the expected tagging/full text search/browser integration/notes/etc...) were settable/automatic reminders, wishlists, and recommendations. Wishlists let you rank bookmarks. Recommendations could be new stuff from friends or the app could recommend that you look at stuff you liked that you hadn't visited in a while.

After having my app for a while, I've found I use bookmarks a lot more. I mostly use reminders and have a few things pop up to check each day. Reminders are killer for me. But when I'm bored I like to sort my wishlists. I don't use tags much... I really only use #Programming, #Interesting (usually really good articles), #Funny, #Music, #Blog, and #ArtBlog. I'll use the recommendation features to check on my blogs and to share links with my friends. I use Read Later a lot, but rarely actually go back and read things later. But when I do, I'm really glad the feature is there.

248
exabrial 2 days ago 0 replies      
Yes. Mainly the toolbar
249
shurcooL 1 day ago 0 replies      
It's great timing for this question for me. I've recently made a change in how I use bookmarks, and I've become very curious about how other people deal with them.

Some history. I've used bookmarks like anyone else since before IE6 days. When Chrome 1.0 came out, I've switched to it and been using it as my primary browser since. When Chrome added ability to sync (bookmarks and other things), I've started using that.

So for the last 5+ years, I've had all my bookmarks synced between my main computers and mobile devices.

There were 3 stages of how I used bookmarks.

First stage was me trying to organize things into folders, based on content. It seemed to make sense, but didn't really scale well. I ended up not liking my bookmarks after a while because I never actually used existing ones, only added new ones.

The problem with organizing by folders is that they're exclusive. If I run into a new blog I want to bookmark, it would normally go under Blogs. But if it's game related, I have a Game Dev folder that has Blogs inside that.

I feel like labels would work better, since then you can just apply multiple labels to bookmarks and be able to find them more logically.

Eventually, I gave up on that, but realized that I mostly cared about bookmarking things "just in case" and so that they'd show up in Chrome's omnibar when I type or search for things.

So I changed my "add a bookmark" strategy to a simpler one. I created a top-level folder called Stream (inspired by Photo Stream from Apple devices), and it would be just a single place to dump all bookmarks, based on time. Latest ones always end up on the bottom. No trying to organize by content, because organizing by "when this bookmark was added" was actually more meaningful and helpful, but primarily easier.

That worked for a while, but even so, over the last few years I realized I didn't like my bookmark situation. I had hundreds of bookmarks from last few years, and I had forgotten about most of them. It felt like baggage, mental overhead.

So, just a few weeks ago, I set a goal to go through all my bookmarks and delete them. For any bookmark I couldn't delete, I added it to a text file and just organized that in a free-form way.

I ended up removing 90% of useless bookmarks. They were either 404, no longer useful or relevant, out of date, or easily findable via Google when I need to look that topic up.

The 10% remaining were high quality things that I actually cared enough to want to keep in a .txt file for now.

So, I went from http://instantshare.virtivia.com:27080/12tdxyh7suc7h.html from last few years, to just http://instantshare.virtivia.com:27080/1f2drzhc3w3hk.txt.

Feeling good about that so far. I'll put the .txt file with my other .txt files for now, and see if there's anything more I wanna do with it later. But for now, it works well enough, and I'm feeling a huge sense of relief from no longer having those bookmarks in my browser.

As a bonus, I now feel better about being able to switch browser I use, and not have to worry about importing/exporting bookmarks. I just don't want to have my bookmarks tied so tightly with the browser I use, it makes sense to keep them externally.

I really like the observation someone here made about bookmarks usually being used as "TODO" items. Articles to read, interesting blog posts to consider going through, etc. I think that really makes sense why it feels bad to have so many unused bookmarks accumulating.

250
draw_down 2 days ago 0 replies      
Yes, of course!
251
nunez 2 days ago 0 replies      
no. haven't in years.
252
geggam 2 days ago 0 replies      
delicio.us / delicious.com

back in my day...

253
aorth 2 days ago 0 replies      
No.
254
psyc 2 days ago 0 replies      
Never did.
255
cabalamat 2 days ago 0 replies      
Yes
256
_pdp_ 2 days ago 0 replies      
Nope
257
jelder 2 days ago 1 reply      
No, and I judge pretty harshly anyone who does. A few shortcuts/bookmarklets on the bookmark bar is acceptable.
16
A quick look at the Ikea Trådfri lighting platform mjg59.dreamwidth.org
390 points by dankohn1  1 day ago   132 comments top 22
1
bsamuels 1 day ago 5 replies      
I don't get everyone's gripe about the lack of HTTPS as long as there's firmware signing.

HTTPS as a protocol ages extremely fast, trust anchors always change, and there's no guarantee that today's state of the art won't be completely incompatible with TLS implementations in 5 years.

For HTTPS to work properly on an embedded device, it needs to have an up to date OpenSSL library and updated certificate anchors. These are always packed as part of the firmware image itself, so any update to certificate anchors or OpenSSL would require an entire new image to be deployed.

This isn't a problem for websites because updating is usually as simple as apt-get upgrade, but this is a massive problem for embedded devices because publishing a new firmware image usually means pumping a few hundred hours of QA time into the image, then back and forthing with your manufacturer to get them to use the new image on newly minted units.

This isn't even considering the changes in OpenSSL over time. Many older routers simply cannot use HTTPS for updates because newer versions of OpenSSL simply won't fit on the flash.

Then there's the question of how end-of-life will be handled. People often use products long after they've gone EOL. You have to ask yourself what happens when someone plugs in a lighting unit that has an old firmware version on it, and the unit can't communicate with the update server because the product was EOL'd 5 years ago. There won't be a transition firmware for such an old product, and the people who know how to roll the firmware images probably don't even work there any more. That user is now SOL unless you have a method for manually updating firmware.

2
matt_wulfeck 1 day ago 1 reply      
> It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on two single UDP ports.

This is excellent. I can't even say the same thing about my AT&T fiber gateway. It listens on two random ports with no way to turn it off (and also you can't use your gigabit internet without the AT&T gateway in front). I don't know what it is, but I'm sure it's probably insecure.

3
okket 1 day ago 4 replies      
> That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal.

Why? What security benefit do you gain by using HTTPS when you already check the signature/hash of the firmware file?

4
jjuhl 21 hours ago 0 replies      
Using 'pool.ntp.org' is not cool. Ikea should get a Vendor Zone - http://www.pool.ntp.org/en/vendors.html#vendor-zone
5
floatboth 1 day ago 0 replies      
CoAP server on LAN? This is excellent. This is exactly how I set up my DIY ESP8266 "smart" devices. More LAN of Things please, not "Internet".
6
robert_foss 1 day ago 2 replies      
Thanks Matthew.

It's nice to see that some serious vendors actually do things mostly right.

7
fnord123 1 day ago 0 replies      
Also of interest is this teardown of the Koppla USB power supply:

https://www.youtube.com/watch?v=uRe9w5PKmsE

It's also a pretty darn good piece of kit.

8
patrickmn 1 day ago 2 replies      
> The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent.

For what it's worth, hacking is a big part of Swedish culture.

9
wingerlang 1 day ago 6 replies      
Trådfri literally translated is "threadless" but it can probably be interpreted as cordless as well.

I've never heard anyone say "trådfri" before. The normal word would be "sladdlös" at least where I'm from.

10
Harley78 5 hours ago 0 replies      
There is a lot of development information about communicating with the Ikea Trådfri Gateway here:

https://github.com/bwssytems/ha-bridge/issues/570

Developers there are trying to reverse engineer it for open source home automation software.

11
tostitos1979 1 day ago 1 reply      
I watched the videos and this lighting system seems very nice. Why does the gateway need an internet connection though? For firmware updates and supporting the mobile app? Since it says local only, I assume the mobile device has to be on the same LAN?

If so, I guess they are saying that the API between the app and the gateway is currently closed (according to the IKEA website) but they are working to change that. So what is speaking CoAP? The gateway?

12
Matthias247 1 day ago 0 replies      
Interesting to see CoAP deployed to an embedded device. I already wondered if it would also be some never-widely-deployed standard. Also interesting that they use it on the gateway and not on the (more constrained) lightbulbs. I guess the always-on gateway would also have been powerful enough to run HTTP[S], which would have made 3rd party integration easier.
13
chvid 1 day ago 0 replies      
Side question: Is there a way to make trådfri control an arbitrary 220 V device?
14
vanviegen 1 day ago 0 replies      
Okay, so it's basically Philips Hue, but without the API (for now), and with a lot less to offer in terms of hardware variety. In particular: color bulbs?

Prices seem to be only slightly lower than comparable Philips products.

15
afashglaksnhb 1 day ago 2 replies      
Is this a closed platform? Or can one integrate with one's own/third party solutions?
16
zAy0LfpBZLC8mAC 1 day ago 1 reply      
> It's also local only, with no cloud support.

Is that actually true? Or is this just confusing "non-local" with "cloud"?

If it's speaking IP, how would it even distinguish "local" packet from "non-local" packets? What prevents you from talking to your device at home using IP connectivity elsewhere on the planet?

17
oflannabhra 1 day ago 0 replies      
The EFR32 chips they are using are Thread-capable. It will be interesting to see if IKEA migrates to Thread as the network layer and dotdot as the application layer. Their onboarding method, CoAP + DTLS, sure seems to indicate that would be possible.
18
redsummer 1 day ago 1 reply      
Will the lights work with Home Assistant on pi - home-assistant.io - without the gateway? With perhaps a zigbee hat or USB dongle?
19
api 1 day ago 0 replies      
Local only with no cloud support. Hallelujah.
20
PhasmaFelis 1 day ago 2 replies      
I was surprised too, but I guess a furniture company might not have the same pressure to "move fast and break things" (scare quotes intended) as established tech companies. They don't have a culture of rushing products to market as fast as possible.
21
microcolonel 1 day ago 2 replies      
Might get me a few of these and write some client libraries. I think it would be swell to hook it up to ambient light sensors to set the lights exactly when it's time to replace sunlight in a given room.

And it looks like they've separated the concerns somewhat properly, so the lightbulbs can be somewhat separate from firmware updates and the suchlike. Big improvement over some folks....[0]

[0]: https://twitter.com/internetofshit/status/849667478385037317

22
longhust9x 1 day ago 0 replies      
Helo
17
A girl was found living among monkeys in an Indian forest washingtonpost.com
360 points by mrb  2 days ago   157 comments top 23
1
sandworm101 2 days ago 9 replies      
The story is too good. The girl, the monkeys defending her, the policeman ... all Disney-level stuff but where are the non-disney facts? A real story always has dark sides. This one is too perfect. I'm not saying that it is all fake, rather that I don't think we are getting the entire story. I wouldn't be surprised if we eventually learn that this girl was only living with the monkeys for a very short while, that her issues are more long-standing. Perhaps the truth is that she was a disabled girl found amongst monkeys and the story has been elaborated from those simple facts.

>>> "She behaves like an ape and screams loudly if doctors try to reach out to her."

Like an ape or like a monkey? She was raised by monkeys but acts like an ape? A lay person perhaps wouldn't know the difference but by now someone with knowledge would be on site. I have been around several disabled children. The screaming and fear of being looked at or touched is not uncommon. No mention of how she reacts to being clothed? I'm no expert on feral children but I would expect that after eight years of being naked one would not be happy about clothing and that would deserve some mention ... unless of course clothing is nothing new to her.

I want to see her feet, specifically her toes. If she really hasn't ever worn shoes then her toes will show it.

http://www.drgangemi.com/kids-health/childs-shoe/

2
kumarm 2 days ago 4 replies      
Hope her integration to society is handled carefully. So far she has been treated like an animal in zoo by humans too (Check the photos of groups of people looking at her):

http://timesofindia.indiatimes.com/india/eight-year-old-girl...

http://www.dailymail.co.uk/news/article-4386060/Mowgli-girl-...

3
pmoriarty 2 days ago 2 replies      
This reminds me of the story of Kaspar Hauser[1] (which was made in to a movie by Werner Herzog[2]) and of the fascinating book Seeing Voices by Oliver Sacks.[3]

In his book, Sacks investigates various cases of children growing up without language, how they cope (or don't cope) with it, how they finally acquire language (if they do), and how differently they see the world in both the pre-linguistic and post-linguistic states. Hauser was one of the most famous cases of this sort, Helen Keller[4] was another.

Reading this book inspired me to learn sign language, which I expected to be radically different from spoken and written language, and more powerful in many ways, as you can physically describe things in ways that has little parallel to spoken and written languages.

[1] - https://en.wikipedia.org/wiki/Kaspar_hauser

[2] - https://en.wikipedia.org/wiki/The_Enigma_of_Kaspar_Hauser

[3] - https://www.amazon.com/Seeing-Voices-Oliver-Sacks/dp/0375704...

[4] - https://en.wikipedia.org/wiki/Helen_keller

4
jacquesm 2 days ago 2 replies      
The monkeys seem to have been doing a better job at parenting than the people here. Note how the text below one of the pictures says she's frightened of people and the picture right above it has a whole bunch of (all male cast) busybodies crowding into a little room with her in it.
5
DanielleMolloy 2 days ago 2 replies      
This is the darker (and probably more truthful) variant of the story: https://www.theguardian.com/world/2017/apr/08/indian-girl-fo...

" 'In India, people do not prefer a female child and she is mentally not sound,' DK Singh said. 'So all the more [evidence] she was left there.' "

6
malandrew 2 days ago 0 replies      
Would have been interesting to have Jane Goodall involved. She could have left the child integrated but used the circumstance to bridge the communication divide between us and other primates because this girl surely knows things we never will.
7
faitswulff 2 days ago 3 replies      
I read somewhere that reintegration with human society mostly fails for feral children. Is it really a rescue if she dies at a young age, alone?
8
narrator 2 days ago 0 replies      
Now the battle begins to shape her story such that it can be used to reconfirm one of a number of different competing narratives about man's relationship with nature, nature vs nurture, theories about language acquisition, the "critical period" and early childhood development. Did I miss any?
9
srean 1 day ago 0 replies      
Editors note:

 New information has been reported since publication of this story that raise significant doubts about the veracity of the initial accounts on which it was based. The story relied on reports by the Associated Press and the New Indian Express quoting local officials who came upon her, and a video interview with the physician who treated her. These versions of what happened to her are now being questioned by other officials quoted in the Guardian and the Hindustan Times. While the girl appears to have been abandoned near the forest in question, according to these new reports, these officials do not believe she had been living among monkeys. The original headline has been changed, and you can read about the new developments here.
Whoah! An editor cautioning against sensationalism, I don't get to see that often.

10
dmix 2 days ago 0 replies      
Anyone know what kind of monkeys they were? I can't find any mention of it in this article or the original referenced source.
11
baron816 2 days ago 1 reply      
It's possible she could end up like this unfortunately: https://en.wikipedia.org/wiki/Genie_(feral_child)
12
throw2016 2 days ago 0 replies      
The story, if true, is discomforting; the mind ponders, and it does not completely add up.

We know the 'facts' but we also don't. This is exactly the kind of story that needs fact checking, but to get that you need people on the ground, who are experienced and confirmation will take time which the attention span of the news cycle will not allow.

The worst is turning it into some kind of circus. Hope that now with the global attention the Indian authorities will immediately retrieve her from the current facilities with people clearly not trained for this, and get her the kind of specialized care and sensitivity she needs.

13
achow 1 day ago 0 replies      
Mowgli girl found in January, cop says was clothed, no monkeys.

"There were no monkeys. She was not naked, and she wasnt using her hands to walk. I dont know how these stories are being spread.

http://indianexpress.com/article/india/mowgli-girl-found-in-...

14
popol12 2 days ago 2 replies      
How ethical is it to force her to leave the monkeys to become a "normal" human ?
15
smdz 2 days ago 2 replies      
The first expression I had was: What rights do we humans have to take her back from her family (monkeys in this case) and her home (the forest)? Just because she is our kind, should we impose our culture, our values, our ways (and our governments) on her?

But then - this feels more like a creative story. From the videos it looks like she might have been in the forest only for some time and needs rehab, but I am no expert here.

16
zaroth 2 days ago 0 replies      
These types of junk-news stories seem to make their rounds on the Internet for several weeks before finally evaporating into the ether.

What's interesting is that in the past they would seem to manage to stay off the HN front page.

Now it seems like I see these stories start circulating on Outbrain or the other click bait networks and I think, well, that'll be on HN in a week or so!

These stories are usually large part fake news, or reality tweaked or skewed with some angle to make it almost irresistible to read about. I personally have no use for these types of stories on HN but certainly understand they are created with a very compelling hook to want to share them.

17
slitaz 2 days ago 1 reply      
I feel it is a badly-written article.

If the girl managed to survive for so many years, she should have been left with the troop of primates and observed. This sudden change will probably be worse than any other less brutal change in the environment.

18
abrkn 2 days ago 0 replies      
An observation that adds nothing to the story/discussion: DK Singh. Donkey Kong.
19
mythrwy 2 days ago 0 replies      
It's finally happened.

Washington Post has completed the transition into a full blown supermarket tabloid.

20
johnb777 2 days ago 1 reply      
21
kazinator 2 days ago 1 reply      
> Numerous stories of feral children ...

"Feral children?" How amusing; is that an actual phrase?

It evokes a domesticated species of rug-rat, bred in the wild.

22
bingomad123 2 days ago 0 replies      
It is common in India for family members to put their autistic/badly born kids into a cage and display them in circus. I remember a family showing three of their kids in a circus as "animals" just because the babies were autistic and had tail like features.
23
aaron695 2 days ago 1 reply      
No one else here disturbed that an intellectually handicapped girl who was abandoned by the system has been turned into a dancing monkey for HN's amusement?

Surely the discussion here should be more about what a horrific system exists in parts of India that handicapped people are turned into stories.

Do I really need to spell it out it's an abandoned handicapped girl found near monkeys????

The doctor says when she was brought in she was near starving (video)? Were the monkeys looking after her or not?

This is a common fairy tale, seriously people, what is wrong with you that you can't see the real story here. It's about poverty, people not dealing with mental illness and broken systems???

The fact doctors even allowed her to be filmed for your amusement shows they are not very well trained.

18
Fact Check now available in Google Search and News blog.google
299 points by fouadmatin  2 days ago   248 comments top 53
1
jawns 2 days ago 13 replies      
I'm a former journalist, and one of the mistakes I often see people make is to either give too much or not enough credence to whether the facts in a news story (or op-ed) are true.

Obviously, if you disregard objective facts because they defy your assumptions or hurt your argument, you're deluding yourself.

But an argument that uses objectively true and verifiable facts may nevertheless be invalid (i.e. it's possible that the premises might be true but the conclusion false). Similarly, a news story might be entirely factual but still biased. And in software terms, your unit tests might be fine, but your integration tests still fail.

So here's what I tell people:

Fact checking is like spell check. You know what's great about spell check? It can tell me that I've misspeled two words in this sentance. But it will knot alert me too homophones. And even if my spell checker also checks grammar, I might construct a sentence that is entirely grammatical but lets the bathtub build my dark tonsils rapidly, and it will appear error-free.

Similarly, you can write an article in which all of the factual assertions are true but irrelevant to the point at hand. Or you can write an article in which the facts are true, but they're cherry-picked to support a particular bias. And some assertions are particularly hard to fact-check because even the means of verifying them is disputed.

So while fact checking can be useful, it can also be misused, and we need to keep in mind its limitations.

In the end, what will serve you best is not some fact checking website, but the ability to read critically, think critically, factor in potential bias, and scrutinize the tickled wombat's postage.

2
endymi0n 2 days ago 6 replies      
The problems aren't facts. The problems are what completely distorted pictures of reality you can implicitly paint with completely solid and true facts.

If 45 states that "the National Debt in my first month went down by $12 billion vs a $200 billion increase in Obama first mo." that's absolutely and objectively true - except that Obama inherited the financial meltdown of the Bush era and Trump years of hard financial consolidation (while any legislation has a lag of at least a year to trickle down into any kind of reporting at government scale).

Fact-checking won't change a thing about spin-doctoring. At least not in the positive sense.

3
pawn 2 days ago 3 replies      
I think this has huge potential for abuse. Let's say politifact or snopes or both happen to be biased. Let's say they both lean left or both lean right. Now an entire side of the aisle will always be presented by Google as false. I know that's how most people perceive it anyway, but how's it going to look for Google when they're taking a side? Also, I have to wonder whether this will flag things as false until one of those other sites confirms it, or does it default to neutral?
4
provost 2 days ago 2 replies      
I want to think about this both optimistically and pessimistically.

It's a great start and I hope it leads to improvement, but this has the same psychological effect as reading a click-bait headline (fake news in itself) -- unless readers dive deeper. And just as with Wikipedia, the "fact check" sites could be gamed or contain inaccurate information themselves. Users never ask about the 'primary sources', and instead just read the headline at face value.

My pessimistic expectation is that this inevitably will result in something like:

Chocolate is good for you. - Fact Check: Mostly True

Chocolate is bad for you. - Fact Check: Mostly True

Edit: Words

5
sergiotapia 2 days ago 3 replies      
Snopes and Politifact are not fact-checking websites.

>Snopes' main political fact-checker is a writer named Kim Lacapria. Before writing for Snopes, Lacapria wrote for Inquisitr, a blog that oddly enough is known for publishing fake quotes and even downright hoaxes as much as anything else.

>While at Inquisitr, the future fact-checker consistently displayed clear partisanship. She described herself as "openly left-leaning" and "a liberal." She trashed the Tea Party as "teahadists." She called Bill Clinton "one of our greatest presidents."

---

I think fact checking should be non-partisan, don't you?

6
allemagne 2 days ago 1 reply      
I think that politifact, snopes, and most fact-checking websites I'm aware of are great and everyone should use them as sources of reason and skepticism in a larger sea of information and misinformation.

But they are not authorities on the truth.

Google is not qualified to decide who is an authoritative decider of truth. But as the de facto gateway to the internet, it really looks like they are now doing exactly that. I am deeply uncomfortable with this.

7
throwaway71958 2 days ago 0 replies      
This is incomplete: they need to also include the political affiliations of owners of "fact check" sites, and perhaps also FEC disclosure for donations above threshold, and sources of financial support. I.e. this site comes from PolitiFact, but its owner is a liberal and he took a bunch of money from Pierre Omidyar who also donated heavily to the Clinton Global Initiative. Puts the fact checks in a more "factual" light, IMO. Fact check on the fact check: http://www.politifact.com/truth-o-meter/article/2016/jan/14/...

Things have gotten hyper-partisan to the extreme in the past year or so, so you sometimes see things that are factually true rated as "mostly false" if they do not align with the narrative of the (typically liberal) owners.

8
pcmonk 2 days ago 1 reply      
What I wish they would do is use their fancy AI to put in a link to the original source. Tracking down original sources is extremely tedious, but it generally gives you the clearest idea of what's actually going on.
9
artursapek 2 days ago 0 replies      
I see Google having good intentions here, but I fall back to my previous sentiment on trying to assign "true/false" for all political stories and discussions.

https://news.ycombinator.com/item?id=13793576

10
tabeth 2 days ago 1 reply      
Fact checking is irrelevant. What's necessary is education. Just like spellcheck will not allow you to magically compose elegant prose, fact check is not going to prevent people from being misled. Notice how both of these "problems" have the same solution. In fact, fact check can be counterproductive as people now sprinkle their articles with irrelevant facts.

Education is the solution to all social problems.

11
DanBC 2 days ago 2 replies      
I'd be interested to see how it copes with UK newspapers.

https://pbs.twimg.com/media/C6GuXQhWUAAN5F5.jpg

PROOF STATINS SAVE MILLIONS

STATINS IN NEW HEALTH ALERT

STATINS REALLY DO SAVE YOUR LIFE

HEALTH CHIEF SLAMS STATINS

HIGH DOSE OF STATINS CAN BEAT DEMENTIA

DOCTORS BAN ON STATINS

TAKE STATINS TO SAVE YOUR LIFE

NEW STATINS BOMBSHELL

OFFICIAL: STATINS ARE SAFE

STATINS: NEW SAFETY CHECKS

PROOF STATINS BEAT DEMENTIA

HOW STATINS CAN CAUSE DIABETES

STATINS FIGHT CANCER

STATINS SLASH RISK OF STROKE BY 30%

STATINS INCREASE RISK OF DIABETES

STATINS ADD A MERE 3 DAYS TO LIFE

STATINS DOUBLE RISK OF DIABETES

STATINS AGE YOU FASTER

NEW STATINS SAFETY ALERT

STATINS LINKED TO 227 DEATHS

PROOF AT LAST STATINS ARE SAFE

12
sweetishfish 2 days ago 2 replies      
Who fact checks the fact checkers?
13
scottmsul 2 days ago 2 replies      
A better idea would be to look for disagreements. Given a news article or claim, are there any sources out there which disagree? Then the user could browse both claims and decide for himself.
14
ksk 2 days ago 1 reply      
Are we in the twilight zone? An advertising company fact checking political discourse? Would google apply the same fact check to their own company?

"Does Google dodge taxes"

15
civilian 2 days ago 0 replies      
So I mean, this is just a metadata tag. Anyone can make one. I'm looking forward to Breitbart & HuffPo abusing this...

I think it would be interesting to collect a list of websites that disagree on a claim review.

16
pdimitar 21 hours ago 0 replies      
"There's only one truth and that is Google".

Haha, no. Keep trying though.

Plus, as journalists in this thread have said, you might stick to the facts 100% (which I doubt Google will resist the temptation to abuse in the future, but let's leave that aside for now), your conclusion or subliminal message at the end might be entirely untrue and misguided.

Sorry, Google. You need to wait for the planet's collective IQ to drop by several tens still. It's not your time to dominate the news yet.

17
gthtjtkt 2 days ago 0 replies      
Snopes and Politifact are abject failures. Nothing but glorified bloggers who have declared themselves the arbiters of truth.

Even Rachel Maddow has called them out on numerous occasions, and she was rooting for the same candidate as them: http://www.msnbc.com/rachel-maddow-show/watch/politifact-fai...

18
smsm42 2 days ago 0 replies      
Reading the article, it looks like what is going on is that news publishers can now claim that their articles were fact checked, or that a certain article is a fact check of another one, using special markup. They also say the fact checks should adhere to certain guidelines, but I don't see how it would be possible for them to enforce any of these guidelines. It looks like just a self-labelling feature, with all the abuse potential inherent in this.
19
throw2016 2 days ago 0 replies      
'Fact checking' should be limited to blatantly false news items fabricated and posted for online ad clicks ie 'Obama to move to Canada to help Trudeau run country' or 'Trump applies for UK citizenship to free UK citizens from Brussels despots'. These should be relatively easy to identify and classify.

There is a wide line between the fabrications above and news and journalism as we know it full of opinion, bias, agendas, propaganda and maybe some facts twisted to suit narrative.

The latter takes human level ai to sift through and even then detecting bias, leanings or manipulation depends on one's background, world view, specialization, knowledge levels, understanding of how the media works and a well informed general big picture state of the world.

This is impossible to classify for bias, falsehood or manipulation and will need readers to use their judgment. Trying to 'control' this is like trying to control news, favouring media aligned to your world view and discrediting those whose views you disagree with. It is for all purposes propaganda as we understand the term. Calling it fact checking is sophistry.

20
forgotpwtomain 2 days ago 1 reply      
This is a bad slippery slope - it suggests that a 'little sponsored banner' (which google chooses) can waive the necessity of being diligent in thought.
21
orangepenguin 2 days ago 0 replies      
There is obviously a lot of debate on whether or not fact checking is accurate and useful. I think simply presenting a fact check will help people think more critically about headlines they see every day. Like "Mythbusters Science". It's not perfect, but it helps people to think.

Relevant: https://xkcd.com/397/

22
josefresco 2 days ago 0 replies      
What if I told you (cue the Morpheus meme), that people consuming the "fake news" don't care that it's fake? It's called confirmation bias and winning. Education isn't going to solve this issue, you can't forcibly educate people nor can you change their core "values" and their determination to be "right".

The only "education" that I can envision working is quantifying the real-world-impact of their votes on the personal level. Ex: Your health insurance was cancelled? The representative you voted for caused that. This unfortunately is normally executed with a partisan goal, however should be applied as a public service to all Americans.

23
oldgun 2 days ago 0 replies      
Besides political debates, anyone else thinks this 'ClaimReview' schema put to use by Google is one step towards the application of Semantic Web? There might be something more than just a 'new app by Google' here.
24
debt 2 days ago 0 replies      
this is just gonna create a pavlovian response akin to "ah okay this is fact-checked i'll read" which'll just compound the problem. it presumes that google's fact-checking algorithms and methodology are sound.
25
okreallywtf 2 days ago 0 replies      
I'm amazed at how much cynicism I'm seeing here about this. People just keep repeating what can be boiled down to the same premise: complete objective truth basically doesn't exist. Truth is messy, tricky, subjective business. This is not new, this is just how the world is. Truth and understanding is best-effort and always has been, so why is a tool to attempt to combat some of the most egregious falsehoods even remotely a bad thing? Nobody should claim that it's bulletproof, but I'm not seeing anyone really do this? The problem is some of us never deal in absolutes, we see nuance in everything (climate science, economics, political science) but there are others who do deal in absolutes and make a killing doing so. Sitting around having the same debate over and over about facts and truth doesn't do anything to tackle the problem.

My rule of thumb is that generally there is safety in numbers. Don't trust any single source and don't trust something that doesn't have a chain of reasoning behind it. I trust all kinds of scientific statements that I don't have the qualifications or time to vet myself - but we have to do our best and that often means doing a meta-analysis of how a conclusion was reached and how many other people/groups (who themselves have qualifications and links to other entities with similar qualifications) that the statements are linked to.

Fake news isn't 100 levels deep, it's usually 1 level with no real supporting information. When people (like Trump) categorically denounce someone else's statement they often provide no real information of their own. Similarly, when refuting a fact-check, most people don't dig into it and refute something in their chain of reasoning, they just say "well that is just not true!" and leave it at that.

We don't need to fundamentally fix the nature of truth but we need to be able to combat the worst cases of misinformation and any tool that helps do that is great. Continuing the have the same philosophical debate about truth is fine from an academic standpoint but from a practical standpoint it is sometimes not helpful. I feel similarly about climate change - its great to acknowledge nuance but what good is that if we're trending towards pogroms and a totalitarian dictatorship (to be hyperbolic, maybe)?

26
narrowrail 2 days ago 0 replies      
Who will fact check the fact checkers?

Well, perhaps these trusted sources should implement a system similar to Quora/StackExchange but for opposing arguments?

Lots of comments call into question the biases of sites like Snopes/Politifact/etc. and allowing some sort of adversarial response would help claims about 'leftists wanting to control our minds.'

Maybe it's just a widget at the bottom of a fact check post leading to a StackExchange'd subdomain. A wiki or subreddit could work as well. Anyone looking for a side project?

27
balozi 2 days ago 0 replies      
One likely outcome from this is that Google Search and News will now be perceived as partisan by the hoi polloi. Same reason why the old media gatekeeper fell by the wayside.
28
ronjouch 2 days ago 0 replies      
> https://blog.google/products/search/fact-check-now-available...

Didn't know Google has its own top-level domain. Previous HN discussion: https://news.ycombinator.com/item?id=12609551

29
pklausler 2 days ago 1 reply      
I really wish that major legitimate institutions of journalism (i.e., the ones that require multiple independent sources, publish corrections and retractions, &c.) would just stop pussyfooting around and use nice simple accurate words like "lies" when they're reporting on somebody who's blatantly lying. False equivalency and cowardice are going to get us all killed.
30
pcl 2 days ago 0 replies      
The blog title is "Fact Check now available in Google Search and News around the world". I think that the extra bit at the end is worthy of inclusion, as I expect this to become a point of contention over the years.

I would not be surprised if different governments take issue with Google adding any sort of editorial commentary, even if it's algorithmically determined etc.

31
return0 2 days ago 0 replies      
It's a witch hunt. Science (rather, life sciences) has a similar problem. There are just enough (statistically significant) facts to push many agendas. Peer review weeds out some stuff, but that doesn't stop a lot of wrong conclusions being pushed to the public.

Maybe a better solution is adversarial opinionated journalism, rather than this proposed fact-ism.

32
MrZongle2 2 days ago 1 reply      
So what takes place when the inevitable happens, and an employee decides that an existing "fact check" (conducted by a third party, Google hastens to add) is philosophically inconvenient and thus removes it?

Also, FTA: "Only publishers that are algorithmically determined to be an authoritative source of information will qualify for inclusion."

What's the algorithm? Who wrote it?

33
dragonwriter 2 days ago 0 replies      
Original title is "Fact Check now in Google Search and News"; the different capitalization vs the current HN headline ("Fact Check Now...") is significant, the new feature "Fact Check" is now available in Google Search and News, rather than a feature "Fact Check Now" being discussed in those services.
34
losteverything 2 days ago 1 reply      
Billy Jack was rated M.

This is just another new rating system.

As long as they don't prevent me from reading false things, I can live with it.

Keep it my choice.

35
mark_l_watson 2 days ago 0 replies      
I don't like this, at all. People need to rely on their own reasoning skills and critical judgement and not let centralized authorities have a large effect on what people can read. I like systems to be decentralized and this seems to be the opposite.
36
takeda 2 days ago 0 replies      
I know a person who eats those "alternative facts" like candy. When I tried to prove one of them wrong, I pulled out a website to do a fact check and his response was: "you trust Snopes?" so I have doubts this will help much, but I would like to be wrong.
37
coryfklein 2 days ago 0 replies      
Pretty neat! Unfortunately doesn't help when searching for "obama wiretap trump tower".

https://www.google.com/search?q=obama+wiretap+trump+tower

38
Mithaldu 2 days ago 0 replies      
Like very often when google says "everywhere" they don't remotely mean everywhere and should instead be saying "in the usa". My country's edition of google news has no fact check at all.
39
westurner 2 days ago 0 replies      
So, publishers can voluntarily add https://schema.org/ClaimReview markup as RDFa, JSON-LD, or Microdata.
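
[For illustration, not part of the original comment: a minimal ClaimReview snippet in JSON-LD, with placeholder names and URLs, roughly following the schema.org property names (claimReviewed, itemReviewed, reviewRating):]

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "ClaimReview",
      "datePublished": "2017-04-05",
      "url": "https://example.org/fact-checks/example-claim",
      "claimReviewed": "The example claim being checked",
      "itemReviewed": {
        "@type": "CreativeWork",
        "author": { "@type": "Person", "name": "Example Speaker" }
      },
      "author": { "@type": "Organization", "name": "Example Fact-Checking Outlet" },
      "reviewRating": {
        "@type": "Rating",
        "worstRating": 1,
        "bestRating": 5,
        "ratingValue": 1,
        "alternateName": "False"
      }
    }
    </script>

[The rating scale and the human-readable alternateName ("False", "Mostly true", etc.) are chosen by the publisher.]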
40
DrScump 2 days ago 0 replies      
It's interesting timing that just today, for the first time in a couple of weeks, my Facebook feed has fake news clickbait ads again.

Unless both Kevin Spacey and Burt Reynolds are, in fact, dead. Again.

41
ArchReaper 2 days ago 2 replies      
Anyone have an alt link? 'blog.google' does not resolve for me.
42
thr0waway1239 2 days ago 0 replies      
Factual Unbiased Checks for Knowledge Upkeep by Google.
43
xster 2 days ago 0 replies      
The fact that this came from CFR/Hillary's State Department's Jigsaw is very troubling.
44
sova 2 days ago 0 replies      
Hurrah for Google! Now if only Facebook and SocialNetworkGiants(tm) would follow suit!
45
retox 2 days ago 1 reply      
I don't trust Google to tell me the sky is blue.
46
codydh 2 days ago 1 reply      
I tried a slew of recent statements that are objectively false but that a certain politician in the United States has tried to say are true. Google returned fact checks for exactly 0 of the queries I tried.
47
keebEz 2 days ago 1 reply      
A fact has no truth value. Truth only comes from reason, and reason only exists in each person's head. This is reducing the demand for reason, and thus destroying truth.
48
ffef 2 days ago 0 replies      
A great start in the right direction, and kudos for using Schema to help battle "fake news"
49
gokusaaaan 2 days ago 0 replies      
who fact checks the fact checkers?
50
SJacPhoto 2 days ago 0 replies      
And Who controls the fact-check facts?
51
isaac_is_goat 2 days ago 0 replies      
Snopes and Politifact? Really? smh
52
huula 2 days ago 0 replies      
Goodgirl!
53
snowpanda 2 days ago 2 replies      
Snopes and Politifact, they can't be serious. Not that I expected them to pick a neutral source, nor am I surprised that Silicon Valley's Google picked 2 leftist "fact" sources. This is a stupid idea, everyone has a bias. This isn't to help people, this is to influence how people see things.
19
What is this colored fiber in my chicken? stackexchange.com
337 points by kurmouk  9 hours ago   209 comments top 9
1
Hexcles 3 hours ago 12 replies      
Animal welfare aside, I find people in North America really in favour of chicken breast, much more than other parts, say chicken thigh. Yet I myself think chicken thigh tastes much better, especially with the skin (yet again it is usually skinless in supermarkets here, unfortunately). Is it because of nutrition (percentage of fat/protein etc.)? On top of that, chicken feet are considered unacceptable by many...
2
ncr100 8 hours ago 7 replies      
Fake meat cannot come soon enough - poor bird was encouraged to grow in an unhealthy manner resulting in dead tissue inside it while it was still alive. I wonder if it was painful for the bird having this tough dead tissue at the core of its breasts.
3
pvaldes 5 hours ago 1 reply      
This is not a problem caused by antibiotics. It is a problem of a genetic nature that happens because broilers are inbred for growing big and fast. The same birds in true free range with plenty of food would face exactly the same problem, with or without antibiotics. They are too heavy and often tend to have cardiac diseases, but they live short lives and are delicious, so they are the most successful bird on the planet.

On the other hand, we are a paradoxical species. Able to feel horrified by this, whereas happily petting our distorted-faced bulldogs, persian cats, caesarean-born bull terriers, extra-dwarfed toy Yorkshires, ponies and toy mini pigs, without any trace of moral conflict...

4
bambax 8 hours ago 7 replies      
Brown meat is much much more delicious; why anyone would prefer white meat is beyond me.

And the irony is that, in buying "heavy breasted chicken", customers pay for something they can't consume (assuming chicken is priced by the pound in the US).

5
Devagamster 8 hours ago 5 replies      
This is kinda terrifying. I can't put my finger on why exactly but dang.
6
hellofunk 3 hours ago 1 reply      
People just eat way too much chicken. The number of chickens consumed every year in most western nations is astounding. And those poor birds, the way they are packed to the point of not even being able to walk while they are raised, it's really a lot more disgusting than the final product shown in this article.
7
codr4life 5 hours ago 2 replies      
8
andrewclunn 3 hours ago 1 reply      
Your concern for animal welfare will never be enough for the animal rights people (just read the other comments here).
9
enibundo 3 hours ago 1 reply      
I have a solution (downvotes are welcome): eat an organic and mostly vegetarian diet.
20
Does it scale? Who cares (2011) jacquesmattheij.com
415 points by ne01  1 day ago   262 comments top 31
1
timewarrior 1 day ago 29 replies      
Couldn't agree with this article more.

I built the biggest social network to come out of India from 2006-2009. It was like Twitter but over text messaging. At its peak it had 50M+ users and sent 1B+ text messages in a day.

When I started, the app was on a single machine. I didn't know a lot about databases and scaling. Didn't even know what database indexes are and what are their benefits.

Just built the basic product over a weekend and launched. Timeline after that, each time the web server exhausted all the JVM threads trying to serve requests:

1. 1 month - 20k users - learnt about indexes and created indexes.

2. 3 months - 500k users - Realized MyISAM is a bad fit for mutable tables. Converted the tables to InnoDB. Increased number of JVM threads to tomcat

3. 9 months - 5M users - Realized that the default MySQL config is for a desktop and allocates just 64MB RAM to the database. Setup the mysql configs. 2 application servers now.

4. 18 months - 15M users - Tuned MySQL even more. Optimized JDBC connector to cache MySQL prepared statements.

5. 36 months - 45M users - Split database by having different tables on different machines.

I had no idea or previous experience about any of these issues. However I always had enough notice to fix issues. Worked really hard, learnt along the way and was always able to find a way to scale the service.

I know of absolutely no service which failed because it couldn't scale. First focus on building what people love. If people love your product, they will put up with the growing pains (e.g. Twitter used to be down a lot!).

Because of my previous experience, I can now build and launch a highly scalable service at launch. However the reason I do this is that it is faster for me to do it - not because I am building it for scale.

Launch as soon as you can. Iterate as fast as you can. Time is the only currency you have which can't be earned and only spent. Spend it wisely.

Edited: formatting
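
[For illustration, not from the comment: most of the steps in that timeline boil down to one-line database changes. A hedged sketch with made-up table and column names, covering steps 1-4:]

    -- Step 1: add indexes for the hot lookup paths.
    ALTER TABLE messages ADD INDEX idx_user_created (user_id, created_at);

    -- Step 2: move mutable, write-heavy tables off MyISAM.
    ALTER TABLE messages ENGINE = InnoDB;

    -- Step 3: replace the desktop-sized defaults in my.cnf, e.g.
    --   innodb_buffer_pool_size = 8G
    --   max_connections         = 500

    -- Step 4: cache prepared statements on the JDBC side (MySQL Connector/J
    -- connection properties), e.g.
    --   cachePrepStmts=true&prepStmtCacheSize=250&useServerPrepStmts=true

[Step 5, splitting tables across machines, is the only one that usually needs application changes.]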

2
shadowmint 1 day ago 4 replies      
I care.

It's easy to brush off scaling concerns as not important, but I've had personal experience where it's mattered, and if you want a high profile example, look at twitter.

Yes, premature optimization is a bad thing, and so is over engineering; but that's easy to say if you have the experience to make the right initial choices that mean you have a meaningful path forward to scale when you do need it.

For example, let's say you build a typical business app and push something out quickly that doesn't, say, log when it fails, or provide an auto-update mechanism, or have any remote access. Now you have it deployed at 50 locations and it's 'not working' for some reason. Not only do you physically have to go out to see what's wrong, you have to organize a reinstall at 50 locations. Bad right? yes. It's very bad. (<---- Personal experience)

Or, you do a similar ruby or python app when your domain is something that involves bulk processing massive loads of data. It works fine and you have a great 'platform' until you have 3 users, and then it starts to slow down for everyone; and it turns out, you need a dedicated server for each customer because doing your business logic in a slow language works when you only need to do 10 items a second, not 10000. Bad right? yes. Very. Bad. (<---- Personal experience)

It's not premature optimization to not pick stupid technology choices for your domain, or ship prototypes.

...but sometimes you don't have someone on the team with the experience to realize that, and the push from management is to just get it out, and not worry about the details; but trust me, if you have someone who is sticking their neck out and going, hey wait, this isn't going to scale...

Maybe you should listen to what they have to say, not quote platitudes.

Ecommerce is probably one of those things where the domain is well enough known you can get away with it; heck, just throw away all your rubbish and use an off-the-shelf solution if you hit a problem; but I'm going to suggest that the majority of people aren't building that sort of platform, because its largely a solved problem.

3
salman89 1 day ago 2 replies      
I agree with the general premise of avoiding premature optimizations, but designing systems that scale is important for several reasons:

- Startups grow exponentially; if you're playing catch-up as you're growing, you are focusing on keeping the lights on and hanging on for the ride. Important for a growing company to focus on vision.

- Software that scales in traffic is easier to scale in engineering effort. For example, harder for a 100 engineers to work on a single monolith vs 10 services.

- Service infrastructure cost is high on the list of cash burn. Scalable systems are efficient, allow startups to live longer.

- If the product you are selling directly correlates to computing power, important to make sure you are selling something that can be done profitably. For example, if you are selling video processing as a service, you absolutely need to validate that you can do this at scale in a profitable manner.

I also don't agree with the premise that speed of development and scalable systems are always in contention. After a certain point, scalable systems go hand in hand with your ability to execute quickly.

4
beefsack 1 day ago 4 replies      
Taking a completely blasé approach to efficiency is potentially as dangerous as becoming hyper-focused on it.

Not all businesses become roaring successes, and those who achieve moderate success often don't get the resources to fix deep-seated performance or architectural issues (either via engineering and/or throwing hardware at it.) Eventually these technical woes can completely halt momentum and I've seen it even drown some businesses who just aren't able to dig themselves out of the hole they find themselves in.

People always seem to be arguing for extremes, but the most sensible approach for most tends to be somewhere in the middle.

5
devduderino 1 day ago 7 replies      
I care because it usually goes like this: Product manager > "Niche SaaS app {x} will never need to support more than 10-20 users"

Two weeks after launch > "We have 10k users and counting, why didn't you architect this for scale?"

Always assume you underestimated the scope of the project.

6
hopfog 1 day ago 6 replies      
I'm in the unfortunate position where this question actually matters from day 1. I learnt the hard way a few days ago when I hit the bottleneck (about 50-100 concurrent users) and I'm not sure how to proceed.

It's a multiplayer drawing site built with Node.js/socket.io. I'm already on the biggest Heroku dyno my budget can allow, and it's too big of a task to rewrite the back end to support load balancing (and I wouldn't know where to start). Bear in mind that this is a side project I'm not making any money off.

I had a lot of new features planned but now I've put development on hold. It's not fun to work on something you can't allow to get popular since it would kill it.
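For what it's worth, the usual escape hatch here is the pattern socket.io's Redis adapters implement: run several server instances and have them relay broadcasts to each other through a shared pub/sub channel, so any instance can fan events out to the clients connected to it. A rough, language-agnostic sketch of that pattern (in Python with redis-py; the 'draw-events' channel name and the client send() method are made up for illustration, not taken from the project above):

    # Sketch of the shared pub/sub bus that lets several websocket servers
    # broadcast to each other's clients. Assumes a running Redis server and
    # the redis-py package; channel name and client API are hypothetical.
    import json
    import redis

    CHANNEL = "draw-events"  # hypothetical channel name
    bus = redis.Redis(host="localhost", port=6379)

    def broadcast(event):
        # Publish a drawing event so every instance (including this one)
        # sees it and can forward it to its own connected clients.
        bus.publish(CHANNEL, json.dumps(event))

    def relay_loop(local_clients):
        # Run on each instance (e.g. in a background thread): forward events
        # arriving on the bus to the clients connected to this instance.
        pubsub = bus.pubsub()
        pubsub.subscribe(CHANNEL)
        for message in pubsub.listen():
            if message["type"] != "message":
                continue
            for client in list(local_clients):
                client.send(message["data"])  # assumes a send() method on the client object

A sticky-session load balancer in front of the instances plus something like the above is often enough to get past a single-dyno ceiling without a full rewrite.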

7
didibus 23 hours ago 1 reply      
The reason scale isn't so important today is because most DBs can actually scale vertically to really high numbers. The tipping point is high enough that if you have this problem, you probably can also afford to fix it.

What matters though is performance and availability. No matter what scale you work at, you can't be slow, that will drive people away. You also can't be unavailable. This means that you might have to handle traffic spikes.

Depending on your offering, you probably also want to be secure and reliable. Losing customer data or leaking it will drive customers away too.

So, I'd mostly agree, in 2016, scale isn't a big problem. Better to focus on functionality, performance, security, reliability and availability. These things will impact all your customers, even when you only have one. They'll also be much harder to fix.

Where scale matters is at big companies. When you already have a lot of customers, your first version of any new feature or product must already be at scale. Amazon couldn't have launched a non-scalable Prime Now or Echo. Google can't launch a non-scalable chat service, etc.

8
dlwdlw 1 day ago 0 replies      
Flexibility vs efficiency. Agile vs high momentum.

As a rule of thumb, start-ups need to be more agile as they are mostly exploring new territory, trying to create new value or re-scope valueless things into valuable things.

Larger companies operate at a scale where minor efficiency improvements can mean millions of dollars and thus require more people to do the same thing, but better. Individualistic thinking on new directions to go is not needed nor appreciated.

Of course there are exceptions. The question boils down to whether or not the ladder is against the right wall before charging up it.

In rare circumstances you can do both. Either the problem is trivial, or the problem becomes trivial because you have a super expert. 10x programmers who habitually write efficient code without needing to think too much have more bandwidth for things like strategy and direction. The car they drive is more agile, accelerates faster, has a higher max speed, etc., but even this can't move mountains. The problem an individual can solve, no matter the level of genius, is still small in scope compared to the power of movements and collective action and intention.

The most powerful skill is to seed these movements and direct them.

Abstractly, this is what VCs look for in founders and also a reason why very smart and technical people feel short-changed that they are not appreciated for their 10x skills. (Making 500k instead of millions/billions) They may have 10x skills, but there are whole orders of magnitude they can be blind to.

9
addicted 1 day ago 1 reply      
Healthcare.gov was a site that failed and suffered due to scaling issues.

However, it was an anomaly because, unlike a product that someone this article is aimed at would be building, that site had an immediate audience of millions of users from the get-go.

Also, the fact that it took only a few weeks to rewrite it to handle the load, at which point it became extremely successful, strengthens the original article's point.

By the time scalability becomes a problem, you will have enough resources to tackle the scalability problem.

10
Jach 1 day ago 0 replies      
No one else bothered by "End-to-end tracking of customers" as the primary concern? Ok then.

On the subject of scaling, I think it's good to have an idea in your head about a path to scalability. One server, using PHP and MySQL? Ok. Just be aware you might have to load balance either or both the server and DB in the future, and that's assuming you've gotten the low hanging fruit of making them faster on their own. But as this thread's top comment illustrates, learning that stuff on the fly isn't too hard. So maybe it's better to make sure you're going with technology you sort of know has had big successes elsewhere (like Java, PHP, or MySQL) and even if you're not quite sure how you might scale it beyond the defaults you know others have solved that problem and you can learn later if/when needed.

11
tyingq 1 day ago 3 replies      
He makes the valid point that performance for each individual user, like page load time, does matter. Just that building for an audience size you don't yet have is mostly wasted time.

Seems reasonable. I wonder, though, if PHP feels like an anchor to the average Facebook developer. I realize they architected around it, but it must have some effect on recruiting, retention, etc. I use PHP myself, and don't hate it, but the stigma is there.

12
sbuttgereit 1 day ago 0 replies      
The overall premise of the blog is exactly correct; though I would say some areas you probably need to consider more than others.

Spending a lot of time figuring out exactly what microservice/sharding/etc. strategy you need to serve a zillion visits a day, and building it before you've even got customer/visitor one, is overkill out of the gate. But that doesn't mean you shouldn't think about how you'll scale over the short or medium term at all.

When I approach scaling, I'll tend to spend much more time on the data retention strategy than anything else: databases (or other stores), being stateful, are a harder problem to deal with later than earlier, compared to the stateless parts of the system. Even so, I'm typically not developing the data services for the unicorn I hope the client becomes; I'm just putting a lot more thought into optimizing the data services I am building so they won't hit the breaking point as early as they might if I were designing for functionality alone. I do expect there to be a breaking point and a need to change direction at some point in these early-stage designs. But in that short-to-medium-term period, the simpler designs are regularly easier to maintain than the fully "scalable" approaches that might be tried otherwise, and rarely do those companies ever need anything more.

13
uptownfunk 1 day ago 1 reply      
I like the overall idea here. Focus on building something quality first then worry about scaling later. Most servers can handle a decent amount of traffic. Seems like common sense to me. I guess some people can get too hung up on engineering to make their site scale before actually deploying or innovating on the product. Wonder if people have encountered this in the workplace before?
14
nomercy400 1 day ago 0 replies      
Once worked at a startup where we expected 'some activity' in our webshop at product launch, and didn't think about scaling for that. Well, some activity turned out to be 450 Mbit/s for five hours, which our unscalable application/webshop didn't handle very well. It became overloaded in the first minute, and it took us more than an hour to get remote access again. It's one of those things we did better for our next big event (major sharding; basically we replicated the application 32 times on the largest VM instance we could get. It was needed and it survived).
15
the_arun 1 day ago 0 replies      
I agree with this article only for launching new products. But if you already have a product which is serving millions of customers, you'd better worry about scale whenever you change anything.
16
iveqy 1 day ago 0 replies      
I once worked on an app that we just threw more hardware at, to the point where three months of optimization work lowered the Azure subscription cost by my whole annual salary.

I believe performance does matter. We were a 4-person team and could have added a fifth if we had a cheaper design.

17
mannykannot 1 day ago 0 replies      
There is a similar argument with regard to making code reusable. I have seen inordinately complex code come from a desire to make it reusable, even if the prospects for it being reused were slim to nonexistent.
18
janwillemb 1 day ago 0 replies      
In general: don't fix a non-existent problem. You don't know beforehand what the problems of the future will look like. Fancy technology X of today is technical debt in 10 years. So do invest in paying down technical debt along the way in products you keep.
19
shoefly 1 day ago 0 replies      
Evolution is a beautiful thing.

I once worked for a monolithic company that decided to invent a new way of programming. They built something massive and ready to scale even larger. And then they discovered no one wanted the product.

20
shanecleveland 23 hours ago 0 replies      
Came across a service last week with a free trial and paid plan. How to upgrade? Contact by email! Why spend time and resources on a payment process if you don't have any paying customers yet?

Obviously not right for everyone, and I'm not saying it doesn't have its own challenges, but the core product deserves the most attention early on.

21
andromeda__ 1 day ago 0 replies      
Fundamentally disagree with the ethos of this article. What about ambition? What happened to that?

> You won't make the cover of Time Magazine and you won't be ordering a private jet but that Ferrari is a definite possibility, if that's the thing you are hurting for. (college education for your kids is probably a better idea ;) ).

I'd like to be on the cover of Fortune or, as Russ Hanneman might say, "I wanna make a fuckton of money all at once".

I don't see any problem with being ambitious or wanting that private jet.

22
amelius 1 day ago 0 replies      
A better title would be: scalability is a luxury problem.
23
seajones 1 day ago 0 replies      
I do agree, but being at the "we need to scale up asap" stage atm makes it harder to. There's a balance to be struck. Maybe an approach of "who cares" at the POC, MVP, etc. stages, then keeping it more and more in mind at each stage after that, would be best.
24
tianlins 1 day ago 0 replies      
It really depends on the type of growth. Organic growth is driven by the quality of the product, so scaling issues come later. But most venture-backed startup services, such as O2O, need to quickly dominate the market by throwing cash at user acquisition, so scaling is an issue from day one.
25
ninjakeyboard 1 day ago 0 replies      
I agree BUT it's not that hard to ensure your app scales. It's more about using appropriate tools for the job.
26
xyzzy4 1 day ago 2 replies      
Ok but please don't do things like using nested array searches with bad runtime when you can use hashmaps instead. I hate seeing code or using programs that are extremely unoptimized.
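To make the point concrete, here is a small Python sketch of the two approaches; the order/customer record shapes are invented for illustration:

    # Matching orders to customers. Data shapes are made up for the example.

    def attach_customers_slow(orders, customers):
        # Nested scan: roughly len(orders) * len(customers) comparisons.
        result = []
        for order in orders:
            for customer in customers:
                if customer["id"] == order["customer_id"]:
                    result.append((order, customer))
                    break
        return result

    def attach_customers_fast(orders, customers):
        # Hash map (dict): one pass to build the index, then O(1) per order.
        by_id = {customer["id"]: customer for customer in customers}
        return [(order, by_id[order["customer_id"]])
                for order in orders
                if order["customer_id"] in by_id]

Same result, but the second version stays fast as both lists grow.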
27
innocentoldguy 1 day ago 0 replies      
I agree with this article; however, there are considerations that can be made early on, to ensure an easy path for future scalability, that don't waste any time or money during the project's nascency. For example, if I know I want my app to scale at some point in the future, I may opt to build it in Elixir, or I may choose to use a Riak cluster rather than MySQL.
28
namanyayg 1 day ago 0 replies      
Can we have (2011) in the title?
29
partycoder 1 day ago 0 replies      
While a point the article tries to make (fix your leaky funnel before acquiring users) is true, I disagree with the article. If your application is converting well, scalability problems are not acceptable.

I have seen applications that convert very well but were limited by scalability problems. That meant that the business had to hold off on marketing and user acquisition, missed their financial targets, and that cascaded into breaching contracts. The phrase that nobody wants to hear in that situation is "who cares about scalability".

Now, if you did not have a lot of problems scaling in your particular case, that just means it was not an obstacle for you. e.g: you had good intuition around performance/scalability, or the problem was coincidentally a good fit for your technological choices.

Unfortunately not everyone has a good intuition about scalability, not everyone is risk averse and not everyone is good at picking a good technology for their use case. So I disagree with this article in the sense that it is not in the best interest of a random reader to not care about scalability.

30
debt 1 day ago 0 replies      
I concur. It's fun to dream, but sadly, statistically most of us will never have to worry about scaling! So save yourself the energy and don't worry about it.
31
cabaalis 1 day ago 7 replies      
I liked the article and agree with its premise. But as a side question, why do developers use so many parenthetical expressions?

Those ideas (like this one, which happens to add nothing) are often either throwaway statements (like this one) or are by themselves complete thoughts that should be a separate sentence. (I see this so often in posts written by devs.)

21
Great Barrier Reef at 'terminal stage' theguardian.com
298 points by mjfern  12 hours ago   132 comments top 18
1
crawrey 9 hours ago 4 replies      
I grew up on the coast of the Great Barrier Reef in North Queensland and I can say that the current state of the reef is almost unrecognisable to what it was 20-odd years ago.

While a large part of this damage has been caused by rising sea temperatures, another large component is due to the run-off from agriculture, refineries and mining. The latter is directly contributed by the local population.

The region is currently in an economic recession and many of the mines and refineries have either slowed or ceased operation. Anecdotally, the people whose (un)employment depends on these industries are either unaware of or indifferent to the damage that the industries are having on this sensitive ecosystem. Instead they are consumed by how they are going to make ends meet.

In this environment, it is unthinkable to allow Adani to expand their Carmichael mine and further exacerbate the situation. Add to this that a former Adani board member has been appointed to evaluate the environmental impacts of the expansion. Adani is the biggest direct threat to Australia, both environmentally and economically, and they are in talks with the government to be provided with a $1 billion taxpayer-funded railway line. Adani and the Carmichael mine expansion are riddled with corruption.

The issue of the reef and climate-change in general is a fairly untouched issue in Australian politics. I'm not sure that we are going to get anywhere without foreign intervention.

If you are interested, I do urge you to read some material on Adani and the Carmichael coal mine expansion and perhaps donate to a "StopAdani" cause.

Wish us luck.

2
spodek 3 hours ago 5 replies      
Many posts here about how sad and disgusted people are. Not much about people taking personal responsibility.

What do people think the carrying capacity of the planet means? Sustaining more humans means sacrificing other life that competes for our resources. It means pollution rising until it doesn't quite kill us but is well above the levels of a pristine, clean environment.

Nobody wants to live near the carrying capacity because approaching it means sacrificing anything that doesn't keep us alive.

Every round trip flight across the country you take contributes almost one year's allotment from the Paris agreement for one person -- https://co2.myclimate.org/en/portfolios?calculation_id=71970.... Flying first class and you're well over it. Eating meat contributes a lot too. Having more kids in western cultures contributes significantly.

Who among us, reading this, hasn't gone over their annual limit in just a few hours of flying, not to mention their regular life otherwise? How many have blown past their Paris limits already this year?

Some would say the damage was done by past generations. Okay, well what beautiful part of nature will our behavior destroy years from now? People keep posting to HN that since we can't change the fact that a lot of damage will happen, it doesn't matter any more and we should just enjoy ourselves, but there are different degrees of destruction.

Alternatively, we can fly less, drive less, eat much less meat, and have fewer kids. We don't need to wait for legislation. In fact, it's the fastest way to get legislation, since politicians follow voters.

In my experience, acting on all those things improved my life tremendously (including not flying for a year+ http://www.inc.com/joshua-spodek/365-days-without-flying.htm...), more than almost anything else. I'm more fit, enjoy my neighborhood and neighbors, and spend less money and there's nothing special about me.

3
Luminarys 10 hours ago 1 reply      
Although I've never seen the Great Barrier Reef, I was recently in Belize and snorkeled around its Barrier Reef. It was painfully obvious that the reefs were extensively bleached (though still quite beautiful). It's quite shocking to think that 30 years ago the reefs looked completely different from now and in another 30 years may not even be around. I hope that in the future we won't be reduced to having to point to pictures in a book if people want to witness the beauty of nature, but this seems increasingly inevitable. What a shame.
4
jozzas 9 hours ago 0 replies      
There are some excellent scientists and programs attempting to improve water quality (particularly in catchments that flow into the reef) and to control the crown-of-thorns starfish.

Unfortunately these are badly underfunded, not coordinated at a national level as they should be, and do not address the biggest threat - climate change. The reef is doomed unless something is done about CO2 emissions. It's probably already too late.

The loss of the GBR will see a collapse of tourism industries, and entire ecosystems will die off. There are going to be huge impacts in the next 15-20 years to come out of this.

A lot of low lying countries in the pacific will get the triple whammy of increased cyclone activity, rising sea levels and a loss of reefs and the fish populations that they subsist on. There are huge humanitarian disasters ahead.

5
ohashi 10 hours ago 1 reply      
I saw a bleaching event in Thailand last year, it really is depressing to see. This year, the same sites have seemingly recovered and I'm really happy about that. But seeing the pictures of completely bleached white corals in the article and knowing that's probably the future for a lot of these reefs breaks my heart. Coral reefs are magical places and we're destroying them for future generations, maybe even the current one.
6
H4CK3RM4N 10 hours ago 4 replies      
Sadly I can't remember the last time we had any real action to protect the reef, and our current government is all too willing to put businesses above the environment.
7
Red_Tarsius 10 hours ago 2 replies      
This is what keeps me up at night. If we don't find an efficient way to extract CO2 and methane from the atmosphere, I fear that mankind might go extinct.
8
zipwitch 1 hour ago 0 replies      
This is merely the inevitable outcome of what we've known has been coming for a while, even if many haven't wanted to acknowledge that the corals were effectively already dead. http://www.nytimes.com/2012/07/14/opinion/a-world-without-co...
9
orschiro 10 hours ago 1 reply      
The sad fact is that even these devastating developments do not make us change systemically to the extent required to counteract them.
10
psynapse 8 hours ago 0 replies      
This really saddens me.

I spent a week or so on Lady Elliot Island more than a decade ago, snorkelling every day. Because it is a protected area, the fauna are unafraid of people. I would dive down into these cavernous bowls of coral and be surrounded by schools of fish, rays, turtles. It was amazing.

I live in Europe now, but I always hoped to take my children there to see it one day. Seems there won't be much to see.

11
huckyaus 8 hours ago 1 reply      
My cousin Sam[0] runs the environmental side of things at GetUp and is putting a lot of time and effort into raising awareness about the reef. They're currently fundraising[1] for a targeted campaign in 12 Liberal electorates with the aim of encouraging MPs to break their silence and listen to their constituents on the issues surrounding Adani and the GBR.

Full disclosure: GetUp is a politically partisan organisation with strong left leanings. But I think the work they're doing around these issues is rooted more in common sense than politics.

Is anyone else involved in any grassroots-level efforts to save the reef? I'd be interested to hear about them.

[0] https://twitter.com/samregester

[1] https://www.getup.org.au/campaigns/great-barrier-reef--3/the...

12
infradig 7 hours ago 1 reply      
The real culprit for coral bleaching? A swift fall in mean sea level during a major El Niño event. See: http://www.biogeosciences.net/14/817/2017/
13
josscrowcroft 3 hours ago 1 reply      
Is improving water quality the decided-upon method for preventing (or even reversing) bleaching of coral reefs?

It seems like that ship has sailed, and now more technological advances are required.

Speaking from zero expertise or experience in marine biology, is it not possible to manufacture massive quantities of synthetic coral that somehow corrects for the changing water quality to enable coral life to flourish?

14
rodionos 9 hours ago 0 replies      
> Coral bleaches when the water it's in is too warm for too long. The coral polyps get stressed and spit out the algae that live inside them. Without the colourful algae, the coral flesh becomes transparent, revealing the stark white skeleton beneath.
15
Slobbinson 6 hours ago 1 reply      
Pauline Hanson & Malcolm Roberts will pose in front of the coral display at the Townsville Aquarium and tell us it's all a beat-up.
16
good_vibes 8 hours ago 5 replies      
What can we do? Serious question.
17
hoodoof 10 hours ago 4 replies      
x
18
harry8 2 hours ago 0 replies      
I've been hearing the reef is at death's door for 30 years. When I go, it looks amazing. Still. Maybe this scare campaign is different to all the other ones. Maybe this time as they cry wolf there really is a wolf. Maybe... If so it's a perfect example of the environmental movement destroying their own credibility so that not many in Australia, among those who love the reef, care what they're saying this week. People need to call bullshit on bullshit whether that bullshit is in the service of great justice or not. It's still bullshit and it still trashes credibility. Outrage fatigue is far worse when you know you've been had.
22
The BEAM Book: A Description of the Erlang RTS and the Virtual Machine BEAM github.com
330 points by weatherlight  2 days ago   14 comments top 9
1
rdtsc 2 days ago 0 replies      
There is even a separately implemented BEAM VM for running directly on the Xen hypervisor:

https://github.com/cloudozer/ling

An impressively done book on BEAM instruction sets:

https://github.com/cloudozer/ling/tree/master/doc

There is even a handy dandy online instruction set completion search:

http://erlangonxen.org/more/beam

2
qaq 2 days ago 0 replies      
To anyone who wants to learn about the BEAM, I would highly recommend Hitchhiker's Tour of the BEAM by Robert Virding: https://www.youtube.com/watch?v=_Pwlvy3zz9M
3
qohen 2 days ago 1 reply      
There's a bit of back-story here, the gist of which is: the book initially was supposed to come out from O'Reilly and Associates, but periodically the release date would get pushed back by 3 months or so.

Then they cancelled it.

It was then picked up by Pragmatic Programmers, but they wound up cancelling it too.

In any case, Erik told me a couple of weeks ago at Erlang & Elixir Factory that he'd be looking to get it out himself online by -- or around, I forget -- summertime, so it's nice to see he did this now, even if it means I lose the betting pool (I kid).

4
tombert 2 days ago 1 reply      
This is incredibly cool. BEAM is one of the most fascinating bits of tech to me, particularly its garbage collector.

You've given me a bit of reading material for my train-ride for the next few days, so thank you!

5
defined 2 days ago 0 replies      
FWIW, there's a description of most of Erlang internal data structures here: https://edfine.io/blog/2016/06/28/erlang-data-representation...
6
jarrettch 2 days ago 0 replies      
This is amazing. I've started to learn Elixir recently, and as a result my interest in Erlang and BEAM has been piqued. I'm sure a lot of this will be over my head for now, but looking forward to digging in.
7
jfaucett 2 days ago 0 replies      
This is awesome. Unlike the JVM, it's really hard to find anything describing how the Erlang VM works in detail. I tried reading through the source a while back, but it was still hard to get a gist of what was going on because of the sheer size of the project.

This will be a very valuable resource. Thanks so much to Erik!

8
dkroy 2 days ago 0 replies      
Not to be confused with Apache Beam, which is looking to become a survivor in the big data space with its abstractions over whatever is the hottest new streaming or batch processing technology.
9
alexott 2 days ago 0 replies      
Thank you! Did you think about using gitbook service?
23
Best Practices for Applying Deep Learning to Novel Applications arxiv.org
285 points by mindcrime  1 day ago   17 comments top 6
1
AndrewKemendo 1 day ago 3 replies      
It's good general advice, but frankly I think it doesn't address some of the major pitfalls - namely the top one: personnel.

In the very beginning it's stated:

"In this report I assume you are (or have access to) a subject matter expert for your application."

In my experience this is where it goes off the rails for most of the crowd that she is addressing. Not because they don't have someone, but because who they have isn't really a "subject matter expert."

It's a muddy term anyway, especially in the field of Machine Learning. Excluding for a moment the hucksters and bald-faced liars, within ML there is WIDE variance in competence, domain specificity and application-specific capability within the field.

The biggest capability gap that I have encountered when working with fantastic ML folks is that the ones that understand the mechanisms/algorithms/approaches best, are actually pretty terrible at delivering production code. That's not for lack of capability, it's simply because the bulk of their time has been spent in research - so they approach things very differently than application focused engineers. This is extremely relevant in this case because this is an application specific paper.

There are a plethora of mine-fields in applications of ML, some of which are outlined here from a systems approach, but the majority of which are personnel issues in my experience, and "culture" issues - not to be confused with "culture fit" problems that exist elsewhere.

2
gwern 1 day ago 0 replies      
Looks like good advice to me. Like a more DL-focused version of "A Few Useful Things to Know about Machine Learning" http://www.datascienceassn.org/sites/default/files/A%20Few%2... , Domingos 2012
3
comicjk 1 day ago 0 replies      
> Let's say you want to improve on a complex process where the physics is highly approximated (i.e., a spherical cow situation); you have a choice to input the data into a deep network that will (hopefully) output the desired result or you can train the network to find the correction in the approximate result. The latter method will almost certainly outperform the former.

This aligns with my experience (computational chem PhD). When applying a strong, general-purpose mathematical patch to an existing model, use as much of the existing model as possible. Otherwise the patch will have a hard time fitting, and maybe be worse than what you started with. Philosophically, this also comports with my thinking (it's the modeling equivalent of Chesterton's Fence https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence).
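To make the "learn the correction" idea concrete, here is a rough sketch in Python with scikit-learn; approximate_model() is a placeholder for whatever cheap spherical-cow calculation already exists, not anything taken from the paper:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def approximate_model(X):
        # Placeholder for the existing cheap physics approximation.
        return 2.0 * X[:, 0]

    def fit_correction(X, y_true):
        # Fit the model to the residual (what the approximation gets wrong),
        # not to the raw target.
        residual = y_true - approximate_model(X)
        corrector = GradientBoostingRegressor()
        corrector.fit(X, residual)
        return corrector

    def predict(corrector, X):
        # Final prediction = cheap approximation + learned correction.
        return approximate_model(X) + corrector.predict(X)

The learned part only has to capture the (usually smaller, smoother) error of the approximation, which is why the correction approach tends to win.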

4
DrNuke 1 day ago 0 replies      
Physics is deterministic in states and always tends towards an equilibrium, so novel results from DL may still fit some continuum math model without being stable or even real. Domain expertise, on the other hand, helps prepare data for ML algos in such a way that results will come (or not) within the boundaries of reality and hopefully stability. I am trying both approaches for some materials science goals of mine and am curious to see what happens, now that powerful hardware is cheap enough on the cloud to put some ideas to work. Side point is all this was just impossible five years ago, so I am grateful and excited to have this opportunity.
5
bluetwo 1 day ago 0 replies      
Was kind of hoping for some examples of novel applications no one has thought of. :-)
6
lngnmn 1 day ago 0 replies      
Surprisingly good and sane paper, without all that hipster's bullshit.

The emphasis on the quality of the training data and, most importantly, on the evaluation and careful choice of the heuristics on which the model is to be built, is what makes the paper sane.

There is no shortage of disconnected-from-reality models based on dogmas, while, it seems, there is an acute shortage of models properly reflecting particular aspects of reality.

Data and proper, reality-supported heuristics (domain knowledge) are the main factors of success. Technical details and particular frameworks are of the least importance.

This, BTW, is why it is almost impossible to compete with megacorps - they have the data (all your social networks) and they have the resources, including domain experts, without whom designing and evaluating a model is a hopeless task.

24
File Format Posters github.com
313 points by dcschelt  2 days ago   45 comments top 15
1
digikata 2 days ago 1 reply      
Reminds me of the MPEG-2 transport stream poster: http://in.tek.com/poster/mpeg-poster-dvb

If he runs out of file formats, he could move on to protocols...

2
barsonme 2 days ago 4 replies      
edit: I just noticed the author has a link to order prints from him/her, that's definitely the more polite option: http://www.redbubble.com/people/ange4771

It also seems to be less expensive than options like Office Depot, too.

Does anybody have any suggestions on how to get these printed as full-sized posters?

3
woliveirajr 2 days ago 0 replies      
> https://github.com/corkami/pics/blob/master/binary/CryptoMod...

This one is great. Nothing like using crypto wrong so that it becomes useless.

4
jwcrux 2 days ago 1 reply      
I'm a big fan of these posters! I even made something similar to show the format of the Tor consensus [0]

[0] http://jordan-wright.com/blog/images/blog/how_tor_works/cons...

5
chillingeffect 2 days ago 4 replies      
Back before 2000, it really was important to know file formats. We didn't use libraries. We looked up the formats in books and implemented fresh code every time. I prided myself on having memorized most of the .wav header, enough that I didn't need a reference. Then I learned .fig. Then I worked on understanding .jpg.

Nowadays, with widespread APIs, the file formats' significance is almost irrelevant! In theory, only a single person in the world needs to know any given file format. Everyone else can use the library that person has written.

my how the world changes :)

6
jug 2 days ago 1 reply      
What! I always thought .SWF was for "ShockWave Flash", not Small Web Format. Ha, a bit late to learn though.
7
NuSkooler 2 days ago 0 replies      
This is excellent, thanks a lot for sharing!
8
westmeal 2 days ago 0 replies      
Thank you so much. I need to write a program that creates png files from arbitrary data so this will certainly come in handy!
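In case it helps, a PNG built from arbitrary bytes needs surprisingly little: the 8-byte signature plus length/type/data/CRC chunks. A rough pure-Python sketch (the width, grayscale format and zero-padding are arbitrary choices for the example):

    import struct
    import zlib

    def _chunk(chunk_type, data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, CRC of type+data.
        return (struct.pack(">I", len(data)) + chunk_type + data
                + struct.pack(">I", zlib.crc32(chunk_type + data) & 0xFFFFFFFF))

    def bytes_to_png(payload, width=256):
        height = -(-len(payload) // width)                # ceiling division
        payload = payload.ljust(width * height, b"\x00")  # pad the last row
        # Each scanline is a filter byte (0 = None) followed by `width` pixels.
        raw = b"".join(b"\x00" + payload[row * width:(row + 1) * width]
                       for row in range(height))
        ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)  # 8-bit grayscale
        return (b"\x89PNG\r\n\x1a\n"
                + _chunk(b"IHDR", ihdr)
                + _chunk(b"IDAT", zlib.compress(raw))
                + _chunk(b"IEND", b""))

    if __name__ == "__main__":
        with open("noise.png", "wb") as f:
            f.write(bytes_to_png(bytes(range(256)) * 64))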
9
dluan 2 days ago 0 replies      
It would be awesome to have a file poster of itself. For when one day we run out of electricity and hand-translate bits.
10
rinon 2 days ago 0 replies      
We have two of these prints up in our office. I highly recommend them, even if just as decoration.
11
40acres 2 days ago 2 replies      
Great stuff but the font is comical.
12
oever 2 days ago 0 replies      
Awesome! Where can I buy the book?
13
ardivekar 2 days ago 0 replies      
> gif.png

This made me chuckle.

14
anjc 2 days ago 0 replies      
Very cool
15
billdybas 2 days ago 0 replies      
Wow! These are pretty cool.
25
High prevalence of diabetes among people exposed to organophosphates in India biomedcentral.com
258 points by aethertap  1 day ago   93 comments top 14
1
yomly 1 day ago 9 replies      
This headline explains my general aversion to "chemicals". This, despite the fact that everything is a chemical and that we are all little chemical machines.

The human physiology is unfathomably complex and the advent of synthetic chemistry has meant that we are now exposed to new molecules which have arisen at a rate tens, hundreds of thousands of years too early for our bodies to evolve to accommodate for them. Our exposure to these chemicals is also incredibly opaque: even eating "clean" by eating fruit and veg exposes us to a multitude of chemicals that come along the pipeline including fertilisers, pesticides and preservatives.

Nature is exquisitely sensitive to chemistry - I recall reading that natural systems have evolved to exploit and dispatch behaviour based on the isotopic composition of carbon-based molecules: naturally synthesised molecules also have a different isotopic profile to artificially synthesised molecules. For the record, Carbon-13 represents ~1% of the natural isotopic abundance.

If something as granular as the isotopic distribution of elements is important to physiological systems, how can we be so complacent as to constantly pile chemicals into every aspect of our lives?

Businesses will wantonly and irresponsibly use any method to increase their bottom lines and it falls to regulators to moderate this behaviour. As an example, I recall McDonald's doping their chip oil with a known toxic organic chemical to lower the rate of thermal decomposition of their oil. This is something they could as easily avoid by replacing their oil more often, but this is costly: they instead defer this cost onto our health by exposing us to unnecessarily dangerous chemicals.

In my opinion, the FDA's (or indeed global regulators') thresholds for the use of chemicals are not stringent enough - humans are living longer; how do we know that prolonged exposure to any of these individual chemicals (let alone the cocktail of all of them) over a 50-100 year period is worth the risk?

For another anecdote of irresponsible chemical usage: the onset of lung cancer through smoking underwent a stepwise increase after the tobacco industry started using phosphate fertilisers to increase their crop yield. A side effect of the fertilisers was to enrich the soil in radium, which decays down to Polonium-210, an alpha source of Russian-assassination fame. Studies have been done characterising the sievert profile of tobacco leaves, highlighting this risk, but no action has been taken against the tobacco industry to mitigate it.

2
indogooner 21 hours ago 0 replies      
From the conclusion: "Hence, rather than searching for other chemical alternatives, promotion and development of traditional self-sustainable, nature-based agricultural practices would be the right approach to feed this world."

Living off organic produce is not possible (at least in India, as of now). The farmers do not earn much and have debts to pay off. The only way they know of saving crops is using subsidised fertilizers provided by the government. Over the years, indiscriminate use of pesticides has increased. In fact, the availability of urea was a poll issue in the national election in some parts of India. [1]

[1] http://www.financialexpress.com/opinion/neem-coated-urea-why...

3
SCAQTony 23 hours ago 0 replies      
...and 36 of them are in use within the United States.

Emphasis below on chlorpyrifos which Trump's EPA took off the EPA's banned list. EPA bulletin written before Trump took office:

"...Thirty-six of them [organophosphates] are presently registered for use in the United States, and all can potentially cause acute and subacute toxicity. Organophosphates are used in agriculture,homes, gardens and veterinary practices; however, in the past decade, several notable OPs have been discontinued for use, including parathion, which is no longer registered for any use, and chlorpyrifos, which is no longer registered for home use. ..."

https://www.epa.gov/sites/production/files/documents/rmpp_6t...

4
eni 1 hour ago 0 replies      
The original title of the article: "Gut microbial degradation of organophosphate insecticides-induces glucose intolerance via gluconeogenesis"

Why is the title on HN edited to make this about "India"? Is this finding not applicable to people elsewhere? Or did other parts of the world stop using organophosphates?

5
firasd 1 day ago 3 replies      
The mechanism they pinpointed is illustrated in "Figure 7":

OPs (star) enter the human digestive system via food and are metabolized into acetic acid (trapezoid) by the gut microbiota (oval). Subsequently, acetic acid was absorbed by the intestinal cells and the majority of them were transported to the liver through the periportal vein. Eventually, acetic acid was converted into glucose (hexagon) by gluconeogenesis in the intestine and liver and thus accounts for glucose intolerance.

So the pesticide is eventually converted into glucose, which has the same effect as if you were eating too much sugar/carbs.

6
tudorw 21 hours ago 0 replies      
Farmers working with sheep dip chemicals have been studied:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2078460/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3366364/

http://www.fwi.co.uk/livestock/op-sheep-dip-illness-new-deta...

http://www.telegraph.co.uk/comment/11561630/That-sheep-dip-p...

https://www.theguardian.com/environment/2015/apr/20/revealed...

There is also 'Genetic variation in susceptibility to chronic effects of organophosphate exposure' http://www.hse.gov.uk/research/rrpdf/rr408.pdf

Gulf War Syndrome has also been studied as a possible effect of close-range exposure to organophosphates in pesticides and insect repellents.

7
spencermountain 1 day ago 2 replies      
to summarize, crop pesticides are converted to glucose internally, causing diabetes.

a pretty big plot twist for a dire problem in global health, and a find that resembles a 21st-century Silent Spring.

8
notadoc 22 hours ago 1 reply      
The primary reason most people I know eat organic is to avoid pesticides and herbicides.
9
jlebrech 5 hours ago 0 replies      
And of course the rise of the standard American diet in the third world has nothing to do with it?
10
curtis 23 hours ago 0 replies      
I wonder if the people most likely to be exposed to organophosphates in India are also the people most likely to be living almost entirely off of rice. Did the study do a sufficiently good job eliminating obvious confounding factors?
11
mtw 20 hours ago 0 replies      
It's not just diabetes - exposure to pesticides increases the risk of suicide, lymphoma, ALS, and congenital anomalies, and reduces fertility. There is a solid case for choosing organic foods. See a summary of the risks here: http://outcomereference.com/causes/77
12
porker 1 day ago 2 replies      
Misread the title as "..linked to gluten intolerance". Was hoping it would shine a light on that increasing.
13
mtdewcmu 16 hours ago 0 replies      
I was always suspicious of the diet/lifestyle explanation for diabetes. It's conveniently unfalsifiable, and it's obnoxiously paternalistic and moralizing.
14
salesguy222 1 day ago 5 replies      
Neato, how do we avoid exposure to that pesticide?
26
Textbook manifesto (2016) greenteapress.com
341 points by Tomte  1 day ago   147 comments top 37
1
lvh 22 hours ago 6 replies      
In many European countries, this problem was resolved by what I feel is mostly student pressure. Our student union (for lack of a better word) owned printing equipment and worked with most professors to do exactly what's suggested in this article: most professors wrote their own books (not 140 pages, though). Most of my textbooks were between 2 and 7 EUR, which I'm led to believe is approximately at cost. Occasionally, a particular textbook was "recommended", but there would always be ample library copies available, and often you wouldn't _really_ need it. I'd have about 4-6 courses per semester, so I'd spend maybe 25-30 EUR on our own textbooks. Occasionally I'd have to shell out for a traditional textbook, and that would utterly dominate that semester's materials budget.

The future's already here, it's just not evenly distributed.

2
hackermailman 1 hour ago 0 replies      
The two biggest universities in my city have some sort of publishing agreement where they can print and bind the relevant textbook material to give to students for free, and when possible they use open textbooks (https://open.bccampus.ca/), though there is a government grant paying professors to maintain the texts.

The best thing about the open textbook site is other professors and TAs review the books like this Precalculus example https://open.bccampus.ca/find-open-textbooks/?uuid=2fdb8a19-...

3
pcmonk 23 hours ago 7 replies      
A lot of people tend to harp on the "textbooks are too expensive" issue, and I think this correctly identifies one of the problems: textbook price is not an issue to many professors. Unfortunately, there's no actual solution to that presented.

> If you can't find one, write one. It's not that hard.

I've used three or four textbooks written by my professor, and I can't say the quality was all that great. Considering that the set of professors who currently choose to write their own textbooks probably skews toward professors who are good at writing textbooks, I'm not super high on this plan.

> Students: You should go on strike. If your textbook costs more than $50, don't buy it. If it has more than 500 pages, don't read it. There's just no excuse for bad books.

Many students already do this. It's not uncommon for students to not buy a single textbook in a semester. In fact, the professors that do care about textbook price generally make textbooks optional. It turns out that's a lot easier than writing your own textbook and somehow selling it for cheap.

4
ziikutv 15 hours ago 0 replies      
It's funny. This was my exact shower thought this morning.

I find books overly verbose, and too formal. I do not think there is a need to dumb-down technical content. I also disagree with having a page limit as it would likely lead to omission of topics that might be otherwise useful.

I think the publishing industry has to change or be weeded out by self-publishers and video makers. I have learned many topics of my courses through Slide decks and Youtube videos to avoid reading.

My plan was to start re-learning everything from uni and write informal tutorial snippets as blog posts about the topics, perhaps compiling them into an open-source book. I'll keep you guys posted so you folks can blindly upvote my fancy submission titles =D

Addendum: I'd really like to plug Brian Douglas' YouTube playlists on control theory. AMAZING. Got an A thanks to him.

5
TheCowboy 22 hours ago 2 replies      
I agree with the main point that students should read and understand textbooks. But disagree with some assumptions and points.

1. Many textbooks are written to be understood, but they vary a lot by field and class level. Generally, I think lower level textbooks best meet Downey's standards.

As you get into what is junior/senior (300-400 level) classes, there is not always a neat textbook available.

2. I disagree with 10 pages per week per course. I think the expectations of what students can read per week are too low. I attended a couple different schools, and one has a reputation of having high expectations of students, and most students tend to rise up to the challenge. I think most professors don't expect enough, and what a degree represents is watered down.

I do feel strongly that busywork and pointless readings should be avoided. Pages per week should not be some sort of metric for learning, but 10 well-written information-rich pages a week per course is not usually going to be a challenge.

Nationally, most students don't even read much of what is assigned, so telling students to not read a book if it has 500 pages won't change the status quo.

3. The idea that writing a textbook is easy is crazy. Even if you ignore the other requirements put upon professors, it is time-consuming to do it right. Even short niche books, think O'Reilly type stuff, take time to produce.

6
TeMPOraL 10 hours ago 0 replies      
Here's a trick we used when I was a student: we had an FTP server shared by students of all years of our program, and we put there a copy of every required and recommended textbook, as well as slides from the lectures and every other piece of material that was relevant to our classes.

Honestly, I think this kind of setup is something universities should provide for their students. We live in the 21st century; it's not that much work to provide PDFs (with restricted access, if needed, because copyright blah blah).

7
esfandia 20 hours ago 0 replies      
I just implemented the reading quiz idea this term, and I thought that it went really well. Give the students a manageable chunk of reading material (in my case, the material came from various sources, not a single textbook), and give them an offline quiz to test their reading comprehension. The quizzes were graded by a TA, but the weight was quite small; small enough not to matter if they cheated (and cheating won't help them in the final exam. Aside: is it cheating if they didn't read the material but just hunted for the answers by skimming?) but enough to provide extra incentive to read.

In class we go over the answers to the quiz. I don't post the answers (the TA will have provided them the feedback they need when grading); rather we make it an interactive session. I answer questions the students have, we go over examples, I supplement the reading with slides if need be. Effectively, a flipped classroom.

This was done more out of necessity: first time teaching the course, no proper textbook (and in a quickly changing tech landscape for the topic at hand), lack of confidence in my own understanding of the material (I also tried gathering student questions beforehand so I could investigate them offline and come to class prepared to answer them), but now I think I'm going to stick to this way of teaching this course in the next installment next year.

8
manaskarekar 23 hours ago 1 reply      
Here's the list of free books from the website: http://greenteapress.com/wp/
9
ez_psychedelic 12 hours ago 0 replies      
I recommend the "No Bullshit Guide to Math and Physics". It is about 300 pages, but goes along with what this article is about. This book takes such a different approach (it combines math with engineering and physics principles) so as to give validity to the maths you're reading. Also, it is written in a casual tone. Highly recommend it.
10
harry8 17 hours ago 1 reply      
Richard Feynman's adventures in textbooks 50 years ago, as told in "Surely You're Joking", are still instructive. It's pretty embarrassing that it is still so bad. More power to Downey; support him. Perfect is the enemy of good and the ally of the status quo, which is horrible.
11
larrydag 2 hours ago 2 replies      
I'm thinking of writing an ebook that I can use for teaching a course. I would like to see examples of well-written textbooks. Are there good examples?
12
jimmaswell 21 hours ago 3 replies      
Textbooks are flat out unnecessary. The notes on the board should be enough to understand the material, and the teacher can either write their own problem sets or copy them from somewhere and put them online. There's just no excuse to require a textbook for a class - it means the teacher is unable to communicate the material effectively and needs the students to read it on their own, can't be bothered to write or copy homework sets, or is forcing students to buy the professor's own book out of greed, none of which should be seen as acceptable. If the department makes you have one, just don't use it (happened in a few of my classes). For classes that need some out of class readings like history or English, there's no excuse to make students buy books when the body of freely available, uncopyrighted work out there on the internet is so easy to access. Good example: a history class I had a few semesters ago where the primary documents were a simple downloadable .doc.

I've had lots of classes that worked like this, particularly my Calculus I/II classes where there was a textbook but homework from it was just suggested, not collected, and the lectures were entirely sufficient to understand the material and do well on the exams.

Beyond being a pointless scam, I'd go as far as to say textbooks make professors worse than they would be otherwise by letting professors use them as a crutch.

13
smoyer 5 hours ago 0 replies      
"Before long, the students learn that they shouldnt even try. The result is a 1000-page doorstop."

My oldest two are through college and learned that they shouldn't just purchase the list of books dictated by their classes. As stated by the article, many times the textbooks were not required to pass the test. If I had to estimate, I'd say they spent half as much as their fellow students on textbooks.

14
sbuttgereit 11 hours ago 0 replies      
At the school I went to, many of the classes had no formal textbooks.

You bought a 50-100 page packet of not terribly dense text/examples; these were photocopies on plain old letter paper that were stapled together and pre-punched for three-ring binders. Each class you'd buy one of those per semester and they were developed in-house. That was it. Naturally, it wasn't always this way, but certainly for the basic classes it was exactly this way.

Note this wasn't any sort of engineering field and what they were teaching didn't have a lot of authors writing standard issue coursework to begin with, but it was great material that was very focused to the classes they were teaching. I still hold on to them, too: very concise and a nice reference if I ever need to brush up.

15
dharness 23 hours ago 4 replies      
While I agree with the content of this in general, I find books to be burdensome and would not like a course centered around them. I think a series of well crafted video lectures are a better medium for some people, myself included.

I also find that I can learn everything I need to in my Software Engineering program via a series of pointed google searches much quicker than reading a text. Most courses have 1 or more $100 books which are "required" but I haven't bought them in years.

What I /would/ like, is sample problems with solutions ;)

16
andrepd 14 hours ago 0 replies      
Much of this article comes across as basically complaining: "these books are long and these books are hard". Why, it's true that sometimes this is a valid criticism, but what about when the subject matter really is long and hard to understand? What then?
17
rocqua 22 hours ago 1 reply      
For high-level math courses, the best I've seen is a reader written by the professor, combined with an optional 1000 page tome.

The reader goes with the lectures, and is focused on the actual material of the course. The reader, combined with your notes, basically covers the lectures. Meanwhile, if you need another take on the material, or some wider context, the 1000 page tome is always there. This works especially well if the reader points to equivalent chapters in the tome.

18
benhill70 21 hours ago 0 replies      
As a student in my forties I have been appalled at the price and quality of many of my textbooks. The $300 worth of textbooks in one class could have been replaced with a few YouTube videos. The publishers know this cash cow is coming to an end due to piracy and online textbook rentals. Now they are charging less for textbooks but gouging students on the mandatory online components.
19
andrewwharton 15 hours ago 0 replies      
I'd like to see this philosophy applied to some of the open content out there already like the OpenStax textbooks [0]. For example, the Prealgebra text is 1152 pages in the PDF format.

I think there would be a huge amount of value in distilling these down to chapters which are 10-15 pages each instead of 100-150 pages each. Of course you would lose a lot of detail, but they could serve as a summary of 'this is the important stuff you need to know'. The expanded textbooks would serve as reference material if you want to go into more detail.

[0] https://openstax.org/

20
ziikutv 21 hours ago 1 reply      
Professors often do not have the choice to pick a book. They have to teach off of one recommended by the department.

More politics in the education industry.

21
tedmiston 17 hours ago 1 reply      
> Students: You should go on strike. If your textbook costs more than $50, don't buy it. If it has more than 500 pages, don't read it. There's just no excuse for bad books.

This is bad advice.

You need the book for reference or at least will do better with the book most of the time. If you want to stick one to the publisher, buy used.

Textbooks need a "microservices revolution". And not with these crappy interactive DRM-ridden e-textbooks with exercise codes... the experience with most of those is markedly worse than print books. We need more open content like webpages and journal articles. O'Reilly does it best. Textbook authors should follow / adapt their model.

22
banjodude321 23 hours ago 1 reply      
"Learning from Data" is a reasonable example of the type of textbook the author is asking for.

There is something to be said about the value of "reference" books, however. Maybe reference books shouldn't be used in classes, but there can be great value in a 1000 page book that has a complete discussion of everything you'd expect.

23
sitkack 22 hours ago 0 replies      
Allen B. Downey needs to be a MacArthur Fellow.

Most commenters here should re-read the article and internalize the body of work created.

24
ivan_ah 15 hours ago 0 replies      
I like the suggested price point of under $50. Perhaps I'd go even lower and require < $40. This is enough money to keep self-published authors motivated to maintain their books and write new ones, and also affordable enough for most students.

This is the approach I've been following with my MATH&PHYS and LA books, and I will continue to use with future titles.

I guess the ideal case for students would be OER, but then when everybody owns the book nobody is particularly invested in maintaining it and improving it...

25
Bioeye 21 hours ago 0 replies      
I've taken a class from Allen and used his books. In the context of his classes they are very good and the short readings can be useful, but, taken as a reference like many other textbooks, they don't do as much.
26
jdeisenberg 21 hours ago 1 reply      
I agree that textbook costs are exorbitant, and I use open source, online, or very low cost books when teaching at the community college level. I've been using the interactive version of the "Think Like a Computer Scientist" book when teaching the introductory programming course. The students still don't read the material, at least not before the lecture.
27
adamnemecek 22 hours ago 0 replies      
Also all CS books (ok, maybe not all but the vast majority) need to ship with code. To quote Linus, "talk is cheap. Show me the code".
28
dmitripopov 19 hours ago 0 replies      
Back in my student days there were really extensive textbooks that none of us read, and short brochures on the subject published by the university that were the real source of knowledge and of how to apply it in practice.
29
forkLding 23 hours ago 0 replies      
I think this is needed. Most courses have one or two textbooks which, compounded together with a full courseload, is a lot of pages. However, the full textbook is also never used; only several chapters are usually recommended reading, really defeating the point of buying the whole book.
30
whodywop 21 hours ago 0 replies      
I believe the earliest textbooks contained a gloss in the margin written by students which clarified the main text. It contained translations, notes, references, etc. This helped to circumvent the curse of knowledge whereby most authors have zero memory of their early misconceptions of the subject (a major reason why most textbooks are rubbish).

I think this ought to be reintroduced by major publishers -- new editions to contain copious annotations garnered from students who field-tested the previous edition, explaining how they conquered the parts that they found hard.

31
nabla9 22 hours ago 0 replies      
Many teachers use chapters from several books and their own material.

It should be possible to buy student textbooks by chapter and print your own book. Most cities with a college have a few high-quality printing services.

32
sghiassy 22 hours ago 0 replies      
Disagree - many students have different learning patterns. Textbooks are only one way to teach, and there are many different ways to learn
33
teekee 22 hours ago 3 replies      
"If you cant find one, write one."

What would be the best way to write a free book please? Any pointers? Experience?

34
itchyjunk 23 hours ago 3 replies      
I was just thinking about asking HN about free books that will get me started in Python. This seems to have quite a few [0]. Has HN read any of these, or does anyone recommend anything in particular?

(I went through learning python the hard way a few years back and have been slacking off)

----------------------------

[0] http://greenteapress.com/wp/

35
innocentoldguy 22 hours ago 0 replies      
I completely agree with the author's comment on shorter books. My biggest problem with instructional books in general is that they're filled with too much fluff. It isn't that I can't read 50 pages a week. I just don't want to, when the usable content could have been written in a page or less. While anecdotes and metaphors are great for inflating page count and price, they do little to help me understand a concept, and just become busywork, which I cannot abide.
36
danielbigham 21 hours ago 0 replies      
Amen. This author's thesis sounds pretty darn good to me.
37
mncharity 9 hours ago 0 replies      
> Choose books your students can read and understand.

A noble and audacious goal.

> If you can't find one,

Realistically assessed.

> write one. It's not that hard.

WTF?

Ok, I can see how this could be either plausible, or utterly absurd, depending on the domain.

> check whether they understand.

Err, does this mean "I think they didn't do too badly on the midterm"? Or daily quizzes and clicker questions? Or a grad student, with a focus on the field's education research, dedicated to running concept inventories and stats?

At least in college introductory science education, "check whether they understand" is hard, an area of active research, and historically, a cesspit of professorial self-deception.

> It's not that hard.

Let's draw a proton. With marker on whiteboard, as a circle (not hard). With an illustration app, as an arbitrarily-sized hard sphere with physically-bogus lighting (not hard). With code, as a gradient, based on the proton mass density curve (not hard, but did eat some hours).

Let's draw atomic nuclei. As balls of red and blue marbles (not hard). As gradients, post-processing from recently published ab initio density functional plots, when available (hard). Background: light nuclei are lumpy.

Ok, so let's aim lower.

Let's draw the Sun. As an arbitrarily colored circle (not hard). What about as a circle, with a color at least vaguely realistic? Demonstrably hard, as it's so rare. You likely can't ask your first-tier astronomy graduate student to do it.[1] Or almost any of the professorial authors of the many introductory astronomy textbooks.

Ok, so let's aim lower.

Last week I was reading an AP Chemistry curriculum standard. Towards the top, "atoms are conserved". Great. Later on, "atoms are neutral"[when charged, they're instead "ions"]. Okaaaaay. So are there any atoms on the right side of H + light -> H+ + e- ? The old "molecules aren't made of atoms, they're made from atoms" school. Two historical threads of definition. Left for students to reconcile, because that's obviously where the burden should lie. And this wasn't Pearson trash content, this was a curriculum spec (albeit a poor one). So what do you tell your kids to make it safe for them to take standardized exams?

"[N]ot that hard." I know wizzy education-focused MIT and Harvard professors who work really hard to raise some small bit of intro physics and biology content from wretched, to very-slightly-less-wretched.

Perhaps for some domains "not that hard" is true. And it helps if the objective is "no worse than the rest of the crap out there". And if "check whether they understand" means "ask a few clicker questions, and give a random quiz" instead of "systematically run formative misconception checks". But, wow. It so doesn't match the areas I'm most familiar with.

Perhaps the manifesto is missing some scope-of-applicability predicate?

[1] http://www.clarifyscience.info/part/MHjx6 "Scientific expertise is not broadly distributed - an underappreciated obstacle to creating better content"

27
ReactXP A library for building cross-platform apps microsoft.github.io
328 points by nthtran  2 days ago   115 comments top 26
1
vmarsy 2 days ago 5 replies      
> ReSub

> The Skype team initially adopted the Flux principles, but we found it to be cumbersome. It requires the introduction of a bunch of new classes (dispatchers, action creators, and dispatch events), and program flow becomes difficult to follow and debug. Over time, we abandoned Flux and created a simpler model for stores. It leverages a new language feature in TypeScript (annotations) to automatically create subscriptions between components and stores. This eliminates most of the code involved in subscribing and unsubscribing. This pattern, which we refer to as ReSub, is independent of ReactXP, but they work well together.

That's interesting; I wonder how this differs from Redux and the others.

I also wonder how navigation is handled. Is it easy to add React Navigation into the mix?

Clicking on Next while on https://microsoft.github.io/reactxp/docs/animations navigates to a 404
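For readers trying to picture the decorator-driven subscriptions described in the quote above, here is a minimal, self-contained TypeScript sketch of the general idea. The names and shapes below are illustrative inventions, not ReSub's actual exports, and the snippet assumes TypeScript's experimentalDecorators flag is enabled.

  // Illustrative only: a miniature decorator-driven store, not ReSub's real API.
  type Listener = () => void;

  class TinyStore {
    private listeners = new Set<Listener>();

    subscribe(fn: Listener): () => void {
      this.listeners.add(fn);
      return () => this.listeners.delete(fn);
    }

    protected trigger(): void {
      this.listeners.forEach(fn => fn());
    }
  }

  // While a component builds its state, any call to a decorated getter
  // registers a subscription on that store automatically.
  let currentBuilder: ((store: TinyStore) => void) | null = null;

  function autoSubscribe(_target: object, _key: string,
                         descriptor: PropertyDescriptor): PropertyDescriptor {
    const original = descriptor.value;
    descriptor.value = function (this: TinyStore, ...args: unknown[]) {
      if (currentBuilder) currentBuilder(this);
      return original.apply(this, args);
    };
    return descriptor;
  }

  class TodoStore extends TinyStore {
    private todos: string[] = [];

    @autoSubscribe
    getTodos(): string[] { return this.todos; }

    addTodo(text: string): void {
      this.todos = [...this.todos, text];
      this.trigger();
    }
  }

  // A component base class would wrap its state-building call like this, and
  // would also call the collected unsubscribers again on unmount.
  function buildState<S>(build: () => S, onChange: () => void): S {
    const unsubscribers: Array<() => void> = [];
    currentBuilder = store => unsubscribers.push(store.subscribe(onChange));
    try { return build(); } finally { currentBuilder = null; }
  }

  // Usage: reading getTodos() during the build is what creates the subscription.
  const store = new TodoStore();
  let state = { todos: [] as string[] };
  const rebuild = () => { state = { todos: store.getTodos() }; };
  state = buildState(() => ({ todos: store.getTodos() }), rebuild);
  store.addTodo("write less subscription boilerplate");
  console.log(state.todos);

The point the Skype team seems to be making is that the subscribe/unsubscribe plumbing disappears from components: the subscription is recorded as a side effect of simply reading the store.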

2
hoodoof 2 days ago 4 replies      
Despite explaining that XP stands for "cross platform", I still think that this is mis-named and many will assume it relates to one of Microsoft's biggest ever brand names with global recognition.

At first glance I decided not to read the article because I thought it irrelevant due to the XP.

3
roryisok 2 days ago 2 replies      
I would really love it if MS brought out some kind of .NET Core Electron alternative. XAML is nice enough to build UI with, .NET Core works across macOS and Linux, and the whole package wouldn't need an entire copy of Chromium for each install.
4
nathan_f77 2 days ago 2 replies      
I've been developing a cross-platform app with React Native, and react-native-web has been working really well. It took almost no effort to get my React Native app working in a browser. I might be wrong, but it looks like ReactXP is just an alternative to react-native-web, with some additional abstractions and conventions for stores, etc.

I'm not sure I want to switch to ReactXP. I really like being able to use the React Native Animated API on the web, and I can also use wrapper libraries such as react-native-animatable.
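For context on how little wiring that "almost no effort" usually involves: the typical react-native-web setup, as documented by that project, is to alias the react-native import to react-native-web in the bundler and mount the shared root component into the DOM. A rough sketch, in which the ./App path and the element id are assumptions:

  // index.web.tsx: mounts the same React Native root component in a browser.
  // Assumes the bundler aliases 'react-native' to 'react-native-web'
  // (e.g. webpack resolve.alias) and that ./App exports the shared root component.
  import { AppRegistry } from 'react-native';
  import App from './App';

  AppRegistry.registerComponent('App', () => App);
  AppRegistry.runApplication('App', {
    rootTag: document.getElementById('react-root'),
  });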

5
migueloller 1 day ago 0 replies      
I'm a bit skeptical this is needed at all. I would've much preferred if Microsoft contributed to the already popular React Native for Web [1].

React Native already supports iOS, Android, and UWP. To add browser support all you need is something like React Native for Web. I made a small presentation a few months ago that shows this. Here's the source code: [2]. Take a look at the web folder.

Libraries like React Navigation [3] have also been built to support any platform that runs React code. It looks like Microsoft built yet another navigation library [4].

Also, check out React Primitives [5]. It aims to define a set of primitives that work on any platform that can be used to build more complex components. This is highly experimental, but I'm liking the direction where it's going, a unified React interface for any platform.

In addition, ReactVR is a great example of how React Native primitives can be extended to new emerging platforms [6].

Finally, React Native for macOS [7] answers the question that many have here about building native apps for macOS without relying on Electron.

[1] https://github.com/necolas/react-native-web

[2] https://github.com/migueloller/HelloWorldApp

[3] https://github.com/react-community/react-navigation

[4] https://microsoft.github.io/reactxp/docs/components/navigato...

[5] https://github.com/lelandrichardson/react-primitives

[6] https://github.com/facebook/react-vr

[7] https://github.com/ptmt/react-native-macos

6
Kiro 2 days ago 2 replies      
Am I the only one who thinks this is a big deal? This basically means you can finally share your React code across platforms. It has always felt off that React and React Native are so similar, yet you can't use the same code for web and apps.
7
jfilter 2 days ago 4 replies      
>With React and React Native, your web app can share most of its logic with your iOS and Android apps, but the view layer needs to be implemented separately for each platform.

As far as I understand, you don't need to do this in react-native. Only when you want to use some special features. Or am I missing something?
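For what it's worth, ReactXP's answer is to make the view layer itself the shared part: components are written once against RX primitives, and the library maps them to DOM elements on the web and to React Native views elsewhere. A minimal sketch along the lines of the hello-world in ReactXP's docs; the API names are reproduced from memory, so treat them as approximate:

  // One view layer shared across web, iOS, Android and UWP.
  import RX = require('reactxp');

  const styles = {
    container: RX.Styles.createViewStyle({
      flex: 1, justifyContent: 'center', alignItems: 'center',
    }),
    greeting: RX.Styles.createTextStyle({ fontSize: 24 }),
  };

  class HelloApp extends RX.Component<{}, {}> {
    render() {
      return (
        <RX.View style={styles.container}>
          <RX.Text style={styles.greeting}>Hello from every platform</RX.Text>
        </RX.View>
      );
    }
  }

  // Each platform's entry file calls this once at startup.
  RX.App.initialize(true, true);
  RX.UserInterface.setMainView(<HelloApp />);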

8
i336_ 2 days ago 0 replies      
Ahem. I nearly thought Microsoft had partnered with ReactOS for a minute there :)

But XP is EOL, so "XP" is never going to be used in anything now, thinking about it.

I can wish...

9
josteink 2 days ago 2 replies      
So Microsoft now has two cross-platform application frameworks on offer:

- Xamarin with .NET. Mobile apps only.

- ReactXP. Which also supports regular web-applications

It will be interesting to see if these two end up competing, or if one will be ditched in favour of the other.

10
uncensored 2 days ago 0 replies      
With RN's and Expo's out-of-the-box support for Android/iOS, I find the missing piece is native support for a responsive grid layout. Until then, I've derived one based on the new support in RN v0.42 for relative dimensions. I've taken the liberty of correcting the mental model for the grid, to eliminate the incoherence that results from mixing an absolute column count with relative sizing (!): the developer specifies the grid column width as a percentage of screen size, and then specifies the width of a given column in the layout as a multiple of that percentage. This way the developer doesn't have to divide the screen size in pixels (assuming they even know it for the screen they're testing on, which is not always the case) by some arbitrary number of grid columns in order to get the width they desire per grid column (the indirect route). They can instead use visual intuition about relative sizes to define the column width directly as a percentage of screen width. I also found RTL support (for Hebrew/Arabic apps) generally lacking in RN, so I added RTL layout support to it.

https://github.com/idibidiart/react-native-responsive-grid
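To make the percentage-based mental model concrete, here is a tiny illustrative helper, not the library's actual API, that turns a base column width expressed as a percentage of the screen into pixel widths:

  // Illustrative only, not react-native-responsive-grid's API: derive pixel
  // widths from a base column width expressed as a percentage of the screen,
  // then size each column as a multiple of that base.
  import { Dimensions } from 'react-native';

  function columnWidth(basePercent: number, multiple = 1): number {
    const screenWidth = Dimensions.get('window').width;
    return screenWidth * (basePercent / 100) * multiple;
  }

  // With a 10% base, a "3-wide" column spans 30% of whatever screen the app is
  // running on; no pixel arithmetic against a fixed device is needed.
  const sidebar = columnWidth(10, 3);
  const content = columnWidth(10, 7);
  console.log(sidebar, content);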

11
hdhzy 2 days ago 1 reply      
Is there a screenshot somewhere to see what it looks like, or am I missing something?
12
roryisok 2 days ago 2 replies      
I'm missing something. How is this different from React Native, which is already supported on iOS, Android, web and UWP?
13
d0100 2 days ago 6 replies      
I really want React for desktop apps. Currently we are running into some performance issues with large datasets and spreadsheets, and being able to offer a performant desktop app would be ideal.

Right now we're considering using .NET and ReoGrid, since Qt is too expensive and we only target Windows anyways.

14
enobrev 2 days ago 0 replies      
> ReactXP currently supports the following platforms: web (React JS), iOS (React Native), Android (React Native) and Windows UWP (React Native). Windows UWP is still a work in progress, and some components and APIs are not yet complete.

Seems interesting. Hope they add linux support.

15
skdotdan 2 days ago 1 reply      
If they add Linux and Mac support, this will be HUGE.
16
idibidiart 2 days ago 0 replies      
It's not just different platforms but different screen sizes (and RTL layouts) that are the biggest challenge I found in developing React Native apps. To solve those challenges, I'd like to share something I've been working on: a responsive grid for RN with RTL layout support. It has reduced the time it takes to build relative-size and responsive layouts by a factor of 10, easily. It's based on previous similar work but with some radical changes and a few functional and usability enhancements. Looking for testers!

https://github.com/idibidiart/react-native-responsive-grid

17
srikz 2 days ago 0 replies      
Interesting to see if this will coexist with Xamarin or will target different types of apps.
18
mwcampbell 2 days ago 1 reply      
See also https://microsoft.github.io/reactxp/blog/2017/04/06/introduc...

It came out of the Skype team. But it doesn't look like they're currently using it in their universal Windows app, which is built directly on XAML.

19
nonsince 1 day ago 0 replies      
This name is _really_ confusing in the presence of ReactOS
20
DenisM 2 days ago 0 replies      
I can't find any instructions on how to try any of the samples: supported host platforms, prerequisites, downloading the toolkit, building the samples, etc.
21
matt_lo 1 day ago 0 replies      
At first when I landed, I thought this was another Facebook tech site since the template was reused from the old site of Jest and React Native.
22
Roritharr 2 days ago 2 replies      
Having macOS in there would have made it perfect.
23
skynode 2 days ago 0 replies      
This is simply awesome. Nothing more to add.
24
alekratz 1 day ago 0 replies      
At first I thought this was somehow related to the ReactOS project...
25
debt 2 days ago 0 replies      
Microsoft owns 5% of Facebook.
26
wsgeek 2 days ago 0 replies      
Embrace.... extend.... extinguish.

Be very careful when an OS vendor makes a move like this.

28
Open sourcing Sonnet: a new library for constructing neural networks deepmind.com
260 points by lopespm  3 days ago   38 comments top 10
1
gcr 2 days ago 2 replies      
This isn't "yet another completely different neural network library." This library just has some new layer types for TensorFlow.

Looks like there are some new layers for special kinds of attention RNNs, word embeddings, alternate implementations of spatial transformers, and so on. They also have another Batch Norm implementation that of course requires tons of fiddling to work properly, a classic tf staple :-)

As a machine learning environment, tf is so complex that different research groups have to define their own best practices for it. Using tf is like learning C++ where everyone learns a slightly different, mutually incompatible, but broadly overlapping subset of the language. We're just seeing a glimpse into DeepMind's specialized tooling along with reference implementations of the operations they use in their work.

This will be really useful for researchers who want to mess with deepmind's ideas/papers, but I'm a bit relieved that there isn't anything claimed to be fundamentally paradigm-shifting about this release.

2
mrdrozdov 3 days ago 0 replies      
There's definitely some examples of useful ops here:

  # 1. Define our computation as some op
  def useful_op(input_a, input_b, use_clipping=True,
                remove_nans=False, solve_agi='maybe'):
      # ...

3
drej 3 days ago 5 replies      
Looks cool, but: "This installation is compatible with Linux/Mac OS X and Python 2.7."

Sad face.

4
dbcurtis 3 days ago 1 reply      
For us NN newbies, could someone do a comparison to Keras?
5
singhrac 3 days ago 0 replies      
> Making Sonnet public allows other models created within DeepMind to be easily shared with the community

I'm looking forward to this - DeepMind has a frustrating track record of not open sourcing their code. While it's their prerogative and I know of many valid reasons for doing so, the many other researchers who publish their (sometimes terribly hacked together) models have been incredibly helpful in verifying that their work, well, works.

6
vmsp 3 days ago 4 replies      
Where does this leave Keras? Wasn't it supposed to be the new high level interface to TF?

EDIT: Also, what about TF-Slim?

7
waleedka 2 days ago 1 reply      
Tip for the project lead: A quick way to lose half of the early adopters is to release a library without Python 3 support. Luckily, fixing this is easy and shouldn't take more than a day or two. And the ROI is high if your goal is wider adoption.
8
ReeSilva 2 days ago 0 replies      
It looks nice and I will certainly try it. But no support for Py3? Come on, guys, it's 2k17.
9
likelynew 2 days ago 0 replies      
Anyone knowledgeable about Sonnet here mind detailing how it is different from Keras?
10
qeternity 2 days ago 1 reply      
As a startup that's in the midst of hiring, I can't wait to see how quickly CVs get updated to include this.
29
Ultima VI filfre.net
261 points by doppp  2 days ago   104 comments top 21
1
gavanwoolery 2 days ago 5 replies      
Reminded me about another bit of info on Ultima 6 / Warren Spector:

Spector cites an amusing anecdote from Ultima 6's in-house testing: "on Ultima VI, which is kind of where I realized that all this improvisational stuff could really be magical. It was unplanned, kind of a bug. There was one puzzle where the Avatar and his party came up on one side of a portcullis and there was a lever on the other side of the portcullis that you had to flip to raise the portcullis and keep on making progress. I watched one of our testers, a guy named Mark Schaefgen, playing in that area. And he didn't have the telekinesis spell, which was the way to get past that portcullis. I was sitting there rubbing my hands together going oh ho ho, he's screwed, he can't do it.

He had a character in his party named Sherry the Mouse. You can probably see where this is going. The portcullis was simulated, and here the air quotes are around simulated, simulated enough that there was a gap at the bottom that was too small for a human to get through, but not too small for Sherry. He sent Sherry the Mouse under the portcullis, over to the lever, she flipped the lever, and then the rest of the party went through. And I fell on the floor. At that moment I just said to myself, this is what games should do. We should start planning this, not having it happen as a bug. That was where I realized this was really powerful."

It was things like this that make the Ultima series stick in my head to this day. :)

2
scott_s 2 days ago 4 replies      
This paragraph resonated with me:

The complexity of the world model was such that Ultima VI became the first installment that would let the player get a job to earn money in lieu of the standard CRPG approach of killing monsters and taking their loot. You can buy a sack of grain from a local farmer, take the grain to a mill and grind it into flour, then sell the flour to a baker or sneak into his bakery at night to bake your own bread using his oven. Even by the standards of today, the living world inside Ultima VI is a remarkable achievement, not to mention a godsend to those of us bored with killing monsters; you can be very successful in Ultima VI whilst doing very little killing at all.

I got into western RPGs only recently - I played only JRPGs on the SNES and then later consoles. My first western RPG was Mass Effect 2, and since then I played ME3, Dragon Age: Inquisition and Skyrim. When playing Skyrim, I realized that the wolf pelts I was accumulating by killing wolves as I walked the countryside could be smithed into leather armor! That leather armor would fetch considerably more money when sold than wolf pelts.

My first thought: I found a cheat to more money! My second thought: I found a business.

3
santaclaus 2 days ago 1 reply      
The sequel, VII, and VII part 2, are singular achievements in terms of world building. The level of detail that went into NPC schedules, interactions, etc., down to the fact that you can do mundane tasks with no bearing on the actual game, like baking bread, was pretty damn cool. The recent Witcher game might come close, but I'm still jonesing for some RPGs on VII's level.
4
phodo 2 days ago 2 replies      
Amazing game. While on vacation last week, I was eating at a restaurant overlooking the deep blue ocean and the background music playing was the Ultima theme song. I seemed to be the only one who recognized it, and right there and then I proudly basked in a glorious solitary moment of radiant geekiness and nostalgia as I thought of Shamino, Iolo and all the rest of the characters that made up that amazing place called Britannia.
5
sbierwagen 2 days ago 1 reply      

 The creepy poster of a pole-dancing centaur hanging on the Avatar's wall back on Earth has provoked much comment over the years
Someone dug up the original art of that poster: http://ultimacodex.com/2015/10/remember-that-centaur-poster-...

6
bmurphy1976 2 days ago 0 replies      
Oh man I loved this game. This game may have single handedly set me on my career in software development. I'd played many many games before, but this one really opened my eyes to the possibilities that computers offered.

I just finished a recent playthrough, no more than six months ago! The game holds up really well. There are obvious shortcomings compared to modern games, and it could be a hard slog for younger generations who are used to a more polished product, but if you are looking for a good bit of nostalgia, U6 is hard to beat.

For comparison I also tried re-playing Bard's Tale 3 recently. I wasted many hours of my childhood with that game. Frankly, I'm amazed at how poor and awful a game it was, and I just couldn't stick with it.

7
smacktoward 2 days ago 1 reply      
Since these articles on gaming history by Jimmy Maher consistently get voted up to the front page of HN, it may be worth mentioning that he has a Patreon where you can support his work here:

https://www.patreon.com/DigitalAntiquarian

8
godmodus 2 days ago 2 replies      
This is a strange way to implement a text editor.
9
outworlder 2 days ago 0 replies      
>On the evening of February 9, 1990, with the project now in the final frenzy of testing, bug-swatting, and final-touch-adding, he left Origin's offices to talk to some colleagues having a smoke just outside. When he opened the security door to return, a piece of the door's apparatus (in fact, an eight-pound chunk of steel) fell off and smacked him in the head, opening up an ugly gash and knocking him out cold. His panicked colleagues, who at first thought he might be dead, rushed him to the emergency room. Once he had had his head stitched up, he set back to work.

Hah. That's how you are able to kill Lord British in Ultima VII. I had never understood the reference, until now.

10
elif 2 days ago 2 replies      
the latest game in this series, Shroud of the Avatar (still in pre-release), is having a free-play weekend this weekend. Even though it's not "released" yet, it's a full game, very playable and enjoyable.

https://www.shroudoftheavatar.com/?page_id=69568

11
cocktailpeanuts 2 days ago 2 replies      
Who here came thinking it's a new modern Vim alternative?
12
syncsynchalt 2 days ago 0 replies      
This was my jam for all of middle school. Thanks for posting this article!
13
nsxwolf 2 days ago 2 replies      
The projection they used is just super weird.
14
jdright 2 days ago 6 replies      
Best game series ever, with Ultima VI, VII, VIII and Online possibly being the best games ever.
15
lokedhs 2 days ago 1 reply      
I remember looking at the Ultima games at the time as something interesting that I'd like to spend time on, but I never got into them. I guess it's because I was always into faster gaming experiences.

I never thought that I would really be able to enjoy any RPGs, but recently I've started playing them. I'm currently working my way through Tales of Zestiria and having a great time with it.

I would like to give the Ultima games a try. Which one should I start with? I'd like one that is somewhat easy to get into.

16
dewiz 2 days ago 0 replies      
I remember finding a casino on one of Ultima 7's islands. I made so much gold at the roulette table that storing it and carrying it around Britannia became a problem.
17
bertlequant 2 days ago 0 replies      
How I miss my shard over 56k
18
beders 2 days ago 0 replies      
This was an awesome awesome game. Unimaginable how I played that for so many hours on such a tiny tiny screen :)
19
m3kw9 2 days ago 0 replies      
Use glass sword on lord British
20
WebYourMind 2 days ago 0 replies      
This brings back so many memories! Awesome Game!
21
artur_makly 2 days ago 0 replies      
The top game of my teenage life.
30
Official list of phoned-home info revealed by Microsoft theregister.co.uk
271 points by frik  3 days ago   208 comments top 22
1
uranian 3 days ago 6 replies      
Most striking for me is not only access to your documents, but also:

>events generated by the operating system, and your "inking and typing data."

Sounds like a keylogger virus, but built into the OS? Is this for real?

2
darrmit 3 days ago 1 reply      
For those thinking Enterprise and/or Education may be better, it's only better if you're using it in an environment where privacy settings are enforced via Group Policy or some other method. Standalone (like I'm running it) is really not much better than Pro unless you go through and manually intervene.

For example, the default telemetry level is "Enhanced":

"The Enhanced level gathers data about how Windows and apps are used and how they perform. This level also includes data from both the Basic and Security levels. This level helps to improve the user experience with the operating system and apps. Data from this level can be abstracted into patterns and trends that can help Microsoft determine future improvements.

This is the default level for Windows 10 Enterprise and Windows 10 Education editions, and the minimum level needed to quickly identify and address Windows, Windows Server, and System Center quality issues." [1]

On a fresh install of Windows 10 Enterprise I still have to manually disable updates by disabling/setting permissions on scheduled tasks for updates and I'm still prompted for things like "Use OneDrive!". Cortana is also enabled by default.

[1] https://technet.microsoft.com/en-us/itpro/windows/configure/...

3
rubatuga 3 days ago 3 replies      
One Windows 10 version that many people are ignorant of is Windows 10 Education, which is based on the Enterprise edition. It has the ability to disable almost all data-collecting/advertising features. Cortana doesn't even exist on this version. If you can grab this, and most university or college students should be able to for free, it'll be a big improvement in privacy.

Edit: from what I recall, Microsoft stated that advertising didn't have a place in education, or something to that effect

4
rl3 3 days ago 5 replies      
>Engineers, with permission from Microsoft's privacy governance team, can obtain users' documents that trigger crashes in applications, so they can work out what's going wrong. The techies can also run diagnostic tools remotely on the computers, again with permission from their overseers.

So in other words: engineering access to your personal documents (and computer) is mediated by a group of people who also shouldn't have access in the first place. Got it.

When I close my eyes, it's almost like I can vividly picture the crappy NSA PowerPoint slides that must exist, detailing "Windows telemetry exploitation" or some such. At the very least, the information has to be incredibly useful for targeting purposes.

5
yAnonymous 3 days ago 4 replies      
Wouldn't half of that make it illegal to use for government agencies in many countries?

They're one config error away from sending classified data to Microsoft.

6
us0r 3 days ago 1 reply      
Microsoft is absolutely out of control with this shit. I was recently flipping through BI articles and came across this [0]. "Using data from millions of its subscribers"... "The findings come from people who use Microsoft Word and/or Outlook." WTF? Sure enough, I opted out of telemetry, but that apparently doesn't include the content of business documents and email. 7 clicks to find that option. I guarantee you 99% of Office 365 users have no idea this is happening.

Microsoft customers aren't getting scroogled, they are getting straight fucked. Not only are they slurping everything imaginable up, but actual people are going through the data and doing stories on Business Insider.

My problem is I actually like their products. I'm cheering for the day the EU (the US won't do anything, so sadly I have to cheer for a foreign government) wakes up and slaps them around. Hopefully it's hard enough to get them to change their ways.

http://www.businessinsider.com/microsoft-data-the-most-confu...

7
vadansky 3 days ago 5 replies      
If you absolutely have to use Windows (like me), is it possible to block all the telemetry at the router level, maybe with some kind of hardware firewall? Do we have a list of IPs to blacklist?
8
zwarag 3 days ago 2 replies      
Isn't it a bit of a pathetic approach to tackle this topic with "How can we turn off this telemetry craziness?" Shouldn't we USE something that just does not scan your stuff at all, like Linux or something? Sure, it might not be as well rounded as Windows, but at least it will not penetrate your bum hole by design, and it will become well rounded eventually.
9
cogs 3 days ago 3 replies      
How does this compare with Apple? I haven't seen so many articles about what macOS slurps; is that because it is better behaved?
10
laurencei 3 days ago 6 replies      
Is there a place where people have put together a conclusive list/script to remove/turn off as much telemetry as possible?

I've seen various lists on reddit, HN etc - but they all seem to have different bits.

Perhaps a GitHub Gist that can be crowdsourced to help people ensure they get every single hidden option turned off?

11
pleasecalllater 3 days ago 1 reply      
Cool, so I will have backup data at the NSA and Microsoft. Do you think it's possible to recover my data from their servers easily?

Looks like 'my data' will soon be something strange, and suspicious.

12
pawadu 3 days ago 3 replies      
I can't comment on the article since I don't have any actual data on the subject (it doesn't seem they do either). But I do have a slightly on-topic question for HN readers using Windows in enterprise:

You can run popular Linux distributions off-grid and still receive security updates via a local package repository. Can you still do something like this with Windows? Does it require a special Windows 10 version?

13
Steeeve 3 days ago 0 replies      
The article doesn't have a full list, it has a set of examples. The technet pages linked in the article don't have a full set of information either.

I have the distinct impression that regardless of settings, some data gets sent.

I also have the distinct impression that the data will be for sale - the usefulness of a good portion of the data is questionable and some would only be useful for application developers.

What the list does have is enough information for adversarial parties to want to target it.

It's not that hard to stop using windows. More people should.

14
itaysk 3 days ago 0 replies      
I wonder how this compares to Android while using Google's services and accepting their terms (admittedly I allow everything by default). Is anyone aware of such analysis?
15
chj 3 days ago 1 reply      
The reason I installed Ubuntu on my laptop.
16
elorant 3 days ago 1 reply      
I'd like to know if there are any C# devs who moved to Linux, and how the whole experience has been.
17
retox 2 days ago 0 replies      
Some troubling sounding ones:

- All the physical memory used by Windows at the point of the crash

- URL for a specific two second chunk of content if there is an error

- Image & video resolution, video length, file sizes types and encoding

- URLs (which may include search terms)

- Ink strokes written, text before and after the ink insertion point, recognized text entered

- Time and result of each connection attempt (WiFi)

- Mobile Equipment ID (IMEI) and Mobile Country Code (MCCO)

- Whether the user clicked or hovered on UI controls or hotspots

18
AdmiralAsshat 3 days ago 1 reply      
Useful utility I remember from a few years ago: https://www.safer-networking.org/spybot-anti-beacon/

I mostly used it to block the Windows 7 telemetry that they backported.

I don't know how up-to-date they're keeping it, though. I fear Microsoft is adding more hooks faster than Spybot can block them.

19
itchyjunk 3 days ago 0 replies      
Ahh, the "Relevant Ads" button. No matter which way it's turned, you still get ads. I am tempted to ask "Is this button broken? /s" but I know it's a feature and not a bug.
20
Clownshoesms 3 days ago 0 replies      
Privacy journey. It'll take a while to purge that tripe. Makes me feel sick thinking of the corporate weasel on the end of the post.
21
frik 3 days ago 1 reply      
The original title of this post was "The sheer amount of data Windows 10 sends from your PC".

Which was shortened from the article title "Put down your coffee and admire the sheer amount of data Windows 10 Creators Update will slurp from your PC"

But now the HN title got changed to "Official list of phoned-home info revealed by Microsoft", which is misleading, or let's say down-plays the whole story. The story is bigger than yesterday's HN story; it paints a not-so-nice picture of what really happens.

22
Sir_Cmpwn 3 days ago 3 replies      