hacker news with inline top comments    29 Mar 2017 Best
The House just voted to wipe out the FCC's landmark Internet privacy protections washingtonpost.com
833 points by blazingfrog2  17 hours ago   441 comments top 58
pnathan 16 hours ago 14 replies      
This, right here, is the consequence of the withdrawal from politics many geeks advocated very strongly in an earlier time. "Everything is corrupt, it doesn't matter"... turns out to only be a viable philosophy when things mostly work well enough.

What we have in protections and freedoms were purchased through a ton of hard work by prior generations: the liberty to slack and think that it just works ok is a nice side effect of the prior sweat.

tomohawk 16 hours ago 16 replies      
Before getting all spun up, I'd dig a little deeper on the issue than what the WaPo does in this piece.

These regulations were only voted on late in 2016 and never went into effect. To do the regulations, the FCC reclassified the internet as basically ye olde telephone system, which then made it subject to their purview based on laws created in the 1930s. This is classic overreach. Congress never gave this authority to the FCC and is acting to put them back in line with the law.

It's pathetic that the WaPo used their platform to create more heat than light on this, by selective quoting. Here's a fuller quote from Rep. Blackburn that explains her position:

"The FCC already has the ability to oversee privacy with broadband providers," Blackburn explained. "That is done primarily through Section 222 of the Communications Act, and additional authority is granted through Sections 201 and 202. Now, what they did was to go outside of their bounds and expand that. They did a swipe at the jurisdiction of the Federal Trade Commission, the FTC. They have traditionally been our nation's primary privacy regulator, and they have done a very good job of it."

The lesson here really is that if the issue is really important, then get an actual law passed instead of trying to contort regulatory authority based on laws from the 1930s. The previous president could certainly have done this, but chose not to.

callcallcall 16 hours ago 5 replies      
Please do not complain into the echo chamber of comments here. Please take a moment to support the EFF, call your representatives, and speak to friends and family.

EFF: https://www.eff.org/

Find your reps: https://tryvoices.com/

vancan1ty 16 hours ago 3 replies      
Something that is not mentioned in the article is that the FCC regulations in question were passed in October 2016 and have never gone into effect. So, to be strictly accurate, the vote does not roll back any regulations which actually ever affected the internet.


doctorshady 16 hours ago 9 replies      
It's a bit disappointing to see that aside from a few abstained votes, everybody just chose to vote along party lines. Do these people just rubber stamp a bill because there's a D or an R next to it? Even if it meant more nay votes for the bill, I really wish we had representatives that vote based on critical thought rather than what their friends were doing.

I mean, as long as I'm dreaming too, we should give assembly programming kits to first graders.

vvanders 16 hours ago 2 replies      
As someone who grew up during the early days of the internet I don't know of any other way to describe this than utterly depressing.

The internet was supposed to be this bastion of knowledge, information and free exchange of ideas. Now it's just heading towards becoming another avenue for large organizations to monetize the individual.

davidf18 1 hour ago 0 replies      
The Register: Your internet history on sale to highest bidder: US Congress votes to shred ISP privacy rules


"Now, the really big question is: can your ISPs see the content of your online interactions? Can it read your emails? Can it read your search results? Can it store and search through the words you typed into a webpage?

And the answer is: yes, sometimes.

If the website you visit is not secured with HTTPS (meaning that any data between you and the website is encrypted), then your ISP can see exactly what you are doing."

Read the article for suggestions on how to protect yourself.

Also read: http://www.theregister.co.uk/2017/03/28/so_my_isp_can_now_se...

alistproducer2 15 hours ago 1 reply      
One aspect of this that is being missed is how well this illustrates the inability of the Democratic party to take advantage of an obviously advantageous situation.

It's a no brainer that most people would recoil at the idea of everything they do on the Internet suddenly being for sale. It would be super easy to come up with at least a dozen relatable nefarious use cases, stuff them into TV commercials and ads, and tie it all to the Republican party.

But nope, silence. It's almost like they don't want to be in power. It feels like I live in a de facto one-party state.

Gustomaximus 14 hours ago 1 reply      
If you want an idea to get mass movement against this:

Start some display campaigns injecting people's names and other personal information into ads. Have this follow people around the web. Even if the data is not taken from what has been allowed here, most people will find it creepy. Link the ads to a website explaining what's going on and how to contact their local member.

I suspect with a fairly reasonable spend you could get some strong resistance and media attention.

gwu78 16 hours ago 4 replies      
This thread may grow long and maybe turn to the topic of HTTPS. SSL with SNI exposes plaintext hostnames/domainnames on the wire for anyone to read, aggregate and sell, not to mention tamper with. It should be an optional extension. For many users it adds no benefit. For some users, it breaks their software and adds needless complexity. Now the privacy advocates have a reason to dislike it too. Just say no to SNI.
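For anyone who hasn't looked at the wire format the parent is describing: the SNI hostname travels unencrypted inside the TLS ClientHello, before any keys are negotiated, so any on-path observer can read it. A minimal sketch of the RFC 6066 server_name extension encoding (illustrative only, not a full TLS implementation):

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Encode an RFC 6066 server_name extension as it appears,
    unencrypted, inside a TLS ClientHello."""
    name = hostname.encode("ascii")
    # entry: name_type (0 = host_name) + 2-byte length + name bytes
    entry = struct.pack("!BH", 0, len(name)) + name
    server_name_list = struct.pack("!H", len(entry)) + entry
    # extension header: type 0x0000 (server_name) + 2-byte length
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def read_sni_extension(ext: bytes) -> str:
    """What any on-path observer (e.g. an ISP) can do: pull the
    hostname straight out of the bytes, no keys required."""
    ext_type, _ext_len, _list_len, name_type, name_len = struct.unpack("!HHHBH", ext[:9])
    assert ext_type == 0 and name_type == 0
    return ext[9:9 + name_len].decode("ascii")

wire = build_sni_extension("example.com")
print(read_sni_extension(wire))  # → example.com
```

This is why ISP-level observers can log which HTTPS sites you visit even when the page contents themselves are encrypted.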
kevinpet 10 hours ago 0 replies      
While I'd definitely like to see internet browsing records protected at least as much as library circulation records [1] and video rentals [2], as a fan of checks and balances, the mere concept of a regulatory agency passing "landmark" regulations on anything is troubling. Either that power is in the law giving regulatory authority to the agency, and hence, it shouldn't be called "landmark"; or the power is outside the scope of what Congress intended when enacting the law, in which case it's a bureaucratic power grab.

1. http://www.ala.org/advocacy/privacyconfidentiality/privacy/s...

2. https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act

marvindanig 16 hours ago 5 replies      
Before we jump in shock and talk about the political disaster happening in DC at the moment, I want to ask a question:

Is there a simple guide or steps that I can follow to make myself anonymous? I know there are Tor and VPNs; how can I go about setting them up?

alistproducer2 16 hours ago 3 replies      
Please share your VPN setups. I would like to have my VPN connection at the router level, if possible.

Edit: Here's sort of an answer to my own question: https://www.howtogeek.com/221889/connect-your-home-router-to...
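For anyone after a starting point, here is a minimal OpenVPN client config of the kind most router firmwares (DD-WRT, OpenWrt, Tomato) accept. The endpoint, port, and file paths are placeholders, not any real provider's values; substitute the config files your VPN provider ships.

```
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth-user-pass /etc/openvpn/credentials.txt
ca /etc/openvpn/ca.crt
cipher AES-256-GCM
verb 3
```

With this at the router level, every device on the LAN goes through the tunnel without per-device setup.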

Tepix 2 hours ago 0 replies      
So the GOP argues that it's unfair because streaming services and search engines can already collect this data and ISPs couldn't.

I don't understand how they fail to recognize that

a) ISPs see all of the sites you visit, and

b) many people can't choose between ISPs because there are only a few in their area.

It seems that for the GOP, as long as there is profit for corporations, they are willing to give up the privacy of the voters.

How is this different than the telephone company eavesdropping on your calls and selling the information gained to marketing companies?

harryh 16 hours ago 1 reply      
To all of you who are saying that it's now vital to use a VPN I have to ask:

Why weren't you running a VPN already?

This was a vote to head off the implementation of a regulation that hadn't yet gone into effect.

pcmaffey 16 hours ago 3 replies      
Welp, now there's a real market opportunity for 'open' ISPs. I would gladly pay more to a smaller ISP with slightly higher latency for guaranteed privacy.

SpaceX's planned satellite internet will hopefully fill this void for the world... until Elon dies and it's taken over by the evil ignoramuses of corporate greed.

quillo 16 hours ago 1 reply      
I would expect that this will have an unexpected (?) side effect of further weakening the capabilities of packet inspection by intelligence agencies through increased utilisation of VPN services, especially those outside of the US.

At face value this is a good thing for privacy, but I am concerned that when lawmakers realise their error they will just legislate themselves out of the hole by making access to VPN services harder.

slang800 16 hours ago 1 reply      
Isn't doing this type of data collection without consent already banned under the [Wiretap Act](https://www.law.cornell.edu/uscode/text/18/2511)? What part of these protections weren't redundant?
vhost- 16 hours ago 2 replies      
Doesn't this mean the government can basically buy user data through shell corps and bypass warrants altogether?
asimjalis 16 hours ago 3 replies      
Is there another side to this debate or is it really this black and white?
msutherl 16 hours ago 1 reply      
Any clarity re: this comment[1], which seems to suggest that things are not as they seem?

[1] https://news.ycombinator.com/item?id=13942989

> In June 2015, the FCC reclassified the ISP's as common carriers. Tada, the FTC rules no longer apply. So the FCC regulated them with roughly the same set of rules. Now they've undone this.

AndrewDP 16 hours ago 0 replies      
The underlying argument here is there is no difference between say Google and Verizon: the customer has to opt in (or pay) for both. And from a free market (aka conservative) economic perspective if this is a concern shared by the population, someone will offer it as a service that people will pay for (a VPN tax if you will).

This is an unfortunate example where government is not set up to address concerns of today's environment. They are trying to apply legal constructs of 20-50 years ago to a quickly changing age. And while you can argue whether the prior administration did the right thing legislating in this environment, the one thing they did was understand that access to the Internet should be a right as opposed to a privilege. Like education, access to 911, etc. As more services move exclusively online, this fundamental access question only becomes a greater concern.

If individuals aren't guaranteed access nor have any protections online, then we are heading into a very dangerous area (if the only way to lodge a claim against your internet provider is online, then they will know what you are doing).

virmundi 11 hours ago 0 replies      
So why is this necessarily bad? My understanding is that the Congress repealed a fiat control by the executive branch. They can now, if they are so inclined, enshrine in law, a more durable medium than agency policy, a freer Internet. Let's assume that the Republicans don't. Let's also assume that they make local municipal Internet or competition harder. The Democrats could get elected in 2018, at which point they could enshrine privacy. How is limiting the executive branch bad?
Slackwise 3 hours ago 0 replies      
Welp, time to pipe all port 80 and 443 traffic in my home through http://privateinternetaccess.com , via the OpenVPN config in OpenWRT.
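One rough way to do that kind of selective routing on OpenWrt is policy routing: mark packets headed to ports 80/443 and send marked traffic out the VPN tunnel. This is only a sketch with assumed interface names (br-lan, tun0) and an arbitrary table number, not a tested OpenWrt recipe:

```
# Mark LAN traffic headed to ports 80/443
iptables -t mangle -A PREROUTING -i br-lan -p tcp -m multiport --dports 80,443 -j MARK --set-mark 0x1

# Send marked packets through a routing table whose default route is the VPN
ip rule add fwmark 0x1 table 100
ip route add default dev tun0 table 100
```

Note that DNS (port 53) still leaks to the ISP under this scheme unless it is routed through the tunnel too.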
cwkoss 14 hours ago 0 replies      
So, how should we write a daemon that pings high-advertising-value domains to poison their dataset?
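As a sketch of what such a daemon might look like (the decoy URLs here are placeholders, the timing is arbitrary, and it's an open question whether this noise would meaningfully degrade an ISP's profile rather than just add to it):

```python
import random
import time
import urllib.request

# Arbitrary, illustrative list of "high-advertising-value" pages.
DECOY_URLS = [
    "https://www.example.com/luxury-watches",
    "https://www.example.com/mortgage-rates",
    "https://www.example.com/suv-reviews",
]

def pick_decoy(rng: random.Random) -> str:
    """Choose the next decoy URL to visit."""
    return rng.choice(DECOY_URLS)

def next_delay(rng: random.Random, mean_seconds: float = 300.0) -> float:
    """Jitter requests so the traffic doesn't look like a metronome."""
    return rng.uniform(0.5 * mean_seconds, 1.5 * mean_seconds)

def run_forever() -> None:
    """Main daemon loop: fetch a decoy, sleep a jittered interval, repeat."""
    rng = random.Random()
    while True:
        try:
            urllib.request.urlopen(pick_decoy(rng), timeout=10).read(1024)
        except OSError:
            pass  # decoy fetches are best-effort
        time.sleep(next_delay(rng))

if __name__ == "__main__":
    run_forever()
```

A real version would also need to mimic browser headers and follow-on requests, since a bare HTTP fetch is easy to filter out of a dataset.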
tehabe 16 hours ago 0 replies      
I wonder if this has any consequences for the US EU Privacy Shield agreement.
LeicaLatte 2 hours ago 1 reply      
What are the plans to anonymize the data? Are there any standards in the advertising industry for sharing such information?
CrippledTurtle 15 hours ago 1 reply      
Can anyone explain why, when this went through the Senate, it wasn't filibustered? I was under the impression that almost all controversial legislation had to pass the filibuster threshold, and since Democrats were united in opposition against it, I would have expected them to filibuster this. Was there some loophole preventing them from doing so, or did they not consider it important enough to filibuster?
callinyouin 16 hours ago 4 replies      
Does anyone know if this works retroactively? Is every data-hungry company soon going to know all of our past browsing behavior?
raverbashing 4 hours ago 0 replies      
Where's the crowdfunding to buy the navigation history of the representatives involved in the approval of the law?
chrisallick 15 hours ago 0 replies      
Well, they had unlimited bullets and needed just one to hit. We needed to block everyone. But how can people follow a story let alone a lobbying effort with our current ADHD news cycle...

Can someone give people like me a "5 things to fight back" list?

Jach 16 hours ago 2 replies      
Maybe someone at YC could reach out to Thiel who could convince Trump to veto? Something like that is probably the only realistic chance of this failing, and I have no idea how much Thiel personally cares about this issue anyway.
heurist 16 hours ago 0 replies      
Awful. I stand to profit greatly from that data being commercially available but the personal violation underlying it is unjustifiable.

Who will be the first to start a "privacy-driven" ISP with marked up prices?

MBCook 16 hours ago 0 replies      
Why don't articles like this ever link to the votes so you can actually look up how your rep voted (mine? Party line, no surprise)? Took me a few minutes to find it.
colordrops 10 hours ago 0 replies      
This indicates to me an architectural flaw with the internet. We need to start exploring other techniques to circumvent tracking, perhaps through more distributed systems. The politicians can not be trusted.
username223 16 hours ago 0 replies      
(1) This wipes out almost all the value of surveillance companies that don't require logins. Why bother with doubleclick et al when you can get data straight from the ISP?

(2) HTTPS makes a limited amount of sense. Even on encrypted connections, ISPs know which domains you visit. In some situations they may also be able to MITM your certificates and read the data you transmit.

(3) Any semblance of privacy now requires either a reputable VPN or TOR.

thomastjeffery 16 hours ago 3 replies      
Time to start paying the VPN tax.
equalarrow 14 hours ago 0 replies      
Being that this is a pretty red vs. blue issue, there's not a ton you can do about it if you live in a non-red state.

The eff is an obvious choice and I'm a member and have been for almost 20 years.

In my mind the big thing is that people who vote for Republicans don't fully understand that they are voting for non-privacy, pro-business, and really, pro-military. Granted, there are some Dems that can fall into this trap, and 9/11 pretty much ensnared all but a few into the reactionary mindset. Overcoming that took true visionaries and leaders, who are few and far between.

So, really, local debate has to happen in the red states where these majorities are elected. This is a long uphill battle, but the message of "mega-corporations are not your friends" has to be paramount and when you're not earning tech salaries, we are part of the problem.

For coal miners and all these higher profile use cases, we need to re-connect at the human and community level. That's the disconnect right there; it's easier to get angry about 'the swamp' than it is to take your own local municipality into your own hands or figure out how to stay local vs. state.

California, New York, etc - these aren't the battlegrounds. They are the future. The majority of their population already agrees on global warming, privacy, tech, etc. They're one step behind bitcoin/ethereum/altcoins globalization.

But for someone in W. Virginia who's a coal miner that has been laid off (a big Trump talking point), these things matter on a massive level.

So there's our schism - how can we provide a forward thinking, longer term vision that helps the common citizenry? In my mind, everything this Republican extremist 'president' represents is big interests and their unfettered access to unlimited profits, regardless of what that means.

Your (what's left of it) privacy and whatever else is fair game.

I'd advise (of course) moving to Tor, VMs and, seriously, cryptocurrencies. Currency is a great way to start hacking back towards 1:1, person:person transactions, which leads to a less centralized money system.

And, of course, money underpins pretty much everything us entrepreneurs do.

So, we do have options. :-/ These options include VPNs, Tor, cryptocurrencies, Ethereum, etc.

Edit: mobile spelling corrections.

ReinholdNiebuhr 9 hours ago 1 reply      
Question. When were these FCC rules implemented? I know they were under Obama but right now as I try to learn the history google just keeps giving me the news of the repeal.
nichochar 11 hours ago 0 replies      
Fuck you America, you are really doing a good job of making people hate living here.
xnx 16 hours ago 1 reply      
I wish Google would offer VPN service again (waaaay back they had some Windows utilities that would proxy your web connection).
SteveNuts 16 hours ago 2 replies      
Not that I'm necessarily OK with either, but what's the difference between this and the myriad of other sites that are collecting your browser habits/search history and selling it?

I'm not for this vote at all, and I'm not sure why Trump supporters are; I'm just trying to come up with a good argument for why it's worse.

rb666 7 hours ago 0 replies      
Just move to Europe, where there is still some semblance of reasonable regulations and politics (for now).
dfar1 16 hours ago 1 reply      
I never cared too much for privacy, but that's one step too far. Lawmakers probably don't understand how this makes them a target, and how their own information will be accessible. Hopefully this will create a market for ISPs that want to protect you. I see VPN markets growing even more.
sixothree 16 hours ago 0 replies      
How can I as a user buy access to my own personal information? Maybe this is an opportunity for a new venture.
snorrah 12 hours ago 0 replies      
The UK and USA engaged in a fierce battle of 'hold my beer'.
coldcode 13 hours ago 0 replies      
I don't care whose fault it is, what can we actually do to defend ourselves?
cmath 16 hours ago 3 replies      
Does anyone have suggestions on staying private that my mom could easily follow?
danso 16 hours ago 0 replies      
I know the political issues are different than in SOPA, but this situation reminds me of how powerful publicity is as a factor in legislation. SOPA was a mostly-unheard of bill that seemed certain to pass (had a huge number of bipartisan sponsors in the Senate [0] and the House [1]) until it blew up into a big online campaign and became mainstream with the blackout [2]. I remember many legislators' staff saying it was the most email and calls they had ever received in a day/week, and these are for members of Congress who voted on Obamacare and the 2002 authorization of use of force in Iraq.

I can't pretend I know what it's like to be a general layperson about tech, but my base instinct is that this issue of Internet privacy protections is much more salient to the average person than SOPA. Yet even as a follower of politics, I barely heard about this until last week when the Senate voted on it.

I can think of a couple of factors:

1. Internet giants advocated heavily against SOPA. Those same companies have less incentive to argue against selling user data, even though selling data at the ISP level is, to me, substantially different than at the website/service level.

2. So much political energy and attention has been spent on the Trump Administration, particularly on the recent push to repeal Obamacare. IIRC, even though SOPA didn't get much media coverage until around the week of the blackout, it wasn't competing with anything quite as big as this past week's vote on Obamacare (nevermind the other issues surrounding the executive branch).

[0] https://www.congress.gov/bill/112th-congress/senate-bill/968...

[1] https://www.congress.gov/bill/112th-congress/house-bill/3261...

[2] https://en.wikipedia.org/wiki/Protests_against_SOPA_and_PIPA

Edit: Worth pointing out the Senate vote from last week, in which no Republican broke ranks in a 50-48 vote. 2 Republicans were not present (edit: I originally wrote "abstained"), including Sen. Rand Paul who is listed as a co-sponsor:



dmode 10 hours ago 0 replies      
I hope someone buys the browsing history of all Republican Congressmen and publishes it on the web
cmurf 16 hours ago 1 reply      
https://www.govtrack.us/congress/votes/115-2017/h202

215 yea, 205 nay. All yeas were Republicans.
bikamonki 15 hours ago 0 replies      
What VPN provider do you guys recommend?
dbg31415 11 hours ago 1 reply      
# House

YEAs ---215


NAYs ---205


Not Voting ---9


House Results - http://clerk.house.gov/evs/2017/roll202.xml

# Senate

YEAs ---50


NAYs ---48


Not Voting ---2


Senate Results - https://news.ycombinator.com/item?id=13943060

(I liked this format.)

dirkg 14 hours ago 0 replies      
Is anyone surprised by this? Elected representatives are rich and privileged, with zero connection, understanding or obligation to the people.

The Republicans have a stated policy of helping corporations at the expense of people. The fact that the American public voted them in just proves how stupid most people in this country really are. We deserve this.

howard941 16 hours ago 2 replies      
Could and should have been filibustered in the Senate

edit: note Senate....

intrasight 13 hours ago 1 reply      
Isn't "https everywhere" going to make this a moot point?
orthecreedence 16 hours ago 1 reply      
Don't worry, the president will veto this. /s
Deep Photo Style Transfer github.com
1067 points by mortenjorck  2 days ago   168 comments top 33
dvcrn 2 days ago 4 replies      
This is super impressive and something that I didn't think would be possible without someone very skilled in photoshop going over the images.

As a photo enthusiast, I am very excited about this, but also a little worried that soon very simple apps are capable of doing the craziest of edits through the power of neural nets. Imagine the next 'deep beauty transfer', able to copy perfect skin from a model onto everyone, making everything a little more fake and less genuine.

The engineer in me now wants to understand how to build something like this from scratch but I think I'm probably lacking the math skills necessary.

vwcx 1 day ago 3 replies      
I am a professional photo editor for a major magazine. Despite what the title sounds like, I spend much of my day sending photographers on assignment and sourcing images rather than manipulating files. I often wonder how my career will evolve in ten years; the writing is on the wall for my job, given the state of the media market.

Looking at this, I now see a future as a forensic imaging specialist. At some point, the algorithm will get pretty damn good, and it will be my job to look for tells -- cracks in the generated image where reality doesn't quite line up. The question will be whether I am seeking out these abnormalities to cover them up or to call them out.

jjcm 2 days ago 3 replies      
One interesting thing that may help the final output quality is preserving the detail layer of the original image, and then applying that to the output image. Here's my quick attempt at it: http://dev.jjcm.org/tonetransfer/

I basically just used a technique called frequency separation - it's extremely quick with the most computational part being a gaussian blur, and it allows you to separate detail from tone into two separate layers. From there I just took the detail layer of the original image and applied it to the tone layer of the output image.
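The frequency-separation step described above can be sketched in a few lines of NumPy. The parent comment used a gaussian blur; a box blur stands in here to keep the sketch dependency-free, since the tone + detail = original identity holds for any low-pass filter:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 4) -> np.ndarray:
    """Cheap separable box blur standing in for the gaussian."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = img.astype(float)
    for axis in (0, 1):  # blur rows then columns, leave channels alone
        out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

def split_frequencies(img: np.ndarray, radius: int = 4):
    """The blurred image is the 'tone' layer; the residual is the
    'detail' layer. By construction, tone + detail == img."""
    tone = box_blur(img, radius)
    return tone, img - tone

def transfer_detail(original: np.ndarray, stylized: np.ndarray, radius: int = 4) -> np.ndarray:
    """Keep the stylized image's tones but restore the original's fine detail."""
    _, detail = split_frequencies(original, radius)
    tone, _ = split_frequencies(stylized, radius)
    return np.clip(tone + detail, 0.0, 1.0)
```

The images are assumed to be float arrays in [0, 1]; the only tunable is the blur radius, which sets where "detail" ends and "tone" begins.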

jshmrsn 2 days ago 1 reply      
It almost causes me goosebumps to think about how if someone asked me "imagine this clear sky, but with this red sunset over here" I could very plausibly come up with a similar result as shown in these examples.

The transfer of the flame to the blue perfume bottle looks like a very practical way to prototype marketing images.

rcthompson 1 day ago 2 replies      
This is really cool, but the only thing I can think right now, the question that's eating away at my soul, is: why did it color-cycle the apples?
gedy 2 days ago 2 replies      
Wow. Our kids may be able to walk around with glasses and see the world styled in real-time as they please - perpetual sunshine, gloom, night, whatever..
npgatech 2 days ago 2 replies      
I think it would be amazing if Adobe incorporated some of these projects (Neural Style, etc.) as part of their "Creative Cloud" offerings...actually compute it in the cloud and return the result back to Photoshop.
vwbuwvb 2 days ago 1 reply      
Notice how the lighting in the style-transferred images isn't physically plausible according to the target images. For example, in the 5th example the house is still lit as in the original, as if by sunlight, not by spotlights. Maybe that's why they've chosen to showcase examples with flat or ambiguous lighting (like the nighttime scenes or the autumn scene). The DNN doesn't model the physical reality of the scene, it doesn't get that it's a 3d world, it simply transports a high-dimensional vector (the 2d image) from one space to another. What our imaginations do is map that 2d image back into 3d before transforming it.
agumonkey 2 days ago 3 replies      
I feel overwhelmed by the domain; as an ex Photoshop addict, it's already above "complex job" level. I wonder if people feel bad about AI "stealing their jobs" (not a joke).
rl3 1 day ago 0 replies      
Someone should create an entire film using this technique. Shoot two films in parallel with similar scenes, then see what happens when they're blended.

Give it a creepy and/or surrealist plot so the ethereal-looking output suits the film. Perhaps the visuals could be the result of viewing the world through a robot or AI with imperfect cognition. Would be an interesting twist on the old unreliable narrator trope.

sixQuarks 2 days ago 5 replies      
What was up with the "apple" photo?
johndough 2 days ago 1 reply      
Does anyone know how long it takes per image? The only piece of information I could find was that the run script uses 8 GPUs, which suggests that it takes a while.
tabeth 2 days ago 2 replies      
Wow, this is ridiculously impressive. I wonder what the audio equivalent of this would be.

Anyone know how long this takes per image?

jtraffic 1 day ago 0 replies      
I wondered what was new about this particular implementation (since I've seen several others, notably the one this code is partially based on, from Justin Johnson). From the paper:

"Our contribution is to constrain the transformation from theinput to the output to be locally affine in colorspace, andto express this constraint as a custom CNN layer throughwhich we can backpropagate. We show that this approachsuccessfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits"

I was skeptical at first (even posted then deleted a sort of negative comment), but now that I read this, I see the value. The images are much more crisp and distortion free.
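As a toy illustration of what "locally affine in colorspace" means: within a region, the output is constrained to look like one affine map of the input's RGB values. The sketch below applies a single such map globally; it is not the paper's custom CNN layer or its regularizer, just the kind of transform the constraint pushes toward.

```python
import numpy as np

def apply_affine_color(img: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Apply one affine colorspace map: out_rgb = M @ in_rgb + b,
    where A = [M | b] is a 3x4 matrix. The paper constrains the
    style transfer to act like such a map *locally*, per region."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3)
    # Append a 1 to each pixel so the bias b folds into the matrix product.
    homog = np.hstack([flat, np.ones((flat.shape[0], 1))])
    return (homog @ A.T).reshape(h, w, 3)
```

Because each output channel is a linear combination of the input channels plus a constant, edges stay where they were, which is why the constraint suppresses the painterly distortions of earlier style-transfer methods.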

veli_joza 2 days ago 0 replies      
Some impressive results! I would like to see some heavily stylized examples like achieving Sin City and Scanner Darkly visuals.

Was it really necessary to involve Matlab, Python and Lua?

VMG 2 days ago 0 replies      
Premium Instagram filters as-a-service?
mistercow 1 day ago 0 replies      
Jeez, how many style transfer papers have there been in the last year? It's awesome, but what an odd thing to become its own subfield.
thecopy 2 days ago 0 replies      
Really cool! This could prove useful for interior designing.
pseudobry 1 day ago 0 replies      
For anyone interested in a sci-fi book about how a society filled with this kind of technology might work, I recommend reading Rainbow's End by Vernor Vinge.


hayksaakian 2 days ago 1 reply      
now if you implemented this as a web app, i'd be sharing it with everybody

really cool samples, would be cool to upload my own photos and run it through this

wyldfire 2 days ago 0 replies      
Wow -- lua, Python, matlab and CUDA.

I see now that the image segmentation is probably the key element to getting these stunning results. Other style transfers I've seen took abstract elements from the donor picture, but this really captures the donor and transforms the recipient significantly.

gtani 2 days ago 1 reply      
That's funny, i was just going over OS X makefile with somebody yesterday and i came here to look for the thunderbolt external GPU adapter.

The repo author is super responsive if you're RAM constrained:


and https://github.com/luanfujun/deep-photo-styletransfer/issues...


similar project https://github.com/jcjohnson/neural-style

bedros 2 days ago 2 replies      
Awesome! I wish there were no Matlab requirement; can the Matlab code be converted to Python?
guepe 2 days ago 0 replies      
Funny, I saw a very similar attempt at Fitchburg Art Museum (MA) over the weekend. I seem to recall it was an MIT project: is this related in some way? It came with a rather complex (at least at first sight) interface allowing quite a bit of operations/transformations, which does not appear here, so this might be a separate attempt.

Looks like this thing is "in the air".

double051 1 day ago 0 replies      
Those results look amazing! I'm curious how well it holds up at higher resolutions and closer inspection.

Also, there doesn't seem to be any mention of the distribution license for the project. Would any of the maintainers be able to add a license to the repository? Thanks!

hamilyon2 2 days ago 0 replies      
This is where neural network-generated images might start feeding into other neural networks. Imagine a limited dataset of tagged pictures and a vast number of styles. We could generate permutations on demand and train networks that recognize them much more accurately than they could with the original limited dataset.
Cofike 1 day ago 0 replies      
Holy cow, this is amazing. The results are way better than I would ever expect as possible.
markab21 2 days ago 0 replies      
It seems this could have application in conceptual interior design. Think room makeovers.
boxcardavin 2 days ago 2 replies      
This is super fun and impressive, I'm going to get it going on my machine and start playing immediately.
folli 1 day ago 0 replies      
Anyone's got a link to a good write-up on how such a style transfer works on a high level?
kowdermeister 2 days ago 0 replies      
Now imagine what we could do if it were computed in real time, 60FPS, 100x resolution/detail, embedded in AR or contact lenses.

Yep, sci-fi. But I often imagine that sometime soon I'll have a server farm in my pocket (I know I already have one via the cloud, but it's a whole new game if you can do the computation on-site, low latency)

draw_down 1 day ago 0 replies      
This is so cool, such an interesting and great idea. Really impressive and well-done.
drodil 2 days ago 0 replies      
So cool :)
Tencent buys 5% of Tesla techcrunch.com
579 points by jhartmann  1 day ago   323 comments top 19
dmix 20 hours ago 8 replies      
Tesla recently announced they are ramping up their Model 3 production even more than what some people thought was already optimistic numbers: https://www.bloomberg.com/news/articles/2017-03-27/tesla-mod...

> For Musk to hit all of his targets, Tesla would need to build about 430,000 Model 3s by the end of next year. That's more than all of the electric cars sold planet-wide last year.

> Even if half of the Model 3 inventory shipped to other countries, U.S. sales under Musk's targets would outpace the BMW 3 Series and the Mercedes C class combined.

> "To sell that many $35,000 sedans in the U.S. would be absolutely unprecedented based on what we know about car markets today and how people spend their dollars," said Salim Morsy, electric car analyst at Bloomberg New Energy Finance. "It could happen. I'm pretty sure it won't."

If they can pull this off, this might be a great investment by Tencent.

It's also great for the car industry and the environment, especially considering their work on automated driving. If they get that many cars on the road, it would give them a ton of data and a big advantage/lead in AI over other companies. But it could also be setting the bar too high and setting them up for failure (even though they might otherwise have nailed lower targets).

Regardless, as a design fan it would be interesting to see so many Teslas on the road. They are great looking cars.

jpeg_hero 23 hours ago 8 replies      
Tesla shorts can't get a break. First a smooth $1B+ capital raise without a stock hiccup and now this.
11thEarlOfMar 23 hours ago 1 reply      
I see this move as a blessing for Tesla to gain market share in China. The stock is valued for growth far into the future, and achieving that outcome is really iffy without a robust China market.

[edit] I'm speculating, but I don't think TenCent could have gotten as big as it has without the blessing of the Chinese government. That is the basis for my view.

smaili 23 hours ago 5 replies      
> Tencent is a prolific investor. It holds equity in Snap, this year's hot tech IPO, among others following an early investment. While that interest in messaging makes sense, since Tencent operates China's dominant chat app WeChat, it isn't immediately clear whether the Tesla investment has strategic undertones.

This was my immediate question as well. Is this purely an investment for its portfolio, or is there a strategic element as well? I imagine being able to send/receive messages on WeChat as the beginning of something more.

andy_ppp 1 hour ago 0 replies      
The Model 3 will come with full automation and an Uber competitor. Just a guess why they are so confident about hitting their numbers.
woodandsteel 22 hours ago 0 replies      
I suppose the administration is going to argue this move supports Trump's claim that global climate change is a Chinese plot to undermine the American economy.
vit05 20 hours ago 1 reply      
Is this showing that they haven't found a Chinese company that could compete against Tesla? China is investing a lot in solar energy and batteries, and has car companies that want to become global players, and most of Tencent's investments are in Chinese companies that make products focused on China and Asian markets. I don't know whether their buying on the open market tells more about Tesla's potential or about China's future in cars and energy.


umeshunni 19 hours ago 0 replies      
Worth noting that Tencent is also an investor in Future Mobility, which has been remarkably quiet since their funding announcement last year: https://www.crunchbase.com/organization/future-mobility#/ent...
bigiain 15 hours ago 1 reply      
When I first read this, my head saw "Fifty Cent"...

And I thought "A _rapper_ has just bought $1.7billion worth of Tesla shares???" and was all ready to make "Has Tesla already become the Cristal Champagne of car brands?" gags...

Still, half a billion return in two weeks on a 1.7 billion play is pretty nice money...

Kiro 20 hours ago 0 replies      
Tencent's reach is mind-boggling: https://en.wikipedia.org/wiki/Tencent#Investments
turingbook 23 hours ago 2 replies      
Smart move. Tencent's investing-but-not-controlling strategy makes it a good supporter of the new generation of ambitious entrepreneurs against AAAAF (Apple, Alphabet, Amazon, Alibaba, Facebook): JD, Didi, Snap, Meituan-Dianping.
intrasight 22 hours ago 0 replies      
Funny. Just yesterday I commented on an HN thread that my first electric car would likely be Chinese, but that it might have the "Tesla" name on it.
BlytheSchuma 13 hours ago 0 replies      
Now I can get free legendary skins with purchase of a Model 3? What a time to be alive.
camflan 22 hours ago 1 reply      
They should've bought 10%...or change their name to Fivecent
icantdrive55 22 hours ago 0 replies      
I think they knew solar is the future, all around the world.

China has massive pollution, and most homes/businesses have access to direct sunlight. (Yes, I know solar works at about 50% on a cloudy day. It doesn't work well with a lot of foliage coverage. China looks barren of trees, sadly.)

My hope is those solar tiles come down drastically in price. My hope is the average roof will be cost effective to put said tiles up.

I think those solar tiles will be Tesla's trump card. It will probably be in four years, or more, in the United States. We will need a new president. (I was for Trump putting coal miners back to work, until I found out the problem is not regulations but automation. Actually, I want clean air. We need a better way of supporting people affected by the elimination of old ways of doing things, like a Basic Income.) Sorry about being all over the place, but there are no simple answers. Trump is just finding this out.

I think Tencent saw long-term value in the stock, even though their citizens will not likely buy Tesla's tiles. They will buy the cheapest knock-off as usual, but the rest of the civilized world will buy Tesla's product.

(I don't know what patents are on these new Tesla tiles, but I bet they are seen as a valuable commodity, even to a cheating society like China.)

Digit-Al 20 hours ago 0 replies      
Shouldn't they have bought ten(per)cent?
txmx2000 23 hours ago 1 reply      
I only need Twocent to know this is a bad idea.
matthewhall 20 hours ago 0 replies      
I have a bad feeling about this...
ge96 20 hours ago 0 replies      
I thought I saw TenCent's name in Kong Skull Island
A lawsuit over Costco golf balls qz.com
562 points by prostoalex  1 day ago   260 comments top 27
aluminussoma 1 day ago 4 replies      
Costco sued J&J Vision Care a few years ago over anti-consumer behavior in the contact lenses industry (I characterize it as anti-consumer. The Vision Care industry characterizes it as pro-consumer). They dropped the lawsuit in 2016, probably because Johnson and Johnson discontinued the practice: https://www.law360.com/articles/800034/costco-drops-antitrus...

Costco did support a different lawsuit by the state of Utah against contact lens manufacturers. The manufacturers lost their first appeal in December 2016: http://www.sltrib.com/news/4731439-155/contact-lens-makers-l...

Hopefully this will begin reducing the prices of contact lenses. Kudos to Costco for sticking up for its customers.

Here is one manufacturer's opinion on this matter: https://www.alcon.com/content/unilateral-pricing-policy

gthtjtkt 1 day ago 7 replies      
> Companies with deep pockets lock down the market by making it too expensive for competitors to operate and to offer lower-priced yet quality products. It is a legitimate tactic; even those who succumb to it don't really begrudge the approach.

Who the hell wrote this article, the CEO of Acushnet?

"Don't get the wrong idea, small businesses love being sued over frivolous patents they never infringed upon!"

finaliteration 1 day ago 3 replies      
Ironically, anti-competitive moves like this are only going to accelerate the game of golf's steady decline[0]. I get needing to protect your market as a large player, but when you are the main player, your product is too expensive to buy, and the perception is growing that the sport you specialize in is a waste of time and money, what good does it do to push out someone making a cheaper product that may allow beginning players with smaller budgets to enter the game?


hkmurakami 1 day ago 2 replies      
This article is very sparse on details. For one, the factory that makes the Costco balls primarily makes Taylor Made balls. The manufacturer is a Korean company that used its excess capacity to make Costco's balls. Taylor Made sells premium balls so they're pressuring the manufacturer to not do this in the future.

Also, I haven't seen any details about Costco having a golf ball design team. Where did this design come from? Did they contract it out to one of the small manufacturers that the article refers to? That's mainly the thing I want to know, since if it's truly their design that they own, then they'll be able to find someone to make it for them.

Also, Acushnet isn't that deep-pocketed. Their annual revenues are $1.5B with ~70M shares outstanding and an EPS of about 6, so about $400M in profit, and operating income is in the range of $150M. https://forum.mygolfspy.com/topic/14841-acushnet-losing-sale...

Unlike the small ball companies they sued, Costco is a much bigger company than Acushnet and can afford to fight them off, especially since Costco has the distribution scale, hype, and demographic fit perfectly suited to really move the needle with this product (The upper middle class family with disposable income that is budget conscious, which is Costco's main market, is perfect for a budget high performance golf ball, which is a perishable sporting good that you need to buy hundreds of if you play regularly).

(Fwiw it is very common in the sport to have small upstart club makers. Basically all you need is a milling machine to make a perfectly reasonable iron or putter, and every now and then you'll see a random small manufacturers club in a tour player's bag - ex: the Yes! Golf putter when Retief Goosen won both his US Opens)

BEEdwards 1 day ago 3 replies      
> It is a legitimate tactic; even those who succumb to it don't really begrudge the approach.

Maybe I just haven't given up yet, but what the f*ck? This is not a legitimate tactic; this is LITERALLY everything wrong with our present system.

xupybd 1 day ago 5 replies      
This shows everything wrong with IP today. There should be no place for legal bullying. Especially when it's as simple as crushing competitors under litigation costs.
tedunangst 1 day ago 2 replies      
I would have appreciated some more information about these patents. Like instead of telling me the lawsuit is all hot air, show me? I feel they deliberately omitted any facts which might allow me to form any opinion other than the one I'm supposed to have. (I'm happy to believe the lawsuit is bullshit, but not based on nothing but say so.)
sergiotapia 1 day ago 1 reply      
> David Dawsey, a golf intellectual-property expert

Talk about carving a niche for yourself.

bitmapbrother 1 day ago 0 replies      
Companies that knowingly waste the court's time by filing frivolous lawsuits should be heavily punished. Acushnet is not defending their patents; they're just trying to prevent competition. They're fully aware that no patents were violated because they've already examined the KS balls extensively for infringement. This case will never make it to court for the simple reason that their hand has been thoroughly exposed.
cissou 1 day ago 4 replies      
I don't understand how

"We laughed when we got the lawsuit. We knew we made it."


"his company settled the 2015 claims with Acushnet by agreeing to get out of the golf-ball business altogether; it received no payment from Acushnet, nor did it pay."

are compatible statements.

dmritard96 1 day ago 0 replies      
Samsung and Apple fought over rectangles. Nest and Honeywell fought over circles. Next up, spheres...

But in all seriousness, this is just rent seeking via the patent system.

bmcusick 1 day ago 3 replies      
American courts should have a "Loser pays" rule, and stricter standards for determining what is a frivolous lawsuit warranting additional penalties for the filer.

American jurisprudence has always favored making sure "everyone gets their day in Court" to the point where trolls and professional litigants are ruining things.

ALee 19 hours ago 0 replies      
Two things to keep in mind:

1) Acushnet is trying to keep Costco from entering the market, but once Costco sells a significant number of its golf balls, Acushnet will have to deal with the economic ramifications.

2) Streisand effect - this lawsuit plays really well for Costco, namely that it gives them a lot of free publicity and hype around their supposedly amazing golf balls.

tomohawk 1 day ago 0 replies      
Reminds me of the time we almost got inexpensive milk, until some pet congresscritters intervened to keep the milk trust intact.


pkolaczk 5 hours ago 0 replies      
"But we couldnt afford to fight the case" I think there must be something very wrong with the court system in USA. This is the one who claims their patents have been infringed that should prove at the court and pay the price for filing the lawsuit, including the cost of the experts hired by the court. Why are the costs of defense so high in this case?
barking 1 day ago 0 replies      
The law is there to protect us but this is an example of how the high cost of going to law facilitates oppression.

The same goes when it comes to dealing with the government. A government official has no personal liability with respect to any decision they make and has essentially bottomless pockets if it goes to court.

It means, for example, that a revenue officer's decision is final when it comes to the interpretation of tax law in your case, unless you're very rich.

Skylled 20 hours ago 0 replies      
I'm sorry, but shouldn't it be the responsibility of the party making the claim to provide evidence of patent infringement before the defendant is ever even affected?

What a sad state of affairs our court system is in if even a known false lawsuit can be devastating.

praptak 1 day ago 1 reply      
Why don't consumers boycott the trolls out of existence? It's not like there is a huge cost of switching golf balls, right? And the consumer base isn't companies whose deciders spend someone else's money.
aryehof 1 day ago 0 replies      
If parties generally cannot afford to defend claims such as this, surely this reflects poorly on access to justice. Justice only available to the rich?

How can one claim a state based on the rule of law, if it is not accessible to all in a timely manner?

dbg31415 1 day ago 1 reply      
I love Costco. They pay good wages (with benefits), have good quality products, and great prices. They have a kick ass return policy. And they stick it to patent trolls. Fuck yeah, Costco. Keep it up.
kevin_thibedeau 1 day ago 2 replies      
Why not just make a $1 ball using all of the expired 1990's era patents from the top manufacturers. You'd make a mint. Name it the Patentless.
bodyloss 1 day ago 0 replies      
Why is it that the patent holder doesn't have to make a case with proof of infringement? Would it not make sense that if you want to defend a patent, you should show you've made the effort of documenting how someone is infringing?
xroche 1 day ago 0 replies      
This is why most patents should be eradicated. They only allow established companies to stay in business despite lack of innovation and price competition. The whole patent system is abused by parasites.
albeebe1 1 day ago 0 replies      
I was half expecting to read an article about counterfeit golf balls, or a factory side-selling its customers' product. Not even close; this is interesting.
chmike 1 day ago 0 replies      
Could this be a tactic to choose the trial location and avoid Texas?
golergka 1 day ago 0 replies      
Everybody's quick to call this "bullying". How are all of you so sure that Costco didn't indeed steal intellectual property? Or does the fact that small companies got sued before mean that those were frivolous lawsuits — because, being small, they couldn't have possibly done anything bad like stealing IP?

I don't know anything about this issue, but at least I know I don't know it. What I don't understand is where all the HN commentators get the idea that they know the situation well enough to jump to conclusions here.

duncan_bayne 1 day ago 0 replies      
Just get rid of patents altogether, already:


(Titled "Anti-Copyright Resources", but in fact contains a lot of material relevant to patents, too.)

How to write Common Lisp in 2017 an initiation manual articulate-lisp.com
439 points by macco  21 hours ago   217 comments top 28
hydandata 17 hours ago 4 replies      
I started a big project at work using Common Lisp in 2017 and could not be happier. Sure, most nice features have trickled down to other languages, but they are rarely as nicely integrated. And Lisp still has many advantages that are not found elsewhere: Unmatched stability, on-demand performance, tunable compiler, CLOS, condition system, and Macros to name a few. It has its warts too but which language does not?

I found the lack of high-quality library documentation a bit annoying, but ultimately a non-issue: there were tests and/or examples included in practically all of the libraries I have used so far.

Lastly, this rarely gets brought up, but I think Common Lisp has some of the best books available of any programming language. The fact that it is so stable means that most of the material, and code, from the end of the '80s and '90s is quite relevant today, and new stuff is being written.

The biggest downside is that it makes JavaScript and Python revolting to work with. But I can still enjoy SML for example.

yarrel 19 hours ago 1 reply      
I'd replace the first few steps with "Install Roswell" -


Roswell will install a Lisp and Quicklisp for you, and give you a single point of entry to install libraries, create and run code, and launch an editor (Emacs with SLIME, of course).

I can't recommend it highly enough (I'm nothing to do with the project, just a very happy user).

Scarblac 20 hours ago 8 replies      
I'm used to languages like Python, which have a number of files that are modules; to start a program you run one of them as an entry point.

C programs consist of a lot of files that are compiled and linked into a binary executable.

Whenever I've tried to learn CL, I couldn't really wrap my head around what the eventual program would be. You build up in-memory state by adding things to it, then later dump it to a binary. How do you get an overview of what there is?

I'm just too used to my files, perhaps. Or I'm missing something.
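For concreteness, here is roughly what the image-dumping workflow looks like on SBCL. This is only a sketch: the `myapp` system and its `run` function are hypothetical names. The source still lives in ordinary files, organized by an `.asd` system definition; the "image" is just the result of loading them, and you can dump it as a normal executable:

```lisp
;; Load the system (and its dependencies) into the running image:
(ql:quickload :myapp)

(defun main ()
  (myapp:run)                     ; ordinary entry point defined in your files
  (sb-ext:exit :code 0))

;; Dump the current image as a self-contained binary that starts at MAIN:
(sb-ext:save-lisp-and-die "myapp"
                          :executable t
                          :toplevel #'main)
```

To get an overview of what is in the image, the usual tools are the system definition itself plus REPL introspection such as `apropos` and `describe`.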

pnathan 19 hours ago 3 replies      
Hi, HN! I made this! Ask me any questions you like. I'll try to respond as the workday progresses and I wait for deploys to complete!

paul@nathan.house if you want to email me instead. (or @p_nathan on Twitter, if that's your thing).

znpy 19 hours ago 5 replies      
Some notes based on my (brief) experience toying with Common Lisp:

* Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec? Having good, easily browsable documentation is a core problem.

* The relations between the various native data types were quite unclear to me.

* Dealing with the external world was quite a mess. Many projects/libraries implemented only half of something and then got abandoned.

* Some libraries had a compatibility matrix... with Common Lisp implementations. That seemed weird to me.

kunabi 15 hours ago 0 replies      
I've been working on a CL project for a couple of years. It was my first big stab at using CL for something other than a toy. SBCL is a nice choice, but far from the only option; each has its tradeoffs. CL is not without its frustrations: documentation that has not aged well; a community that can be less than welcoming (in contrast to, say, the Racket community); inconsistencies, e.g. first, nth, elt, getf, aref... However, portability appears to be a strong point compared to the Scheme scene. Single-binary compilation on SBCL/LW/ACL/CCL is great. I found SBCL's GC to be lacking at high garbage rates: it tended to promote garbage to a tenure that prevented it from being collected, and would max out 32GB of memory even with explicit GCs between files, whereas the other implementations would stay below 4GB.

So ymmv.

Performance benchmarks using cl-bench really highlighted some strong points: http://zeniv.linux.org.uk/~ober/clb
AWS CloudTrail parser: https://github.com/kunabi/kunabi
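For readers unfamiliar with the inconsistency being referenced, the accessors differ both in which types they accept and in where the index goes. A quick side-by-side (standard CL, shown with expected REPL results):

```lisp
(first '(a b c))            ; => A    lists only
(nth 2 '(a b c))            ; => C    lists only, index comes first
(elt #(1 2 3) 1)            ; => 2    any sequence, index comes last
(getf '(:x 1 :y 2) :y)      ; => 2    property lists, keyed lookup
(aref #2A((1 2) (3 4)) 1 0) ; => 3    arrays, row-major subscripts
```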

AlexCoventry 19 hours ago 6 replies      
Can someone point me at an argument for why I'd want to write CL in 2017, given all the great alternatives available now?
agentultra 20 hours ago 2 replies      
If you're so inclined I'd make it a "living document" that gets updated as the state-of-the-art evolves. Writing CL in 2017 is not likely to change rapidly in the next decade but even compared to what writing CL was like 8 years ago it has changed enough.

Nice job.

edem 16 hours ago 3 replies      
How does Common Lisp compare to Racket nowadays? I've seen a lot of activity, but I can't decide which one to try out. I only have time for one of them ATM.
Grue3 16 hours ago 0 replies      
>Repository for local libraries with the ASD files symlinked in

This method is so old, I can't believe people are still doing this. You can easily set up ASDF to look in a subtree of a directory and never care about it finding your libraries again.

[1] https://common-lisp.net/project/asdf/asdf.html#Configuring-A...
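Concretely, the configuration the linked manual describes is a one-line file; the path below is only an example:

```lisp
;; ~/.config/common-lisp/source-registry.conf.d/projects.conf
;; ASDF (3+) recursively scans this tree for .asd files; no symlink farm needed.
(:tree "/home/me/lisp/")
```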

mikelevins 17 hours ago 1 reply      
For people using macs, it's probably worthwhile to mention CCL's IDE, which you can easily build from within the CCL sources using (require :cocoa-application), or which you can get for free from the Mac App Store (it's called "Clozure CL").

It's a little bit bare-bones and a little bit perpetually unfinished, but it works, and it gives you a Lisp-aware environment for editing and running Lisp code, and even has a few handy tools.

GreyZephyr 13 hours ago 0 replies      
Wondering if anyone has any experience using Lisp for machine learning? I'm aware of mgl[0], but it seems to be abandoned. The lack of any wrappers for TensorFlow or Caffe is also a bit surprising to me. The cliki page[1] is also unhelpful and out of date. Is machine learning on Lisp dead, or are there projects out there that I'm just not aware of?

[0] https://github.com/melisgl/mgl

[1] http://www.cliki.net/machine%20learning

TeMPOraL 19 hours ago 0 replies      
While I remember — we need a refreshed SOTU for 2017. The 2015 one[0] seems to still be mostly correct, but the CL scene is pretty active.

[0] - http://eudoxia.me/article/common-lisp-sotu-2015

gravypod 18 hours ago 0 replies      
If this could have a "start" page and a "next" button that will take me from topic to topic in order I'd enjoy that.
juanre 6 hours ago 2 replies      
I wrote in Common Lisp the star map generation software at the core of my startup, http://greaterskies.com, and could not be happier. But now that it's getting off the ground I wonder whether it may adversely impact my chances of being acquired. Are there any known examples of recent CL-based startups?
ntdef 20 hours ago 1 reply      
Ah yes this is exactly what I needed. I was recently trying to start a CL project but I had trouble wading through all the outdated material, especially with regards to including external packages. Thanks for putting this together!
lisper 15 hours ago 0 replies      
If you have a Mac you should try Clozure Common Lisp (http://ccl.clozure.com). It has an integrated IDE so you don't have to futz with emacs and slime.

Also, this library smooths over some of CL's rough edges:


na85 19 hours ago 5 replies      
If Emacs is an obstacle to Common Lisp in 2017, maybe what's needed is a Lisp-interaction plugin for vi(m) (or whatever it is that Vim uses in lieu of Emacs modes). I don't get the hype for modal editing, but you can't argue with the data clearly showing Emacs users are in the minority.
macco 19 hours ago 2 replies      
If somebody is not comfortable using Emacs (I am), there is an Atom plugin for use with CL: https://atom.io/packages/atom-slime

It doesn't replace Emacs, but it works as a first Lisp IDE.

tarrsalah 5 hours ago 1 reply      
A newbie question, please: how do you deploy CL to production? I mean for long-running programs.
throwaway7645 17 hours ago 4 replies      
"Dear windows user, tell us how this is done for SBCL"

Watch the YouTube video from Baggers. It's a lot more complicated than your average Windows user will want to go through. Then you have to set up Emacs, Quicklisp, etc. I never really knew what Quicklisp was doing, and it made me nervous (I trust VS NuGet).

killin_dan 12 hours ago 3 replies      
The biggest problem with lisp adoption imo is that the first step of every path begins with emacs.

Emacs needs to die for lisp to flourish in a more modern editor.

Light Table was a good start, but we need some power behind similar projects.

I always thought guilemacs was the obvious successor, but it still hasn't happened.

s_kilk 18 hours ago 2 replies      
While people are directing their attention here:

Last year I looked into Common Lisp for a while, but got turned off when I found that there's no distinction between the empty list and boolean false (or nil, in CL-speak).

I found this kinda weird and vaguely off-putting. I don't want to write code to handle the difference between, say, an empty array and false or null in deserialized JSON data.

Can anyone comment on whether this comes up as an actual issue in practice?
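For context, a short illustration of the conflation in question. How JSON libraries cope varies; the keyword convention in the comment below is a common pattern, not a standard, so treat it as an assumption about any particular library:

```lisp
(eq nil '())            ; => T    NIL and the empty list are the same object
(if '() :yes :no)       ; => :NO  the empty list is the false value
(null nil)              ; => T
(listp nil)             ; => T    NIL is both a symbol and the empty list
;; Typical library workaround: decode JSON false/null to keywords such as
;; :FALSE / :NULL so that [] (decoded as NIL) stays distinguishable.
```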

quickoats 19 hours ago 1 reply      
I do not know how to message the guy "Scott", the author of the page, so I am putting this here.

in the "LispWorks CL" page, under "Implementations", the "Notes" section elicidates a mystery about the Personal Edition not recognizing the lisp init files. This is actually a limitation in LispWorks Personal Edition which is described on the link provided to retrieve said edition.

lerax 20 hours ago 0 replies      
ConanRus 20 hours ago 4 replies      
Short answer: don't do that, use Clojure instead. It doesn't have any of listed problems.
cody8295 13 hours ago 0 replies      
Probably the worst implementation of the 5-puzzle problem you can write.

codydallavalle@gmail.com Artificial Intelligence Assignment 1

Problem 09: Write GET-NEW-STATES to implement all possible movements of the empty tile for a given state.

CG-USER(151): (defun get-new-states (state)
                (setf new-states '())
                (cond ((= 0 (first state))
                       (setf new-states (list (list (second state) 0 (third state) (fourth state) (fifth state) (sixth state))
                                              (list (fourth state) (second state) (third state) 0 (fifth state) (sixth state)))))
                      ((= 0 (second state))
                       (setf new-states (list (list (first state) (fifth state) (third state) (fourth state) 0 (sixth state))
                                              (list 0 (first state) (third state) (fourth state) (fifth state) (sixth state))
                                              (list (first state) (third state) 0 (fourth state) (fifth state) (sixth state)))))
                      ((= 0 (third state))
                       (setf new-states (list (list (first state) 0 (second state) (fourth state) (fifth state) (sixth state))
                                              (list (first state) (second state) (sixth state) (fourth state) (fifth state) 0))))
                      ((= 0 (fourth state))
                       (setf new-states (list (list 0 (second state) (third state) (first state) (fifth state) (sixth state))
                                              (list (first state) (second state) (third state) (fifth state) 0 (sixth state)))))
                      ((= 0 (fifth state))
                       (setf new-states (list (list (first state) 0 (third state) (fourth state) (second state) (sixth state))
                                              (list (first state) (second state) (third state) 0 (fourth state) (sixth state))
                                              (list (first state) (second state) (third state) (fourth state) (sixth state) 0))))
                      ((= 0 (sixth state))
                       (setf new-states (list (list (first state) (second state) 0 (fourth state) (fifth state) (third state))
                                              (list (first state) (second state) (third state) (fourth state) 0 (fifth state)))))))


CG-USER(152): (get-new-states '(1 2 3 4 5 0))

((1 2 0 4 5 3) (1 2 3 4 0 5))

CG-USER(153): (get-new-states '(1 2 3 4 0 5))

((1 0 3 4 2 5) (1 2 3 0 4 5) (1 2 3 4 5 0))

CG-USER(154): (get-new-states '(1 2 3 0 4 5))

((0 2 3 1 4 5) (1 2 3 4 0 5))

CG-USER(155): (get-new-states '(1 2 0 3 4 5))

((1 0 2 3 4 5) (1 2 5 3 4 0))

CG-USER(156): (get-new-states '(1 0 2 3 4 5))

((1 4 2 3 0 5) (0 1 2 3 4 5) (1 2 0 3 4 5))

CG-USER(157): (get-new-states '(0 1 2 3 4 5))

((1 0 2 3 4 5) (3 1 2 0 4 5))

lenkite 20 hours ago 5 replies      
I wish an experienced Lisper would explain why one should use Common Lisp over a language like Golang. Golang now has https://github.com/glycerine/zygomys for scripting. For that matter, why would one choose Common Lisp over GNU Guile? (Guile now supports fibers.) What does Common Lisp offer the working programmer that is an advantage over other languages?
Containers vs. Zones vs. Jails vs. VMs jessfraz.com
543 points by adamnemecek  13 hours ago   200 comments top 35
floatboth 2 hours ago 0 replies      
Jails are actually very similar to Linux namespaces / unshare. Much more similar than most people in this thread think.

There's one difference though:

In namespaces, you start with no isolation, from zero, and you add whatever you want: mount, PID, network, hostname, user, IPC namespaces.

In jails, you start with a reasonably secure baseline: processes, users, POSIX IPC, and mounts are always isolated. But! You can isolate the filesystem root or not (by specifying /). You can keep the host networking, or restrict IP addresses, or create a virtual interface. You can isolate SysV IPC (yay postgres!) or keep the host IPC namespace, or ban IPC outright. See? The interesting parts are still flexible! Okay, not as flexible as "sharing PIDs with one jail and IPC with another", but still.

So unlike namespaces, where user isolation is done with weird UID mapping ("uid 1 in the container is uid 1000001 outside") and PID isolation I don't even know how, jails are at their core just one more column in the process table. PID, UID, and now JID (Jail ID). (The host is JID 0.) No need for weird mappings, the system just takes JID into account when answering system calls.

By the way, you definitely can run X11 apps in a jail :) Even with hardware accelerated graphics (just allow /dev/dri in your devfs ruleset).

P.S. one area where Linux did something years before FreeBSD is resource accounting and limits (cgroups). FreeBSD's answer is simple and pleasant to use though: https://www.freebsd.org/cgi/man.cgi?rctl
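The per-process nature of Linux namespaces that the comment contrasts with JIDs is directly visible in /proc. A small sketch (Linux-only):

```python
# Linux exposes each process's namespace memberships as symlinks under
# /proc/<pid>/ns; two processes are in the same namespace iff the link
# targets (inode identifiers) match. This is the closest Linux analog to
# "one more column in the process table".
import os

def ns_ids(pid="self"):
    d = "/proc/%s/ns" % pid
    return {n: os.readlink(os.path.join(d, n)) for n in sorted(os.listdir(d))}

if __name__ == "__main__":
    for name, ident in ns_ids().items():
        print(name, ident)   # e.g.: pid pid:[4026531836]
```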

deathanatos 8 hours ago 5 replies      
While I'm not sure I agree entirely with the "Complexity == Bugs" section, the main point, that containers are not first-class citizens but a (useful) combination of independent mechanisms, is spot-on. This has real repercussions: most people I've spoken to don't know these things exist. They know containers do, they have a very vague idea what containers are, but they have no fundamental understanding of the underlying concepts. (And who can blame them? Really, it was marketed that way.)

For example, pid_namespaces and subreapers are awesome features, and are extremely handy if you have a daemon that needs to keep track of a set of child jobs that may or may not be well behaved. pid_namespaces ensure that if something bad happens to the parent, the children are terminated; they don't blithely continue executing after being reparented to init. Subreapers (if a parent dies, reparent the children to this process, not init) solve the problem of grandchildren getting orphaned to init if the parent dies. Both are excellent features for managing subtrees of processes, which is why they're useful for containers. Just not only containers.

But of course developers aren't going to take advantage of syscalls they have no idea exist.

Although I wish someone could tell me why pid_namespaces are root-only: what's the security risk of allowing unprivileged users to create pid_namespaces?
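The subreaper mechanism described above can be demonstrated in a few lines using prctl(2) via ctypes (Linux-only; PR_SET_CHILD_SUBREAPER is 36, from <sys/prctl.h>). This is a sketch: fork a middle process, let it orphan a grandchild, and observe that the grandchild is reparented to us rather than to init:

```python
import ctypes, os, time

PR_SET_CHILD_SUBREAPER = 36
libc = ctypes.CDLL(None, use_errno=True)

def become_subreaper():
    # Orphans among our descendants now get reparented to us, not to init.
    if libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_CHILD_SUBREAPER) failed")

if __name__ == "__main__":
    become_subreaper()
    me = os.getpid()
    child = os.fork()
    if child == 0:                      # middle process
        if os.fork() == 0:              # grandchild
            time.sleep(0.5)             # outlive the middle process
            # exit 0 iff we were reparented to the subreaper, not PID 1
            os._exit(0 if os.getppid() == me else 1)
        os._exit(0)                     # dies, orphaning the grandchild
    os.waitpid(child, 0)                # reap the middle process
    # The orphaned grandchild is now our child; we get to reap it too.
    _, status = os.waitpid(-1, 0)
    print("reparented to subreaper:", os.WEXITSTATUS(status) == 0)
```

No privileges are required for PR_SET_CHILD_SUBREAPER, which is part of why it is so handy for ordinary daemons and process supervisors.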

dreamcompiler 10 hours ago 13 replies      
Ignorance admission time: I still have no idea what problem containers are supposed to solve. I understand VMs. I understand chroot. I understand SELinux. Hell, I even understand monads a little bit. But I have no idea what containers do or why I should care. And I've tried.
nisa 11 hours ago 3 replies      
As a lowly user: Linux containers are more like gaffer tape around namespaces and cgroups than something like Lego. You want real memory usage in your cgroup? Let's mount some FUSE filesystem: https://github.com/lxc/lxcfs - https://www.cvedetails.com/vulnerability-list/vendor_id-4781...

We have to gaffer tape with AppArmor and SELinux to fix all the holes the kernel doesn't care about: https://github.com/lxc/lxc/blob/master/config/apparmor/conta...

Solaris Zones are more deliberately designed, an evolution of FreeBSD Jails. Okay, the military likely paid for that: https://blogs.oracle.com/darren/entry/overview_of_solaris_ke...

Maybe it's Death Star vs. Lego. But I assume you can survive a lot longer in a Death Star in vacuum than in your Lego spaceship hardened by gaffer tape.

1: I have utmost respect for anyone working on this stuff. No offense, but as a user, the occasional lack of design and implementation of bigger concepts (not as in more code, but better design, more secure) in the Linux world is sad. It's probably the only way to move forward, but you could read on @grsecurity's Twitter years ago that this idea was going to be a fun ride full of security bugs. There might be a better way?

lloydde 11 hours ago 2 replies      
It feels like Ms Frazelle's essay ends abruptly. I was looking forward to the other use cases of non-Linux containers.

I think most people are considering these OS-level virtualization systems for the same or very similar use cases: familiar, scalable, performant and maintainable general purpose computing. Linux containers win because Linux won. Linux didn't have to be designed for OS virt. People have been patient as long as they've continued to see progress -- and been able to rely on hardware virt. Containers are a great example of where, even with all of the diverse stakeholders of Linux, the community continues to be adaptive and create a better and better system at a consistent pace in and around the kernel.

That my $job - 2, Joyent, re-booted Lx-branded zones to make Linux applications run on illumos (a descendant of OpenSolaris) is more than a "can't beat them, join them" strategy, as it allows their Triton (OSS) users full access not only to Linux APIs and toolchains, but to the Docker APIs and image ecosystem, and has been an environment for their own continued participation in microservices evolution.

Although Joyent adds an additional flavor, it targets the same scalable, performant and maintainable cloud/IaaS/PaaS-ish use case. In hindsight, it's crazy that I worked at three companies in a row in this space, Piston Cloud, Joyent, Apcera, and each time I didn't think I'd be competing against my former company, but each time the business models as a result of the ecosystems shifted. Thankfully with $job I'm now a consumer of all of the awesome innovations in this space.

apeace 1 hour ago 0 replies      
In this post the author links to one of her previous posts[0], where she wrote:

> As a proof of concept of unprivileged containers without cgroups I made binctr. Which spawned a mailing list thread for implementing this in runc/libcontainer. Aleksa Sarai has started on a few patches and this might actually be a reality pretty soon!

Does anybody know if this made it into runc/libcontainer? I'm not an expert on these technologies but would love to read through docs if it has been implemented.

[0] https://blog.jessfraz.com/post/getting-towards-real-sandbox-...

tannhaeuser 6 hours ago 0 replies      
A couple of observations from someone not-so-familiar with containers:

If the consensus is that containers are, for the most part, just a way to ship and manage packages along with their dependencies, easing library and host OS coupling, I'm missing a discussion about container runtimes themselves being a dependency. For example, Docker has a quarterly release cadence, I believe. So when your goal was to become independent of OS and library versions, you're now dependent on Docker versions, aren't you? If your goal as an IT manager is to reduce long-term maintenance cost and have the result of an internally developed project run on Docker without having to do a deep dive into the project long after it has been completed, then you may find yourself still not being able to run older Docker images because the host OS/kernel and Docker have evolved since the project was completed. If that's the case, the dependency isolation that Docker provides might prove insufficient for this use case.

Another point: if your goal is to leverage the Docker ecosystem to ultimately save ops costs, managing Docker image landscapes with e.g. Kubernetes (or, to a lesser degree, Mesos) might prove extremely costly after all, since these setups can turn out to be extremely complex, absolutely require expert knowledge of container tech across your ops staff, and are also evolving quickly at the same time.

Another problem and weak point of Docker might be identity management for internally used apps; e.g. containers don't isolate Unix/Linux user/group IDs and permissions, but take away resolution mechanisms like (in the simplest case) /etc/passwd and /etc/group or PAM/LDAP. Hence you routinely need complex replacements for them, adding to the previous point.
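A quick way to see that resolution mechanism in action (plain CPython stdlib; inside a minimal container image the name lookup can fail even though the numeric uid is perfectly valid):

```python
import os
import pwd

uid = os.getuid()
try:
    # getpwuid() consults /etc/passwd (or NSS/PAM/LDAP, per nsswitch.conf)
    name = pwd.getpwuid(uid).pw_name
    print(f"uid {uid} resolves to {name!r}")
except KeyError:
    # In a stripped-down container image there may be no passwd entry
    # at all for this uid -- tools that insist on getpwuid() then break.
    print(f"uid {uid} has no passwd entry")
```

The kernel only ever sees the numeric uid; it's the userland resolution layer that containers routinely leave behind.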

arca_vorago 2 hours ago 2 replies      
As a sysadmin I just want to point out to this mostly-dev crowd that my current favorite method of operations is to have multiple compartmentalized VMs, which then may or may not hold containers or jails.

Why do I do it this way? Because having a full stack VM for each use-case on a good server is realistically not that much more resource hungry than a container, but the benefits are noticeable.

Lots of the core reason stems from security concerns. For example, there are quite a few Microsoft Small Business Server-styled Linux attempts at hitting the business space, but instead of playing to the strengths of modern hardware, they mostly throw every service on the same OS just like SBS does... which is a major weakness. So instead of an AD server that also does DNS and DHCP and the list goes on, each thing in my environments gets its own separate VM (e.g. Samba4 by itself, BIND by itself, ISC Kea by itself, and so on).

Another reason for this is log parsing related. It's much easier to know that when the BIND VM's OSSEC logs go full alert, I know exactly what to fix. On multi-container systems, a single failure or compromise can end up affecting many containers and convoluting the problem/solution process.

Of course, the main weakness of such a system is that any attempt to break out of the VM space illicitly could compromise many systems, but that's why you harden the VMs and have good logging in the first place, do the same to the host system, and use distributed separation of hosts and good backups.

Just some real-world usage from a sysadmin I wanted to convey. I still will do a container, or a VM with many containers, for the devs if needed, but when it comes time to deploy to prod I tend to use a full-stack VM. I'm also open to talking about weaknesses in this system, as I'd be curious to hear what devs think.

To be fair, I still haven't fully caught up with the whole devops movement either, so perhaps I'm behind.

Also, a big shoutout to Proxmox for a virtual environment system, FOSS and production quality since 4.0. I have also run BSD systems with jails in a similar way. The key point of the article is that zones/jails/VMs are top-level isolations and containers are not (but that doesn't make containers bad!)

nikcub 11 hours ago 2 replies      
It's probably a good time to stop using "containers" to mean LXC, considering the new OCI runc spec covers containers on Solaris using Zones and on Windows using Hyper-V:


aaossa 4 hours ago 0 replies      
A little bit off topic, but I've been following Jess for a while and I think that developers like her are great. In my country it's hard to see a happy developer, and she seems to enjoy everything she does. That's why I follow her: because of her great work and great personality. I'm happy to see one of her blog posts here on HN.
cperciva 12 hours ago 3 replies      
The meme image ("Can't have 0days or bugs... if I don't write any code") is incorrect.

You can't have bugs if you don't have any code, but not writing code just means that your bugs are guaranteed to be someone else's bugs. Now, this may be a good thing -- other people's code has probably been reviewed more closely than yours, for one thing -- but using other people's code doesn't make you invulnerable, and it doesn't necessarily match your precise requirements.

If you have a choice between writing 10 lines of code or reusing 100,000 lines of someone else's code, unless you're a truly awful coder you'll end up with fewer bugs if you take the "10 lines of code" option.

jo909 7 hours ago 0 replies      
I think it's important to realize that the reduced isolation of containers can also have pretty significant upsides.

For example monitoring the host and all running containers and all future containers only means running one extra (privileged) container on each host. I don't need to modify the host itself, or any of the other containers, and no matter who builds the containers my monitoring will always work the same.

The same goes for logging. Mainly there is an agreed-upon standard that containers should just log to stdout/stderr, which makes it very flexible to process the logs however you want on the host. But also if your application uses a log file somewhere inside the container, I can start another container (often called "sidecar") with my tools that can have access to that file and pipe it into my logging infrastructure.

If I want, multiple containers can share the same network namespace. So I listen on "localhost:8080" in one container and connect to "localhost:8080" from another, and that just works without any overhead. I can share socket files just the same.

I can run one (privileged) container on each host that starts more containers and bootstraps, for example, a whole Kubernetes cluster with many more components.

You can save yourself a lot of "infrastructure" stuff with containers, because the host provides it or it's done conceptually differently. For example NTP, SSH, cron, syslog, monitoring, configuration management, security updates, DHCP/DNS, network access to internal or external services like package repositories.

My main point is that by embracing what containers are and using that to your advantage, you gain much more than by just viewing them as lightweight virtualisation with lower overhead and a nicer image distribution.

Edit: I want to add that not all of that is necessarily exclusive to containers or mandatory. For example throwing away the whole VM and booting a new one for rolling updates is done a lot, but with containers it became a very integral and universally accepted standard workflow and way of thinking, and you will get looked at funny if you DON'T do it that way.

HugoDaniel 4 hours ago 0 replies      
It doesn't matter how many distinctions you make on these things (first-class, last-class, second-class, poor-class, etc.). These kinds of discussions are always relative.

All is good as long as your decision is conscious of the compromises taken by each approach and what they entail (what other security mechanisms do you have at your disposal? how could they enhance your app? will your solution depend on external tools like Ansible/Puppet/etc.? do you actually need "containers" or jails or [insert your favorite trendy tech here]?).

Running a *BSD or a Linux is a far bigger design decision than what kind of isolation mechanisms you have, as many of the underlying parts are diverging.

AlexanderDhoore 5 hours ago 1 reply      
Nobody has mentioned unikernels yet? It's a bit unrelated to the containers discussion in this thread, but I thought I'd mention it anyway. They let you create an operating system image which includes only the code you need. Nothing more, nothing less. This improves security, because the attack surface is reduced.

It makes a lot of sense to me when I think about how cloud computing works. Most of the time an operating system container, zone, jail, VM... is booted just to run a select number of processes. There is absolutely no need for a general-purpose system. I think unikernels could really shine in this area.

MirageOS is a project that lets you create unikernels. It's written in OCaml, so it's interesting in more than one way. MirageOS images mostly run on Xen, by the way.

[1] https://en.wikipedia.org/wiki/Unikernel

[2] https://mirage.io/

opcenter 11 hours ago 0 replies      
I had to read the post twice before I really got what she was saying. I think the distinction I would make is that while there are many more use cases that you can apply to Containers that may not apply to Jails, Zones, or VMs, the most common use case of "run an app inside a pre-built environment" applies to all of them. Since I believe most users (or potential users) of Containers are only looking at that use case, it's harder to see the differences between the different technologies.

My only hope is that anyone in a position of making a decision on which technology to use can at least explain at a high level the difference between a Container and a VM.

swordswinger12 12 hours ago 2 replies      
I'm not an OS person, so forgive me if this is a stupid question: Lots of people are excited about Intel SGX and similar things. Are there any interesting ways people are thinking about combining, like, Docker containers with SGX enclaves and such? One could imagine (e.g.) using remote attestation to verify an entire container image.
Kiro 7 hours ago 2 replies      
I'm trying to understand something. At my last work we had a big problem with "works for me". We started using Vagrant and all those problems disappeared. Then Docker became popular and all of a sudden people wanted to use that instead.

But is Docker really suitable for this? While each Vagrant instance is exactly the same, Docker runs on the host system. It feels like it will be prone to all sorts of dissimilarities.

brotherjerky 11 hours ago 0 replies      
As a novice, this was a great informative read. More posts like this on the Internet, please!
fisholito 3 hours ago 0 replies      
"container is not a real thing". But what could we say about real things within software field?
throw2016 3 hours ago 0 replies      
It's a sad reflection on a technical community when, three years later, many still do not seem to clearly understand the bare basics of how containers work. HN has been complicit in massively hyping containers without a corresponding understanding of how they work outside the context of Docker.

How many container users understand namespaces and how easy it is to launch a process in its own namespace, both as root and as a non-root user? Or know overlay filesystems and how they work? Or Linux basics like bind mounts and networking?
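For the curious: on Linux the namespaces a process belongs to are directly visible under /proc/self/ns, no Docker required (this sketch assumes a Linux /proc):

```python
import os

# Each entry is one namespace this process belongs to; the inode
# number in the symlink target identifies the namespace instance.
# Two processes in the same namespace see the same inode.
entries = sorted(os.listdir("/proc/self/ns"))
for ns in entries:
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))
```

Run the same thing inside a container and the inode numbers change; that difference is essentially all a "container" is at the kernel level.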

The Docker team leveraged LXC to grow, from its tooling to container images, but didn't shy away from rubbishing it and misleading users about what it is. LXC was presented as 'some low-level kernel layer' when it has always been a front-end manager for containers, like Docker; the only difference is that LXC launches a process manager in the container and Docker doesn't. Just clearly articulating this in the beginning would have led to a much better understanding of containers, and of Docker itself, among users and the wider community.

How many Docker users know the authors of aufs and overlayfs? The hype is so intense around the front-end tools that few know, or care to know, the underlying ones. This has led to a complete lack of understanding of how things work and an unhealthy ecosystem, as critical back-end tools do not get funding and recognition, with the focus solely on front ends as they 'wrap' projects, make things more complex, and build walls to justify their value. Launch 5000 nodes and 500000 containers: how many users need this?

And this complexity has a huge cost in technical debt: when you are scaling (as many stories here report) and when you are trying to figure out the ecosystem, so much so that it's now at risk of putting people off containers.

A stateless PaaS has never been the general use case; it's a single use case pushed as a generic solution because that's Docker's origin as a PaaS provider. The whole problem with scaling, for the vast majority, is managing state. Running stateless containers or instances does not even begin to solve that in any remote way. Yes, it sounds good to launch 5000 stateless instances, but how is that useful? Without state, scaling has never been a problem; a few bash scripts (which is what Dockerfiles are) will do it. But now, because of the hype around Docker and Kubernetes, users must deal with needless complexity around basic process management, networking, and storage, and re-architect their stack to make it stateless, without any tools to manage state. Congratulations on becoming a PaaS provider.

kraemate 4 hours ago 0 replies      
We wrote a paper comparing containers and VMs for the middleware conference: http://people.cs.umass.edu/~prateeks/papers/a1-sharma.pdf
qaq 12 hours ago 1 reply      
SmartOS runs containers in zones; you get the best of both worlds.
patrickg_zill 11 hours ago 1 reply      
In earlier versions of Proxmox the OpenVZ VMs were called containers and the KVM VMs were called VMs. So it is pretty confusing overall.

For myself, I would point out that Zones, Jails, OpenVZ and LXC, even KVM, all pretend that they are fully separate from the host node OS.

Docker et al. do not pretend this: in fact, if you are running Apache on your host system and try to run a Dockerized web server on port 80, the Docker container might refuse to start. The other methods mentioned can't even determine what they are running under.

jtchang 11 hours ago 3 replies      
I don't understand why anyone would say to give up containers and just use Zones or VMs? Containers are solving a very real problem. The problem is that containers weren't as well marketed before Docker (or as user friendly).
benmmurphy 5 hours ago 1 reply      
I think Jess is on the money here. The complexity of Linux containers vs. zones shows up in two ways:

1) the linux kernel container primitives are implemented in ways that are more complicated. for example in zones pid separation is implemented by just checking the zone_id and if the zone_id is different then processes can't access each other. this also means that in zones pids are unique and you can't have two processes from two different zones with the same pid [with the exception: i believe they may have hacked something in to handle pid1 on linux].

similarly, in zones there is no user mapping if you are root inside the zone you are also root outside of the zone. the files you create inside a zone are uid: 0 and also uid: 0 outside the zone.

if you look at how device permission is handled in linux we have cgroups that controls what devices can be accessed and created. while in solaris zones they use the existing Role Based Access Control and device visibility. so inside a zone you can either have permission to create all devices (very bad for security) or create no devices. In zones access to devices is mediated by whatever devices the administrator has created in your zone.

in zones there is no mount namespace instead there is something that is very similar to chroot. it is just a vnode in your proc struct where you are restricted from going above. zones have mostly been implemented by just adding a zone_id to the process struct and leveraging features in solaris that already existed [i guess the big exception would be the network virtualization in solaris] while in linux there are all these complicated namespace things.

this complexity means there are probably going to be more bugs in the linux kernel implementation. however, because you don't have as much fine grain control this can also create security bugs in your zone deployment. for example i found an issue in joyent's version of docker where you could trick the global zone into creating device files in your zone and these could be used to compromise the system. under a default lxc container this would not be possible because cgroups would prevent you from accessing the device even if you could trick someone else into creating it. you also have to be careful in zones with child zones getting access to files inside the parent zone. if you ever leak a filesystem fd or hard link into the child zone from the parent zone then all bets are off because the child is able to write into the parent zone as root. (i believe this situation was covered in a zone paper where they describe the risk of a non-privileged user in the global zone collaborating with a root user in a child zone to escalate privileges on the system)

2) because all the pieces are separate in linux then something has to put it together and make sure all the pieces are put together correctly. like i wouldn't trust sysadmins to do this on their own and luckily there are projects like lxc/lxd/docker etc that assemble these pieces in a secure way.

bingo_cannon 12 hours ago 3 replies      
I have been learning about containers fairly recently. What are these security vulnerabilities that the post talks about? I haven't come across any docs that mention security yet.
jlebrech 6 hours ago 0 replies      
before i even heard of containers i found out about jails and wondered if you could serve each users jail using an nginx config file in their home directory.
irrational 9 hours ago 0 replies      
Where does Docker fit into all of this? Asking for a friend.
peterwwillis 12 hours ago 1 reply      
betaby 12 hours ago 5 replies      
Some points are just wrong. Containers and jails have many design similarities which were dismissed by the author. Notably PIDs: containers and jails are nearly identical in that regard. You can kind of have one leg here and another there, although that's harder to achieve with FreeBSD jails; both implementations do not hide PIDs from the host system. Networking: jails can run on top of a non-virtualized IP/net dev, and containers can run in such modes as well. The link is someone's rant without tech details.
jlebrech 7 hours ago 0 replies      
"Lego" it's "Lego"
ianburrell 12 hours ago 2 replies      
I think that the important feature of containers is that they are running from an image containing the application. Zones, jails, and VMs are other isolation mechanisms that could be used to run containers. Running an application image unpacked into a VM would be a container.

One place where the difference in namespaces is visible is with Kubernetes pods. Containers running in a pod share the network and volume namespaces.

rickhanlonii 12 hours ago 1 reply      
Great rant about something way over my head by someone who knows way more than me about it!

If I can transfer to a domain I understand better (front-end dev): It sounds like VMs, Jails, and Zones are like Ember.js: it comes with everything built in and is simple if you stay within the design.

Containers are more like React: it gives you the pieces to build it yourself, and building it all yourself can lead to complexity, bugs, and performance issues.

Disclosure: I have no idea what I'm talking about

twic 6 hours ago 1 reply      
> Legos

"LEGO is always an adjective. So LEGO bricks, LEGO elements, LEGO sets, etc. Never, ever "legos."" [1]

The other one that gets me is "math". I know it's not really plural, but "mathematics" has an s on the end, so it's "maths"! Or do Americans say "stat" for "statistics" as well?


How much your computer can do in a second computers-are-fast.github.io
598 points by srirangr  3 days ago   232 comments top 32
userbinator 3 days ago 10 replies      
Alternatively, this could be titled "do you know how much your computer could do in a second but isn't because of bad design choices, overengineered bloated systems, and dogmatic adherence to the 'premature optimisation' myth?"

Computers are fast, but not if all that speed is wasted.

A recent related article: https://news.ycombinator.com/item?id=13940014

gizmo 3 days ago 2 replies      
Pretty cool, but a number of the questions are totally unknowable.

For instance, the question about web requests to Google. Depending on your internet connection you've got more than an order of magnitude difference in the outcome.

In the question about SSD performance the only hint we have is that the computer has "an SSD", but a modern PCIe SSD like in the new Macbook pro is over 10 times faster than the SSDs we got just 5 years ago.

The question about JSON/Msgpack parsing is just about the implementation. Is the python msgpack library a pure python library or is the work of the entire unpackb() call done in C?

The bcrypt question depends entirely on the number of rounds. The default happens to be 12. Had the default been 4 the answer would have been 1000 hashes a second instead of 3. Is the python md5 library written in C? If so, the program is indistinguishable from piping data to md5sum from bash. Otherwise it's going to be at least an order of magnitude slower.

So I liked these exercises, but I liked the C questions best because there you can look at the code and figure out how much work the CPU/Disk is doing. Questions that can be reduced to "what language is this python library written in" aren't as insightful.
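To gizmo's point about md5: in CPython the loop below is interpreted, but each digest is computed in C (typically OpenSSL), so throughput is dominated by the C code. A rough timing sketch (the absolute numbers will vary by machine):

```python
import hashlib
import time

data = b"x" * (1 << 20)  # 1 MiB of input per hash call

start = time.perf_counter()
deadline = start + 0.25  # measure for a quarter second
n = 0
while time.perf_counter() < deadline:
    hashlib.md5(data).digest()  # work happens in C, not Python
    n += 1
elapsed = time.perf_counter() - start
print(f"MD5: ~{n / elapsed:.0f} MiB/s (loop in Python, digest in C)")
```

Swap in a pure-Python MD5 implementation and the same loop slows down by orders of magnitude, which is exactly why "what language is this library written in" dominates these benchmarks.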

realo 2 days ago 6 replies      
Yes, modern computers are fast. How fast?

The speed of light is about 300,000 km/s. That translates to roughly 1 ns per foot (yeah, I mix up my units... I'm Canadian...)

THUS, a computer with a clock speed of 2 GHz will be able to execute, on a single core/thread, about 4 (four!) single-clock instructions between the moment photons leave your screen and the moment they arrive at your eye 2 feet (roughly) later.

_That_ should give you an idea of how fast modern computers really are.

And I _still_ wait quite a bit when starting up Microsoft Word.
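The arithmetic behind that "four instructions" figure, as a sketch (using the 2 GHz clock and 2 feet from the comment above):

```python
clock_hz = 2e9            # 2 GHz, as in the comment
c_m_per_s = 299_792_458   # speed of light in vacuum
distance_m = 2 * 0.3048   # two feet, in metres

travel_s = distance_m / c_m_per_s      # photon flight time
cycles = travel_s * clock_hz           # clock cycles in that window
print(f"{cycles:.1f} cycles while the photons are in flight")
```

About 4.1 cycles, matching the back-of-the-envelope claim (light covers roughly one foot per nanosecond, and a 2 GHz core ticks twice per nanosecond).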

munificent 2 days ago 5 replies      
If, like me, you spend most of your time in high-level, garbage collected "scripting" languages, it's really worth spending a little time writing a few simple C applications from scratch. It is astonishing how fast a computer is without the overhead most modern languages bring in.

That overhead adds tons of value, certainly. I still use higher level languages most of the time. But it's useful to have a sense of how fast you could make some computation go if you really needed to.
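For a rough feel of the overhead being described, here is a trivial add loop in pure Python (timings vary by machine; the equivalent C loop typically runs orders of magnitude faster, and an optimizing compiler may reduce it to a constant):

```python
import time

N = 5_000_000

start = time.perf_counter()
total = 0
for i in range(N):
    total += i  # each iteration is a full interpreted bytecode dispatch
elapsed = time.perf_counter() - start

print(f"pure-Python adds: ~{N / elapsed / 1e6:.0f} M/s")
```

On a typical laptop this lands in the tens of millions of adds per second, versus billions for the same loop compiled from C.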

chacham15 2 days ago 5 replies      
Be careful what conclusions you attempt to draw from examples when you aren't sure what exactly is happening. These examples are actually very wrong and misleading.

Take, for example, the first code snippet about how many loops you can run in 1 second. The OP fails to realize that since the loop isn't producing anything which actually gets used, the compiler is free to optimize it out. You can see that that's exactly what it does here: https://godbolt.org/g/NWa5yZ All it does is call strtol and then exit. It isn't even running a loop.

dom0 3 days ago 1 reply      
More impressively, sum.c could likely go an order of magnitude or so faster when optimized.

> Friends who do high performance networking say it's possible to get network roundtrips of 250ns (!!!),

Well, stuff like InfiniBand is less a network and more like a bus (e.g. RDMA, atomic ops like fetch-and-add or CAS).

> write_to_memory.py

Is also interesting because this is dominated by inefficiencies in the API and implementation and not actually limited by the memory subsystem.

> msgpack_parse.py

Again, a large chunk goes into inefficiencies, not so much the actual work. This is a common pattern in highly abstracted software. msgpack-c mostly works at >200 MB/s or so (obviously a lot faster if you have lots of RAWs or STRs and little structure). Funnily enough, if you link against it and traverse stuff, then a lot of time is spent doing traversals, and not the actual unpacking (in some analysis I've seen a ~1/3 - 2/3 split). So the cost of abstraction also bites here.

If you toy around with ZeroMQ you can see that you'll be able to send around 3 million msg/s between threads (PUSH/PULL) from C or C++, around 300k using pyzmq (this factor 10 is sometimes called "interpreter tax"), but only around 7000 or so if you try to send Python objects using send_pyobj (which uses Pickle). That's a factor 430.

Eliezer 2 days ago 1 reply      
What an excellent teaching pattern - you're far more likely to remember what you learned if you first stop to think and record your own guess, and this is excellent UI and UX for doing that routinely and inline.
bane 2 days ago 0 replies      
This is awesome. The real lesson here is, when you make a thing, compare its performance to these kinds of expected numbers and if you're not within the same order of magnitude speedwise, you've probably screwed up somewhere.

My favorite writeups are the ones that gloat about achieving hundreds of pages served per second per server. That's terrible, and nobody today even understands that.

alkonaut 2 days ago 0 replies      
Don't some of these examples run in O(1) time because the value in the loop isn't used? E.g. in the first example, 0 is returned instead of the sum.

Obviously we are talking about real-world C compilers with real-world optimizations, so presumably we'd have to also consider whether the loop is executed at all?

paulsutter 2 days ago 1 reply      
That's nothing. Here's code that does 77 GFLOPS on a single Broadwell x86 core. Yes, that's 77 billion operations per second.


asrp 2 days ago 0 replies      
This reminds me of "Latency Numbers Every Programmer Should Know"


Edit: Just realized halfway through that there's already a link to this from their page!

gburt 2 days ago 0 replies      
The `bcrypt` question seems out-of-place. It has a configurable cost parameter, so almost any of the answers is correct.
tomc1985 2 days ago 0 replies      
One second on what?

A Core i7? A raspberry Pi? A weird octo-core dual-speed ODROID? An old i915-based Celeron? My cell phone? An arduino?

"Your computer" has meant all the above to me, just in the last few weeks. The author's disinclination to describe the kind of hardware this code is running on -- other than "a new laptop" -- strikes me as kind of odd.

bch 2 days ago 0 replies      
Hard to believe there are 124 comments here and nobody has brought up Grace Hopper's talk[0][1] yet. With good humour she gives a example of what various devices' latency are, and a simple tool to comprehend the cost and orders of magnitude.

 [0] short - https://www.youtube.com/watch?v=JEpsKnWZrJ8 [1] long - https://www.youtube.com/watch?v=ZR0ujwlvbkQ

gibsjose 2 days ago 0 replies      
I'm curious to see the data collected on guesses. Some were quite difficult to guess, like hashes per second with bcrypt not knowing the cost factor, but I guess we can assume some sane default.

I would have really liked to see all these numbers in C, and other languages for that matter. Perhaps add a dropdown box to select the language from a handful of options?

alcuadrado 2 days ago 0 replies      
This reminds me of this email from LuaJIT's list:

Computers are fast, or, a moment of appreciation for LuaJIT https://groups.google.com/forum/#!msg/snabb-devel/otVxZOj9dL...

norswap 2 days ago 0 replies      
Brilliant! I'd like to see those numbers summarized somewhere though, a bit like the latency numbers every programmer should know: https://gist.github.com/jboner/2841832 (visual: https://i.imgur.com/k0t1e.png)
ge96 2 days ago 0 replies      
I came across this "article" before; I feel like I remember it under a different title like "language speed differences" or something. Or maybe that's another article by the same author/site/format.
sriku 2 days ago 0 replies      
The grep example should search for one character. Grep can skip bytes, so longer search strings are faster to search for. On my machine, I see 22%-35% more time taken if I change "grep blah" to "grep b".
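CPython's substring search uses a similar skip-ahead strategy (a variant of the two-way string-matching algorithm), so the effect can be poked at from Python too. This is only a sketch; which needle wins depends on the input, the machine, and the interpreter version, so no particular ordering is asserted here:

```python
import time

haystack = "a" * 10_000_000  # the needle never occurs

def time_find(needle, reps=5):
    """Time repeated failed searches for `needle` in the 10 MB haystack."""
    start = time.perf_counter()
    for _ in range(reps):
        assert haystack.find(needle) == -1
    return time.perf_counter() - start

t_short = time_find("b")
t_long = time_find("blah")
print(f"'b': {t_short:.3f}s  'blah': {t_long:.3f}s")
```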
partycoder 2 days ago 0 replies      
Computers are fast unless your algorithm is quadratic or worse, then there's no computer to help you.
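A classic way to see this bite in Python is deduplication: list membership scans the whole list, making the loop quadratic overall, while a set makes it linear. A small sketch (sizes chosen to keep it quick):

```python
import time

def dedupe_quadratic(items):
    """O(n^2): each `in` test scans the whole list built so far."""
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedupe_linear(items):
    """O(n): set membership is amortized constant time."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = list(range(10_000)) * 2
for fn in (dedupe_quadratic, dedupe_linear):
    start = time.perf_counter()
    result = fn(data)
    assert result == list(range(10_000))
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```

Grow the input 10x and the quadratic version slows down roughly 100x; no amount of hardware rescues that for long.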
urza 2 days ago 0 replies      
Anyone care to rewrite these in C#? I am really surprised how fast these Python scripts are and I would like to see a comparison with equivalent tasks in C# to see where it stands.
Lxr 2 days ago 4 replies      
Why isn't the first Python loop (that does nothing but pass) optimised away completely?
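The short answer is that CPython performs essentially no such optimization: it compiles straight to bytecode and executes it faithfully, so the empty loop really runs, iteration by iteration (a JIT like PyPy would eliminate it). One way to see the surviving loop machinery is to disassemble it:

```python
import dis

def empty_loop():
    for _ in range(10_000_000):
        pass

# The FOR_ITER opcode and the backward jump are still present in the
# compiled bytecode, so the interpreter dispatches them 10M times.
opnames = {instr.opname for instr in dis.get_instructions(empty_loop)}
print(sorted(opnames))
```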
samirm 1 day ago 0 replies      
The last question seems really misleading. Most modern CPUs have a cache size of 8MB (max), yet the answer is >300MB?
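One point worth noting: that question measures streaming writes, which pass through the cache on their way to main memory, so the limit is memory bandwidth (several GB/s on typical hardware), not cache capacity. A rough sketch of writing 300 MB in one pass (numbers are machine-dependent; this is illustrative only):

```python
import time

size = 300 * 10**6  # 300 MB
start = time.perf_counter()
buf = b"\x00" * size  # one streaming write of a 300 MB buffer
elapsed = time.perf_counter() - start
print(f"wrote {size / 1e6:.0f} MB in {elapsed:.4f}s "
      f"(~{size / elapsed / 1e9:.1f} GB/s)")
```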
thomastjeffery 2 days ago 0 replies      
Or "how fast can one of my 8 CPU cores run a for loop?" To put that in perspective: all 8 cores together give me about 40gflops. I have 2 GPUs that each give me more than 5000gflops.
kobeya 2 days ago 0 replies      
Was disappointed to find that nearly all the examples were Python and shell script. I'm not interested in knowing random trivia about how slow various interpreters are.
tim333 2 days ago 0 replies      
Or running Windows Vista you can right click and display a menu in one second plus about 29 other seconds.
ilaksh 1 day ago 0 replies      
NVMe SSD can be up to 10X faster than SATA.
brianwawok 2 days ago 0 replies      
> GitHub Pages is temporarily down for maintenance.


wtbob 2 days ago 0 replies      
Well, my computer won't display an image apparently inserted with JavaScript, although it could if I wanted to grant execute privileges on it to computers-are-fast.github.io

Does anyone have a link to the image(s)?

d--b 3 days ago 0 replies      
or "computers are fast, so we might just slow things down by using python for numerical calculations"
joelthelion 2 days ago 0 replies      
This could make a pretty good hiring test. Not expecting perfect answers, but a rough correlation with the results, and some good explanations.
grepthisab 2 days ago 2 replies      
Edit: I'm an idiot
Netflix expected to spend over $6B on original and acquired programming in 2017 business-standard.com
437 points by jaimefjorge  2 days ago   367 comments top 49
irishloop 1 day ago 16 replies      
I'm amazed how many people on HN complain about the content on Netflix. Especially compared to any other streaming service, such as Amazon or Hulu, they continue to provide much higher-end content for my dollar, as well as a (far) superior UI experience.

For me, the stand-up specials like Dave Chapelle and Louis CK alone are huge and welcome additions, never mind the many, very good lesser known comedy specials on there.

Their agreement with Dreamworks has only helped, allowing me to enjoy surprisingly good films like Zootopia.

Even great cable channels like FX (The Americans, Legion) and AMC (Mad Men, Better Call Saul) are not available for any reasonable price as standalone cable packages, so I'm not sure what the comparison is in terms of quality. HBO at 15/month?

But even HBO shows are hit-or-miss. And while their movie selection is generally pretty great, I still find myself not watching it all that much.

It seems to me that Netflix provides a pretty high level of content at their price point.

thirdsun 2 days ago 16 replies      
I really hope they don't lose focus on the quality of their original content.

I understand that not everything Netflix does is tailored to me, particularly due to Netflix' growing, increasingly mainstream audience, diverse and varying preferences, more ground to cover, etc.

However lately Netflix' content simply seems to lack substance to me. It feels as if it's just the superficial result of throwing a promising combination of those very specific tags/categories the service is famous for onto the assembly line and ending up with a show or film that, while ticking all the boxes and not being bad at all, is still pretty far off the masterpieces of the medium. I'm not sure if classics like The Wire, The Sopranos or Mad Men could have been created with such a formulaic approach, and I'm missing shows of that caliber on Netflix. Maybe the service should offer brilliant content creators the freedom and opportunity to do the shows they want to do, instead of dictating the theme and framework.

flexie 2 days ago 4 replies      
To me, it seems like Hollywood is betting everything on 3D movies in the fantasy/superhero/werewolf/supernatural abilities/animated/zombie genre(s).

That's roughly 3/4 of what the three cinema chains in my city show. That leaves some 25 percent to the remotely realistic movies - the ones with characters and stories that could be true, at least in a distant future. There are nights where the only movies the cinemas show, are fantastic ones. Where the only cinema experience would involve wearing 3D glasses and watching childish characters save the world.

Personally, I am fine with the establishment taking some beating. But I sure hope there will be more than just a handful of providers of content. Hollywood, HBO, Netflix, Amazon. That's not nearly enough.

meesterdude 2 days ago 8 replies      
I've liked some of netflix's original content - house of cards was the first - but a LOT of it is trash too. I don't doubt they have numbers that lead them to create these shows... but i question the scale of it sometimes. Of the netflix originals, i've only enjoyed a handful.

Meanwhile, I've enjoyed a number of "low-budget" shows they've had (like ice pilots) and a number of the kung-fu movies they have available. But to be fair, a LOT of the low-budget stuff they have is trash too.

It's clear (to me) that there is some effort to pad their content, to seem like they have a fuller collection, even if it's only a collection a raccoon would love. This, taken with the idea they tout of "having only the best performing people", displays a mismatch between cultural ideals and actual output.

For me, I want netflix to succeed because another content producer is a welcome addition to the scene, and i hope they stay independent and don't get bought up by disney. I know in a few years they'll have a collection of original content to rival the older players.

lefstathiou 2 days ago 5 replies      
Netflix was forced into this. For a while (I no longer look at the data) content costs were increasing at 8-10% per year which is unsustainable for distributors. The CEO of DISH once suggested on an earnings call that there should be anti trust inquiries into content pricing.

Additionally, Netflix's international growth was hampered by their inability to efficiently negotiate international distribution rights.

Belphemur 2 days ago 3 replies      
When you see that the content holder make it harder and harder for Netflix to license their content, it makes senses to invest so much money on creating their own content.

It seems in the end, we're really returning to the cable era.

Be ready for packages giving you access to Netflix, Amazon, and channels...

Fishman343 2 days ago 5 replies      
I thought this number seemed absolutely nuts to begin with, but actually, given their ~100 million subscribers, it's pretty much inline with what the BBC might spend in the same sort of set up.

As an observer, what still seems odd to me is the low number of shows or movies they are producing with that massive budget - last year "The Get Down" apparently cost $16 million per episode while the entire, huge hit, first season of "Stranger Things" cost $13 million.

I would have thought that would inspire them to look for more of these low budget, low risk, big payoff shows, especially when you run a subscription service - surely a series that people binge over 2 weeks, with the potential for more seasons to keep people subscribing is better for you than a one time film? Nope, $100 Million on a single film this year.

aresant 2 days ago 1 reply      
That's about $65/member per year at the most recent count of ~92m members.

(1) https://mobile.nytimes.com/2017/01/18/business/netflix-profi...

TylerH 1 day ago 0 replies      
I love Netflix, but they really need to invest in some more depth for the UI.

Let me curate my own lists and my own history per profile, including removing stuff from dynamic lists. I don't want to have to go into my account settings (something only the account owner can do) to remove recently watched items from my account history and then have to wait 24 hours for it to actually happen.

Let me build new lists and manually dump a tv show or movie into them.

Let me search all offerings, sortable and filterable alphabetically and/or by release date.

Let me view my profile's total viewing history if I'm interested. Even for titles that are no longer viewable on Netflix. A neat feature would be to group these by title. So you can show just "The Office" or "Game of Thrones" or "The West Wing", which can be expanded to a list of seasons, each of which can be expanded to a list of episodes, showing which ones I've watched and which ones I haven't.

--- --- ---

As enjoyable and useful as Netflix is, the fact that it still remains essentially a completely non-customizable list of "what's popular" is just incredibly lazy on their part.

tboyd47 1 day ago 3 replies      
> TV stars are demanding movie star salaries of some $250,000 per episode when they previously were content with half that

One major result of Netflix's rise is that the TV format for entertainment is starting to be seen as on par with or better than the feature film format.

There's something obvious about this from a viewer's standpoint. People are naturally drawn to abundance and regularity. Rather than having 100 discussions about which movie to watch together with my wife over the course of a year, then find that only 20 or so of them are on Netflix, and maybe 5 of them turn out to be stinkers, we can simply pick a show we both know is about a B+, and watch 100 episodes, not ever being disappointed. Also, if I watch the pilot of a great show, then I know I have a whole season to go through, with days of entertainment. If I watch a great movie, then welp, I've just watched a great movie and maybe in a few years there will be a sequel.

From an artistic standpoint, there are freedoms that open up due to having established characters, routine plot conventions, etc. There was an article on HN a while back where Conan O'Brien mentioned that this is why he did exactly the same walk to the stage every single night, or something of that nature.

yk 2 days ago 2 replies      
Well, the sane way to distribute content when the only thing that has non-negligible costs is production, not distribution, [1] is to have some kind of infrastructure provider that provides distribution and billing and finances production. However having a monopoly has ugly economic effects and having a single entity that dictates media production has ugly social effects.

I guess both Amazon and Netflix try to capture precisely that role, in which case they could charge almost whatever they want while not being forced by competition to have high quality (and therefore expensive) content. Plus I am afraid that the market tends to produce a monopoly, because an individual subscriber will look for the service that has more content.

[1] Hetzner currently charges EUR 1.40 / TB of additional traffic, so marginal distribution costs are in the range of EUR 10^-3 / Movie.
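For the curious, the footnote's arithmetic works out as follows, assuming a hypothetical ~4 GB HD movie stream (the movie size is my assumption, not a figure from the comment):

```python
cost_per_tb_eur = 1.40   # Hetzner's quoted price per TB of additional traffic
movie_size_gb = 4        # assumed size of one HD movie stream (hypothetical)
cost_per_movie = cost_per_tb_eur * movie_size_gb / 1000  # 1 TB = 1000 GB
print(f"EUR {cost_per_movie:.4f} per movie")  # → EUR 0.0056 per movie
```

Which is indeed in the EUR 10^-3 range per movie claimed above.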

jessriedel 2 days ago 2 replies      
Re:paywall, the story is also syndicated on Business Standard. Here are the AMP


and non-AMP versions


deegles 1 day ago 1 reply      
We've already hit "peak content" for Netflix, which I define as "more hours of new content released than I'll watch in a year."

I would estimate I watch about 200 hours a year, Netflix is releasing 1,000 hours of content this year[0]. This trend will only accelerate, especially considering all of the other providers pouring money into content.

I wish that video content licensing could be regulated similar to radio music, where the license holder receives a flat rate per play (this could be adjusted to be per-minute with modifiers for SD/HD/etc). That would open up a whole new market of streaming websites that could focus on cataloging and recommending content, instead of me having to pay for many websites full of content that I cannot physically spend the time to find the best of. Some streaming devices have cross-service search functionality built in, but it could be much better.

[0] http://bgr.com/2016/10/19/netflix-originals-1000-hours-progr...

anothercomment 1 day ago 0 replies      
I often find it a bit funny when the streaming providers advertise their exclusive content. As if it was a bonus for subscribers, when really it is part of their fight for becoming a monopoly and tying users to them.

Best for users would be to have all content available everywhere, not having to subscribe to multiple streaming providers.

That said, if that is the game they have to play, I am happy if at least it results in some worthwhile series.

JohnJamesRambo 2 days ago 0 replies      
I don't want this. I just want them to make more good movies available. I don't like Netflix originals and I hate series type shows. I know everyone doesn't feel this way though.
6stringmerc 1 day ago 1 reply      
As a writer, I want as much industry competition inflating screenwriting demand and prices accordingly. If it ever gets unsustainable, okay, I'm sure the market will react accordingly. For now though, I think it's a great time to be in Content Creation and what Netflix is doing is, no doubt, certainly one definition of "industry disruption" that gets noted frequently. Messy business, that, and risky, but here we are and Netflix is still chugging along.
JackFr 1 day ago 0 replies      
The Netflix version of A Series of Unfortunate Events was orders of magnitude better than theatrically released abomination starring Jim Carrey.
barking 2 days ago 1 reply      
I got rid of sky last year. I was paying a multiple of what netflix costs and had to put up with what seemed like 5 minutes of ads for every 10 minutes of a show. It made GOT completely unwatchable.

I am under no illusions about how netflix might behave if they ever achieve a monolithic dominance but so far it's great.

rdlecler1 2 days ago 1 reply      
I find it hard to believe that NetFlix should be worth $60b on $8b of revenue when Time-Warner (which owns HBO) has a $75B market cap on $25b of revenue. NetFlix had higher EPS in 2014 so scale is not a great argument here.
thomastjeffery 1 day ago 0 replies      
I'm happy about the prospect of Hollywood's death, but I'm disappointed with Netflix's current direction. Since House of Cards they have been creating fantastic quality work, but due to senseless DRM constraints, I can't even watch any of it in 4k without spending ~$100 on specialized hardware, even though my PC is more than proficient enough at video playback.
maverick_iceman 1 day ago 1 reply      
I think Netflix is diluting the quality of their original content in the race of creating more and more minutes. Initial Netflix offerings like House of Cards were awesome, it has gone downhill since then. The Marvel offerings are awfully slow paced and boring (except maybe season one of Daredevil). I've been on the verge of cancelling my Netflix subscription recently and will probably do unless they significantly improve their original content.
ctdonath 2 days ago 5 replies      
The promise of Netflix was "the long tail": an ultimate video library of all but perhaps the very latest content.

I'm not interested in "Netflix originals". Focus on the core competency of archiving & delivery; let others make the content.

I've worked at IBM, Smith Corona, Kodak, CNN, AT&T, others - a major takeaway is: businesses which don't stick to their core competency/purpose die.

literallycancer 1 day ago 0 replies      
Too bad the quality is pretty bad[1], even with the most expensive subscription. I'm surprised people don't mind somehow. Perhaps most of their users are watching the content on laptops?

And since you can only watch it in the browser, you can't use things like madVR or frame interpolation tools to get smoother panning scenes.

1 - Even a Blu-ray rip of around 3GB looks better. Think a 2 to 2.5 hour movie.

echelon 1 day ago 0 replies      
Is there room for new challengers in this space, or are all of the media incumbents going to lock down the rest of the digital entertainment space?

I live in Atlanta, and the cost of production here is much cheaper than California. I've been mulling over the merits of someone here launching a streaming platform for some of the locally-produced content.

jordanpg 1 day ago 0 replies      
Whatever you might think about their streaming listings, their DVD listings still contain virtually everything, and they stay there indefinitely. And yet I hear that their DVD business is hemorrhaging cash and has numbered days.

This DVD business is the only place I am aware of for renting a great many movies. RedBox has very limited offerings. If it ever does fold, consider that for movies that don't appear in anyone's streaming catalog or a catalog you subscribe to, the only other (legal) option you'll have is to buy the DVD somewhere.

I don't know what things will look like in 10 years technologically, but if the Netflix DVD business ever goes away, there will be movies that are permanently inaccessible for rental.

jccalhoun 2 days ago 0 replies      
Looking at this list, I haven't even heard of at least 1/5 of the English language shows Netflix already has released https://en.wikipedia.org/wiki/List_of_original_programs_dist...
Jedd 2 days ago 0 replies      
> You just cant compete with someone coming in with fresh money, low overhead and a lot less baggage than you, said Darrell Miller, an entertainment lawyer ...

That's such a delicious juxtaposition of wistful observation, and observer's job title (or 'contribution to society' if you prefer).

1ba9115454 2 days ago 0 replies      
For me the sweet spot is a season with around 10 episodes that tells a complete story.

Bosch over on Amazon was pretty good.

sgwealti 2 days ago 1 reply      
Hopefully they'll green-light another season of MST3K.
matteuan 2 days ago 0 replies      
From what I understood, Hollywood is complaining about positive competition. Prices are higher because there are few actors and a scarcity of good staff, which is a normal supply-and-demand reaction. So basically, they are complaining that there is a business model that works better than theirs.
smaili 1 day ago 1 reply      
I wonder if their focus on original content will open the door for a Blockbuster-like revival. It was nice growing up knowing there was always a reliable place filled with the latest movie releases as well as all of the classics.
erelde 2 days ago 0 replies      
That's 1.3 shows a week. 15 hours. Podcasts and tv blogs will have work for quite some time.
junto 2 days ago 3 replies      
I'm very much looking forward to when they finally release the second season of "The Expanse" and "Sense8".

I don't understand why the Expanse series two is being held back from Europe. Is it already available on Netflix in the US?

kingmanaz 2 days ago 0 replies      
>Netflix expected to spend over $6B on original and acquired programming in 2017

...yet all I watched was Columbo, Poirot, Midsomer Murders and Murder She Wrote.

If only some of this "original content" was original in being upbeat or uplifting.

Keyframe 1 day ago 0 replies      
At $2-3m (double on the high end and established) production cost per episode of anything, we're looking at a massive amount of TV content. Unless they opt for films - if so, good luck.
dghughes 2 days ago 4 replies      
I'd settle for fewer shows with earlier release dates. It's crazy waiting over a year to watch the next season of a show.

I guess I'm a dinosaur from the olden days of TV where once per week you watched a show. Then a break for summer and the new season started in the fall.

These days it's binge-watch an entire show and wait 18 months for the next season. I can't stand that. Who can sit still for that long?

I bet Netflix would prefer a slower production and release schedule rather than a blast of a dozen episodes. Even the actors must hate having to wait until production starts again.

Production quality is suffering too which may indicate shows are being made too fast. On Iron Fist I noticed in each episode a red laser dot and grid shining on the actors. And in one episode you could see water dripping off the camera lens housing.

hkmurakami 1 day ago 1 reply      
As the HBO CEO once said, (and I paraphrase) 'It's a race between whether HBO can become Netflix faster [1] or Netflix can become HBO faster[2]'

[1] distribution

[2] original content

ulfw 2 days ago 3 replies      
So what differentiates them from HBO? I am confused. With HBO Go it's Cable going Internet and with Netflix it's Internet going Cable. Just that both distribute over the Web
shmerl 1 day ago 0 replies      
They also supposedly are becoming more paranoid and legacy media like (piracy!! and the like). Influence and size corrupts.
nilanjonB 1 day ago 0 replies      
I wonder what bump in revenue streams they are expecting to justify billions in programming every year.
vwbuwvb 2 days ago 0 replies      
great - because their originals are the only thing worth watching because their choice of non-netflix content is rubbish
grabcocque 2 days ago 2 replies      
The self-pitying language from Hollywood is hilarious.

How DARE people want to work for Netflix intead of us, just because it pays better, is sexier and cooler, is technologically cutting edge, and allows much greater freedom from executive meddling?


JCharante 1 day ago 0 replies      
Too bad most of their original & acquired programming is terrible.
krupan 1 day ago 0 replies      
But I still can't stream Goonies.
MisterBastahrd 1 day ago 0 replies      
IMO, the best sort of series are miniseries. Tell your story and be done with it. I'd rather remember something for how great it was than how great its decline was.
mschuster91 1 day ago 0 replies      
Happy Netflix user but the quality especially of old-ish series is beyond infuriating.

I have untouched-DVD rips from Star Trek Voyager. Single episodes run around 2 GB of data, while the same Netflix episodes are ~200MB.

Not only does Netflix not carry the bonus stuff, but the quality is seriously braindead - weird moire effects in the intro, for example, and it just looks pixelated.

In fact, the "scene rips" tend to look better than what Netflix offers! And it's certainly not due to low network quality, I've got a 100/20 connection and stuff like Shadowhunters is clear HD.
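As a back-of-the-envelope check on those file sizes (assuming a ~44-minute episode runtime, which is my assumption), the gap translates to roughly a 10x difference in average bitrate:

```python
def avg_bitrate_mbps(size_bytes, minutes):
    """Average bitrate in megabits per second for a file of a given size."""
    return size_bytes * 8 / (minutes * 60) / 1e6

dvd_rip = avg_bitrate_mbps(2 * 10**9, 44)    # ~2 GB DVD-rip episode
stream = avg_bitrate_mbps(200 * 10**6, 44)   # ~200 MB streamed episode
print(f"DVD rip: {dvd_rip:.1f} Mbps, stream: {stream:.1f} Mbps")
```

Roughly 6 Mbps versus 0.6 Mbps, which would go a long way toward explaining the visible artifacts.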

michaelmcneff 2 days ago 3 replies      
Long shot, but if anyone from Netflix sees this, I'm looking for finishing funds to complete one of Sir Christopher Lee's last films, The Hunting of the Snark, based on the Lewis Carroll story and I would be happy to exclusively license it to Netflix.


barking 2 days ago 2 replies      
looked interesting, shame it's paywalled
malloryerik 2 days ago 1 reply      
Dear Netflix: New Game of Thrones series, pleaze!
UK Home Secretary says encryption on messaging services is unacceptable reuters.com
489 points by zepolud  3 days ago   377 comments top 63
bartread 3 days ago 9 replies      
We're a very long way from being a totalitarian state and likely to remain so for quite some time, but make no mistake, this is the thin end of a very long and ultimately very fat wedge. It therefore behooves us to hold the government to account when they try to get us to swallow more of that wedge.

Sure, encryption helps terrorists as well as ordinary citizens but it's my belief that freedom and privacy are more important than that. The work of police and security services has never been easy in a free society, but protecting and upholding that free society is the very essence of the job. Dilution of that freedom is therefore counter to the purpose for which these agencies exist, and so when the government tries to move in that direction we, as citizens, should voice our resistance, and keep voicing it until they understand.

mattbee 3 days ago 4 replies      
Amber Rudd is the UK's Home Secretary not just any minister.

"We need to make sure that our intelligence services have the ability to get into situations like encrypted Whatsapp."

She has said she is "calling in" technology companies this week to try to "deliver a solution".

Marr asks if they refuse to do that, will you legislate to force them to change? She's not drawn on that.

Interview is here:

http://www.bbc.co.uk/iplayer/episode/b08l62r7/the-andrew-mar... [from 45:18]

I understood that the UK IP Bill already means that she has the ability to e.g. demand a backdoored version of WhatsApp be sent to a target device, but that's not covered in the interview.


SimonPStevens 3 days ago 3 replies      
I watched Amber Rudd interviewed by Andrew Marr this morning and the scariest thing about it was that Marr completely agreed with her. Rather than providing an opposing viewpoint and counteracting her points, he agreed with the idea that it was unacceptable for people to be allowed to use encryption and that it was terrible these companies were using it as a selling point. All he pushed her on was if she would enforce cooperation from tech companies.
makecheck 3 days ago 2 replies      
We hear most terrorists ate with forks so all forks are now banned.

Also, we were shocked to discover that virtually ALL criminals rely on something called Oxygen to perform their work so this is now a controlled substance that will be heavily regulated.

We were then terrified to learn that after banning forks, terrorists were able to successfully eat with spoons or even their hands.


Seriously, you cannot ban tools. Lawmakers have to approach this with a firm grounding in statistics (how LIKELY is a risk, relative to the magnitude of the measures to prevent it?). They also have to realize that some things are just necessary for society to function. Stop being paranoid.

orian 3 days ago 9 replies      
How do they want to prevent someone from creating his own end-to-end encryption app? It may use other protocols to encode content (images, tweets, fb posts etc.).

For me it seems to be more in a direction of so called "Big Brother" than real counter-terrorism.

satysin 3 days ago 4 replies      
So they get a backdoor into WhatsApp and terrorists just move onto some other non-compromised tool. Rinse and repeat. You can't ban maths ffs.

TBH I am surprised attackers do not better destroy their electronic equipment just before they carry out their attack. Popping your phone and SSD/flash drives in the microwave on high for a few minutes will pretty much destroy all evidence on them, and if not, chances are you are dead anyway, so whatever data they might be able to recover will most likely be useless to them.

s3arch 3 days ago 1 reply      
>Referring to Whatsapp's system of end-to-end encryption, she said: "It is completely unacceptable. There should be no place for terrorists to hide.

Thats it guys. Mommy says no more maths.

sametmax 3 days ago 1 reply      
The British gov is looking more and more like the Finger from V for Vendetta. The US president more and more like the one from Idiocracy. That we tend to live up to caricatures should be an alarming sign, but I only see worries on sites like HN. Most people still don't see the catastrophe in it.
cJ0th 3 days ago 1 reply      
If I may ask a very naive question: Do politicians like her really think encryption is dangerous, or is it a devious way to expand mass surveillance?

Attacks of the past have shown that terrorists don't have a need to resort to encryption. The people involved in the Berlin attack last year, for instance, were monitored. Authorities knew they would strike but they didn't have sufficient incriminating evidence that would count in court to lock those guys up.

Even if encryption on messaging services were forbidden (which would make millions of law-abiding people vulnerable in some way), terrorists could use throwaway email accounts from internet cafés and wrap their messages in password-protected attachments.

fauigerzigerk 3 days ago 0 replies      
It would help the UK government's argument if they didn't grossly abuse every single surveillance power they have: https://www.theguardian.com/world/2016/dec/25/british-counci...
sklivvz1971 3 days ago 0 replies      
Coming from the same government that wants all ISPs to keep a log of all the sites you visit. These people are beasts, and as dangerous as, if not more dangerous than, the perils they are supposed to save us from.

If people knew the damage these idiots do, they would be in the streets.

Oh wait, they already are in the streets...

rijncur 2 days ago 2 replies      
It is the duty of the Home Secretary (and the UK's various nosey institutions - e.g. intelligence agencies, police, etc) to continuously badger us for this information - unfortunately, it's pretty much part of the job description.

It is our duty, as the public, to continuously say "no".

Disregarding any negative consequences, their motivations are pretty transparent - there's little doubt that being able to read everyone's private messages will enable the intelligence services to better do their jobs. However, as Edward Snowden and others have already shown to us many times over the last few years, the UK government can't be trusted with this responsibility - and that this is probably the thin end of the wedge. Britain is already the closest thing that Europe has to a surveillance state, and the number of people killed in the UK by terrorism is vanishingly small - we are hundreds of times more likely to die in a car accident. Is it really worth giving up the last vestiges of our privacy for a little bit more security?

blockoperation 3 days ago 2 replies      
I'm surprised it took this long for her to bring up the subject. Theresa May would've had her soundbites prepared in advance and released within hours of the attack if she were still Home Sec.

> That is my view - it is completely unacceptable

You know what else is completely unacceptable? Technologically illiterate, authoritarian jobsworths capitalising on tragedy to push through their agendas. But that's just my view.

Home Office always seems to attract the nastiest and dumbest of politicians, but this is a whole new level of dumb, and sadly will only gain her more support, because the general public either have no idea about the implications of backdoored crypto, or simply don't have any expectation of privacy and are happy to give up what little they have left in order to feel safe.

sn41 3 days ago 0 replies      
In the 1970s, an American president had to resign because of some bugs planted.

Now, private conversation is illegal.

I guess it leads to "ownlife".

ohthehugemanate 2 days ago 0 replies      
There are a few reasons to laugh at her position.

* The UK government leads the "free world" in ignoring its own warrant process, and pursuing a "collect it all" strategy for commsec. UK citizens have no reason to trust that their government, given such access, would not abuse it. They've abused all their other access thus far.

* Privacy and Security help normal citizens and criminals alike. This is as true for a locked front door as it is for an encrypted message. We grant governments the ability to violate privacy under warrant - they may snoop, spy, enter our homes, and read our mail. We do not grant them the ability to violate security, however. They still have to pick the lock, steam the envelope, and crack the safe. These are important distinctions. We do not engineer a backdoor into all encrypted messages, for the same reason we don't mandate a government master key for all doors.

* The idea that you can legislate math out of existence is a joke.

There is one reason to cry at her position.

* They will eventually legislate this way anyway.

logingone 3 days ago 3 replies      
Poorly timed opportunism. Police have said the attacker acted alone, so he wasn't using encrypted comms to talk to anyone.
Doctor_Fegg 3 days ago 3 replies      
> She said it was a case of getting together "the best people who understand the technology, who understand the necessary hashtags"

Our Government is an absolute disgrace; and unfortunately, one to which there is currently no credible, strong opposition.

(from https://www.buzzfeed.com/matthewchampion/necessary-hashtags)

drcross 3 days ago 1 reply      
It's an incredibly foolish thing for a minister to suggest. She demonstrates a complete lack of understanding of the subject and has committed political seppuku. Has she never read Orwell or Huxley, seen articles about tyrannical governments, or even heard about the reasons the US constitution was drawn up?
dijit 3 days ago 1 reply      
"He sent an encrypted message from whatsapp"

Yes, and then he went and did something stupid with easily accessible tools and acted alone.

You might have an argument if he was part of a coordinated attack against something but lone-wolf terrorism has always been defined as unpreventable by security services such as SIS. Once radicalised it's impossible to prevent individuals doing stupid stuff.

The only thing she has revealed is the Conservative party's desire for totalitarian control. :(

ktta 3 days ago 6 replies      
Hmm, this definitely brings up an interesting discussion I don't think HN has had before, at least not in a similar vein since the Apple/San Bernardino fiasco.

Obviously privacy is something that HN holds very close to its heart. But I'm interested in what people here have to say when privacy features are used by terrible people to do terrible things.

And I want to share something that I think is one of the best arguments for privacy, complete privacy. I do agree with this completely: https://moxie.org/blog/we-should-all-have-something-to-hide/

partycoder 3 days ago 1 reply      
It's just reverse psychology.

They have the means to break, degrade or bypass the encryption, and they emit statements like these so people remain confident that they're not being spied on.

This routinely happens after leaks reveal that a certain type of traffic is being targeted. In this particular case, Wikileaks.

In the past, after all the PRISM collusion was revealed, all the PRISM partners started their PR campaigns showing their "commitment to privacy", along with the soap opera of law enforcement agencies claiming they couldn't decrypt devices. In reality they have many tricks they have used for years now, like setting up a fake cell antenna that impersonates a phone carrier to take over a device.

slashrsm 3 days ago 0 replies      
This is complete nonsense. Such a move would simply encourage "bad guys" to find other means of secure communication while exposing everyone else.
derpadelt 3 days ago 0 replies      
For two decades I've been waiting for popular support for a complete or at least Clipper-chip-style encryption ban in the "free world". It always sat at the other far end of the spectrum, directly opposite questions like IV/nonce choice, PRNG initialization flaws, and RSA attack vectors. I have great fear for the freedom and living standard of my kids when I read these top-level news pieces. We face a real test, and we will have to argue against hatred, fear and terrorism. Let's just hope our leaders have no-nonsense advisors as well as those that inspire such news.
callesgg 3 days ago 3 replies      
How can it be acceptable to say shit like this when you hold such a position within the government?
Khaine 2 days ago 0 replies      
Reading all of the comments I am deeply concerned. Everyone who is opposed to this is doing 'their side' a disservice.

Comments are about how stupid, or ill informed the Home Secretary and advisors are, or that they are being blackmailed by the intelligence services. Seriously? These kinds of comments are not going to get the broader public to support your ideals.

I think you misunderstand why she (and law enforcement) believe they should have access to the messages. If the terrorist called someone, they can get a warrant for the metadata and see who he called and whether it is relevant to the investigation. If the terrorist sent an SMS, they can get a warrant for it. However, if the terrorist sends a WhatsApp message, what can they get? Why should a WhatsApp message be treated differently from an SMS?

That is what we as the tech community need to explain, why backdoors, weak encryption, and escrow are not a solution.

I value my privacy. I want my messages to be secure. But if the tech community keeps acting like most of the comments in this thread, we will lose.

jimnotgym 2 days ago 0 replies      
When you take away our freedom in order to stop terrorism, then the terrorists win. This is one guy in an estate car. Amber Rudd is not a democrat if she really believes this
hudathun 3 days ago 2 replies      
Best ban everything that can be misused by terrorists... cars, knives, encrypted messaging.
iamben 2 days ago 1 reply      
If the govt. were to force WhatsApp's hand, I'm sure we'd see democracy in action if WhatsApp prevented everyone using the app for 24 hours, replacing the facility to message with a note telling users to contact their local MP (with clickable email / phone numbers - and maybe links to the ORG).
ourcat 3 days ago 1 reply      
So when they discover that he wrote and sent actual letters, will they then demand access to open our mail?

Also: Will breaking encryption stop a man grabbing a knife and jumping into his car? No.

ajuc 3 days ago 0 replies      
By the same logic, if we ban freedom of speech, terrorists won't be able to speak with each other.

Seriously, who voted for these idiots?

singold 3 days ago 0 replies      
Well, she could start by publishing all her own emails - how can we know she isn't a covert terrorist?
sidcool 3 days ago 0 replies      
Even though the article is specifically about the UK, there are many in the US who hold the same belief. If you want to ban encryption because terrorists might misuse it, what about guns? There, suddenly, it is a matter of "freedom".
Entangled 3 days ago 1 reply      
Dear minister, can I whisper in my wife's ear while having sex or do I have to get permission from government?
dfraser992 2 days ago 0 replies      
I assume someone has already brought this up, but it is late and I can't read through 300 comments. From what I recall and have read, this individual has been on the radar of the security services since 2010 and so was a known potential threat. With a history of violence and criminal behavior. Yet effective monitoring of such individuals WAS NOT DONE and apparently isn't. Instead, there is this post-hoc demand that all of the public must give up their right to privacy because the idea of 'pre-crime' prevention is actually viable...

complete and utter bollocks.

So a blanket violation of law-abiding citizens' rights is more important than actually keeping tabs on known threats more closely and effectively. Pedophiles are viewed with less disdain than terrorists, it seems. And the threat of terrorism is trumpeted to the heavens while pedophilia is apparently more rampant in UK society...

It is quite illogical that law abiding people suddenly snap and decide to drive their cars into groups of tourists. How prevalent are the actual potential terrorists - i.e. those with a history of violence, trouble with the law, radicalization, etc? If I knew those stats, then I personally would be better able to judge the claims of the authorities. But I don't have those stats and so the logical assumption is that their claims are exaggerated shite designed to drum up fear and etc etc. Meanwhile idiotic claims that all encryption must be banned or tapped, even for law abiding businesses (does no one remember Cameron's proposals?) are floated... nothing but Band-aids all the way down.

I could move back to America, but at this point, that is like jumping out of the frying pan. I really need to learn a second language, preferably Mongolian.

Asdfbla 2 days ago 0 replies      
I'm morbidly curious how many terrorist attacks we are away from actual laws that attempt to outlaw encryption as used by WhatsApp (even if it wouldn't make sense to do that). Resistance against such measures outside of the tech scene would probably be low. The "I've got nothing to hide" mentality is actually quite widespread among the population, so I don't even think it would be a risky move politically.
id122015 3 days ago 0 replies      
Smartness should be banned. It is too much of a problem! Every day, disruption, disruption, disruption...

Evolution should be banned too, and all those books about biology or astronomy. God made it all!

hanselot 3 days ago 2 replies      
How difficult would it be for these so-called terrorists to develop their own end-to-end encrypted app? Perhaps something masquerading as common traffic on any port under 1000? It is entirely feasible that eliminating WhatsApp/Telegram/Signal encryption would just lead to a far more complicated encryption system developed internally by these organisations.
sergior 3 days ago 0 replies      
How about they look into their business partner Saudi Arabia first? It sounds as if they let this country poison the minds of mentally ill people in the hope that the attacks they carry out can be used as an excuse to expand control of society. Using this tragedy to do just that is simply disgusting and puts in question what the government is actually doing.
tinus_hn 3 days ago 2 replies      
How would that have prevented anything? As if they'd have responded within 2 minutes to some guy sending weird messages.
vixen99 3 days ago 0 replies      
Some people will simply refuse to let all and sundry (we have no idea as to who reads and acts on intercepted emails) to read private emails and they will therefore turn to steganography or one time pads with a seemingly ambiguous pre-arranged code. Good luck with reading the latter or even thinking it has a hidden message.
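The one-time-pad point is worth spelling out: the construction is a few lines of code, which is part of why an encryption ban is unenforceable in practice. A toy sketch in Python (illustrative only - a real pad needs truly random key material, at least as long as the message, used once and never reused):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding pad byte. With a
    # truly random, equal-length, never-reused pad, the ciphertext is
    # information-theoretically secure: no backdoor or escrow can touch it.
    assert len(pad) >= len(plaintext), "pad must cover the whole message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

otp_decrypt = otp_encrypt  # XOR is its own inverse

message = b"meet at dawn"
pad = secrets.token_bytes(len(message))  # used once, then destroyed
ciphertext = otp_encrypt(message, pad)

assert otp_decrypt(ciphertext, pad) == message
```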
razzaj 3 days ago 1 reply      
Isn't it weird that drastically restrictive, all-encompassing rules are hastily pushed after attacks? Blanket decryption of messages and other privacy-suppression rules will turn intelligence agencies into superpowers with too much control at a very reduced cost (fewer messy assassinations or physical threats needed).
Zenst 3 days ago 1 reply      
So the Met police's statement that he acted alone is being utterly ignored. Ironically there's no mention of banning 4x4 cars, and that frankly puts this whole situation into perspective - government ignorance of encryption, once again.
nbanks 2 days ago 0 replies      
One reason it's good that governments cannot force WhatsApp to disable end-to-end encryption is that different governments have different definitions of nefarious activity. While the British Government could arguably use a backdoor to stop terrorist attacks, what would stop Pakistan or Saudi Arabia from using the same back door to enforce blasphemy laws? The issue is the same: should a private company help law enforcement by disabling encryption?

It's nice to know WhatsApp can help people break the law in places where the law itself is immoral.

bvwiqvqebui 3 days ago 3 replies      
What if the guy read a book and agreed with it because he was a sad, angry teenager with no life?
ahussain 2 days ago 0 replies      
Shameful way to capitalize on the recent Westminster attack. See Naomi Klein's "The Shock Doctrine" for more.
noarchy 3 days ago 0 replies      
"Home Secretary Amber Rudd told Sky News it was "completely unacceptable" that police and security services had not been able to crack the heavily encrypted service."

This is great news, actually. It means that WhatsApp's encryption works, and stonewalls the efforts of state actors (or at least, hers) to break it.

That said, we don't know if she's lying about this, or not.

benevol 3 days ago 1 reply      
They don't need to touch encryption in any way. It's way simpler to subvert the endpoints, as most people use closed-source operating systems such as iOS and Android which offer closed-source applications.

All they need to do is to pressure Apple and Google to keep some backdoors open, which is more than realistic, as Snowden's revelations have shown a couple of years ago.

doktrin 2 days ago 0 replies      
Setting aside the fact that what they want isn't actually achievable, what does the UK risk by starting down this road? What consequences could this potentially have for their domestic tech sector?

My intuition says that they stand to lose more than they could possibly gain, but I'm curious to hear a more knowledgeable perspective.

codewithcheese 2 days ago 0 replies      
In these digital, always-online times, it's like claiming no one should be allowed to have a private conversation.
mrkgnao 3 days ago 1 reply      
Thought-experimentally: could we scan message databases for the absence of certain phrases, using something like [1], but in a probabilistic manner akin to that of a Bloom filter? Law enforcement would then be able to flag certain keywords with a nonzero (and nontrivial) false-positive rate. That way, repeated flags end up identifying potentially interesting members of society, without proof and with data inadmissible as reliable evidence in a court of law.

Of course, one runs the risk of the existence of false positives being forgotten, of TLA/government pressure to reduce the false-positive rate, and so on. But I think this is a slightly interesting way to (partially) preserve privacy while satisfying lawmakers who demand some way to listen in on (what should ideally be completely private) data. (This is, of course, only possible once one drops the axiom of privacy being an absolute right; I don't personally support doing this at all.)

[1]: https://crypto.stanford.edu/portia/papers/HardNDB.pdf
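For readers unfamiliar with the false-positive property being invoked here, a toy Bloom filter shows it in a few lines (a hypothetical illustration of the data structure only, unrelated to the scheme in the linked paper):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: membership tests can yield false positives
    but never false negatives - the property the comment relies on."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k deterministic bit positions from k salted hashes.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

flagged = BloomFilter()
flagged.add("some watchlisted phrase")
assert flagged.might_contain("some watchlisted phrase")  # never a false negative
# Absent items are *usually* reported absent, but a false positive is possible,
# which is exactly why repeated flags are suggestive rather than probative.
```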

hanoz 2 days ago 0 replies      
I wonder how she felt about private communication during her directorships in off shore tax havens?
al2o3cr 2 days ago 0 replies      
In a similar vein, to prevent corruption and bribery we should require Ms. Rudd et al to post all email exchanges (official or otherwise) they engage in publicly, along with their bank statements.

After all, we can't allow corrupt politicians ANYWHERE TO HIDE. ;)

secfirstmd 3 days ago 0 replies      
It's going to be a total clusterf*ck when the UK leaves the EU and starts introducing draconian intelligence gathering laws that go further than the EU regulations permit. Think Privacy Shield style problems but much worse...
ianopolous 2 days ago 0 replies      
The relevant discussion is here: https://www.youtube.com/watch?v=8yIPuHsB8q8
threatofrain 2 days ago 0 replies      
I assume that the UK government has been doing these extremely pro-surveillance, anti-encryption, and anti-porn stances because they detect sufficient support from the UK population?
gjjrfcbugxbhf 2 days ago 0 replies      
Theresa May is just using the tragic deaths of some innocents to push her own political agenda. Pathetic political games at their worst.
I_am_neo 2 days ago 0 replies      
Messaging services without encryption are unacceptable - TRUTH
intrasight 2 days ago 0 replies      
First global warming deniers then mathematics deniers. Where do we go from here?
royka118 3 days ago 5 replies      
Is it technically feasible to have a back door and still be `end to end` encrypted ?
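One way to picture what such a back door would look like: in hybrid encryption, a per-message session key is wrapped once per recipient, so escrow amounts to silently wrapping that key for one extra recipient - at which point "end-to-end" holds only in name. The Python sketch below is purely structural: it uses XOR as a stand-in for real public-key wrapping and is not an actual protocol:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stream "cipher" standing in for real public-key key wrapping.
    return bytes(d ^ k for d, k in zip(data, key * (len(data) // len(key) + 1)))

def send(plaintext, recipient_keys):
    # Hybrid-encryption shape: one random session key encrypts the
    # message; that session key is then wrapped once per recipient.
    session_key = secrets.token_bytes(16)
    return {
        "ciphertext": xor(plaintext, session_key),
        "wrapped": {name: xor(session_key, k) for name, k in recipient_keys.items()},
    }

def receive(msg, name, key):
    session_key = xor(msg["wrapped"][name], key)
    return xor(msg["ciphertext"], session_key)

bob_key, escrow_key = secrets.token_bytes(16), secrets.token_bytes(16)
# The "back door" is just one extra, silently added recipient:
msg = send(b"private note", {"bob": bob_key, "escrow": escrow_key})
assert receive(msg, "bob", bob_key) == b"private note"
assert receive(msg, "escrow", escrow_key) == b"private note"
```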
visarga 3 days ago 6 replies      
> "That is my view - it is completely unacceptable, there should be no place for terrorists to hide."

I am sure a ban on encryption would work.

Hey, guys, I just had a great idea. Let's ban bombs, knives, and driving into people. That would fix the terrorism problem. Once it is illegal, no terrorist would dare do it!!!

I'm wondering why Churchill didn't think to ban the Enigma machine. If only England was led by smart people like the British interior minister...

techrich 3 days ago 0 replies      
as usual she has no idea.
vinceyuan 3 days ago 5 replies      
If I have to choose between end-to-end encryption and security, I will choose security. I don't mind my WhatsApp chats being scanned by police software if it can reduce terrorism. Of course, we need to make sure it is used for anti-terrorism only.

Update: One way to 'make sure' is that the source code of the monitoring software must be reviewed by independent and trusted software engineers/experts.

PS. Downvoting my post doesn't solve any problem. If you have a better idea, you're welcome to post it. Thanks

Flex ycr.org
486 points by mpweiher  12 hours ago   65 comments top 23
adamnemecek 9 hours ago 4 replies      
The name is a reference to Alan Kay's master's thesis. Here's an excerpt: https://www.mprove.de/diplom/gui/kay68.html

Also, I've previously looked into OMeta, the predecessor of Ohm, and I found it to be possibly the cleanest parsing solution available. Check it out if you ever feel parsing-monious.

cphoover 17 minutes ago 1 reply      
I would love something like Seymour for JS development. Being able to see the complexity of execution as you type would be invaluable.
shadowmint 7 hours ago 2 replies      
Interesting but vague.

How do I see any of these actually in action? Do they work?

Will they be released at some point or are they proof of concept work that will never see the light of day beyond an academic paper?

Geekette 51 minutes ago 0 replies      
Interesting work. I'm particularly curious to see how the "Natural Language Datalog" will evolve; it seems to have various potential use cases, from conversation in social and occupational settings to specialized project tasks, learning and building across various sectors, etc.

It reminds me of some previous roles where my key task was "translating" abstracted information and creating interfaces for less technical audiences to interact with technical information.

throwaway2016a 10 hours ago 2 replies      
Not sure if this is intentional or not, but at first I thought this was something to do with flex (the lexer generator - as in one half of flex and bison) - which is even more confusing since the first graphic on the page contains a BNF document and talks about compiler generation.

If Flex was a business with a trademark this would count as infringement without a doubt. Same type of product, same name.

aratno 10 hours ago 1 reply      
I appreciate the demos for each of these applications, but the descriptions could be more specific. I'm interested in what Trainee _does_ but it's hard to understand. There are many questions left unanswered by this page, such as: How are problems with unknown variables displayed? How are more sophisticated games displayed, such as two-player games (Connect 4), imperfect-information games (Minesweeper), etc.?
usmeteora 1 hour ago 1 reply      
Where is the Chorus project hosted? I was browsing around on GitHub for related devs and YC but couldn't find it. Have these tools been officially released yet?
krosaen 1 hour ago 0 replies      
Cool, wonder if these guys ever talk to the folks behind http://witheve.com/
molikto 1 hour ago 1 reply      
What about a better interface for Coq? I think this makes more sense to experienced programmers. Also for mankind.
rurban 4 hours ago 1 reply      
So basically Ohm is Ian Piumarta's PEG compiling to JavaScript, not C. And the Ohm editor is the visualizer for when you go down the operator-precedence rabbit-hole, which is the only PEG problem compared to a conventional back-recursive parser generator. One advantage of PEG is that it doesn't need a flex lexer. Strange name, then. I would have called it pegjs.
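The two PEG properties mentioned here - ordered choice (the source of the precedence rabbit-hole) and scannerless matching (no separate flex lexer) - can be sketched with toy parser combinators (illustrative Python, not Ohm's actual API):

```python
def lit(s):
    # Match a literal directly against characters: PEGs are scannerless,
    # so there is no separate flex-style lexing pass.
    def parse(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return parse

def choice(*parsers):
    # PEG ordered choice: the FIRST alternative that matches wins, which
    # is where the operator-precedence "rabbit-hole" comes from.
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

op = choice(lit("<="), lit("<"))      # correct: longer alternative first
bad_op = choice(lit("<"), lit("<="))  # wrong order never reaches "<="

assert op("<= 3", 0) == ("<=", 2)
assert bad_op("<= 3", 0) == ("<", 1)  # silently matches too little
```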
asrp 8 hours ago 1 reply      
I wish the Ohm editor allowed custom grammars for describing the grammar.

In fact, I wish more projects based on Ometa and Ohm allowed this so there's a better chance they can bootstrap each other. Of course, some semantics might still be missing but at least parsing would work.

scriptproof 8 hours ago 0 replies      
I like how Ohm is designed. Very intuitive. Surely the best tool if you have to do some parsing in a Web app.
replete 2 hours ago 1 reply      
This stuff looks AMAZING, but no code or binaries to try? :(
d99kris 9 hours ago 2 replies      
In the "Selected Past Work" section there's "Natural Language Datalog", which I would be interested in learning more about, but the link seems to be wrong. Does anyone know where to find more info on it? My Google skills are failing me.
febin 9 hours ago 1 reply      
Where can I download the Chorus mentioned in the link?
anirudh24seven 10 hours ago 0 replies      
I recently stumbled upon something similar but probably more light-weight: http://ncase.me/loopy/
shriphani 10 hours ago 1 reply      
Hmm so this line of work isn't listed on the YC Research page - is this the result of some sort of a YC grant program that I missed?
sjnair96 10 hours ago 0 replies      
Wow. Just in time for my compilers exam :)
dplgk 9 hours ago 0 replies      
The title could use some embellishment
blueprint 9 hours ago 1 reply      
Is this in any way connected to the Eve/Lighttable guys?
asrp 8 hours ago 0 replies      
Is there no audio for any of the videos or is that just my browser?
tomcam 8 hours ago 1 reply      
vinceguidry 7 hours ago 1 reply      
I'm still having trouble figuring out why anyone would actually want to make their own language, on a real project and not just BYOSchemeForTheLearningAndLulz or whatever. I guess I'm just really spoiled by Ruby, which makes it so easy to add semantics to a system that I absolutely never want new syntax.

Introducing a parser generator workflow to a project that already has access to Ruby is almost certainly Doing It Wrong. Every time I've tried it, I ended up scrapping it and just did it in Ruby with a DSL. You're going to want to access and control whatever it is you're writing with Ruby, so why not just stay in the language?

I guess if you're coding in something really dull like Go or Java, you can easily get to a point where the developer experience is constrained by the language, so you need a new one.
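The internal-DSL argument generalizes beyond Ruby: any sufficiently expressive host language lets you get declarative-looking syntax without a grammar file or parser generator. A hypothetical Python sketch (the `App`/`get` names are invented for illustration, in the style of micro web frameworks):

```python
class App:
    """Toy router: the decorator IS the 'syntax', so no external
    grammar or parser-generator workflow is needed."""

    def __init__(self):
        self.routes = {}

    def get(self, path):
        def register(handler):
            # Registration happens in the host language at import time -
            # that is what an internal DSL buys you over external syntax.
            self.routes[("GET", path)] = handler
            return handler
        return register

app = App()

@app.get("/hello")
def hello():
    return "hi"

assert app.routes[("GET", "/hello")]() == "hi"
```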

LA Times and ads nelsonslog.wordpress.com
432 points by catacombs  3 days ago   184 comments top 35
SwellJoe 3 days ago 6 replies      
I resisted using an ad blocker for many years; I kinda felt like if I wanted to use a site, I should be willing to trade for seeing their ads. I changed my tune a couple of years ago, and this is a (small) part of the reason (but a bigger part now that I know how crazy data usage for ads has gotten).

I'm on mobile data nearly 100% of the time most months. 14GB costs me $50-$70 (depending on which network I'm on, I have two) to download ("unlimited" plans actually aren't, when used as a hotspot, though T-Mobile now seems to actually have a mostly unlimited hotspot option, I haven't tried it yet). So, not only are ads intrusive, disrespectful of privacy, and generally of negative utility for me as a user...they're also ridiculously costly.

So, yeah, I use an ad blocker. Oddly, I tried disabling it earlier today for an LA Times article (because of their blocker blocker), but it didn't correctly detect that I'd disabled it, so I closed it and went elsewhere. Now I know I should never disable ad block for LA Times, no matter how interesting the story seems.

lobster_johnson 3 days ago 5 replies      
I develop software used by newspapers. This is a problem with the whole industry. It's amazing how much crap they load.

These sites all use a third party "tag manager" (Google Tag Manager, Tealium, Piwik etc.) to manage the scripts they load: ads, analytics, trackers etc. The people who use the tag manager typically aren't techies, so they don't understand that adding another tag will cause the page to slow down. Typically I've seen a single newspaper use 3-4 different vendors for the exact same thing, such as analytics. They don't actually use all of them; who knows why they have multiple overlapping ones.

Scripts are often badly written, and it's common to see lots of nonsensical errors spewed to the console. It's very annoying to debug your own stuff when you have that crap loaded, which typically comes from a header/footer combo provided by the customer.

smaili 3 days ago 2 replies      
> Thats a timeline of 30 seconds of page activity about 5 minutes after the article was opened. To be clear, this timeline should be empty. Nothing should be loading. Maybe one short ping, maybe loading one extra ad. Instead the page requested 2000 resources totalling 5 megabytes in 30 seconds. It will keep making those requests as long as I leave the page open. 14 gigabytes a day.

It's not quite clear whether 14GB/day is an extrapolation from the author's sample of 5 MB per 30 seconds or whether the author actually left the page open for all 24 hours. Regardless, that's quite a bit of data.
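For what it's worth, the 14 GB/day figure is consistent with a straight extrapolation of the measured 5 MB per 30 seconds (assuming decimal units, as ISPs count them):

```python
# Measured: ~5 MB of ad/tracker requests every 30 seconds, sustained.
mb_per_30s = 5
seconds_per_day = 24 * 60 * 60

mb_per_day = mb_per_30s * (seconds_per_day / 30)
gb_per_day = mb_per_day / 1000           # decimal GB

assert round(gb_per_day, 1) == 14.4      # ~14 GB/day, matching the article
# A page left open all month would be on the order of:
gb_per_month = gb_per_day * 30           # ~432 GB
```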

tedunangst 3 days ago 2 replies      
So in case you were looking for another reason to avoid the LA Times (or probably tronc papers in general), they're very spammy. I get tons of junk "newsletter" mail from them that I definitely didn't subscribe to, although they inevitably claim I did, and you can unsubscribe, but then they just invent new lists. And sell your address to others.

Sample from only a few days ago.

  From: "San Diego Union-Tribune" <promotions@e.sandiegouniontribune.com>
  Subject: We are proud to offer you Moonlighting - Hire or be hired!
  Received: from mta953.e.latimes.com (mta953.e.latimes.com [])
  [Blah blah bullshit about their proud partner promoting a soulless gig economy.]
  This email was delivered because you registered for Email Membership at utsandiego.com
This is 100% false. I did create an account for latimes.com (who sent this shit) but not the San Diego paper. Not even close.

jurassic 3 days ago 1 reply      
Garbage like this is one of the many reasons I've gone back to print to a large extent. Printed magazines are amazing things, a superior experience and product in almost every way to reading online outlets. The whole ethical conflict over how to deal with invasive advertising while supporting the work goes away when you've already bought and paid before you even start to read.

Daily news doesn't fit into this philosophy that well (printed papers are too much bulk for me), but I get pretty much all the "breaking news" I need from Twitter. Stepping away from the 24-hour news cycle to sit with a piece of analysis from a weekly or monthly (or even quarterly!) magazine is a much better way for me to stay informed and support journalism. And as a bonus I don't have creepy ads follow me around the internet if I want to read a Socialist magazine, or a gun rights magazine, a bridal magazine, or whatever.

drawkbox 3 days ago 4 replies      
This is why data caps are bad. Left open all month that could be 400+ GB of data used on your plan.

Ad networks are being subsidized by data caps and personal/business broadband costs. Maybe if ad serving went through the main host and their own bandwidth then maybe ad networks would have a reason to control this abuse of transfer.

Mtinie 3 days ago 1 reply      
Watching the requests, there's a huge number of mixed content warnings in the console. Attempting to hit https://www.latimes.com to remove the reason for all of those failed requests triggers Firefox to throw a certificate error. Could all of this be a side effect of a certificate that has gone bad?

> www.latimes.com uses an invalid security certificate.


> The certificate is only valid for the following names:

> *.akamaihd.net, *.akamaihd-staging.net, *.akamaized-staging.net, *.akamaized.net, a248.e.akamai.net

mp3geek 3 days ago 0 replies      
Sites will use the disguise of "Native Ads" to make these links look like standard web page links, and they'll make every attempt to avoid being blocked or hidden.

Using base64 images, websocket/blob: injections, third-party scripts, natively hosting the images on the site, and also using WebRTC to inject.

It's a long fight of countering and re-countering, and until website developers listen to their users, these types of ads aren't acceptable.

/Fanboy from Easylist here

rinze 3 days ago 0 replies      
"An attentive eyeball! Fire at will to trigger consumption!"
danbruc 3 days ago 1 reply      
But how many users would actually be willing to pay for the content they consume in order to get rid of ads and tracking and excessive data volume consumption? And how much would they be willing to pay? It seems like they would have to be willing to pay at least an amount comparable to the average ad revenue per user and page view, whatever that actually is.

Just blocking ads is certainly justifiable in the current situation, but it is also certainly not sustainable if the ad blocker installation base keeps growing. And buying subscriptions for all sites is not really an option either: it becomes quite expensive pretty quickly, especially if you only want to see a few articles per month on each site but do so on many sites.

anigbrowl 3 days ago 1 reply      
They are also the most aggressive ad shoveling website I have ever seen. Their ad blocker blocker and paywall works, preventing me from reading articles.

It keeps working even when I have my ad blocker turned off. Obviously I must have some anti-tracking extension still enabled that it dislikes, but I'm not willing to spend that much time troubleshooting my browser. It's baffling to me, because the LAT is one newspaper I'd consider a digital subscription to, but as this article says the advertising/marketing people have clearly won out over the editorial, so fuck'em until that changes.

wjossey 3 days ago 1 reply      
I have been reflecting on this problem a lot over the past six months since I left the ad tech industry after four and a half years. These posts always sadden me, because I recognize the importance and value that advertising brings to web startups and the level of innovation and growth that it has enabled.

I have a hypothesis on how we can reduce the number of these types of ads, while also not harming advertisers or publishers in terms of reach and revenue.

I believe that we are in an advertising death-spiral. Sites are adding additional impression opportunities and ad placements. This is triggering higher numbers of impressions available to programmatic buyers. The additional number of available impressions is devaluing the impressions (we're flooding the market), which has led to a perpetual decrease in CPMs every year. This leads to publishers pushing higher "engagement" ads, which users just find terribly annoying, as well as more ad units. The cycle repeats, repeats, and repeats, just so both sides can "stay effective" with regards to whatever metrics they are measuring against.

My belief is that the LA Times does not need N ad placements per article. They probably only need one. We don't need to junk up quality news organizations with Taboola and the other "content" recommendation platforms. We quite literally need a détente.

The LA Times goes down to one ad placement per article. The LA Times advertiser is guaranteed viewability (high placement in the article, for example) and 100% share of voice. The "impact" of that ad increases with the overall decrease in other "noise". The cost also goes up, but commensurate with the increased value. Both sides likely end up making / paying the same amount of money, with likely the same level of impact for the advertiser, but they reduce the pain on the user, which they should both care about deeply.

I'd like to believe this is an opportunity from a business perspective. I believe that someone could demonstrate this value, in some way, to both sides of the market. The advertiser would need fewer impressions to achieve the same level of value / impact of their ads, which also has the side benefit of reducing additional costs around TPAT tracking, analytics costs, general tracking costs, etc., which are often priced on a CPM basis (so, fewer CPMs lowers their bill). The publisher would potentially see better engagement from their users, fewer ad blocks, and a higher quality experience.

I think for the sake of newspapers and advertisers alike, finding some way to make this a reality makes it a problem worth solving.

mirimir 3 days ago 0 replies      
Well, I can read the article in Firefox with Adblock Plus if I enable Reader View before the modal box opens :)

Learned that trick here. Thanks, guys.

Edit: Automatic Reader View add-on eliminates the race.

intrasight 3 days ago 1 reply      
I visited the article he mentioned and had no issues. Ads, videos, and everything from third-party sites were blocked, and all I had was the text of the article. I think the author needs to tune his ad-blocking tech.

Edit: uBlock Origin blocked 20 third-party domains. Totally typical of web sites these days.

imgabe 3 days ago 0 replies      
I find sending LA times articles to Instapaper manages to get me a readable copy of the article. It's a bit more of a hassle, but worthwhile if the article looks really interesting.
alrs 3 days ago 3 replies      
Browse with w3m or elinks, problem solved.
BuffaloBagel 3 days ago 0 replies      
So glad to see others complaining about this. I am totally dismayed at the sluggishness of LAtimes.com. It was usable with an ad blocker turned on, but since being forced to turn it off recently, the only computer I can use it on is a dual-Xeon with 32G of memory and a modern gaming card, and even then it's a struggle. I'm shocked that this kind of botched hackery can exist at a major newspaper. It's ugly bad.
manigandham 3 days ago 0 replies      
I work in adtech. The reason this happens is because of poor dev knowledge/resources by most publishers (although shouldn't be a problem with LA Times) and because the ad industry has perverse incentives combined with absolutely no oversight or enforcement.

99% of these companies are in business by running as many impressions/clicks/whatever "engagements" as possible regardless of user experience so we end up with this tragedy of the commons.

donohoe 3 days ago 1 reply      
I see this all the time. No regard for mobile users that are paying for data plans. To me the big reason to have ad-blocker is data, not just UX.

I'm focused on getting a mobile article page down from 20+ seconds on 4G with 300+ requests and page-weight of 2MB+ (mostly ads).

Right now, on CI environment it is averaging 2.1s, 350K in size, <45 requests, and no ads in the initial view.

Sadly, Hearst doesn't own the LAT so it won't help them.

dba7dba 3 days ago 0 replies      
First, did others see this HN post? https://news.ycombinator.com/item?id=13956807

RJ Reynolds' recruiting guidelines SPECIFICALLY wanted sales (or marketing) people with a 2.8-3.1 GPA. NO wonder so many ad server tags by the sales/marketing types are FULL of errors.

Anyhow, the very first time I experienced a Mac computer crash HARD was when I opened NYT.com. Yes, NYT.com.

It was a typical Monday morning, around the year 2011. I got into the office, got coffee, tapped the Mac keyboard to wake up my iMac, and proceeded to open the website I opened every morning, nyt.com.

I noticed some flash based ad doing some fancy thing in the top banner. But whatever.

I continue doing my thing.

Wait, what? My iMac is frozen. iMac! This can't be true. I frantically pound on the keyboard but nothing works. Out of desperation, I power-cycle it.

When it comes back up, I slowly bring things up one at a time. And I realize it was the NYT.com's flashy flash ad that caused the crash.

When I upgraded my slightly outdated Flash plugin in my Firefox, I could view the nyt.com homepage without my iMac freezing. I could HEAR the mechanical HD in my iMac grinding, and the CPU widget showing spikes, whenever I opened the nyt.com homepage.

Because of an ad on NYT.com, I'm pretty sure millions of people experienced their computer crashing that Monday morning.

alistproducer2 3 days ago 0 replies      
Another reason I keep JS turned off on my phone and most news sites on desktop. Seriously, try it.
chiefalchemist 2 days ago 0 replies      
Clearly, there's got to be a better way. The question is, will we find it before my "unlimited" data tops out?
leoh 3 days ago 0 replies      
I've noticed that if you are running an ad-blocker and disable JavaScript (i.e. from the Chrome debugger), you can view the page just fine.
kccqzy 3 days ago 0 replies      
I'm a frequent visitor of L.A. Times and I've already learnt to press Cmd+. as soon as all the content I want has been loaded.
aftbit 3 days ago 0 replies      
I actually installed uMatrix just so I could disable all cookies and Javascript on LA Times. No more ads or anti-ad-blocker.
bogomipz 3 days ago 0 replies      
Does anyone know whether, if you have a paid subscription, the LA Times still insists on sending you 14 gigs of ad data?
jbclements 3 days ago 1 reply      
so, as someone who keeps meaning to try it out but hasn't yet: how does Brave do with this site?
jankotek 3 days ago 0 replies      
Hit Esc (or Cancel) button to stop JavaScript execution. Works well since Netscape 3.x
eXpl0it3r 3 days ago 0 replies      
Do they offer an RSS feed? Does adding the article to Pocket or a similar service work?
inka 3 days ago 0 replies      
"Their ad blocker blocker and paywall works, preventing me from reading articles."

Well, the ad blocker in Chrome seems to be blocking the ads on the page quite well. Some single requests are being logged after the page loads, but nothing like what the article mentions.

stvnbn 3 days ago 1 reply      
https://pi-hole.net/ Turn your Raspberry Pi into an ad blocker.

I just want to spread the word and make the world a better place.

an_account 3 days ago 3 replies      
Does buying a subscription remove all these ads?
shams93 3 days ago 0 replies      
Why I use the Android app TextBrowser
good_vibes 3 days ago 2 replies      
The data points for my hypothesis just keep connecting.

edit: just so I know, why does something this innocent get downvoted? I don't understand what I did wrong.

debt 3 days ago 2 replies      
"They are also the most aggressive ad shoveling website I have ever seen. Their ad blocker blocker and paywall works, preventing me from reading articles."

So their ad blocker blocker and their paywall kept you from reading the articles for free? Why don't you just pay them?

It's just a shitty thing to do at this point. If they have high-quality articles, then what would it take for you to pay them? Do they need to be on Patreon or Kickstarter or something?

Next.js 2.0 zeit.co
622 points by tbassetto  2 days ago   215 comments top 33
migueloller 2 days ago 3 replies      
We recently used Next.js to build out an MVP. It was a pleasure to work with.

For those wondering what this is, it's basically a slightly more opinionated Create React App [1].

Here are some of the benefits:

- No need to setup complicated tooling

- Server and client (SPA-style) rendering out of the box

- Filesystem for routing using the `pages` directory

- `getInitialProps` component lifecycle to encapsulate data requirements per component

- Automatic code splitting for each route

- If it works with React, it works with Next.js
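The routing and data-fetching bullets above can be sketched as a toy (illustrative only — these few lines are not Next's actual implementation, and the page object here is a plain-JS stand-in for a React component):

```javascript
// Filesystem routing: the file's path under `pages/` becomes the URL route.
function routeFromFile(file) {
  // pages/index.js -> "/", pages/about.js -> "/about"
  const path = file.replace(/^pages/, '').replace(/\.js$/, '')
  return path === '/index' ? '/' : path
}

// A "page" with a getInitialProps hook: data is resolved before render,
// on the server for the first request or on the client when navigating.
const aboutPage = {
  getInitialProps: async ({ query }) => ({ user: query.user || 'anonymous' }),
  render: (props) => `<h1>About ${props.user}</h1>`,
}

routeFromFile('pages/about.js') // "/about"
```

The appeal is that the route table and the per-page data requirements live next to the component itself, rather than in a central router config.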

Here are some of the issues we encountered:

- Importing non-JS files from `node_modules` (like normalize.css) was not as simple as it could be (#1245 [2])

- Animation transition between pages is still being worked on (#88 [3])

- There are still some inconsistencies between server and client that could improve, like running Webpack on the server (#1245 [2])

- Doing proper scroll restoration when routing (#1309 [4])

We will continue to use it as long as it keeps letting us move fast without having to worry about spending hours setting up React tooling.

[1] https://github.com/facebookincubator/create-react-app

[2] https://github.com/zeit/next.js/issues/1245

[3] https://github.com/zeit/next.js/issues/88

[4] https://github.com/zeit/next.js/issues/1309

jonknee 1 day ago 6 replies      
This page has some huge issues loading resources. I kept it open as a tab to dive into later and noticed it was still spinning after a long time... I refreshed with Developer Tools open and after 2 minutes over 100MB had been transferred!

It appears the screencast video is to blame, it is not only huge (~70MB), but it keeps getting downloaded instead of just replaying. I just checked the console again and the page is now up over 300MB downloaded. Glad I wasn't on mobile data!

Update: I downloaded the video and ran it through ffmpeg to see how much space could be saved... Original size 72.2MB, new size 1.7MB. Screencasts obviously compress very well, but this was pretty surprising. You could easily optimize this down further and probably halve the size yet again.

 ffmpeg -i hello-world_2.mp4 -vcodec libx264 -preset veryfast smaller.mp4

Stamy 2 days ago 3 replies      
Vue.js ecosystem has alternative as well. It is Nuxt.js (https://nuxtjs.org/)
ndreckshage 1 day ago 1 reply      
Next is great. I would love to deprecate my SSR starter kit - Sambell - https://github.com/humblespark/sambell - once it supports a layout file, for animated transitions, etc. Right now, React Router seems fundamentally more powerful as a SPA framework.
smdz 1 day ago 2 replies      
I don't understand why people here are comparing Next.js to create-react-app.

I see Next.js as primarily focused on server-rendered React, with client rendering also supported, while create-react-app is primarily focused on client-side JavaScript. Am I missing something?

tabeth 2 days ago 4 replies      
Ah, so close.

I really just want something like Handlebars, with a generic adapter for any language for compilation.

Then you can load with javascript a client-side controller if you want to add in interactivity.


To expand, I think the ideal solution would be the following:

1. Server side view, let's call it "Bars"

2. Bars can be written on the front-end or back-end, doesn't matter. It'll compile to BarsHTML.

3. If you write Bars on the back-end, then when you serve your page it'll compile Bars into BarsHTML.

3a. If you want to sprinkle JS onto Bars, you'll use BarsControllerJS, an adapter to whatever framework of your choice, to manipulate it. The main difference between this and manipulating the DOM is that the interface to do this is not DOM centric.

4. If you write Bars on the front-end, it's the same as (3), but you get (3a). If you decide you'd rather do server side rendering, you literally just move your views to the server. The controller is already abstracted, so you wouldn't need to do anything else.
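A minimal sketch of the "Bars" idea above (all names hypothetical — this is a toy placeholder renderer, not a real library): the template is just a string, so the same compile step can run on the server or in the browser, and interactivity is layered on separately.

```javascript
// Toy "Bars" compiler: {{name}} placeholders are filled from a context object.
// The compiled function is the "BarsHTML" step — runnable on either side of the wire.
function compileBars(template) {
  return (ctx) => template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in ctx ? String(ctx[key]) : '')
}

const render = compileBars('<h1>{{title}}</h1><p>{{body}}</p>')
const html = render({ title: 'Hello', body: 'Server or client, same output.' })
```

A "BarsControllerJS" adapter would then attach behavior to the rendered HTML without the view layer needing to know which framework is used.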

anupshinde 1 day ago 1 reply      
It would be great to see TypeScript support with NextJS out of the box. It can be done with some work today.

Inspired by an earlier(probably first release) of NextJS, I had attempted TypeScript and server-rendered react here:


egeozcan 1 day ago 0 replies      
I wonder how hard it would be to programmatically cache a Next.js app with a ServiceWorker to make it also work offline. I guess one could copy the logic from the prefetch script as it makes network requests like this: http://i.imgur.com/c3H176u.png
jorjordandan 1 day ago 0 replies      
The link is wrong for the Koa server example. It points to the Hapi example, it should point here: https://github.com/zeit/next.js/tree/master/examples/custom-...
zackify 1 day ago 0 replies      
If anyone wants to see, we are using next.js 2.0 on https://kimmel.com. Yes, the designer added way too many fonts, but overall it's a really nice static site thanks to next and the programmatic api allowed me to generate blog post pages!
TeeWEE 1 day ago 1 reply      
Looks cool. But everybody who thinks of building a webapp: try to do it without React if you don't need it. For example, https://next-news.now.sh/ is MUCH MUCH slower than normal Hacker News. And it's also much more difficult to maintain, and less future-proof.


beezischillin 1 day ago 1 reply      
Awesome stuff. Just a quick question about something that doesn't click with me: how would you use Redux to manage application state for multiple users, server-side? If someone could drop me some examples or articles on this, I'd be really thankful!
jakobloekke 2 days ago 17 replies      
I'm curious: where do these anti-React (anti-SPA?) people come from, technologically? Rails? PHP? Some .NET stack? Do you think the web peaked with Perl-based CGI scripts? I've been in this game for many years, and to me, any technology that speeds up development cycle time is an improvement. There's a reason React is so popular.
mike-cardwell 2 days ago 0 replies      
I rebuilt https://www.emailprivacytester.com in NextJS v1 - https://gitlab.com/mikecardwell/ept3/tree/master - Was a pleasure to use. My main problem though was that I couldn't add custom routes for API end-points, so had to split the application into a frontend (in Next) and backend express app. Looks like v2 fixes this. I will definitely be updating to take advantage.
andrewgleave 2 days ago 0 replies      
The MP4 included in the page causes graphics corruption and locks my MBP in Safari 10.1. Anyone else seeing this?

Save your work before trying, though.

hoodoof 1 day ago 1 reply      
>> "More than 3.1 million developers read our announcement post of Next.js. "

Really? 3.1 million developers? I'm not saying I don't believe, but wow, how?

edit: actually I am saying either I don't believe or you've miscalculated somehow.

sergiotapia 2 days ago 3 replies      
Next looks very interesting and feels very much like the PHP of old where stuff just works with sane defaults. Very cool stuff.

My one criticism of this is: Component CSS is cancer. I've worked on a large scale javascript project and it was riddled with duplicated CSS in every component, all in the name of being conflict-free.

You know what I call that? Not knowing how to scope your styles properly with something like BEM. http://getbem.com/

xutopia 2 days ago 4 replies      
There is something I do not understand. Why does `<div>` not create a syntax error in the Javascript examples? Is there a pre-processor?
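To the question above: yes, there is a pre-processor. A rough sketch of what the JSX transform (typically Babel) emits — using a stub in place of React.createElement, for illustration only — shows why `<div>` never reaches the JavaScript engine as syntax:

```javascript
// Tiny stand-in for React.createElement; real JSX compiles to the real one.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children }
}

// JSX source:       <div className="greeting">hi</div>
// compiles to roughly:
const el = createElement('div', { className: 'greeting' }, 'hi')
// `el` is just a plain object describing the element — no special syntax survives.
```

So the angle brackets exist only in the source file; by the time the code runs, they are ordinary function calls.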
mmgutz 2 days ago 0 replies      
If I'm understanding correctly, seems like this is back to the old days when routes mapped directly to pages. From experience with ASP, it is not a good thing. You need to separate the routing from the presentation ala MVC. I can see the benefit if next.js is intended to be create-react-app++.
fourstar 1 day ago 1 reply      
I want to use this but I'm hesitant to do so for a project that's backed specifically by a single company. Can someone help alleviate my concerns?

I had to verify my email address and accept the TOS before I could use their command line tool.

asadlionpk 2 days ago 0 replies      
I have recently migrated my boilerplate code to Next.js and it's awesome to work in. Previously I had my own code-splitted/server-side rendering setup but next does it better!
swlkr 2 days ago 0 replies      
I used next on a side project, zero boilerplate is refreshing.
n3bs 1 day ago 1 reply      
How does this compare to react-boilerplate? I've been using react-boilerplate recently and it's been solid so far.
kbody 1 day ago 0 replies      
I would love and use this if it was using Riot instead of React.
tonetheman 2 days ago 3 replies      
Frameworks made of frameworks... frameworks all the way down
tambourine_man 2 days ago 0 replies      
I guess I became way too cynical.

I thought it was satire, based on its name. Maybe there is some hidden self-deprecating joke that went over my head, but Next + JS + 2.0 seemed too buzzwordy to be true.

The tech looks interesting. It still looks to me like a lot of stuff that I don't understand why I would need, but that's an old rant I have with the current JS ecosystem.

geniium 1 day ago 0 replies      
Very promising!
EGreg 1 day ago 1 reply      
Not to toot our own horn, but I wanted to remark on how similar these issues are to the ones we have had to deal with in our own framework (https://qbix.com/platform) since 2011.

When we built it, we had out of the box:

+ Tools (our name for components)

+ Pages (to support all web standards)

You place tools on pages, the framework does the rest:

+ It loads JS and CSS on demand

+ It adds/removes tools from pages automatically as you navigate

+ Support for web standards, HTML5 history w fallbacks, etc.

+ Tools can contain other tools

+ You have events for when all parent tools have activated (onActivate) and when all child tools activated (onInit)

+ JS and CSS had to be namespaced by convention from day 1, by module and tool name.

We use events instead of virtual DOM. We didn't use fancy JSX or preprocessors to do it. It's all written in ES4 JS and runs on every modern browser and IE8.

But the problems are very similar in scope.

iamleppert 2 days ago 3 replies      
Polarity 2 days ago 0 replies      
nice, thx!
vincivince 2 days ago 0 replies      
Nice work.
ergo14 2 days ago 2 replies      
Why in the world everything needs to be made with react that makes it incompatible/hard to integrate with any other frameworks?
grumblestumble 1 day ago 0 replies      
i know this is a tangent and not directly related to next.js, but the react ecosystem's insistence on coming up with increasingly convoluted and self-defeating stories around how to handle CSS instead of just using the tool the way it's intended is pure insanity. the emphasis with styled-jsx favors encapsulation with no real thought put into external overrides or customization, which makes this a non-starter for creating vendor components that are actually usable. compare https://github.com/zeit/styled-jsx, which can only euphemistically be called a naive implementation, with the documentation around custom properties, with the styling documentation at https://developers.google.com/web/fundamentals/getting-start....
Show HN: Kite, copilot for programmers, available for Python kite.com
526 points by adamsmith  22 hours ago   231 comments top 63
arihant 18 hours ago 3 replies      
Since this program uploads code to the cloud, it would be worth clarifying whether it strips out strings before upload. If it does not, that is a serious concern, as it puts secret keys in code at awful risk.

They also run a background process that needs to be manually killed to be able to uninstall. It feels like a quarantine. This is an editor plugin, is there really no simpler way to provide uninstall capability?

languagehacker 19 hours ago 2 replies      
I just tried Kite on my Mac, and I was really not pleased with it. Uploading all of your code to the cloud is questionable at best when the code you're working on isn't necessarily your own. Having Kite running in the background without a way to disable or uninstall it feels like nothing short of malware. The lack of documentation for how to uninstall Kite from your machine or how to remove your data from their cloud is also pretty worrisome.
adamsmith 22 hours ago 10 replies      
Adam from Kite here. Thanks for all the feedback and encouragement around the launch today. We're excited to be opening up Kite for everyone to download today.

When we launched Kite here on hackernews almost a year ago we were blown away by the enthusiasm for our smart copilot vision. Over 65,000 of you signed up for Kite in the first 72 hours, and over the past year we've been working with many of you to deliver that vision. It's taken a momentous effort, but today we're ready to take off the wrapping paper and open up Kite to the world.

Here's what we've been working on:

* Deep editor integrations: to make Kite better for smaller screens and more integrated into the coding workflow. You no longer have to dedicate a sidebar of your screen to Kite; instead, recommendations from Kite replace your editor's autocompletions and hover results.

* Fine-grained privacy controls modeled after the .gitignore file format means that you can selectively and precisely decide which files and folders Kite indexes.

* Next generation type inference engine that uses both static analysis and statistical inference over Github. Kite beats PyCharm and Jedi by 32% on a typical Django project, offering more completions when you need them.

* Ranked completions which put the most relevant completions at the top of the autocomplete box using techniques traditionally used in web search.

* Kite for Windows. (And Linux in testing!)

Check it out at kite.com.

rohit33 22 hours ago 4 replies      
Curious to try Kite, I started to integrate the Kite plugin into PyCharm — until I saw they keep our code in the cloud, which is what enables Kite to do what it does. I'm not sure how many of us would be OK with our code being stored in a private cloud!
nichochar 17 hours ago 1 reply      
I appreciate people trying to build "cool" products, but the downsides of this are so high that people should heavily consider never using it.

Uploading all of your code to the cloud is a massive liability. To top it off, the people interested in "something magical that codes for me" are not the good developers; their users are most likely beginners, bootcamp coders, junior engineers, etc...

I think they're abusing trust through obscurity; people have no idea that their code is being uploaded. Making this the default for a very common Python autocomplete plugin in Atom is even worse... see this: https://github.com/autocomplete-python/autocomplete-python/i...

hasenj 21 hours ago 2 replies      
In my professional job I work with code that is private and copyrighted by the company that's employing me and paying my salary, not to mention that sometimes I edit files that contain sensitive or critical information like passwords and secret encryption/decryption keys.

Anything that sends all my code to the cloud is automatically disqualified.

EDIT: thanks for the downvote btw.

zeptomu 21 hours ago 8 replies      
Maybe a little bit off-topic and controversial, but in my opinion auto-complete is overrated.

Doing software development is mostly reading code and documentation. Obviously one also writes code and for sure one can't memorize every function or package name, but searching for it isn't that much of a bottleneck? Some time ago I wrote Java using Eclipse (which had/has reasonable auto-complete), but when I switched to different languages, I also switched my IDE and mostly use plain text editors these days. There are auto-completion tools for text editors, but I just never invest the time to activate or configure them and AFAIK there aren't completion tools which work well across different languages.

Maybe I revisit them at some point, but at the moment I do not really miss auto-completion.

tekklloneer 20 hours ago 2 replies      
I straight up cannot use Kite. The "code-to-cloud" functionality means that I cannot use it at work. I would love to use it, but it's a non-starter.
inputcoffee 22 hours ago 1 reply      
Where are the Instructions?

okay, so I am excited about this, don't mind some code in the cloud, but I am having trouble with a quick start.

Downloaded it, had trouble launching it (expired certificate).

Once I did launch it there are no instructions.

I went into the tray and went to settings. It was trying to map my WHOLE USER FOLDER.

I turned that off, and whitelisted a smaller folder for it to use. Set up a small test python file. Opened up a sublime file.

Can you include some instructions about how Kite is supposed to integrate with anything? I see this cool video but it is not obvious how I am supposed to get it to work for myself.

atarian 21 hours ago 1 reply      
How do I uninstall Kite on OSX? It seems you guys keep a Kite Helper and Kite Engine process up that's impossible to quit out of and prevents me from deleting the app.
devy 17 hours ago 0 replies      
Just installed it, but realized that our code cannot be shared to the cloud with a 3rd party. So I am trying to delete/uninstall Kite. Been wrestling with com.kite.KiteHelper for the last half hour and still couldn't get it out of my laptop's memory. Tried "killall", "kill -9", and force quit from Activity Monitor. It kept reviving. And yes, I've checked out the help site, and this article in particular didn't help: http://help.kite.com/article/22-how-do-i-quit-kite

Already disliking this software...

progval 20 hours ago 3 replies      
Could you make your website not display a blank page if the browser has Javascript disabled?

The content does not seem dynamic, so a simple HTML page should work.

vitiral 21 hours ago 0 replies      
Great, now Uncle Sam knows everything I'm thinking while I program.

No thanks, I'd like to have SOME privacy. What I punch into my editor shouldn't be public until I git push.

citruspi 21 hours ago 1 reply      
Why are you encrypting my password (as opposed to hashing it)[0]?

[0]: http://i.imgur.com/59VOotU.png

jameside 13 hours ago 0 replies      
I'd be interested in trying Kite for JavaScript when it's ready. Most of my company's code base is open source and we do a lot of open source work so Kite could be a nice fit one day. Trying out Kite on our actual code base for a week would be a real litmus test for me.

We're comfortable with sending our closed-source code to GitHub and our secrets to Google Cloud and AWS so I can see a path towards being more comfortable with uploading code to Kite as well. Some guarantees around privacy and the ability to delete our code and derived data could help assuage concerns.

In the meantime, perhaps you could highlight that the code uploading is opt-in on a per-file or per-directory basis (though one issue with this is that our open-sourcing system allows for private subdirectories within public parent directories and we'd want finer control)? I'd feel good about having clarity around what's uploaded and what's kept local.

In any case this seems really cool for open-source projects to start with. I'd definitely give the JavaScript version a try. And do you think you could add a VS Code extension?

simplehuman 14 hours ago 0 replies      
Why so much concern about the code? GitHub, Travis, and others all do the same...
bartkappenburg 21 hours ago 2 replies      
Just an honest (legal) concern:

Is stack overflow ok with having their answers inside an IDE? This decreases the number of pageviews on SO for each installed client. Is that something you guys checked?

AstralStorm 20 hours ago 0 replies      
In the meantime, get your code grabbed by major companies writing search engines.

Good luck with privacy.

Bonus points for accidental license violations.

jentulman 22 hours ago 1 reply      
Has your cache/proxy fallen over? I'm getting a 404 for the base domain

404 Not Found

Code: NoSuchKey
Message: The specified key does not exist.
Key: index.html
RequestId: 759C55C7EA94F7D8
HostId: 2i2HH8A3vp5KFvhHhHeoQ+6AiFL/kjd5iByJy6Ouo/pbKwE2xaKP8Es4SU3//1/P7M/5KWJXQv8=

welder 21 hours ago 0 replies      
Useful HN Discussion from the original 1.0 launch:


tedmiston 21 hours ago 0 replies      
Congrats on the launch! As an early beta user on public code, I was really impressed by Kite, and my only concern was the ability to use it on private codebases, i.e., work code. Glad to see that you've addressed that.

Does the Sublime integration support packages installed in the current virtual environment (that might not be publicly available)?

Aside: The pricing page is broken on iOS.

nikhil13 3 hours ago 0 replies      
I have been using it on Sublime. After adding Kite it has started lagging, a lot. And that's with a quite good configuration on my laptop. Hope you look into it.
sidmitra 19 hours ago 0 replies      
The Linux version isn't available still.

Is it just on HN or are there very few people now who use Linux as their main dev machine? With some of the build quality of the new Dell Machines I would have assumed any dev tool would be Linux first, since almost everyone is using some form of 'nix on the servers.

I've never had much trouble installing the latest Ubuntu on any of XPS series(except the 'suspend' feature is weird).

EDIT: nvm i see from another comment that the Linux version is in testing. But still weird to see Mac devs outnumber Linux ones(or maybe they're just a vocal minority :-) )

replete 6 hours ago 0 replies      
Looks awesome but there is no way in hell I'm uploading my code to your cloud. Instantly violates NDAs.
nuggien 7 hours ago 0 replies      
Not sure if this has been thought of before, but why not just have the Kite cloud index open-source and public code, and keep a separate local index for the user's project code? That way, autocomplete/help/doc searches hit the user's local project index first, and then fall back to the Kite cloud's index of public/open-source code.
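That two-tier lookup might look like this (a toy sketch with hypothetical names, not Kite's actual design): the private index is consulted first, and only prefixes with no local match fall through to the shared public index.

```javascript
// Build a completer over two prefix indexes: local (private code) and public.
function makeCompleter(localIndex, publicIndex) {
  return (prefix) => {
    const local = localIndex.filter((s) => s.startsWith(prefix))
    if (local.length > 0) return local      // private code never leaves the machine
    return publicIndex.filter((s) => s.startsWith(prefix))
  }
}

const complete = makeCompleter(['my_helper'], ['os.path.join', 'os.path.exists'])
complete('my')      // served from the local index
complete('os.path') // falls back to the public index
```

The privacy win is structural: queries against private symbols are resolved entirely locally, so only generic public-library lookups would ever need the network.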
Sir_Substance 20 hours ago 1 reply      
Interesting project. If I look at one of your code examples/snippets, realize that's exactly what I need and copy it verbatim, where does that leave me, legally?
Philipp__ 21 hours ago 0 replies      
Looks really cool. Anyone tried to see how it integrates with Emacs?
jd20 17 hours ago 0 replies      
On the pages for plugins (like Atom, Sublime, etc...) you might want a simple "how to install". Took me several minutes of confusion, to realize I should open up Atom and search for Kite from there. I kept thinking there should be a download link for the plugins, before remembering that's not how editor plugins get installed these days :)
shultays 7 hours ago 0 replies      

 Most Popular Articles
and the first one

 How do I uninstall Kite?
I guess I will pass

slang800 15 hours ago 0 replies      
Has anyone tried building something like this, but doing the analysis locally and just pulling from a documentation repository like Dash? I don't like the idea of uploading my code to their server, or using a proprietary tool, but I really want documentation lookups in my editor.
theSoenke 16 hours ago 0 replies      
This seems really great on the first look, but uploading the code is a real issue. It is basically a keylogger
li4ick 21 hours ago 1 reply      
No GNU/Linux support? Well, remind me when you do.
axonic 10 hours ago 0 replies      
Dear Kite, I really love this idea, but hell no I'm not using it yet. Here's why... I'll cut to the point here, so please forgive the bluntness as I mean no insult or accusation, just honest criticism, and I'm gonna try to cover a lot in as small a space as possible.

There's not even a mention on kite.com about how data is handled that I can find anywhere. What is the method of transport? What stands between skids and my code? The server my data goes to — is it shared VPS hardware waiting to get pwned by your neighbor, xtremecrackz.zyx, or is it on private servers guarded by a three-headed puppy, 13 ninjas, and biometric security? Does the page even mention this is a cloud service somewhere? I see support for VS Code, but not MSVS proper; emacs but not specifically GNU/Linux yet; Mac support but not Linux, in spite of at least $4M USD in seed and 3 years of development (source: crunchbase [1])? The Windows download page gives instructions for bypassing SmartScreen warnings, meaning your code signing certificate has no reputation with Microsoft yet, if I understand correctly. Frankly, I didn't think "Adam Smith" was even a real person until I checked it out. LOL, sorry bro but it sounds kinda generic to someone skeptical I guess. Maybe you assume trust since you travel in the circles you do, but we nutjobs like stuff in writing, and trust assumptions without verification are bad practice anyhow -.-

(on trust) Your investor who may or may not provide the same or similar "Kite" software discussed in GCHQ leaks as a "correlates-anything" solution, Palantir Technologies, has been standing in the suspiciously shadowy center of a maelstrom in some circles. I like them supporting our warfighting - but not working against the people of the United States, or anyone's civilians for that matter, however that's an argument for the agencies they contracted with. I've watched my brothers bleed out defending the rights their software has helped undermine, I'm not sure how to feel about them at all right now. Do I want to give my code to their creepy software? No, not really, since I'd have to consider that if they got a contract they might, without even knowing the end use, build software to guide Terminators to hunt down and kill civilians who write bad code or wear plaid socks. Seriously though: eyebrow raised.

(advice) I would add clearer information about how this all works. A link to security answers should come up before the footer, IMO, given the nature of this product. Having gone out of my way to look for it, it seems like security was an afterthought. I can appreciate your blog post about security [2] and the main security page which links to that article (merge these?), but they fail to answer almost all of my questions. They imply that the service isn't really ready for the spotlight, but nowhere do they explicitly say to safeguard sensitive code or to withhold trust for now; it's only softly implied.

(bigFoilHat) This might sound far out to some (feel free to ignore or laugh), but if I were an evil puppet master, I'd have my cybersecurity and intelligence contractor, one who provides mission-critical software or monetary capital to a startup, attempt to leverage that relationship to gain information about code in the wild and about specific targets' code using this service: perhaps to have software look for opportunities to steal parts of keys, suggest code changes that enable exploitation, or forward copies of code from persons of interest to investigators. I might ask them to approach the company as patriots in the interest of the GWOT and all things decent, to tacitly and deniably, or perhaps even expressly, cooperate with legally and morally grey-area surveillance operations. If there were no cooperation, or just to keep it quiet, I might suggest they infiltrate Kite.com and gain the ability to intercept data clandestinely, using their trust and rapport with company leadership. "Plz send all code to spies and disable security stuffz kthxbai." I can weaken my own PRNGs and send copies of my code for spooks to analyze by myself without assistance, thanks. Again, I'm attempting to honestly characterize how it makes me feel, just sayin'. I simply have no way to even fool myself into thinking I can know what goes on with my data after it leaves my PC. How do I even build rules for my firewalls? What are the parent processes which need communication, on which ports, using what protocols? Which servers will it upload to? Can we blacklist certain destinations by region or other attributes? I think you need a more robust explanation on the site before us crazy people are satisfied.

(bigFoilHat Q) HN: what say you, am I just being paranoid here in thinking that users' analyzed code may end up being displayed on an alphabet soup agency wiki somewhere along with download links for tools to suprisebuttsecks us being passed out to every malware hoarding contractor who accidentally skated past the SF-86? Maybe I'm just having a bad bout of Stallman Syndrome. One might argue "99.99% of users' code will be useless fluff and bizcruft, who cares if they copy my der.py code?" but finding that 0.01% relevant signal in the noise is exactly what Palantir does for customers, isn't it? So how can I flippantly dismiss the notion?

(Q) Do you knowingly sell, gift, trade, share, or otherwise disclose or make available any information about users' personal data or source code, even if anonymized or generalized in reports and detached from identifying information, to other parties? Can/will/do these parties include your investors? Does Palantir Technologies store, use, or have access to, at any time, our source code or any information about it or ourselves?

That said, it sounds cool as phrack, and I would love to see this in many languages and editors, but only if it can be trusted somehow. I'll be watching and investigating. Thanks for sharing this on HN.


[1] https://www.crunchbase.com/organization/kite-com/
[2] https://kite.com/blog/thoughts-on-security

Please correct anything I am mistaken about, I admit I could be completely off the mark here.

pkrefta 22 hours ago 1 reply      
Are there any plans to support Vim/Neovim ?
michaelmior 19 hours ago 0 replies      
> it has twice the documentation coverage of any other tool.

Curious how they could possibly quantify that.

madisonmay 21 hours ago 1 reply      
How close are you to a linux release?
Scaevolus 18 hours ago 0 replies      
Does this have anything to do with Kythe, "a pluggable, (mostly) language-agnostic ecosystem for building tools that work with code"?


ezekg 22 hours ago 1 reply      
Looks awesome. Congrats on the launch! I'd pay for a Ruby/Rails version of this.
turtlebits 21 hours ago 0 replies      
The documentation font for me is way too small; any way to make it bigger? You can see my IDE font size on the left.


js8 22 hours ago 1 reply      
I wish I had something like that for Haskell.. it could work by expected return type.
hollander 17 hours ago 0 replies      
Little Flocker and Little Snitch nightmare, this is.
jnordwick 16 hours ago 0 replies      
I love the ideas in the search, and would definitely buy, except...

I work in finance, and source code in the cloud could get me some prison time.

invokesus 18 hours ago 0 replies      
Not working behind an HTTP proxy. Dealbreaker for me.
stevemk14ebr 22 hours ago 0 replies      
Do C and C++ and I'll pay.
ayuvar 22 hours ago 1 reply      
The built-in examples for method use are a really cool feature. I hate having to jump to MSDN, etc. just to find an example snippet when the argument comments are unclear.
falsedan 22 hours ago 2 replies      
> Your connection is not secure


gigatexal 20 hours ago 0 replies      
The website isn't intuitive on mobile. Do you have to do something special to get the Java client?
bcherny 21 hours ago 0 replies      
Awesome work! Any chance you can add TypeScript to the "Vote for a Language" menu?
partycoder 9 hours ago 0 replies      
This program uploads your code to a central server.

Please flag this submission.

jMyles 20 hours ago 0 replies      
Does the cloud connectivity requirement mean that kite cannot be used offline?
plazma 9 hours ago 0 replies      
This makes me want to learn Python. Any plans for JavaScript and a Vim plugin?
otto_ortega 20 hours ago 0 replies      
I hope they add support for PHP 7 soon. Seems like a very useful add-on.
chinathrow 22 hours ago 0 replies      
Congrats on the launch.

Did you address the issue which came up multiple times last time when this was on HN about cloud indexed code by default?

xxcode 14 hours ago 0 replies      
What's wrong with a Google search?
fuzzythinker 20 hours ago 0 replies      
Is support of 10.9.x (Mavericks) on the roadmap?
gigatexal 20 hours ago 0 replies      
Been waiting for this! Stoked to try it out.
CopyZero 22 hours ago 0 replies      
This looks great. Any plans to support notepad++?
partycoder 9 hours ago 0 replies      
Dash (Mac OS X), Velocity (Windows) and Zeal (Windows/Linux) do something similar. There are plugins for various editors.
alexnewman 21 hours ago 0 replies      
Seems down
nikolay 19 hours ago 0 replies      
The sidebar is way too obtrusive!
karsinkk 9 hours ago 0 replies      
I just spent an awful amount of time trying to uninstall Kite. There were two background processes, Kite Helper and Kite Engine, which showed up in Activity Monitor, and I could never get them to quit; each time I killed a process by its PID, a zombie would spawn with a different PID. Eventually I killed them both by removing the Kite packages from the cache in Library, emptying the trash, and then restarting my machine. Phew! Not to mention the slow autocomplete suggestions in Sublime Text 3. I think I'll just stick with my old setup.
Curl is C haxx.se
501 points by mhasbini  2 days ago   357 comments top 40
simias 2 days ago 5 replies      
I have no problem with Curl being written in C (I'll take battle-tested C over experimental Rust) but this point seemed odd to me:

>C is not the primary reason for our past vulnerabilities

>There. The simple fact is that most of our past vulnerabilities happened because of logical mistakes in the code. Logical mistakes that arent really language bound and they would not be fixed simply by changing language.

So I looked at https://curl.haxx.se/docs/security.html

#61 -> uninitialized random : libcurl's (new) internal function that returns a good 32bit random value was implemented poorly and overwrote the pointer instead of writing the value into the buffer the pointer pointed to.

#60 -> printf floating point buffer overflow

#57 -> cookie injection for other servers : The issue pertains to the function that loads cookies into memory, which reads the specified file into a fixed-size buffer in a line-by-line manner using the fgets() function. If an invocation of fgets() cannot read the whole line into the destination buffer due to it being too small, it truncates the output

This one is arguably not really a failure of C itself, but I'd argue that Rust encourages more robust error handling through its Options and Results, whereas C tends to abuse -1 and NULL return values that need careful checking and can't usually be enforced by the compiler.
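To make that contrast concrete, here's a toy sketch (hypothetical function, not curl code): a C-style API signals failure with -1 or NULL, which the caller can silently ignore, while a Rust Result has to be unpacked before the value is usable.

```rust
use std::num::ParseIntError;

// A C-style equivalent might return -1 on failure and let the caller
// forget to check. Here the compiler forces the error case to be handled
// (or explicitly discarded) before the port number can be used.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
}
```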

#55 -> OOB write via unchecked multiplication

Rust has checked multiplication enabled by default in debug builds, and regardless of that the OOB wouldn't be possible.
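As a toy illustration of that point (hypothetical names, not the actual curl allocation logic): plain `*` panics on overflow only in debug builds, while checked_mul reports overflow explicitly in every build.

```rust
// A size computation of the kind behind many OOB writes: on overflow,
// checked_mul yields None instead of a silently wrapped, too-small size.
fn alloc_size(count: usize, elem_size: usize) -> Option<usize> {
    count.checked_mul(elem_size)
}

fn main() {
    assert_eq!(alloc_size(10, 8), Some(80));
    assert_eq!(alloc_size(usize::MAX, 2), None); // overflow caught, not wrapped
}
```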

#54 -> Double free in curl_maprintf

#53 -> Double free in krb5 code

#52 -> glob parser write/read out of bound

And I'll stop here, so far 7 out of 11 vulnerabilities would probably have been avoided with a safer language. Looks like the vast majority of these issues wouldn't have been possible in safe Rust.

ameliaquining 2 days ago 4 replies      
I'm kind of torn on this.

On the one hand, Curl is a great piece of software with a better security record than most, the engineering choices it's made thus far have served it just fine, and its developers quite reasonably view rewriting it as risky and unnecessary.

On the other hand, the state of internet security is really terrible, and the only way it'll ever get fixed is if we somehow get to the point where writing networking code in a non-memory-safe language is considered professional malpractice. Because it should be; reliably not introducing memory corruption bugs without a compiler checking your work is a higher standard than programmers can realistically be held to, and in networking code such bugs often have immediate and dramatic security consequences. We need to somehow create a culture where serious programmers don't try to do this, the same way serious programmers don't write in BASIC or use tarball backups as version control. That so much existing high-profile networking software is written in C makes this a lot harder, because everyone thinks "well all those projects do it so it must be okay".

rwmj 2 days ago 4 replies      
While this doesn't so much apply to libcurl (but see below), there is a third alternative to "write everything in C" or "write everything in <some other safer language>". That is: use a safer language to generate C code.

End users, even those compiling from source, will still only need a C compiler. Only developers need to install the safer language (even Curl developers must install valgrind to run the full tests).

Where can you use generated code?

- For non-C language bindings (this could apply to the Curl project, but libcurl is a bit unusual in that it doesn't include other bindings, they are supplied by third parties).

- To describe the API and generate header files, function prototypes, and wrappers.

- To enforce type checking on API parameters (eg. all the CURL_EASY_... options could be described in the generator and then that can be turned into some kind of type checking code).

- Any other time you want a single source of truth in your codebase.

We use a generator (written in OCaml, generating mostly C) successfully in two projects:
https://github.com/libguestfs/libguestfs/tree/master/generat...
https://github.com/libguestfs/hivex/tree/master/generator

tannhaeuser 2 days ago 5 replies      
Not only is curl based on C, but so are operating systems, IP stacks and network software, drivers, databases, Unix userland tools, web servers, mail servers, parts of web browsers and other network clients, language runtimes and libs of higher-level languages, compilers and almost all other infrastructure software we use daily.

I know there's a sentiment here on HN against C (as evidenced by bitter comments whenever a new project dares to choose C), but I wish there'd be a more constructive approach, acknowledging that the issue isn't so much new software but the large collection of existing (mostly F/OSS) software that is not going to be rewritten in e.g. Rust or some (let's face it) esoteric/niche FP language. Even for new projects, the choice of programming language isn't clear at all if you value integration and maintainability.

devy 1 day ago 4 replies      
The 7th point: "curl sits in the boat"

 In the curl project were deliberately conservative and we stick to old standards, to remain a viable and reliable library for everyone. Right now and for the foreseeable future. Things that worked in curl 15 years ago still work like that today. The same way. Users can rely on curl. We stick around. We dont knee-jerk react to modern trends. We sit still in the boat. We dont rock it.
I see a lot of inertia in there. While maintaining 15 years of consistency is a great record, in an era of ever-changing InfoSec threats it can become legacy baggage if the authors resist change. One thing we know for sure is that humans will make mistakes, no matter how skillful they are. In the context of writing a fundamental piece of software in an unsafe programming language, that means we are guaranteed to have memory-safety-induced CVE bugs in curl in the future.

Some of the other points the author raised are valid too. But if there is a trade-off whereby we can have a safer piece of fundamental software, almost eliminating a whole category of memory-safety bugs, at the cost of less compatibility with legacy systems, more dependencies, etc., perhaps we should consider it? I believe the trade-off is well worth it in the long run, and the option is ripe to explore.

unwind 2 days ago 2 replies      
Well put.

Didn't know that curl was stuck back on C89, that's really optimizing for portability.

If anyone is confused by the "curl sits in the boat" section header, that's basically a Swedish idiom being translated straight to English. That rarely works, of course, and I'm sure Daniel knows this. :)

The closest English analog would be "curl doesn't rock the boat"; I think the two expressions are equivalent (if you sit still, you don't rock the boat).

throwaway5752 1 day ago 1 reply      
It's extremely simple. If you think Curl would be better in another language then port it, release your alternative, and maintain it for a long time.

Even if your language (Rust, Erlang, LISP, Go) is "better", it's still a minimal part of the equation. A maintainer is what makes the tool. It's hard work to decide which PRs to accept (and worse yet, reject), to backport fixes to platforms for which you can't get a reliable contributor, coordinating fundraising/donations, keeping up with evolving standards...

Anyway. Thank you, thank you, thank you Daniel Stenberg. Use whatever damn language you want.

derefr 1 day ago 0 replies      
> A library in another language will add that language (and compiler, and debugger and whatever dependencies a libcurl written in that language would need) as a new dependency to a large amount of projects that are themselves written in C or C++ today. Those projects would in many cases downright ignore and reject projects written in an alternative language.

Why would I be vendoring my own copy of libcurl in my project? Who does? This is how I (or rather, the FFI bindings my language's runtime uses) consume libcurl:

I rely on a binary libcurl package. The binary shared-object file in that package needed a toolchain to build it, but I don't need said toolchain to consume it. That would still be true even if the toolchain required for compiling was C++ or Rust or Go or whatever instead of C, because either the languages themselves, or the projects, ensure that the shared-object files they ship export a C-compatible ABI.

An example of a project that works the way I'm talking about: LLVM. LLVM is written in C++, but exports C symbols, and therefore "looks like" C to any FFI logic that cares about such things. LLVM is a rather heavyweight thing to compile, but I can use it just fine in my own code without even having a C++ compiler on my machine.

(And an example of a project that doesn't work this way: Qt. Qt has no C-compatible ABI, so even though it's nominally extremely portable, many projects can't or won't link Qt. Qt fits the author's argument a lot better than an alternate-language libcurl would.)
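A minimal sketch of what "exports C symbols" means in practice, using Rust only as an example of an alternative implementation language (hypothetical function, not libcurl's API):

```rust
// Built as a cdylib (crate-type = ["cdylib"] in Cargo.toml), this produces
// a shared object whose symbol table looks like plain C to any consumer:
// no name mangling, C calling convention.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}
```

A C program could then declare `uint32_t add_u32(uint32_t, uint32_t);` and link against the shared object without knowing or caring what language produced it.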

Sir_Cmpwn 2 days ago 2 replies      
Agreed 100%. Definitely going to be trotting this article out next time I see someone blindly arguing for rewriting xyz in Rust.

I particularly like the mention of portability. No other language comes even remotely close to the portability of C. What other language runs on Linux, NT, BSD, Minix, Mach, VAX, Solaris, plan9, Hurd, eight dozen other platforms, freestanding kernels, and nearly every architecture ever made?

kazinator 1 day ago 0 replies      
> The simple fact is that most of our past vulnerabilities happened because of logical mistakes in the code. Logical mistakes that arent really language bound and they would not be fixed simply by changing language.

This statement is laughable nonsense. Shall we go into their bug history and point out counterexamples left and right? [Edit: user simias has done this; thanks!]

Every single bug you ever make interacts with the language somehow.

Even if you think some bug is nothing but pure logic, that logic is part of a program, embedded in the program's design, whose organization is driven by the language.

coldtea 2 days ago 0 replies      
>There. The simple fact is that most of our past vulnerabilities happened because of logical mistakes in the code. Logical mistakes that arent really language bound and they would not be fixed simply by changing language.

That's wrong. A lot of the C mistakes are indeed "logical mistakes in the code", but most of them would indeed be fixed by changing to a language that prevents those mistakes in the first place.

chousuke 1 day ago 0 replies      
In my view, the problem with C in general is that it's a loaded gun with no safety or trigger guard. It's trivial to shoot yourself (or someone else) in the foot, and it requires knowledge, meticulous care and lots of forethought to avoid getting shot.

I very much agree that rewriting existing, stable software written in C is likely not worth the trouble in many cases, but I can't accept claims that the limitations of C aren't the direct cause of tens of thousands of security vulnerabilities, either.

In Rust, even a less experienced developer can fearlessly perform changes in complicated code because the language helps make sure your code is correct in ways that C does not. And you can always turn off the safeties when you need to.

Experienced developers should feel all the more empowered by simply not having to always worry about things like accidental concurrent access, use-after-free, object ownership, null pointers or the myriad other trivial ways to cause your program to fail that are impossible in safe Rust. You get to worry about the non-trivial failure modes instead, which is much more productive.

jeffdavis 2 days ago 0 replies      
"C is not a new dependency"

To just use a library, rust isn't much of a dependency, either. It's designed so you don't even need to know that it's not C.

Rust would obviously be a build dependency, but that's lessened somewhat because it tries to make cross-compilation easy.

(But this point does apply to pretty much any other language. Curl would not be used as widely if it depended on the Go runtime, for instance.)

tombert 1 day ago 2 replies      
While I'm definitely not suggesting we replace curl with a rewrite in Rust (since the current curl has had decades of good testing and auditing done on it), I am actually very curious how a rewrite in a safer language like Rust, OCaml, Haskell, or Go would fare in comparison with regard to performance and whatnot.

If I were ambitious enough, I'd do it myself in Haskell, but I think it'd be too much work for a simple curiosity.

empath75 2 days ago 2 replies      
This seems like a no-brainer for a re-implementation in rust, but I wouldn't expect that someone would rewrite curl itself in rust, but a new library that does the same things.
coding123 1 day ago 0 replies      
I don't see why this is an issue, whoever is arguing for a change can write rurl and be done, and see if anyone takes it up in their distributions.
gyrgtyn 17 hours ago 0 replies      
What is everyone using curl for that it needs to be written in C (or Rust)?

If I think about my usage, it's like: GET or POST something and see what the returned JSON looks like. If I need to download something, wget usually works without my having to remember -O.

But higher level things like httpie are easier to deal with, sane defaults and all that. Maybe they use libcurl...

Are there any re-write userland in ${safe-high-level-lang} projects?

skocznymroczny 2 days ago 0 replies      
I don't think C is a bad language, although I think it could use lists and dictionaries in the standard library. std::vector and std::map are the only things that make me pick C++ in an instant, given the choice.
adynatos 1 day ago 0 replies      
While C by itself is not safe, I would argue that no sane development environment uses C by itself. Over the decades of its production use dozens of tools have been developed that make it far safer: *grind suite, coverage tools, sanitizers, static analyzers, code formatters and so on. Those tools are external, otherwise they would make C slower. Something for something.
geodel 2 days ago 2 replies      
I think the Rust community increasingly behaves like this [1]. They are big on suggesting better 'ideas' to others instead of implementing them themselves. So they keep using curl and openssl but tell others to rewrite their software in Rust.

1. http://dilbert.com/strip/1994-12-17

tlrobinson 1 day ago 1 reply      
I'm curious: of the bugs that could have been avoided by using a "safe" language, how many could have been avoided by using a bounds-checking extension such as https://www.doc.ic.ac.uk/~phjk/BoundsChecking.html or https://github.com/Microsoft/checkedc

Are such extensions popular, and if not, why not? I assume there's always some performance hit, but that might not be a big deal in an HTTP client, for example.

renesd 2 days ago 2 replies      
CPython also has many vulnerabilities in python rather than C.

It's hilarious reading rust marketers talk about how people should use rust, and yet their software doesn't work as well. It has plenty of bugs.

Then they go on and on about issues which post modern C doesn't have. Guess what? C has a lot of tooling, and yes, it's been improving over the years too. CQual++ exists. AFL exists. QuickCheck exists.

Can your rust project from two years ago even compile? Does it have any users at all?

There's a formally proven C compiler. How's that LLVM swamp going you've built your castle on?

Rust brought a modern knife to a post modern gun fight -- and lost.

koja86 1 day ago 0 replies      
Quite recently I happily used libcurl for a C++ project rather than any of those C++ wrappers found on GitHub. Granted, there is some inelegance when you adapt C-style error codes to C++ exceptions, and non-C++-idiomatic code style sits right next to any C lib. Yet libcurl is battle-tested (AKA proved to be rather bug-free) and has a nice, clean API.

IMHO it might eventually make sense to use other language/tech/whatever but the bar is quite high and it will quite probably take some serious sustained effort.

lettergram 1 day ago 2 replies      
I feel blaming a language for errors is like blaming a gun for killing people.

The fact is, mistakes will happen, but in general if you follow the best practices you'll be fine. Failing to follow the best practices means you could be a better programmer. Just because the language gives you an option to do something, doesn't mean you should.

oldsj 2 days ago 1 reply      
> The plain fact, that also isnt really about languages but is about plain old software engineering: translating or rewriting curl into a new language will introduce a lot of bugs. Bugs that we dont have today.

Don't rewrites, even in the same language usually lead to a better version of the software? I can't really imagine a seasoned C developer introducing completely new bugs in a code base they are already very familiar with

kodest 1 day ago 0 replies      
Maybe curl could be rewritten in C++ step by step, like mpd (https://musicpd.org). C++ has RAII for resource management, which can help a lot by itself. In my opinion the most hateful thing in C is freeing resources on all exit paths.

Although, with curl in C++, the naming would become inappropriate...
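The "freeing resources on all exit paths" pain is exactly what RAII removes. A toy sketch of the mechanism, shown here with Rust's analogous Drop trait (hypothetical type, not curl code):

```rust
struct Connection {
    name: String,
}

impl Drop for Connection {
    // Runs on every exit path out of the owning scope: normal return,
    // early return, or panic unwind. No goto-style cleanup ladders.
    fn drop(&mut self) {
        println!("closing {}", self.name);
    }
}

fn transfer(fail: bool) -> Result<(), String> {
    let _conn = Connection { name: "example".to_string() };
    if fail {
        return Err("aborted".to_string()); // _conn still closed here
    }
    Ok(())
} // ...and closed here on the success path
```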

_of 1 day ago 0 replies      
Rust might be a great language, but it has not yet stood the test of time. C is 45 years old; Rust appeared 7 years ago.
krystiangw 1 day ago 0 replies      
C still has a huge market share. It seems to occur in 5% of all tech job offers:


Stats also show that the average salary for C developers is above the average for all tech job openings.

AstralStorm 1 day ago 0 replies      
Maybe they should attempt writing it using the Isabelle/HOL-to-C transpilers from the seL4 project. I don't care if it is C or machine code as long as the proof of correctness is complete, down to at least the C library.

Curl is small enough to make it relatively easy and used widely enough to make it worthwhile.

madphrodite 2 days ago 1 reply      
This is a great little read and encapsulates the other side of the 'rethink the way' trend-ism of some HN new lang advocacy. C is fine, C is good. It is widely understood, it is a systems staple, and it is not dangerous in knowledgeable hands. Rocking the boat is fashionable.
digi_owl 1 day ago 0 replies      
A refreshing read amid what seems to be an ongoing deluge of rewrites, languages, and frameworks.
trav4225 21 hours ago 0 replies      
It is utterly amazing to me to see so many people's attitudes on this issue.

If I cut myself by hasty use of a knife, is it the fault of the knife maker? How is that even remotely rational? If you aren't willing (or don't know how) to use the tool correctly, don't use it.

carapace 1 day ago 0 replies      
This has probably been said, in this thread even, but if curl is insecure (for some value of "insecure") then its ubiquity and ease of embedding are a problem rather than a feature. Fuzzy thinking.
didip 1 day ago 0 replies      
It's well within the reason and capabilities of the Rust community to write libcurl and curl CLI equivalents.

The community should do it, spend a couple of years stabilizing it, and then spread the word to others.

dmitrygr 1 day ago 2 replies      
Software is almost a perfectly open market. If proponents of rust really think their preferred language is better in every way, they are free to rewrite the world in rust, and see the adoption numbers they get. After all, if rust is better in every way, we'd expect the adoption numbers to go up for their rust OS, with a rust http stack and rust web browser. Right?

Telling others to use their language instead of putting their money where their mouth is is truly what irks me about the rust community the most.

Want a rust world? Go write it and ship it.

Oh, and you don't get to complain about C until your PC runs more rust than C


faragon 2 days ago 1 reply      
Another reason: C is beautiful.
mgrennan 1 day ago 0 replies      
Why does something old (C) have to be bad these days?
bitwize 2 days ago 2 replies      
davexunit 2 days ago 0 replies      
>C is not the primary reason for our past vulnerabilities

Completely false. C is a disaster.

fiatjaf 2 days ago 2 replies      
Why are you saying this? Who asked? I always imagined it was written in C.
Alcatel-Lucent releases source for 8th, 9th and 10th editions of Unix tuhs.org
340 points by adamnemecek  22 hours ago   52 comments top 20
stonogo 21 hours ago 6 replies      
The people complaining that this isn't free-as-in-freedom should remember that there's a lot of code in here that Nokia/Alcatel-Lucent does not and has never owned. 10th edition, specifically, was never 'distributed' and probably could not be because it contained gcc. You'll note these archives are not even hosted by the corporation. They STILL aren't 'distributing' any of this. There's no way to know a priori whether there's someone else's IP in here... the packaging method for these versions of unix was "Dennis makes a copy of a running system, including whatever happened to be on that disk."

So, this is a kind gesture made for the benefit of software archaeologists. Retroactively applying some kind of modern-hippie license would cost a tremendous amount of time and money.

tytso 19 hours ago 0 replies      
Technically Alcatel-Lucent didn't release source. They simply agreed not to sue over the source releases in question. The folks who made the source available online have been holding onto those sources for years, and have been collecting copyright non-assertion letters from various companies who might have an IP interest in the sources. Alcatel-Lucent is just the most recent company that agreed it isn't going to sue.

This is roughly the same as signing a quit-claim deed. How much significance it has depends on how strong your previous ownership interest was in whatever you are saying you won't sue over. (For example, if I sign a quit-claim assertion over the Brooklyn Bridge, it doesn't mean much. :-)

But given that this was sufficient for the people who had been keeping private copies of the Unix source to feel confident they wouldn't be sued into oblivion, it's certainly significant in that sense.

mindcrime 21 hours ago 1 reply      
This isn't open source, as the "no commercial use" clause violates a central tenet (#6) of the Open Source Definition [1].

I believe this would be closer to "Shared Source"[2] than anything else.

[1]: https://opensource.org/osd-annotated

[2]: https://en.wikipedia.org/wiki/Shared_source

bigato 21 hours ago 0 replies      
Link to the original Alcatel-Lucent statement: https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_...
Esau 18 hours ago 0 replies      
This is a little off topic, but I want to take a moment to say thank you to Warren Toomey. He is responsible for TUHS, and it is a wonderful resource for people who enjoy UNIX.

Thank you sir!!

EamonnMR 20 hours ago 0 replies      
Should be a boon to this project to create a git history of Unix:


(previous discussion: https://news.ycombinator.com/item?id=10995483)

f2f 19 hours ago 0 replies      
The README indicates that there are files for 1st and 2nd Plan 9 editions but those are not made available. [1] I guess Lucent's lawyers still want to keep their rights over those...

I have a shrink-wrapped 2nd edition distribution with manuals, but no source :(


1: http://www.tuhs.org/Archive/Distributions/Research/Dan_Cross...

cat199 20 hours ago 1 reply      

So.. Anyone have any insight on what these actually provide, feature wise over v7?

Have often wondered about these 'mystery unices'..

Am sure I will trawl the source archives.. but pointers would be useful.

t1m 19 hours ago 3 replies      
Back when I was in University, we had an Amdahl mainframe with Unix running under VM. The directory structure included an awful lot of source code. I remember porting source for lex and yacc to my PC-XT running Borland's Turbo C. I assume it was licensed to Universities and source was included under an educational clause, though I'm not exactly sure.

I wonder which version of unix I was using. This would have been around December of '87.

f2f 18 hours ago 2 replies      
ahh, the gems one finds in old source code: /games/trek/trek.h:

 #define ever (;;)

fermigier 21 hours ago 0 replies      
Good news, but it's not open source. The statement at the root of the project says only:

"[...] that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix".

aap_ 21 hours ago 1 reply      
These are the operating systems the Blit (https://www.youtube.com/watch?v=Pr1XXvSaVUQ) was used with.
loeg 21 hours ago 1 reply      
Note that this is not available under a conventional open source license, but one of the "non-commercial use" variety. Don't rush to incorporate it into your products ;-).
sigjuice 21 hours ago 2 replies      
Is it too soon to ask the question if it is possible to compile these and run them in some emulator?
jlebrech 4 hours ago 0 replies      
I'd love to see code-standards comparisons done for similar code; how does open source stack up?
digi_owl 19 hours ago 0 replies      
> Nokia Bell Laboratories

The paths of mergers and acquisitions are indeed meandering.

Nokinside 12 hours ago 0 replies      
Title should be: Nokia releases source for 8th, 9th and 10th editions of Unix.

Nokia bought Alcatel-Lucent over a year ago. See for yourself: http://www.alcatel-lucent.com

cmrdporcupine 21 hours ago 0 replies      
Awesome... but at least 25 years too late...
ScalaNovice 21 hours ago 2 replies      
A better title would be:

Alcatel-Lucent makes the source code of 8th, 9th and 10th Editions of Unix public

Since the general usage of the word "open source" implies a "free" license to use as well.

Night Shift compared to f.lux justgetflux.com
380 points by mattiemass  15 hours ago   172 comments top 36
craigc 1 hour ago 1 reply      
It seems like the replies here are very much in defense of Apple which I am not surprised about, but I do not really consider it to be warranted.

I have been using f.lux for years and it has definitely had a huge impact on me. I don't have any scientific data to back up my claims, but f.lux is a fantastic product.

When you consider that f.lux released a side loading version of their app on iOS and then Apple threatened to remove their developer license, and then after it was pulled, released their own ripped off version of the software that does not work as well, you can understand how they might be upset about that. I understand that is business, but as someone who has used both I find f.lux to be completely superior.

Apple probably should have bought f.lux and then integrated it into their products, but instead decided to do it themselves. I'm not saying the software is earth shattering, but they spent years on the problem, and it feels as if Apple implemented their version in a few days.

I think Apple should open up the screen/display APIs on iOS to allow f.lux and other similar apps to be installed. I would happily pay for it rather than use night shift.

metafunctor 10 hours ago 4 replies      
One thing I find quite annoying about f.lux is that it doesn't just have a simple custom schedule setting. Night Shift has that, and it's great.

I live very far north. In the winter the sun is up for just a few hours, and in the summer it's down for just a few. Obviously, I don't want to follow the sun for my sleeping rhythm, and exactly nobody over here does.

Most of the time, I go to bed based on the clock. We use lots of artificial lighting in the winter, and window blinds in the summertime. I'd like to simply configure when I expect to go to bed, and possibly when I expect to wake up. With f.lux, I have to try to find a location on the globe where the sun matches my actual sleeping cycle, and hope that it stays that way (it doesn't).

I did notice that there's a new "far from the equator" setting in the latest version, but I don't understand what it does and how it's supposed to help. Just give me a schedule setting.

smnscu 13 hours ago 6 replies      
One important advantage with Night Shift is that it doesn't mess up YouTube videos. I get weird artefacts with f.lux when watching videos, and overall Night Shift seems to perform slightly better as well. I'm sad for f.lux but for now I stopped using it.

edit: I don't get the artefacts with NS but I do get the same white border on the mouse cursor after watching a video full-screen for some time

Razengan 13 hours ago 4 replies      
I think Apple may have deliberately chosen to go for a less severe difference in colors, so as to get more people onboard the general idea of colors shifting through the day, at first. Expect it to evolve in a future macOS/iOS (hopefully along with the introduction of a true dark mode.)

f.lux, while more effective, may be off-putting to most people. The medicinal orangeness was a bit sickening to me when I first tried f.lux, to the point that I didn't want to use it, though I warmed up to it later.

orthecreedence 13 hours ago 5 replies      
Sort of on-topic, I've been using Redshift <https://github.com/jonls/redshift> for years and love it (both windows and linux).
cyberferret 13 hours ago 4 replies      
I've used f.lux for years and I really like it, as I have noticed my sleep patterns have improved during that time.

However, I do think that the transition is sometimes really too quick. I can be working away, deep in 'flow', and I will alarmingly perceive the screen going darker/changing a couple of times over the evening. It almost feels like I am passing out or getting a precursor to a migraine sometimes, with the change, which is quite jarring.

The other thing is the constant annoying notifications of "You are going to be awake in 'x' hours". Well, actually, no - If I am up and coding until 2am, then chances of me being awake at 6am when you expect me to is just not on.

I also wish they would have an 'emergency awake' function, so that when I jump on to the keyboard to fix a server outage at 5am after many hours away sleeping, that it would immediately go to full brightness there and then, rather than wait until 0630 as per normal. If I am active at that time after a long break, I am NOT going back to sleep and I have to have full illumination of all those red signals on my server dashboard! :)

DCKing 6 hours ago 0 replies      
When it comes to blue light filtering, all I want is something with good defaults (for me) that works unobtrusively.

My first days of using Night Shift have worked exactly like that. I don't particularly care about configurability of the tool (I live on a pretty well supported latitude, I guess). Moreover, Night Shift presents significantly less artifacting in videos, based on my brief experience. Transitions are also far less jarring than they are with f.lux.

So yeah, I guess f.lux will be the better choice for those who really care about the details of their blue light filter. Night Shift, like LineageOS' LiveDisplay, Windows' Night Light and GNOME's Night Light, takes a simpler approach that will get you 98% of the way there in a few clicks, which should be good enough for most people.

keithkml 11 hours ago 6 replies      
Pretty sure this is all snake oil, both f.lux and Night Shift. I don't doubt that blue light affects our brains. But I see no evidence that color filter software has any impact.

If the issue is the number of blue photons per square millimeter of our retinas, why isn't it being discussed as such? This means screen brightness and distance from your face would have a much bigger impact than a color filter.

I personally think the f.lux team knows this and that's why their FAQ is devoid of any questions about effectiveness.

FWIW the only person I've known to use f.lux is an insomniac who barely ever sleeps and is always tired.

ClassyJacket 6 hours ago 0 replies      
Night Shift lets you set a custom schedule. Somehow, insanely, Flux does not. It's the only thing I want, so Night Shift automatically wins. I don't know why they're so stubborn on that issue, I would've even paid a few bucks for a "premium" version with that feature, but now Apple is eating their lunch.

Bye forever, Flux.

nimish 32 minutes ago 0 replies      
I noticed a massive battery life improvement after removing f.lux and using night shift/night light on mac and windows.

Far less stuttering as well. Whatever f.lux is doing is not worth the janky implementation.

lighttower 11 hours ago 0 replies      

>To be fair, we thought it was pretty easy after our first year making f.lux (Night Shift today looks a whole lot like our first version). We figured we'd solved the blue light problem and that there just wasn't much left to do. We couldn't have been more wrong. Every person has individual needs, and those needs are different based on your sensitivity to light, your own chronobiology (imagine early birds and night owls), your own schedule, and other factors too. Those needs change across seasons, and over your lifetime. Today our approach is different: we are working every day to understand how light affects human biology, not strictly sleep, and we are constantly applying what we learn to updates and new features for f.lux.

owenversteeg 1 hour ago 0 replies      
I've found something pretty interesting myself when using redshift. If I set the display directly to 3000K or so, it looks really weird. Same if I fade from 6700->3000K. But if I set it to 1000K and then to 3000K, it looks fine. Anybody else do this to "prepare" themselves?
jluxenberg 12 hours ago 2 replies      
If you use and love f.lux, consider donating!


rocky1138 2 hours ago 1 reply      
"Our circadian system is actually not reacting to small changes in "color". Instead, it is mostly reacting to the "amount" of light. Our eyes are extremely good at distinguishing little shades of color from each other, but this is a different system than the one that drives circadian rhythms."

Is there any data to support this?

AJ007 1 hour ago 0 replies      
For all of the squabbles between Night Shift and f.lux, both are doing a great job compared to what we had before. I am sure they will both continue to improve and become standard on all platforms.

I am a lot more concerned about street lights, which are headed in the exact wrong direction: https://www.ama-assn.org/ama-adopts-guidance-reduce-harm-hig...

joemaller1 13 hours ago 2 replies      
My life doesn't necessarily fit into a sunrise-sundown bracketed timeframe. I regularly need to postpone dimming until later in the evening, and then return to full color brightness before dawn. F.lux refuses to do this. Night Shift (at least on iOS) does.
jaxn 1 hour ago 1 reply      
Windows 10 has a similar feature called Night Light. I have it set to Sunset/Sunrise. The $49 Kindle Fire has a similar feature.

I appreciate the work f.lux did, but this is going to be a core feature of every OS now, and none of them bought their tech to do it. That is a tough spot to be in, but it is probably time to start winding the project down.

FrozenVoid 2 hours ago 0 replies      
I've just turned the blue/magenta/cyan sliders in monitor controls to 0% (blue components in images appear as black/grey pixels). Blue light damages the retina and messes up circadian cycles: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4734149/
russdill 13 hours ago 0 replies      
Gnome also has a similar feature now, night light


int_19h 8 hours ago 5 replies      
Does anyone have a good suggestion for an Android app that does blue filtering, and:

1. Doesn't require root.

2. Actually filters blue out, instead of adding red (i.e. a pure black screen should remain pure black).

The built-in feature in Nougat previews was great, until they removed it...

lucisferre 11 hours ago 2 replies      
I love f.lux but I always end up uninstalling it because of how it messes up with games when they are running. Performance tanks and I get tired of switching out and killing f.lux every time.
psiclops 13 hours ago 0 replies      
As I have my daytime f.lux setting at 5500k, I don't plan to change over. I prefer to have the blue light reduced slightly all the time
turrini 54 minutes ago 0 replies      
Or you can go to astronomy mode (Linux):

  xcalib -green .1 0 1 -alter

  xcalib -blue .1 0 1 -alter

killjoywashere 13 hours ago 2 replies      
They are both terrible compared to Quicksilver's little brother, Nocturne. Nocturne can turn a Mac full red monochrome. Coming from the military, it's amazing.
lmg643 12 hours ago 2 replies      
I must be out of step with the times as I'm surprised by the favorable comments about Night Shift.

For my eyes, the f.lux nighttime adjustment and configurability is great. At work, I use it during daytime hours as well; it helps greatly with eye strain.

Night Shift is like an introduction of the concept to a mass audience, surely it has a positive benefit for the uninitiated but there's a lot missing for folks who rely on it. Android is much better with multiple applications available to control this.

I thought it would have been a good gesture for apple to buy f.lux and get the "market leader", and some built in goodwill, as opposed to just copying them, but I guess such is life when you are the largest corporation in the world.

philliphaydon 3 hours ago 0 replies      
Are there any scientific claims to blue light? Or real studies? I tried using flux for a month and ended up with sore eyes which I don't get when I don't use flux. So I don't know if this blue thing is a legit thing or not. My flat mate says he sleeps better. But I just get sore eyes.
dharma1 13 hours ago 2 replies      
Somewhat related - any physical filters for backlit eInk readers? The leds on my kindle paperwhite are pretty blue
jeron 13 hours ago 0 replies      
Are the charts for Night Shift on MacOS? The dataset provided at the very bottom is for the iPad Pro...
jfoldager 7 hours ago 0 replies      
There is already lots of blue light in my room, when I use a computer at night. I don't see how it would help to remove all the blue light from the screen, when the room is still bathed in it. I use quite warm light, and have it even warmer in the evening, but still, the standard settings for Night Shift looks very orange to my eyes.

I would love it if Night Shift could just shift the white balance to match the surroundings. Does anyone have experience with True Tone on the 9.7-inch iPad Pro? I imagine it would work like that.

Brendinooo 13 hours ago 1 reply      
It's a good enough argument for a current user like me to keep using it, but I don't know if the benefit is tangible enough for most people (or me on a new system someday) to seek out an alternative to Night Shift.

Makes me wonder if the placebo effect would come into play here as well.

michelb 7 hours ago 0 replies      
I'd say a big plus of f.lux is that it works on my 2009 mac pro and 2011 macbook pro, while Night Shift does not.
dayaz36 8 hours ago 0 replies      
How does f.lux make money?
samsamoa 11 hours ago 0 replies      
Any hints on getting f.lux to work in sync with a Hue bulb on macOS?
Udo_Schmitz 6 hours ago 0 replies      
I tried f.lux on the Mac and was very disappointed with the results. To me it looked like a red film overlaid on the screen. Distracting and ugly. Night Shift on iOS looks much more natural.
kartickv 5 hours ago 0 replies      
How does Night Shift's reddest setting compare with Flux?
nice_byte 8 hours ago 1 reply      
I've no idea why people use this type of software. I tried it a couple times, and it just annoys the hell out of me. It makes the colors on my monitor all kinds of messed up, and doesn't have any positive effect whatsoever on the tiredness of my eyes. If anything, the effect is negative: the messed up color makes text hard to read.
Apple Pages 6.1 adds equation support using LaTeX or MathML apple.com
334 points by plg  1 day ago   59 comments top 23
comex 1 day ago 2 replies      
From the "which commands" link, it seems that they're using Blahtex to convert LaTeX to MathML:


Neat feature. I think I might get a lot of use out of it as a "lighter-weight" way to write up documents that include math, using the nice LaTeX syntax, while still keeping the document as a whole WYSIWYG (better for graphics and makes it easier to avoid going back afterward for a fixing-up step). Plus, I can edit from both my Mac and iPhone with automatic sync. (There are existing LaTeX apps for iOS that can do that too, but this seems to be a solid option.)

Edit2: Tried it out on iOS. Two limitations: equations can't wrap across lines, and inserting a new equation requires a few taps - not super cumbersome but enough to discourage a "fluent" style of frequently going in and out of math mode within a sentence. Still useful.

theCricketer 1 day ago 2 replies      

I use Dropbox Paper to take math notes, and it works great with math formulas and Latex support. You get sharable Paper documents, downloadable as MS Word docs and Dropbox paper overall feels light, fast and minimalistic.

The only things I'd improve about Paper's LaTeX support are to add some autocomplete features that make it faster to type out LaTeX, and to fix a couple of minor bugs, like incomplete LaTeX disappearing when you switch tabs.

JohnHammersley 1 day ago 1 reply      
You might also want to give Overleaf[1] a try if you're interested using LaTeX in a more WYSIWYG style -- we have a rich text mode[2] (currently in beta) that parses the LaTeX and presents it in a way that's more familiar for non-LaTeX users, which makes it particularly handy for collaborations. Of course, you can always edit the underlying LaTeX directly if you prefer :)

I'm one of the founders of Overleaf, so if you have any questions (or if you use it and have any feedback), please let me know -- it's always appreciated, thanks.

[1] https://www.overleaf.com

[2] https://www.overleaf.com/blog/81

tzs 2 hours ago 0 replies      
Note: desktop version requires 10.12 (Sierra) or later.

Looks like the end of my days as a Mac user is in sight. Apple does not support 10.12 on either of my desktop Macs (2008 Mac Pro and 2009 Mac Pro), and I don't see anything in their current desktop lineup that is an acceptable replacement.

pbnjay 1 day ago 6 replies      
I hope this is in the latest Keynote or coming very soon - it's my biggest issue right now when prepping slides for my classes.
kweinber 1 day ago 0 replies      
In my dream life, Apple decides to support the LibreOffice format and uses it as the default for Pages, Keynote, and Numbers. For any features they need added, they propose them to the standards body.

That way they become the best editor of a free format. I can't ever see them going head-to-head with Microsoft so this would really open up their marketshare.

k_bx 1 day ago 0 replies      
Slightly offtopic, but for everyone interested please also check out Typora https://www.typora.io/ . Such a great Markdown editor with support of TeX inline blocks.
mavam 1 day ago 1 reply      
Reminds me of a cooked-in version of LaTeXiT [1], a utility I find invaluable when it comes to combining figure drawing (e.g., with OmniGraffle) and LaTeX annotations of arbitrary complexity.

[1] http://chachatelier.fr/latexit

djrogers 1 day ago 1 reply      
Honestly I'm more excited about the ability to replace all fonts in a document - especially in keynote. You have no idea how often I get a presentation to deliver or tweak that has 7 different Arial and Helvetica-like fonts in it, because each person that wrote slides for it used a different font. Drives me batty!
peterarmstrong 1 day ago 0 replies      
The best thing about this is that it's an admission that some aspects of a document are better when authored as plain text.

Now, I'm of the opinion that this is actually true of the entire document, especially when it's a book. This is why we're creating Markua as an open spec: https://leanpub.com/markua/read

jacobp100 1 day ago 1 reply      
Does anybody know what they're using to render the equations? I know that WebKit has MathML support, so maybe they're using parts of that codebase? It would be neat if they released some APIs in iOS/macOS.
0xCMP 1 day ago 0 replies      
A great tool I love has this too [0]. They're an improvement over Workflowy which includes LaTeX support among other things.

I'm a very happy paying user of it although I do not use the LaTeX features very often yet I have some finance notes I took in Emacs which transferred over pretty well.

[0]: http://dynalist.io/

jmnicholson 1 day ago 0 replies      
Authorea fully supports LaTeX and MathML with a WYSIWYG interface like GDocs, Paper, or Word.


santaclaus 1 day ago 0 replies      
Is LaTeX support going to come to Messages?
murkle 1 day ago 0 replies      
re https://support.apple.com/en-gb/HT202501

What do \lvertneq, \gvertneq do? I can't find an online reference for them

nodesocket 1 day ago 1 reply      
I'm interested in the new real-time stock quotes available in Numbers. Anybody tried to use that feature?
lutusp 1 day ago 0 replies      
I'm looking forward to the day when browsers support LaTeX content without any special plugins or libraries. Until then I use server-side MathJax to support my site's technical pages. MathJax is a terrific piece of software, but its popularity is an implicit argument for a generic browser feature that does the same thing.

If that became the norm, I could switch to LaTeX right now and HN readers would see a result any mathematically literate person would hope for.

For a while Reddit supported a plugin that rendered LaTeX by way of an external server called "CodeCogs." Eventually CodeCogs realized they were being exploited with no compensating gain, so they blocked all Reddit accesses.

Here's a concise definition of pi --

  \displaystyle \pi = 2 \int_{-1}^1 \sqrt{1-x^2} dx

-- I mean, if only you could see it. Here's how e is formally defined:

  \displaystyle e = \lim_{n\to\infty} (1+\frac{1}{n})^n

I've been suggesting this for longer than I can remember. Meanwhile, here's my online LaTeX editor:


plg 1 day ago 1 reply      
doesn't work on online version of Pages (iCloud), no way to insert an equation

equations created on desktop version show up in online-version as low-res jaggedy bitmaps, un-editable and un-pretty

so much for vertically integrated "just works" approach


cmurf 1 day ago 1 reply      
Is it still using a proprietary format that isn't documented and nothing else can read; so that there is no such thing as "one day I decided to use something different and still want to read my documents"?
leecarraher 1 day ago 0 replies      
latex in a document editor, holy smokes batman, how do they come up with these crazy ideas!
cozzyd 1 day ago 3 replies      
I believe both Scribus and LibreOffice have had ways of doing embedded LaTeX for at least 10 years.
felippee 1 day ago 1 reply      
Took them only, what, 10 years? Better late than never, I suppose.
How to Be Someone People Love to Talk To time.com
413 points by knrz  2 days ago   197 comments top 23
onmobiletemp 2 days ago 5 replies      
I started paying attention to people and discovered a lot of this on my own over the course of three years. At some point I realized that whenever I talked to someone their eyes would glaze over and their face would go stony. Then they'd talk to someone else and their eyes would become focused and their face alive and animated. Laughter. I figured out this was because they didn't care about what I was saying or about my opinions. So I tried various things and looked at their eyes. Sometimes their eyes would become alive again and I could tell they cared. Slowly you learn what people want to hear. And it's so true about smiling and body language: people feel uncomfortable if you don't project wellbeing. What you need to understand is that there is no logic in any of it. Humans are machines, and the algorithms that they employ for attention and emotion are surprisingly uniform and very unintuitive for autists like me and you. Don't worry about the logic of what's happening, just think of what their algorithm is doing. It's very difficult because you can't verify what people are thinking, you can't debug it and you can't start over -- you have to guess a lot. Overall people want to see big smiles and confident body posture. If you are slouched over, people don't like it. If you stand up straight you will be amazed at how differently you are perceived. But it all has to be genuine. If you're trying to manipulate and understand people in a clinical way you will fail. All you need is a genuine desire to bond with people and the patience to pay attention to what seems to work and what doesn't.

I should also add that for me, and probably for most people like me, the process of figuring out what people like and don't like is also partly a process of self-discovery. I'm not the kind of person that's in touch with himself. Discovering how your words impact other people will also teach you about how your mind, consciously or otherwise, reacts emotionally to the words of others. Overall I've been genuinely excited to learn about myself and others and use that wisdom to help enjoy the presence of other people. For me it's been a very productive process of growth and discovery. I think framing the problem of interpersonal relations within that context, instead of the cringey, manipulative context of internet social tips, really helped.

xor1 2 days ago 13 replies      
I think the article severely downplays the importance of attractiveness. If the other party finds you attractive, the bar is lowered to the point of you simply being normal/average in terms of intelligence, wit, and whatever else you want to include in your definition of what makes a person "interesting". You basically need to be a vapid idiot to give anyone a bad impression as an attractive person.

It's a huge factor. I've started putting some of my big programmer bucks into improving my appearance before I hit 30, starting with braces (family couldn't afford them as a kid), eyelid surgery to fix some mild ptosis, and a nose job. I've also started using sunscreen and moisturizer on my face on a regular basis.

The past few years have made me realize that your appearance only becomes more important as you age and progress in a white-collar career -- not less, as I was led to believe as a child. This is especially disheartening to realize while working in CA/NYC tech, which has always been billed as among the most meritocratic and progressive of sectors. Getting into shape only takes you so far. I consider myself average now, but I want to be hot.

tyingq 2 days ago 7 replies      
The best bit of advice in the article:

The right question is How do I get them talking about themselves?

I've noticed that even if the only thing you do is ask someone their opinion, and listen attentively, there is some sort of distortion field effect.

They will often later recall you as knowledgeable, insightful, etc...even though you never did anything but ask questions.

Kiro 2 days ago 3 replies      
I always try to play the game of "don't say a thing about yourself until someone asks" and it always works wonders. Everyone loves me since all I do is ask questions, giving them an opportunity to speak about themselves. It seriously makes me hate people though, since so few actually ask anything back.
non_sequitur 2 days ago 1 reply      
I learned a while ago that just asking questions isn't enough - sometimes people don't want to talk, or are really boring, there's too big a group to focus on one person, or just constantly interrogating a person gets weird, etc. So you should have some good stories in your back pocket as well. If you think about the most popular people you know, they aren't well received in social settings because they pepper everyone with questions - they're usually funny, chatty, quick witted, and can either carry or let someone else carry a conversation. Be like that guy/gal, not the one that can only ask questions.
superasn 2 days ago 1 reply      
Oh, internet and self-help gurus. Why do you have to be the "best" at everything and get the most out of stuff?

It's like those things they teach you: before giving a bad review, first start with the good points, then add a "but". Sounds great in theory but just absurd when you realize someone is doing it to you on purpose.

You know you can do all this and create great rapport and win the title for best conversationalist but if this is not your nature you still won't have fun nor create that connection which you can have by just being you, with all your flaws, moles and warts. If you're not a total asshole, people like you anyway.

Just imagine if your friends were like this. Trying to be the best conversationalist they can be with you instead of being the usual silly dickheads they generally are..

nunez 2 days ago 2 replies      
Skimming through the article, I observed that they missed the most important step one must do to get better at talking to people:

Talk to people.
One doesn't learn how to write code without writing code. One also doesn't learn how to tie their shoes without actually tying shoes. So it follows that one doesn't learn how to become good at people without talking to people.

You've gotta go out! And I'm not talking about grabbing a drink and staying on the sidelines or going to that conference and being glued to your Mac the entire time. You've gotta approach people, and you have to get rejected.

People will walk away. People will ask to be excused. This stuff hurts, but just like a startup, you treat the mistakes as learnings and try again next time. It helps a lot to have a buddy that will help you through the process and give you feedback, since learning on your own (like I did) generally sucks.

How did I learn how to talk to people? I approached hundreds of women to start conversations with them during the morning rush and on the street. Nothing deep; usually stuff about food. My dating skills improved slightly, but my conversation skills went through the roof.

There are other things to keep in mind, too. People care way more about appearance than they let on, so dressing well and staying healthy go a long way here. Body language is also something that people look out for without knowing that they're looking out for it. Fixing posture goes a long way towards fixing this too.

dkarapetyan 2 days ago 1 reply      
There was that one time I observed a peculiar quality about a certain CEO. No matter what he was talking about it somehow would always circle back to talking about whatever company he was currently at and the conversation would always end with a joke and hearty laugh for all involved. This happened consistently enough that I thought it was a pre-determined act on his part.

Once I realized he was always practicing, I kinda stopped talking to the guy because there was never any genuine interaction. He was always on the job and he was always practicing selling. Every conversation was just another opportunity for him to practice his messaging. I dubbed this mode of interacting and talking "ceoesque".

jokoon 2 days ago 0 replies      
I started relationships at the age of 24, and I was really impressed by how easy it was for me. I always thought I was some kind of nerd loser, which I still feel I am.

During all my life my method was always to slowly ask personal questions and "open up" people, let them talk about themselves, their job and skills.

What people love is to let them talk about their problems, without criticizing them about it. I think I learned that from therapy. Once you do that, people are hooked and it's a pretty good way to learn about them. It's not manipulative as long as you don't exploit it against them or for your interest, which is really evil (and they will notice it very quickly).

Then of course, you should always open up yourself if the person opened up to you, and that can be difficult, generally you should talk about yourself without necessarily waiting for the other person to ask.

I always felt those things were kind of manipulative, but I asked and it seems they're not.

vinceguidry 2 days ago 1 reply      
Everything about this is highly contextual and varies across cultures. Smiling in some situations makes you look powerful, in many others it makes you look weak. Being very animated can make you look carefree in some situations and just wild in others.

The more time I spend in cultures I didn't grow up in, the more convinced I become that there just aren't any universals here. Any attempt to find one is an attempt to generalize over all human behavior, and the effort will either be wrong, since there will be some cultures or situations where the rule doesn't hold, or it'll be useless, essentially telling you what you already know.

kovek 2 days ago 0 replies      
A lot of the discussion here reminds me of the book How to Win Friends and Influence People. I recommend it. It's a short and easy read, since there are no technical terms, and it's full of good examples that show what works and what doesn't in communicating with others.

Also, check out the list of advice from this book that is probably online somewhere. I think it's important to read the examples in the book to understand the list.

brownbat 2 days ago 0 replies      
> How can you strategically make a good impression? From the outset, frame the conversation with a few well-rehearsed sentences regarding how you want to be perceived.

Klosterman comes to the same realization in Sex, Drugs, and Cocoa Puffs, though from an unlikely angle--the dawn of reality television.

On The Real World, producers had no time to explain anyone's personality in depth, so they boiled each housemate down to a simple stereotype and selectively edited to play up that caricature. On the one hand, it was a trick of production that was massively distorting and harmful to several (most?) of the housemates.

On the other hand, we're all just like the producers when recalling our own interactions.

Like a 20-minute episode, there's just too much ground to cover to get a perfect reproduction of any person's life in a first meeting. A short working draft is the best anyone can hope for. If you help people form that, you can nudge it in a positive direction while also making yourself more memorable.

mythrwy 2 days ago 0 replies      
In some ways being a person people love to talk to is a burden. It takes time. Sometimes it's worth it. Quite often (and this sounds cold but it's true) it isn't.

Other people do make life good though and it's certainly a valuable skill. Just, it comes at a price.

AndrewOMartin 2 days ago 4 replies      
I've been the recipient of active listening on more than one occasion and it's made me want to tear that person's lungs out through their mouth. It feels like you're the victim of a corrupt bureaucrat's evil stalling tactic.
hoodoof 2 days ago 6 replies      
I know a small number of very charismatic people. It seems to just be a natural function of who they are.

I have always wondered if there was a way to "become charismatic".

YCode 2 days ago 4 replies      
Somewhat related...

Dale Carnegie, this article, et al. describe various methods to be liked, listened to, etc. that all basically revolve around the idea that you should make the conversation about the other person and their needs. Even smiling is a small step away from outright saying you like them and are willing to listen.

One thing I've found though is that this can be mentally exhausting. It starts to feel like the people around you are starving for attention and suddenly they've found an oasis of it.

But over even a short time the entropy of being on the giving end of nearly every interaction with someone creates a sort of mental energy vacuum.

Certainly I can't be the only one who has experienced this -- how do you maintain your energy or sense of self when you are consistently trying to meet other people's needs?

jackskell 16 hours ago 0 replies      
As an introvert and someone who cares about my privacy and doesn't like to reveal personal details, I find it easier to simply keep the conversational ball in the other person's court.

This seems to equal being a good listener.

The trick is to learn to close the conversation when you are done with it, and avoid useless prattle from the other person.

bitL 2 days ago 13 replies      
Am I the only person that feels "hacking other people" for my own benefit is wrong?
EagleVega 1 day ago 0 replies      
I feel like this article is outlining how to fake a lot of things... It emphasizes rote lines. It feels shallow. However, I think it hints at what it takes to be a good conversationalist: a deep and genuine interest in people. That coupled with a broad knowledge allows you to find what someone is interested in and learn from their perspective while adding some to theirs. This is the core of solid communication and conversation.
hamandcheese 2 days ago 0 replies      
"Ask people questions since people love talking about themselves" is common conversational advice I hear.

In general I agree, but it's a bit disheartening when you realize that many people are so happy to talk about themselves that they never bother to ask you about yourself.

nommm-nommm 2 days ago 0 replies      
>Suspend your ego. Avoid correcting people

This is actually an important thing to do and difficult for many of us "hacker" types that think more analytically.

VeronicaHadley 2 days ago 0 replies      
This one is the best of my recent readings. Communication is one of the most important aspects of being human, and from a business standpoint it is always better to have a good communicator who can negotiate properly. I just want to add my words to the 'Silence' section: in my view, silence can sometimes be better than words.
virtualritz 2 days ago 0 replies      
Is it just me or is this just a hidden advertisement for the book "It's Not All About "Me": The Top Ten Techniques for Building Quick Rapport with Anyone" by Robin Dreeke [1]?

The book is mentioned (and linked) several times in the article and in articles the article itself cites as sources (and links to).


Why is this little construction crane illegal in New York City? (2016) crainsnewyork.com
398 points by oftenwrong  1 day ago   295 comments top 31
dzdt 1 day ago 4 replies      
The New York Department of Buildings response to this article (June 2016) :

Skypicker presented the department with plans for a small, truck-mounted crane... However, when we inspected the Skypicker in use at a construction site, we found something significantly different: The crane was attached to a building slab, and appeared to be a hodgepodge of parts from other cranes, mounted on a homemade, untested base.


And the inventor shoots back:

The executive director of the [Department of Buildings] Cranes and Derricks Unit when the Skypicker was approved emailed me that [Department of Buildings Commissioner] Chandler confused the Skypicker revocation with a ruling about an entirely different crane... He's the one that's misinformed.


jakelarkin 1 day ago 13 replies      
This is sort of like how, when the waterless urinal was first commercialized, the plumbers union blocked it from getting approved in the general building code. Eventually a compromise was reached that involved requiring a plumber to run an (unutilized) water line to every waterless install.

Unions for skilled, high-pay jobs in highly regulated industries really extract a lot of questionable value from society.

Houshalter 1 day ago 4 replies      
The first subway in New York City cost about $100 million per kilometer in 1900 (adjusted for inflation of course). The new subway line being opened this year cost $2.2 billion per kilometer. This is despite over a century of improvements in technology.

Here's an interesting HN post, "Forty Percent of the Buildings in Manhattan Could Not Be Built Today" https://news.ycombinator.com/item?id=11736696 In that article is a statistic that three fourths of the square footage of Manhattan was built between 1900 and 1930. If a freak hurricane somehow destroyed New York, we would be literally unable to rebuild it. For both cost and legal reasons.

Someday soon we might be like the later generations of Romans. Living amongst the great crumbling structures built by our ancestors. Wondering how on Earth they could have achieved such things.

xienze 1 day ago 1 reply      
> The International Union of Operating Engineers Local 14-14B lays out the wage floor for its operators at $73.91 per hour. That's $150,000 a year before overtime, plus benefits equal to $32.50 an hour. When the operator is behind the controls of a tower crane, he gets a $2-an-hour bump. On weekends, when cranes are moved, pay doubles. With overtime, many union members earn half a million dollars a year.

And they don't have to take their work home with them! Wow, I'm in the wrong line of work.

karlkatzke 1 day ago 2 replies      
Question: Why didn't the guy just pack up the crane and take it elsewhere? The issues that make tower cranes unsafe and expensive aren't unique to New York City, and if Austin, TX is any indication, there has been a worldwide shortage of cranes due to construction activity.

Is there something about the economics or the licensing that makes this crane only work in New York City?

nkrisc 1 day ago 4 replies      
Unions have a purpose. Suppressing innovation and competition to protect their gravy train monopoly should not be one of them.
rsync 1 day ago 3 replies      
"In April 2014, City Councilman Ben Kallos, who received $2,500 over two elections from Local 14, introduced new legislation to prohibit climber cranes like the Skypicker from being classified as mobile cranes. The rule would treat the Skypicker like a tower crane, requiring it to carry more insurance and have a Class A operator."

Wow. So, for the price of a laptop I could influence NYC politics and have a councilman of my very own ?

I don't even live in New York and it's still tempting, if only for the lulz.

Do you think Mr. Kallos would pull me in a rickshaw when I visit ? Or would I need an extra fifty bucks ?

tmnvix 1 day ago 0 replies      
They do a really good job of painting the union as the culprit here without any evidence whatsoever. The first comment on the article suggests a competing crane company was responsible. I think that is just as likely.

Anyhow, as others in this thread have pointed out, the skycranes are back in business.

libdong 1 day ago 0 replies      
> Mooney was told his crane didn't have to go through the long approval process required for newly designed large cranes and that it could be approved as a mobile crane.

There has to be some kind of miscommunication going on here. By what definition is his crane mobile? Because it's easier to disassemble and load onto a truck than a tower crane? While the DoB's classifications for cranes seem arbitrary, I can't say I blame them for not considering it mobile.

dmix 1 day ago 1 reply      
Too bad they weren't as popular, well known, or well funded as Uber/Lyft to deal with this type of anti-progress nonsense.

I've been meaning for a long time to start a site to track these events of businesses getting sidelined over regulatory hurdles - similar to those sites who track police shootings. Collecting data is the best way to bring light to prevalent but maligned issues like these. I get the impression there are hundreds of these small stories that could have had an impact on the marketplace but get crushed by regulatory/incumbent barriers.

We only ever hear about it when a big company runs into these issues. But for the most part it's usually people who are already taking a huge financial risk to do an entrepreneurial activity and they don't have the runway/capital to deal with the problem, let alone invest the time to draw attention to the issue in the press or politically.

pkamb 1 day ago 2 replies      
> During the many hours of downtime on job sites, perched hundreds of feet above the street, he would pull out a journal and sketch new ideas for cranes, hoping to solve some of the machines' fundamental problems.

Per last week's discussion, who owns those sketches? https://news.ycombinator.com/item?id=13921433

tbirrell 1 day ago 2 replies      
tl;dr - It's like a crawler (a mobile, street-level crane with a boom), but you put it on top of something already there. Since it's on top of something, it's not a "mobile" crane and couldn't get certified as one; however, it also didn't fit the tower crane (tall, lifts things) classification. So it got stuck between two classifications, which made it too hard for construction companies to bother including it in their building proposals (the only way NYC would allow anyone to use it... maybe).
valuearb 1 day ago 1 reply      
This is why it's so important to have government regulation, so that an interested party can use their political connections to guarantee themselves monopoly profits.
schoen 1 day ago 1 reply      
This article is from almost a year ago; any update on what's happened on this issue since then?
fr0sty 1 day ago 2 replies      
I wonder whether the inventor would have better luck shipping his cranes to another state where the regulatory barriers would not be so steep. Lots of mid/high-rise buildings going up in places that are not NYC...
matthewmcg 1 day ago 0 replies      
"Crain's Cranes of New York"
cyphunk 18 hours ago 0 replies      
Unions are great. It's also great that the whole world isn't run by unions. Meaning, I don't understand why a less union-friendly city in the US, China, the UAE, anywhere, doesn't pick up his equipment and run with it. After years of drug testing, perhaps other cities like NYC will be ready for human trials.
mcguire 1 day ago 0 replies      
Correct me if I'm wrong (the article is now behind a paywall for me), but the skypicker was fast-track approved under a Bloomberg program, then unapproved under a new administration, perhaps because it hadn't been tested properly and was miscategorized.

Given that it is a boom crane, normally mounted on a truck, with a custom mounting and lifting system (the kind of thing that has peeled off of buildings before), I'm not seeing the big deal.


wilwade 1 day ago 0 replies      
Why is this little construction crane not being used in other cities?

That's the oddest piece of the story to me. There are lots of cities, including mid-size cities, where these buildings are built. If the cost of a tower crane is so much more than this one, then the cost of transport should be minimal.

akeck 1 day ago 0 replies      
Judging from a recent Nova I watched on current advancements in nuclear technology, he should pack up his cranes and move shop to China. China seems more willing to try industry changing tech these days.
jrochkind1 1 day ago 0 replies      
Why not just hire a "Class A operator" to operate it? It'll still be huge cost savings, right? And the union will stop stonewalling them, and they can go into business.
tdburn 1 day ago 1 reply      
This machine seems brilliant.

Those super tower cranes are legit massacre machines. Seeing one of those cranes go down was disturbing

dade_ 1 day ago 0 replies      
He should try peddling it in Toronto. There are over 100 high-rises under construction in the area.
paddy_m 1 day ago 0 replies      
I found a video https://youtu.be/5_PjCwMEdSY. It looks like the crane elevates itself before a new concrete floor has forms built and is poured.
zyztem 1 day ago 1 reply      
How is operating this machine different from pretty standard small derrick cranes, like the Liebherr DR? Or even spider cranes (Maeda & others)?
Jgrubb 1 day ago 0 replies      
I'm sorry, but I steadfastly refuse to complain about the unions. All you NYC based software developers probably have no idea to what extent the overall high standard of living in the NE is largely due to so many laborers being able to make a good living doing something that isn't tech.
a_c 1 day ago 0 replies      
Many cities need his cranes. Why not relocating the business?
jbigelow76 1 day ago 1 reply      
Dumb admission of the day: the domain name, crainsnewyork.com, combined with a story about cranes caused a temporary brain short circuit. I thought "holy shit, this is a really impressive website for NY crane wonks".
gist 1 day ago 3 replies      
This is interesting. That said for some reason I think that posts like this stray far from the purpose of a site like this as it is more of a general interest and not specific to startups or technology.

I wonder if PG's 'anything that gratifies one's intellectual curiosity' conflicts with his 'If they'd cover it on TV news' (where I assume TV news is actually broader than just what would be on TV news and includes items of a more general audience business interest).

What's the issue exactly? It crowds out and distracts from stories that might be of more value and relevance than simply something that is interesting.

Maybe a new way to describe it would be 'of interest to hackers and relevant to what they do in a strong way'.

cerved 1 day ago 0 replies      
Does this little crane also construct the paywall ?
wodencafe 1 day ago 0 replies      
Just an exposé of Corrupt Bureaucracy.
Bay Area professionals indicted for H-1B visa fraud mercurynews.com
330 points by prostoalex  3 days ago   187 comments top 29
FesterCluck 3 days ago 3 replies      
I've personally witnessed the hiring of a contractor from India at a small development company simply to save on funds. There were plenty of skilled workers in our pool to fill the position, the owner simply didn't want to pay the price.

Weeks into his employment the developer came to me and asked if there was anyone he could report work complaints to. I gave him contact information for the state's employment agency. It took another week before I realized that he needed to be informed that the state agency protects him whether he's a citizen or not, and that he wasn't trying to report a former employer.

I know many talented people who have come to the US on H-1B clean and have been successful. I, too, do not blame the workers. The employers are abusing the system, and if it continues there must be much stricter oversight.

Velox 3 days ago 6 replies      
This thread is great. Whenever H-1B threads come up on HN, I prepare for the onslaught of what can only be described as hatred. It's the view of many people on this site that all H-1B holders are "cheaters" of the system, and they don't deserve to get a job in the US. As a former H-1B holder, the threads always made me feel guilty, and feel bad about wanting to work for a well regarded, successful company. It's nice, and refreshing, to see that it's not the view of everyone, and that H-1B holders are equals and deserve the same rights as everyone else.
ComodoHacker 3 days ago 9 replies      
I wonder why people here praise free competition on a global market as a primary value when they talk about their products and services, but support competition limiting when it comes to the labor market in their own country.
virtuabhi 3 days ago 3 replies      
In the threads where we discuss H1B visa holders taking American jobs, can we also consider that American companies are selling their products in countries from where H1B workers come from?

Android's biggest growth region is India. Why then shouldn't Indians work at Google? Maybe Google should move its headquarters from MV to Bangalore. Alphabet can remain in the US.

Or take another US company, say Pepsi, which outsources its IT work to an Indian company, Infosys. And everyone in India is drinking Pepsi. Why is it wrong that IT work at Pepsi is done by Infosys? Should India ban Pepsi because Pepsi Inc. has more non-Indians than Indians?

I am not surprised that EU is actively fining US companies like Google and Apple, and China does not allow US companies in the first place. And if xenophobic and protectionist polices continue in US, then I am afraid that countries like India will have to do the same like EU or China.

pm90 3 days ago 1 reply      
From the article:

> The indictment charges that from 2010 to 2016, Dynasoft petitioned to place workers at Stanford University, Cisco and Brocade, but the employers had no intention of receiving the foreign workers named on the applications.

Should be noted that this is only the most brazen, harebrained scheme that has been uncovered and is being prosecuted. Not that it's not important; but it may be more useful overall to go after the Big Co's that are doing everything technically legally but are still using the H-1B in ways it was not meant to be used.

Edit: Also just realized, when you say "Bay Area Tech Executives" it almost sounds like you're talking about Larry Page; but these "executives" were only officers of a shell company (seemingly) meant exclusively for committing visa fraud.

throw2bit 2 days ago 1 reply      
I am on an H1B, but my job in the US is being threatened by fake H1Bs from India who are willing to work for salaries less than mine, particularly from the state of Andhra Pradesh in India. There are many one-room consultancies all over the US which run fake payrolls and even apply for something called "future green cards" for people who are willing to pay money. These Andhra consultancies can be found in abundance in New Jersey (Edison especially) and on the West Coast. There is even a temple in Andhra Pradesh where you can go and pray to get selected in the H1B lottery.

Even though I was a legitimate H1B worker, when the rules become strict to counter these fake H1Bs, I might also get affected, so I moved to Canada with Express Entry, which is based on skills.

Fake H1Bs cannot fake IELTS exam and Canada does thorough verification and police clearances. So that its Express Entry system is not abused like H1B system of US.

throw2bit 2 days ago 0 replies      
Another typical scheme - There are many Indian IT managers who got green cards around the Y2K rush. Now they are settled US citizens working in big corporations' IT depts. They set up these one-room consultancies as a side business, and because of their strong ties to India, they source unskilled graduates from India through fraudulent H1B satellite "offices" (many of these fraudulent satellite offices are in Bangalore and Hyderabad). Since H1B is a lottery, no scrutiny of a candidate's skills is done by USCIS; they take the applicant's "word" for it.

Now, when the big corporations have IT openings, these managers won't publish the openings on job portals. They recruit these H1Bs from their own consultancies at a high billing rate, indirectly making a profit. They also take a cut of each H1B's salary on a monthly basis as a "fee" for bringing H1Bs to the US.

deepnotderp 3 days ago 4 replies      
It's annoying that SV tech companies get a bad rap over the H-1B even though the real people abusing these visas are clearly outsourcing firms like Tata and Infosys.
writer77 3 days ago 3 replies      
Beyond this, there's an entire system that has undermined American software engineers. Schools are incentivized by the ridiculous out-of-state/country fees they get when they accept international students into their master's programs, regardless of a student's actual abilities or whether a three-year Indian/Chinese/U.K. bachelor's sufficiently prepares them. These people always graduate, because money. Corporations, who love hiring people who can't leave them for another employer, then gobble them up legally by requiring a master's degree in the job description. And once they gain a big enough foothold in a company, the Good Ole Boy Club effect happens and they mostly hire their own.
NetStrikeForce 3 days ago 2 replies      
The problem with H1-B's is what in Spain we call "ni chicha ni limoná" - which means something half-baked, or halfway between being A and B.

H1-B seems to increase competition among workers, but in fact it does not, because H1-B workers are captive workers. They don't enjoy the same rights as their local counterparts. They can't leave the company and start applying for other jobs.

This leads to abuse from businesses.

umbs 2 days ago 0 replies      
I don't think the H1-B abuse is straightforward. IMO, many institutions (private and government) are deeply entangled in this.

1. There are many poor universities promising MS degrees in CS (here in California, there's Silicon Valley University, ITU etc). Not casting aspersions, but I doubt the quality of education from them.

2. Students can't get funding/scholarship. They tend to do odd jobs to make ends meet (many are illegal as students can't work outside of campus). They get in to severe debt.

3. Once they get a degree, clock is ticking to get a H1B. Note that recent OPT extensions give them breathing time. But nonetheless, they must get H1B to stay in country.

4. This is where one room consultancies come in to picture. They sponsor H1B and legally allow staying in US.

5. Students start working at lower end of pay spectrum. A friend of mine is willing to work for $20 - $25/hour (after commissions to consultancies) even after 10+ years of experience. The friend is in web application testing.

6. Sticking with consulting firms, it takes long time to break the "low income" cycle as projects are short lived, pay is low and job insecurity is high.

So, #3 and #4 are serving each other. Private universities get income from students. US consulates in Asia give visas with, probably, tax dollars in mind. I even suspect private universities have some sort of lobbying that allows students from Asian (or other) countries to come for education.

So, my point is, this H1-B visa thing is quite deep and goes way back to universities and students coming for education.

anjc 3 days ago 1 reply      
Every time a thread comes up on whether H-1B visa fraud exists, there's a cohort of posters who maintain that they've never seen it and that people who say it happens are just racist and xenophobic.

Remember this thread, and ones like it.

mavelikara 3 days ago 0 replies      
Delighted to see more of these parasites getting indicted. Here they are plotting to keep more of their employees enslaved in long GC backlogs https://youtu.be/lj1bHpPvSuE
hrshtr 3 days ago 1 reply      
In the Bay Area there are quite a handful of such firms which fake a person's resume and help them find a job. What surprises me is that employers don't catch the difference in experience while interviewing, and a bunch of positions get filled. These firms file H1Bs showing more experience than a person actually has and make good money out of such schemes :(
org3432 3 days ago 0 replies      
Other than anecdotal stories, are there stats from a reputable source on how widespread this is and what the real impact is?
s0me0ne 1 day ago 0 replies      
Now if we could get companies on intern fraud, that would be great. Interns are not legally supposed to take jobs from real workers, be unpaid (unless you are a non-profit), or be used for real work right away (like things that might be used 9 months from now). The main purpose is to teach them, not to use them for cheap/free labor.
faragon 3 days ago 0 replies      
Is there any study analyzing performance between H-1B tech workers and remote tech workers? (i.e. how being in the same building vs. working in a different time zone affects planning and performance)
diogenescynic 3 days ago 1 reply      
They also need to start going after the lawyers at the immigration firms who knowingly assist in fraud and even coach the applicants how to respond to USCIS questions. There is rampant fraud going on. I've seen it firsthand and know dozens of others in the industry who have as well.
caltrain 1 day ago 0 replies      
Unrelated: Elon Musk was on H-1B before Paypal days.
bobosha 3 days ago 0 replies      
I know what you are going to say: "There are plenty of IT jobs, just not enough at the salaries us American programmers feel entitled to."
elastic_church 3 days ago 0 replies      
The charge alone will deter alottttt

If you think rumors spread fast in the states, wait till you see asymmetric information spread through India

known 3 days ago 0 replies      
It's even worse https://qz.com/889524
rattray 3 days ago 1 reply      
The title is a bit sensationalist and misleading; "Bay Area staffing executive indicted for H-1B visa fraud" would be more accurate.

One of the two people indicted is the CEO of a staffing company ("Dynasoft Synergy" in Fremont); the other has worked at a couple companies including Cisco, not explicitly in an executive capacity.

tsunamifury 3 days ago 1 reply      
Auto plays an ad at full volume. -- closed.
notliketherest 3 days ago 1 reply      
"Dynasoft Synergys" did anyone else LOL
Animats 3 days ago 2 replies      
This is progress. Give a copy of this to your employer if they start talking about replacing you with an H1-B.
How Subtle Class Cues Can Backfire on a Resume (2016) hbr.org
414 points by apsec112  3 days ago   301 comments top 30
hn_throwaway_99 3 days ago 8 replies      
Great study. I think this perfectly underscores the concept of what "privilege" is, in a scientifically robust way.

I have a worry about one of the implied conclusions, though. The high-status women were clearly discriminated against based solely on gender. However (and trying to tread carefully here), it is at least possible that a high status woman would be more likely to leave for family reasons than a high status man. That doesn't make the discrimination any better, but it also means the employers aren't necessarily acting economically irrationally (of course, there is also the chicken-and-egg problem, in that these high-status women might be more likely to take up a domestic role because they're being discriminated against in the first place). I say this not to give the employers a pass, but to suggest that any real, durable solution to the discrimination shouldn't automatically assume those social factors are imaginary.

SilverSlash 3 days ago 4 replies      
I don't understand one thing, though: why doesn't she talk about the fact that, according to their own survey, lower-class women were also 5 times more likely to get a callback than lower-class men?

So while upper class men have it the best, lower class men have it the worst. But the author seems to be ignoring this entirely.

jzwinck 3 days ago 3 replies      
Sailing is a very stereotypical rich man's sport, and track & field is perfect for the poor (low equipment and facilities costs). So it's no surprise the research used these. But they are saddled with a confounding factor: participant age.

The person reviewing your resume will likely be over 30 years old, maybe over 40. And they probably have more money than you. This makes it more likely that they participate in sailing or golf and less likely that they do track & field.

You may benefit from sharing an interest with your hiring manager or recruiter, and maybe it just happens they like things enjoyed by adults with money. Rather than judging you because you like cheap stuff.

A rich man's sport that doesn't favor older folks so much is crew (rowing). That would have been a better choice than sailing IMO.

jimmyswimmy 3 days ago 1 reply      
The clues may be less subtle and still affect employers' decisions. I've read thousands of resumes and I always get frustrated when I can read more into a resume than may have been intended. Membership in the "hundred black men" or the "Aidan American student organization" provides me with information I would prefer not to know. Ideally, when I review a resume I only want to know whether or not I think you might be qualified and worth spending time on a phone call. Providing that information makes it harder for me to pretend that I am hiring blindly.

The idea that there are even more subtle clues is fascinating. When hiring engineers, such clues have remained entirely subliminal to me. There must be some but honestly it wouldn't have occurred to me that there is a class difference between those interested in sailing and those who like track and field. I would probably guess that a track athlete would get along better in my company. Perhaps we are just low class.

imjustsaying 3 days ago 2 replies      
This makes total sense. If you're high class, you're going to yield more business for the firm if for no other reasons than deep social connections to people who can afford to buy the firm's product.

If you're a woman, statistically you're more likely to work for the firm for a brief time before retiring to be a wife in your late 20s or early 30s - a huge sunk cost for an elite firm that invests heavily in its employees.

If the firms were outright discriminating against someone unjustly due to some kind of shadowy and insidious patriarchy or class hierarchy preservation desire, they would get crushed in the free market for making systematically wrong decisions. But given that they're the top 5% law firms, it looks like their heuristics are correct - high class males are more often than not going to make them a ton more money than other groups.

999natas 3 days ago 2 replies      
I don't see any subtle cues here. An important skill for a lawyer is to know how to present facts in the most favorable light. Take the study's James Clark. He won a "University award for outstanding athletes on financial aid", but it would be just as accurate for him to say simply that he won a "University award for outstanding athletes". Why not mention financial aid? Because it's not something that makes him a better lawyer. It doesn't serve the purpose of the resume. I can just see someone reading the resume and imagining all the dumb things James would write in a court filing and what the result to the client would be.
emodendroket 3 days ago 1 reply      
I'd posit that young tech companies have similar problems, even if the particulars are different than law firms.
utnick 2 days ago 0 replies      
If I received a resume from a collegiate track person, I would probably google them out of curiosity to see how fast they were. In this case I would realize it's a fake name and resume; I wonder if that had any effect on the results.
MichaelMoser123 3 days ago 4 replies      
The good thing about tech is that the professional interview is more important than the HR interview. However bad tech interviews can be, non-technical professions have it worse.

Why do CVs even have an 'extracurricular activities'/'personal interests' section? (Asking because I never had one in my CV.) Is this exclusive to first-job applicants? Another question: is having no awards better than having 'low class' awards?

(Ah, got it: they are looking for signs of a first-impression bias - these biases are likely to play a role during the interview process, given that they exist.)

dllthomas 2 days ago 0 replies      
Concerns I have, quite possibly addressed in the actual research but seemingly not in an admittedly fast read of the article:

The advice seems to be "don't include things that hint you might be low class (in some situations, at least)" - but did they actually include that data point in their analysis? Maybe a lack of class clues is treated as lower class as well.

Second, the athletic component of the "lower-class combo" involves competing against a smaller pool of other students, and might rightly be seen as less impressive. If you're in the top 10 of all students, you're clearly also in the top 10 of those on financial aid.

Third, pretty much all of the "lower-class combo" involves quite a few more words. I'd be a little surprised if that had a big impact, but I sometimes find myself surprised by the weird shit we react to.

brohoolio 3 days ago 3 replies      
I think the differences between gender are fascinating.

Upper class women are not selected for interviews because there is a perception that they won't be as committed, implying they might get married and stay home with the kids.

garfieldnate 3 days ago 0 replies      
Pretty fascinating finding! Interesting related book if anyone is interested: Hillbilly Elegy. It's a personal memoir that extensively discusses the struggles of working class whites, particularly Scots-Irish/hillbillies. It helped me understand our last election better, as well.
Clubber 3 days ago 1 reply      
So, if I pretend to be interested in polo or golf, my chances of getting a better salary increase. Check.
tjalfi 3 days ago 0 replies      
rafinha 3 days ago 0 replies      
Maybe interviewers bias resume selection toward their own profiles. Because most of them come from elite families, mostly people from elite families get interviewed.
mariodiana 2 days ago 0 replies      
Just to add a bit of context, here's a NY Times article from 2005: "Many Women at Elite Colleges Set Career Path to Motherhood."


Among the well-heeled, the MRS degree is not quite dead.

wellpast 3 days ago 0 replies      
If I saw "University athletics award" on one resume and "University athletics award for someone on financial aid" on another, I am going to be thinking less "one of these persons is lower-class" and more "Why the heck is the latter one almost going out of its way to be more wordy?"

I would find the former more succinct and might intuit that this person was better at expressing the essence of information. I'm not saying there isn't bias, but I'm not convinced that you can distill the conclusion to purely class. People are complex, and any attempt to fake a resume is going to trigger spidey senses. Again, I'm sure class bias is there; I'm just questioning this study's approach.

VLM 3 days ago 1 reply      
Sometimes the most interesting data point is right out in the open and nobody notices. The author thinks they understand what desirable is, and wrote a paper about gender and class WRT those closely held beliefs that I'm sure is very interesting. However, the truly interesting data point is that almost nobody is getting callbacks, with failure rates varying from 99% down to merely the low 90s.

We're not exactly talking about manned space missions where a 99% mission failure rate wouldn't be tolerated. Or imagine if 99% of aircraft landings ended in a fireball. We're talking about the small details of a system that on a large scale is a miserable failure.

The author thinks they found the golden boy who everyone loves, but the reality is virtually everybody is uninterested in those people.

I would theorize that if I had 400 people with medical dwarfism apply to the NBA to be pro basketball centers and then abused the heck out of SPSS or R, I could eventually find a correlation between average skin color and callback rates, or perhaps maternal income and callback rates, or presence of the father during childhood, or whatever. I'm sure it would be an absolutely fascinating paper. But don't miss the forest for the trees: the real story is that the people at the NBA who hire pro basketball players hire approximately (rounded down) zero people under 4 feet in height. So even if they're subtly or not so subtly biased about various demographic characteristics of dwarves, it really doesn't matter, because they don't hire dwarves to begin with. Of the people they intensely and strongly dislike and will not hire, they dislike certain demographics slightly less, but virtually all of them are still disliked enough to have approximately zero chance of being hired anyway, so it doesn't matter.

It's like asking a Catholic convent of nuns what they like to see in a male applicant, and they respond that they slightly prefer Catholic male applicants over, say, Jewish male applicants. Which superficially sounds like an incredible religious discrimination scandal, until you point out that Catholic nun convents accept essentially zero male applicants anyway, so... if a discrimination tree falls in a forest and no one hears it...

chrismealy 3 days ago 7 replies      
Who puts hobbies on a resume anyway? Who the fuck cares?
heynowletsgo 2 days ago 0 replies      
So nothing's changed, the status quo remains the most important thing. Shocking. As to why, politics and the love of power. Nothing new.
HarryHirsch 3 days ago 1 reply      
I worked for Miss Lilly Pulitzer once. She was intolerable. The imperious demeanor, the unwillingness to listen to reason, the papered-over cluelessness, it did not work out well. Never again!

Here's a fun thought: elsethread age discrimination is discussed. If an employer shouldn't be allowed to discriminate by age, should they be allowed to discriminate by class?

tomjen3 2 days ago 0 replies      
Reading through their methodology, it seems that they screwed up by giving the lower-class person attributes that indicated second best: e.g. "outstanding athlete" vs. "outstanding athlete $IN_CATEGORY", which is never going to be as impressive.
na85 3 days ago 3 replies      
Does anyone still believe the "pull yourself up by your bootstraps" narrative any more?

My understanding was that it had been thoroughly debunked.

wruvjg 3 days ago 1 reply      
No comment on how lower class women got 5 times the callback rate of lower class men? Almost exactly the rate at which upper class men outperformed upper class women? Is this reporting on the study biased in any way?
ouid 3 days ago 5 replies      
the assertion that this is scientifically robust is pretty flimsy. I would never hire anyone who listened to country music, regardless of background.
mahyarm 3 days ago 1 reply      
That top header takes way too much space and there is no obvious way to minimize it :/ Bad form.
hasenj 3 days ago 2 replies      
Why do tens of thousands of people have to compete for a small number of positions? Something is clearly wrong.

It seems like the study treats working in a job as a favor received by the peasants from the slave masters.

Or, the study purposely set up a scenario that matches this idea of what work is.

andrewclunn 3 days ago 1 reply      
If you want to be judged based on your ability rather than people's biases or your appearance, avoid "soft skill" professions. Just a word of advice.
ajross 3 days ago 2 replies      
The headline is just wrong. Yes, there was some correlation between "class cues" and resume response rate. But the overwhelming finding (like, 4x the effect!) was gender. Upper class men were wildly more popular than any other group. And in fact upper class women actually underperformed their lower class sisters.
mindcrime 3 days ago 2 replies      
Wait a minute... hang on.

"Every fall, tens of thousands of law students compete for a small number of coveted summer associateships at the country's top law firms. ... For these reasons, employment in top law firms has been called the legal profession's 1%."

"Our findings confirm that, despite our national myth that anyone can make it if they work hard enough, the social class people grow up in greatly shapes the types of jobs (and salaries) they can attain, regardless of the achievements listed on their resumes."

Just when exactly did the bar for saying that somebody "made it" become working for "the legal profession's 1%"? That's a ridiculously high bar... to the point of absurdity. And while it doesn't contradict the results themselves, it certainly colors the interpretation.

I mean, if you think the only thing that matters in life is to be in the 1% of your profession, then fine. But most people would be happy with a bar quite a bit lower than that... a steady job which puts them solidly in the middle class, or anything higher (in terms of socio-economic class).

Curiously, I feel like I see this "moving the goalpost" stuff quite often in articles which try to argue against the idea of meritocracy or the importance of work ethic and individual effort. Probably not a conspiracy, but perhaps a form of bias.

Anger as US internet privacy law scrapped bbc.com
307 points by clouddrover  11 hours ago   179 comments top 36
bbarn 9 hours ago 6 replies      
I remember being so excited as a kid when I started getting drawn into the computing age. I remember when the internet seemed like mankind's next step forward. I remember being excited about so many new trends, social media, the age of everything being free on the internet.

Then, everyone else caught up. Now it's just like watching television. A bunch of companies vying to get you to buy something at any cost, and taking that strategy further than they ever did before the internet.

Now, I just want to leave my phone at home and go ride my bike. I rarely feel like developing anything anymore, haven't done a side project in over a year. Whenever I get motivated to do things like that I see shit like this and it just feels hopeless. There's no way to defeat these companies, because they are full of people struggling to get ahead and doing their cog's part in the machine that ultimately does this to us all.

dguido 10 hours ago 7 replies      
Before anyone races in with a suggestion to use a VPN service, I STRONGLY suggest that you consider running your own self-hosted server instead. There is a great set of Ansible scripts to do just that right here:


jsz0 10 hours ago 5 replies      
Maybe we should all start running a script that browses random websites at random times. Seems to me that would go a long way towards making the data collected about as valuable as a Magic 8 Ball. It would be even better if such a script could actually look at my real browsing history and try to generate the most confusing anti-traffic. If I search for cats, it searches for dogs and birds. If I check the weather for zip code X, it checks it for zip codes Y and Z.
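A minimal sketch of what the query-inversion half of that idea could look like. The decoy table and pairing logic here are invented purely for illustration; no real tool or API is implied, and a real version would have to actually fetch pages.

```python
import random

# Hypothetical decoy table (illustrative only): for each real interest,
# plausible but unrelated topics to request alongside the real traffic.
DECOYS = {
    "cats": ["dogs", "birds", "fish"],
    "weather 94110": ["weather 10001", "weather 60601"],
}

def anti_traffic(real_queries, rng=random):
    """For each real query with a known decoy set, pick one decoy to emit as noise."""
    noise = []
    for query in real_queries:
        alternatives = DECOYS.get(query)
        if alternatives:
            noise.append(rng.choice(alternatives))
    return noise

if __name__ == "__main__":
    rng = random.Random(42)  # seeded only so the demo is repeatable
    print(anti_traffic(["cats", "weather 94110"], rng))
```

A real version would also need to randomize request timing and actually download the decoy pages, or the padding would be trivially distinguishable from organic browsing.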
srtjstjsj 9 hours ago 0 replies      
Ajit Pai's career is the canonical example of revolving door crony capitalism and regulatory capture. His career is dedicated to using the US government to transfer wealth from the public to Verizon.


tucaz 10 hours ago 0 replies      
I hope this is the beginning of a process that will improve this situation in the future.

Not long ago, people were completely ignorant about this subject. As companies started to take advantage of and abuse the general public's lack of awareness, they started to do it more broadly and publicly.

Now the idea of lack of privacy is starting to get out on the streets and make people more aware of the problem.

At some point we will be able to turn the table and a strict legislation around privacy will be put in place.

Things are going to improve, but they still need to get worse before they get better.

acomjean 9 hours ago 0 replies      
I am assuming with all the money made selling all this private data, ISPs are going to be slashing consumer broadband rates across the US and building better infrastructure!

Cheaper faster internet for all in the USA!

Wait, they don't have to lower rates, I'll go to the one of the other many ISP options I have...

Oh wait..

iliketosleep 10 hours ago 2 replies      
I do not understand. I thought that in the current climate, where people are becoming increasingly aware of and concerned about privacy, such laws would be expanded in scope. But here, the law is being repealed.

Additionally, I find the implications of this kind of admission to be astonishing: "Last year, the Federal Communications Commission pushed through, on a party-line vote, privacy regulations designed to benefit one group of favoured companies over another group of disfavoured companies." That's a pretty huge statement, made in a business-as-usual kind of way, that calls into question the overall integrity of the FCC.

confounded 9 hours ago 0 replies      
Worth making a shout-out to the independent ISPs that opposed the change (including the Bay Area's own MonkeyBrains & Sonic).

If you're lucky enough to have one, support your local ISP!


JumpCrisscross 8 hours ago 0 replies      
I am reminded of a conversation with a Russian-born Valley-based venture capitalist. I asked why Silicon Valley seems less politically organized, and thus influential, at the grassroots level than New York City.

"New York is closer to D.C.," she observed. But that doesn't explain why the average person from Silicon Valley has less influence than, say, from Los Angeles.

"We're Alan Turings," she said. Turing wanted to be left alone to make things. Unfortunately, his government didn't see it that way: first with World War II, and later by prosecuting him for his sexual orientation. Being able to be left alone to make things is a luxury, a delicate balance almost unprecedented across human history.

We will lose the privilege if we refuse to defend it. Please donate to the EFF [1] or the ACLU [2]. Call your Congressperson [3] and Senator [4]. Get to know their aides. Let your Attorney General [5] know you care about this.

[1] https://supporters.eff.org/donate

[2] https://action.aclu.org/secure/protect-rights-freedoms-we-be...

[3] http://www.house.gov/representatives/find/

[4] https://www.senate.gov/senators/contact/

[5] https://oag.ca.gov/contact

Note: this comment recapitulates an earlier one [I]

[I] https://news.ycombinator.com/item?id=13963777

jarcoal 10 hours ago 0 replies      
If anyone in Portland, OR is looking for an ISP that will respect your privacy, you might try reaching out to Stephouse (https://www.stephouse.net/).

I recently switched to them from Comcast, and this news makes me all the happier that I did.

andr 10 hours ago 4 replies      
Ask your ISP. Show them this matters to you, enough to cancel your contract. I asked mine (PAXIO in the Bay Area) and they said they have no plans to sell any customer data.
pdimitar 2 hours ago 0 replies      
I am waiting for the day a cheery Russian teenager leaks all of the browsing history of several USA senators.

Nothing motivates politicians more than them being directly affected.

As ironically amusing as such a story would be, I don't think they'll draw the right conclusion, however. They'll probably push for more laws "against terrorism" and will not see such an accident as proof of how much of a slippery slope the killing of internet privacy is.

TOMDM 9 hours ago 1 reply      
So, ISPs can sell your data now. The few who use VPNs or other methods to obfuscate/hide their data are a rounding error; big ISPs won't care at all, because the barrier to entry is at the moment much more complex than installing an adblocker. Not to mention, the immediate impact is not so apparent to the average user.

What gets me is that in the world we live in, data is king. Now that the ISPs can use this data, surely they could sell it, but what's stopping them from eyeing Google's throne?

Google at the moment leverages the data they gather from their services, but your ISP has _everything_.

Am I missing something here, or does the endgame look like the issue will be what ISPs choose to do with this data in-house rather than out of it?

Not to mention, do they also no longer need to disclose when they suffer a data breach or am I mis-remembering?

All this together looks like it ends with gross oversteps in the use of data by your ISP, not to mention they will do the [three letter agency of choice]'s job for them, all they need to do is find a way in.

SN76477 9 hours ago 0 replies      
Can we not just have some representative's browser history leak and blame it on this?
olivermarks 10 hours ago 0 replies      
Dane Jasper, CEO at Sonic in the bay area, has a good track record around privacy... so far... https://corp.sonic.net/ceo/category/privacy/
skynode 8 hours ago 0 replies      
May be a good time to reconsider that move abroad. There are quite a bunch of places that still cherish privacy or don't even bother about privacy (so you run your own infrastructure as you like), while you still get to conduct your business reliably. With an Internet connection and a few good bank accounts (and of course a BTC wallet), you can be anywhere these days and still accomplish so much. But you must be willing to be quite flexible about your worldview and learn.
russdill 10 hours ago 1 reply      
Can states enact their own law? California maybe?
atheiste 8 hours ago 1 reply      
I think there is hope in free software companies. I am working at one nowadays; we are breaking the law almost daily and getting sued with similar frequency. Now we are installing LTE transmitters in any village that is interested, communicating via satellite to bring the internet there. Becoming your own ISP solves the problem, right? If we see an increase in such behaviour, the problem might disappear. Because the future is distributed.
singularity2001 4 hours ago 0 replies      
If you have some ssh server somewhere (who hasn't), you can very easily use 'VPN over ssh' by calling:

sshuttle --dns -r user@remote_host 0/0

(sshuttle needs at least one subnet argument; 0/0 routes all IPv4 traffic through the tunnel.)

methehack 8 hours ago 0 replies      
Seems like one could write a program that continuously (with some sleeping, of course) hits random websites in the background. This would hide the "signal" of the sites one is actually browsing, and the ISP's data would be much less valuable. The solution to pollution is dilution. I wonder if an approach like that would sufficiently cloak one's data and sufficiently screw the carriers.
MichaelMoser123 8 hours ago 1 reply      
Isn't this in conflict with the fourth amendment? Does the US constitution permit this practice?
LeicaLatte 8 hours ago 0 replies      
Where is the anger the article refers to? Literally none of the big tech executives who have a voice have spoken up about this. I am not sure we minions count for anything anymore.
ktta 9 hours ago 0 replies      
I have a question. Right now I'm using a cheap $3.49 VPS and it is located in Beauharnois, Canada. How are the privacy laws in Canada? Better than US or worse? Is there anything else I should know?

PS/PSA: It was the best value with unlimited internet I could find. It was the cheapest option from OVH. Cheapest, considering I wouldn't have to worry that the company would shut down. Latency isn't terrible actually.

swinglock 8 hours ago 0 replies      
The only difference between allowing the postal and waste industries to inspect what they are hired to deliver, log it, and sell those logs to whoever pays, and allowing the Internet-pipe industry to do the same, is that it's much cheaper for the Internet-pipe business to do so.
JumpCrisscross 8 hours ago 0 replies      
Were there any ISPs that, if they didn't fight the measure, at least didn't overtly support it? Wireless carriers?
httpitis 9 hours ago 1 reply      
I don't use it myself, but could the technology behind the tor network [1] (or the product itself) be used to counter this?

1) https://en.wikipedia.org/wiki/Tor_(anonymity_network)

dcow 9 hours ago 4 replies      
But why should Google be allowed to share your data but not ISPs? Not that I love this move but the reasoning does resonate, or at least make me question if the former law really did anything at all or if FB/Google lobbied it through to stifle competition.
ReinholdNiebuhr 9 hours ago 0 replies      
I asked this in the other thread on this topic: when did the Obama-era rules emerge? If anyone has the bill info, that would be ideal. I've been trying to find stuff on Google, but it's flooded with the current news.
toodlebunions 10 hours ago 5 replies      
So what's the best VPN that doesn't store or sell their user data?

Surely a new business opportunity if there isn't one good enough to recommend for privacy.

danblick 9 hours ago 0 replies      
Is there any hope I'll be able to find a major ISP that doesn't sell my data? (Google Fiber, I wish?)
mdani 10 hours ago 1 reply      
Is there a way to opt out explicitly by requesting the ISP not to share your data?
tobltobs 8 hours ago 1 reply      
Who needs privacy as long as you have guns.
belovedeagle 10 hours ago 4 replies      
> will soon no longer need consent from users to share browsing history with marketers and other third parties

This is a lie ("fake news", if you will). This congressional action cancels an upcoming change in policy: it maintains the status quo, and therefore "no longer" is not an accurate characterization of the situation.

Gustomaximus 9 hours ago 1 reply      
A great time to start using Opera browser with their free built-in VPN


Disclaimer: Worked at Opera ~5 years ago which is why I'm familiar but no skin in the game now.

acover 9 hours ago 2 replies      
Do people actually care? HTTPS reveals only the domain, not the content. Google/Facebook collect way more information, and everyone keeps using them.

If given the choice of targeted ads vs an extra $30 a month I suspect most people would choose targeted ads.

Edit: remember to downvote if you disagree

ericcumbee 10 hours ago 1 reply      
It wasn't a law passed by Congress and signed by the president... It was a regulation. There is a difference.
We Have 24 Hours to Save Online Privacy Rules eff.org
370 points by DiabloD3  2 days ago   50 comments top 9
mgreg 1 day ago 2 replies      
These rules are/were certainly a step in the right direction for the protection of consumer privacy and should be saved; the attention on consumer privacy is welcome.

What's fascinating is that other online privacy areas that I would argue are much more invasive and threatening to consumer privacy are completely ignored. I'm referring, of course, to the ecosystems of Google, Facebook, the new Verizon (AOL, Yahoo), and the many other companies looking to amass as much consumer data as possible for profit.

These FCC rules feel a bit like the FCC was patting themselves on the back for fixing a drip in the bathroom faucet while ignoring the broken pipe to the water main.

From a business perspective I can also see why ISPs would be so upset about their businesses having to abide by these rules while their competitors (again, Facebook, Google, et al.) are left to collect data unbridled. They all sell ads, after all.

levi_n 1 day ago 4 replies      
https://resistbot.io/ makes it real easy to contact your representatives in a manner they are more likely to see.
tfussell 1 day ago 3 replies      
What I've been curious to learn and haven't heard discussed is how this data will become available. Will I be able to call up Comcast and pay $X for a particular user's browsing history after this passes?
Spivak 1 day ago 2 replies      
Is there anything practical I can do if I already know my representative is going to vote against the bill?
jessaustin 1 day ago 1 reply      
One wonders if this asinine legislation might also be supported by VPN providers?
ReedJessen 1 day ago 0 replies      
The "Calling you now" widget to contact my congressperson doesn't seem to work for me. Is this just my problem, or is it down for everyone?
exabrial 1 day ago 2 replies      
Unpopular comment alert: This is a good thing.

Guys, I don't think the "Federal Communication Commission" should be passing consumer protection regulations. What's to stop Amazon, Netflix, etc from selling a service where an advertiser sends them an IP and they get back your name and interests? These regulations need to be passed at a much broader level. I think the FCC should focus on anti-competitive behavior on ISPs right now, and leave these sorts of matters to another regulatory body.

And besides, with TLS (and DNSCurve if you're paranoid), I'm not sure this regulation means much anyway.

jumpkickhit 1 day ago 0 replies      
I'm assuming this will pass.

Will we ever be able to remove it once it's in effect? Or is that likely to never be an option?

paxcoder 1 day ago 1 reply      
The title and the article are missing "United States" somewhere.
Thanks for Submitting Your Rsum to This Black Hole nytimes.com
364 points by johnny313  3 days ago   205 comments top 24
ryandrake 3 days ago 10 replies      
Whenever this subject comes up, predictably the answer here is always Blah blah blah network network network. For a demographic always looking for the scalable solution to a problem, HN seems pretty attached to the least scalable option. The numbers just don't make sense to me.

Let's generously assume that you meet and have excellent working relationships with 100 people at each of the 4 jobs you've had. Of those 400 people, say, conservatively, 50% think so highly of you that they'd be willing to stick their neck out help you with your next job search. Out of those 200 people, 50% are no longer working for that company. Out of those 100 people, let's optimistically assume you actively kept in touch with all of them over the years. Now, maybe 50% are working for a company where you'd actually like to apply. Out of those 50 companies, 20% actually have a job opening that fits your background. Out of those 10, maybe 2 actually know and can put you in touch with the hiring manager for that opening. And your success chance through the interview pipeline has got to be worse than 50%.

Tweak my numbers up or down a little, but I think it's a pipe dream for most workers: You have to have an enormous address book full of high-power contacts in order to end up at the end of the funnel with one or two who are both willing and able to successfully help you get a job. And once you've exhausted that network (all it takes is to blow a few interviews), then what?
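Tallying the funnel above explicitly (the pass-through rates are the commenter's own assumptions, not data; the final 50% interview rate is the optimistic end of "worse than 50%"):

```python
# Each step applies one assumed pass-through rate from the comment above.
steps = [
    ("would stick their neck out for you", 0.5),    # 400 -> 200
    ("have since left that company", 0.5),          # 200 -> 100
    ("you actually kept in touch with", 1.0),       # 100 -> 100 (optimistic)
    ("work somewhere you'd apply", 0.5),            # 100 -> 50
    ("whose company has a fitting opening", 0.2),   # 50  -> 10
    ("can reach the hiring manager", 0.2),          # 10  -> 2
    ("interview pipeline success", 0.5),            # 2   -> 1
]

def run_funnel(start, steps):
    """Apply each (label, rate) step in turn, printing the running count."""
    remaining = float(start)
    for label, rate in steps:
        remaining *= rate
        print(f"{label}: {remaining:g}")
    return remaining

run_funnel(400, steps)  # out of 400 contacts, roughly one offer survives
```

Even nudging individual rates up or down, the product stays near one, which is the comment's point: the funnel leaves almost nothing at the end.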

apsec112 3 days ago 14 replies      
In my job search, I've been surprised by how often you submit a resume online, and then you don't even get a rejection email. It's a true "black hole" in that you never hear back, not even with a "no thanks". I think it's disrespectful of candidates to ask them to spend time filling out an application, and then leave them hanging because you're (presumably) too lazy to read it.
gnarbarian 3 days ago 1 reply      
The worst is when you have to fill out countless repetitive and exhaustive applications for ONE job because they don't have a proper interface to Indeed/LinkedIn/etc.

I probably spent 7 hours on application paperwork for my current job. Complete with detailed personal history that had to be 100% accurate going back 10 years.

I had 3 other offers expire before I got my offer from my current employer. Then, once I accepted, it was another month before I knew if I passed the background checks. They wanted me to start a few days after I finally knew, without giving proper notice to my current employer, too!

Thankfully this only had one 3 hour casual interview and it was more a waiting game than endless hours poured into interviews for a job I might not get. I knew this would be a better choice than the other offers and I'm glad I did what I did.

chiefofgxbxl 3 days ago 1 reply      
Minus the dog learning to type at the end, sounds like my experience applying for jobs: submit application online to several places, get a "resume received" email, some automated email a few days later to get one's hopes up, and then get that denial email.

I don't want to hear some euphemistic email detailing how I was a very strong candidate but among a large qualified pool of applicants or how the team was impressed with my resume but unable to move forward at this time... just tell me I didn't get the job already and cut out all the flowery soup.

jellin 3 days ago 1 reply      
I have almost 10 years of experience and I've been looking for a job for the last year. I've sent applications to some companies, carefully selected and about which I'm 100% enthusiastic. I've sent custom cover letters, I've studied the product and provided feedback and improvements. And still I don't deserve even an email saying that I've been rejected.

With some companies I was exchanging email, providing all the info, and then suddenly I never received any further reply.

I think that it's good for the company to filter out candidates, but something must change, as candidates we expect at least a rejection email, especially when your life depends on it.

dkarapetyan 3 days ago 1 reply      
I'm more and more convinced that being a cog in some corporate machine is coming to an end. We've collectively refined corporate processes to the point that they no longer require human ingenuity or creativity. This has all happened under the guise of making things more efficient so that a stock ticker will be less volatile and will consistently move up and to the right. The candidate tracking software that this article parodies is one example of such an automated and "efficient" system. One gets the sense that the recruiters are almost secondary. Soon a "dog" will really be able to screen candidates for a job.

I don't think I'm saying anything new here. The mechanization of such work has been happening for a while now and the smart move is to start planning for that inevitability. Anything that requires basic pattern matching and procedures is pretty much gone. My retirement account I think is currently managed by a "robo advisor". Hiring humans to do such mechanical tasks will start getting more and more expensive relative to tweaking some parameters in some neural network coupled maybe with some domain specific policy/optimization framework.

Personally I don't think this is such a bad state of affairs. Why should societal optimization tasks be handled with heroic human effort when we can just do it with math?

crispyambulance 3 days ago 1 reply      
The best way to avoid the "black hole" is to not throw your resume into it in the first place.

Instead, use your professional networks and friends. Reach out to actual human beings. Find any way you can to bypass the bullshit online job application systems and HR departments.

When people (or programs) go through a stack of resumes, it's all about finding reasons to eliminate as many as possible as quickly as possible using the flimsiest of criteria. Of course it's going to create hard feelings but what should one expect when putting oneself into a giant horde of applicants?

M_Grey 3 days ago 1 reply      
I've been so thoroughly broken by naming schemes like "Gravity" or "Flaming Biscuit" or whatever, that I've basically trained myself (at least here) to read things like "Black Hole" as a project name. Imagine my brief consternation when I thought, "Who in their right mind would name a resume hosting service 'Black Hole'?!"... then saw the source and felt incredibly stupid.

Beyond that, let me just add my voice to the multitude shouting, "Holy shit, yes, and it's terrible!"

amgin3 3 days ago 0 replies      
In my experience searching for a job, companies invite you to 3 interviews and a coding test, then decide the position isn't clearly defined, so they restart the entire process and ask you to re-apply. Then you go through 3 more interviews and another coding test, and then one more interview, and then you never hear from the company again. Since the whole process took 4 months, you are now broke and homeless.
Tagore 3 days ago 1 reply      
I've literally never gotten a job by sending out resumes. Every single job I've had I got through some connection or another.

I actually refused to send my current job a resume. The conversation went like this:

Can you send them a resume?

No- here's my LinkedIn.

But can you send them a resume?

No- here's my LinkedIn.

OK, can you come in for an interview?


A month passes.

Can you come in for another interview?



Overtonwindow 3 days ago 0 replies      
This is the finest account of the resume/employer process I have ever laid eyes on.
cardiffspaceman 3 days ago 0 replies      
For however many weeks you might wish to collect unemployment checks, you have to show that you're looking actively, in my jurisdiction. One might have a network of contacts, but it's going to take some time for that network to lead to the kind of events that count as evidence that you're looking for work. On the other hand, the black holes can lead to lots of events of that nature. So worst case, the black holes actually generate something (is Hawking radiation the right term?) that you can use to keep those checks coming.
tiatia 3 days ago 0 replies      
You would not believe how many applications I wrote. I know the black hole.

There are a few rules.

Never apply to a job that uses Taleo, SAP or any other "application" interface. A real job only requires you to send or upload a resume and an _OPTIONAL_ cover letter. Ignore everything else.

Never apply for a job that requires anything unusual (only non-smokers, hand written resume, time of birth for astrological evaluation, hard copy of application etc.). I am even skeptical about "motivational" letters. All these are signs that your future boss is nuts.

Be very open to creating "sample" work, like a 10-page marketing plan or an "investment analysis of 5 companies". Just be very clear that this will be billed at your consulting rate too.

Remember: the second most stupid people work in HR (with the most stupid working in real estate). This does not mean that every HR person is an idiot - in fact I am sure there are brilliant people - it is just a reflection of the entry requirements for these jobs.

I never really found a decent job after my PhD and I was desperate for years. The funny thing is: now I sometimes get offered two jobs a day just by meeting people. I don't even engage in the conversation. They wouldn't be able to offer any salary that would make me even consider taking a job. And if I look at my former peers, never getting a job in the past may be the best thing that ever happened to me.

rampage101 3 days ago 1 reply      
I don't get why referrals count for so much, especially when there are referral bonuses which would encourage employees to refer basically anybody they know.

With all these companies having massive HR departments, I also don't understand how a resume goes unread or un-responded to when it takes maybe 10 minutes max to go through a resume thoroughly.

bgribble 3 days ago 0 replies      
I have actually had really good experiences as both a job seeker and a hiring manager with Hired. Their platform makes it easier to keep track of candidates and makes it less likely they will fall through the cracks.

My experience with hiring in the past is that startups that do it completely internally with no HR or recruiter support are likely to get overwhelmed and drop the ball. Recruiters drive the process along, but they are motivated to put any ass in the seat and are not usually completely trustworthy from either the job seeker or the client side. A marketplace solution like Hired, Vettery, etc. makes the process a lot more transparent and has a rhythm that helps keep hiring managers on task.

Of course I'm a programmer and I'm in NYC, and what works for me here doesn't work for everybody, everywhere.

jkaljundi 3 days ago 1 reply      
I've been toying around with an idea for a recruitment and job application service which would try to turn the tables and show candidates who's interested in them. So instead of applying, the first step would be a question from the candidate to the company saying "tell me more about this job". The company would then need to actively get back to the candidate. The advantage to the company would be a much larger pool of candidate contacts, although they would also need to work those contacts more.

It might not be a solution for jobs and company types described in this thread, but in many industries, company types and countries the lack of candidates is a much bigger problem than too many candidates.

wott 3 days ago 0 replies      
I've had a recruiter write to thank me for my application, but unfortunately blah blah résumé blah blah position blah blah. The thing is I had never applied to anything at her company nor sent her a résumé, I had just asked her a question once. Which she had never answered...
akhilcacharya 3 days ago 0 replies      
That's why it's best to just directly contact a recruiter.
batguano 3 days ago 0 replies      
No way all this happens in a mere two months. It'd get dragged out for four, at least.
good_vibes 3 days ago 0 replies      
+1 for entrepreneurship
lukaszjb 3 days ago 1 reply      
WTF did I just read?
dezb 3 days ago 0 replies      
what a waste of my life reading that stupid page..
kareemsabri 3 days ago 1 reply      
As a counter example, I got a great job from one of those black holes.
Tagore 3 days ago 2 replies      
The issue here is that it's very, very difficult to figure out if someone can code, at all, from a resume. You might have a masters in CS and still be hopeless when it comes to actually writing even very simple things. You'd be surprised at how often that is the case...

So, when I hire, I do so through connections. I ask friends "Do you know anyone who can actually code who is looking for a job?"

And if it's just me, that's the end of it, but... let's say I'm hiring for a venture-backed firm, or for a department in a bank or something. In that case, I have a fiduciary duty to put a job ad out there, and I have to be able to show that I received a lot of resumes and "looked them over." By which I mean unceremoniously threw away. Who has the time to look over 1,000 resumes, most of which are complete bullshit?

I was going to hire my friend's friend anyway, but I had to solicit your resume along with many others to provide some cover. I threw your resume away without glancing past the education section (if you had gone to an Ivy I might have looked twice.) Capisce?

The new £1 Coin thenewpoundcoin.com
392 points by Velox  2 days ago   406 comments top 54
wonderous 2 days ago 4 replies      
How the "hidden high security feature"* in the new £1 coin works is covered here:


* The Royal Mint's new anti-counterfeiting technology, which can be authenticated by high-speed automated detection, was called Integrated Secure Identification Systems (ISIS) - but for obvious reasons, they've stopped using that name.

oneeyedpigeon 2 days ago 15 replies      
As a UK citizen, it stuns me that one in thirty pound coins is a fake, because a) I can't imagine how it's possibly worth it, and b) out of the thousands of pound coins I must have handled, I've never spotted a fake - they must be very good. Still, the new coin certainly looks nice!
wonderous 2 days ago 5 replies      
For any interested in the related costs:


Seems like there are £45m worth of counterfeit coins in circulation, but the switch may cost over £100 million.


EDIT: The percentage of fake coins in circulation appears to be in flux, which makes me wonder what methods they use to estimate the fakes in circulation - whether they use controls like injecting known fakes into the count to see if they are spotted, whether the sample is truly random, whether the maths is correct, etc.:

1% 2004 [1]

3.03% May 2014 [2]

2.55% May 2015 [2]

Of note is that the Royal Mint states in the 2016 annual report that the last survey for fakes was in May 2015, but the report was issued in July 2016, which means that at least as of that date they had failed to do the annual survey, or at least to publish the results.[3]

[1] http://www.telegraph.co.uk/finance/personalfinance/10707540/...

[2] http://www.royalmint.com/discover/uk-coins/counterfeit-one-p...

[3] http://www.royalmint.com/~/media/Files/AnnualReports/ar_2015...

itaysk 2 days ago 0 replies      
Watching the video - I was waiting for Jony Ive's voice at the end saying "it's the best coin we've ever made"
doktrin 2 days ago 5 replies      
> "Approximately one in thirty £1 coins in circulation is a counterfeit."

That took me completely by surprise. Is there really that much of a market in counterfeit coins? The profit margins just seem so modest relative to counterfeit bills.

sklivvz1971 2 days ago 3 replies      
Funnily, the new "most secure" pound will be introduced one day before a pretty big event, Mrs. May's Article 50 invocation, which is widely predicted to devalue the pound significantly.

You might buy new pounds as "secure" coins, but you'll probably lose some percentage of their value the next day...

legulere 2 days ago 2 replies      
Another possible recent security feature is a polymer ring: http://news.coinupdate.com/germany-introduces-next-generatio...
anothercomment 2 days ago 6 replies      
For a moment I thought/hoped the UK would introduce an official cryptocurrency...
sergior 2 days ago 1 reply      
Coin has WiFi and a microphone and sends data to the nearest surveillance spot if it is in range.
ChuckMcM 1 day ago 0 replies      
Wow: "Approximately one in thirty £1 coins in circulation is a counterfeit."

I have about a dozen £1 coins in my informal 'collection of currency from other places' bag, and it looks like 2 of them are probably counterfeit (at least by the standards of the Royal Mint): one clearly is - the date and reverse picture don't match - and the other has fairly poor milling around it.

DonaldFisk 2 days ago 2 replies      
It reminds me of the old threepenny (pronounced "thrupny") bit: https://en.wikipedia.org/wiki/Threepence_%28British_coin%29
sengork 1 day ago 0 replies      
Some of the dual-metal coins suffer from faults when they are exposed to cold environments. One can test for this by placing the coin inside a freezer and observing one of the metals shrink to the point where it disconnects from the other part of the coin.

There are much older examples of latent image technology on coins, for example Russian 10 ruble circa year 2000 (look at the middle area of the '0' character): https://thumbs.dreamstime.com/z/coin-rubles-russian-commemor...

blue1 2 days ago 7 replies      
> Hidden high security feature: a high security feature is built into the coin to protect it from counterfeiting in the future.

"security through obscurity" as a feature?

rbanffy 2 days ago 2 replies      
What happens if they have to remove the thistle and the shamrock after Brexit?
tehabe 2 days ago 2 replies      
It looks like a nicer version of the 1 euro coin. I like that the coin isn't round but 12-sided, which will certainly help visually impaired people use the coin. I wonder if they will introduce a new 6-sided 50p coin.
d--b 2 days ago 3 replies      
Just curious, has anyone any insight into what that "hidden high security feature is"?
enthdegree 2 days ago 2 replies      
Is it a curve of constant width like the 20p and 50p? If its sides are straight it cannot be.

Also from wiki:

> The new design is intended to make counterfeiting more difficult, via an undisclosed hidden security feature, called 'iSIS' (Integrated Secure Identification Systems). [1]


[1]: https://en.wikipedia.org/wiki/One_pound_(British_coin)

muse900 2 days ago 5 replies      
What I don't get is why we need coins anymore. Can't they just make a £1 note like the £5 one? I am always annoyed when I have coins on me and I never pay attention to them. Why not make £1 worth more by making it a note? Why not move away from notes nowadays? Is there any specific reason for that?

Is there just too much metal sitting around that they thought that would be an ok use of it?

koolba 2 days ago 2 replies      
> Hidden high security feature: a high security feature is built into the coin to protect it from counterfeiting in the future.

I bet there's an RFID in each coin that uniquely identifies each one. If true, it'd also let them (or a thief!) identify the total value of the coins jingling in your pocket.

> Approximately one in thirty £1 coins in circulation is a counterfeit.

That seems incredibly high! Anyone have comparable stats for US dollar bills or €1 coins?

> The legal tender status of the round £1 coin will be withdrawn on 15th October 2017. From this date shops will no longer accept these coins, but you will still be able to take them to your bank. We would encourage you to use your coins or return them to your bank before 15th October.

Six months is a pretty short window for something like this.

What prevents a shop from accepting the coins after Oct 15th and then they take it to the bank?

Also, how does the 1/30 fraud number impact the returns of the current pound coins?

bhauer 2 days ago 1 reply      
That coin is a beauty. It looks like it would be a denomination much higher than £1.

As someone who collected coins as a child, I've been a tad disappointed by some of the newer designs coming out of the US Mint. Some are classy, others seem a bit goofy.

But this new pound coin is very tasteful. Well done, Royal Mint!

DINKDINK 2 days ago 1 reply      
Many commenters in this thread are speculating on how counterfeiters are inserting the apparently large volume of counterfeit coins. Maybe one source is actually unsuspecting /merchants/: in dispensing change to customers, market participants would seem less likely to scrutinize the coinage.
flexie 2 days ago 1 reply      
Isn't it 14 sides?
smcl 2 days ago 4 replies      
"The pound wont be round for much longer" is a pretty awful tagline, I'm not sure they realised what this sounds like...
beloch 2 days ago 2 replies      
Canada has an 11-sided 1 dollar coin (the loonie) and a bimetallic 2 dollar coin (the toonie). This new £1 coin is basically a mash-up of the two.

Fair warning: People are going to put these things in pneumatic presses, hit them with sledgehammers, etc. and then claim their brand new £1 coins are defective and "just fell apart". You should probably ignore these people. I'd be very surprised if the Royal Mint hadn't talked to the Canadian Mint and made sure they know how to make these things absolutely bomber.

tyingq 2 days ago 1 reply      
Looks a bit like the failed US Susan B Anthony $1 coin. https://en.numista.com/catalogue/photos/etats-unis/g1288.jpg
himlion 2 days ago 4 replies      
Is counterfeiting coins very lucrative?
bambax 2 days ago 6 replies      
Why not make the new £1 coin euro-compatible? Despite Brexit and all, it would make so much sense for the pound and the euro to share the same dimensions.

Old pound: diameter 22.5 mm, thickness 3.15 mm.

New pound: diameter 22.63 - 23.43mm [1], thickness 2.8 mm.

Euro: diameter 23.25 mm, thickness 2.33 mm.

So close and yet so different! Why??!?

Edit: Okay, I'm not sure why I saw "so much sense" in having the pound and the Euro the same size. I go to the UK often and hate having different coin sizes in my pockets, but obviously the pound and the Euro aren't the same value, so there's no real reason they should be of the same size...

[1] Since the new pound isn't round but dodecagonal, the diameter isn't constant; the smallest value is the diameter of the incircle and the other value, the diameter of the circumcircle.

nailer 2 days ago 0 replies      
So is the 'hidden high security feature' NFC? Something similar?
berberous 2 days ago 0 replies      
How much lighter is the new coin? I have to say, the old 1 coin is my favorite I have ever seen in the world. It had a great heft/weight/size to it.
mirekrusin 2 days ago 1 reply      
After two pints it won't be distinguishable from a €1 coin; many people have both in their pockets, and it's already hard enough to tell £1 from €1 :(
Keverw 2 days ago 1 reply      
That looks neat. Never seen a coin before that changes based on how you look at it. I kinda want one, even though I'm in the US.
bigbugbag 2 days ago 0 replies      
How is this claim substantiated? I fail to see any difference from the coins that are widely available around the world.
krick 1 day ago 0 replies      
It's really pretty. I wish euros would be designed that well.
Asooka 2 days ago 0 replies      
Made to look like the €1 coin to ease the transition to the euro in the future, I'm sure.
peterbraden 1 day ago 0 replies      
Will they have to replace it as soon as the Queen dies? Seems wasteful.
ryanmarsh 1 day ago 0 replies      
> Approximately one in thirty £1 coins in circulation is a counterfeit.

That's staggering.

tomovo 1 day ago 0 replies      
At first I thought this was some Euro-related prank.
Graham24 1 day ago 0 replies      
I'll be glad to see the back of the old Thatcher.
gaspoweredcat 2 days ago 0 replies      
I like how they no longer mention the name of the technology inside it, which, when it was first announced, was to be called "ISIS".
ajarmst 2 days ago 0 replies      
I like the optimism of keeping symbols of Ireland and Scotland on the coin. Of course, those could become ironic in the not-too-distant future. The leek seems safe, though.
udev 1 day ago 0 replies      
Designed in California. /s
TheArcane 2 days ago 0 replies      
> a high security feature is built into the coin to protect it from counterfeiting in the future.

Reminds me of the Indian ₹2000 note's 'GPS feature'

zeristor 2 days ago 1 reply      

How much longer will we be using coins for?

jyoung789 2 days ago 0 replies      
I wonder how many people are going to try to push out the middle the way they do to the toonie.
clamprecht 2 days ago 0 replies      
It's interesting that they chose .com versus .co.uk for their domain.
oblio 2 days ago 0 replies      
I see by the design that they're already planning for Br-re-entry.
miguelrochefort 1 day ago 1 reply      
Is my business ready? Why wouldn't it be?
noja 2 days ago 1 reply      
dot com? Why?
wtvanhest 1 day ago 5 replies      
They probably should have continued using the letters. The more things called ISIS the less important those letters are to any one thing, including the terrorist org.
comice 2 days ago 5 replies      
The website for the most secure coin in the world only gets a B rating from ssllabs: https://www.ssllabs.com/ssltest/analyze.html?d=www.thenewpou...
fiatjaf 2 days ago 0 replies      
Containing 1 pound of silver in it? No, so it's fake.
im3w1l 2 days ago 2 replies      
From my understanding, all money is created as debt. Counterfeits would be an exception. What is the impact of this?
fiatjaf 2 days ago 0 replies      
What is the difference between counterfeit and "original" coins? Unless they have 1 pound of silver in them they're all fake money.

I think everybody should just accept and use counterfeit coins, if they look like the coins issued by the central bank.

Haskell Concepts in One Sentence torchhound.github.io
335 points by crystalPalace  3 days ago   162 comments top 25
scottmsul 3 days ago 5 replies      
A monad is any data structure which implements bind. Bind is a higher-order function with two parameters - one is the data structure to be transformed, the other is a function which maps over elements in the data structure. However, unlike a normal map, each result of "bind" sits in its own version of the original data structure, which then have to be combined back into a single data structure. The way in which the data structures are combined is what makes each monad different.

For example, List is a monad. Suppose we had a List of Ints, such as [5,3,4]. If we were to run bind over this list, we would need a function that takes an Int and returns a List of something. We could use "show", the function which converts Ints to Strings (a String is technically a List of Char. Since this is a List, we're good). If we call bind using [5,3,4] and show, we get ["5","3","4"] which are then combined to "534".

We can check with the interpreter (>>= is bind):

Prelude> [5,3,4] >>= show

"534"

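For lists, here's a sketch of how bind could be written by hand (the name bindList is just for illustration; the real list instance in GHC is equivalent to concatMap):

```haskell
-- Map the function over every element, then flatten the resulting
-- list of lists back into a single list.
bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concat (map f xs)

-- bindList [5,3,4] show first produces ["5","3","4"], and concat
-- then combines them into "534" (a String is a list of Char).
```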

jnordwick 3 days ago 8 replies      
My general problem with Haskell articles and such: they are written as if you already understand Haskell. They make total sense if you know the language, but if you are trying to learn, they are mostly useless if not outright confusing. Or even worse, they devolve into dense academic prose even when writing introductory articles.

Sometimes they even use Haskell as if you already know it to try to explain it.

I'm still looking for a basic article that describes monads well. My first few languages were all functional too, so that isn't the problem. I even still use APL derivatives.

oblio 3 days ago 5 replies      
I'm still kind of having problems with monads. Funnily enough, I recently found an article explaining what monoids are: https://fsharpforfunandprofit.com/posts/monoids-without-tear...

 You start with a bunch of things, and some way of combining them two at a time.

 Rule 1 (Closure): The result of combining two things is always another one of the things.

 Rule 2 (Associativity): When combining more than two things, which pairwise combination you do first doesn't matter.

 Rule 3 (Identity element): There is a special thing called "zero" such that when you combine any thing with "zero" you get the original thing back.

 With these rules in place, we can come back to the definition of a monoid. A "monoid" is just a system that obeys all three rules. Simple!
Long explanation overall in the article, but based on 6th grade math. I understood it, and it stuck. Could someone extend the monad explanation from here? Maybe I'll finally get it :)
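The three rules map directly onto Haskell's Monoid class; here's a sketch checking them for strings (`<>` is the combining function, `mempty` is the "zero"). A monad then obeys the same kind of identity and associativity laws, just stated for `>>=` and `return` instead, which is why the monoid intuition carries over:

```haskell
-- Strings form a monoid: combining is concatenation, the "zero" is "".
rule1 :: String
rule1 = "ab" <> "cd"                                   -- still a String (closure)

rule2 :: Bool
rule2 = ("a" <> "b") <> "c" == "a" <> ("b" <> "c")     -- grouping doesn't matter

rule3 :: Bool
rule3 = "x" <> mempty == "x" && mempty <> "x" == "x"   -- "zero" changes nothing
```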

aetherspawn 3 days ago 3 replies      
Monads are not complex. A monad is a box. Any monad has two functions (plus others, imagine this is a minimal interface): bind and return.

You can put anything in the box. It's TARDIS-like.

`return` replaces the thing that was in the box.

`bind` replaces the box.

Most monads have a number of functions that can only be executed over boxes. This is because the boxes have special metadata (for example, someone has been scribbling state underneath the box). The 'get' function from the State monad just tells you to read the gibberish on the box instead of unpacking the box. The 'set' function scribbles more stuff on the bottom of the box.

Useful monads then provide a function to put things into the box, work with the box and then throw the box away (or inspect it by itself). These are the functions called 'runWhatever' for example 'runState', which lets you put an apple in the box, put a shipping label onto the box and then eventually separate the apple and the shipping label into each hand, throwing the box in the bin.

You can put anything in a box. Even more boxes. And when you're inside the box, you can't really tell how deep you are. If you're in the bottom box you can't actually see that you're in 20 layers of boxes, and this is why Monads are so powerful for creating libraries and frameworks.
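A sketch of the box analogy using the standard State monad from the mtl package that ships with GHC (mtl calls the 'set' function put; the name shipApple is made up for the example). `get` reads the scribbling on the box, `put` scribbles new state, and `runState` hands you the contents and the label separately, throwing the box away:

```haskell
import Control.Monad.State

-- The String is the thing in the box; the Int state is the
-- "gibberish scribbled underneath the box".
shipApple :: State Int String
shipApple = do
  label <- get                      -- read what's written on the box
  put (label + 1)                   -- scribble the next label number
  return ("apple #" ++ show label)  -- replace the thing in the box

-- runState shipApple 7 unpacks everything: ("apple #7", 8)
```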

LeanderK 2 days ago 2 replies      
To everyone new to Haskell and confused: don't judge Haskell based on these definitions. They appear strange and crazy, but when you actually do stuff most of them turn out to be quite intuitive.

My advice is to ignore these things. Don't read a million monad tutorials. Just play around and code something; you don't have to understand the monad definition before you can use the do-notation. Try to ignore them. After a while you get an intuition and then the definitions will make sense.
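For example, do-notation over Maybe reads almost like imperative code even before the definitions click (a sketch; addHeads and safeHead are made-up names):

```haskell
-- Looks like straight-line code, but a Nothing at any step
-- makes the whole block Nothing.
addHeads :: [Int] -> [Int] -> Maybe Int
addHeads xs ys = do
  x <- safeHead xs
  y <- safeHead ys
  return (x + y)
  where
    safeHead []      = Nothing
    safeHead (h : _) = Just h
```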

mpfundstein 2 days ago 1 reply      
IMO the most accessible resource for learning functional concepts like Functors, Applicatives and Monads is the free online book: https://github.com/MostlyAdequate/mostly-adequate-guide

It uses Javascript to explain everything from scratch. Pure-functions, currying, composition, functors, monads, applicatives and so on.

It's free, so check it out. Reading it and understanding the concepts completely changed my whole coding style in the last couple of months. I hope functional JavaScript becomes more mainstream and we will soon call this stuff 'standard'. It's just too convenient to stack a Future on top of a Reader and get dependency injection + async function handling without any boilerplate code.

P.S. Since ES6, JavaScript is wonderful. Functional code often really looks like Lisp. Pity that we don't have macros (yet - and actually there is a way: sweet.js).

P.P.S. If DrBoolean got you hooked, you might want to check Ramda and fantasy-land. The former is a more functionally minded underscore/lodash, the latter a specification (and community) for algebraic data structures in JavaScript to which a lot of new libraries adhere.

rawicki 3 days ago 0 replies      
I see traces of OP getting to understand the joke "A monad is just a monoid in the category of endofunctors, what's the problem?" :)
iamwil 3 days ago 3 replies      
It'd help beginners if there was a sentence explaining what fmap is.
Cybiote 3 days ago 0 replies      
This is a good start and might be useful to other beginners. From the vantage point of someone who already knows what the words mean, I can fill in the gaps in your definitions but unfortunately, a beginner might not be so able.

Here are some suggestions on how you might close some of those gaps a bit (I think allowing yourself to go over a sentence here and there should be ok).

You use fmap more than once without having defined it.

Currying: You need to define partial application.

Map and filter, I'd use collection instead of list. There is more nuance but that is good enough at this level.

Morphisms generalize the concept of function. They can be thought of as capturing structure-preserving maps (structure such as a set equipped with + and 0), which is something analogies also try to do.

lazy evaluation could do with mentioning thunk, then you'd have to define what a thunk is, of course.

Fold: Folds are a lot more interesting and it's not clear what you mean by your given definition. I suggest defining it in terms of structural recursion.

Category: It's worth defining what objects mean to a category. As well, explicitly mentioning the laws of identity, composition and associativity rather than just the nebulous wording of configuration would be beneficial.

Functor: A more useful foundation is in thinking of functors as morphisms between categories.

Types: There is much more to types than this. Thinking of types in terms of sets is wrong, but in the correct direction. Better yet, think of them as propositions.

Type Classes: Since you mention parametric polymorphism, you should also mention ad-hoc polymorphism and how type classes and interfaces are examples.

algebraic data types: There's a lot more to algebraic data types than this. After defining sum types and product types elsewhere, you can talk about why such types are called algebraic.

parametric polymorphism: what is a type variable?

monoid: Monoids also need an identity element, and giving examples is always useful: natural numbers: +,0 or strings: +,"". One reason monoids are of special interest to computing is that they possess associativity, which is useful when parallelizing.
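To illustrate the parallelizing point with the numbers example (a sketch; Sum is the standard wrapper from Data.Monoid giving numbers the +,0 monoid):

```haskell
import Data.Monoid (Sum(..))

-- Fold the whole list at once...
total :: Int
total = getSum (foldMap Sum [1 .. 10])

-- ...or split it into chunks, reduce each independently, and
-- combine: associativity guarantees the same answer.
splitTotal :: Int
splitTotal = getSum (foldMap Sum [1 .. 5] <> foldMap Sum [6 .. 10])
```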

init0 3 days ago 0 replies      
Jargon from the functional programming world in simple terms! http://git.io/fp-jargons
bonoetmalo 3 days ago 1 reply      
We should probably add a sentence about not using fixed margins that will make the text body 1/5th the width of my screen.
edem 3 days ago 0 replies      
This is all useless for someone without haskell experience.
hota_mazi 3 days ago 0 replies      
> A monad is composed of three functions

Actually just two (bind and return). And three laws which are typically not captured in the type system and which must therefore be tested separately.
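Those three laws, spot-checked here for Maybe at concrete values (a sketch, not a proof; f and g are arbitrary example functions):

```haskell
f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = if x > 0 then Just (x * 2) else Nothing

lawsHold :: Bool
lawsHold =
     (return 3 >>= f)       == f 3                            -- left identity
  && (Just 3 >>= return)    == Just 3                         -- right identity
  && ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g)) -- associativity
```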

ncphillips 2 days ago 0 replies      
These sentences are nice, but I feel like more definitions and some re-ordering could make the document as a whole more meaningful.

For example, Lift is defined way before Functor, and fmap is never defined so I had no idea what Lift was about despite that nice concise sentence.

strictfp 3 days ago 0 replies      
A monad is a wrapper type designed to alter the semantics of chains of operations performed on its wrapped values.
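A sketch of that with Maybe, whose "altered semantics" is short-circuiting (safeDiv is a made-up example function):

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The wrapper changes what chaining means: one Nothing anywhere
-- and the rest of the chain is skipped.
ok, bad :: Maybe Int
ok  = safeDiv 100 5 >>= safeDiv 60   -- Just 3  (100/5 = 20, then 60/20)
bad = safeDiv 100 0 >>= safeDiv 60   -- Nothing (second step never runs)
```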
babbeloski 2 days ago 0 replies      
Question to JS people using Redux. Are the middlewares used to control side effects in actions considered monads? The action returns a description of the side effect, and the middleware handles the actual doing, leaving the programmer to focus on the pure aspects.
burticlies 3 days ago 0 replies      
One simplistic but useful analogy I've found is that monads are a way to control program flow with functions. A monad is to functional programming what if/else is to imperative.

It may not cover all the nitty gritty about what is and isn't a monad. But it gets you a long way to understanding why you might use them.
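A sketch of the if/else analogy with Either, where every bind carries an implicit error branch (parseAge and classify are invented example names):

```haskell
-- Each step either continues (Right) or takes the "else" branch
-- (Left); the Either monad threads that branching for us.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] | n >= 0    -> Right n
            | otherwise -> Left "age must be non-negative"
  _                     -> Left ("not a number: " ++ s)

classify :: String -> Either String String
classify s = do
  n <- parseAge s   -- the hidden if/else: a Left aborts here
  return (if n >= 18 then "adult" else "minor")
```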

csneeky 3 days ago 0 replies      
What an excellent distillation!

It isn't uncommon to see someone with a fragile ego explain this stuff in a way that is needlessly complex and full of jargon that scares folks off.

Thanks for the great work here. We need more of this kind of thing in the FP world!

mbfg 2 days ago 1 reply      
There is an adage that if you see a lot of advertising for a medicine (think hair regrowth formulas, or pills for reflux disease), you know that none of them are any good.

This page makes me wonder about monads in the same way.

jpt4 3 days ago 1 reply      
Article appears to be in flux; definition of lazy evaluation changed.
leshow 2 days ago 0 replies      
I'd modify the monoid line to include associativity and identity:

A monoid is a type with an associative binary operation and an identity element

dmead 3 days ago 3 replies      
What's the third function for a monad? bind, return, and...?
a_imho 3 days ago 0 replies      
The problem when starting out with Haskell is that you can't google type errors.
bogomipz 3 days ago 0 replies      
This is brilliant, thanks for sharing.
nickpsecurity 3 days ago 5 replies      
Hmm. They certainly have a naming and explanation problem in Haskell land. Some impressions from a few of these better explanations.

"A monad is composed of three functions and encodes control flow which allows pure functions to be strung together."

Gibberish compared to the claim that monads just execute functions in a specified order. Aka an imperative function or procedure, by one definition I've seen a lot. Of course, that monad definition might be wrong, too.

"A recursive function is a function that calls itself inside its own definition."

That's a recursive definition lol. Must have been a joke.

"A monad transformer allows you to stack more than one monad for use in a function."

We've had composition of procedures for a long time. Perhaps Haskell could've called it a MonadComposer?

"Lift is an operation on a functor that uses fmap to operate on the data contained in the functor."

fmap-apply() or something like that?

"Optics(lens and prisms) allow you to get and set data in a data type."

Getters and setters. My early OOP/C++ books are finally more intuitive than something.

"Map applies a function to every element of a list."

foreach(function(), list)

"A predicate is a function that returns true or false."

A boolean function.

"Filter applies a predicate to a list and returns only elements which return true."

Syntactic sugar for a foreach(function, list()) where the function on each member is an If (Conditional) is TRUE Then AddElementToNewListThatsReturned(). Yeah, even the description of imperative version is getting long. This might be a productivity boost.
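The foreach translations above can be written out; a small Python sketch (the list and names are mine):

```python
xs = [1, 2, 3, 4, 5]

# "Map applies a function to every element of a list."
squared = list(map(lambda x: x * x, xs))

# "A predicate is a function that returns true or false."
is_even = lambda x: x % 2 == 0

# "Filter applies a predicate to a list and returns only
# elements which return true."
evens = list(filter(is_even, xs))

# The imperative foreach version the comment describes:
evens_loop = []
for x in xs:
    if is_even(x):
        evens_loop.append(x)

print(squared)  # [1, 4, 9, 16, 25]
print(evens)    # [2, 4]
assert evens == evens_loop
```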

"A morphism is a transformation from any object to any other object."

A cast from one object to another? Or one with an actual conversion function and/or check? The functional name seems more accurate, though.

"Algebraic data types are a method to describe the structure of types."

Ahh, they're just structs. Wait, what is a type exactly? And therefore what is an abstract... oh darn...

"Free monads allow the transformation of functors to monads."

A functor is an object that can be fmaped over. We covered map. Maybe the same. A monad is either an ordering of functions or something composed of three functions and encodes control flow composed of pure functions. Free monads are apparently an unknown thing that can transform objects that can be fmaped over into something composed of three functions with encoded control flow of composed, pure functions. I heard a lot of good Masters and Ph.D. proposals before this one. This is good, though. Especially quantifying the unknown aspects with a lot of NSF-funded R&D.

"A lambda is an unnamed function."

"Types are an inherent characteristic of every Haskell expression."

"Currying uses partial application to return a function until all arguments are filled."

"A category is a collection of objects, morphisms, and the configuration of the morphisms."

Ok, I just had fun with that one. The author did well on a lot of them. I'm just going to leave these here as quotes to add to Free Monads in... The Advanced Course: Haskell in Two or More Sentences. They provide anywhere from no information at all (to the uninitiated) to extra confusion inspiring taking or avoiding Haskell courses. :)
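The currying quote in the list above ("partial application to return a function until all arguments are filled") is one of the few that translates directly; a Python sketch (function names are my own):

```python
from functools import partial

def add3(a, b, c):
    return a + b + c

# Partial application: supply arguments one at a time; each step
# returns a new function until all arguments are filled.
step1 = partial(add3, 1)
step2 = partial(step1, 2)
print(step2(3))  # 6

# A hand-rolled curried version of the same function:
curried = lambda a: lambda b: lambda c: a + b + c
print(curried(1)(2)(3))  # 6
```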

AsciiMath An easy-to-write markup language for mathematics asciimath.org
359 points by mintplant  2 days ago   108 comments top 30
jwmerrill 2 days ago 5 replies      
To everyone who says "why do we need this when we have LaTeX?" I ask the question "why do we need Markdown when we have HTML?"

The nice thing about Markdown is that it's quite legible in its source form, which makes it less distracting to edit. Same deal with AsciiMath and LaTeX: AsciiMath is more legible in its source form which means that it has lower overhead during editing.

One of these is more legible than the other:

 (f'(x^2+y^2)^2)/(g'(x^2+y^2)^2)

 \frac{f'\left(x^2+y^2\right)^2}{g'\left(x^2+y^2\right)^2}
In my experience, most people who learn LaTeX don't do so until sometime around the middle or end of their undergrad career (certainly in Physics--maybe mathematicians learn it sooner). Earlier than that, people struggle with junk like the Microsoft equation editor. No big deal?

techwizrd 2 days ago 2 replies      
I don't know about other math departments, but most of the students and professors I knew during my math degree knew LaTeX.

From my cursory glance over the page, this isn't much simpler than LaTeX and it mostly just reduces a number of backslashes. It doesn't save me much time when typesetting equations. Nowadays, I mostly type LaTeX for MathJax or Jupyter notebook. Adding asciimath to Jupyter seems to be on the backlog[0], and it's dependent on CommonMark coming up with an extension system.

0: https://github.com/jupyter/notebook/issues/1918

tarjei 2 days ago 1 reply      
AsciiMath has one large benefit over LaTeX: it fits how you would write math in an email.

AsciiMath is perfect for users who do not know LaTeX (or code, for that matter) but need to use mathematical notation on a daily basis.

I applaud that AsciiMath has resurfaced. I've used it in combination with KaTeX a couple of times with great results.

jostylr 2 days ago 1 reply      
This is about a decade old, from before MathJax and back when its predecessor, jsMath, was still pretty new. It was targeting students the most, not those who use LaTeX professionally. I used it to create TiddlyWiki notebooks for my students and it was something they actually used!

The goal was also about being translated into MathML. LaTeX is not necessarily concerned with mathematical sense while MathML (sometimes) is. I think this was also a motivation.

And, quite frankly, replacing \frac{a}{b} with a/b is a huge win for ease of writing basic math.

TheRealPomax 2 days ago 3 replies      
I'm curious who the audience is for this. If it's people who actually care about maths, then this doesn't have any real value, because they already know LaTeX and will most likely appreciate the higher precision that offers (I personally fall in that category). If it's people who normally don't really need to write mathematics, then for the few times they need to, LaTeX might still make more sense due to convenient quickly-googled online LaTeX creators like https://www.codecogs.com/latex/eqneditor.php

If there's a demographic between those two groups, then I might simply have a blind spot, but... whose problem does this solve, and what is that problem? Because if you just say "LaTeX is too much effort," the immediate counter-question is "for whom, exactly?" It won't be people who already use LaTeX or need reliable maths typesetting on a daily basis, and it probably isn't people who need to use maths maybe a handful of times a year. So who's left, and what problem do they have where AsciiMath makes life easier?

harmonium1729 2 days ago 0 replies      
Despite knowing LaTeX, this is much more intuitive for me when communicating in plaintext. It just matches how I'd write it anyway. In an email I'd always use 1/2 or (f(x+h)-f(x))/h over their LaTeX alternatives.

If, however, the goal is to more easily edit LaTeX -- especially for folks who are less confident with LaTeX -- I suspect WYSIWYG is frequently a better option. MathQuill (mathquill.com), for example, is a fantastic open-source WYSIWYG editor for LaTeX.

Disclosure: we use MathQuill heavily at desmos.com, where I work, and have contributed to its development.

Aardwolf 2 days ago 0 replies      
This is pretty nice and intuitive! What is odd is how you don't need spaces between string identifiers

intintint does the same as int int int, 3 integrals

del becomes a del symbol, delt becomes a del symbol plus a t, delta becomes a delta symbol

rhoint: will it become rh + oint (circular integral), or rho + int? It happens to become rho + int here, but does it specify that in its specification?

deltau: will it become del + tau, or delta + u? It happens to become delta + u here, the opposite of the rhoint case above, where it chose to make both parts rendered symbols.

So the parsing rules are inconsistent; simply requiring spaces between textual identifiers would make it more logical :)

Also, what are now => and lArr could make more sense as ==> and <==. It's also sad that <- or <-- doesn't work for a left arrow.
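For what it's worth, every case listed above is consistent with a greedy longest-prefix ("maximal munch") tokenizer; whether ASCIIMathML actually parses this way I can't confirm, but a sketch with a small, hypothetical symbol table reproduces the observed behavior:

```python
# Greedy longest-prefix ("maximal munch") tokenizer sketch.
# SYMBOLS is a small hypothetical subset, not AsciiMath's real table.
SYMBOLS = {"del", "delta", "tau", "rho", "rh", "int", "oint", "t", "u"}

def tokenize(s):
    tokens, i = [], 0
    while i < len(s):
        # Try the longest candidate symbol starting at position i.
        for length in range(len(s) - i, 0, -1):
            if s[i:i + length] in SYMBOLS:
                tokens.append(s[i:i + length])
                i += length
                break
        else:
            tokens.append(s[i])  # unknown single character
            i += 1
    return tokens

print(tokenize("rhoint"))  # ['rho', 'int'] -- not 'rh' + 'oint'
print(tokenize("deltau"))  # ['delta', 'u'] -- not 'del' + 'tau'
print(tokenize("delt"))    # ['del', 't']
```

Under maximal munch, the longer match always wins at each position, which is why delta beats del + tau even though both readings are possible.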

davesque 2 days ago 0 replies      
It's probably worth somebody investigating a more short-hand notation for this kind of task. However, I feel compelled to say that I've never found the syntax which is commonly used to typeset equations with LaTeX to be all that complicated. When I was first learning it, I remember repeatedly thinking to myself, "This is it? This really isn't so bad!" Furthermore, the thing I like about LaTeX is that the syntax is very extensible. You can easily add more directives or macros and there are really only a few syntactic constructs that you can use to represent them. If I'm not mistaken, AsciiMath's approach requires that more specialized syntax would be needed when adding more features.
andrepd 2 days ago 0 replies      
What about this is so much different than LaTeX? It seems to have the same basic syntax but the commands don't start with a backslash. Also the symbol list seems severely limited.
lilgreenland 2 days ago 0 replies      
After using MathJax to render LaTeX on my website, I switched to KaTeX and saw a dramatic decrease in load times. I hope that AsciiMath doesn't also suffer from the same speed issues as MathJax.


runarberg 2 days ago 0 replies      
One of my earlier programming experiences was writing a more expressive alternative to AsciiMath [1]. I learned later that this was also the first compiler I ever wrote. It fixes some of the shortcomings of AsciiMath: you should never have to resort to LaTeX-like syntax, you can enter Unicode characters directly, and it has a nice mapping to MathML.

1: https://runarberg.github.io/ascii2mathml/

devereaux 2 days ago 3 replies      
That's nice, but we are in 2017. It may be better to support unicode. I mean I prefer to write things like:

, , =, ...

a + b ...

What I think is needed are generic 2d composition diacritics for unicode, to have text above/below/to the upper left/UR/LL/LR angle -- I mean, some more generic version to write things like =, with composition characters instead of the dedicated numbers, or letters.

I don't like LaTeX because I want WYSIWYG, which is what Unicode is for. Even in the body of an email. Even in a reply on HN.

lenkite 2 days ago 0 replies      
AsciiMath is more readable than Latex for just about everyone except perhaps professional mathematicians. Simplicity versus power.
throwaway7645 2 days ago 0 replies      
Not that this and LaTeX aren't great, but I wonder if there is a more outside-the-box solution. APL can represent mathematics very well, using Iverson notation as its design, and it is executable to boot. I haven't spent a ton of time with it, so I'm not sure if I could read complex equations as easily with it once suitably trained. Other benefits of APL's notation are that there is no order of operations and all you need is the character set, which I would guess is really easy to deal with. If it hasn't become popular after half a century, perhaps there really is something to the critical mass of our current notation.
a3_nm 2 days ago 1 reply      
This looks nice, with a much more legible syntax than LaTeX. I'd love to use this, e.g., on my blog. The reasons why I won't:

- No server-side rendering. I don't want to burden my reader's browser with Javascript. (With MathJax, you can do it server-side, I explained how here: https://a3nm.net/blog/selfhost_mathjax.html)

- The project looks dead: https://github.com/asciimath/asciimathml/pulse/monthly

slaymaker1907 2 days ago 2 replies      
Thank you! I've been wanting to do this for a while for myself after becoming fed up with the verbosity of LaTeX. My strategy has been a little different in that I've been working on plugging into equations using a Pandoc filter.

Instead of rolling my own or hacking into SymPy, I'll use asciimath.

thyselius 2 days ago 3 replies      
I would love to have the opposite: getting the code from writing maths as it is printed. Has that been made?
stu_douglas 2 days ago 0 replies      
I actually wrote a little compiler that converts AsciiMath to LaTeX for a course in school.

Hooked up the executable to an Automator service so I could highlight some AsciiMath text and replace it with LaTeX from the right-click menu. Much faster for writing math notes in LaTeX!

If anyone's interested, the project's at https://github.com/studouglas/AsciiMathToLatex. Haven't touched it since I made it, so don't judge too hard :p

polm23 2 days ago 0 replies      
Anyone remember eqn?


I've enjoyed the part of this interview with Lorinda Cherry, one of its creators, talking about it in comparison with TeX (incorrectly transcribed as "tech").


sameera_sy 2 days ago 0 replies      
Everything is about how quickly we get used to it, though! It takes a little time to get used to LaTeX, but this is definitely something worth trying. The word syntax especially seems much more effective here. Thanks! The website http://www.mathifyit.com/ helps in getting LaTeX syntax from plain English. Seems like something I'll use!
krick 2 days ago 0 replies      
This is fabulous. It seems crazy to me, that some in this thread are like "meh, no big deal, LaTeX is fine".

Except, I guess it would be better if that could be compiled to LaTeX instead of rendering it directly. LaTeX is still de-facto standard, and surely there are situations when it would be more powerful. So this mid-layer still would be useful, I guess. But otherwise, I would gladly write everything I need in Markdown+AsciiMath instead of pure LaTeX.

wodenokoto 2 days ago 1 reply      
What are the benefits of this over just using latex with mathjax?
msimpson 2 days ago 0 replies      
Comparison of ASCIIMathML, PHPMathPublisher, MathJax, KaTeX, MathTeX


murbard2 1 day ago 0 replies      
This is really neat. Your table doesn't mention that => can make a double arrow, even though it does. Also it would be nice for the dx in integrals to be \mathrm{d}x.
dbranes 2 days ago 1 reply      
Great. Would love to have some support for diagram drawing.

This seems to tackle the issue that LaTeX equations are not very readable, which is great. A related problem is that TikZ code for drawing commutative diagrams in LaTeX is basically completely incomprehensible. Looks like this project is in a good position to start tackling that problem.

jcoffland 2 days ago 0 replies      
This is great. It would make an awesome addition to Markdown. Does parsing conflict with the use of backticks in Markdown?
dylanrw 2 days ago 0 replies      
As someone who doesn't have a math background, and doesn't know latex. This is very handy as a teaching and learning tool.
sigvef 2 days ago 0 replies      
In a perfect world, everything is generated from ASCII: https://github.com/sigvef/sigvehtml .
jbmorgado 1 day ago 0 replies      
To all the commenters responding to the people who bring up LaTeX: you are getting part of the criticism wrong. It's not "Why this when we already have LaTeX?" but "Why this when LaTeX does it better?"

I wouldn't mind an easier way to write mathematics, but just from the example given, I can see right away that the typography in AsciiMath is not good.

Just look at the space (or lack of it) around the inner parentheses, for instance.

seesomesense 2 days ago 0 replies      
At my kid's school, some children used Latex for their maths. Surely, adults can grok Latex too.
Show HN: Colormind Color schemes via Generative Adversarial Networks colormind.io
396 points by Jack000  3 days ago   50 comments top 18
huula 2 days ago 1 reply      
Looks cool! I recently posted HuulaTypesetter (https://huu.la/ai/typesetter), which infers font sizes for web pages based on DOM context, and CSSRooster, which infers CSS class names based on the context. I'm really happy to see that there is more intelligence happening in the design world! The dream of replacing web UI design with AI will one day come true!

All (deep) learning models suffer from the 'garbage in, garbage out' issue, so one way to make the palettes more relevant to real-world web designs is to learn the colors used in web designs directly instead of from photographs and movies, since that data has much more noise than good web designs (with video and images stripped out, of course).

wiradikusuma 3 days ago 4 replies      
Sorry for the stupid question, but how do you "apply" color palettes? I mean, sure, they look great as colorful columns, but how do you put them into a UI?

Let's say Bootstrap: I tried using the 1st color for buttons, the 2nd color for "success" labels, etc. It ended up ugly.

whistlerbrk 2 days ago 1 reply      
I do love these color scheme generators, and perhaps this is a suggestion for improvement, the color vs. dominance of the color seems very important as well. That is, proportionally showing the palette based on how dominant that color should be in an overall end design
forlorn 3 days ago 2 replies      
I personally love color palettes generated by Coolors https://coolors.co/.
vanderZwan 3 days ago 1 reply      
If you take into account various forms of colourblindness, could this be used to create colourblind friendly palettes that are still easy on the eye for everyone?
SuperPaintMan 3 days ago 0 replies      
Damn, this has become my new tool for palette generation. Good work!

Have you considered using fine art as training data, perhaps broken down into movements/motifs? In your blog posts you mentioned that not all photographs make for well chosen palettes, this could get around that as artists should have a great understanding of theory.

As far as the pixel-level errors in your palette training, would downscaling+blur solve this?

r0muald 3 days ago 0 replies      
Really good results! Is there a quick way to copy and paste the hex values for the palette?
imh 2 days ago 1 reply      
Why GANs for this?
notheguyouthink 1 day ago 1 reply      
These look cool! Is there a way to get... more colors? E.g., I'd like to use this for some editor themes, but I need a couple of backgrounds, foregrounds, etc. Maybe 10 colors would do; hard to say.


fgGAMI 3 days ago 1 reply      
Can you get a suggested color palette by API call?
vtange 2 days ago 0 replies      
I can see a lot of utility in this. Great job!

Is this limited to generating palettes of 5 colors? Sometimes all 5 won't be needed; wouldn't that make the output a little less useful, since the algorithm considers how each of the five interacts with the others?

If I only needed two colors, I wouldn't need those two colors to work against three other colors.

chias 2 days ago 1 reply      
This looks great!

Did you have any thoughts with regards to releasing it as open source so that we could run our own instances with the various training sets? I note that you appear not to be monetizing it at all, so I figured I'd pop the question :)

auganov 3 days ago 1 reply      
Would be super cool if you could pick your own ground truth data. Or predefined sets.
mgalka 2 days ago 0 replies      
Really great idea, and very practically useful! Thanks for sharing.
kr0 3 days ago 0 replies      
I'd be nice if I could use the color locking thing on an uploaded image. I'd like to start with some base colors from a movie/game/etc and start exploring the color range with the AI.
thwd 3 days ago 1 reply      
Very cool! One thing I noticed is that if you lock all colors except one and press 'generate' repeatedly, each call produces a different color. Why is that?
nelxaretluval 2 days ago 3 replies      
maxmcorp 2 days ago 1 reply      
Using deep learning for generating color palettes is somewhat of an overkill. It is really not that hard a problem.

It is very easy to programmatically generate several color palettes from a main color.

It is also easy to select one or more main colors from a picture.
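A minimal sketch of the rule-based approach described here, generating an analogous palette from one base hue via HSL (the step size and saturation/lightness values are arbitrary choices of mine, and this is of course not what Colormind does):

```python
import colorsys

def palette_from_hue(base_hue, n=5, step=0.06):
    """Generate n analogous hex colors by stepping the hue,
    keeping lightness and saturation fixed."""
    colors = []
    for k in range(n):
        h = (base_hue + k * step) % 1.0
        # colorsys takes (hue, lightness, saturation), each in [0, 1]
        r, g, b = colorsys.hls_to_rgb(h, 0.5, 0.6)
        colors.append("#{:02x}{:02x}{:02x}".format(
            round(r * 255), round(g * 255), round(b * 255)))
    return colors

print(palette_from_hue(0.58))  # five related blue-ish colors
```

Swapping the hue-stepping rule for complementary offsets (e.g. +0.5) gives the other classic palette families; what it can't do is learn the kind of aesthetic judgments the GAN is trained for.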

Employee burnout is becoming a huge problem in the American workforce qz.com
307 points by akeck  1 day ago   334 comments top 39
jressey 22 hours ago 13 replies      
I've been working professionally in IT for about 6 years now and the concept of 'working too little' has never come up from any of my managers. I have a strict personal policy of working the exact amount of hours discussed upon hiring, and never responding to calls or email outside of those hours. For example I worked at a Fortune 50 with a 37.5 hour workweek and always stuck to that. I even counted the time I spent at lunch. Issue never raised.

I am not saying cases don't exist where workers are asked to work more than their agreed hours. I killed myself in kitchens for a $25k salary before switching to tech. Those cases are a problem.

My point is that this behavior is often self-imposed. People seem to feel a sense of importance when they overwork themselves. Simply stick to the number of hours you've agreed upon and tell your manager to discuss with their supervisor if they bring it up as a disciplinary issue. This all qualified by being in a position of demand as an engineer.

Point is, you'd be surprised with what you can 'get away with.'

theothermkn 23 hours ago 12 replies      
I wouldn't doubt overwork as a factor, but the elephant in the room is meaninglessness. Work, like God, is dead. Even for tech workers, the novelty has worn off, and people pretty much realize that the core feature of their jobs is their own economic exploitation.

Burnout, like all pain, may be a feature.

TobyGiacometti 23 hours ago 3 replies      
While companies should definitely be doing something about this, at the end of the day it is our responsibility to look after ourselves. Too many people tolerate this type of treatment out of fear. I understand that this is easier said than done, but I do not think the situation will change much unless people start standing up for themselves.

Remember the number 1 regret Bronnie Ware observed while caring for people in the last 12 weeks of their lives: I wish I'd had the courage to live a life true to myself, not the life others expected of me.

6stringmerc 21 hours ago 2 replies      
None of this is surprising if three elements are considered:

1. Productivity has soared

2. Wages have stagnated / wealth gap has widened significantly

3. US Corporate Culture is currently rife with an attitude "Let the Boomers Retire, we have a Hiring Freeze"

There are too few people doing the work of too many, which chokes the upward mobility of the youth, increases the wealth gap between Working and Investing class citizens, and essentially is masochism in the "modern era" of US Consumerism as an economic engine.

Don't believe me? Do some math on stock buybacks 2015-2016 versus publicly announced hirings and layoffs. You'd be surprised how easily this amounts to justification for putting Greenspan and Bernanke in jail. Those guys stole from tens of millions of Americans to benefit a few hundred.

saboot 23 hours ago 3 replies      
> The Economic Policy Institute shows that productivity increased by 21.6%, yet wages grew by only 1.8% during this time period.

> Companies need to do something about this burnout crisis now because otherwise, they will pay the high price of turnover.

Hm, what on earth could they possibly do? It's a mystery shrouded in an enigma!

mti27 17 hours ago 0 replies      
Once I accumulated so much rollover vacation time I decided "I'm not working Fridays anymore" which lasted for many months. I found out later someone had complained to my manager about this, since I was unreachable. That same company had experimented with a 7:30 - 11:30 schedule on Fridays (7:30 - 5:30 Mon-Thurs) which was great: you'd miss heavy traffic coming in and get a head start on every weekend. But someone in the field complained about corporate being unreachable and they ended that too.... The problem is the "flexible thinking" people always come up against the "Bill Lumberghs" of the world and everyone is pulled down to the lowest common denominator.
krylon 21 hours ago 1 reply      
I am getting paid to work 40 hours a week, and for the most part that's what I do.

I do respond to calls and mails outside of work hours, because we are a 1.5 person IT department, and when e.g. email does not work, it is kind of a big deal. But that does not happen very often. (I used to have that colleague who literally called me every day, even when I was on vacation and sick, but he quit; the guy who replaced him is great though.)

Once a month, servers need to be updated and rebooted, and I do that, too, but I don't mind. It is kind of soothing, in its own way. ;-)

I have no problem working long hours when it is necessary. It happens, even in the best of places; but in places where it is the rule, in my experience, it's because management is too cheap to spring for a decent IT department.

And having been through a case of burnout (which, IMHO, is just a euphemism for depression), you really can't afford the amount of money it would take to make me go through that again. Or maybe you can, but you don't want to. Either way, I am happy to make a modest living working reasonable hours. My boss seems to agree, so we're cool.

(Full Disclosure: I am living and working in Germany, in case it matters.)

imchillyb 23 hours ago 1 reply      
The saying goes: "I'm being over-worked and under-paid."

When greed for profit over product viability or employee considerations is the /only/ goal of a company, this trend will /always/ be the end result.

Profit is what drives markets, but it is employees that drive companies. Or, it is employees that ruin said companies.

Businesses beware.

workerexploited 22 hours ago 1 reply      
First, the unemployment rate numbers are fudged by UNDERemployment, especially by millennials. It's a BS statistic and more people need to realize this.

Moving on, I'm a millennial that doesn't work as an engineer/developer/programmer/etc. I make less than $100,000 and I live in a major US city because that's where the jobs are.

As noted in the article, it really also comes down to wages just as--but perhaps more than--hours put in. But there's just so much more that is contributing to burnout and the inseparable turnover.

Rant incoming.

EVERY educated and skilled millennial I know like me (non-"STEM") is job hopping like crazy for that ever-so-slight raise and hope that the grass is greener on the other side. Our resumes are getting PACKED with 6-month and 1-year gigs.

Nearly every day on my LinkedIn feed I see someone leaving somewhere and getting a new job.

There are just so many things wrong with the workplace resulting in burnout and turnover today for millennials (humans):

- We're sick of being paid poorly; a dog-friendly office, free snacks, hip lighting in the lobby, standing desks, and free Friday lunch doesn't make up for poor pay

- We're often sick of overpaid-and-often-less-skilled supervisors above us and especially the even more bloated and overpaid management above them

- We're sick of positions where we have no opportunities for growth or development of skills or discovering something new

- We're sick of working with fellow millenials who give even less of a crap than us so they're just lazy and don't pull their weight until they find the next gig--and we often have to pull their weight for them

- We're sick of interviewing in-person and never hearing a word back from crap recruiting and human resources teams

- We're sick of being hired on as "freelancer" or "contract" employees so that we're denied benefits even though we dedicate 40+ hours per week to a company

pmoriarty 22 hours ago 2 replies      
I've suffered severe burnout so many times in my career, resulting in taking years off from work because I dreaded going back. I want to switch careers, but can't think of anything else I'm qualified for that I'd like to do, and it's really hard to switch careers when you're older. I envy people who can do what they love, or at least not hate, for a living.
dragonwriter 23 hours ago 1 reply      
> Companies need to do something about this burnout crisis now because otherwise, they will pay the high price of turnover.

No, because it's a tragedy of the commons. Companies who take on extra short-term costs to deal with it will lose out to companies that don't, even if, long term and overall, it's a better outcome if companies do deal with it.

The existence of things like this is pretty much the reason for government.

chollida1 23 hours ago 3 replies      
I think the below article on burnout is the best thing Marissa Mayer has ever produced!!


If you subscribe to the theory that burn out is all about resentment then it gives you a whole new set of tools to deal with it.

openforce 20 hours ago 0 replies      
Early in my career, my then really good manager taught me to say no, which was initially difficult for me. But I am really thankful for that lesson. Learning to stand your ground and say no to excess work is very important if you care about a life outside work.

People from India, like me, especially have a hard time saying no to being assigned something, whether they have no time to work on it or simply don't want to work on it. It's a cultural thing combined with the golden leash of H-1B visas. I see a lot of my colleagues from India accepting more and more work and ending up with almost no life outside of it.

jeena 22 hours ago 0 replies      
If I didn't have to work for shelter and food, I'd be an artist. I'd be a photographer [0] and a podcaster [1]. I do both already, but I feel that I never have the time to do both thoroughly and as often as I would desire. I'd need some money for travel actually, so I could photograph and interview people who are interesting but do not live where I do.

[0] https://www.flickr.com/photos/jeena/albums/72157677196990660 [1] https://jeena.net/pods

korzun 20 hours ago 0 replies      
A lot of people in the technology sector think that showing up and writing two lines of code is more than enough to deserve a six-figure salary now.

The same people will make a big deal about staying late or having to work on a weekend once in a while. The sense of entitlement is pretty mind-blowing to me.

milge 19 hours ago 1 reply      
I was laid off new years. I've been doing development for 10 years and specializing in salesforce for 7 years. I haven't found the right position yet, but I know I've been burned out for a little bit now. I've been considering low-paying metal-working jobs, but the sad reality is unemployment pays more. Unemployment runs out in June. I'm kinda indifferent whether I find work. I've started the foreclosure process on my house. This may be the perfect time to just take some time and explore the US. So while I haven't found work yet, the work I have done has put me in the mentality that that's ok. Thanks for reading.
mnm1 20 hours ago 0 replies      
Companies lie about worker burnout so as not to seem inhumane when they're well aware of the conditions they create. Short of a state or federal law, this isn't going to change. The fact that the salary loophole exists and we refuse to pay even hourly workers overtime if they make too much money (depending on the state and industry) doesn't bode well for our chances of fixing this. Do these same companies wonder why most of their employees are disengaged? Or is that still such a big "mystery" to their blind management?
jondubois 17 hours ago 0 replies      
A big problem today is that managers and executives are optimised for short-term gains at the expense of everything else (including long term outlook, ethics and even sanity).

People move up the ranks by making random, crazy one-off bets that turn out great in the short term. Nobody notices people who consistently make good long term bets.

This is compounded by the fact that people who have power these days tend to overlook failure and only consider good outcomes.

jm__87 22 hours ago 1 reply      
If you are a skilled employee working in IT and you have experienced burn out, it is likely something you have done to yourself. Do some companies have ridiculously high expectations of their employees: yes. Do you have to live up to those expectations: no. As a skilled IT worker your knowledge and experience are valuable commodities that can presumably be sold elsewhere. The reason that managers can get away with having ridiculous expectations is because their employees let them. Capitalism rewards those who can wring out the most value for the least cost. Many people will take advantage of you if you simply let them.
vogelke 19 hours ago 0 replies      
I wrote about why I like being a sysadmin after 29 years on Reddit about 2 months ago: https://www.reddit.com/r/sysadmin/comments/5omi1n/

One of the biggest things that kept me from burning out was realizing that companies (or branches of the service) are neither good nor evil, they're just big. As a result, they'll take whatever you offer and not blink an eye.

smdz 21 hours ago 0 replies      
I have been an employee, a freelancer, and have just moved to being/creating an agency. Thinking retrospectively, burning out as an employee felt much better (and safer) than burning out as an entrepreneur.

As an employee - I always loved pressure times, but then retrospectively disliked "performing under pressure" - why? When I did more work, my manager(s) did not say "you worked so hard and stayed up so late". There was a casual "Thanks". But when there was no work, it was suddenly my fault: "You don't work hard to find work and aren't staying the full 8 hours". And just one such bad incident was enough to have my quarterly rating degraded, despite multiple other good incidents.

As a freelancer - I thought it would be easy - but it wasn't. Of course it's not because of clients' demands. When I worked hourly, every hour counted and paid. I realized that as an employee I had worked for peanuts and sometimes for free. I could now put in the same effort and get paid hourly. If I was getting a predetermined price, I worked even longer - because it's easier to work in a project trance and reduce task-switching inefficiencies. I worked long hours and I was trapped. It was just a golden handcuff.

As an agency - The pressure is on me to grow it. Marketing, managing, hiring and sometimes coding and troubleshooting issues and much more draining is troubleshooting team issues. My ambitions are now bigger than they ever were. Even if I am on a not-so-frequent vacation, I cannot stop thinking about work - "after all its my biz now, if I don't think who will" - I keep brainwashing myself with that. Most of my leisure weekends are combined with some sort of low-pressure work.

The answer to killing burnout is not in the law - but in society. Society today celebrates "entrepreneurship, grinding and hard work" for material wealth. We celebrate the next Facebook entrepreneur, but we don't celebrate social entrepreneurship. Everybody wants more, more and more material stuff (myself included). We are being brainwashed to want more than what we need. If you look around, there are many people working very hard just to make a decent living. They do valuable work too. As an employee I may get paid 5 times more because I create business value - while they create less business value and arguably add higher social value.

BrandonY 15 hours ago 0 replies      
That photo's position, overhead and looking down a long hallway of cubes, with some meaningless but chipper corporate slogans as well as some very serious business-looking stuff, reminds me of the opening office shot of the game Stardew Valley: http://www11.onrpg.com/wp-content/uploads/2016/03/Stardew-Va...

That game's protagonist, perhaps not coincidentally, burns out on their office job and decides to go become a farmer.

terminallyunix 21 hours ago 1 reply      
Next year will make 20 years in IT for me, all of it as a sysadmin. Back in Virginia (Silicon Valley East), I made great money, but here in Texas, I make a pittance.

I'm in my mid 40s and have been looking to get into something else, but building on my existing skills. No one is even looking at me.

I'm toying with the idea of maybe going it alone. Start a small IT consultancy. Not sure what angle to look at this from. I'm not trying to put out a "woe is me" here, but rather appeal to the others in here who are toying with the idea of maybe going it alone in some capacity.

I've put out literally tons of resumes/CVs in the last couple of years and nary an interview. It's not like I don't have skills, but it seems that employers now want sysadmins to also be programmers and network engineers and coffee monkeys all at the same time.

I've also entertained the notion of getting out of IT altogether, but it's all I know. A guy I know bought a small cleaning company and he now cleans houses for the wealthy at 150-200 a house x 4 houses a day. He splits this with one other person. Not quite sure. But in my mid 40s, I don't think my body could handle a purely physical job.

snarf21 19 hours ago 0 replies      
The biggest problem is American corporate work is mostly busyness and not enough business. Too many layers and too many people who spend all their time trying to justify their position and not adding value.
burntoffice 21 hours ago 0 replies      
During the two years I spent in Corporate America, any given day was like living an episode of The Office or Office Space.

People burnt out by fire drills coming from above, scared to use PTO as it was "bad optics", not receiving credit or appreciation, etc.

WORTH REMEMBERING: By showing up daily, an employee perpetually recommits to the job. It is entirely up to each person and their risk capacity. Saw a lot of "stuck" people who didn't like their jobs, but also wouldn't venture out to change that or actually tap into their true potential.

deletia 20 hours ago 0 replies      
Employee burnout happens because the majority of businesses today cling to a 20th century, mass-production-designed work model (9am-5pm workday, ass-in-seat productivity measure) while employees are forced to squeeze their lives around a corporation for "security".

I recently wrote a post which outlines this idea in a slightly broader context (those interested can visit https://allidina.me, feedback & constructive criticism welcome).


awinter-py 19 hours ago 1 reply      
Investing $$ in employee happiness and retention is a tricky signaling problem.

If you need the best people, you probably should care about retention.

What if, on the other hand, your managers suck so much that good and bad workers perform at the same level? In this case signaling to your people you don't give a crap about them puts you in a powerful negotiating position.

d--b 20 hours ago 0 replies      
I am happy to say that at the hedge fund I am working at, work hours have gone down a lot.

When I joined in 2010, I would start at 6.45am and finish after 9pm everyday. There was a lot of stuff to do.

But once the platform improved and the workload reduced, so did the working hours. I am now doing 9 to 6 approximately, which is pretty great.

I also removed emails on my smartphone. Best move ever.

Push for it. Working less is worth it.

2sk21 19 hours ago 0 replies      
I often joke that I have to take vacations to get real work done. My company, like many, does not allow vacation to be rolled over, so I usually wind up taking off the last two weeks of December. This is truly when I get my thinking done - I can spend the entire day from morning to night looking at code without any interruptions.
rux 22 hours ago 1 reply      
It's from a while back, but here Treehouse describe how they operate on a 9am-6pm four-day week.


I remember talking to Ryan Carson (the guy who put this into place) at a conference and he said that the results from doing it were overwhelmingly positive.

jakozaur 22 hours ago 0 replies      
Not enough holidays?

I know a lot of people in the USA who take too few of them, or none at all, for quite a while, until they burn out.


rconti 21 hours ago 1 reply      
"67% said that they think their employees have a balanced life, yet about half of employees disagree."

owww, my head!

st3v3r 22 hours ago 1 reply      
Not surprising. People are tired of working long, useless hours for nothing other than to make someone else rich. Workers haven't seen meaningful pay increases in a long, long time. Most haven't seen a vacation in years. No wonder they're just tired of the whole thing.
kafkaesq 12 hours ago 0 replies      
...and yet those cubes are possibly sumptuous compared to some I've been asked to work in.
danielschonfeld 20 hours ago 0 replies      
Allow me to put this here, seems appropriate: http://slots.info/love-hate-map/#/map
encoderer 20 hours ago 0 replies      
I question the claim that Americans are working harder than ever before. I think we have higher wage productivity than ever before, and that is not the same.
Animats 20 hours ago 0 replies      
Unions. The people who brought you the weekend.
dsfyu404ed 17 hours ago 1 reply      
Everyone comes into this thread to complain about how bad life sucks because you don't get two months of vacation time just for existing and can't use your sick days on a paper cut, but most of these same people will turn right around and talk about how great their employer is if the article is a positive one.
Florin_Andrei 21 hours ago 1 reply      
> Vacations allow employees to regenerate so they can elevate their productivity upon return.

Sounds like 1975.

VPNs are not the solution to a policy problem asininetech.com
272 points by staticsafe  13 hours ago   185 comments top 32
nikcub 12 hours ago 8 replies      
There are a few schools of thought on where responsibility should lie in protecting user privacy. The first that it is a role of government and policy - in the same way the government sets standards for automobile and road safety they can set and enforce policies for user privacy.

The second school of thought is individual responsibility. Users should take steps to protect their own privacy on a case-by-case basis, in the same way they look after their own home security or personal safety.

The third would be a hybrid approach - that there is a role for the government to play in setting up a universal minimum level of privacy protection while users also have a role to play in their own protection. This is most akin to how healthcare works - i'm guaranteed treatment in an emergency room but I also might choose to keep myself healthy with diet, exercise etc.

I personally believe in user responsibility for personal privacy and security, where you can't and shouldn't depend on policy to protect you and that all users should be aware of the issues and actively educated on how to protect themselves. For a few reasons:

1. Policy is not universal. Some countries may have extensive and rigorous user privacy protections, but that doesn't apply to users everywhere. While user privacy protections are strong in Europe, and consumers have access to recourse if their privacy rights have been violated, the same doesn't apply to the majority of internet users, most of whom are residents of a nation or jurisdiction with no strong protection or user recourse.

2. Governments are a major party in privacy violations and are conflicted, so they can't be expected to behave in the interest of users. The most recent campaigns to roll out encrypted communications and connections in apps was prompted by the US government intercepting internal Google data. The government will almost always be incentivized to lower barriers to ease intelligence gathering and in most of the world government surveillance trumps individual rights.

3. Similarly, government can't be trusted. This is the point Ed Snowden made when he argued for individual and tech solutions to privacy over government policy[0]. Snowden cites the difference in Obama's campaign promises and what he delivered[1], and this isn't unique to Obama - the FCC ISP privacy rules being blocked this week is yet another example of how easily and quickly policy can be undone, while the mass surveillance Snowden disclosed is an example of how public policy and private actions can be different.

4. Tech solutions to privacy don't imply individual responsibility. We can, and do, have tech solutions that are universal - such as the campaign to roll out encrypted communications and connections with Whisper and LetsEncrypt.

5. Policing government policy is labour intensive and difficult. It relies on privacy researchers - usually individuals - to track what companies are doing with user data. With more data being shared between companies it is even more difficult to apply individual oversight to how policies are being enforced. See Natasha Singer's reporting in the NYTimes on data brokers[2]

6. There are usually very minor enforcement penalties for companies that violate user privacy policy. The FCC tracking opt-in rules were prompted by some ISPs adding tracking headers or cookies to user traffic. AT&T and Verizon were adding tracking cookies to user traffic and it took two years to notice, and there were zero consequences for either company[3] other than the new FCC rules, which are now dead.

7. Even in the perfect world of good policy, good application of policy and good enforcement, you still have more data than ever being stolen and leaked online. You only have to look yourself up on haveibeenpwned or a similar database to find that, for a lot of people, all of their PII has already leaked[4]

It is very clear to me that technology solutions have the primary role in protecting user privacy. Policy isn't a waste of time but it can't be relied upon. The question is how user privacy protection is packaged for a mass-audience. User privacy requires an equivalent of what 'use WhatsApp, use Signal' is for user security, what 'install antivirus, don't click on attachments' used to be for user security and the growing popularity and awareness of ad blockers.

I'm not sure what that will be or what it will look like, but warning people away from VPNs probably isn't going to help. Chances are that some form of VPN connection will become part of the standard solution (along with HTTPS/encrypted comms everywhere) now that the reality of ISPs and users not sharing privacy interests is here and many are aware of it.

There's a great market opportunity here - perhaps not for VPNs as a product, but for VPN as a technology.

[0] https://www.wired.com/2016/11/despite-trump-fears-snowden-se...

[1] https://www.forbes.com/sites/thomasbrewster/2016/11/10/edwar...

[2] http://www.nytimes.com/2013/09/01/business/a-data-broker-off...

[3] https://www.techdirt.com/articles/20150115/07074929705/remem...

[4] https://haveibeenpwned.com/

jfoutz 12 hours ago 4 replies      
Lots of people seem to think the right answer is selling improved security. I disagree. It would be much more exciting to get the data coming from politicians' homes, and the homes of their staff. It would be a fantastic way to generate news. Why is senator X's household researching cancer treatment? Will they step down this year? I can't help but think military bases would google their next deployment; that's another set of huge news articles.

If you're more into the finance side of things, CXO's home clickstreams would probably be enlightening. Or hedge fund managers. Some will be fully encrypted and secure, but just the dns would be a strong signal about what companies they're researching.

That is the kind of business that will drive privacy legislation.

Goopplesoft 13 hours ago 4 replies      
A heads up: there's a really nice project called Streisand[1] which provides a multi-protocol VPN with very little effort. You can launch one on a cheap cloud provider (like DO, if their policy allows).

[1] https://github.com/jlund/streisand

FridgeSeal 13 hours ago 5 replies      
No, they're not.

The solution is getting strong, enforced laws that protect our privacy and punish those who break them.

But for the moment, with advertisers viewing themselves as god's gift to the internet, who think that all your information belongs to them simply by virtue of existing, and who will go to great lengths to acquire and store it all (in perpetuity), a solution is needed, and part of that is VPNs.

dfc 10 hours ago 1 reply      
It's strange to see the evolution of the technology versus policy debate. We started out with "the Internet views censorship as damage and routes around it." A little later we had Lessig saying "code is law." And now the refrain is "VPNs are not the solution to a policy problem."

I miss the idealism and optimism of the past. The only hopeful thing I can find in the new "quote" is that it seems that the tech world is finally aware of the need to work with policy makers and the public in addition to building new systems.

byuu 13 hours ago 8 replies      
Another thing often overlooked with VPNs is that they're just not that fast. I have a 600/40 connection, and I've tried at least six for-pay VPN providers. The fastest one I found (won't mention as my goal isn't to advertise for them) hits, at best, 100/30. And even then, only over L2TP. For whatever reason, OpenVPN is always slower on every PC I've tried this with.

And obviously, you gain a good deal of latency, especially if you use an overseas exit point.

And now we get to deal with shitty services like Netflix punishing privacy-conscious users and blocking access to paid accounts while your VPN is up.

frebord 1 hour ago 0 replies      
This whole damn thing spawns from the lack of competition with ISPs. If consumers had more than 1 or 2 options, we could choose with our money. I don't think the solution is to regulate the industry, but our privacy should certainly be protected by our fucking useless government.
libeclipse 7 hours ago 1 reply      
I understand the viewpoint of the article, but it assumes that the person waving the wand particularly cares about everyone else.

Personally, with the Investigatory Powers Bill in the UK, I will "wave the wand of a technology solution" to conserve and protect my own privacy.

Sure, if the policy was changed upstream then a lot more people would benefit than the technically inclined folks, but if there's a bug upstream we don't all sit with it and wait, we fix it locally and vendor.

sjwright 13 hours ago 3 replies      
Perhaps one solution might be to poison the data and have your router/device make spurious random DNS lookups and HTTPS connections. Ensure the list of random websites includes the top few hundred companies likely to be in the market for usage data. If enough people did this it would make the data useless.
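A minimal sketch of what that could look like (the decoy list, function name, and timing here are all made up for illustration; a realistic deployment would run on the router with hundreds of plausible domains and follow-up HTTPS connections so the noise blends in):

```python
import random
import socket
import time

# Hypothetical decoy list; a realistic one would contain hundreds of
# popular domains that data buyers would plausibly care about.
DECOY_DOMAINS = ["example.com", "example.org", "example.net"]

def emit_dns_noise(count=5, resolve=socket.gethostbyname,
                   min_delay=0.5, max_delay=3.0):
    """Fire off spurious DNS lookups at randomized intervals.

    `resolve` is injectable so the logic can be exercised without
    touching the network.
    """
    results = []
    for _ in range(count):
        host = random.choice(DECOY_DOMAINS)
        try:
            results.append((host, resolve(host)))
        except socket.gaierror:
            # even failed lookups still generate upstream DNS queries
            results.append((host, None))
        time.sleep(random.uniform(min_delay, max_delay))
    return results
```

The catch is that a motivated ISP could plausibly filter noise that doesn't mimic real browsing patterns, so the randomized timing matters as much as the domain list.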
jdoliner 12 hours ago 2 replies      
Why aren't VPNs, and more broadly encryption, a solution to this problem? "Waving the wand of a technical solution," as the post pejoratively calls it, isn't such an unreasonable thing to do with an inherently technical problem. This problem only exists because of other technical wands we waved. Why solve this problem with policy? Policy is hard to get passed, hard to keep passed and even when it is passed often times it means nothing. Remember this is the same government that contains multiple organizations surveilling your every move, not because they legally can, because they illegally can. The point is, it's foolish to count on USG to give you a right to privacy, just look at the history on this, it's not going to happen. But it's especially foolish when this is a right that you can enforce for yourself. If you actually care about your privacy use a VPN, or Tor, don't sit around waiting for the government to do it for you.
bayouborne 2 hours ago 0 replies      
Look to Comcast and TW to buy a few of the mid-tier established VPN providers, and then play both sides of the table.
guelo 13 hours ago 3 replies      
One thing I was wondering, beyond your own personal ISP, does this mean that the backbone providers, the Level 3's of the world, are going to get into selling data to advertisers? I was feeling personally ok because I use an ISP with a strong privacy pledge, but I wonder if their uplink is going to be selling my data. Though I guess it's less of a concern since the backbones don't have the complete personally identifying info that the customer ISPs have.
philip1209 13 hours ago 4 replies      
I think the bigger hole is DNS. Full-tunnel VPNs to primarily TLS-encrypted sites seems like overkill. Encrypted DNS plus an "HTTPS Everywhere" plugin should obfuscate enough info for most people without significantly affecting latency.
quantumfoam 8 hours ago 1 reply      
I'll just leave this here: https://github.com/trailofbits/algo/blob/master/README.md

I used a droplet on DigitalOcean to configure an Algo server. Very seamless setup, highly recommend. There's a $10 promo floating around: DROPLET10. You can self host too.

WhitneyLand 11 hours ago 0 replies      
What would be wrong with selling preconfigured routers to solve the problem?

The router could talk to a standard web API to get the information to configure itself. The web service behind the scenes could set up and tear down DigitalOcean droplets as necessary running Streisand. The web service IPs wouldn't be blocked because they'd only be used to periodically get configuration.

So then you buy a non technical person this router, they create an account on the configuration website and as Ron Popeil would say, set it and forget it.

andrenotgiant 13 hours ago 2 replies      
Until a better solution is found, I think the way the recent IOT botnet stuff + this ISP privacy deregulation is portrayed in the media opens the opportunity for a startup that sells a secure, smart home router + VPN subscription plan.
cottsak 6 hours ago 0 replies      
VPN providers can totally scale. They will cease to be semi-dark-web services and become first class. Services that test them will emerge, verifying the security and encryption of tunnels.

Additionally, there will be some who take an extreme view of this "zero knowledge" approach, offering all forms of payment and workarounds to prevent downstream ISPs/backhaul from tracking/identifying/classifying user traffic.

Maybe VPNs "are not the solution" but they can still do a lot of good in the meantime.

siculars 13 hours ago 0 replies      
Ya, this sucks... a lot. VPNs are a start with existing tech. I firmly believe new technology will solve this problem. Encryption everywhere. Overlay networks. New fully encrypted and anonymized DNS systems. Digital currency incentivizations. Policy helps, but in the absence of policy, technology will find a solution.
herbst 3 hours ago 1 reply      
After reading DigitalOcean recommended here for the 10th time: what makes people think that using an American company that complies with American laws and regularly gives out data is a much better option than renting a VPN in a country that still has privacy protections in place?
BatFastard 12 hours ago 3 replies      
Does anyone sell a router for the home that has a VPN built in?

So that I don't have to have every computer in my home hook into the VPN when I start it up. Just one account for my whole house?

I imagine you could setup a linux box to do that for you, but I am lazy...
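For the curious, the Linux-box version is only a few commands. This is a rough sketch, not a hardened setup: the config path, interface names, and LAN subnet are assumptions to adjust for your own network, and you'd still want persistent firewall rules and a kill switch.

```shell
# Assumes an OpenVPN client config from your VPN provider at
# /etc/openvpn/client.conf and a LAN of 192.168.1.0/24 behind this box;
# tun0 appears once the tunnel is up.

# Let the box forward traffic for other machines on the LAN
sysctl -w net.ipv4.ip_forward=1

# Bring up the VPN tunnel in the background
openvpn --config /etc/openvpn/client.conf --daemon

# NAT the whole LAN out through the tunnel instead of the raw WAN link
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o tun0 -j MASQUERADE
```

Point the LAN clients' default gateway at this box and every device in the house rides the one VPN account.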

joveian 13 hours ago 1 reply      
One nice although limited alternative to openvpn is sshuttle: https://github.com/sshuttle/sshuttle

The limitations are: no ipv6 support :(, sometimes leaks dns, and always crashes shortly after it is first started (then works fine when you start it again). There seems to be little active development.

To work around the limitations, I mostly use SOCKS (curl also supports SOCKS), plus run sshuttle to try to catch any additional traffic. For that matter, SOCKS alone would at least catch the most sensitive traffic for most people (and would make it easy to have another browser profile for watching netflix).

I get a $15/year OpenVZ account from ramnode.com, which supports VPN usage. I haven't had an issue with bandwidth (it seems to undercount quite a lot), but I don't watch netflix or otherwise use that much bandwidth.

The main issue I've had is that some websites (google, amazon, gog) will default to various other languages that I assume other people who are doing the same thing speak. Fixed by logging in to the site and they then seem to remember for a while even if you don't log in, but eventually they switch again.

The nice thing is that the remote server can be configured to just have an SSH server on port 80 (in case you ever want to use it from restrictive public wifi; I first started to do this after seeing SSL downgrade errors on public wifi) with public key authentication, so there is much less to worry about in terms of being responsible for a system open to the internet all the time. In SSH, I set:

  KexAlgorithms=curve25519-sha256@libssh.org
  HostKeyAlgorithms=ssh-ed25519-cert-v01@openssh.com,ssh-ed25519
  Ciphers=chacha20-poly1305@openssh.com
  MACs=hmac-sha2-256,hmac-sha2-512

So still not a super easy option but a somewhat easier option than OpenVPN. It would be quite easy with an automated way to set up the remote ssh server correctly.

Edit: Speed is quite good with this setup and while I haven't done extensive comparisons, it does not seem to lower the connection speed by much.
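For reference, the sshuttle invocation matching the setup described above looks something like this (the hostname is a placeholder; sshuttle needs root locally but only a normal account on the server):

```shell
# Route all IPv4 traffic, plus DNS lookups, through the remote box's
# sshd listening on port 80
sshuttle --dns -r user@vps.example.com:80 0/0
```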

pryelluw 13 hours ago 7 replies      
Ok, so which vpn providers are good?
johanneskanybal 4 hours ago 0 replies      
Not the solution perhaps but the next natural move of a cat and mouse game that predates the current policy change. It boils down to: Keep the internet lawless because there's no global entity that has my best interests at heart.
nine_k 13 hours ago 0 replies      
Technology used to trump policy, in an unstable but stubborn way. Napsters and piratebays die, but file sharing lives. It's less intense now not because of policies, but because legal ways to buy most music and videos became reasonably convenient for the mass user.

How well might connectivity limitation work? It took China immense centralization and a lot of technical effort to build the great firewall, which is not exactly impenetrable, though.

vxxzy 12 hours ago 1 reply      
At the end of the day, it is obvious that policy is the right direction to stop this infringement. However, be it noted: those who have the capability to circumvent, or ethically "get around", such encroachment have a responsibility to free those who may be entangled by that which is "freedom limiting". The argument could be had, however: is it really freedom limiting for others to know your web history? Obviously, there are second- and third-order advantages to be had when a dominant party knows of the lesser's behavior. Still a great bit to parse. As for me and my house, we will tunnel safely through VPN.
godzillabrennus 13 hours ago 1 reply      
The solution to all of this is educating the population.

VPN tech is cheaper and more likely to succeed.

chx 13 hours ago 1 reply      
I had all sorts of VPN problems over the years with various Linux desktops OS. What I do instead is that I have a proxy server with just an OpenSSH daemon on port 443 -- if there's web traffic, add sslh to taste -- and then use the SOCKS v5 proxy built into OpenSSH client and then http://darkk.net.ru/redsocks/ I might be the weird case here but I found this infinitely easier to set up than any VPN.
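The moving parts of that setup, roughly (the host, ports, and redsocks config values are illustrative, not a recommendation):

```shell
# 1. Local SOCKS v5 proxy via the OpenSSH client, through sshd on port 443
ssh -D 1080 -N -f user@proxy.example.com -p 443

# 2. redsocks relays ordinary TCP connections into that SOCKS proxy;
#    a minimal redsocks.conf section might look like:
#      redsocks { local_ip = 127.0.0.1; local_port = 12345;
#                 ip = 127.0.0.1; port = 1080; type = socks5; }
redsocks -c /etc/redsocks.conf

# 3. iptables redirects outbound web traffic into redsocks transparently
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 12345
```

The appeal over a VPN is that applications don't need to know a proxy exists; the redirect happens at the firewall layer.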
chlordane 12 hours ago 0 replies      
I'm sure you all remember this read from 6/1/2016:

The impossible task of creating a Best VPNs list today: https://arstechnica.com/security/2016/06/aiming-for-anonymit...

awqrre 11 hours ago 0 replies      
if I can buy your browsing history, I should also be able to buy your tax returns...
gshakir 12 hours ago 0 replies      
How about Apple providing a VPN as part of the device? Remember, Apple was the one that broke the telecoms' dominance of the mobile market. I wouldn't mind paying Apple for the privacy.
logicallee 4 hours ago 0 replies      
Although it would not be a solution, see my request for Google to do this posted a few hours ago:

