hacker news with inline top comments, 9 Aug 2017
Why GitHub Can't Host the Linux Kernel Community ffwll.ch
126 points by okket  2 hours ago   69 comments top 12
mi100hael 1 hour ago 1 reply      
There's also Linus's personal aversion to how GitHub implements many opinionated workflows.

> I don't do github pull requests.
>
> github throws away all the relevant information, like having even a valid email address for the person asking me to pull. The diffstat is also deficient and useless.
>
> Git comes with a nice pull-request generation module, but github instead decided to replace it with their own totally inferior version. As a result, I consider github useless for these kinds of things. It's fine for hosting, but the pull requests and the online commit editing, are just pure garbage.
>
> I've told github people about my concerns, they didn't think they mattered, so I gave up. Feel free to make a bugreport to github.

snakeanus 18 minutes ago 2 replies      
I can't really see the obsession everyone has with centralised and closed services like github. We need to start moving away from them, not move more projects to them. Mailing lists and NNTP make decentralisation quite easy; they are open standards and don't require an account with any centralised service. Why throw all of that away?
jacquesm 1 hour ago 2 replies      
I'm fine with that. Github is 'too large to fail' already, adding the Linux kernel to the pile and forcing the kernel team into Github's workflow are two big negatives. It would be great for Github but bad for everybody else.
mmagin 1 hour ago 3 replies      
I imagine after the Bitkeeper fiasco, Linus and others are disinclined to become dependent on a proprietary service.
vbernat 19 minutes ago 0 replies      
Not mentioned in the article, but work is also coordinated by maintainers with the use of patchwork. For example, for the network subsystem: http://patchwork.ozlabs.org/project/netdev/list/. This enables tracking the status of patches and not losing them.
monorepoman 16 minutes ago 0 replies      
Lost me at "And lots of people learned that monorepos are really painful, because past a certain size they just stop scaling." There are plenty of counterexamples of monorepo projects much larger than the Linux kernel.
Boothroid 3 minutes ago 0 replies      
Ugh that font is unreadable on my phone.
tomschlick 1 hour ago 0 replies      
The kernel seems better suited to something like Phabricator instead of Github. Keep Github simple and clean for our "normal" projects.
hyperion2010 57 minutes ago 0 replies      
Here is another overview of how the kernel uses git and why dropping email is simply not possible (or sensible). https://www.youtube.com/watch?v=vyenmLqJQjs
taeric 1 hour ago 0 replies      
I actually really like the MAINTAINERS file. Keeps the metadata literally in the repository and doesn't rely on an external system.
liaukovv 1 hour ago 5 replies      
This font gave me a headache. Why not write with white on white? It would be so stylish.
web007 55 minutes ago 1 reply      
> Please support pull requests and issue tracking spanning different repos of a monotree.

Issues you can already file against one or more repos and link together. It's not ideal, but it'll do the job.

Is "pr against different repos of a monotree" not what submodules let you do? Update whatever things you want in whatever repos, and pull the submodule pointer update(s) as a single change in your monotree repo.
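A rough sketch of that submodule flow (toy local repos standing in for the monotree and a member repo; the repo names and the `protocol.file.allow` override, needed on newer gits for local-path submodules, are assumptions of this example, not part of the original comment):

```shell
# Sketch: land a commit in a member repo, then record it in the
# "monotree" umbrella repo as one submodule-pointer update.
# Throwaway local repos; every name here is hypothetical.
set -e
root=$(mktemp -d)

# Member repo with an initial commit.
git init -q "$root/lib"
git -C "$root/lib" -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "lib: initial"

# Umbrella repo tracking lib as a submodule.
git init -q "$root/mono"
cd "$root/mono"
git config user.email dev@example.com
git config user.name Dev
git -c protocol.file.allow=always submodule --quiet add "$root/lib" lib
git commit -qm "mono: add lib submodule"

# New work lands in the member repo...
git -C "$root/lib" commit -q --allow-empty -m "lib: new feature"

# ...and the umbrella picks it up as a single pointer bump.
git -C lib pull -q
git add lib
git commit -qm "mono: bump lib for new feature"

# The diff is nothing but the pointer move plus the new commits.
bump=$(git diff HEAD~1 HEAD --submodule=log)
echo "$bump"
```

The final diff in the umbrella repo shows only the submodule pointer moving forward, which is the "single change in your monotree repo" described above.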

DeepMind and Blizzard Open StarCraft II as an AI Research Environment deepmind.com
335 points by nijynot  4 hours ago   152 comments top 18
qub1t 2 hours ago 6 replies      
A lot of people here seem to be underestimating the difficulty of this problem. There are several incorrect comments saying that in SC1 AIs have already been able to beat professionals - right now they are nowhere near that level.

Go is a discrete game where the game state is 100% known at all times. Starcraft is a continuous game and the game state is not 100% known at any given time.

This alone makes it a much harder problem than go. Not to mention that the game itself is more complex, in the sense that go, despite being a very hard game for humans to master, is composed of a few very simple and well-defined rules. Starcraft is much more open-ended, has many more rules, and as a result it's much harder to build a representation of game state that is conducive to effective deep learning.

I do think that eventually we will get an AI that can beat humans, but it will be a non-trivial problem to solve, and it may take some time to get there. I think a big component is not really machine learning but how to represent state at any given time, which will necessarily involve a lot of human tweaking to distill down which factors really influence winning.

JefeChulo 3 hours ago 6 replies      
"so agents must interact with the game within limits of human dexterity in terms of Actions Per Minute."

I am really glad they are limiting APM because otherwise things just get stupid.

siegecraft 4 hours ago 3 replies      
The API Blizzard is exposing is really nice. Sadly most of the advantages AI had in SC1 were just due to the fact that an automated process could micro-manage the tasks the game didn't automate for you (a lot of boring, repetitive work). SC2 got rid of a lot of that while still allowing room for innovative and overpowered tactics to be discovered (MarineKing's insane marine micro, SlayerS killing everyone with blue flame hellions, some more recent stuff I'm sure from the newest expansions). Hopefully the API lets AIs converge on optimal resource management and get to exploring new and innovative timings, transitions, army makeups, etc.
dpflan 3 hours ago 1 reply      
Related: Today I learned that a group of AI researchers has released a paper called: STARDATA: A StarCraft AI Research Dataset. According to one of the authors: "We're releasing a dataset of 65k StarCraft: Brood War games, 1.5b frames, 500m actions, 400GB of data. Check it out!"

> Article: https://arxiv.org/abs/1708.02139

> Github: https://github.com/TorchCraft/StarData

hitekker 2 hours ago 5 replies      
This seems all in good fun but I wonder if it's come too late.

Starcraft 2 is at its twilight.

The biggest leagues of South Korea have disbanded. [1] The prolific progamers who transitioned to Starcraft 2 have gone back to Broodwar. [2]

Blizzard itself has scrubbed all references to Starcraft 2 from the very home page of Starcraft. [3] Except for the twitter embed, it has only one "2" character... in the copyright statement.

My take is that the future of the Starcraft franchise will run through Remastered and whatever expansion packs follow it.

Starcraft 2 had a good run but, with the entire RTS genre stagnating [4], I don't think Blizzard wants to bet on anything less than the top horse.

[1] https://www.kotaku.com.au/2016/10/the-end-of-an-era-for-star...

[2] http://www.espn.com/esports/story/_/id/18935988/starcraft-br...

[3] http://starcraft.com

[4]http://www.pcgamer.com/the-decline-evolution-and-future-of-t... (Aside from MOBAs)

SiempreZeus 2 hours ago 2 replies      
It's a bit too bad they're having to move towards supervised learning and imitation learning.

I totally understand why they need to do that given the insane decision trees, but I was really hoping to see what the AI would learn to do without any human example, simply because it would be inhuman and interesting.

I'm really interested in particular if an unsupervised AI would use very strange building placements and permanently moving ungrouped units.

One thing that struck me in the video was the really weird mining technique in one clip, and then another clip where it blocked its own mineral line with 3 raised depots...

ktRolster 4 hours ago 5 replies      
When Watson won at Jeopardy, one of its prime advantages was its faster reaction time at pushing the buzzer. The fairness of that has already been hashed out elsewhere, but.....

We already know that computers can have superior micro and beat humans at Starcraft through that [1]. Is DeepMind going to win by giving themselves a micro advantage beyond what reasonable humans can do?

[1] https://www.youtube.com/watch?v=IKVFZ28ybQs as one example

arcanus 4 hours ago 0 replies      
I also want to see the algorithm win on unorthodox maps. Perhaps a map they have never seen before, or one where the map is the same as before but the resources have moved.

Don't tell the player or the algorithm this, and see how both react and adapt. That would tell us a great deal about the resilience of their abilities.

hacker_9 1 hour ago 0 replies      
There's something funny about a company that is actively developing bleeding edge AI technology, but who can't design a webpage that works on mobile without crashing.
arnioxux 1 hour ago 0 replies      
Are there any known arbitrary code injections for Starcraft? Like how you can use a regular controller to reprogram Super Mario World to play Pong?



Is this how we are going to accidentally let AGI loose into the world!? /s

On a more realistic note I think this will degenerate into a game of who can fuzz test for the best game breaking glitch. Think of all the programming bugs that turned into game mechanics in BW that we haven't discovered for SC2 yet: http://www.codeofhonor.com/blog/the-starcraft-path-finding-h...

daemonk 4 hours ago 2 replies      
Blizzard should put in an AI-assisted play mode where players are limited to X lines of code that can be launched with keyboard commands.
siliconc0w 1 hour ago 0 replies      
The SCAI bots I've seen are more hardcoded tactics engines than machine learning models. They're still impressive, but their logic isn't 'learned', it's hand-coded, which is a crucial difference.
Havoc 4 hours ago 1 reply      
That's surprising. I thought Bliz didn't want anyone near sc2 but approved of sc1 being used for this purpose.
naveen99 2 hours ago 2 replies      
> even strong baseline agents, such as A3C, cannot win a single game against even the easiest built-in AI.

Then, why not release code for the built in ai, and improve on it ? Or is the built in ai cheating ?

ipnon 4 hours ago 3 replies      
Any predictions for how long it will take for an AI to win against the world's best player?
toisanji 4 hours ago 1 reply      
Great that they opened it up. I'm sure reinforcement learning / deep learning will solve this. It has been a tough problem before, but honestly it doesn't seem that tough compared to all the harder AI problems.
blobbers 2 hours ago 1 reply      

Why are there not more fanboy comments?!

DefNotARogueAI 2 hours ago 0 replies      
This gives me great ideas
uBlock Origin Maintainer on Chrome vs. Firefox WebExtensions mozilla.org
52 points by nachtigall  1 hour ago   13 comments top 5
yborg 17 minutes ago 1 reply      
I found this disturbing:

"Chromium-based browsers are being infested by Instart Logic tech which works around blockers and worst, around browser privacy settings (they may start infecting Firefox eventually, but that is not happening now)."

From his linked post:

"Instart Logic will detect when the developer console opens, and cleanup everything then to hide what it does"

Is this implemented via a CDN-delivered script? Why would Chromium-based browsers be more susceptible?

AdmiralAsshat 25 minutes ago 0 replies      
> It baffles me that some people thinks Firefox is becoming a Chrome clone, its just not the case, its just plain silly to make such statement.

That's probably the single most reassuring statement about Firefox that I've heard in some time, coming from a serious dev who makes a popular cross-platform addon for both Firefox and Chrome.

penpapersw 40 minutes ago 1 reply      
Huh. These actually sound like good arguments to switch to Firefox, arguments I've never heard before until now.
Aissen 27 minutes ago 2 replies      
> I am not aware of any anti-fingerprinting initiative taken up with Chromium

Brave is Chromium-based and has anti-fingerprinting tech (which it was the first to include IIRC): https://github.com/brave/browser-laptop/wiki/Fingerprinting-...

It also works on Brave for Android.

adrianlmm 28 minutes ago 0 replies      
I use Edge and it looks like it's blocking everything w/o problems.
'Things we create tell people who we are': Designing Zachtronics' TIS-100 (2015) gamasutra.com
57 points by bantunes  2 hours ago   7 comments top 3
spjwebster 33 minutes ago 1 reply      
I recently lost more of my life than I care to admit to Zachtronics' more graphically pleasing follow-up Shenzhen I/O [1] that I think I first heard about from HN [2].

The backstory that unfolds through your fake inbox actually gives the devilishly tricky game some direction, and the post-solution histograms showing just how many people solved the same puzzle cheaper, with fewer instructions and with lower power consumption than you tug at your ego and keep you obsessing over the same puzzle long after you've solved it. There's even a fun Solitaire variation buried in there for good measure, which was evidently so popular they also released it as a standalone game [3].

1: http://www.zachtronics.com/shenzhen-io/

2: https://news.ycombinator.com/item?id=12660253

3: http://store.steampowered.com/app/570490/SHENZHEN_SOLITAIRE/

asciimo 1 hour ago 1 reply      
The first time I heard about this game someone mentioned another assembly language game called Human Resource Machine. I was so captivated by the design of HRM (I'm a fan of the publisher, Tomorrow Corporation, thanks to Little Inferno), I forgot all about TIS-100. I'm happy to be reminded.

Seems that the Internet likes to compare these two games. Here they are compared on Slant: https://www.slant.co/versus/6230/6231/~tis-100_vs_human-reso...

bachaco 31 minutes ago 0 replies      
Yep, agree. My record so far is creating a bunch of crap.
Principles of Sharding for Relational Databases citusdata.com
153 points by tikhon  4 hours ago   32 comments top 3
AznHisoka 3 hours ago 7 replies      
I find the "you don't want to shard" camp quite annoying. Of course I don't want to shard! Who does?! It adds complexity, both in implementation and in operations.

But if you've got 5 TB of data that needs to live on SSDs, then please tell me how I can fit that into a single physical database.

ozgune 3 hours ago 3 replies      
Hey everyone, it's Ozgun. When I first wrote this blog post, it was much longer. Based on initial feedback, I edited out parts of it to keep the post focused.

If you have any questions that aren't covered in the post, happy to answer them here!

megamindbrian 2 hours ago 1 reply      
I laugh every time I read that word.
Why We Said No to a $40M Round parse.ly
65 points by pixelmonkey  2 hours ago   7 comments top
ChuckMcM 1 hour ago 1 reply      
Smart move, tough to do. Remember that venture capital is the most expensive capital you will ever raise: it costs you equity, it costs you control, and it costs you opportunity.

Something to consider once you are profitable is that banks are more comfortable extending a line of credit to you, which can be used for those 'unexpected' or 'opportunistic' capital requirements without incurring a loss of equity now or in the future. Remember, banks make money the old-fashioned way: they charge interest on what they loan you :-).

Swift 5: start your engines swift.org
174 points by mpweiher  2 hours ago   108 comments top 10
ssijak 1 hour ago 8 replies      
And just today I was contemplating writing my first native iOS and macOS app... I was looking at the options and decided to go native with Swift. I have never written an Objective-C app and never used Xcode for development. But I have ~10 years of dev experience, mostly Java and Python on the backend, and front-end experience mostly with Angular. Some Android, and a little from <input_random_tech_here> because I like to experiment.

So, my question is. How hard and enjoyable is for someone like me to write not very complex native iOS/macOS app in Swift starting from scratch? Best resource to start with?

protomyth 1 hour ago 3 replies      
One of the problems I find with Swift is that Apple doesn't go back and properly update their sample code at developer.apple.com. They have examples that will not build. If you search you can find folks that have patch sets, but they really need to fix the examples.
ainar-g 2 hours ago 7 replies      
Maybe someone will explain this to me. Does Swift use this confusing "rapid release" versioning? Does Swift 4 break backwards compatibility with Swift 3?

In my company people are looking for a language to rewrite some legacy Objective-C in. Swift is often dismissed as "unstable" because of these major version bumps. Compare this to Go, which, seven or so years after the initial release, is still 1.x and still doesn't break code.

I just don't get breaking the language so often. Do people enjoy rewriting code?

jswny 2 hours ago 6 replies      
Can someone with Swift experience comment on the status of Swift on non-Apple platforms? Is it being used outside of the Apple ecosystem? How is the tooling, deployment, availability/support, etc.
__sr__ 2 hours ago 5 replies      
I wish more effort were being made to make it a first class citizen on non-Apple platforms. With the popularity it has enjoyed, it could easily challenge the likes of Go, Python or even Java for server side programming.
Jack4E4B 41 minutes ago 1 reply      
Concurrency, finally; it has taken forever. Is there any built-in support now? Server-side Swift is lacking this big time.
bsaul 1 hour ago 2 replies      
About concurrency: does anyone know of a language that would let you tag portions of a codebase in some way and declare something like "all those methods should execute on the same thread"? Those declarations would then be checked by the compiler.

That would be a first step toward agent like concurrency, but it would be general enough to apply to other types of concurrency models.

tambourine_man 44 minutes ago 0 replies      
Since things at Tesla haven't worked out, I hope Lattner eventually returns to Apple.

Not that the Swift team is in bad shape without him; it's just nice to have an amazingly smart guy behind an open source language that many of us use (and that number will probably only grow).

jorgemf 2 hours ago 2 replies      
Swift developers, how is the evolution of the language going now? Does it still have backwards-compatibility issues, or are things more stable (and will they be with these new proposals)?
the_common_man 1 hour ago 0 replies      
Implementing Backup cockroachlabs.com
69 points by benesch  3 hours ago   11 comments top 5
hdhzy 2 hours ago 0 replies      
Hmm I think I missed the memo that CockroachDB is no longer a beta:


Does anyone have experience with replacing small MySQL / Postgres instances with CockroachDB? I know it's not a target market but I wonder how flexible CockroachDB is.

jhugg 1 hour ago 0 replies      
Users want backups in resilient DB for the same reason RAID isn't considered a backup strategy.
sciurus 1 hour ago 2 replies      
I understand that when building a business around the "open core" model it can be hard to draw the right line between no-cost and commercial features. Still, making backups a commercial feature surprises me. I can't imagine running a database without backing up the data. Is the intent to let people run CockroachDB in preproduction environments at no cost, but require anyone running it in production to purchase an enterprise license?
jlmorton 2 hours ago 1 reply      
> So as it turns out, even in a system that was built to never lose your data, backup is still a critical feature for many of our customers.

Wow, what a surprising insight! /s

knodi 1 hour ago 1 reply      
CockroachDB does many things really well, but for the love of social acceptance, please change the name.
Netlify Raises $12M from A16Z netlify.com
123 points by gk1  4 hours ago   54 comments top 22
alberth 2 hours ago 2 replies      
I've never understood Netlify.

They are supposed to be targeted at static high performance blogs.

Yet their own blog is 15MB in size and over 100 different http requests.

See the YSlow report for their blog site below.


It receives an "F" rating.

Isn't this kind of bloated website the exact problem Netlify is suppose to solve?

tal_berzniz 2 hours ago 1 reply      
Awesome service so far. The continuous deployment flow is awesome, with deploy previews, CDN, custom routing, and automatic SSL from Let's Encrypt. The SSL could be improved to be truly automatic, without a provisioning step (first hit issues a cert).
calcsam 3 hours ago 3 replies      
Static sites are becoming more and more popular -- you can host them on CDNs across the globe, they can be cached client-side, pre-loaded, work offline....

Especially important thinking about the billions of people in developing countries accessing the Internet over 2G connections.

If the trend towards static sites continues, Netlify could be the next Heroku.

Personally, I use Netlify to host my Gatsby blog; all I need to do is `git push` and they handle the rest.

dkonieczek 3 hours ago 0 replies      

> Join the conversation on Hacker News

Just FYI, this is linking to the post submission.

nodesocket 1 hour ago 0 replies      
Seems like Netlify is what you'd get if CloudFlare and Heroku had a child.

I am using Google Cloud Storage as my origin and CloudFlare for a few corporate static sites, and it works fantastically. You get CDN, http/2, free SSL, and the cost is literally cents.

Deployment is simple with the gsutil cli:

  # regex excludes dotfiles recursively
  gsutil rsync -R -x '^\/?(?:\w+\/)*(\.\w+)' directory-to-sync-here gs://bucket-here

paulgb 2 hours ago 2 replies      
I moved some static sites to Netlify from S3 because it seemed easier than getting HTTPS working on S3. It certainly was, and I haven't looked back. The automatic generation of a static site from the repo is a killer feature I didn't know I needed until I tried it.

Congrats to the team!

slackoverflower 3 hours ago 3 replies      
Read that as Netflix the first time. Was confused since Netflix's valuation is well past $50b+
stevenhubertron 2 hours ago 0 replies      
I'm a big user of Netlify, and host a number of sites with well over a million views a year and they have been rock solid, quick builds, quick support. I can't wait for them to take off!
prashnts 1 hour ago 0 replies      
I love Netlify and how much they support open source projects (thanks!). Hosting a quick mockup is very easy: drag and drop a zip, done. It comes with Let's Encrypt SSL, a CDN, deploy previews, and history.
djhworld 2 hours ago 2 replies      
On the main page there's an interactive tutorial

  $ npm install netlify -g
  /usr/local/bin/netlify -> /usr/local/lib/node_modules/netlify-cli/bin/cli.js
  left-pad@0.0.3
  isarray@1.0.0
  is-positive-integer@1.1.1
  babel@6.5.2

I hope this was a joke because it made me laugh a lot

magic_beans 2 hours ago 0 replies      
Netlify is amazing. It is so easy, a pleasure to use for static sites and react apps, continuous deployment is awesome, and open source projects are (for now and hopefully for the long run!) FREE.

Huge, huge fangirl. Keep it up, Netlify!!!

biot 1 hour ago 1 reply      
Anything on the roadmap for supporting server-side (api, db, etc.) in the same way?
jcsnv 2 hours ago 0 replies      
This is interesting, I just tested it out and it's pretty flawless.

For a standard React FE app, why use this vs Heroku vs S3?

cies 3 hours ago 2 replies      
I really like where Netlify is going and how they embrace open source to achieve their goals. There's just one thing I miss in their mix: strong typing. No Elm, PureScript or ReasonML (just yet).
zbruhnke 2 hours ago 1 reply      
Gitlab on-prem support coming? That would make me very likely to switch, especially because I think it would make it much easier to spin up review apps for our SPAs.
k_sh 3 hours ago 0 replies      
Wasn't aware of Netlify until now - the product looks great!

I think I'll try it out with some upcoming projects.

dotnetkow 2 hours ago 0 replies      
Love love love Netlify! Moved my blog from Tumblr (lol - but it was free!) to Netlify with Hugo as the static site generator. A wonderful experience. With little effort, my blog now has SSL, auto-deploys after I git check-in, and is a million times faster. I wrote a tiny bit about the migration process: https://www.netkow.com/moving-from-tumblr-to-hugo-netlify/
jmuguy 2 hours ago 0 replies      
Just published my first application on Netlify yesterday. I looked at Surge.sh but their pricing doesn't include the SSL certificate. Netlify SSL with Let's Encrypt was literally a button press. Good stuff!
miles_matthias 51 minutes ago 0 replies      
Way to go Netlify. I've been a customer since BitBalloon and this team does great work in a trend that I'm really glad to see.

Edit: a downvote? Really?

at-fates-hands 2 hours ago 0 replies      
Netlify is really good for static sites, but you can also combine their static site resources with Flatmarket or Snipcart and create ecommerce sites as well.

I'd suggest checking out their blog (https://www.netlify.com/blogs/), they have a ton of tutorials on the multiple static site generators they support


pbowyer 3 hours ago 0 replies      
Congratulations to the founders and team - a great product and great service!
drizze 2 hours ago 0 replies      
Congrats! Hope they do great things with it. Use them to host a middleman site and love the service.
Netflix Lands the Coen Brothers Western TV Series techcrunch.com
114 points by janober  3 hours ago   16 comments top 4
aresant 2 hours ago 3 replies      
Netflix is such an interesting business.

They have far and away the broadest streaming distribution (75% of it!) in the USA (1), a market that is already 53% saturated.

And the bull case guys point to international subscription growth - which is still in its infancy - as the reason to hold Netflix forever. (2)

They have much lower infrastructure costs than the previous cable providers, without having to build the pipes.

But they also don't benefit from the built in government / monopoly protections that came with that investment in the distribution platform.

So in terms of long game how do they protect themselves?

Own the content.

Disney pulling their rug out earlier this week underscores how fucked Netflix is as just a distributor.

So if, in the end, distribution becomes the new "dumb pipes" at what point do the content providers pull the judo flip and control the distribution and the margin?

Disney again, case in point.

So do we underwrite Netflix's future - aka their current 227 PE! - in their ability to execute on the content side?

(1) https://techcrunch.com/2017/04/10/netflix-reaches-75-of-u-s-...

(2) https://www.cnbc.com/2017/08/08/buy-netflix-because-its-inte...

askvictor 12 minutes ago 0 replies      
The fragmentation of the streaming market keeps me from using it; I'd happily pay $x per episode/movie (and do on Google Play, where the content is there). But I'm not going to subscribe to a streaming service just for one series that I want. I'm curious about the concept of brand loyalty to content producers (Disney being the example here): do people really think "I'd like to see a Disney movie tonight", or do they just want to see a particular movie or genre? I'm sure there's some brand loyalty for high quality (HBO) or specific interests, but I'm curious how far this goes.
11thEarlOfMar 2 hours ago 2 replies      
I'll say what others are likely thinking...

The Coen brothers hold a special place in my heart. So, so glad to see them headed to Netflix!

what_ever 2 hours ago 1 reply      
Salesforce fires red team staffers who gave Defcon talk zdnet.com
85 points by stevekillian  1 hour ago   15 comments top 7
rsj_hn 5 minutes ago 1 reply      
I was not at the conference and have no first hand knowledge of what happened.

But before everyone gets on their high horse, please pause to reflect:

This was all company work product being presented by company employees who were on a company funded conference trip. Therefore there is an approval process for vetting presentations as well as a legal process for opensourcing code. This is standard practice at all companies.

Now what do you think is more likely: that the PR department approved a talk titled "meatpuppet" and the legal dept approved open-sourcing the code, and then at the very last minute both groups changed their minds and tried to pull the talk? Or that the presenters never got the OK in the first place, the company found out at the last minute, asked them to pull the talk, and they refused?

Moreover given that Salesforce can't talk about this matter, who do you think is the source for the article and whose side are you hearing?

The last few days have really highlighted how quick people are to pile on with outrage and self-righteous indignation before getting all the facts.

just2n 24 minutes ago 0 replies      
That seems like a tad bit of an overreaction on Salesforce's part. The only mismatch here was the expectation set around the availability of the tool's source? So yeah, it was clear the tool is owned by Salesforce and ultimately something like that is decided by the company, but saying you're going to "fight to have it open sourced" and advocating to have tooling you build be shared outside of your company doesn't seem like a fireable offense to me. Look at what it's done for companies like Facebook and Google.

What the hell, Salesforce? This looks bad. There's either more to the story or this is just extreme knee jerk.

kafkaesq 22 minutes ago 0 replies      
The unnamed Salesforce executive is said to have sent a text message to the duo half an hour before they were expected on stage, telling them not to give the talk, but the message wasn't seen until after the talk had ended.

Which, as said unnamed executive should have known, could not reasonably be expected to be received and read in time.

Sounds like a failure of basic communication somewhere in the organization. And if someone at the C-level feels they need to intervene at the last minute to set things straight, that very strongly suggests the source of the failure was most likely somewhere in the middle layers (or at the C-level itself), not with the frontline engineers.

But at Salesforce that is apparently no protection against getting hung out to dry.

Especially when we read the parts about "The talk had been months in the making" and that the executive pulled the plug at the last minute "despite a publicized and widely anticipated release."

Johnny555 41 minutes ago 1 reply      
Seems like a bad idea for a public SaaS company, one that relies on customers trusting that their data is secure, to piss off its own offensive security team by firing them suddenly, without even confirming the warning was received.

I expect that lots new Salesforce vulnerabilities will be discovered and disclosed.

tptacek 20 minutes ago 0 replies      
It's probably way too early for us to know what's really happened here. If you're unfamiliar with this stuff, you should know that Salesforce has a large and relatively savvy security team, including people who have presented at offensive security conferences in the past.

There's a lot of weirdness in the reporting here; for instance, the notion that Salesforce management had a meeting with members of their own team under "Chatham House rules".

whatsmyhandle 52 minutes ago 1 reply      
EEK. When speaking in front of a large audience, it's generally a good idea to either mute your phone or ditch it entirely before you get up onstage.

To get canned for not responding to a text message 30 minutes before a talk - which you were already approved for - seems terribly unfair and a decision probably made in the heat of the moment.

Lazare 29 minutes ago 2 replies      
I'd be fascinated to learn more of the backstory here, because the story as reported so far is baffling.
How to confirm a Google user's specific email address tomanthony.co.uk
132 points by TomAnthony  6 hours ago   49 comments top 12
patorjk 5 hours ago 1 reply      
> which allows an attacker to confirm whether a visitor to a web page is logged in to any one of a list of specific Google accounts

I actually reported a similar problem to Google that would allow you to do the same thing back in 2013 (and like you, I used the load and onerror methods for detection). I didn't get a reward either :/.

However, Facebook paid me $1,000 for finding this problem for a particular area of their website (http://patorjk.com/blog/2013/03/01/facebook-user-identificat...). So I wouldn't write off this kind of security issue. It seems to depend on who's giving out the bounty.
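For readers unfamiliar with the load/onerror trick both commenters mention, here is a minimal sketch of the general pattern. The endpoint and URL shape are hypothetical (`buildProbeUrl` is an invented helper, not anything Google or Facebook actually exposed); the idea is only that a cross-origin URL which 302-redirects to a real image when, and only when, the visitor is logged in to a given account leaks one bit per guess through which image handler fires.

```javascript
// Hypothetical helper: encode a guessed address into a probe URL.
// The query-parameter name is invented for illustration.
function buildProbeUrl(base, email) {
  return base + "?account=" + encodeURIComponent(email);
}

// Browser-side probe (needs a DOM; shown for illustration only).
// onload fires if the redirect resolved to a decodable image (a hit);
// onerror fires if it resolved to an HTML page or error (a miss).
function probe(email, onHit, onMiss) {
  var img = new Image();
  img.onload = function () { onHit(email); };
  img.onerror = function () { onMiss(email); };
  img.src = buildProbeUrl("https://accounts.example.invalid/probe", email);
}
```

An attacker page would simply call `probe()` in a loop over a candidate list, which is why the technique scales to the school/university scenario mentioned below.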

TomAnthony 3 hours ago 0 replies      
Worth noting that this also works with GSuite email addresses.

Reddit user 'unsafeword' has suggested (https://www.reddit.com/r/netsec/comments/6smdq0/how_to_confi...) that organisations like schools/universities could use this for identifying their own users, as the list isn't that large.

michaelhoffman 5 hours ago 3 replies      
This is an issue for those of us who do anonymous peer review of publications that include references to the authors' web sites. It's bad enough that people have tried to identify me just by location in their logs.

I recommend using Tor now. But most people won't.

biftek 1 hour ago 3 replies      
This seems like a handy way to confirm email addresses when a user signs up to your service. If it returns false, send a regular "confirm your email" email.
dpkonofa 5 hours ago 3 replies      
Is this really even a big issue? For one, you have to already have knowledge of the email address in advance. Then you have to somehow get this user to go to a page that you have control over. Then you have to get them to wait around on your page while you run through 1000 possible email addresses every 25 seconds. Unless this got onto a really, really compelling page, I don't think anyone is going to sit around waiting for a page like this to do its business. The chances of getting a successful match are so low that I can understand why it's not a priority to fix this.
leephillips 3 hours ago 1 reply      
Yet another reason I'm glad I use uBlock Origin set to block all 3rd party requests. To get the demo to work, I had to disable uBlock.
jtokoph 4 hours ago 2 replies      
Google can probably prevent the information leak via image tags by not using a 302 redirect and instead using a 200 response and a combination of <meta refresh> and JS document.location.

This way, the image tag will always fire the onError handler, regardless of whether the user is logged in.
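A sketch of what that mitigation could look like, under the commenter's assumptions (this is an invented helper, not Google's actual code): instead of answering with a 302, which an image tag follows transparently, the server returns a 200 HTML interstitial that navigates via meta refresh and JS. An image tag pointed at this endpoint always fails to decode, so the one-bit signal disappears for embedders while real browsers still reach the destination.

```javascript
// Hypothetical: build the 200-response body that replaces the 302 redirect.
// An <img> fetching this gets HTML, never an image, so onerror always fires;
// a real browser navigation follows the meta refresh / document.location.
function interstitialFor(targetUrl) {
  return [
    "<!doctype html>",
    '<meta http-equiv="refresh" content="0;url=' + targetUrl + '">',
    "<script>document.location = " + JSON.stringify(targetUrl) + ";<\/script>"
  ].join("\n");
}
```

The trade-off is an extra navigation hop for legitimate users, which may be why redirect endpoints tend to prefer the 302.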

proactivesvcs 5 hours ago 0 replies      
> 18th July: The team came back to me and asked me what my suggestions for handling this would be.

Surely they should make an offer of how much they would like to pay the OP before expecting the OP to work for them?

robin_reala 5 hours ago 1 reply      
seanalltogether 5 hours ago 2 replies      
I'm trying to understand the implications here. Is the author suggesting that a real-world attack would involve randomly generating email addresses to see if they are valid, based on whether they match the current user? Or would the attack involve purchasing a list of known email addresses from spammers, and then doing a lookup against that list for every visitor that comes to your website?

Option 1 seems like it would take impossibly long to match, and I'm not sure what actionable information you get from option 2, other than maybe verifying that the email address is still active?

royalharsh95 5 hours ago 0 replies      
you cannot if you are using privacy badger.
yorick 5 hours ago 4 replies      
Note that the demo sends your email address to the server if it's a hit.

$.ajax({ url: "/google_leak/save.php?info=manual_hit:" + email});

update: gone now. still pings that it ran. don't forget to hit ctrl-shift-r to bypass your cache.

The New Copycats: How Facebook Squashes Startup Competition wsj.com
103 points by petethomas  5 hours ago   51 comments top 12
abalone 3 hours ago 1 reply      
> Facebook uses an internal database to track rivals... The database stems from Facebook's 2013 acquisition of a Tel Aviv-based startup, Onavo, which had built an app that secures users' privacy by routing their traffic through private servers. The app gives Facebook an unusually detailed look at what users collectively do on their phones...

WTF is this shady-ass sh*t. Way to "secure users' privacy," Facebook.

From the sound of Onavo's App Store reviews they are using deceptive marketing of the "Your phone is infected, install this now!!" variety. Yet they have a lot of positive but suspiciously brief reviews balancing them out. So Facebook bought a company that MITMs unsuspecting users for profit, using scammer marketing techniques and fake reviews to drive installs, then leverages that to knife babies. "Don't be too proud," indeed.

I hope there is cause for Apple to remove this app from the App Store (like deceptive marketing or exploitive practices). Or for a bunch of us good folks to leave negative reviews. These guys depend on informed people avoiding these apps and not leaving reviews.

bhouston 4 hours ago 6 replies      
Facebook is what Microsoft was in the 1990s: using its existing market dominance to crush potential competitors by offering their distinctive offerings as mere features of its own popular products.

This lent a lot of momentum to the anti-trust proceedings against Microsoft.

I wonder if that encourages Facebook to not do this so obviously in the future? Or maybe it isn't at all worried about anti-trust for the near term.

I am sure Google, Amazon and Microsoft continue doing this as well, but it seems that Facebook is doing it most successfully, or at least most prominently, with its total destruction of Snap.

I guess it is different in that Google and Facebook both have very effective means to accurately measure adoption trends of new successful market entrants, and thus can target these entrants better than ever before with total destruction.

This is just killer:

> In December, Facebook began its group-video-chat offensive. Its Messenger app introduced the feature with the ability to see up to six people in a conversation, compared with the eight-person rooms on Houseparty.

> In February, Facebook invited Houseparty users between the ages of 13 and 17 to come to its offices in Menlo Park, Calif., to participate in a study and keep a diary for a week afterward that they would share with Facebook, offering as an inducement $275 Amazon gift cards.

xbmcuser 4 hours ago 1 reply      
Facebook has been buying every social network it feels is a competitive threat. First it bought Insta, then it bought WhatsApp; it tried to buy Snapchat, wasn't successful, so it copied it in Insta. One of the things I read a lot on HN is how Facebook has killed privacy, how people try not to use it, etc. But I personally stopped using Facebook not because of privacy concerns, but because Facebook has been turning an open web into a private web. More and more stuff that in the past would have been on company websites is available on Facebook alone, and sometimes not accessible without a Facebook account.
AndrewKemendo 4 hours ago 1 reply      
While getting acquired can be a very good win for the founders, that might be at the expense of a more competitive landscape.

I think about this a lot and at the end of the day a founder has to decide if they care about their own payday or the broader ecosystem/market of independent products.

If you fall into the latter camp, then assuming you are even successful in the market, you should prepare to die by your sword. Otherwise, the big 5 just keep getting bigger with more advocates and authority.

You could argue that joining them will be better in the end because you just bide your time and leave to start something even bigger, but the reality is you'll have the same dilemma in the future. So why wait to take a stand and try to compete?

The real question is if it's even practically possible to compete. Given that VC are generally too timid to fund anything which could get beaten by the big ones, there aren't a whole lot of options to growth fund something which really could compete.

c0smic 1 hour ago 0 replies      
This reminds me of how important/relevant it's becoming for people to own their data, instead of giving it up to companies to profit from and eventually sell.

Big issues I see are the vast majority of people just don't want to think about it, and there aren't any good systems in place to empower (the majority of) people to retain their data.

iamleppert 4 hours ago 1 reply      
I'm sorry but if your product can be easily copied by Facebook, you don't really have a product.

There was once a time when video chat was novel, but now since the technology is "done", there is nothing hard about developing these services from a technical perspective. Handling scale and various other things with these products used to be a challenge but now we have the cloud, API's and a mature ecosystem. The world really doesn't need more of the same kinds of communication apps, it all just becomes a gimmick and less of a utility.

So, most of these products will be successful based upon other factors -- such as the cleverness of their marketing, or whether or not they serve a niche that is lucrative enough and underserved enough on which to build a successful business, but not large enough to attract the attention of one of the goliaths.

The one advantage startups have over large corporations like Facebook is their size and speed at which they can move. Engineers at Facebook, like any large tech company, are encumbered by substantial process, political forces, and a reluctance to try new ideas. Your typical startup employee is also far more motivated than an engineer who just wants to be given their daily JIRA tasks. To be honest, why does a single app need a team of 500 (frontend) engineers in the first place? When a tech company gets large, it becomes more about business than technology, anyways.

So, it doesn't come as much of a shock Facebook is turning to the startup world to source their ideas and duplicate them, which is why I advise all my friends to steer clear of any of these "we're a better way to share/video chat/chat/message/communicate" startups. Only go to one if you have some burning technical itch. The one exception is if the founders aren't totally delusional and the company operates more under the impression of just getting an MVP built, with the idea of shopping it around to be acquired in short order. And in this case, know exactly how long that's going to take, take no VC funding, have no delusions of grandeur and as an engineer know exactly what cut you're going to get when the thing gets sold. I've only seen this work if the founders know someone at the big company and the thing has all been basically pre-arranged though.

cft 2 hours ago 0 replies      
It is very difficult to make a significant social app these days, mostly due to Facebook's collection of apps. Almost impossible.
jitix 2 hours ago 2 replies      
> Houseparty says its growth had been stymied by the app's crash, which slowed its ability to introduce new features and attract new users.

This is why horizontal scalability should be a basic requirement in such a product. These days there's a blanket statement used throughout the industry that premature optimization is bad. It is bad if you're developing a website or a standalone mobile app. But if you're doing something in the communications field (or data, IoT, etc.), scalability is a must-have.

neerkumar 3 hours ago 1 reply      
When FB bought Whatsapp, Whatsapp was handling 20 billion messages a day and was the #1 app on the phone pretty much anywhere in the world, except for the US, China, and Australia (it still is). Not that they needed some tool to find that out.

FB did an amazing job keeping their acquisitions fairly independent and letting them keep growing. That's something really hard to do, and they deserve credit for it.

It is easy to buy the #1 messaging app in the world, if you have the money. It is hard to make sure those people still stay motivated after the acquisition.

noncoml 4 hours ago 1 reply      
New? Isn't that what MS used to do in the 90s?
ValentineC 4 hours ago 1 reply      
For anyone stuck in front of a paywall: http://archive.is/LUAA8
Americans don't need fast home Internet service, FCC suggests arstechnica.com
91 points by JacksonGariety  1 hour ago   22 comments top 10
geff82 50 minutes ago 3 replies      
One more little sign that America is gradually retiring from its leadership in the world, opening doors for other nations to become more developed?

In a time when many European countries aim at providing 100Mbit as a minimum in the next few years, thus also opening rural areas for economic development, decisions/opinions like the one described in the article seem ludicrous. Of course, providing net infrastructure in the US, with its huge size, is a challenge. Yet in a country like Sweden, with similar population density, 100Mbit is already kind of the basic minimum even in remote areas.

Here in my town in Germany we had super slow internet until 3 years ago. Now I can choose up to 400Mbit from different providers (100Mbit DSL or up to 400Mbit cable). Connectivity has skyrocketed here, as it has in many other European areas. Now the US decides to lower standards? Is it the same kind of thinking as "we don't need high speed trains, we will have Hyperloop in 50 years", just adapted to "everything will be mobile one day"?

rndmize 5 minutes ago 0 replies      
> But during the Obama administration, the FCC determined repeatedly that broadband isn't reaching Americans fast enough, pointing in particular to lagging deployment in rural areas.

No skin off my back. I already have internet service significantly better than the FCC broadband definition, and my core concern in this space is net neutrality. Rural areas are the ones that will be negatively affected by this kind of policy change in the coming years. You get what you vote for in this case.

chank 1 hour ago 1 reply      
Great, so we can let ISPs off the hook for all the money we've already given them for faster service they haven't provided. This is how our government works, folks.
iokevins 1 hour ago 0 replies      
California Assembly Bill 1665 attempted something similar:

"Both Frontier and AT&T maintain antiquated DSL systems that serve millions of Californians who live in communities that dont have sufficient revenue potential. Low income and low density communities in other words."


AB 1665 seems on hold until legislators return on August 21:


noonespecial 10 minutes ago 0 replies      
We didn't need anything faster than dial-up for the web of 1999.

It's all the things yet unimagined that ubiquitous high speed internet would enable that's the real tragedy here.

The complete failure of imagination of today's "leaders" is very disheartening.

just2n 37 minutes ago 3 replies      
The title seems a little clickbaity. It seems all that's being said is the FCC is recommending a reasonable minimum, not a maximum. I don't think it makes sense to run gig fiber connections to rural homes unless someone is footing the bill, but they definitely should have some internet capability, and 10/1 mobile and 25/3 direct seems at least minimally viable.

This title makes it sound like the FCC is advocating that speeds above those are unnecessary for anyone, as if they're coming for your network speed. I don't get the sensationalism here, this hardly even seems newsworthy.

stevefeinstein 1 hour ago 1 reply      
The sentiment that Americans don't NEED fast home internet is probably more accurate than not. It's not relevant, though. Need has never been the driving force in the market. It's want, and if people want it, there's no reason they shouldn't have it. Why would the head of the FCC care whether people need high speed? Only if he were in the pocket of the ISPs. Then he'd need a way to NOT create rules that mandate high speed internet. And if it's not a requirement, then it's up to the business to charge whatever it wants without regulation. It's insidious, and quite clever, albeit evil.
iokevins 1 hour ago 1 reply      
Fix link so it points to article top, instead of comments (?)


tempodox 35 minutes ago 0 replies      
I'm just flabbergasted by this preposterous proposition.
norea-armozel 12 minutes ago 0 replies      
I wonder how long until Pai adds extra regulatory requirements for utility cooperatives that will spring up to kick the telcos out of the rural areas? I'm not joking about that notion since the telcos really hate cooperatives that spring up and replace or compete with them.
Monsanto Was Its Own Ghostwriter for Some Safety Reviews bloomberg.com
214 points by Red_Tarsius  9 hours ago   29 comments top 8
rubatuga 4 hours ago 7 replies      
You should realize how important money is in research today. The main job nowadays of principal investigators (PI) for research labs is to write applications for grants and funding. These labs are usually underfunded and will accept any private funding if necessary. For example, the PI for my nutrition lab received funding from a Canadian agricultural company for a study on Canola oil. When the test results of the study came in (which I don't exactly recall), they were not in favour of Canola oil. The PI therefore "voluntarily" decided not to publish the results. I asked her why, and she said that if she published the results she would not be likely to ever receive funding from the company again.

My point is that a lot of research conducted today is funded by ulterior motives, be it political, private interest, or a company like Monsanto. I fully expected a company like Monsanto to be engaged in this behaviour. The days of pure/basic research are dead, especially with funding from the public sector drying up.

edit: oops i meant to write principal investigator (PI)

pella 4 hours ago 0 replies      
1 week ago:

"Monsanto leaks suggest it tried to kill cancer research about weed killer (baumhedlundlaw.com"


jaclaz 5 hours ago 0 replies      
The actual "news" are IMHO only that this can proven as they have been caught with their hands in the cookie jar.

I often dream of some news like "Independent research actually found to be independent".

shapiromatron 5 hours ago 1 reply      

> The Expert Panel Members' recruitment and evaluation of the data was organized and conducted by Intertek Scientific & Regulatory Consultancy (Intertek). The Expert Panelists were engaged by, and acted as consultants to, Intertek, and were not directly contacted by the Monsanto Company. Funding for this evaluation was provided to Intertek by the Monsanto Company which is a primary producer of glyphosate and products containing this active ingredient. Neither any Monsanto company employees nor any attorneys reviewed any of the Expert Panel's manuscripts prior to submission to the journal.

Seems misleading.

Cryptogocrazy 3 hours ago 0 replies      
I don't know the legality of what they did, but it's pretty clearly a reason to favor government involvement in fringe cases like this. Sounds like what they did was fraud.
unclebucknasty 1 hour ago 0 replies      
Interesting timing for me. I just yesterday read an Atlantic article titled "How America Lost Its Mind" [0].

The tl;dr of that article is that Americans have had an increasing tendency to create our own realities and to believe anything we choose. This includes conspiracy theories like the government purposely allowing cancer treatments to be withheld, as well as the idea that vaccines cause autism, etc. The article then goes on to suggest that choosing what to believe is part of being American. I don't necessarily agree, as I think the article was woefully inadequate in assessing the role that financial interests play in willfully misleading people and creating a post-truth world.

Here on HN, I've had "debates" with people who nearly suggested that glyphosate is the greatest thing that ever happened to mankind. When I cautioned about safety concerns due to overuse, I got the standard pointer to the studies, etc. If you question the studies, then you find yourself being painted as some sort of anti-science conspiracy-theorist. This, when we essentially all know how research is done and the degree of rampant regulatory capture that exists.

I guess my point is that when many of the institutions we're supposed to trust are largely captured and firms that have direct financial incentive to mislead are allowed to decide what's real, then it is an assault on truth and reason. When we ignore this fact and encourage blind-belief in these institutions (worse, allowing them to act as proxies for "the ultimate truth of science"), then we are aiding in the creation of the very post-fact world we claim to abhor.

[0] https://www.theatlantic.com/magazine/archive/2017/09/how-ame...

throwawaymanbot 3 hours ago 1 reply      
Yet again, American style Capitalism giving regular ole Capitalism a bad rep.
londons_explore 6 hours ago 4 replies      
I don't really see anything wrong with using a ghostwriter, as long as the person/organisation whose name is on any document fully read and agreed with the contents, and would stand by them as their own.
Startup Sequences of Shells dingyichen.wordpress.com
12 points by onosendai  2 hours ago   1 comment top
jsjohnst 1 hour ago 0 replies      
I thought maybe this just had a bad UI layout on mobile, but after reviewing on desktop browser, I now know that this useful info was just presented rather poorly.
The incentives of academia naturally select for less reliable results (2016) theatlantic.com
75 points by dom0  7 hours ago   20 comments top 2
avs733 3 hours ago 2 replies      
>Low statistical power is an obvious symptom of weak research.

No it isn't. It is a simple one, but it isn't obvious. Low statistical power is just one of many indicators of research quality. Qualitative research with sample sizes in the single digits can be done well, and research with large statistical power can be done poorly. The measure of quality in research is much more conceptual and cannot be reduced to rule-based measures without failing completely.

nonbel 5 hours ago 2 replies      
The proposed "solutions" of pre-registration and increase sample size are not going to work because they are not fixing the underlying problem of NHST use (where a researcher tests a nil null hypothesis of zero effect and then concludes their favorite explanation is correct if they reject this null hypothesis).

The solution is to go back to the old way of science (pre-1940):

1) Collect data and figure out what is repeatable and consistent (eg come up with "natural laws").

2) Come up with theories that explain the observations from #1 that also make precise predictions about some different type of data.

Reward researchers who figure out methods that lead to reliable, repeatable observations and those that make precise predictions (if your theory predicts nothing more than a positive/negative correlation between two variables it is too vague).

Benefits of a Lifestyle Business bugfender.com
338 points by adchsm  6 hours ago   224 comments top 34
weeksie 6 hours ago 8 replies      
Sure! Lifestyle businesses are great, and so is the whole digital nomad thing (I spent all of 2016 and a good chunk of 2015 traveling around the world).

There are a ton of upsides, but I wouldn't go back to it full time. For one, it's surprising how few of the digital nomad types are really that interesting, and while integrating with local populations is fun, you'll still find yourself missing the familiarity of people from your own culture (or similar Western cultures, assuming you're European or American).

Once you get used to life on the road it's grand. Still, nomad nests like Chiang Mai are insipid and full of scores of people hustling their drop ship schemes. More power to them, but it's just not my vibe.

I dunno. Go nuts, travel, see a bunch of shit, just don't assume the beach is going to be as stimulating as the (very likely) metro urban environment you're living in now.

sevensor 5 hours ago 4 replies      
Your chosen lifestyle doesn't have to involve sea voyages in Southeast Asia or weeklong ski excursions. It could also be living in a medium-sized town in Flyover Country, U.S.A., working 40 hour weeks on interesting problems and spending lots of time with your spouse and children. If you've ever looked around at your Logan's Run coworkers and wondered what happens when you turn 30, here's one of your answers.
AndrewKemendo 5 hours ago 7 replies      
Is there some reason that people keep making the case for creating a standard business that supports one or two people? These types of posts have been pretty consistent over the years: "Take control of your life with a small business" "You don't need to make a massive company to be happy" etc...

I never see articles that encourage: "Here's why you should dedicate your life to starting a company and try to dominate an industry." It's like these posts are fighting against a boogeyman that isn't there.

I think 99% of all small businesses are "lifestyle businesses" where the founders aren't trying to build a market-dominating billion dollar company. So who are these articles targeted at?

Is it simply the amount of press that surrounds VC and hyperscale companies that these folks are rejecting? I don't think any VC or founder has ever claimed that the only way to be happy/make money/do good is by trying to create a massive market dominating company.

k__ 4 hours ago 1 reply      
Can we please stop calling regular businesses "lifestyle businesses", like it's some hobby for people who don't want to work in a "real" startup?!
wanderings 5 hours ago 3 replies      
Lifestyle business beats a startup, until it doesn't. I'm the example. Ran a category-leading website for years until I was demolished by a fully focused bad ass team and thrown out of my leadership position. Ultimately, I was forced to sell out at a much lower valuation than I'd have gotten if I were totally focused. It could vary by niche and industry. But one can't generalize it one way or the other. If you have a great position in a big sector and you don't go for the kill, someone else will, and your lifestyle business would likely be chewed up by competition. If it's a business with an intrinsic moat (think a retail store in a small tourist town), it's likely to sustain. Take frequent breaks while running a bad ass startup, but don't for a minute think that you can drop the ball.
orthoganol 5 hours ago 0 replies      
From someone who's done both, they are not comparable, directly, but they have a complementary relation: The DN (digital nomad) life is absolutely an engine for the kind of creative and free thinking that engenders killer startup ideas. Startups are "the thing" you want to commit your life to, the world-changing vision that you're ready to sacrifice for; the DN/ lifestyle business/ remote gigs mode is the fertile ground, for when you lack strength of vision, you don't know what you want right now, so you slow down, gain experience, and grow your thinking.

Only ever doing one in your life without the other is unenviable, and makes it hard to fully enjoy and appreciate, or even excel at, whichever one you've chosen.

goodroot 20 minutes ago 0 replies      
Many comments in here make the dialogue feel like a roiling cauldron of over-work and burn-out. Whether you're nomadic, working in a start-up, working at a mega-corp, working at the grocery store, balance in life is crucial.

In knowledge work, how can one really spend more than 40 hours producing quality output? It becomes an unhealthy compulsion to sate a hyper-stimulated existence instead of a strategy for creation. Whichever way you choose to work, focus on health and ample rest. The rest will take care of itself.

ArmandGrillet 4 hours ago 2 replies      
"A good lifestyle business could even be turned into a multi-million dollar company, if thats what you want.": I've stopped reading there, I don't understand how articles that empty can arrive on top of HN. These questions (where to work? On what? How much?) get way better answers in "Ask HN" threads, articles coming from nowhere with a topbar selling me something are really not making me dream anymore.
chatmasta 3 hours ago 1 reply      
A lifestyle business seems fundamentally incompatible with a team oriented business. Let's assume the goal of a "lifestyle" business by a single founder is to automate all operations such that little to no work is required on the part of that founder.

Ok, that's all well and good. But some of that "automation" will inevitably be delegation to the founder's employees. So the employees have to work. The founder doesn't have to work. How can the founder possibly show good leadership and build a strong team if his goal is to work as little as possible?

As a founder, you are responsible for the well being of your employees. That's why they're employees, not independent contractors. If you're working four hours a week with a team of employees, there is a high chance you're shirking some responsibility toward them.

And if you decide to be a full time boss, then you're still building more than a business. You're building a team that you are responsible for. That is, you "answer" to other people - your employees. At this point, the advantages of a lifestyle business over VC funded business ("low hours," "not beholden to anyone") start to lose their luster.

If you're interested in building a team, and a lasting enterprise, then it becomes more logical to just take some seed funding so you can safely pay your employees and ensure an early growth trajectory. Whereas if you're only interested in a totally automated business to provide you and your family a stable income, then you should avoid hiring employees because you'll just end up beholden to them.

Thus the ideas of a "fully automated lifestyle business" and a "lifestyle business with a strong team" seem at odds with each other.

jasonrhaas 5 hours ago 0 replies      
Meh, kind of a generic article about how you should prioritize lifestyle over building a startup. I guess this is nothing new to me, I did the digital nomad thing with Remote Year for a year and change, and now I'm still working remotely in Austin, TX.

I do miss the constant travel, there is always something coming up to look forward to. When you are in one place, not constantly traveling, you have to make your own fun. Which is why I've taken up other things like riding motorcycles, brewing beer, and speaking at my local Python meetup.

All that year I was working full time as a Python Developer while traveling constantly. Every weekend was an epic adventure. It's an amazing lifestyle if you can pull it off, but it's not for everyone and can definitely wear on you after a while.

boyce 5 hours ago 2 replies      
This digital nomad thing just looks hellish to me. Maybe I'm getting old.

Can't imagine being somewhere nice but glued to a laptop, or getting anything useful done without reliable wifi etc, or being part of a team where the boss has gone on holiday but still showing up in slack etc.

I'd hate to feel like I wasn't part of the team for not getting our kids together or not wanting to holiday or spend a day off with colleagues. I'm not impressed by instagram or medium posts from perfect looking beaches giving business advice.

Not sure when a lifestyle business went from being a business that fits around your lifestyle to making the appearance of living an idealised lifestyle everybody else's business.

buf 5 hours ago 0 replies      
I own a lifestyle business and I work at a startup as the founding engineer, but I work remotely.

When you work remotely, you can treat both your lifestyle business and your gig the same, insofar as you have the freedom to take an hour off your gig to do some calls for your lifestyle business in the middle of the day, or you can test particular technologies on your lifestyle business before you commit to it in your startup.

I find them both to be healthily married.

I still have the freedom to hang out with my kid at lunch, or work from a far away place, while at the same time achieving my career goals and attaining financial independence.

alissasobo 4 hours ago 1 reply      
At a certain point, this blog post seemed mostly about the great traveling opportunities that this company offers its employees. That's neat, for employees who are kid-free. But as a developer married to a developer... with 2 kids under the age of three... I can tell you that those work retreats abroad actually become pretty challenging for families. At a certain point.. people want to have kids. I would find a company that made their employee perks more about realistically supporting families far more appealing.
Mz 2 hours ago 0 replies      
Give me a break. He is playing fast and loose with terminology and it is disingenuous because he is twisting lifestyle business to be whatever he wants it to mean while dissing startups and not giving that term the same flexibility to be "anything that grows fast, even if it doesn't eat the CEO's life."

I hate the term lifestyle business and articles like this one are part of why. I have given my POV previously here:


My recollection is that Plenty of Fish was started by one guy who never took VC money, so he got to keep all the money when he sold for millions. Articles like this don't mention examples like that when justifying their biased opinion that "lifestyle business" = good and "startup" = bad. (In part because of the lack of VC money, I assume that Plenty of Fish was not a pressure cooker. Upon rereading my comment, that assumption does not seem clear.)

miheermunjal 5 hours ago 3 replies      
I... I feel I can't believe the company has 1) top salary, 2) top benefits 3) unlimited travel 4) work remote 5) top enterprise clients 6) small teams 7) work as much as you want?

either someone is ridiculously good at managing all of this (kudos!) or something is slipping somewhere. Even in custom-dev it can be cutthroat, especially with large-scale projects and demanding clients.

mcone 6 hours ago 1 reply      
Site seems to be down. Here's the cached article: http://archive.is/p5ZLR
swlkr 5 hours ago 0 replies      
Lifestyle businesses eventually give you more of what you really want: freedom.

VC backed startups seem to just give you a new set of bosses.

tixocloud 6 hours ago 1 reply      
I think what's important here is that we each have to know what our lifestyle aspirations are.

For some folks, a lifestyle business is better suited for them as they are looking to get more time out of their lives to do other things.

For others, a startup might be better because they have more control over whatever product/service they are providing.

thefuzz 6 hours ago 6 replies      
I'm someone who is thinking of changing careers at 30 to become a developer. I love the idea of cutting out bureaucracy and office politics and being paid decently. I'd love any thoughts and advice from more experienced people about what I should do in the next 12-24 months.
lafay 4 hours ago 1 reply      
I'm all for lifestyle businesses and side hustles. But some ideas really do require a lot of up-front capital. It's hard to imagine Tesla, SpaceX, Boom, or Nest succeeding as lifestyle businesses.
josh_carterPDX 3 hours ago 0 replies      
I have grown both a lifestyle business and a startup and I still don't know which I prefer. I mean, it's nice to have some flexibility, but it's also nice to find the capital that helps propel your business faster. It really depends on the business, the person, and what you'd like to get out of the venture. At the end of the day it's a preference. I don't think one beats the other.
lquist 1 hour ago 0 replies      
Also this doesn't have to be an either/or decision that you have to make on Day 1. We started our business as a lifestyle business and as it got traction have decided to pursue a startup approach. On track to do $10M+ revenue this year :)
znq 5 hours ago 1 reply      
Just in case people are more interested in the details of the business we run, Indie Hackers recently ran an interview with us https://www.indiehackers.com/businesses/bugfender
rb808 5 hours ago 1 reply      
The people who I've seen who have the best lifestyle have big chunks of work followed by big chunks of time off.

They tended to work 6-12 month contracts followed by 3-6 months off. This works great in a good economy; when it turns sour, it's more difficult.

The other happy group worked in mines or oil rigs on a month-on, month-off schedule. They got paid tax free and had six month-long vacations a year to travel.

I think I prefer those options to working while travelling.

fiatjaf 5 hours ago 1 reply      
Has "lifestyle" changed its meaning? It seems to mean now that if you're "focusing on lifestyle" you are kayaking on the Pacific Ocean.
astrowilliam 3 hours ago 0 replies      
I've been working in tech for the last 10 years. I've come to the point that I need to enjoy my life and not sit in an office 10 hours a day, coding for someone else's vision.

So I started a brand ( https://lasttrystuff.com ) of my own so I can enjoy an active lifestyle while adventuring. It doesn't quite pay as much, but the trade offs are immensely satisfying.

kornakiewicz 6 hours ago 8 replies      
What does 'a lifestyle business' mean, anyway?
sgwealti 1 hour ago 0 replies      
What is a Lifestyle business? I read through the first 50% of the article and didn't see that term defined anywhere.
jjmorrison 3 hours ago 0 replies      
Sounds great if you want to optimize for your personal happiness. But not a feasible way to really make an impact on the world. The world needs more of the latter IMHO.
lazyjones 4 hours ago 0 replies      
How does the business case of such a "lifestyle business" look, i.e. the numbers? I'm not sure whether operating out of a sailing boat is affordable for small companies and the $6500 MRR of bugfender can't be covering it...
matchagaucho 4 hours ago 0 replies      
Not sure I'd agree with the OP's definition of Lifestyle Business, given he's operating a service company with employees, payroll, clients, and sales quotas.

That's no less hectic than a start-up.

quadcore 6 hours ago 1 reply      
Why does the author think one has to be happy the way he does? Lifestyle business beats a startup for some and the opposite is true for others.
SirLJ 5 hours ago 0 replies      
The holy grail is to automate, once done you'll be really free to enjoy life and give back to humanity
SKYRHO_ 6 hours ago 3 replies      
Whoops... Did HN Crash their site?
WebAssembly: A New Hope pspdfkit.com
128 points by steipete  4 hours ago   81 comments top 12
flavio81 4 hours ago 3 replies      
TL;DR: They have written a PS & PDF renderer that runs completely in the client, by using WebAssembly. They show the performance to be not far off from native code, and this allows offloading the CPU load of PDF rendering to the client rather than the server.

Excellent !!

evmar 4 hours ago 4 replies      
I think WASM is great, but it also is not a panacea.

You can see this demonstrated in their demo where highlighting the text garbles it -- because they render without using browser technologies, they miss out on browser implementations of things like highlighting. (Another example: their "get in touch" link in the demo is not a real link and can't be right-clicked, indexed by search engines, etc.)

In the specific context of a PDF that is perhaps inevitable (PDFs likely need special text layout) but in the larger context of random native apps, you only get a "free" port to the web by cutting out web features.

With that said, let me emphasize again that in the right domains this is all super awesome. It just doesn't solve the problem of making good webapps.

grumblestumble 19 minutes ago 0 replies      
I'd be interested to know what (aside from the new wasm support) the PSPDFKit standalone web viewer offers over PDFJS. We're currently using the latter, which is good enough for our needs, stable and actively developed, and open source. It does a lot of the work via Web Workers, and on desktop at least, it's performant enough for viewing even unusually large PDFs full of raster images.
jules 4 hours ago 4 replies      
WebAssembly might be even bigger than the web. People already use the V8 JS engine server side, and make client side apps with Electron. WebAssembly could become the universal cross platform low level VM.
ChrisSD 4 hours ago 2 replies      
"It's important to point out that it is designed to complement JavaScript not replace it and that in a browser context it has no direct access to the DOM, only via JavaScript."

That may be true of current implementations, but the design docs[0] appear to suggest that you will be able to import platform features in WebAssembly.

[0] https://github.com/WebAssembly/design/blob/master/Portabilit...

throwaway95014 4 hours ago 5 replies      
This is not a comment on the work done by these no doubt well-intentioned folks. But to pull a quote from their article:

"compile our 500.000 LOC C++ core to WebAssembly and asm.js"

This is absolutely terrible for the web :-( We've escaped activex, flash, java applets, and now we're bringing lots of legacy c++ to the web in a way that conflicts with the underlying tenets?

Malic 3 hours ago 2 replies      
I wonder when Node.js will (if?) support WASM. Assuming it does, then this becomes a cross platform server-side solution for PDF generation AND THAT is something I could actually use this very day.
LyalinDotCom 4 hours ago 2 replies      
.NET team is also looking seriously into some C# love for the WebAssembly ecosystem. It's not a real project yet, but we have made some progress; check out this great interview:

.NET Rocks! #1455 - WebAssembly and Blazor with Steve Sanderson: https://www.youtube.com/watch?v=cCdF9-q4n5k

ledgerdev 2 hours ago 2 replies      
If you were creating a new general purpose language today, would you target WASM? And what's the latest on garbage collection for WASM?
coldtea 2 hours ago 0 replies      
A big question is, can WebAssembly be used to speed up e.g. Atom and VSC general operation?

Or is the way those are doing things mostly incompatible?

TekMol 2 hours ago 1 reply      

 more powerful features, such as threads, are planned as well
What, no more async hell when coding for the browser?

frik 4 hours ago 6 replies      
We will quickly find out that WebAssembly, while it might sound like the next step, is a bad thing to happen to the web - ads will be binary blobs that consume 100% CPU, and adblockers won't work and cannot be implemented without wasting even more CPU. Beside that, the web will get more and more closed down - it sounds like something Bill Gates originally had in mind in 1994/95 with the pay-per-view Win32-based TheMicrosoftNetwork, which never took off because the free WWW with ads skyrocketed.

Edit: Re "Anything wasm can do, JavaScript can do." ASM.js was one thing and ok; wasm is the real concern. What about the "binary blobs" part do you not understand? The usual downvoters have a vested interest in this questionable tech succeeding.

Why does Sattolo's algorithm produce a permutation with exactly one cycle? danluu.com
123 points by darwhy  9 hours ago   15 comments top 4
akalin 4 hours ago 1 reply      
If you enjoy the discussion of permutations and cycles in this article, there's a puzzle you might also like:

Prisoner A is brought into the warden's room and shown a faceup deck of 52 cards, lined up in a row in arbitrary order. She is required to interchange two cards, after which she leaves the room. The cards are then turned face down, in place. Prisoner B is brought into the room. The warden thinks of a card, and then tells it to B (for example, the three of clubs).

Prisoner B then turns over 26 cards, one at a time. If the named card is among those turned over, the prisoners are freed immediately. Find a strategy that guarantees that the prisoners succeed. (If they fail, they must spend the rest of their lives in prison.)

Needless to say: The two prisoners have the game described to them the day before and are allowed to have a strategy session; absolutely no communication between them is allowed on the day of the game. Notice that at no time does Prisoner A know the chosen card.

(Taken from https://www.reddit.com/r/math/comments/44h3tu/interesting_pu..., but quoting since that link has the answer in the top comment.)

SloopJon 7 hours ago 1 reply      
I've used inside-out Fisher-Yates with and without Sattolo's variation many times in testing, especially for testing various branch instructions in a bytecode interpreter (forward, backward, short, long, etc.). I think I first read about it in Knuth, but you don't need an algorithm bible to implement it. Very simple, very handy.
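For anyone who hasn't seen it, here is a minimal sketch of the inside-out shuffle with and without Sattolo's variation (the function names and the `single_cycle` flag are my own, not from any particular source):

```python
import random

def inside_out_shuffle(source, single_cycle=False):
    """Inside-out Fisher-Yates: build a shuffled copy of `source`.
    With single_cycle=True (Sattolo's variation) the result, viewed as a
    permutation of positions, has exactly one cycle."""
    a = []
    for i, x in enumerate(source):
        if single_cycle and i > 0:
            j = random.randrange(i)      # Sattolo: j strictly below i
        else:
            j = random.randrange(i + 1)  # standard: j may equal i
        if j == len(a):
            a.append(x)                  # x stays in place this round
        else:
            a.append(a[j])               # move the old occupant of slot j up...
            a[j] = x                     # ...and drop x into slot j
    return a

def cycle_count(a):
    """Number of cycles in the permutation i -> a[i]."""
    seen, cycles = set(), 0
    for start in range(len(a)):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = a[i]
    return cycles
```

With `source = range(n)` and `single_cycle=True`, `cycle_count` on the result always comes out as 1.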
petters 5 hours ago 2 replies      
On creating a single-cycle permutation from a shuffle:

Take 0, 1, 2, 3, 4, 5 and shuffle it:

4, 3, 0, 5, 1, 2

Now you can read out a permutation:

4 → 3

3 → 0

0 → 5

5 → 1

1 → 2

2 → 4

It will only have one cycle.
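This construction is easy to check mechanically; a small sketch of the same reading (function name is mine):

```python
import random

def cycle_from_shuffle(n):
    """Shuffle [0, n), then map each element to its right-hand
    neighbour, wrapping around at the end - as in the comment above,
    where 4, 3, 0, 5, 1, 2 gives 4 -> 3, 3 -> 0, ..., 2 -> 4."""
    order = list(range(n))
    random.shuffle(order)
    return {order[k]: order[(k + 1) % n] for k in range(n)}
```

Following the map from any starting element visits all n elements before returning to the start, i.e. the permutation has exactly one cycle.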

svat 4 hours ago 0 replies      
This is a nice post and I hadn't heard of Sattolo's algorithm before. The proof is a bit long though. The reference linked from Wikipedia: http://algo.inria.fr/seminars/summary/Wilson2004b.pdf proves the correctness of Sattolo's algorithm in three sentences. I found it fairly easy to understand, while I didn't manage to read through the linked post in detail to get the same level of understanding. Let me try to explain the proof I understood, without assuming mathematical background, but instead introducing it. (I'll use 0-based indices as in the post, instead of 1-based indices that mathematicians would use.)

# What is a permutation?

There are (at least) two ways to think of (or define) a permutation:

1. A list (an ordering): a specific order for writing out elements. For example, a permutation of the 10 elements [0, 1, 2, 3, …, 9] means those 10 elements written out in a particular order: one particular permutation of 0123456789 is 7851209463. In a computer, we can represent it by an array:

  i    0 1 2 3 4 5 6 7 8 9
  a[i] 7 8 5 1 2 0 9 4 6 3
2. A reordering. For example, the above permutation can be viewed as "sending" 0 to 7, 1 to 8, and in general i to a[i] for each i.

Instead of describing this reordering by writing down 10 pairs 0→7, 1→8, …, 7→4, 8→6, 9→3, we can save some space by "following" each element until we come back to the beginning: the above becomes a bunch of "cycles":

- 0→7→4→2→5 (as 5→0 again) (note we could also write this as 4→2→5→0→7 etc., only the cyclic order matters)

- 1→8→6→9→3 (as 3→1 again)

You can think of cycles the way you think of circular linked lists. This particular permutation we picked happened to have two cycles.

# What is a cyclic permutation?

A cyclic permutation is a permutation that has only one cycle (rather than two cycles as in the above, or even more cycles). For example, consider the permutation 8302741956:

  i    0 1 2 3 4 5 6 7 8 9
  a[i] 8 3 0 2 7 4 1 9 5 6

If we follow each element as we did above, we get 0→8→5→4→7→9→6→1→3→2 (and back to 0), where all 10 elements are in a single cycle. This is a cyclic permutation.

Our goal is to generate a random cyclic permutation (and in fact uniformly at random from among all cyclic permutations).

# Sattolo's algorithm

Note that in a cyclic permutation of [0, ..., n-1] (in our example above, n=10), for the highest index n-1, there will be some smaller j such that a[j]=n-1 (in the example above, a[7]=9). Now if we swap the elements at positions n-1 and j (which in the example above is:

  Before                       After
  i    0 1 2 3 4 5 6 7 8 9    i    0 1 2 3 4 5 6 7 8 9
  a[i] 8 3 0 2 7 4 1 9 5 6    a[i] 8 3 0 2 7 4 1 6 5 9

where we swapped a[7]=9 and a[9]=6 to make a[7]=6 and a[9]=9), then in general we get a[n-1]=n-1, and a[0]…a[n-2] form a cyclic permutation of [0…n-2]. In the above example, in the "after" case, if we ignore i=9 and consider only positions 0 to 8, we have the cycle 0→8→5→4→7→6→1→3→2. (This is our original cycle 0→8→5→4→7→9→6→1→3→2 with 9 "removed", as we'd do when deleting an item from a linked list.)

This holds in reverse too: if we had started with the cyclic permutation of [0, …, 8] that is in the "after" column above, added a[9]=9, and swapped a[9]=9 with a "random" element a[7]=6, we'd get the cyclic permutation of [0, …, 9] that is the "before" column.

In general, you can convince yourself that there is a unique way of getting any cyclic permutation on [0, …, n-1] by starting with a cyclic permutation on [0, …, n-2], considering a[n-1]=n-1, picking a particular index j in 0 ≤ j ≤ n-2, and swapping a[n-1] and a[j].

This gives the following algorithm, which we've already proved is correct (or derived, rather):

  def random_cycle(n):
      a = [i for i in range(n)]       # For all i from 0 to n-1 (inclusive), set a[i] = i
      for i in range(1, n):           # For each i in 1 to n-1 (inclusive),
          j = random.randrange(0, i)  # Pick j to be a random index in the range 0 to i-1, inclusive
          a[i], a[j] = a[j], a[i]     # Swap a[i] and a[j]
      return a
In the post linked above, you swap with a random element that is "ahead", instead of one that is "behind"; also you start with a list of length n and shuffle it according to the randomly generated cyclic permutation of [0…(n-1)] instead of simply generating the permutation. From the post:

  def sattolo(a):
      n = len(a)
      for i in range(n - 1):
          j = random.randrange(i+1, n)  # i+1 instead of i
          a[i], a[j] = a[j], a[i]
This is slightly different, but the proof is similar: in fact this is the algorithm (except going downwards) that is proved correct in the linked paper. (And even if it is not obvious to you that the two algorithms are equivalent, you have an algorithm that generates a random cycle and is just as easy to code!)
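Either version can be sanity-checked by following the cycle; here is a quick check of my own, reusing the post's `sattolo`:

```python
import random

def sattolo(a):
    n = len(a)
    for i in range(n - 1):
        j = random.randrange(i + 1, n)  # i+1 instead of i
        a[i], a[j] = a[j], a[i]

def is_single_cycle(a):
    """Follow i -> a[i] starting at index 0; a single cycle returns
    to 0 only after visiting all len(a) indices."""
    i, steps = 0, 0
    while True:
        i = a[i]
        steps += 1
        if i == 0:
            return steps == len(a)

a = list(range(100))
sattolo(a)
assert is_single_cycle(a)
```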

Music Fandom Maps nytimes.com
100 points by pmcpinto  9 hours ago   36 comments top 11
sushisource 5 hours ago 1 reply      
As with any map like this, the most startling thing to me is how the south really is a totally different culture, almost a totally different country.
no_gravity 5 hours ago 3 replies      
Since I run https://www.music-map.com this naturally caught my eye.

Unfortunately, there is no word on the methodology except that they did it with "the help of YouTube's geocoded streaming data". Does anybody know what that is?

santaclaus 35 minutes ago 0 replies      
It would be cool to see the inverse of this -- a service that analyzes your Spotify history and guesses where you are from.
santaclaus 41 minutes ago 0 replies      
What is the random zip code in eastern Washington that really likes to bump Future?
samat 7 hours ago 3 replies      
Is this from some public YouTube data/API? Would really like to do this for other countries.
nissimk 3 hours ago 0 replies      
The integrated youtube videos are really nice in this page. I like this best as an introduction to music I haven't heard. I like Kevin Gates 2 Phones.
the_cat_kittles 6 hours ago 2 replies      
are these maps using county boundaries? i would prefer something like kernel density or normalizing for county population or something... but still very cool. mississippi louisiana alabama georgia seem to be one pole, and everywhere else the other.
GoodAdmiral 2 hours ago 0 replies      
I mean yeah their entire discography is pretty religious. Makes sense. I do enjoy it while being non-religious myself however.
Shivetya 2 hours ago 1 reply      
Okay, took me a minute to understand it. So this is only based on YouTube views? I am more interested in radio, and if we could ever map iTunes sales to regions by song, that would be impressive too. Another good area to explore is what various college radio stations are playing across the country.

With regards to youtube, some genres of music have been more aggressive using the medium than others and even some older artists have barely accepted the digital age fully or took a long time to do so.

landonalder 6 hours ago 1 reply      
Montanans really love their Eminem
vmarshall23 6 hours ago 2 replies      
Wow. Other than Metallica, I don't own an album by any of these people, and other than Metallica and Michael Jackson, I don't think I'd be able to name a single song by the few others whose names I do recognize.

Oh, and I'm using the word "album". That probably explains it all ... :-)

Launch HN: CocuSocial (YC S17) Marketplace for cooking classes at restaurants
41 points by ys1715  5 hours ago   22 comments top 11
iooi 3 hours ago 1 reply      
I was browsing through the available classes and while I thought there were a ton of duplicates it turns out it's the same class on a different date.

It would make more sense for users to pick a class first and then see when it's available. So when browsing through classes you would see a list of the types of classes available (ravioli, sushi, etc), and once a user selects a class they can finalize their decision on the date.

Otherwise, browsing through classes is a bit annoying, having to manually de-duplicate things you've seen before and check them for minor details like: is this hand-rolling sushi class the same one, or is it with a different instructor? A different restaurant?

Another suggestion, highlight the name of the restaurant. The restaurant hosting the event is bound to have more recognition than the instructor.

Make separate profile pages for all instructors, and have them linked straight from the cards that display classes on /products/

Where are the stars coming from? Every single instructor has 5 stars but there are no reviews and it makes everything come off a bit disingenuous. Get rid of those until you have real reviews.

joekrill 1 hour ago 0 replies      
I know you mentioned in your description that this is only in New York so far, but that is not at all clear when visiting the site.
ludicast 4 hours ago 1 reply      
Looks great. I'm in New York myself and could see trying your service with the wife.

Working in the restaurant "niche" myself, and think this is a fun idea you got here.

My one comment is that possibly your site should mention the actual restaurant on the front page, before drilling down, sort of how goldbely does. It makes it more enticing, plus the logistics of your startup means that there's little chance people will go off-platform to deal with the restaurant 1-1 (and vice-versa).

Also, I think using hotel kitchens is a smart idea. They are way more roomy than most restaurant kitchens, so you can accommodate more folks (obviously...).

haaen 1 hour ago 1 reply      
Sam Altman once said that YC only accepts applicants if they have the potential to become billion dollar companies.

What is CocuSocial's growth path to nine zeros?

d--b 3 hours ago 1 reply      
Just a comment: don't present yourself as a marketplace, you're not the kayak of food classes.

Your added value is that you're offering high quality cooking classes yet affordable (and that cannot be found elsewhere!)

jroseattle 1 hour ago 0 replies      
Cool idea. In terms of adoption, I found there was too much information to know what to do right away (like, instinctively.)

As a suggestion, cut down on the UI components for choosing to book a class. As a comparative, take a look at the approach by AirBnB.

Good luck!

loco5niner 1 hour ago 0 replies      
Some quick initial feedback. The bounciness of the videos on the front page was a bit jarring to me initially. Good luck with your venture!
Jugurtha 50 minutes ago 0 replies      
Hi, Billy..

There's that piece of "folk wisdom" about making sure company names don't mean unexpected things in other languages..

"Cocu", in French, a popular language, means "cuckold", as others have pointed out. As a French speaking person, when I read "CocuSocial", the first thing that popped into my mind was "social platform for swingers". I was a bit perplexed and went "Hmm, YC incubated a Bodily Fluids Exchange as a Service startup, that's interesting".

What are your thoughts on that? Is this deliberate to cause a certain break point of thoughts and titillate the inquisitive mind?

zappo2938 4 hours ago 1 reply      
Do you need a former chef, current JavaScript / Node / PHP developer on your team?
nocgzh1 4 hours ago 1 reply      
This seems like a great idea! Do you have any short-term plans to expand beyond New York?
GuiA 4 hours ago 3 replies      
"Cocu" in French slang translates to "cuckold" (and with "social" being the same word, CocuSocial sounds extremely humorous). Just thought I'd mention it in case you ever are thinking about expanding internationally.
GnuPG 2.1.23 released gnupg.org
62 points by jwilk  5 hours ago   16 comments top 6
lima 4 hours ago 3 replies      
> gpg: Options --auto-key-retrieve and --auto-key-locate "local,wkd" are now used by default.

> Note: this enables keyserver and Web Key Directory operators to notice when a signature from a locally non-available key is being verified for the first time [...]

I'm not sure if I agree with that decision. Many people intentionally keep their keys off the key servers, and GPG is used by many internal signing applications (internal repos, etc.) where the keys aren't public, and enabling this by default would leak lots of metadata about the internal infrastructure. Last time I checked, the key server protocol was unencrypted, I hope this is no longer the case.

On the other hand, it will improve usability for "normal" usage and you'll still be prompted to verify the fingerprint for unknown keys.

snakeanus 6 minutes ago 0 replies      
I wonder if they have any plan to support chacha20/poly1305 for encryption and sha2/keccak/blake for fingerprints.
wonks 4 hours ago 2 replies      
"gpg: "gpg" is now installed as "gpg" and not anymore as "gpg2". If needed, the new configure option --enable-gpg-is-gpg2 can be used to revert this."

Aha. I was wondering if this would be deemed a mere sequel to GPG 1 forever.

eklitzke 3 hours ago 0 replies      
For those concerned about the auto key retrieval mechanism, add the following to your gpg.conf:

 no-auto-key-retrieve auto-key-locate local

jwilk 3 hours ago 0 replies      
> Options --auto-key-retrieve and --auto-key-locate "local,wkd" are now used by default.

Sounds like a security hole to me, especially for systems that automatically verify signatures against a curated keyring.

mjevans 4 hours ago 0 replies      
I like that it will automatically try to download missing public keys.

HOWEVER, a very useful feature (which it might have but wasn't mentioned in the upgrade notes; my own local manpage only has --list-public-keys (and shorter versions) and --locate-keys (which has wording that makes me think it'll also check online)) would be to search, local only, for any key (public or private) matching a given fingerprint hex or other field string. The changelog makes me think that to get that behavior I'd presently have to use: gpg --auto-key-locate local --locate-keys "example@example.com"

The First Law of Complexodynamics (2011) scottaaronson.com
33 points by albertzeyer  6 hours ago   5 comments top 3
schiffern 1 hour ago 0 replies      
I thought that sounded familiar. Geoffrey West gave a talk there, delivering a much better explanation of his research on socio-biological scaling, growth, and sustainability than the pop sci treatments occasionally posted here.


mannykannot 4 hours ago 0 replies      
I think it looks a lot less paradoxical if you ask "why do interesting things (seem to) have 'intermediate' levels of entropy?"

It is not exactly the same issue, because the original formulation suggests that everything with an 'intermediate' entropy is interesting, which appears to be a tougher case to make, though I imagine there's some definition of 'intermediate' and 'interesting' from which an argument may be made.

albertzeyer 6 hours ago 2 replies      
I've often wondered whether entropy is really the measure you want, and whether something like what he describes as complexity is actually what you want. It's the first time I have heard of this concept and that it's formally defined as sophistication or logical depth. The only downside is that this is not computable.

This led me to the question of whether there is a simpler variant of complexity (with similar properties) which is computable: https://math.stackexchange.com/questions/2387690/simple-comp...

The First Eclipse Prediction: Act of Genius, Brilliant Mistake, or Dumb Luck? atlasobscura.com
30 points by diodorus  6 hours ago   6 comments top 2
celticninja 5 hours ago 2 replies      
Reminds me of one of my favourite Tintin books, where he saves himself and his companions from being burnt on a pyre by Incas when he uses an eclipse to make them believe he can control the sun.


Scala Vector operations aren't Effectively Constant time lihaoyi.com
45 points by dmit  6 hours ago   28 comments top 11
kbenson 3 hours ago 1 reply      
I'm a little disappointed that I bothered to read what turned out to be a massive act of pedantry.

A few sections are devoted to showing that O(log32(n)) is the same as O(log(n)). This is entirely correct, as shown, because the difference between them is reducible to a constant multiplier, which big-O notation does not care about. It's also, judging from the problematic sources given, an entirely made-up argument, given that I didn't find O(log32(n)) mentioned in any of them that I was able to view. Plenty of notes in them about it being logarithmic though...

I understand making sure people know that it's not constant time, but when all the examples of problematic descriptions you give make sure to say that it is logarithmic time and then go on to explain why it's "effectively constant", you're railing against a non-issue with an absurdly complicated argument.
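For concreteness, the change of base is exactly a constant multiplier, which big-O discards; a quick check of my own:

```python
import math

# log32(n) = log2(n) / log2(32) = log2(n) / 5, so O(log32(n)) and
# O(log(n)) differ only by a constant factor of 5.
for n in (2**10, 2**20, 2**31):
    assert math.isclose(math.log(n, 32), math.log2(n) / 5)
```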

kevinwang 4 hours ago 2 replies      
Effectively constant seems like a reasonable thing to say if the real world number of operations is upper bounded by six. I don't interpret that claim as making any claim about the theoretical asymptotic complexity of an algorithm.

Of course, as the author notes near the end, this also means that any real world algorithm running on a real world data type which is bounded in size can also be considered "effectively constant", which does throw doubt onto the effectiveness of the term.

I guess in the real world we should use graphs in units of time instead of classes of functions to discuss performance based on input size, since that's what we care about.

Also interesting to note is that"effectively constant", although extremely hand-wavy and not rigorous, is used even by computer scientists to denote extremely slow growing functions. A professor once used similar words to describe the complexity of the inverse Ackerman function: https://en.m.wikipedia.org/wiki/Ackermann_function#Inverse

smitherfield 3 hours ago 2 replies      
Even a statically-allocated C array doesn't "really" have constant-time lookup with respect to size; the more elements it has the greater the rate of cache misses and page faults on lookups becomes (I'm pretty sure this can be proven true of all data structures). Moreover the possibility of cache misses and page faults means that while lookups take a fixed number of CPU cycles on average, any single lookup might take between <1 and thousands of cycles. If you truly need your data accesses to take a fixed number of CPU cycles, you have to handwrite assembly and put everything in registers. And even that doesn't account for instruction reordering and the possibility of interrupts or context switches.

I would assume the reason the Scala website calls Vector's O(log(n)) element lookup "effectively constant" is that in real-world use (as opposed to a hypothetical ideal von Neumann machine), the slowdown of the lookup algorithm with respect to n is negligible compared to the decrease in cache and allocation efficiency with respect to n.

If you're creating some sort of ultra-fine-grained real-time system where everything needs to take a fixed number of clock cycles, Scala would be a pretty terrible tool for the job. For that sort of thing you'd want MISRA C or assembly running on bare metal, or an FPGA or ASIC.

pierrebai 3 hours ago 2 replies      
This mainly shows that the author mistakes the purpose of Big-O notation. It's a measure of asymptotic performance, so there needs to be an asymptote to approach. If the range of values you're measuring is small (and a non-constant factor that varies between 1 and 6 is small), then it's the use of Big-O to characterize the algorithm that is wrong in the first place.

It's a classic mistake. There are many pitfalls in performance measurement, including using Big-O when the range is small, or ignoring a constant factor when the constant is big. Both have led people down the wrong path, for example using a theoretically faster algorithm that is slower in practice because, in the typical use case, the constant outweighs the complexity factor.

The counter-example to the author's argument would be to say that a binary decision is linear over the input range.

Asdfbla 2 hours ago 0 replies      
Seems like a pedantic article, but I liked his remark that the 'effectively constant' argument can of course be extended to any algorithm whose input size is bounded, and therefore shouldn't really be used to describe algorithmic complexity, even in real-life implementations (except maybe for something like the inverse Ackermann function).
deepsun 2 hours ago 1 reply      
The Scala doc claims that their algorithm is O(log32(N)), where N is no greater than 2^31, so it's O(6) = O(1) -- "effectively constant".

The author counters that by this logic all real-world algorithms are constant: for example, bubble sort is O(N^2), where N is no greater than 2^31, so it's O(2^62) = O(1) -- also "effectively constant".
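The arithmetic behind both claims is trivial to check (a Python sketch, illustrative only; the 2^31 bound is the JVM's Int-indexed array limit):

```python
import math

MAX_LEN = 2**31 - 1  # JVM collections are indexed by a 32-bit signed Int

# Scala's Vector is a 32-way branching trie, so a lookup walks about
# log32(n) levels -- roughly 6 for the largest possible Vector.
depth = math.log(MAX_LEN, 32)
print(depth)  # ~6.2 trie levels at most

# The same bounding argument applied to bubble sort's worst case:
worst_case_steps = MAX_LEN**2
print(math.log2(worst_case_steps))  # ~62, i.e. "O(2^62) = O(1)" too
```

The only difference between the two is the size of the hidden constant, which is exactly the author's point.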

wohlergehen 4 hours ago 1 reply      
Has anyone actually benchmarked the vector lookup for various vector sizes in scala?

That would straightforwardly determine whether the runtime is "effectively constant" or "effectively logarithmic", e.g. because it is really O(a + b*log(n)) with a >> b.
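One way to make that distinction concrete: fit the model t(n) = a + b*log2(n) to the measured timings and compare a against b. A hypothetical sketch with synthetic timings standing in for real Scala benchmark data (the numbers a_true and b_true are made up):

```python
import math
import random

random.seed(0)

# Synthetic "measurements": a large constant overhead plus a small log term.
a_true, b_true = 100.0, 2.0  # hypothetical nanoseconds
sizes = [2**k for k in range(5, 31, 5)]
times = [a_true + b_true * math.log2(n) + random.gauss(0, 0.1) for n in sizes]

# Least-squares fit of t = a + b*log2(n) via the normal equations.
xs = [math.log2(n) for n in sizes]
mean_x = sum(xs) / len(xs)
mean_t = sum(times) / len(times)
num = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, times))
den = sum((x - mean_x) ** 2 for x in xs)
b = num / den
a = mean_t - b * mean_x

print(f"a = {a:.1f}, b = {b:.1f}")  # a >> b suggests "effectively constant"
```

If the fitted b is tiny relative to a over realistic sizes, "effectively constant" is a fair description; if not, "effectively logarithmic" wins.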

DSrcl 3 hours ago 1 reply      
Labelling an operation whose number of steps is upper-bounded by 6 as "effectively constant" is different from considering any real-world algorithm with bounded input size.

Calling Scala's immutable vector ops "effectively constant" is not a stretch/hack by any means, considering we also say the same for integer addition and multiplication.

stcredzero 2 hours ago 0 replies      
By now certain elements of a modern programming environment have become pretty apparent. (Vectors and maps, for example) Could there be something gained by supporting these directly in the architecture?
flgr 3 hours ago 0 replies      
This is why I've never liked saying that something has a "run-time of O(log(n))", since that's rarely true: the assumption is that all machine instructions take pretty much the same time, which is not the case. CPU instructions involving cache misses are multiple orders of magnitude more expensive than others.

I think it makes much more sense to talk about concrete operations (or cache misses) instead. Sounds like their implementation has O(log(n)) cache misses.

marvinalone 4 hours ago 0 replies      
Wow, you really showed that straw man what's what.
       cached 9 August 2017 22:02:01 GMT