hacker news with inline top comments - 21 Aug 2017 - Best
Explaining React's license facebook.com
945 points by y4m4b4  2 days ago   561 comments top 4
kevinflo 2 days ago 17 replies      
I love love love love love react as a technology, but this is just awful. I believe any developer not on Facebook's payroll still contributing to React or React native at this point has a moral obligation to stop. I personally feel like such a fool for not taking all this seriously before the ASF gave me a wakeup call. React is a trojan horse into the open source community that Facebook purposely and maliciously steered over time to deepen their war chest. Maybe that's an overblown take, but they had a perfect opportunity here to prove me wrong and they didn't. The defensive cover they present here feels so paper thin.

Even if we paint all of their actions in the most favorable possible light, and even if the clause is a paper tiger as some have claimed, it doesn't matter. This is not how open source should work. We should not have to debate for years if a project's license is radioactive. Especially individual devs like myself who just want to use a great tool. We should be able to just use it, because it's open and that's what open means. This is so much worse than closed. It's closed masquerading as open.

DannyBee 2 days ago 4 replies      
So, i feel for them, having watched Google's open source projects be targeted by patent trolls in the past. But i really don't think this is the way forward.

A few things:

1. If you want to suggest you are doing this as part of an attempt to avoid meritless litigation, you really should give concrete examples of that happening. Otherwise, it comes off as a smoke screen.

2. The assertion is that if widely adopted, it would avoid lots of meritless litigation. This is a theoretically possible outcome. Here's another theoretically possible outcome of wide adoption of this kind of very broad termination language: Facebook is able to use other people's technology at will because nobody can afford to not use their stuff, and no startup that they decide to take technology from, and say "no more facebook/react/etc for you", could realistically launch an effective lawsuit before they died. Assume for a second you think Facebook is not likely to do this. If widely adopted, someone will do it. Nobody should have to worry about this possibility when considering whether to adopt particular open source software.

(there are other theoretical outcomes, good and bad).

It's also worth pointing out: None of this is a new discussion or argument. All of the current revisions of the major licenses (Apache v2, GPLv3) went through arguments about whether to use these kinds of broader termination clauses (though not quite as one sided and company focused), and ultimately decided not to, for (IMHO good) reasons. I'm a bit surprised this isn't mentioned or discussed anywhere.

These kinds of clauses are not a uniform net positive, they are fairly bimodal.

jwingy 2 days ago 2 replies      
I wonder how Facebook would feel if all the open source software they currently use incorporated the same license. I bet it would deter them from enjoying much of the code they built their business on. This stance seems pretty antithetical to the goal and spirit of open source software and I really hope it's not the beginning of other companies following suit and 'poisoning' the well.
eridius 2 days ago 3 replies      
> We've been looking for ways around this and have reached out to ASF to see if we could try to work with them, but have come up empty.

There's a pretty obvious solution to this: relicense React. The fact that Facebook isn't even considering that is a pretty strong indication that they "weaponized" their license on purpose.

> To this point, though, we haven't done a good job of explaining the reasons behind our BSD + Patents license.

I think we already understand the reasoning behind it.

> As our business has become successful, we've become a larger target for meritless patent litigation.

And the solution you chose stops merit-ful litigation as well.

> We respect third party IP, including patents, and expect others to respect our IP too.

Clearly you don't, because you've intentionally designed a license to allow you carte blanche to violate other companies' patents if they're dependent enough upon React to not be able to easily stop using it.

Vue.js vs. React vuejs.org
707 points by fanf2  2 days ago   451 comments top 7
pier25 1 day ago 14 replies      
We moved away from React to Vue about 8 months ago and everyone on the team is a lot happier.

First reason is we hate JSX. It forces you to write loops, conditionals, etc, outside of the markup you are currently writing/reading. It's like writing shitty PHP code without templates. It also forces you to use a lot of boilerplate like bind(), Object.keys(), etc.

Another problem with React is that it only really solves one problem. There is no official React router and we hated using the unofficial react-router for a number of reasons. A lot of people end up using MobX too.

With Vue there is no need to resort to third parties for your essential blocks. It provides an official router and store called Vuex, which IMO blows Redux out of the water when combined with Vue's reactive data.

Vue docs are probably one of the best I've used. They provide technical docs, plus excellent narrative docs (guides) for all their projects (Vue, Router, Vuex, templates, etc).

I won't say that Vue is perfect, but we would never go back to React.

If you don't like Vue but want to get out of React, check out Marko, the UI library by eBay. It's better in every way than Vue or React, except that the community and ecosystem are almost nonexistent.


a13n 2 days ago 19 replies      
You'll see quotes in this thread like "The demand for both React and Vue.js is growing tremendously" thrown around. It's good to check out npm install stats to get an unopinionated comparison.


In reality, React is downloaded roughly 4-5x more than Angular and 7-8x more than Vue. In August so far, React has 75% market share among these three libs. Interestingly, this share has grown in August compared to both last month (July) and the beginning of the year (January).
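The arithmetic behind a share figure like this is easy to reproduce. A minimal sketch, using made-up download counts (the real numbers are not given in the comment), of how such a market-share percentage is computed:

```javascript
// Sketch: relative "market share" among a set of libraries from download
// counts. The counts below are illustrative placeholders, not real stats.
function marketShare(downloads) {
  const total = Object.values(downloads).reduce((a, b) => a + b, 0);
  const shares = {};
  for (const [pkg, count] of Object.entries(downloads)) {
    shares[pkg] = Math.round((count / total) * 100); // percent, rounded
  }
  return shares;
}

const shares = marketShare({ react: 1500000, angular: 350000, vue: 200000 });
console.log(shares); // { react: 73, angular: 17, vue: 10 }
```

Real weekly counts can be pulled from npm's public download-counts API and fed into the same function; the share calculation itself is just this.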

While this thread and the license thread might indicate that React is dying, it's not. It's growing.

If Vue is going to be what React is today, it has quite a long way to go.

Kiro 1 day ago 7 replies      
I've built semi-large applications in both Vue.js and React. I like both but prefer React.

For me Vue.js is like a lightweight Angular 1, in a good way. It's very intuitive and you can start working immediately. With two-way binding, however, it easily ends up in confusion about where the state lives, and I've run into a lot of implicit state changes wreaking havoc. The declarative nature of React definitely wins here, especially working with stateless functional components. If you're serious about Vue you should adhere to unidirectional bindings and components, and use Vuex.

The best thing about Vue.js for me is the single file components. It's such a nice feeling to know that everything affecting a certain component is right before your eyes. That's also the reason I started adapting CSS-in-JS in my React components.

The biggest problem for me with Vue.js is the template DSL. You often think "how do I do this complicated tree render in Vue's template syntax? In JSX I would just use JavaScript". For me, that was the best upgrade going from Angular to React and it feels like a step backwards when using Vue.js.

blumomo 1 day ago 2 replies      
In this thread people are fighting about their _opinions_ why they use Vue.js or React. And why X is really better than Y.

In reality these programmers don't want to have the feeling they might have made the wrong choice when they used X instead of Y. The idea that they might have made the poorer choice hurts so much that they need to defend their decision heavily, while in reality picking ReactJS or Vue.js is like ordering pizza or pasta. You usually don't want both at the same time, so you need to explain why pizza is better than pasta tonight; only that you usually have to stick around longer with Vue.js or ReactJS once chosen. Enjoy your choice and solve real problems, but stop fighting about it, programmers. Pasta and pizza will always both win.

spion 2 days ago 2 replies      
To me the whole idea of client-side HTML templates seems bad. They start out easy enough, but then they either limit you in power or introduce new and weird concepts to replace things that are easy, familiar and often better designed in the host language.

Here is an example on which I'd love to be proven wrong:


It's a generic spinner component that waits on a promise, then passes the fetched data to any other custom JSX. It can also take onFulfill and onReject handlers to run code when the promise resolves.

The concrete example shown in the fiddle renders a select list with the options received after waiting for a response from the "server". An onFulfill handler pre-selects the first option once data arrives. The observable selected item is also used from outside the spinner component.

With React+mobx and JSX it's all simple functions/closures (some of them returning JSX), lexical scope and components. With Vue I'm not sure where to start - I assume I would need to register a custom component for the inner content and use slots?
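The fiddle itself isn't included above, but the pattern spion describes can be sketched framework-free: a generic wrapper that awaits a promise, then hands the result to an arbitrary render closure. Everything here (names, callbacks) is hypothetical; plain functions stand in for components returning JSX:

```javascript
// Framework-free sketch of a generic "spinner": await a promise, optionally
// fire onFulfill/onReject side-effect handlers, then hand the data to a
// render closure. In the React version, render would be a closure returning
// JSX; here it just returns a string so the shape of the pattern is visible.
async function spinner(promise, { render, onFulfill, onReject }) {
  try {
    const data = await promise;
    if (onFulfill) onFulfill(data); // e.g. pre-select the first option
    return render(data);            // closure over lexical scope, as with JSX
  } catch (err) {
    if (onReject) onReject(err);
    return "error";
  }
}

// Hypothetical usage mirroring the select-list example in the comment:
// spinner(fetchOptions(), {
//   render: (opts) => opts.join(","),
//   onFulfill: (opts) => { selected = opts[0]; },
// });
```

The point of the comment survives the translation: the inner content is just a closure over the surrounding scope, with no slot registration needed.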

kennysmoothx 2 days ago 2 replies      
I used React for a few years and it was great and powerful, but there were many things I disliked. In particular, I was not a fan of JSX. I liked React but I did not feel comfortable using it.

When I first saw VueJS I had a hard time understanding how it would be any better than React, that is until I saw single file components.


I fell in love with the elegance of being able to separate my HTML, JS, and styles for a single component. It seemed /right/ to me.

In any case, I've been using VueJS ever since for my new projects moving forward and I'm very happy with it. It has everything I would ever need from React but in what I feel is a more polished and thought-out way.

Just my two cents :)

keyle 2 days ago 5 replies      
I've used both. What makes me pick Vue in the end is the fact that there is no compiler needed, no JSX, and none of the nonsense that goes with that.

If you want a full blown huge application to last years, then go Angular... Although who knows if Angular will be there in 5 or so years.

There is no perfect library/framework but I love Vue because Vue does exactly what it says on the tin.

Firefox Focus A new private browser for iOS and Android blog.mozilla.org
675 points by happy-go-lucky  2 days ago   328 comments top 21
progval 2 days ago 9 replies      
According to F-Droid [1], it contains `com.google.android.gms:play-services-analytics`.

[1]: https://gitlab.com/fdroid/rfp/issues/171#note_30410376

lol768 2 days ago 2 replies      
Have been using this a while; it's really nice as the default browser to open links in. Having the floating button to clear everything is neat and I like the UI design. It's also really fast.

I'd like to see better support for getting SSL/TLS info - why can't I tap on the padlock and get the certificate info (EV, OV, DV?), cipher suite, HSTS etc?

hprotagonist 2 days ago 3 replies      
I installed Firefox Focus for iOS simply for its content blocker. I still prefer using mobile safari, but augmented with three content blockers:

- Firefox Focus, which blocks all sorts of stuff

- 1Blocker, which blocks all sorts of stuff

- Unobstruct, which blocks Medium's "dickbar" popups.

rcthompson 2 days ago 2 replies      
This is useful to use as your default browser. It has a quick way to open the same link in another browser, so you can use it as a sort of quarantine to vet unknown links before exposing your main browser and all its juicy user data to a new website.
ghh 2 days ago 2 replies      
Focus does not seem to erase your history in a way you may expect. Try this on Android:

- Erase your history.

- Go to HN, click any link you haven't clicked before.

- Wait for it to load.

- Erase your history. Make sure you see the notification "Your browsing history has been erased".

- Go to HN again, and see the link you've just clicked still highlighted as 'visited'.

Xoros 2 days ago 2 replies      
How is this news? I installed it weeks ago on my iPhone. I don't understand why Mozilla just announced it now. Maybe it's a new version.

On the browser itself: I launched it, navigated to a URL, closed it, relaunched it, typed the first characters of my previous URL, and it auto-completed it. From my history, I guess.

So it's not like incognito mode on other browsers. (Haven't retested again)

bdz 2 days ago 8 replies      
I wish open source projects would publish the compiled .apk file, not just the source code.

If I want to install this on my Fire HD I either have to download the .apk from some dodgy mirror site or install Google Play with some workaround on the Fire HD, because Firefox Focus is not available in the Amazon App Store. I mean, yeah, I can do both in the end, not a big deal, but I just want the .apk, nothing else.

computator 2 days ago 3 replies      
This would have been perfect for iPad 2s and 3s, on which Safari and the normal Firefox keep crashing under the weight of the current bloated web.

But alas, the "simple and lightweight" Firefox Focus actually requires a heavyweight 64-bit processor:

> Why aren't older Apple products supported? Safari Content Blockers (which include Firefox Focus) are only available on devices with an A7 processor (64-bit) or later. Only 64-bit processors can handle the extra load of content blocking, which insures optimal performance. For example, since the iPad 3 has an A5 processor, Firefox Focus is incompatible.[1]

Come on, iPad 2s and 3s are less than 5 years old. There has to be some way to keep the iPad 2 or 3 alive if all you want to do is browse the web.

[1] https://support.mozilla.org/en-US/kb/focus

cpeterso 2 days ago 1 reply      
Since I started using Firefox Focus for one-off searches, I'm surprised at how infrequently I really need to be logged into any websites to complete my task. Nice that Focus simply clears all those trackers and search history when I close it.
nkkollaw 2 days ago 5 replies      
So, if I understand this correctly... It's a regular browser, but like you're always in private mode + it's got a built-in ad blocker?

If I want to check Hacker News, let's say 5 times throughout the day, and feel like leaving a comment, I have to log in again, without autocomplete..?

Maybe I'm missing something.

fiatjaf 2 days ago 1 reply      
> For example, if you need to jump on the internet to look up Muddy Waters' real name

Best idea ever. That's the most common use case people have and one that's drastically underserved by current browsers.

If people can't get their browser to quickly open a link to simple stuff, it means the web is failing. If the web is failing they'll quickly jump over to sending images over WhatsApp or fall into the trap of using the Facebook app for all their needs that could be otherwise served by the web.

webdevatwork 2 days ago 1 reply      
Firefox Focus is great. It's amazing how much better web readability and performance gets when you block most of the adtech garbage.
ukyrgf 2 days ago 0 replies      
I love Focus. I wrote about it here[1], albeit poorly, but it just made me so happy to be able to use my phone again for web browsing. Sometimes I open Chrome and the tab that loads was something I was testing weeks prior... it's taken that big of a backseat to Firefox Focus.

[1]: https://epatr.com/blog/2017/firefox-focus/

x775 2 days ago 0 replies      
I have been using this for a while on one of my phones (OnePlus 5, newest version of OxygenOS) and am fairly satisfied with its overall performance. It works seamlessly for casual browsing, i.e. opening pages from Reddit or similar. I cannot help but feel, however, that the standard version with appropriate extensions (Disconnect, uBlock Origin and so forth) remains a better alternative to Focus in solving the very issues Focus seeks to address. I do very much love how closing the browser erases everything, though. It is worth mentioning that the ability to install extensions is exclusive to Android for now, so Firefox Focus has become my go-to browser for my iOS devices. If you have Android, the above is worth considering though!
byproxy 2 days ago 1 reply      
There is also the Brave browser, which I believe covers the same ground: https://play.google.com/store/apps/details?id=com.brave.brow...
st0le 2 days ago 1 reply      
Hasn't it been available for a while now?
gnicholas 2 days ago 2 replies      
I love Focus and now use it for almost all of my mobile googling. One thing that would be nice is a share extension, so that when I'm in Safari and see a link I want to open I can share it to Firefox Focus. Right now I have to "share" it to [copy], open Focus, and paste it in. Not a huge hassle, but would be nice to streamline.
noncoml 2 days ago 2 replies      
Looks awesome and fast. Exactly what's needed and expected from Mozilla. Thank you!

Can we have something similar for desktop as well?

api_or_ipa 2 days ago 3 replies      
Why can Firefox build a browser in 16 MB when every other app on my phone is 80+ MB?
wnevets 2 days ago 1 reply      
I've been using it as my default browser for Android for a while and I like it. The only thing I don't love is the notification saying the browser is open, it triggers my "OCD" . I understand why it's there but I wish there was some way around it.
bllguo 2 days ago 0 replies      
I've been loving focus. Fastest mobile browser I've used. Appreciate the privacy features also.

I set it to my default browser and keep chrome handy on the side.

E-commerce will evolve next month as Amazon loses the 1-Click patent thirtybees.com
601 points by themaveness  3 days ago   222 comments top 39
jaymzcampbell 3 days ago 4 replies      
Setting aside the madness that is the patent itself ever being granted, what I found most interesting on that post was that this could now (possibly) become an actual web standard in the future:

> the World Wide Web Consortium (W3C) has started writing a draft proposal for one click buying methods.

The W3C site itself has a number of web payment related proposals in progress[1]. The Payment Request API, in particular, looks pretty interesting (updated 2017-08-17). I wonder what a difference something like that would've made back in the day when I was bathed in Paypal SOAP.

[1] https://www.w3.org/TR/#tr_Web_Payments

tyrw 3 days ago 7 replies      
I ran an ecommerce company for about a year, and one click checkout was the least of our concerns when it came to Amazon.

The speed of delivery, prime benefits, brand recognition, and willingness to lose money on many if not most items are absolutely brutal to compete against.

I'm glad one click checkout will be more broadly available, but it's probably not going to make much of a difference...

NelsonMinar 2 days ago 1 reply      
The 1-Click patent was the genesis of a long debate between Jeff Bezos and Tim O'Reilly about software patents. It resulted in the formation of BountyQuest, a 2000-era effort to pay bounties for prior art for bad patents. Unfortunately it didn't really work out. But the history of arguing about software patents is pretty interesting. http://archive.oreilly.com/pub/a/oreilly//news/patent_archiv...
mseebach 2 days ago 7 replies      
The space (from my online shopping experience) seems to be divided between Amazon (one-click checkout, fast delivery, etc.) and everyone else (42-click checkout and one-week delivery, if you're lucky).

If the one-click patent was a major inhibitor of competition, I'd basically expect to see a lot of two-click checkout options. Instead I find myself creating a million redundant user accounts, telling people that my mother's maiden name is "khhsyebg" (she's got some Dothraki blood, it seems) and parsing "don't not uncheck the box if you wish to prevent us from causing the absence of non-delivery of our newsletter and also not abstaining from passing on your details to third parties".

dboreham 3 days ago 7 replies      
I have been buying from Amazon for 20 years and have not once used 1-Click.
pishpash 3 days ago 1 reply      
This patent prevented a nefarious checkout pattern across myriad potentially unscrupulous storefronts for more than a decade, so was it really so bad? ;)

Some days I feel Amazon was not only the world's largest non-profit organization but also among its most beneficent!

masthead 3 days ago 7 replies      
Still can't believe that this was a patent!
TheBiv 3 days ago 4 replies      
NOTE that the Registered Trademark of "1-Click" will still be valid and owned by Amazon


romanhn 3 days ago 0 replies      
"E-commerce will change forever" ... strong words. Amazon has features that are a much bigger value proposition than one-click purchases. I don't see this changing the landscape in any significant way.
wheaties 2 days ago 2 replies      
"They have proposed ways of storing cards and address data in the browser..."

Oh hell no! Just what we need, yet another reason for people to attack your browser. Don't we already suggest to never use the "remember your password" button? Now, it's "remember your credit card." No. Please, just no.

dpflan 3 days ago 0 replies      
When the news about Soundcloud's future emerged, discussion turned to how to help SC keep its roots and grow into what it can be, rather than be a Spotify competitor. The Amazon one-click patent was brought up as a way to allow buying the song / supporting the artist or record label you're enjoying.

Perhaps there is a chance now for SC (and others) to use this? (It'd be interesting to see how often the patent thwarted business decisions. Also, I wonder if this was considered in the funding round...)

Here is the comment: https://news.ycombinator.com/item?id=14991938

Here is the parent HN post: https://news.ycombinator.com/item?id=14990911

philfrasty 3 days ago 0 replies      
...e-commerce will change forever...

Simply from a legal standpoint this is BS. In some countries you have to display a whole bunch of information and terms to the customer before they can make the purchase.

Just because Amazon ignores this due to their size and $$$ doesn't mean everyone can.

10000100001010 2 days ago 0 replies      
I have never used one-click, but I have relatives that compulsively purchase off Amazon with one-click all the time. It is almost a drug to them, because they click a button and then stuff shows up at their door. For some users, removing all barriers except for a click is sufficient to get them to buy.
jwildeboer 2 days ago 0 replies      
As a former core developer of osCommerce, where our users were threatened with patent infringement over exactly this, I will order a nice glass of whiskey to celebrate that this thing is finally over. This one patent made me join the fight against software patents in Europe, which we sort of won in 2005.
novaleaf 2 days ago 1 reply      
anecdote: I use Amazon for practically all of my shopping, only supplementing it by going to a brick-and-mortar for food.

I have never used the "buy now" feature, so honestly I think its impact is a bit overblown.

Here are my reasons I never use it:

1) I do a lot of comparison shopping, so I like to review my orders before the final purchase. (in case I put something in my cart and then later added something better)

2) I want to make sure I don't order something under $35 and get stuck paying for expedited shipping (which is free for prime members over $35 in purchases)

3) I have a few addresses and cards on file, and want to make sure the order will use the right one.

4) I use the cart as a temporary list, anything that looks interesting during my shopping session gets thrown in there (or perhaps another browser window if doing comparisons).

drcube 2 days ago 2 replies      
This is a "feature" I actively avoid. Why in the world would anyone want to buy something online without a chance to review their purchase? Other web pages don't even let you leave the page without asking "are you sure?".
clan 3 days ago 7 replies      
I have always hated the thought that retailers stored my credit card information. Seems to be very common with US based shops.

If this gets any traction I will need to fight even harder to opt out.

I yearn for the day I can have one off transaction codes.

stretchwithme 2 days ago 0 replies      
Buying things with 1 click is not an Amazon feature I've ever cared to use.

The right product at the right price, fast. That's what matters.

amelius 2 days ago 0 replies      
Reminds me of the joke I read somewhere about a "half-click patent", where the purchase is done on mousedown instead of on click.
benmowa 2 days ago 0 replies      
"These are the ones [credit card processors] we have worked with in the past that we know use a card vault. Others likely support it too"

Note: the more common term is credit card tokenization, not just vaulting, and it is not required for 1-click if the merchant retains CC numbers itself, although that is not recommended due to PCI and breach liability.

summer_steven 2 days ago 0 replies      
This is almost like a patent on cars that go above 60 MPH. Or a website that takes less than 50 ms to load.

They have a patent on the RESULT of technology. The patent SHOULD be on THEIR VERY SPECIFIC IMPLEMENTATION of 1-click checkout, but instead it is on all implementations that result in 1-click checkout.

Patents really are not meant for the internet...

ComodoHacker 2 days ago 0 replies      
I'm sure Amazon has already filed an application for "Zero-click checkout". Something like "swipe over a product image in a 'V' pattern to checkout", etc.
drumttocs8 13 hours ago 0 replies      
Huh? 1-Click patent? Does this mean I can literally patent a design choice?
blairanderson 1 day ago 0 replies      
Businesses use that shit. They don't have time and often don't care about the little details.

Businesses are the customers you want.

vnchr 2 days ago 0 replies      
Would anyone like something built to take advantage of this? I'm open next week between contracts (full-stack JS), maybe there is a browser extension or CMS plugin that would make this feature easy to implement?
wodenokoto 2 days ago 0 replies      
Does Amazon even use this themselves? I have fewer clicks going from product page to purchase confirmation on Aliexpress.com than on Amazon.com.
samsonradu 2 days ago 0 replies      
Interesting to find out such a patent even exists. Does this mean the sites on which I have seen the one-click feature implemented were until now infringing the patent?
dajohnson89 2 days ago 0 replies      
The number of returns is surely higher for 1-click purchases: wrong address, wrong CC#, no chance to double-check you have the right size/color, etc.
nocoder 2 days ago 2 replies      
Does this mean the use of the term "1-click" will no longer be exclusive to Amazon or is that a part of some trademark type stuff?
tomc1985 2 days ago 0 replies      
Oh joy, now everyone's going to have that stupid impulse buy button. Yay consumerism, please, take my firstborn...
sadlyNess 2 days ago 0 replies      
Hope it's going to be added to the payments ISO standards. Along with the W3C move, would that be a fitting home?
perseusprime11 2 days ago 0 replies      
Amazon is eating the world. The loss of this patent will have zero net impact.
ThomPete 2 days ago 1 reply      
So quick product idea.

Make a Magento integration that allows ecommerce sites to implement it?

radicaldreamer 2 days ago 0 replies      
Anyone know if a company other than Apple currently licenses 1-Click?
likelynew 2 days ago 0 replies      
Has there been any court case for the validity of this patent?
yuhong 2 days ago 0 replies      
I remember the history on Slashdot about it.
minton 2 days ago 0 replies      
Please stop calling this technology.
> No one knows what Apple paid to license the technology [from Amazon]...

This is factually incorrect. Of course, there are executives at Amazon and Apple who know how much was paid to license the one-click patent.

Why PS4 downloads are so slow snellman.net
685 points by kryptiskt  1 day ago   189 comments top 22
ploxiln 1 day ago 4 replies      
Reminds me of how Windows Vista's "Multimedia Class Scheduler Service" would put a low cap on network throughput if any sound was playing:


Mark Russinovich justified it by explaining that the network interrupt routine was just too expensive to be able to guarantee no glitches in media playback, so it was limited to 10 packets per millisecond when any media was playing:


but obviously this is a pretty crappy one-size-fits-all prioritization scheme for something marketed as a most-sophisticated best-ever OS at the time:


Many people had perfectly consistent mp3 playback when copying files over the network 10 times as fast in other OSes (including Win XP!)

Often a company will have a "sophisticated best-ever algorithm" and then put in a hacky lazy work-around for some problem, and obviously don't tell anyone about it. Sometimes the simpler less-sophisticated solution just works better in practice.
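For context on the cap being discussed, a back-of-envelope sketch: at 10 packets per millisecond and an assumed ~1500-byte Ethernet frame (the frame size is an assumption, not a figure from the comment), the throttle works out to about 120 Mbit/s:

```javascript
// Back-of-envelope: the throughput cap implied by Vista's 10-packets-per-ms
// MMCSS throttle, assuming full ~1500-byte Ethernet frames (an assumption,
// not stated in the comment above).
const packetsPerMs = 10;
const bytesPerPacket = 1500;

const bytesPerSecond = packetsPerMs * 1000 * bytesPerPacket; // 15,000,000 B/s
const megabitsPerSecond = (bytesPerSecond * 8) / 1e6;

console.log(megabitsPerSecond); // 120
```

So on a gigabit LAN the throttle alone would cap transfers at roughly an eighth of line rate regardless of how fast the hardware was, which matches the complaints above.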

andrewstuart 1 day ago 4 replies      
It's bizarre, because I bought something from the PlayStation store on my PS4 and it took DAYS to download.

The strange part of the story is that it took so long to download that the next day I went and bought the game (Battlefield 4) from the shop and brought it back home and installed it and started playing it, all whilst the original purchase from the PlayStation store was still downloading.

I asked Sony if they would refund the game that I bought from the PlayStation store, given that I had gone and bought it elsewhere from a physical store during the download, and they said "no".

So I never want to buy from the PlayStation store again.

Why would Sony not care about this above just about everything else?

erikrothoff 1 day ago 2 replies      
Totally unrelated, but: Dang, it must be awesome to have a service that people dissect at this level. This analysis is more in-depth and knowledgeable than anything I've ever seen while employed at large companies, where people are literally paid to spend time on the product.
g09980 1 day ago 5 replies      
Want to see something like this for (Apple's) App Store. Downloads are fast, but the App Store experience itself is so, so slow. Takes maybe five seconds to load search results or reviews even on a wi-fi connection.
cdevs 1 day ago 1 reply      
As a developer, people seem surprised I don't have some massive gaming rig at home, but there's something about it that feels like work. I don't want to sit up and be fully alert - I did that all day at work. I want 30 minutes to veg out on a console, jumping between Netflix and some quick multiplayer game with fewer hackers glitching out in the game. It's impressive what the PS4 attempts to accomplish while you're playing a game: trying to download a 40-gig game while somehow tiptoeing in the background, not screwing up the gaming experience. I couldn't imagine trying to crank the speed up and down while keeping an online game playable. Chrome is slow? Close your 50 tabs. Want faster PS4 downloads? Close your games/apps. Got it.
ckorhonen 1 day ago 3 replies      
Interesting - definitely a problem I've encountered, though I had assumed the issues fell more on the CDN side of things.

Anecdotally, when I switched DNS servers to Google vs. my ISP, PS4 download speeds improved significantly (20 minutes vs. 20 hours to download a typical game).

Reedx 1 day ago 3 replies      
PS3 was even worse in my experience - PS4 was a big improvement, although still a lot slower than Xbox.

However, with both PS4 and Xbox One it's amazingly slow to browse the stores and much of the dashboard. Anyone else experience that? It's so bad I feel like it must just be me... I avoid it as much as possible, and it definitely decreases the number of games I buy.

mbrd 1 day ago 0 replies      
This Reddit thread also has an interesting analysis of slow PS4 downloads: https://www.reddit.com/r/PS4/comments/522ttn/ps4_downloads_a...
jcastro 1 day ago 0 replies      
Lancache says it caches PS4 and XBox, anyone using this? https://github.com/multiplay/lancache

(I use steamcache/generic myself, but should probably move to caching my 2 consoles as well).

foobarbazetc 1 day ago 2 replies      
The CDN thing is an issue too.

Using a local DNS resolver instead of Google DNS helped my PS4 speeds.

The other "trick" if a download is getting slow is to run the built-in "network test". This seems to reset all the windows back even if other things are running.

Tloewald 1 day ago 0 replies      
It's not just four years since launch, either - the PS3 was at least as bad.
tgb 1 day ago 6 replies      
Sorry for the newbie question, but can someone explain why the round trip time is so important for transfer speeds? From the formula I'm guessing something like this happens: server sends DATA to client, client receives DATA then sends ACK to server, server receives ACK and then finally goes ahead and sends DATA2 to the client. But TCP numbers their packets and so I would expect them to continue sending new packets while waiting for ACKs of old packets, and my reading of Wikipedia agrees. So what causes the RTT dependence in the transfer rate?
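The usual back-of-envelope answer: TCP does keep sending while waiting for ACKs, but only up to the receiver's advertised window. With at most one window of data in flight per round trip, throughput tops out near window/RTT - so a small receive window plus a long RTT is slow no matter how fat the pipe. A rough sketch:

```python
# Upper bound on TCP throughput: at most one receive window of
# unacknowledged data can be in flight per round trip.
def max_throughput_bytes_per_sec(window_bytes, rtt_seconds):
    return window_bytes / rtt_seconds

# A 64 KB window (the maximum without TCP window scaling):
near = max_throughput_bytes_per_sec(64 * 1024, 0.010)  # 10 ms RTT  -> ~6.5 MB/s
far = max_throughput_bytes_per_sec(64 * 1024, 0.200)   # 200 ms RTT -> ~0.33 MB/s

print(f"near: {near / 1e6:.2f} MB/s, far: {far / 1e6:.2f} MB/s")
```

Same window, 20x the RTT, one twentieth the speed - which is why the two levers that help are a bigger window (window scaling) or a lower RTT (a closer CDN node).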
lokedhs 1 day ago 1 reply      
As one piece of information I offer my own experience with PSN downloads on the PS4.

I'm in Singapore and my normal download speed is around 250 Mb/s, sometimes getting closer to 300.

However, I sometimes download from the Swedish store as well, and those download speeds are always very slow. I don't think I've ever gone above one tenth of what I get with local downloads.

That said, bandwidth between Singapore and Sweden is naturally more unpredictable, so I don't know if I can blame Sony here. My point is that PS4 downloads can be very fast, and the Singapore example is evidence of this fact.

sydney6 1 day ago 0 replies      
Is it possible that missing TCP timestamps in the traffic from the CDN are causing the TCP window size auto-scaling mechanism to fail?

See this commit:


tenryuu 1 day ago 1 reply      
I remember someone hacking at this issue a while ago. They blocked Sony Japan's server, which the download was coming from. The PlayStation then fetched the file from a more local server, and the speed was considerably faster.

Really strange

deafcalculus 19 hours ago 0 replies      
Why doesn't PS4 use LEDBAT for background downloads? Wouldn't this address the latency problem without sacrificing download speeds? AFAIK, Macs do this at least for OS updates.
jumpkickhit 1 day ago 0 replies      
I normally warm boot mine, and I saw the speed increase with nothing running before, so I guess I was on the right track.

I hope this is addressed by Sony in the future, or at least that they let us select whether a download is high priority or not.

lossolo 1 day ago 2 replies      
DNS-based geo load balancing/CDNs are the wrong idea today. For example, if you use a DNS resolver that has a bad configuration, or one not supplied by your ISP, you can be routed to servers thousands of km/miles from your location. Last time I checked, Akamai used that flawed DNS-based system. What you want now is what Cloudflare, for example, uses: anycast IP. You announce the same IP block from multiple routers/locations, and all traffic is routed to the nearest location thanks to how BGP routing works.
hgdsraj 1 day ago 1 reply      
What download speeds do you get? I usually average 8-10 MB/s
bitwize 1 day ago 1 reply      
This is so that there's plenty of bandwidth available for networked play.

The Switch firmware even states that it will halt downloads if a game attempts to connect to the network.

frik 1 day ago 3 replies      
PS4 and Switch at least have no peer-to-peer downloads.

Win10 and Xbox One have peer-to-peer downloads - who would want that? It's bad for users, wastes upload bandwidth, and counts against your monthly internet consumption. https://www.reddit.com/r/xboxone/comments/3rhs4s/xbox_update...

galonk 1 day ago 0 replies      
I always assumed the answer was "because Sony is a hardware company that has never understood the first thing about software."

Turns out I was right.

Ideal OS: Rebooting the Desktop Operating System joshondesign.com
588 points by daureg  1 day ago   303 comments top 65
joshmarinacci 18 hours ago 8 replies      
I'm the original author. I hadn't planned to publicize this yet. There are still some incomplete parts, broken links, and missing screenshots. But the Internet wants what it wants.

Just to clarify a few things.

I just joined Mozilla Devrel. None of this article has anything to do with Mozilla.

I know that none of the ideas in this article are new. I am a UX expert and have 25 years experience writing professional software. I personally used BeOS, Oberon, Plan 9, Amiga, and many others. I read research papers for fun. My whole point is that all of this has been done before, but not integrated into a nice coherent whole.

I know that a modern Linux can do most of these things with Wayland, custom window managers, DBus, search indexes, hard links, etc. My point is that the technology isn't that hard. What we need is to put all of these things into a nice coherent whole.

I know that creating a new mainstream desktop operating system is hopeless. I don't seriously propose doing this. However, I do think creating a working prototype on a single set of hardware (RPi3?) would be very useful. It would give us a fertile playground to experiment with ideas that could be ported to mainstream OSes.

And thank you to the nearly 50 people who have signed up to the discussion list. What I most wanted out of this article was to find like minded people to discuss ideas with.

Thanks, Josh

Damogran6 1 day ago 4 replies      
So what he's saying is: remove all these layers because they're bad, but add these OTHER layers because they're good.

That's how you make another AmigaOS, or Be. I'm sure Atari still has a group of a dozen folks playing with it, too.

The OSes of the past 20 years haven't shown much advancement because the advancement is happening higher up the stack. You CAN'T throw out the OS and still have ARKit. A big, bloated, mature, Moore's-Law-needing OS is also stable, has hooks out the wazoo, AND A POPULATION USING IT.

4 guys coding in the dark on the bare metal just can't build an OS anymore: it won't have GPU access, it won't have a solid TCP/IP stack, it won't have good USB support, or caching, or a dependable file system.

All of these things take a ton of time, and people, and money, and support (if you don't have money, you need the volunteers)

Go build the next modern OS, I'll see you in a couple of years.

I don't WANT this to sound harsh, I'm just bitter that I saw a TON of awesome, fledgling, fresh operating systems fall by the wayside... I used BeOS, I WANTED to use BeOS, I'da LOVED it if they'd won out over NeXT (another awesome operating system... at least that one survived).

At a certain level, perhaps what he wants is to leverage ChromeOS...it's 'lightweight'...but by the time it has all the tchotchkes, it'll be fat and bloated, too.

jcelerier 22 hours ago 2 replies      
> Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible.

that's absolutely possible on linux with i3wm for instance

> I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.

awk and sed, no, but there are many CLI tools that accept video streams through pipes, e.g. FFmpeg. You wouldn't open a video in a GUI text editor, so why would you in a CLI text editor?

> Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

Sure they are, on linux: https://linux.die.net/man/1/wmctrl

Fifteen years ago people were already controlling their WM through dbus: http://wiki.compiz.org/Plugins/Dbus#Combined_with_xdotool

The thing is, no one really cares about this in practice.

cs702 1 day ago 8 replies      
Yes, existing desktop applications and operating systems are hairballs with software layers built atop older software layers built atop even older software layers.

Yes, if you run the popular editor Atom on Linux, you're running an application built atop Electron, which incorporates an entire web browser with a Javascript runtime, so the application is using browser drawing APIs, which in turn delegate drawing to lower-level APIs, which interact with a window manager that in turn relies on X...

Yes, it's complexity atop complexity atop complexity all the way down.

But the solution is NOT to throw out a bunch of those old layers and replace them with new layers!!!

Quoting Joel Spolsky[1]:

"There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It's harder to read code than to write it. ... The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. ... When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

fundabulousrIII 8 minutes ago 0 replies      
I used to think a system-bus concentrator for disparate communications was the way forward, but it always ends in tears. You have created an interrupt-handling system in userspace with n * x permutations and specifications... this is the literal Tower of Babel, and it is very easily abused.
spankalee 1 day ago 3 replies      
This sounds a lot like Fuchsia, which is all IPC-based, has a syncable object-store[1], a physically-based renderer[2], and a UI organized into cards and stories[3], where a story is "a set of apps and/or modules that work together for the user to achieve a goal.", and can be clustered[4] and arranged in different ways[5].

[1]: https://fuchsia.googlesource.com/ledger/

[2]: https://fuchsia.googlesource.com/escher/

[3]: https://arstechnica.com/gadgets/2017/05/googles-fuchsia-smar...

[4]: https://fuchsia.googlesource.com/sysui/#important-armadillo-...

[5]: https://fuchsia.googlesource.com/mondrian/

alexandercrohde 23 hours ago 6 replies      
I really don't understand the negativity here. I sense a very dismissive tone, but most of the complaints are implementation details, or that this has been tried before (so what?).

I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.

-- Why does an 8-core Mac have moments when it is so busy I can't even click anything and only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).

-- Yes, it should be a database design, with permissions.

-- Yes, by making it a database design, all applications get the ability to share their content (i.e. make files) in a performant searchable way.

-- Yes, permissions are a huge issue. If every app were confined to a single directory (docker-like), then backing up an app, deleting an app, or terminating an app would be a million times easier. Our OSes will never be secure until they're rebuilt from the ground up. (Right now Windows lets apps store garbage in the 'registry', and Linux stores your apps' data strewn throughout /var/etc, /var/log, /app/init, .... These should all be materialized views, i.e. sym-links.)

-- Mac Finder is cancer. If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement (like you can with car parts).

-- By having an event-driven architecture, this gives me exact tracking on when events happened. I'd like a full record of every time a certain file changes, if file changes can't happen without an event, and all events are indexed in the DB, then I have perfect auditability.

-- I could also assign permission events (throttle browser CPU to 20% max, pipe all audio from spotify to removeAds.exe, pipe all UI notifications from javaUpdater to /dev/null)

I understand the "Well who's gonna use it?" question, but it's circular reasoning. "Let's not get excited about this, because nobody will use it, because it won't catch on, because nobody got excited about it." If you get an industry giant behind it (Linus, Google, Carmack) you can absolutely reinvent a better wheel (e.g. GIT, chrome) and displace a huge marketshare in months.

noen 1 day ago 7 replies      
As a current developer, former 10 year UX designer, and developer before that, this kind of article irks me to no end.

He contradicts his core assertion (OS models are too complex and layered) with his first "new" feature.

Nearly everything on this manifesto has been done before, done well, and many of his gripes are completely possible in most modern OS's. The article just ignores all of the corner cases and conflicts and trade-offs.

Truly understanding the technology is required to develop useful and usable interfaces.

I've witnessed hundreds of times as designers hand off beautiful patterns and workflows that can't ever be implemented as designed. The devil is in the details.

One of the reasons Windows succeeded for so long is that it enabled people to do a common set of activities with minimal training and maximizing reuse of as few common patterns as possible.

Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.

avaer 1 day ago 4 replies      
> I suspect the root cause is simply that building a successful operating system is hard.

It's hard but not that hard; tons of experimental OS-like objects have been made that meet these goals. Nobody uses them.

What's hard is getting everyone on board enough for critical inertia to drive the project. Otherwise it succumbs to the chicken-and-egg problem, and we continue to use what we have because it's "good enough" for what we're trying to do right now.

I suspect the next better OS will come out of some big company that has the clout and marketing to encourage adoption.

dcow 21 hours ago 0 replies      
Android already tried things like a universal message bus and a module-based architecture and while nice it doesn't quite live up to the promise for two reasons:

1. Application devs aren't trained to architect new software. They will port old shitty software patterns from familiar systems because there's no time to sit down and rewrite photoshop for Android. It's sad but true.

2. People abuse the hell out of it. Give someone a nice thing and someone else will ruin it whether they're trying to or not. A universal message bus has security and performance implications. Maybe if Android was a desktop os not bound by limited resources it wouldn't have pulled out all the useful intents and neutered services, but then again the author's point is we should remove these complex layers and clearly the having them was too complex/powerful/hungry for android.

I do think there's a point to be made that we're very mouse and keyboard centric at the primitive IO level and in UI design. I always wondered what the "command line" would look like if it was more complex than 128 ascii characters in a 1 dimensional array. But it probably wouldn't be as intuitive for humans to interface with unless you could speak and gesture to it as the author suggests.

nwah1 23 hours ago 2 replies      
I agree with a lot of the critics in the comments, but I will say that the author has brought to my attention a number of features that I'm now kind of upset that I don't have.

I always thought LED keyboards were stupid because they are useless, but if they could map to hotkeys in video players and such, that could be very useful, assuming you can turn off the LEDs.

His idea for centralized application configs and keybindings isn't bad if we could standardize on something like TOML. The Options Framework for WordPress plugins is an example of this kind of thing, and it does help. It won't be possible to get all the semantics agreed upon, of course, but maybe 80% is enough.

Resurrecting WinFS isn't so important, and I feel like there'd be no way to get everyone to agree on a single database unless every app were developed by one team. I actually prefer heterogeneity in the software ecosystem, to promote competition. We mainly need proper journalling filesystems with all the modern features. I liked the vision of Lennart Poettering in his blog post about stateless systems.

The structured command line linked to a unified message bus, allowing for simple task automation sounds really neat, but has a similar problem as WinFS. But I don't object to either, if you can pull it off.

Having a homogenous base system with generic apps that all work in this way, with custom apps built by other teams is probably the compromise solution and the way things have trended anyways. As long as the base system doesn't force the semantics on the developers, it is fine.

antoineMoPa 23 hours ago 2 replies      
I appreciate the article for its coverage of many OS (including BeOS, wow, I should try that). What about package management though? Package management really defines the way you live under your flavor of linux, and there is a lot of room for improvement in current package managers (like decentralizing them, for example).


> I know I said we would get rid of the commandline before, but I take that back. I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer [...]

I can't agree with that; it is the plain-text nature of the command line that makes it so useful and simple once you know a basic set of commands (ls, cd, find, sed, grep + whatever your specific task needs). Plain text is easy to understand and manipulate to perform whatever task you need to do. The moment you learn to chain commands and save them to a script for future use, the sky is the limit. I do agree with using voice to chain commands, but I would not complain about the plain-text nature or try to bring buttons or other forms of unneeded complexity to the command line.
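The composability being defended here is visible even outside the shell: each stage agrees only on "a stream of lines", so stages combine freely. A toy sketch in Python (generator per stage; the function names are illustrative, not real tools):

```python
# Each stage consumes and produces lines of text, so any stage can
# be chained to any other - the property behind `ls | grep | wc -l`.
def lines(text):
    yield from text.splitlines()

def grep(pattern, stream):
    return (line for line in stream if pattern in line)

def count(stream):
    return sum(1 for _ in stream)

listing = "a.txt\nb.log\nc.txt\nnotes.txt\n"
n = count(grep(".txt", lines(listing)))  # like: ls | grep '.txt' | wc -l
```

The moment a stage demands a richer format than lines, it can only be chained to stages that speak that format - which is the trade-off the article's "structured pipes" idea has to confront.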

snarfy 3 hours ago 1 reply      
What we have today grew together organically over time, like a city. To do what the article describes is akin to demolishing the city and completely rebuilding it from scratch. But it's not just from scratch: it means replacing all of the infrastructure and tooling that went into building the parts of the city, like plumbing and electrical. A state-of-the-art substation requires its own infrastructure to build. It's akin to requiring a whole new compiler toolchain and software development system just to get started with rebooting the OS.

If this happens it's only going to happen with a top-down design from an industry giant. Android and Fuchsia are examples of how it might happen. Will it? It seems these days nobody cares as long as the browser renders quickly.

lake99 1 day ago 1 reply      
> Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need

I don't know what he means by "traditional", but Linux native filesystems can store all the metadata you'd want.

> Why can't I have a file in two places at once on my filesystem?

POSIX compatible filesystems have supported that for a long time already.

It seems to me that all the things he wants are achievable through Plan9 with its existing API. The only thing missing is the ton of elbow grease to build such apps.
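For concreteness, here is "a file in two places at once" with POSIX hard links, sketched in Python - both paths point at the same inode, so there is only ever one set of bytes:

```python
import os
import tempfile

# A hard link adds a second directory entry for the same inode:
# one file, two paths (requires a POSIX filesystem).
d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")

with open(a, "w") as f:
    f.write("hello")

os.link(a, b)  # not a copy: b is the same underlying file as a

same_inode = os.stat(a).st_ino == os.stat(b).st_ino
link_count = os.stat(a).st_nlink  # 2: the file now lives "in two places"
contents_via_b = open(b).read()
```

Soft links (`os.symlink`) cover the cross-filesystem case that hard links can't.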

diegof79 22 hours ago 1 reply      
What the author wants is something like Squeak. The idea behind Smalltalk wasn't to make a programming language, but to realize the DynaBook (google the essay "History Behind Smalltalk").

While I agree with the author that more innovation is needed on the desktop, I think the essay is very misinformed.

For example, Squeak can be seen as an OS with very few layers: everything is an object, and sys calls are primitives. As user you can play with all the layers, and re-arrange the UI as you want.

So why the idea didn't took off? I don't know exactly (but I have my hypothesis). There are many factors to balance, those many factors are the ones that makes design hard.

One of those factors is that people tend to put the wrong priorities of where innovation should be. A good example is what the author mentions as priorities for him. None of the items mentions fundamental problems that computer users face today (from my perspective of course).

ghinda 20 hours ago 1 reply      
You have most of these, or at least very similar versions, in Plasma/KDE today:

> Document Database

This is what Akonadi was when it came out for 4.x. Nepomuk was the semantic search framework, so you could rate/tag/comment on files and search by them. They had some performance problems and were not very well received.

Nepomuk has been superseded by Baloo, so you can still tag/rate/comment files now.

Most KDE apps also use KIO slaves:https://www.maketecheasier.com/quick-easy-guide-to-kde-kio-s...

> System Side Semantic Keybindings

> Windows

Plasma 4 used to have compositor-powered tabs for any apps. Can't say if it will be coming back to Plasma 5. Automatic app-specific colors (and other rules) are possible now.

> Smart copy and paste

The clipboard plasmoid in the system tray has multiple items, automatic actions for what to do with different types of content and can be pinned, to remain visible.

> Working Sets

These are very similar to how Activities work. Don't seem to be very popular.

jmull 23 hours ago 1 reply      
This isn't worth reading.

(It's painfully naive, poorly reasoned, has inaccurate facts, is largely incoherent, etc. Even bad articles can serve as a nice prompt for discussion, but I don't think this one is even good for that. I don't think we'd ever get past arguing about what it's most wrong about.)

chrisleader 17 hours ago 0 replies      
"First of all, it's quite common, especially in enterprise technology, for something to propose a new way to solve an existing problem. It can't be used to solve the problem in the old way, so it doesn't work, and proposes a new way, and so no-one will want that. This is how generational shifts work - first you try to force the new tool to fit the old workflow, and then the new tool creates a new workflow. Both parts are painful and full of denial, but the new model is ultimately much better than the old. The example I often give here is of a VP of Something or Other in a big company who every month downloads data from an internal system into a CSV, imports that into Excel and makes charts, pastes the charts into PowerPoint and makes slides and bullets, and then emails the PPT to 20 people. Tell this person that they could switch to Google Docs and they'll laugh at you; tell them that they could do it on an iPad and they'll fall off their chair laughing. But really, that monthly PowerPoint status report should be a live SaaS dashboard that's always up-to-date, machine learning should trigger alerts for any unexpected and important changes, and the 10 meg email should be a Slack channel. Now ask them again if they want an iPad." - Benedict Evans
xolve 1 day ago 0 replies      
Not an ideal article for anything. It looks like it was written with limited research - by the end of it I can hardly keep focus.

> Bloated stack

True, but there are options the author hasn't discussed.

> A new filesystem and a new video encoding format

Apple created a new FS and video format. These are far more fundamental changes than can be glossed over as trivial in a single line.

> CMD.exe, the terminal program which essentially still lets you run DOS apps was only replaced in 2016. And the biggest new feature of the latest Windows 10 release? They added a Linux subsystem. More layers piled on top.

The Linux subsystem is a great feature of Windows. The ability to run bash on Windows natively - what's the author complaining about?

> but how about a system wide clipboard that holds more than one item at a time? That hasn't changed since the 80s!

Heard of Klipper and similar apps in KDE5/Plasma? It's been there for ages and keeps text, images, and file paths in the clipboard.

> Why can't I have a file in two places at once on my filesystem?

Hard links and soft links??

> Filesystem tags

They're there!

What I feel about the article is: OSes have had these capabilities for a long time - where are the killer applications written for them?

hackermailman 21 hours ago 0 replies      
This guy wants GuixSD for 60% of his feature requests, like isolated apps, version control, snapshots, ease of configuration, and the ability to abstract all of it away, and Hurd for his multi-threaded ambitions, modularity, ability to do things like mount a database in a home directory to use as a fileserver, and message passing. This is slowly happening already https://fosdem.org/2017/schedule/event/guixhurd/

Then he wants to completely redesign a GUI to manage it all, which sounds a lot like Firefox OS with aware desktop apps, but with the added bonus that most things that require privileges on desktop OSes no longer need them with Guix. Software drivers are implemented in user space as servers with GNU Hurd, so you can now access these things and all the functionality that comes with them, exactly what the author wants.

IamCarbonMan 19 hours ago 0 replies      
All of this is possible without throwing out any existing technology (at least for Linux and Windows; if Apple doesn't envision a use case for something it's very likely never going to exist on their platform). Linux compositors have the ability to manipulate the window however the hell they want, and while it's not as popular as it used to be, you can change the default shell on Windows and use any window manager you can program. A database filesystem is two parts: a database and a filesystem. Instead of throwing out the filesystem which works just fine, add a database which offers views into the filesystem. The author is really woe-is-me about how an audio player doesn't have a database of mp3s, but that's something that is done all the time. Why do we have to throw out the filesystem just to have database queries? And if it's because every app has to have their own database- no they don't. If you're going to rewrite all the apps anyways, then rewrite them to use the same database. Problem solved. The hardest concept to implement in this article would be the author's idea of modern GUIs, but it can certainly be done.

On top of this, the trade-off of creating an entirely new OS is enormous. Sure, you can make an OS with no apps because it's not compatible with anything that's been created before, and then you can add your own editor and your own web browser and whatever. And people who only need those things will love it. But if you need something that the OS developer didn't implement, you're screwed. You want to play a game? Sorry. You want to run the software that your school or business requires? Sorry. Seriously, don't throw out every damn thing ever made just to make a better suite of default apps.

dgreensp 23 hours ago 2 replies      
I love it, especially using structured data instead of text for the CLI and pipes, and replacing the file system with a database.

Just to rant on file systems for a sec, I learned from working on the Meteor build tool that they are slow, flaky things.

For example, there's no way on any desktop operating system to read the file tree rooted at a directory and then subscribe to changes to that tree, such that the snapshot combined with the changes gives you an accurate updated snapshot. At best, an API like FSEvents on OS X will reliably (or 99% reliably) tell you when it's time to go and re-read the tree or part of the tree, subject to inefficiency and race conditions.

"Statting" 10,000 files that you just read a second ago should be fast, right? It'll just hit disk cache in RAM. Sometimes it is. Sometimes it isn't. You might end up waiting a second or two.

And don't get me started on Windows, where simply deleting or renaming a file, synchronously and atomically, are complex topics you could spend a couple hours reading up on so that you can avoid the common pitfalls.

Current file systems will make even less sense in the future, when non-volatile RAM is cheap enough to use in consumer devices, meaning that "disk" or flash has the same performance characteristics and addressability as RAM. Then we won't be able to say that persisting data to a disk is hard, so of course we need these hairy file system things.

Putting aside how my data is physically persisted inside my computer, it's easy to think of better base layers for applications to store, share, and sync data. A service like Dropbox or BackBlaze would be trivial to implement if not for the legacy cruft of file systems. There's no reason my spreadsheets can't be stored in something like a git repo, with real-time sync, provided by the OS, designed to store structured data.
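The poll-and-diff fallback the parent describes - re-read the tree and compare snapshots - is easy to sketch, and it shows exactly where the cost goes: every "something changed" notification means a full re-stat of the tree. A minimal version (Python; helper names are illustrative):

```python
import os
import tempfile

def snapshot(root):
    """path -> mtime (ns) for every file under root: a full re-stat."""
    snap = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            snap[p] = os.stat(p).st_mtime_ns
    return snap

def diff(old, new):
    added = sorted(p for p in new if p not in old)
    removed = sorted(p for p in old if p not in new)
    changed = sorted(p for p in new if p in old and new[p] != old[p])
    return added, removed, changed

# The notification API only says "something changed under here";
# finding out *what* means snapshotting and diffing again.
root = tempfile.mkdtemp()
before = snapshot(root)
with open(os.path.join(root, "new.txt"), "w") as f:
    f.write("x")
added, removed, changed = diff(before, snapshot(root))
```

Even this toy version stats every file on every pass - which is the "10,000 stats that may or may not hit cache" problem, plus the race window between the change event and the re-read.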

mwcampbell 1 day ago 1 reply      
I'm glad the author thought about screen readers and other accessibility software. Yes, easy support for alternate input methods helps. But for screen readers in particular, the most important thing is a way to access a tree of objects representing the application's UI. Doing this efficiently over IPC is hard, at least with the existing infrastructure we have today.

Edit: I believe the state of the art in this area is the UI Automation API for Windows. In case the author is reading this thread, that would be a good place to continue your research.

microcolonel 22 hours ago 0 replies      
> Why can't I have a file in two places at once on my filesystem?

You can! Use hardlinks.

> Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

There are well established standards for controlling window managers from programs, what on earth are you talking about?

> Applications would do their drawing by requesting a graphics surface from the compositor. When they finish their drawing and are ready to update they just send a message saying: please repaint me. In practice we'd probably have a few types of surfaces for 2d and 3d graphics, and possibly raw framebuffers. The important thing is that at the end of the day it is the compositor which controls what ends up on the real screen, and when. If one app goes crazy the compositor can throttle it's repaints to ensure the rest of the system stays live.

Just like Wayland!

> All applications become small modules that communicate through the message bus for everything. Everything. No more file system access. No hardware access. Everything is a message.

Just like flatpak!

> Smart copy and paste

This is entirely feasible with the current infrastructure.

> Could we actually build this? I suspect not. No one has done it because, quite honestly, there is no money in it. And without money there simply aren't enough resources to build it.

Some of this is already built, and most of it is entirely feasible with existing systems. It's probably not even that much work.

benkuykendall 22 hours ago 0 replies      
The idea of a system-wide "document database" is really intriguing. I think the author identified a real pattern that could be addressed by such a change:

> In fact, many common applications are just text editors combined with data queries. Consider iTunes, Address Book, Calendar, Alarms, Messaging, Evernote, Todo list, Bookmarks, Browser History, Password Database, and Photo manager. All of these are backed by their own unique datastore. Such wasted effort, and a block to interoperability.

The ability to operate on my browser history or emails as a table would be awesome! And this solves so many issues about losing weird files when trying to back up.

However, I would worry a lot about schema design. Surely most apps would want custom fields in addition to whatever the OS designer decided constitutes an "email". This would throw interoperability out the window, and keeping it fast becomes a non-trivial DB design problem.

Anyone have more insights on the BeOS database or other attempts since?

(afterthought: like a lot of ideas in this post, this could be implemented in userspace on top of an existing OS)
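As a toy version of that afterthought, here's a userspace sketch using sqlite3: a shared table of typed documents, with app-specific fields pushed into a JSON blob to dodge (not solve) the schema-design worry. Every name and field here is invented for illustration:

```python
import json
import sqlite3

# Hypothetical userspace "system document store": one shared table of typed
# documents, with app-specific fields kept in a JSON blob so the core schema
# stays small.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE docs (
    id    INTEGER PRIMARY KEY,
    kind  TEXT NOT NULL,   -- 'email', 'song', 'bookmark', ...
    title TEXT,
    extra TEXT             -- app-specific fields as JSON
)""")

def put(kind, title, **extra):
    db.execute("INSERT INTO docs (kind, title, extra) VALUES (?, ?, ?)",
               (kind, title, json.dumps(extra)))

put("email", "Meeting notes", sender="alice@example.com", unread=True)
put("song", "Blue Monday", artist="New Order", rating=5)
put("bookmark", "HN", url="https://news.ycombinator.com")

# Any app can query every document of a kind without knowing which app wrote it.
rows = db.execute("SELECT title, extra FROM docs WHERE kind = ?",
                  ("email",)).fetchall()
for title, extra in rows:
    print(title, json.loads(extra)["sender"])  # Meeting notes alice@example.com
```

The JSON escape hatch keeps writes easy but pushes the interoperability problem into the blob, which is roughly the failure mode the parent comment predicts.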

Animats 19 hours ago 0 replies      
If you want to study user interfaces, look at programs which solve a hard problem - 3D animation and design programs. Learn Inventor or Maya or Blender.

Autodesk Inventor and Blender are at opposite ends of the "use the keyboard" range. In Inventor, you can do almost everything with the mouse except enter numbers and filenames. Blender has a 10-page list of "hotkeys". It's worth looking at how Inventor does input. You can change point of view while in the middle of selecting something. This is essential when working on detailed objects.

raintrees 9 hours ago 0 replies      
I have been conceptualizing what it would take to abstract away the actual physical workstation into a back-end processing system and multiple UI modules physically scattered throughout my home (I work from home) and grounds.

For example, as in shift my workspace from my upstairs office to my downstairs work area just by signing in on the different console setup downstairs. All of my in-process work comes right back up. Right now I do this (kind of) using VMs, but they are limited when addressing hardware, and now I am multiplying that hardware.

Same thing with my streams - Switch my audio or video to the next room/zone where I want to move myself to. Start researching how to correctly adjust my weed whip's carburetor, then go out to the garage and pull up my console there where my work bench is and the dismantled tool.

Eventually my system would track my whereabouts, with the ability (optionally turned on) to automatically shift that IO to the closest hardware setup to me as I move around the structure/property.

And do something like this for each person? So my wife has her streams? Separate back end instance, same mobility to front-end UI hardware?

Can this new Desktop Operating System be designed with that hardware abstraction in mind?

jimmaswell 21 hours ago 0 replies      
Patently false that Windows hasn't innovated, UX or otherwise. Start menu search, better driver containment/other bsod reduction, multi-monitor expanding task bar, taskbar button reordering, other Explorer improvements, lots of things.
thibran 18 hours ago 0 replies      
Interesting to read someone else's ideas about a topic I have thought about quite a lot myself. The basic building block of a better desktop OS is, IMHO and as the OP wrote, a communication contract between capabilities and the glue (a.k.a. apps). I don't think we would need that many capability-services to build something useful (it doesn't even need to be efficient at first). For a start it might be enough to wrap existing tools, expose them, and see whether things work or not.

Maybe start by building command-line apps and seeing how well the idea works (cross-platform would be nice). I guess the resulting system would have some similarities with RxJava, which lets you compose things together (get A & B asynchronously, then build C and send it to D if it does not contain Foo).
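A rough sketch of that composition idea (fetch A and B concurrently, build C, forward it to D only if it doesn't contain "Foo"), with asyncio coroutines standing in for the hypothetical capability-services:

```python
import asyncio

# Stand-in "services"; in the imagined OS these would be capability-services
# reached over the message bus.
async def get_a():
    await asyncio.sleep(0.01)
    return "alpha"

async def get_b():
    await asyncio.sleep(0.01)
    return "beta"

received = []

async def send_to_d(c):
    received.append(c)

async def pipeline():
    a, b = await asyncio.gather(get_a(), get_b())  # A and B in parallel
    c = f"{a}+{b}"                                 # build C from both
    if "Foo" not in c:                             # filter before forwarding
        await send_to_d(c)
    return c

print(asyncio.run(pipeline()))  # alpha+beta
```

The appeal is that the glue code never cares where "alpha" and "beta" came from, which is exactly the indirection the comment is after.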

If an app talked to a data-service, it would no longer have to know where the data came from or how it got there. This would allow building a whole new kind of abstraction, e.g. data could be stored in the cloud and only downloaded to a local cache when frequently used, then later synced back to the cloud transparently (maybe even ahead of time, because a local AI learned your usage patterns). I know that you can have such sync setups today; they are just complicated to set up, or cost a lot of money, or work only for specific things/applications, and they are often not accessible to normal users.

Knowing how to interact with the command-line gives advanced users superpowers. I think it is time to give those superpowers to normal users too. And no, learning how to use the command-line is not the way to go ;-)

A capability-services based OS could even come with a quite interesting monetization strategy: selling extra capabilities, like storage, async computation or AI services, besides selling applications.

Groxx 18 hours ago 0 replies      
>Consider iTunes. iTunes stores the actual mp3 files on disk, but all metadata in a private database. Having two sources of truth causes endless problems. If you add a new song on disk you must manually tell iTunes to rescan it. If you want to make a program that works with the song database you have to reverse engineer iTunes DB format, and pray that Apple doesn't change it. All of these problems go away with a single system wide database.

Well. Then you get Spotlight (on OSX, at least) - system-wide file/metadata/content search.

It's great! It's also quite slow at times. Slow (and costly) to index, slow to query (initial / common / by-name searches are fast, but content searches can take a second or two to find anything - this would be unacceptable in many applications), etc.

I like databases, but building a single well-performing one for all usages is quite literally impossible. Forcing everyone into a single system doesn't tend to add up to a positive thing.

anc84 2 hours ago 0 replies      
> It took tremendous effort to get 3D accelerated Doom to work inside of X windows in the mid 2000s, something that was trivial with mid-1990s Microsoft Windows.

Huh? I am not aware of a 3D accelerated Doom version on Windows in that timeframe nor that it was hard on Linux 10 years later. Any pointers?

zaro 1 day ago 2 replies      
> I suspect the root cause is simply that building a successful operating system is hard.

Well, it is hard, but this is not the main source of issues. The obstacle to having nice things on the desktop is this constant competition and wheel reinvention, the lack of cooperation.

The article makes some very good points, but just think of this simple fact. It's 2017, and the ONLY filesystem that will seamlessly work with macOS, Windows and Linux at the same time is FAT, a file system which is almost 40 years old. And it is not because it is so hard to make such a filesystem. Not at all. Now this is at the core of reasons why we can't have nice things :)

vbezhenar 23 hours ago 1 reply      
I think that the next reboot will be unifying RAM and disk, with a tremendous amount of memory (terabytes) for apps and transparent offloading of huge video and audio files into the cloud. You don't need a filesystem or any persistence layer anymore; all your data structures are persistent. Use immutable structures and you have unlimited Undo for the entire life of the device. Reboot doesn't make sense; all you need is to flush the processor registers before turning off. This would require rewriting the OS from the ground up, but it would allow for a completely new user experience.
lou1306 1 day ago 4 replies      
Windows 10 didn't add any UX feature? What about Task View (Win+Tab) and virtual desktops?

And why bash the Linux subsystem, which is surely not even developed by the UX team (so no wasted resources) and is a much-needed feature for developers?

BTW, there is a really simple reason why mainstream OSs have a rather conservative design: the vast majority of people just don't care and may even get angry when you change the interaction flow. Many of the ideas presented in the post are either developer-oriented or require significant training to be used proficiently.

Skunkleton 20 hours ago 0 replies      
In 2017 a modern operating system such as Android, iOS, or Chrome (the browser) exists as a platform. Applications developed for these platforms _must_ conform to the application model set by the platform. There is no supported way to create applications that do not conform to the design of the platform. This is in stark contrast to the "1984" operating systems that the OP is complaining about.

It is very tempting to see all the complexity of an open system and wish it was more straightforward; more like a closed system. But this is a dangerous thing to advocate. If we all only had access to closed systems, who would we be ceding control to? Do we really want our desktop operating systems to be just another fundamentally closed off walled garden?

bastijn 20 hours ago 1 reply      
Apart from discussing the content. Can I just express my absolute love for (longer) articles that start with a tl;dr?

It gives an immediate answer to "do I need to read this?", and if so, what key arguments should I pay attention to?

Let me finish with expressing my thanks to the author for including a tl;dr.


jonahss 22 hours ago 1 reply      
The author mentions they wished Object-based streams/terminals existed. This is the premise of Windows Powershell, which today reminds me of nearly abandoned malls found in the Midwest: full of dreams from a decade ago, but today an empty shell lacking true utility, open to the public for wandering around.
joshmarinacci 23 hours ago 0 replies      
OP here. I wasn't quite ready to share this with the world yet, but what are you gonna do.

I'm happy to answer your questions.

doggydogs94 15 hours ago 0 replies      
FYI, most of the author's complaints about the command line were addressed by Microsoft in PowerShell. For example, PowerShell pipes objects, not text.
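The difference is easy to sketch. Here's a PowerShell-flavored object pipeline imitated with Python generators, where each stage passes structured records instead of text that the next stage would have to re-parse (the process list and stage names are made up for illustration):

```python
# PowerShell-style object pipeline: stages receive structured records
# (dicts here), not lines of text, so nothing is ever re-parsed downstream.
processes = [
    {"name": "chrome", "cpu": 41.0},
    {"name": "make",   "cpu": 7.5},
    {"name": "emacs",  "cpu": 12.2},
]

def where(records, pred):
    # Roughly analogous to PowerShell's Where-Object
    return (r for r in records if pred(r))

def sort_by(records, key):
    # Roughly analogous to Sort-Object -Descending
    return sorted(records, key=key, reverse=True)

hogs = sort_by(where(processes, lambda r: r["cpu"] > 10),
               key=lambda r: r["cpu"])
print([r["name"] for r in hogs])  # ['chrome', 'emacs']
```

With classic Unix pipes, the filter stage would instead be grepping and awking columns out of `ps` output, and would break the moment the text format changed.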
mherrmann 23 hours ago 1 reply      
What I hate is the _bloat_. Why is GarageBand forced upon me with macOS? Or iTunes? Similarly for video players etc on all the other OSs. I am perfectly capable of installing the software I need, thank you very much.
ksec 1 day ago 3 replies      
I hate to say this, but an ideal desktop OS, at least for the majority of consumers, is mostly here, and it is iOS 11.

Having used the newest iPad Pro 10.5 (along with the iOS 11 beta), the first few hours were pure joy; after that, frustration and anger came flooding in. Because what I realized is that this tiny little tablet, costing only half as much as a MacBook Pro or even an iMac, limited by a fanless design with a lower TDP, 4GB of memory, no dedicated GPU, and a likely much slower SSD, provides a MUCH better user experience than any Mac or Windows PC I have ever used, including the latest MacBook Pro.

Everything is fast and buttery smooth; even the web browsing experience is better. The only downside is that you are limited to the touch screen and keyboard. A number of times I have wondered if I could attach a separate monitor and use it like Samsung's desktop dock.

There is far too much backward compatibility to care for with both Windows and Mac. And this is similar to the discussion in the previous "Software off Rails" thread. People are less likely to spend time optimizing when things work well enough out of the box.

nebulous1 20 hours ago 0 replies      
I much preferred the second half of this to the first half.

However, both seemed to end up with the same fundamental flaw: he's either underestimating or understating how absurdly difficult most of what he's suggesting is. It's all well and good saying that we can have a standardized system for email, with everything being passed over messages, but what about everything else? It's extremely difficult to standardize an opinionated system that works for everything, which is exactly why so many operating system constructs are more general than specific. For this to all hang together you would have to standardize everything, which will undoubtedly turn into an insane bureaucratic mess. Not to mention that a lot of software makers actively fight against having their internal formats open.

mcny 12 hours ago 0 replies      
Hi Josh,

Thank you for writing this.

Just noticed a small typo (I think)

> For a long time Atom couldn't open a file larger than 2 megabytes because scrolling would be to slow.

to should be too.


casebash 14 hours ago 0 replies      
I wouldn't say that innovation in Desktop is dead, but most of it seems to be driven by features or design patterns copied from mobile or tablet. Take for example Windows 8 and Windows 10: Windows 8 was all about moving to an OS that could run on a whole host of devices, while Windows 10 was all about fixing the errors made in that transition.
agumonkey 1 day ago 1 reply      
I see https://birdhouse.org/beos/refugee/trackerbase.gif for 2 seconds and I feel happy. So cute, clear, useful.
al2o3cr 23 hours ago 0 replies      

> Window Managers on traditional desktops are not context or content aware, and they are not controllable by other programs.

My copy of Divvy is confused by this statement. :)

st3fan 22 hours ago 1 reply      
> And if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins. There is no extension API. This is the result of many layers of cruft and bloat.

I am going to say that it is probably a product decision in the case of Mail.app.

Whether Mail.app is a big steaming pile of cruft and bloat inside, nobody knows, since it is closed source.

jacinabox 13 hours ago 0 replies      
In regards to the issue of file systems being non-searchable, it's definitely worth taking a look at compressed full-text indexes: http://pizzachili.dcc.uchile.cl/resources/compressed_indexes...

Under this scheme each file on disk would be stored as an index with constant factor overhead. The original file is not needed; all of the data can be decoded out of the index.

gshrikant 21 hours ago 1 reply      
While I'm not sure I agree with everything in the article, it does mention a point I've been thinking about for a while - configuration.

I really do think applications should try to zero in on a few standard configuration file formats - I don't have a strong preference for any one of them (although avoiding XML would be nice). It makes the system uniform and makes it easier to move between applications. Of course, applications can add extended sections to suit their needs.

Another related point is the location of configuration files - standard Linux/Unix has a nice hierarchy (/etc and /usr/local/etc for system-wide settings, and other locations for user-specific configuration), and I'm sure Windows and OS X have similar hierarchies too, but different applications still end up placing their configuration files in unintuitive places.

I find this lack of uniformity disturbing - especially because it looks so easy (at least on the surface) to fix and the benefits would be nice - easier to learn and scriptable.

A last unrelated point - I don't see why Linux distributions cannot standardize around a common repository - Debian and Ubuntu share several packages yet are forced to maintain separate package databases, and you can't easily mix and match packages between them. This replication of effort seems more ideological than pragmatic (of course, there probably are some practical reasons too). But I can't see why we can't all pool resources and share a common 'universal' application repository - maybe divided into 'Free', 'Non-Free', and 'Contrib/AUR'-like granular divisions so users have full freedom to choose the packages they want.

Like other things, I think these ideas have been implemented before but I'm a little disappointed these haven't made it into 'mainstream' OS userlands yet.

atmartins 10 hours ago 0 replies      
I'm surprised at all the negative, pessimistic views about looking forward with operating systems. I welcome conversations about what things could be like in the future. Obviously Google's pondering this with Fuchsia. Maybe it will take a more vertical approach, where only certain hardware could take advantage of some features for a while.
hyperfekt 21 hours ago 0 replies      
This would be neat, but isn't radical enough yet IMHO. If everything on the system is composed of pure functions operating on data, we can supercharge the OS and make everything both possible AND very simple. The whole notion of 'application' is really kind of outmoded.
PrimHelios 23 hours ago 2 replies      
This seems to me to be written by someone who uses MacOS almost exclusively, but has touched Windows just enough to understand it. The complete lack of understanding of IPC, filesystems, scripting, and other OS fundamentals is pretty painful.

>Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible. Application windows are just bitmaps at the end of the day, but the OS guys haven't built it because it's not a priority.

I'm an idiot when it comes to operating systems (and sometimes even in general), but even I know why there are issues with that. You need a standardized form of IPC between the two apps, which wouldn't happen because both devs would be convinced their way is the best. On top of that, it's a great way to get an antitrust case brought against you if you aren't careful [0]

>Why can't I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical?

Soft/hard links, fam. Even Windows has them.

>Why can['t] I sort by tags and metadata?

You can in Linux, you just need to know a few commands first.

>Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don't my native apps do that? Why can't I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.

Decent point IMO. There's a lot of native UI I have a hard time reading because it's so small. That said, I think bringing in the ability to zoom native widgets would bring in a lot of issues that HTML apps have.

>We should start by getting rid of things that don't work very well.

The author doesn't understand PCs. The entire point of these machines is backwards-compatibility, because we need backwards compatibility. I'm sitting next to a custom gaming PC and I have an actual serial port PCIe card because I need serial ports. Serial ports. In 2017. I'd be screwed if serial wasn't supported anymore.

I won't touch the rest of the article because there's a lot I disagree with, but he seems to just want to completely reinvent the "modern OS" as Chromebooks.

[0]: https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....

free_everybody 11 hours ago 1 reply      
Realistically, how difficult is it to write a brand new operating system like this? Could a few people with full-time jobs write a working model in a year? Maybe 10 people? Is it just too time consuming with too little of a payout? There should be more options; I think a lot of people can agree on that.
ZenPsycho 14 hours ago 0 replies      
this runs parallel to a lot of my thoughts. one thing that you don't quite address, and which i believe has derailed all efforts to do stuff like this, is the challenge of getting a large group of developers to agree on a single set of data formats. it is only once you nail that that many of the composition/copy/paste things become possible. some of these formats are easy: jpeg, png, utf-8. when it comes to something like the metadata schema for a song, or a recipe? that's a can of worms and flamewars.

to some extent you've got the DBFS thing that everything shares, but that's only of use for sharing insofar as you can get easy agreement about what fieldnames should be available for a kind of thing.

you've also got security concerns. if everything shares the same database, any random bit of code can ship that data off to a russian data mining op. or corrupt your song database. or encrypt everything and ransom it. you kind of address this by putting a layer of indirection here, and having security and access managed via the message bus, but this needs a UI, and i don't think apple, android, or facebook has really mastered the ui for permissions.

dgudkov 14 hours ago 0 replies      
Many interesting ideas and concepts, no question. However, if this were a startup pitch, I would struggle to see the killer application. I can see features here (some are very exciting!), but I'm failing to see a product. What kind of real-life problem would such an OS solve? Is that problem worth the billions of dollars required to develop a new OS and a toolkit of apps for it?
OOPMan 7 hours ago 0 replies      
Ah, the age-old assumption among developers:

Everything is terrible and broken, the only way to fix it is to throw everything in the bin and start from scratch.

Some things never change...

oconnor663 21 hours ago 1 reply      
> Wayland is supposed to fix everything, but it's been almost a decade in development and still isn't ready for prime time.

Mutter's Wayland implementation is the default display server for Gnome Shell right now. How much more prime time can you get?

untangle 7 hours ago 0 replies      
Perhaps the new OS prototypes could be built on top of a hypervisor. Yes, it's a layer. But building hypervisor-up would be a nice jump-start.
atemerev 22 hours ago 0 replies      
"A solution in search of a problem".

What problem of mine "piping my Skype stream to video analysis service" is supposed to solve? Why would I want to dock and undock different application parts to all places they don't belong? Etc.

blueworks 23 hours ago 0 replies      
The reference to Atom, and attributing its performance to the underlying Electron and Node.js runtime, is inappropriate, since another popular editor, Microsoft's VS Code, also uses Electron but is very fast and a pleasure to work with.
linguae 23 hours ago 2 replies      
I've been thinking a lot about the problem of modern desktop operating systems myself over the past year. I believe that desktop operating system environments peaked last decade. The Mac's high water mark was Snow Leopard, the Linux desktop appeared to have gained momentum with the increasing refinement of GNOME 2 during the latter half of the 2000's, and for me the finest Windows releases were Windows 2000 and Windows 7. Unfortunately both the Linux desktop and Windows took a step in the wrong direction when smartphones and tablets became popular and the maintainers of those desktops believed that the desktop environments should resemble the environments of these new mobile devices. This led to regressions such as early GNOME 3 and Windows 8. GNOME 3 has improved over the years and Windows 10 is an improvement over Windows 8, but GNOME 2 and Windows 7, in my opinion, are still better than their latest successors. Apple thankfully didn't follow the footsteps of GNOME and Windows, but I feel that the Mac has stagnated since Snow Leopard.

I agree with the author of this article that desktop operating systems should develop into workstation operating systems. They should be able to facilitate our workflows, and ideally they should be programmable (which I have some more thoughts about in my next paragraph). In my opinion the interface should fully embrace the fact that it is a workstation and not a passive media consumption device. It should, in my opinion, be a "back to basics" one, something like the classic Windows 95 interface or the Platinum Mac OS interface.

One of the thoughts that I've been thinking about over the years is the lack of programmability in contemporary desktop GUIs. The environments of MS-DOS and early home computers highly encouraged users to write programs and scripts to enhance their work environment. Unix goes a step further with the idea of pipes in order to connect different tools together. Finally, the ultimate form of programmability and interaction would resemble the Smalltalk environment, where objects could send messages to each other. What would be amazing would be some sort of Smalltalk-esque GUI environment, where GUI applications could interact with each other using message passing. Unfortunately Apple and Microsoft didn't copy this from Xerox, instead only focusing on the GUI in the early 1980s and then later in the 1980s focusing on providing an object-oriented API for GUI services (this would be realized with NeXTSTEP/OPENSTEP/Cocoa, which inspired failed copycat efforts such as Microsoft Cairo and Apple/IBM Taligent, but later on inspired successful platforms such as the Java API and Microsoft .NET). The result today is largely unprogrammable GUI applications, though there are some workarounds such as AppleScript and Visual Basic for Applications (though it's far from the Smalltalk-esque idea). The article's suggestion for having some sort of standardized JSON application interface would be an improvement over the status quo.

I would love to work on such an operating system: a programmable GUI influenced by the underpinnings of Smalltalk and Symbolics Genera plus the interface and UI guidelines of the classic Mac OS. The result would be a desktop operating system that is unabashedly for desktop computer users. It would be both easy to use and easy to control.

pier25 23 hours ago 0 replies      
I agree with some of the points stated. For years I've been thinking that a tag based file system would be superior to a folder based one in many aspects.

macOS has tags, but the UX/UI for interacting with them is really poor.

consultSKI 5 hours ago 0 replies      
>> think JSON but more efficient

Amen. Seriously tho, a lot of great insight.

saagarjha 17 hours ago 0 replies      
> if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins.

Mail.app supports plugins.

> Why can't I have a file in two places at once on my filesystem?

So, a hardlink?

> Why don't my native apps do that?

Dynamic text lets you do this, but it's mobile-only currently.

> have started deprecating the Applescript bindings which make it work underneath

Since when?

Afraid of Makefiles? Don't be matthias-endler.de
497 points by tdurden  3 days ago   267 comments top 41
ejholmes 3 days ago 15 replies      
Make's underlying design is great (it builds a DAG of dependencies, which allows for parallel walking of the graph), but there's a number of practical problems that make it a royal pain to use as a generic build system:

1. Using make in a CI system doesn't really work, because of the way it handles conditional building based on mtime. Sometimes you just don't want the condition to be based on mtime, but rather a deterministic hash, or something else entirely.

2. Make is _really_ hard to use to try to compose a large build system from small re-usable steps. If you try to break it up into multiple Makefiles, you lose all of the benefits of a single connected graph. Read the article about why recursive make is harmful: http://aegis.sourceforge.net/auug97.pdf

3. Let's be honest, nobody really wants to learn Makefile syntax.

As a shameless plug, I built a tool similar to Make and redo, but just allows you to describe everything as a set of executables. It still builds a DAG of the dependencies, and allows you to compose massive build systems from smaller components: https://github.com/ejholmes/walk. You can use this to build anything your heart desires, as long as you can describe it as a graph of dependencies.
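The DAG idea itself fits in a few lines. Here's a minimal sketch (not walk's or make's actual implementation) that repeatedly runs every target whose dependencies are already satisfied, in parallel; the targets are invented placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy dependency graph: "app" needs both object files; the object files
# have no dependencies and can be built in parallel.
deps = {
    "app":    ["lib.o", "main.o"],
    "lib.o":  [],
    "main.o": [],
}
built = []

def build(target):
    # In a real system this would run a command; here we just record order.
    built.append(target)

def walk(deps):
    remaining = {t: set(ds) for t, ds in deps.items()}
    with ThreadPoolExecutor(max_workers=4) as pool:
        while remaining:
            # Everything whose dependencies are all done is ready to run.
            ready = [t for t, ds in remaining.items() if not ds]
            if not ready:
                raise RuntimeError("dependency cycle")
            list(pool.map(build, ready))  # independent targets in parallel
            for t in ready:
                del remaining[t]
            for ds in remaining.values():
                ds.difference_update(ready)

walk(deps)
print(built)  # the two .o targets (in either order), then 'app'
```

This is the whole trick behind `make -jN`: the graph tells you exactly which work is safe to do concurrently.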

chungy 3 days ago 7 replies      
I think the primary thing that makes people fear Makefiles is that they try learning it by inspecting the output of automake/autoconf, cmake, or other such systems. These machine-generated Makefiles are almost always awful to look at, primarily because they have several dozen workarounds and least-common-denominators for make implementations dating back to the 1980s.

A properly hand-tailored Makefile is a thing of beauty, and it is not difficult.

bluejekyll 3 days ago 7 replies      
Make is awesome. I have always loved make, and got really good with some of its magic. After switching to Java years ago, we collectively decided, "platform independent tools are better", and then we used ant. Man was ant bad, but hey! It was platform independent.

Then we started using maven, and man, maven is ridiculously complex, especially adding custom tasks, but at least it was declarative. After getting into Rust, I have to say, Cargo got the declarative build just right.

But then, for some basic scripts I decided to pick Make back up. And I wondered, why did we move away from this? It's so simple and straightforward. My suggestion, like others are saying, is keep it simple. Try and make declarative files, without needing to customize to projects.

I do wish Make had a platform independent strict mode, because this is still an issue if you want to support different Unixes and Windows.

p.s. I just thought of an interesting project. Something like oh-my-zsh for common configs.

raimue 3 days ago 1 reply      
By using pseudo targets only in the example and not real files, the article misses the main point of targets and dependencies: target rules will only be executed if the dependencies changed. make will compare the time of last modification (mtime) on the filesystem to avoid unnecessary compilation. To me, this is the most important advantage of a proper Makefile over a simple shell script always executing lots of commands.
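make's core rule is small enough to sketch. Something like this mtime comparison (with invented filenames) is essentially what it evaluates for each target:

```python
import os
import tempfile
import time

# A sketch of make's rebuild rule: a target is out of date when it is
# missing, or when any dependency has a newer mtime than the target.
def out_of_date(target, deps):
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(d) > t for d in deps)

d = tempfile.mkdtemp()
src = os.path.join(d, "hello.c")
obj = os.path.join(d, "hello.o")

with open(src, "w") as f:
    f.write("int main(void){return 0;}")
assert out_of_date(obj, [src])       # target missing: would build

with open(obj, "w") as f:
    f.write("fake object file")      # pretend the compile ran
assert not out_of_date(obj, [src])   # up to date: would skip

future = time.time() + 10
os.utime(src, (future, future))      # "touch" the source into the future
assert out_of_date(obj, [src])       # dependency newer: would rebuild
```

This is also exactly the behavior the grandparent comment complains about for CI: the decision is based purely on timestamps, not on whether the content actually changed.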
rdtsc 3 days ago 4 replies      
Sneaky pro-tip - use Makefiles to parallelize jobs that have nothing to do with building software. Then throw a -j16 or something at it and watch the magic happen.

I was stuck on an old DoD redhat box and it didn't have gnu parallel or other such things and co-worker suggested make. It was available and it did the job nicely.

syncsynchalt 3 days ago 4 replies      
Today's simple makefiles are the end result of lessons hard learned. You'd be horrified to see what the output of imake looked like.

From memory here's a Makefile that serves most of my needs (use tabs):

 SOURCE=$(wildcard *.c)
 OBJS=$(patsubst %.c,%.o, $(SOURCE))
 CFLAGS=-Wall
 # define CFLAGS and LDFLAGS as necessary

 all: name_of_bin

 name_of_bin: $(OBJS)
 	$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)

 %.o: %.c
 	$(CC) $(CFLAGS) -c -o $@ $<

 clean:
 	rm -f *.o name_of_bin

 .PHONY: clean all

martin_ky 3 days ago 0 replies      
Due to their versatility, Makefiles can be used creatively beyond building software projects. Case in point: I used a very simple hand-crafted Makefile [1] to drive massive Ansible deployment jobs (thousands of independently deployed hosts) and work around several Ansible design deficiencies (inability to run whole playbooks in parallel rather than just individual tasks, hangs when deploying to hosts over unstable connections, etc.)

The principle was to create a make target and rule for every host. The rule runs ansible-playbook for this single host only. Running the playbook for e.g. 4 hosts in parallel was as simple as running 'make -j4'. At the end of the make rule, an empty file with the name of the host was created in the current directory - this file was the target of the rule - it prevented running Ansible for the same host again - kind of like Ansible retry file, only better.

I realize that Ansible probably is not the best tool for this kind of job, but this Makefile approach worked very well and was hacked together very quickly.

[1] https://gist.github.com/martinky/819ca4a9678dad554807b68705b...

AceJohnny2 3 days ago 3 replies      
"Build systems are the bastard stepchild of every software project" -- me a years ago

I've work in embedded software for over a decade, and all projects have used Make.

I have a love-hate relationship with Make. It's powerful and effective at what it does, but its syntax is bad and it lacks good datastructures and some basic functions that are useful when your project reaches several hundred files and multiple outputs. In other words, it does not scale well.

Worth noting that JGC's Gnu Make Standard Library (GMSL) [1] appears to be a solution for some of that, though I haven't applied it to our current project yet.

Everyone ends up adding their own half-broken hacks to work around some of Make's limitations. Most commonly, extracting header file dependency from C files and integrating that into Make's dependency tree.

I've looked at alternative build systems. For blank-slate candidates, tup [2] seemed like the most interesting for doing native dependency extraction and leveraging Lua for its datastructures and functions (though I initially rejected it due the the silliness of its front page.) djb's redo [3] (implemented by apenwarr [4]) looked like another interesting concept, until you realize that punting on Make's macro syntax to the shell means the tool is only doing half the job: having a good language to specify your targets and dependency is actually most of the problem.

Oh, and while I'm around I'll reiterate my biggest gripe with Make: it has two mechanisms to keep "intermediate" files, .INTERMEDIATE and .PRECIOUS. The first does not take wildcard arguments, the second does but it also keeps any half-generated broken artifact if the build is interrupted, which is a great way to break your build. Please can someone better than me add wildcard support to .INTERMEDIATE.

[1] http://gmsl.sourceforge.net

[2] http://gittup.org/tup/Also its creator, Mike Shal, now works at Mozilla on their build system

[3] http://cr.yp.to/redo.html

[4] https://github.com/apenwarr/redo

rrmm 3 days ago 3 replies      
Makefiles are easy for small to medium sized projects with few configurations. After that it seems like people throw up their hands and use autotools to deal with all the recursive make file business.

Most attempts to improve build tools completely replace make rather than adding features. I like the basic simplicity and the syntax, (the tab thing is a bit annoying but easy enough to adapt to).

It'd be interesting to hear everyone's go to build tools.

qznc 3 days ago 1 reply      
I love Make for my small projects. It still could be better. Here is my list:

* Colorize errors

* Hide output unless the command fails

* Automatic help command which shows (non-file) targets

* Automatic clean command which deletes all intermediate files

* Hash-based update detection instead of mtime

* Changes in "Makefile" trigger rebuilds

* Parallel builds by default

* Handling multi-file outputs

* Continuous mode which watches the file system for changes and rebuilds automatically

I know of no build system which provides these features and is still simple and generic. Tup is close, but it fails with LaTeX, because of the circular dependencies (generates and reads aux file).

wyldfire 3 days ago 2 replies      
> You've learned 90% of what you need to know about make.

That's probably in the ballpark, anyways.

The good (and horrible) stuff:

- implicit rules

- target specific variables

- functions

- includes

I find that with implicit rules and includes I can make really sane, 20-25 line makefiles that are not a nightmare to comprehend.

For a serious project of any scope, it's rare to use bare makefiles, though. recursive make, autotools/m4, cmake, etc all rear their beautiful/ugly heads soon enough.

But make is my go-to for a simple example/reproducible/portable test case.

mauvehaus 3 days ago 1 reply      
I feel like any discussion of make is incomplete without a link to Recursive Make Considered Harmful[0]. Whether you agree with the premise or not, it does a nice job of introducing some advanced constructs that make supports and provides a non-contrived context in which you might use them.

[0] http://aegis.sourceforge.net/auug97.pdf

Animats 3 days ago 2 replies      
The trouble with "make" is that it's supposed to be driven by dependencies, but in practice it's used as a scripting language.If the dependency stuff worked, you would never need

 make clean; make


misnome 3 days ago 0 replies      
Almost every build system (where I think it isn't controversial to say make is most often used) looks nice and simple with short, single-output examples to demonstrate the basis of a system.

It's when you start having hundreds of sources, targets, external dependencies, flags and special cases that it becomes hard to write sane, understandable Makefiles, which it presumably why people tend to use other systems to generate makefiles.

So sure, understanding what make is, and how it works is probably important, since it'll be around forever. But there are usually N better ways of expressing a build system, nowadays.

nstart 3 days ago 4 replies      
So I saw this and thought why not give it a try. How hard could it be right? My goal? Take my bash file that does just this (I started go just yesterday so I might be doing cross compiling wrong :D) :


export GOPATH=$(pwd)

export PATH=$PATH:$GOPATH/bin

go install target/to/build

export GOOS=darwin

export GOARCH=amd64

go install target/to/build


which should be simple. Right? Set environment variables, run a command. Set another environment variable, run a command.

45 minutes in and I haven't been able to quite figure it out just yet. I definitely figured out how to write my build.sh files in less than 15 minutes for sure when I started out.

pkkim 3 days ago 2 replies      
One important tip is that the commands under a target each run sequentially, but in separate shells. So if you went to set env vars, cd, activate a Python virtualenv, etc to affect the next command, you need to make them a single command, like:

 target: cd ./dir; ./script.sh

epx 3 days ago 1 reply      
Those who don't understand Make are condemned to reimplement it, poorly.
bauerd 3 days ago 1 reply      
I remember trying to wrap my head around the monstrosity that is Webpack. Gave up and used make, never looked back since
flukus 3 days ago 2 replies      
Personal blog spam, I learned make recently too and discovered it was good for high level languages as well, here is an example of building a c# project: http://flukus.github.io/rediscovering-make.html .

Now the blog itself is built with make: http://flukus.github.io/building-a-blog-engine.html

DangerousPie 3 days ago 2 replies      
If you want all the greatness of Makefiles without the painful syntax I can highly recommend Snakemake: https://snakemake.readthedocs.io/en/stable/

It has completely replaced Makefiles for me. It can be used to run shell commands just like make, but the fact that it is written in Python allows you to also run arbitrary Python code straight from the Makefile (Snakefile). So now instead of writing a command-line interface for each of my Python scripts, I can simply import the script in the Snakefile and call a function directly.


 rule make_plot: input: data = "{name}.txt" output: plot = "{name}.png" run: import my_package my_package.plot(input['data'], output['plot'], name = wildcards['name'])
Another great feature is its integration with cluster engines like SGD/LSF, which means it can automatically submit jobs to the cluster instead of running them locally.

rcarmo 3 days ago 1 reply      
These days, most of my projects have a Makefile with four or five simple commands that _just work_ regardless of the language, runtime or operating system in use:

- make deps to setup/update dependencies

- make serve to start a local server

- make test to run automated tests

- make deploy to package/push to production

- make clean to remove previously built containers/binaries/whatever

There are usually a bunch of other more specific commands or targets (like dynamically defined targets to, say, scale-frontends-5 and other trickery), but this way I can switch to any project and get it running without bothering to lookup the npm/lein/Python incantation du jour.

Having sane, overridable (?=) defaults for environment variables is also great, and makes it very easy to do stuff like FOOBAR=/opt/scratch make serve for one-offs.

Dependency management is a much deeper and broader topic, but the usefulness of Makefiles to act as a living document of how to actually run your stuff (including documenting environment settings and build steps) shouldn't be ignored.

(Edit: mention defaults)

rcthompson 3 days ago 0 replies      
For people who are more comfortable in Python, I highly recommend Snakemake[1]. I use it for both big stuff like automating data analysis workflows and small stuff like building my Resume PDF from LyX source.

[1]: https://snakemake.readthedocs.io/en/stable/

Joky 3 days ago 0 replies      
Make is fine for simple cases, but I'm working on a project that is based on buildroot right now, and it is kind of a nightmare: make just does not provide any good way at this scale to keep track of what's going on and inspect / understand what goes wrong. Especially in the context of a highly parallel build with some dependencies are gonna get missing.

In general also all the implicit it has makes it hard to predict what can happen. Again when you scale to support a project that would be 1) large and 2) wouldn't have a regular structure.

On another smaller scale: doing an incremental build of LLVM is a lot faster with Ninja compared to Make (crake-generated).

Make is great: just don't use it where it is not the best fit.

gtramont 3 days ago 0 replies      
Here's some tips I like to follow whenever writing Makefiles (I find them joyful to write): http://clarkgrubb.com/makefile-style-guide
rileytg 3 days ago 1 reply      
wow i've been feeling like not knowing make has been a major weakness of mine, this article has finally tied all my learning together. i feel totally capable of using make now. thank you.
vacri 3 days ago 1 reply      
One very important thing missing from this primer is that Make targets are not 'mini-scripts', even though they look like it. Every line is 'its own script' in its own subshell - state is not passed between lines.

Make is scary because it's arcane and contains a lot of gotcha rules. I avoided learning Make for a long time. I'm glad I did learn it in the end, though I wouldn't call myself properly fluent in it yet. But there are a ton of gotchas and historical artifacts in Make.

mauvehaus 3 days ago 3 replies      
Has anybody successfully used make to build java code? I realize there are any number of other options (ant, maven, and gradle arguably being the most popular).

In fact, I realize that the whole idea of using make is probably outright foolish owing to the intertwined nature of the classpath (which expresses runtime dependencies) and compile-time dependencies (which may not be available in compiled form on the classpath) in Java. I'm merely curious if it can be done.

zwischenzug 3 days ago 1 reply      
This is great, and needs saying.

Recently I wrote a similar blog about an alternative app pattern that uses makefiles:


fiatjaf 3 days ago 0 replies      
Makefiles are simple, but 99% of the existing Makefiles are computer-generated incomprehensible blobs. I don't want that.
user5994461 3 days ago 1 reply      
>>> Congratulations! You've learned 90% of what you need to know about

The next 90% will be to learn that Make breaks when having tabs and spaces in the same file, and your developers all use slightly different editors that will mix them up all the time.

leastangle 3 days ago 3 replies      
I did not know people are afraid of Makefiles. Maybe a nave question, but what is so scary about make?
systemz 3 days ago 0 replies      
Instead of makefile I can recommend Taskfile https://hackernoon.com/introducing-the-taskfile-5ddfe7ed83bd

Simple to use without any magic.

mschuster91 3 days ago 0 replies      
Please, don't ship your own Makefiles. Yes, autotools sucks - but there is one thing that sucks more: no "make uninstall" target.

Good people do not ship software without a way to get rid of it, if needed.

quantos 3 days ago 0 replies      
I had written Non Recursive Makefile Boilerplate (nrmb) for C, which should work in large projects with recursive directory structure. There is no need to manually add source file names in makefile, it automaically do this. One makefile compiles it all. Of course, it isn't perfect but it does the job and you can modify it for your project. Here is the link


Have a look :)

knowsuchagency 3 days ago 0 replies      
Make is fine, but I think we have better tools nowadays to do the same things.

Even though it may not have been originally intended as such, I've found Fabric http://docs.fabfile.org/en/1.13/tutorial.html to be far far more powerful and intuitive as a means of creating CLI's (that you can easily parametrize and test) around common tasks such as building software.

athenot 3 days ago 0 replies      
After using the various javascript build processes, I went back to good old makefiles and the result is way simpler. I have a target to build the final project with optimizations and a target to build a live-reload version of the project, that watches for changes on disk and rebuilds the parts as needed (thanks to watchify).

This works in my cases because I have browserify doing all the heavy lifting with respect to dependency management.

ojosilva 3 days ago 0 replies      
Opinion poll. I'm writing a little automation language in YAML and I was wondering if people prefer a dependency graph concept where tasks run parallel by default, unless stated as dependency, or a sequential set of instructions where tasks only run in parallel if explicitly "forked".

I'd say people would lean towards the former, but time and real world experience has shown that sequential dominates everything else.

elnygren 3 days ago 0 replies      
I almost always roll a basic Makefile for even simple web projects. PHONY commands like "make run" and "make test" in every project make context switching a bit more easier.

While things like "npm start" are nice, not all projects are Node.js. In my current startup we're gonna have standardised Makefiles in each project so its easy to build, test, run, install any microservice locally :)

bitwize 3 days ago 1 reply      
Or just use cmake and save yourself time, effort, and pain.
erAck 2 days ago 0 replies      
Take a look at the LibreOffice gbuild system, completely written in GNU make "language". And then come back saying you're not afraid of make ;-)

Still, it probably would be much harder, if possible at all (doubted for most), to achieve the same with any other tool mentioned here.

brian-armstrong 3 days ago 3 replies      
Using Cmake is so much nicer than make, and it's deeply cross-platform. Cmake makes cross-compiling really easy, while with make you have to be careful and preserve flags correctly. Much nicer to just include a cmake module that sets up everything for you. Plus it can generate xcode and visual studio configs for you. Doing make by hand just seems unncessary.
Why the Brain Needs More Downtime (2013) scientificamerican.com
430 points by tim_sw  3 days ago   104 comments top 10
laydn 3 days ago 12 replies      
I've been noticing that I'm more tired and need more downtime in days where I make, (or forced to make), critical decisions.

If I start the day by knowing what to do, then I don't really feel the burnout. For example, if I'm designing either a piece of hardware or firmware, and I know how to tackle the problem and it is just the matter of implementing it, I can code/design for 10 hours straight and when the workday ends, I still feel full of energy.

However, if the day is full of "decisions" (engineering or managerial), at the end of the day, I feel exhausted (and irritable, according to my family)

jmcgough 3 days ago 4 replies      
I find that I struggle with offices... you're stuck there for 8+ hours (even if you don't work that way, you need to create an impression), but after several hour of intense focus and the noise and chaos of an open office, I can feel drained and anxious. Some days I'll walk to a nearby park with wifi after work, meditate for a short bit, and then code from there. My focus and creativity comes right back after a bit of downtime in a relaxing space.
hasenj 3 days ago 8 replies      
I've always had a hard time sleeping/waking on time. What you might call a "night owl".

I'm starting to notice that on weekdays I actually perform better with 6 hours of sleep rather than 8 or 9. Then on the weekend I would "sleep in" to make up for the lost sleep time.

For some reason, if I sleep for 8 or 9 hours, I wake up feeling like I don't want to do anything. I don't feel sluggish or anything. I just feel "satisfied". Like there's nothing to be done. I can just "be". I can't bring myself to focus on any specific task. Nothing feels urgent.

When I sleep 6 hours, somehow I can focus more.

This is combined with not consuming caffeine. If I drink coffee after I have slept only for 6 hours, it makes me tired and sluggish.

ihateneckbeards 3 days ago 1 reply      
I noticed I can be intensely focused for about 4 to 6 hours max, after that I'll be "washed out" and I become error prone for complicated tasks

Unfortunately the 9 hour in office format constrain me to stay on my seat, so I'll try work on easier things at that time while beeing quite unproductive

How to we bring this fact to companies? It seems only the most ""progressive"" companies like Facebook or Google really understood this

dodorex 3 days ago 2 replies      
"Some researchers have proposed that people are also physiologically inclined to snooze during a 2 P.M. to 4 P.M. nap zoneor what some might call the afternoon slumpbecause the brain prefers to toggle between sleep and wake more than once a day."

Anecdotally, Thomas Edison was said to sleep only 3-4 hours a night and take frequent (very frequent) naps throughout the day.


danreed07 3 days ago 1 reply      
I'm ambivalent about this. I have a friend whose a Harvard math major, I've seen him work. He sleeps late and wakes early; when we work together, he always messes up my schedule by calling me in the middle of the night. I'm all tired and groggy the next day, and he's totally fine.

I think some people just inherently have more energy than others.

uptownfunk 3 days ago 3 replies      
I think I get a good six hours of actual work in the office. And then I need to check out and take a shower. Something about that after work shower just brings my focus and clarity right back. But if I have to crank with my team for a 12-15 hour day, after max 8 hours, we're all just physically there, but mentally have checked out long before that.

On sleep, 5-6 hours is optimal for me. Too much can be bad, I feel groggy and have brain-fog the rest of the day. I can get by on fewer for one day, but more than that and it becomes painful. I think a lot of this also has to do with lifestyle. How often and when do you eat, have sex, get sunlight, drink water, go out doors, etc. Many levels can be played with here.

Would be interested in hearing any hacks for getting by on less sleep.

qaq 3 days ago 0 replies      
Best option I experienced was working remotely from PST on EST schedule. So start at 6am done at 3 eat + have a drink take 1 hour nap and you have 8 hours which after nap fills like a whole new day.
nisa 3 days ago 0 replies      
I'm having a hard time organising and especially switching tasks and getting meaningful work done when multiple things that are unrelated fall together. Having a single thing do to and beeing able to just leave work would be great but at the moment I'm freelancing and having multiple jobs and doing sysadmin-style work, learning theory and programming in a new language really just kills me and I'm not getting much done. Once I get traction in a certain task it's okay but the constant switching is killing me.
pedrodelfino 22 hours ago 0 replies      
Great article. I remember seeing these ideas on Cal Newport's book, "Deep Work". I need more discipline to execute my "downtime plan".
What next? graydon2.dreamwidth.org
443 points by yomritoyj  2 days ago   146 comments top 26
fulafel 2 days ago 8 replies      
Again my pet ignored language/compiler technology issue goes unmentioned: data layout optimizations.

Control flow and computation optimizations have enabled use of higher level abstractions with little or no performance penalty, but at the same time it's almost unheard of to automatically perform (or even facilitate) the data structure transformations that are daily bread and butter for programmers doing performance work. Things like AoS->SoA conversion, compressed object references, shrinking fields based on range analysis, flattening/dernormalizing data that is used together, converting cold struct members to indirect lookups, compiling different versions of the code for different call sites based on input data, etc.

It's baffling considering that everyone agrees memory access and cache footprint are the current primary perf bottlenecks, to the point that experts recommend considering on-die computation is free and counting only memory accesses in first-order performance approximations.

z1mm32m4n 2 days ago 3 replies      
Grayson's very first answer to "what's next" is "ML modules," a language feature probably few people have experienced first hand. We're talking about ML-style modules here, which are quite precisely defined alongside a language (as opposed to a "module" as more commonly exists in a language, which is just a heap of somewhat related identifiers). ML modules can be found in the mainstream ML family languages (Standard ML, Ocaml) as well as some lesser known languages (1ML, Manticore, RAML, and many more).

It's really hard to do justice explaining how amazing modules are. They capture the essence of abstraction incredibly well, giving you plenty of expressive power (alongside an equally powerful type system). Importantly, they compose; you can write functions from modules to modules!

(This is even more impressive than you think: modules have runtime (dynamic) AND compile time (static) components. You've certainly written functions on runtime values before, and you may have even written functions on static types before. But have you written one function that operates on both a static and a dynamic thing at the same time? And what kind of power does this give you? Basically, creating abstractions is effortless.)

To learn more, I recommend you read Danny Gratzer's "A Crash Course on ML Modules"[1]. It's a good jumping off point. From there, try your hand at learning SML or Ocaml and tinker. ML modules are great!

[1]: https://jozefg.bitbucket.io/posts/2015-01-08-modules.html

Animats 2 days ago 3 replies      
One big problem we're now backing into is having incompatible paradigms in the same language. Pure callback, like Javascript, is fine. Pure threading with locks is fine. But having async/await and blocking locks in the same program gets painful fast and leads to deadlocks. Especially if both systems don't understand each other's locking. (Go tries to get this right, with unified locking; Python doesn't.)

The same is true of functional programming. Pure functional is fine. Pure imperative is fine. Both in the same language get complicated. (Rust may have overdone it here.)

More elaborate type systems may not be helpful. We've been there in other contexts, with SOAP-type RPC and XML schemas, superseded by the more casual JSON.

Mechanisms for attaching software unit A to software unit B usually involve one being the master defining the interface and the other being the slave written to the interface. If A calls B and A defines the interface, A is a "framework". If B defines the interface, B is a "library" or "API". We don't know how to do this symmetrically, other than by much manually written glue code.

Doing user-defined work at compile time is still not going well. Generics and templates keep growing in complexity. Making templates Turing-complete didn't help.

borplk 2 days ago 5 replies      
I'd say the elephant in the room is graduating beyond plaintext (projectional editor, model-based editor).

If you think about it so many of our problems are a direct result of representing software as a bunch of files and folders with plaintext.

Our "fancy" editors and "intellisense" only goes so far.

Language evolution is slowed down because syntax is fragile and parsing is hard.

A "software as data model" approach takes a lot of that away.

You can cut down so much boilerplate and noise because you can have certain behaviours and attributes of the software be hidden from immediate view or condensed down into a colour or an icon.

Plaintext forces you to have a visually distracting element in front of you for every little thing. So as a result you end up with obscure characters and generally noisy code.

If your software is always in a rich data model format your editor can show you different views of it depending on the context.

So how you view your software when you are in "debug mode" could be wildly different from how you view it in "documentation mode" or "development mode".

You can also pull things from arbitrarily places into a single view at will.

Thinking of software as "bunch of files stored in folders" comes with a lot baggage and a lot of assumptions. It inherently biases how you organise things. And it forces you to do things that are not always in your interest. For example you may be "forced" to break things into smaller pieces more than you would like because things get visually too distracting or the file gets too big.

All of that stuff are arbitrary side effects of this ancient view of software that will immediately go away as soon as you treat AND ALWAYS KEEP your software as a rich data model.

Hell all of the problems with parsing text and ambiguity in sytnax and so on will also disappear.

gavanwoolery 2 days ago 2 replies      
I like to read about various problems in language design, as someone who is relatively naive to its deeper intricacies it really helps broaden my view. That said I have seen a trend towards adding various bells and whistles to languages without any sort of consideration as to whether it actually, in a measurable way, makes the language better.

The downside to adding an additional feature is that you are much more likely to introduce leaky abstraction (even things as minor as syntactical sugar). Your language has more "gotchas", a steeper learning curve, and a higher chance of getting things wrong or not understanding what is going on under the hood.

For this reason, I have always appreciated relatively simple homoiconic languages that are close-to-the-metal. That said, the universe of tools and build systems around these languages has been a growing pile of cruft and garbage for quite some time, for understandable reasons.

I envision the sweet spot lies at a super-simple system language with a tightly-knit and extensible metaprogramming layer on top of it, and a consistent method of accessing common hardware and I/O. Instant recompilation ("scripting") seamlessly tied to highly optimized compilation would be ideal while I am making a wishlist :)

mcguire 1 day ago 3 replies      
[Aside: Why do I have the Whiley (http://whiley.org/about/overview/) link marked seen?]

I was mildly curious why Graydon didn't mention my current, mildly passionate affair, Pony (https://www.ponylang.org/), and its use of capabilities (and actors, and per-actor garbage collection, etc.). Then, I saw,

"I had some extended notes here about "less-mainstream paradigms" and/or "things I wouldn't even recommend pursuing", but on reflection, I think it's kinda a bummer to draw too much attention to them. So I'll just leave it at a short list: actors, software transactional memory, lazy evaluation, backtracking, memoizing, "graphical" and/or two-dimensional languages, and user-extensible syntax."

Which is mildly upsetting, given that Graydon is one of my spirit animals for programming languages.

On the other hand, his bit on ESC/dependent typing/verification tech. covers all my bases: "If you want to play in this space, you ought to study at least Sage, Stardust, Whiley, Frama-C, SPARK-2014, Dafny, F, ATS, Xanadu, Idris, Zombie-Trellys, Dependent Haskell, and Liquid Haskell."

So I'm mostly as happy as a pig in a blanket. (Specifically, take a look at Dafny (https://github.com/Microsoft/dafny) (probably the poster child for the verification approach) and Idris (https://www.idris-lang.org/) (voted most likely to be generally usable of the dependently typed languages).

carussell 2 days ago 5 replies      
All this and handling overflow still doesn't make the list. Had it been the case that easy considerations for overflow were baked into C back then, we probably wouldn't be dealing with hardware where handling overflow is even more difficult than it would have been on the PDP-11. (On the PDP-11, overflow would have trapped.) At the very least, it would be the norm for compilers to emulate it whether there was efficient machine-level support or not. However, that didn't happen, and because of that, even Rust finds it acceptable to punt on overflow for performance reasons.
mcguire 2 days ago 0 replies      
"Writing this makes me think it deserves a footnote / warning: if while reading these remarks, you feel that modules -- or anything else I'm going to mention here -- are a "simple thing" that's easy to get right, with obvious right answers, I'm going to suggest you're likely suffering some mixture of Stockholm syndrome induced by your current favourite language, Engineer syndrome, and/or DunningKruger effect. Literally thousands of extremely skilled people have spent their lives banging their heads against these problems, and every shipping system has Serious Issues they simply don't deal with right."


statictype 2 days ago 1 reply      
So Graydon works at Apple on Swift?

Wasn't he the original designer of Rust and employed at Mozilla?

Surprised that this move completely went under my radar

rtpg 2 days ago 2 replies      
The blurring of types and values as part of the static checking very much speaks to me.

I've been using Typescript a lot recently with union types, guards, and other tools. It's clear to me that the type system is very complex and powerful! But sometimes I would like to make assertions that are hard to express in the limited syntax of types. Haskell has similar issues when trying to do type-level programming.

Having ways to generate types dynamically and hook into typechecking to check properties more deeply would be super useful for a lot of web tools like ORMs.

bjz_ 2 days ago 2 replies      
I would love to see some advancements into distributed, statically typed languages that can be run on across cluster, and that would support type-safe, rolling deployments. One would have to ensure that state could be migrated safely, and that messaging can still happen between the nodes of different versions. Similar to thinking about this 'temporal' dimension of code, it would be cool to see us push versioning and library upgrades further, perhaps supporting automatic migrations.
dom96 1 day ago 0 replies      
Interesting to see the mention of effect systems. However, I am disappointed that the Nim programming language wasn't mentioned. Perhaps Eff and Koka have effect systems that are far more extensive, but as a language that doesn't make effect systems its primary feature I think Nim stands out.

Here is some more info about Nim's effect system: https://nim-lang.org/docs/manual.html#effect-system

simonebrunozzi 2 days ago 1 reply      
I would have preferred a more informative HN title, instead of a semi-clickbaity "What next?", e.g.

"The next big step for compiled languages?"

hderms 2 days ago 0 replies      
Fantastic article. This is the kind of stuff I go to Hacker News to read. Had never even heard of half of these conceptual leaps.
lazyant 1 day ago 3 replies      
What would be a good book / website to learn the concepts & nomenclature in order to understand the advanced language discussions in HN like this one?
ehnto 2 days ago 5 replies      
I know I am basically dangling meat into the lion's den with this question: how has PHP7 done in regards to the Modules section, or the modularity he speaks of?

I am interested in genuine and objective replies of course.

(Yes your joke is probably very funny and I am sure it's a novel and exciting quip about the state of affairs in 2006 when wordpress was the flagship product)

msangi 2 days ago 1 reply      
It's interesting that he doesn't want to draw too much attention to actors while they are prominent in Chris Lattner's manifesto for Swift [1]

[1] https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...

touisteur 1 day ago 0 replies      
jancsika 2 days ago 1 reply      
I'm surprised build time wasn't on the list.

Curious and can't find anything: what's the most complex golang program out there, and how long does it take to compile?

leeoniya 2 days ago 3 replies      
it's interesting that Rust isn't mentioned once in his post. i wonder if he's disheartened with the direction his baby went.
ilaksh 2 days ago 0 replies      
I think at some point we will get to projection editors being mainstream for programming, and eventually things that we normally consider user activities will be recognized as programming when they involve Turing complete configurability. This will be an offshoot of more projection editing.

I also think that eventually we may see a truly common semantic definitional layer that programming languages and operating systems can be built on. It's just like the types of metastructures used as the basis for many platforms today, but with the idea of creating a true uber-platform.

Another futuristic idea I had would be a VR projectional programming system where components would be plugged and configured in 3d.

Another idea might be to find a way to take the flexibility of advanced neural networks and make it a core feature of a programming language.

AstralStorm 2 days ago 3 replies      
Extra credit for whoever implements logic proofs on concurrent applications.
platz 2 days ago 2 replies      
whats wrong with software transactional memory?
rurban 2 days ago 1 reply      
No type system improvements to support concurrency safety?
baby 2 days ago 0 replies      
Can someone edit the title to something clearer? Thanks!
Blood Test That Spots Tumor-Derived DNA in Early-Stage Cancers hopkinsmedicine.org
359 points by ncw96  3 days ago   63 comments top 11
gourneau 3 days ago 4 replies      
I work for another player Guardant Health. We are the Liquid Biopsy market leaders right now. We just raised $360M Series E from SoftBank.

If you find this type of thing interesting and want to be part of it, we are hiring lots of folks. My team is looking for bioinformaticians, Python hackers, and machine learning people. Please reach out to me if you want to know more jgourneau@guardanthealth.com

AlexDilthey 3 days ago 0 replies      
All fair enough. The two big immediate challenges in the field are i) that the tumor-derived fraction of total cfDNA can be as low as 1:10000 (stage I) and ii) that it is difficult to make Illumina sequencing more accurate than 1 error in 1000 sequenced bases (in which case the 1:10000 signal is drowned out). This paper uses some clever statistical tricks to reduce Illumina sequencing error; one of these tricks is to leverage population information, i.e. the more samples you sequence the better your understanding of (non-cancer-associated) systematic errors. This follows a long tradition in statistical genetics of using multi-sample panels to improve analysis of individual samples. There are also biochemical approaches like SafeSeq or Duplex Sequencing to reduce sequencing error.
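To make that signal-to-noise point concrete, here is a back-of-the-envelope sketch; the sequencing depth is an assumed round number for illustration, while the two rates are the ones quoted above:

```typescript
// Assumed: 100,000 reads covering a single genomic position.
const depth = 100_000;
const tumorFraction = 1 / 10_000; // stage I tumor-derived cfDNA fraction
const errorRate = 1 / 1_000;      // per-base sequencing error rate

// Expected read counts at that position:
const expectedSignal = depth * tumorFraction; // true mutant reads
const expectedNoise = depth * errorRate;      // erroneous reads

// 10 true mutant reads vs. 100 error reads: without error correction,
// the tumor signal is outnumbered 10 to 1.
console.log(expectedSignal, expectedNoise);
```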

Not-so-obvious point #1 is that the presence of cancer-associated mutations in blood != cancer. You find cancer-associated mutations in the skin of older probands, and assumedly many of the sampling sites would never turn into melanomas. A more subtle point is that cfDNA is likely generated by dying cells, i.e. a weak cancer signature in blood might also be indicative of the immune system doing its job.

Point #2 is that it's not necessarily about individual mutations, which are, due to the signal-to-noise ratio alluded to above, difficult to pick up. One can also look at the total representation of certain genes in cfDNA (many cancers have gene amplifications or deletions, which are easier to pick up because they affect thousands of bases at the same time), and the positioning of individual sequenced molecules relative to the reference genome. It seems that these positions are correlated with gene activities (transcription) in the cells that the cfDNA comes from, and cancer cells have distinct patterns of gene activity.

conradev 3 days ago 1 reply      
There is also Freenome, which raised a $65m Series A to bring something similar to market:

> Last year, we raised $5.5 million to prove out the potential of this technology. Now, it's time to make sure that it's safe and ready for the broader population.


McKayDavis 3 days ago 1 reply      
I haven't read the referenced study, but I'm sure this is using the same (or very similar) cell free DNA (cfDNA) sequencing techniques currently used clinically for Non Invasive Prenatal Testing (NIPT) to screen for genetic defects such as trisomy 21 (Down Syndrome).

NIPT is a non-invasive blood screening test that is quickly becoming the clinical standard of care. Many insurance companies now cover the entire cost of NIPT screening for at-risk pregnancies (e.g. women of "Advanced Maternal Age" (35yo+)). The debate is moving to whether it should be utilized/covered for average-risk pregnancies as well.

[1] http://capsprenatal.com/about-nipt/

[2] https://www.genomeweb.com/molecular-diagnostics/aetna-wont-c...

hprotagonist 3 days ago 1 reply      
Slowly but surely. This isn't even close to a real diagnostic, but it's a hopeful proof of concept.

I really do wish detection studies would publish a ROC curve, though, or at least d'.

maddyboo 3 days ago 4 replies      
Possibly a silly question, but is it possible for a 'healthy' person who doesn't have any cancer risk factors to get a test like this done?
melling 3 days ago 3 replies      
According to Craig Venter, early detection is what we need to eliminate cancer:


I guess most are treatable if caught early?

amitutk 3 days ago 3 replies      
Didn't Grail raise a billion dollars to do just this?
AlexCoventry 3 days ago 2 replies      
> They found none of the cancer-derived mutations among blood samples of 44 healthy individuals.

Is 98% specificity adequate for a cancer test?

ziggzagg 3 days ago 1 reply      
When this test has a near 100% success rate, how does it help the patients? Can it really prevent cancer?
jonathanjaeger 3 days ago 0 replies      
Tangent: I'm invested in a small-cap stock, Sophoris Bio, that's in a P2B study for prostate cancer with a drug developed out of Johns Hopkins called PRX302 (Topsalysin).

That and the article about blood tests shows there's a lot they're working on for noninvasive or minimally invasive procedures to help prevent cancer early on.

Opioid makers made payments to one in 12 U.S. doctors brown.edu
265 points by metheus  2 days ago   98 comments top 17
lr4444lr 2 days ago 2 replies      
Maybe it's because Americans just have this cognitive dissonance that their trusted doctor could be any less than 100% conscientious about their health, but we need to plainly face the fact: if members of the press were able to write exposés about drug makers' fudging the data about the addictiveness and effectiveness of their products, then doctors, with their medical training and responsibility over actual people's lives, should have proceeded with more caution and not written scripts mindlessly to get rid of every tiny pain patients had just because they kept asking for something. It's just unconscionable.

EDIT: this survey was also very damning: http://www.chicagotribune.com/news/local/breaking/ct-prescri...

elipsey 2 days ago 3 replies      
Reminds me of what Rostand said about murder: "Kill one man, and you are a murderer. Kill millions of men, and you are a conqueror. Kill them all, and you are a god."

Sell one OxyContin and you're a drug dealer; sell a million and you're a C-level.

lootsauce 2 days ago 0 replies      
I have two relatives who died from prescription opioid addiction and abuse, and I don't think a few payments here and there are what motivates doctors to prescribe these drugs at a higher rate. Maybe they do, maybe not. The fact is, they are powerful drugs that can stop pain AND they make LOTS of money, so they get pushed as the best option.

The thing that is in question in a doctors mind is, can I say this is the best option. Thats what the face-time with reps, meals, conferences etc are doing, giving the MD a perception that this is best practice. It's the professional cover to prescribe what everyone knows is a highly addictive and dangerous narcotic.

If the same kind of money were spent on informing, reminding and reminding again, face-time with addiction prevention advocates, conferences on the opioid epidemic, payments for speaking on alternatives to opioids for pain treatment, giving doctors the facts about these drugs, the addiction and death rates, the impact on families and communities of the inevitable proportion of people who will become addicted and of those who will die, it will be much much harder to say this is a best practice.

But even then, doctors are pushed hard to deal with as many patients as possible. A quick answer that deals with the immediate problem is what the patient wants, and it's all the doc has time and support from the system to give. This situation lends itself to the potential for those who truly benefit, the makers of these drugs, to take advantage of the situation and push drugs they know will make people addicted, leading to higher use and profits. Lost lives and destroyed families be damned.

ransom1538 2 days ago 2 replies      
Feel free to browse doctors' opioid counts here. I was able to match them to their actual profiles. Take into account their field, but even with that, the numbers are ridiculous. If you are in "Family Practice" and prescribe opioids 9167 times per year, you probably have a very sore hand.


ams6110 2 days ago 3 replies      
"the average payment to physicians was $15, the top 1 percent of physicians reported receiving more than $2,600 annually in payments"

Neither is enough to sway most physicians IMO. This seems to me like trying to stir up a scandal where there really isn't one.

I did hear on the radio today that 90% of prescription opiates are sold in the USA and Canada, with the bulk of that being the USA. Other countries treat pain more holistically.

gayprogrammer 2 days ago 0 replies      
>> Q: What connection might there be between drug-maker payments to physicians and the current opioid use epidemic?

The article is pure speculation. They did not correlate the payments made to doctors with the prescriptions those doctors made, nor even more broadly with national prescription rates.

This article just makes the implied assumption that doctors push pills onto patients. I don't discount that at one time doctors may have been incentivized to play it fast and loose with pain pills, but those days are LONG gone now.

I would like to see research on the population in terms of predisposition to addiction and susceptibility to chemical dependence.

11thEarlOfMar 2 days ago 1 reply      
I don't like the 'pigs at the trough' image of this type of report. There are almost certainly pigs, but there is much more to resolving it than just revoking some licenses or throwing some people in jail.

Standard practice in business of all types is to take clients out for a meal to talk business. Usually, the meal setting enables a different type of legitimate, sober interaction. Many types of business are conducted this way. Some companies have policies that limit the value of what a salesperson can share with a client, for example, Applied Materials limits the value of any type of entertainment by a vendor to $100. This is good corporate policy to inhibit undue influence by vendors.

But it is not 'a payment'.

Likewise, it is pretty easy to see that pharma would want a Dr. who is prescribing their medication and has a positive story to tell to speak at one of their seminars. The Dr. might say that his time is worth $x, and the Pharma needs to cover his travel expenses, and then he'd consent to presenting. In this case, any fees paid would be considered payment. The question is, how much is being paid and does that payment present undue influence. Many doctors are independent contractors and can choose to do this type of activity without a policy to override or limit the value of it. On the other hand, state medical boards which license physicians should have policies that limit all medical and pharmaceutical companies in how they can influence physicians.

liveoneggs 2 days ago 3 replies      
jasonkostempski 2 days ago 0 replies      
Are there any rules that if a doctor has such a deal, it must be clearly disclosed to the patient verbally and in writing? I think that would help not only deter doctors from making the deal, at the risk of being viewed as untrustworthy, but also help people who blindly trust their doctor to maybe think twice before accepting their solution. I don't think there's a fix for the patients who just want the drug, and as long as they're informed, consenting adults, it should be their prerogative.
esm 2 days ago 1 reply      
Payments may affect prescribing, but I think that system factors count for more than many people realize. By way of an example, imagine the following case, which is reasonably common at the outpatient medicine office I am rotating through:

A 46 yo M with diabetes, hypertension, a 30 pack year smoking history, and low back pain that has been treated with oxycodone ever since a failed back operation 1.5 years ago presents to your office for routine follow-up. It's 10am, the hospital allots 15 minutes for routine appointments, and your next patient is in the waiting room. You are his physician -- what do you prioritize?

Smoking, diabetes, and hypertension are a perfect storm for a heart attack in the next 10 years, so how much time do you want to spend optimizing antihypertensive meds and glucose control? You could talk to him about quitting smoking, which is pretty high-yield since it would lower his cardiovascular and cancer risk. On the other hand, he doesn't seem particularly motivated to quit right now.

You would like to see him exercise more and eat better, since his blood sugars are not too bad yet, and you might be able to spare him daily insulin injections. But, his back pain is so bad that walking is difficult and exercise is out of the question. Tylenol and ibuprofen only "take the edge off". Oxycodone is the one thing that seems to really help. He asks you to refill his prescription, especially because "the pain is so bad at night, I can't sleep without it".

His quality-of-life is already poor, and it would become miserable if you took away his opioid script without providing some other form of pain control. You believe that he might benefit from physical therapy and time. He is willing to try PT, but he is adamant that he will not be able to "do all of the stretches and stuff" without taking oxycodone beforehand.

You now have 7 minutes to come up with a plan he agrees on (you're there to help him, after all), put in your orders, and read up on the next patient. How do you want to allocate your time? What if you suggest cutting down on his oxycodone regimen and he pushes back?

I don't know if there is a good answer. But these situations happen all the time, and someone has to make a decision. Most doctors are normal people. The different backgrounds, personalities, willingness to engage in confrontation or teaching, and varying degrees of concern for public health vs. individual patient needs, etc. lead to a variety of approaches. In the end, I think that pharma payments have a marginal effect on most doctors who have families, bosses, insurance constraints, a full waiting room, and are faced with the patient above.

refurb 2 days ago 8 replies      
This should be kept in context. Let's say the manufacturer presented new data at a conference. During that presentation they provided lunch and refreshments. Every one of those doctors who attended will now show up in the CMS database.

Do we think that a $15 lunch is going to influence a physician to over-prescribe a drug?

robmiller 2 days ago 1 reply      
There is an irony here that the US invaded Afghanistan, the world's largest opium exporter[1].

[1] https://en.wikipedia.org/wiki/Opium_production_in_Afghanista...

ddebernardy 2 days ago 1 reply      
Is this really news? John Oliver ran a piece on the topic and the industry's many other dubious practices over 2 years ago, and I'm quite sure he wasn't the first to try to raise awareness.


vkou 2 days ago 2 replies      
Not related to payments, but related to opioids:

My father broke his thumb a few weeks ago, while operating a woodchipper. After getting a cast, he went to see a specialist, who recommended that K-wires be surgically installed - small metal rods that go into his thumb, until it heals, at which point they will be pulled out.

He got local anesthetic, got the wires installed, and got sent home. Because he lives in Canada, they gave him nothing for the pain. Two days later, the pain died down, and he's now waiting for the bones to heal.

In America, I can't imagine that doctor would get many positive reviews from his patients, for not prescribing painkillers. Market forces would push him towards over-prescribing... And statistically, some of his patients will become addicted.

zeep 2 days ago 0 replies      
And they tell them that their patients suffer from "pseudo-addiction" and should get more of the drugs...
CodeWriter23 2 days ago 0 replies      
If it walks like a marketing program and quacks like a marketing program, guess what...
oleg123 2 days ago 1 reply      
bribes - or payments?
Ellen Pao: My lawsuit failed. Others won't thecut.com
452 points by gkanai  9 hours ago   362 comments top 25
dang 7 hours ago 0 replies      
All: this article was flagged but we've turned the flags off because it contains significant new information. Threads about sexism have upticked in contentiousness lately (as has everything else, it seems), so would everyone please take care to follow these rules?

1. Please post civilly and substantively, or not at all;

2. If you have a substantive point to make, make it thoughtfully; otherwise please don't comment until you do.

Yes, there's redundancy there; we appear to need it.


strken 4 hours ago 4 replies      
Articles like this really make me aware that men and women like Ellen Pao and her former partners live in a separate parallel world: three degrees, $10 million golden parachutes, private jet flights to ski resorts, affairs with a creepy married co-worker in Germany, machismo-driven muscling for VC connections, bisexual finance wizards who kickstart an Ivy-League LGBT program, dominance games over which chair an exec sits in, PR firms hired to smear uppity former partners... it's like a movie.

I worry that the media looks at cases like this as typical of the experience of women in tech, and downplays the impact of obvious and unobjectionable steps like "recruit junior devs from the ranks of biology grads" and "give expectant mothers maternity leave" because systemic changes aren't as interesting as diversity training or a VC partner's lawsuit.

Joeri 5 hours ago 11 replies      
It's easy to get hung up on the particulars of Pao's story and get sidetracked into defending or judging her, but I feel that is beside the point. I am more interested in the wider notion of why she wrote this article: to point out that sexism in tech is a thing, and that it shouldn't be.

I'm wondering though: is this just about sexism, or is it about professionalism and maturity? Getting hit on by someone higher up the hierarchy than you can make it impossible to do your job, so that behavior is clearly unprofessional. But getting yelled at by your boss for shipping a bug is also unprofessional, and can also make it a toxic work environment. I'm not saying the two are the same, just that both are examples of unprofessional behavior that many places will tolerate.

Isn't it time we have conversations about what it means to be a professional in tech? Maybe other industries suffer less from these things because they have a longer history and have more guild-like working practices, where professional behavior is more clearly defined. In tech people get away with wildly unprofessional behavior as long as "they get stuff done", and personally I never felt that was acceptable.

Maybe this stuff is also sort of everywhere. Plenty of industries have toxic working relationships. Why isn't professionalism part of standard education tracks? I studied CS and I never learned about what it means to be a professional software developer. How do you have productive conversations with coworkers? How do you organize your work effectively? All of these things you're supposed to figure out on your own, but looking around I can tell that mostly people never do, or only do so after decades of getting it wrong.

dreta 6 hours ago 7 replies      
An interesting read. Though, for me, VC in general is not the kind of job that favors people who are nice. When people are rude to you, or try to use you, there's a multitude of ways you can interpret that. For Ellen, it's sexism. What baffled me was that in the opening paragraphs she felt the need to point out that the powerful men were white. For me, it set the tone for the whole article, and painted a clear picture of her attitude towards the case and people involved. Given the current political climate in SV, it's a poor attempt at manipulation, and doesn't help her come off as reasonable.
ralusek 6 hours ago 8 replies      
I think that it's really interesting that conversations about venture capital and conversations about engineering both get to be lumped into a more general conversation about sexism in Silicon Valley.

A lot of engineers have a good bit to say against the existence of widespread sexism in engineering, myself included. Engineering in computer science has long been characterized by a nearly nonexistent barrier to entry outside of one's capabilities and their relevance to the position. Even traditional, and technically very relevant, lateral predictors of output, such as formal education, are largely ignored. Your accomplishments and your capabilities in interviews are ultimately what get you the hire, with very few exceptions. Anybody who has been in a hiring position can speak to the utilitarian pursuit of the placement; race and sex are the last thing on the mind come hiring time.

All of that being said, however, I don't find Ellen Pao's recounting of her experience in the world of venture capital remotely hard to believe. I'm actually relatively certain that she's spared us a good bit of the details. But this isn't engineering, this is finance: quite rarely about the utility of any particular individual in a role, and almost entirely centered around pretty horrible characteristics. Cronyism is the most important characteristic in the club: trading favors, trading connections, looking the other way, getting away with this, getting away with that. The whole thing is a zero-sum game, because nobody within is creating any value; you're only ever vying for a piece of the pie baked by the outsiders who actually produce things. As Pao points out, any partner you have largely considers you a mechanism by which they have less investment capital available themselves, and any senior sees you as a way to bubble up the greatest picks for them to skim off the top.

The point is that the business is not about merit, it's about being in the club and playing ball. To deviate from the standards of the club just means you're less of a sure thing when it comes to being a crony, and it doesn't take much to understand why a woman is an outsider in a club like this.

So when we talk about sexism in Silicon Valley, let's please not conflate these two very different businesses. One of them is made up of worker bees, and we don't care what kind of bee you are as long as you're outputting honey. The other one is literally Wall Street pretending like it's anything but.

bane 2 hours ago 4 replies      
I think it's very important to separate Pao from the reprehensible behavior she describes in her workplace. To put it politely, Pao does not come off well in her own personal story. But she was also swimming in a garbage pit.

I have no doubt that what she writes about is true, and that she probably even holds back on much of the frat-house nonsense. There can be entire semesters spent at school exploring other avenues that she might have pursued, and in most cases she really was a victim of a truly terrible environment nobody should have to work in.

But for those of us who remember the case well from when she was at Reddit, she also isn't only an innocent victim and made some really bankrupt decisions all on her own. I don't get the impression from this article that she's really done any soul-searching since then, and she even tries to soft-pedal her affair with a co-worker as just a little bit of a schoolgirl crush, and doesn't mention various other questionable behaviors that became public as a result of her lawsuit.

There's a lot of thought that equality in the workplace should not mean "start slotting women into the existing power structures and processes", but should instead take into account differences in style and ideas that women might naturally bring. But it goes both ways, terrible behavior also can manifest itself in different ways. As innocent as Pao seems to think she is in her own bad behaviors (which she either ignores here or downplays), her former co-workers also don't think there was any particular harm -- they're both wrong. To truly respect Pao, you have to also respect that she was capable of making bad decisions and that she needs to own them. In my mind, her behavior is not as bad as her former colleagues, but she could probably throw a softball and hit them.

Pao's story is important to tell, but she lacks the personal credibility for people to care. I really feel like if she were a bit more open about where she also messed up, and her motivations, she would come across as a much more empathetic story teller that would get more people's attention and give her better credibility than her former colleagues. But her general unwillingness to publicly confront, and get out in front of, her own behavior leaves her story vulnerable to various naysayers and that's a shame.

Al-Khwarizmi 4 hours ago 2 replies      
Sexism in Silicon Valley or sexism among the corporate elites?

I doubt the average tech worker, who doesn't travel in private jets, casually talks about porn actresses and sex workers at their job.

However, I have often heard and read such anecdotes about elite executives, also in non-tech sectors. I'd say that kind of attitude is more related to the impunity that comes with power than with tech or non-tech.

redm 1 hour ago 1 reply      
Many of the issues raised by Ellen seem to be bad traits of VCs when dealing with people in general. I didn't have any "good" experiences with VCs, for many of the same reasons. They look down at you; they are casually dismissive; they are manipulative; they lie as necessary, etc. They seem to be only as friendly as needed in case they suddenly decide they want to invest.

I'm curious how similar or different my experience would have been pitching Ellen Pao.

abtinf 39 minutes ago 0 replies      
"I didn't have time to go through all my emails to figure out which ones to give Kleiner, so during the discovery process we gave them practically everything, some 700,000 emails, most of which we could have legally withheld. ... During depositions, they brought up everything from my nanny's contract to an exercise I'd done in therapy where I listed resentments. Emails to friends, emails to my husband, emails to other family members, even emails to my lawyers."

This is the single most incompetent legal behavior I've ever heard of. It shows such a profound lack of judgement that, by itself, it is enough to blacklist Pao and every lawyer advising her.

kelukelugames 13 minutes ago 0 replies      
>He wanted me to go to school to learn to be a stand-up comic.

I'm doing this! Some guys are really good at laughing and joking with management, but I am awful. One time a partner walked away from me without saying a word because he didn't like my comment about sports. I felt like that one black woman on Insecure when she shows up to a hockey game to suck up to her bosses and nobody remembers her.

trabant00 6 hours ago 6 replies      
One thing to remember: it's only one side of the story, from somebody who had/has a serious financial interest in the outcome of the scandal. It kind of worries me how most comments take the allegations for granted.
ternaryoperator 7 hours ago 1 reply      
For a fairly interesting summary of KP's replies / defense, see this article [1].

[1] https://en.wikipedia.org/wiki/Pao_v._Kleiner_Perkins

dkobran 6 hours ago 1 reply      
Really interesting glimpse into this world.

It's unfortunate that this article is getting negative feedback on HN. I think we all need to be careful not to belittle/condemn issues that don't personally affect us.

probably_wrong 7 hours ago 3 replies      
Interesting read. I wonder if the "paid army of trolls" she mentions followed her to Reddit. That would definitely explain the campaign against her.
minipci1321 1 hour ago 1 reply      
Something doesn't add up... When my co-worker or manager makes sexist or racist jokes, my experience tells me a) that the immediate issue to be solved is not sexism or racial discrimination in my line of business, but this specific person -- the only solution is that this person be replaced, not taught to behave (let alone persuaded to change views) -- and b) that his/her personal issues very probably don't stop at women and minorities.

But I understand that we are having this whole lot of discussions recently simply because there are too many of these types accumulated in one place, which makes replacing them all even less probable.

So how does the industry believe this can be successfully addressed at this scale? I'd think the behaviour of the people on the plane she described is simply about basic human decency. So, does everything now need to be codified by HR? "Don't speak of porn stars when on the plane with other people", "Don't bring into conversation how you tortured small animals as a kid"?

lettergram 13 minutes ago 0 replies      
How is the turning off of flags determined, Dang?

Not trying to be rude, but unless there is a set criteria it seems prone to bias...

natecavanaugh 4 hours ago 2 replies      
I totally agree sexism is a thing, not just in Silicon Valley, but in every strata of our society. And I'm saying this not just as a life long conservative, but also as someone who has had many strong women in his life who have absorbed a ton of crap simply because they are women, including my mother, grandmother and basically almost every woman I've ever met.

I don't like blaming societal issues for a person's achievements (or lack thereof), but at some point you have to admit that as the gender that bewilderingly sends dick pics to strangers, talks crudely about women behind their backs, and views them as a means to a physical end, maybe we don't see how we interact with women, or at least won't admit that many of our gender demean women in ways that equate to bullying in one form or another.

I normally don't like victim culture, or people who assume the intentions of others to prove discrimination, but I don't think we need that for many cases when it comes to women.

Men and women, in general, have different societal and interpersonal strengths, and I love all of it (yet don't assume or exclude those strengths from either gender).

But if I'm honest, men and women both have their own forms of aggressive behavior in getting ahead, but men do it even more aggressively, frequently and overtly than women, and often at the expense of women.

Personally, I love each gender's strength. But I absolutely loathe the behavior of many, if not most, of my gender's treatment of anyone in a "weaker" position.

We must do better, for no other reason than each of our strengths is a responsibility to use it to help those who are victimized.

We can hide behind a meritocracy of which we enjoy hidden benefits, but at the end of the day, we should ask, is it because we really are better suited to a task, or have we institutionally created a system that enforces our biases and preferences?

It's probably a mix of both, but until we see our own part in this, we can't actually improve anything.

dandare 3 hours ago 0 replies      
Ellen Pao may have lost the trial but she changed me. In 2015 I saw a tech world ruled by meritocracy, with sexism and discrimination being the evils of the Mad Men era. Two and a half years (and many other affairs) later, I see how naive I was and how much more difficult work in tech is for women.
jessaustin 2 hours ago 0 replies      
Most VCs wouldn't tolerate such boorish behavior on the part of execs at their investments. (They might let it slide for a time, for tactical reasons...) Why do limited partners put up with it when the VCs do it?
FuriouslyAdrift 2 hours ago 0 replies      
A much better article from a few years ago, written to show more than one perspective: https://www.vanityfair.com/style/scandal/2013/03/buddy-fletc...
jansho 4 hours ago 0 replies      
I'm honestly disappointed that many commenters here are making a lot of judgment about Pao's intentions. It's like no one read the piece properly. Here, let me help you (and get your walls down!)


Claim 1: She's over-reading an already tough environment (which btw men also face) for sexism.

See the Ajit Nazre harassment section. Also note that Pao wasn't the only one targeted.

See the male-only ski trips/ club visits/ etc section. It's bullshit to say that it's not their fault that women are uncomfortable talking about porn stars. It's unprofessional anyway. And seriously, a top venture firm can't afford to accommodate all members of its team?

"I was later told that they didn't invite any women because women probably wouldn't want to share a condo with men."

Still a big presumption to make; Pao may actually be OK with this.

I get the male buddy bonding thing, but arguably doesn't this make it harder for women to feel part of the team too?

See also the Flipboard section. Now this I'm very familiar with; make a suggestion, be promptly turned down, another guy makes the same suggestion later and wahey! Back slaps all around.

See also the section where she was excluded from meetings, discussions and 'scoops.'

See also the section where others (white, male) more junior than her received promotion and she didn't.


Claim 2: Pao is out for attention.

Seriously I can't even -

OK, I'm not going to do a reference here. Think about it: Pao is well-educated and has a very good position at a top firm. She's also just started a family, and instability is the last thing anyone in that position wants. It will take a very serious claim to risk all that.

She was also very aware that the odds are overwhelmingly against her. See section when she asked other women who had sued powerful firms over discrimination. One even said "It's a complete mismatch of resources. They don't fight fair. Even if you win, it will destroy your reputation."

See section about how her firm launched an aggressive media campaign to discredit her, including "click farms" to spread negative rumours about her incompetency etc. How Vanity Fair (!) suggested that her marriage was a sham because her husband previously had gay relationships.

She did lose, but for what it's worth, she did end up getting attention - the right sort I would say (see the "Pao Effect".)


Claim 3: Pao did it for the money.

See section when she was offered money to leave quietly.

Though "When I spoke to the COO, he asked how much I wanted in order to quietly leave. 'I want no less than what Ajit gets,' I said, which I suspected was around $10 million. The COO gasped."

The same Ajit as above.

Oh yes, she's releasing a new book, but seriously, is the profit really enough to have made money her primary motivation for suing over harassment?

And as mentioned above, even if she won, she would have lost her reputation. There's a reason everyone applauds whistle-blowers, and no one wants to hire them.


Claim 4: Sexism is only an emergent behaviour of a toxic industry. To fix sexism, we have to instil professionalism.

This is true and a good intention, but forgive me, this is also belittling the issue of sexism.

For sure, women aren't the only ones with negative experiences. You could say the same for other minority groups such as black and LGBTQ people, and yes, even overworked white, male juniors.

But sexism is a major part of what makes the environment toxic. Women make up 50% of the global population! And clearly many women in tech experience uncomfortable issues rooted in sexism. If you want to instil more professionalism, you will also need to cover sexism in the training.

See section where the firm admits that it didn't even have an HR department until recently.

See also comments that claim women are using their gender as a card. So f---ing patronising.

See section where the firm has very low female presence on the board.

It's true that there is a shortage of women in tech, making recruitment harder. But this is why we need to go beyond professionalism training. To get more gender balance, we need to start at schools. So for those tired of the whole "women are victims" yada-yada (I admit that I also feel the same), for what it's worth: people are now paying attention and working harder to encourage girls into STEM subjects, and to make the tech environment more inclusive.


Running out of time, but I hope this is enough food for thought.

moon_of_moon 23 minutes ago 0 replies      
Two possibilities:

1] She brought value to the table and was discriminated against.

2] She brought little value to the table, was one of those whiny entitled people (ooh look, I have all these degrees, now I should automatically be promoted) who cannot accept the truth, resorted to sleeping with a senior partner to compensate, and when it all fell apart she got offended.

It could be either.

She has to convince the world it was [1] not [2].

devnonymous 6 hours ago 0 replies      
Reading the comments here I get a feeling not a lot of commenters are reading the entire post (please do), or are reading after already having decided that they won't agree with what they read (if that's the case, please don't bother with the comments and move on to the next HN article).

I thought it was an insightful read as much as I dislike the arrogance that SV seems to possess despite its ignorance of the rest of the world... And the accompanying culture that spills out of that combination of arrogance and ignorance.

SV culture has a lot of problems, sexism is one of them.

0xbear 6 hours ago 0 replies      
That was better than I thought it would be. A real, "no holds barred" account, and well written as well. I wish there were more of these, including perhaps accounts of what others have to say about Ellen herself. This really takes the patina off the whole SV venture capital scene. It's like there are different sets of rules, one for them, and the other is for the peons who work for them. A peon would be let go after the first, like, five seconds of that private jet conversation.
bobcostas55 2 hours ago 8 replies      
>and downplays the impact of obvious and unobjectionable steps like [..] "give expectant mothers maternity leave"

This is a bit off-topic, but maternity leave is neither obvious nor unobjectionable. There is significant empirical evidence showing that policies such as long paid maternity leave harm women's wages and career progression.

Have a look at:

* The impact of Nordic countries family friendly policies on employment, wages, and children https://link.springer.com/article/10.1007/s11150-007-9023-0

* Is there a glass ceiling over Europe? Exploring the gender pay gap across the wages distribution https://www.econstor.eu/bitstream/10419/92046/1/2005-25.pdf

Docker Is Raising Funding at $1.3B Valuation bloomberg.com
319 points by moritzplassnig  2 days ago   273 comments top 19
bane 2 days ago 5 replies      
I feel like this is one of those valuations which makes sense contextually, but not based on any sort of business reality.

Docker reminds me a lot of the PKZIP utilities. For those who don't remember, back in the late 80s the PKZIP utilities became a kind of de facto standard on non-Unixes for file compression and decompression. The creator of the utilities was a guy named Phil Katz, who meant to make money off the tools but, as was the fashion at the time, released them as basically feature-complete shareware.

Some people did register, and quite a few companies registered to maintain compliance so PKWare (the company) did make a bit of money, but most people didn't bother. Eventually the core functionality was simply built into modern Operating Systems and various compatible clones were released for everything under the sun.

Amazingly the company is still around (and even selling PKZIP!) https://www.pkware.com/pkzip

Katz turned out to be a tragic figure http://www.bbsdocumentary.com/library/CONTROVERSY/LAWSUITS/S...

But my point is, I know of many many (MANY) people using Docker in development and deployment, and I know of nobody at all who's paying them money. I'm sure paying customers exist (they make revenue from somewhere, presumably), but Docker is basically just critical infrastructure at this point, becoming an expected part of the OS rather than a company.

new299 2 days ago 12 replies      
I'm so curious to understand how you pitch Docker at a 1.3B USD valuation. With, I assume, a potential valuation of ~10B USD needed to give the investors a decent exit?

Does anyone have an insight into this?

Looks like Github's last valuation was at 2BUSD. That also seems high, but I can understand this somewhat better as they have revenue, and seem to be much more widely used/accepted than Docker. In addition to that I can see how Github's social features are valuable, and how they might grow into other markets. I don't see this for Docker...

foota 2 days ago 3 replies      
My first reaction was that I was surprised it wasn't higher.

My second reaction was incredulity at how ridiculous my first reaction was.

locusofself 2 days ago 2 replies      
I used docker for a while last year and attended Dockercon. I was really excited about it and thought it was going to solve many of my problems.

But with how complicated my stack is, it just didn't make sense to use ultimately. I loved the idea of it, but in the end good old virtual machines and configuration management can basically do most of the same stuff.

I guess if you want to pack your servers to the brim with processes and shave off whatever performance hit you get from KVM or XEN, I get it.

But the idea of the filesystem layers and immutable images just kind of turned into a nightmare for me when I asked myself "how the hell am I going to update/patch this thing?"

Maybe I'm crazy, but after a lot of excitement it seemed more like an extra layer of tools to deal with more than anything.

raiyu 2 days ago 0 replies      
Monetizing open source directly is a bit challenging because you end up stuck in the same services model as everyone else, which is basically selling various support contracts to the Fortune 100-500.

Forking a project into an enterprise (paid-for) version and limiting those features in the original open source version creates tension in the community, and usually isn't a model that leads to success.

Converting an open source project directly into paid-for software or a SaaS model is definitely the best route, as it reduces head count and allows you to be a software company instead of a services company.

Perhaps best captured by Github wrapping git with an interface and community, then directly selling a SaaS subscription, and eventually an enterprise hosted version that is still delivered on a subscription basis, just behind the corporate firewall.

Also of note is that Github didn't create git itself; instead it was built from a direct need that developers themselves saw, which means they thought "what is the product I want," rather than "we built and maintain git, so let's do that and eventually monetize it."

ahallock 2 days ago 5 replies      
Docker still has a long way to go in terms of local development ergonomics. Recently, I finally had my chance to onboard a bunch of new devs and have them create their local environment using Docker Compose (we're working on a pretty standard Rails application).

We were able to get the environments set up and the app running, but the networking is so slow as to be pretty much unusable. Something is wrong with syncing the FS between Docker and the host OS. We were using the latest Docker for Mac. If the out-of-the-box experience is this bad, it's unsuitable for local development. I was actually embarrassed.

z3t4 2 days ago 9 replies      
I don't understand containers. First you go through great pains sharing and reusing libraries. Then you make a copy of all the libraries and the rest of the system for each program!?
throw2016 2 days ago 4 replies      
Docker generated value from the LXC project, aufs, overlay, btrfs and a ton of other open source projects, yet few people know about these projects or their authors, and in the case of the extremely poorly marketed LXC project, even what it is, thanks to negative marketing by a Docker ecosystem hellbent on 'owning containers'.

Who are the authors of aufs or overlayfs? Should these projects work with no recognition while VC-funded companies with marketing funds swoop down and extract market value without giving anything back? How has Docker contributed back to all the projects it is critically dependent on?

This does not seem like a sustainable open source model. A lot of critical problems around containers exist in places like layers and the kernel, and these will not get fixed by Docker but by aufs, overlayfs and the kernel subsystems; given that most don't even know the authors of these projects, how will this work?

There has been a lot of misleading marketing on Linux containers right from 2013 here on HN itself and one wishes there was more informed discussion that would correct some of this misinformation, which didn't happen.

eldavido 2 days ago 0 replies      
I wish people would stop talking about valuation this way, emphasizing the bullshit headline valuation.

The reality is that (speculating), they probably issued a new class of stock, at $x/share, and that class of stock has all kinds of rights, provisions, protections, etc. that the others don't, and may or may not have any bearing whatsoever on what the other classes of shares are worth.

Steeeve 2 days ago 1 reply      
The guy who came up with chroot in the first place is kicking himself.
kev009 2 days ago 1 reply      
Do they actually have any significant revenue? I love developer tools companies, but there are several tools upstarts that have no proven business model. They look like really bad gambles in terms of VC investment, unless you can get in early enough to unload to other fools.
contingencies 2 days ago 1 reply      
I worked with LXC since 2009, then personally built a cloud provider agnostic workflow interface superior in scope to Docker in feature set[1] between about 2013-2014 as a side project to assist with my work (managing multi-DC, multi-jurisdiction, high security and availability infrastructure and CI/CD for a major cryptocurrency exchange). (Unfortunately I was not able to release that code because my employer wanted to keep it closed source, but the documentation[2] and early conception[3] has been online since early days.) I was also an early stage contributor to docker, providing security related issues and resolutions based upon my early LXC experience.

Based upon the above experience, I firmly believe that Docker could be rewritten by a small team of programmers (~1-3) within a few months.

[1] Docker has grown to add some of this now, but back then had none of it: multiple infrastructure providers (physical bare metal, external cloud providers, own cloud/cluster), normalized CI/CD workflow, pluggable FS layers (eg. use ZFS or LVM2 snapshots instead of AUFS - most development was done on ZFS), inter-service functional dependency, guaranteed-repeatable platform and service package builds (network fetches during package build process are cached)...

[2] http://stani.sh/walter/cims/

[3] http://stani.sh/walter/pfcts/

vesak 1 day ago 0 replies      
Check out Chef's https://habitat.sh for one fresher take on all this. It moves the containerization approach closer to something that feels like Arch Linux packaging, with a pinch of Nix-style reproducibility. Looks very promising at this point, even if a bit rough on the edges still.
jdoliner 2 days ago 2 replies      
There's a couple of things in this article that I don't think are true. I don't think Ben Golub was a co-founder of Docker. Maybe he counts as a co-founder of Docker but not of Dotcloud? That seems a bit weird though. I also am pretty sure Docker's headquarters are in San Francisco, not Palo Alto.
StanislavPetrov 2 days ago 0 replies      
As someone who witnessed the 2000 tech bubble pop, I feel like Bill Murray in Groundhog Day, except unfortunately this time it's not just tech. It's going to end very badly.
frigen 2 days ago 1 reply      
Unikernels are a much better solution to the problems that Docker solves.
slim 2 days ago 2 replies      
Docker is funded by In-Q-Tel
elsonrodriguez 2 days ago 0 replies      
That's a lot of money for a static compiler.
jaequery 2 days ago 2 replies      
why are they called Software Maker?
Try Out Rust IDE Support in Visual Studio Code rust-lang.org
264 points by Rusky  2 days ago   82 comments top 12
modeless 2 days ago 5 replies      
I have been using this for a few weeks, as a newcomer to Rust. Although it has some issues, I would not try to develop Rust code without it. It is incredibly useful and works well enough for day-to-day use.

Some of the issues I've found:

* Code completion sometimes simply fails to work. For example, inside future handlers (probably because this involves a lot of type inference).

* When errors are detected only the first line of the compiler error message is accessible in the UI, often omitting critical information and making it impossible to diagnose the problem.

* It is often necessary to manually restart RLS, for example when your Cargo.toml changes. It can take a very long time to restart if things need to be recompiled, and there isn't much in the way of progress indication.

* This is more of a missing feature, but type inference is a huge part of Rust, and it's often difficult to know what type the type inference engine has chosen for parts of your code. There's no way to find out using RLS in VSCode that I've seen, or go to the definition of inferred types, etc.

Other issues I've seen as a newcomer to Rust:

* It's very easy to get multiple versions of the same dependency in your project by accident (when your dependencies have dependencies), and there is no compiler warning when you mix up traits coming from different versions of a crate. You just get impossible-seeming errors.

* Compiler speed is a big problem.

* The derive syntax is super clunky for such an essential part of the language. I think Rust guides should emphasize derive more, as it's unclear at first how essential it really is. Almost all of my types derive multiple traits.

* In general, Rust is hard. It requires a lot more thinking than e.g. Python or even C. As a result, my forward progress is much slower. The problems I have while coding in Rust don't even exist in other languages. I'm sure this will improve over time but I'm not sure it will ever get to the point where I feel as productive as I do in other languages.

sushisource 2 days ago 1 reply      
Rust is not only a fantastic language, but the level of community involvement from the devs is just completely unlike any other language I've seen in a very long time. That really makes me excited that it will be adopted in the industry over time and ideally replace some of the nightmare-level C++ code out there.
KenoFischer 2 days ago 2 replies      
I haven't had the chance to try the Rust language mode, but I've been using VS Code for all my julia development lately, and I'm pretty impressed. It's quite a nice editor. I avoided using it for a very long time because I thought it'd match Atom's slowness due to their shared Electron heritage. But for some reason VS Code feels a lot snappier. Not quite Sublime levels, but perfectly usable.
int_19h 2 days ago 2 replies      
It looks like code completion is extremely basic. I tried this:

  struct Point { x: i32, y: i32 }

  fn main() {
      println!("Hello, world!");

      let pt = Point { x: 1, y: 2 };
      println!("{} {}", pt.x, pt.y);

      let v = vec![ pt ];
      let vpt = &v[0];
      println!("{} {}", vpt.x, vpt.y);
  }
And I can't get any dot-completions on vpt (but I can on pt). Which is also kinda weird, because if I hover over vpt, it does know that it is a &Point...

Even more weird is that if I add a type declaration to "let vpt" (specifying the same type that would be inferred), then completion works.

That sounds like a really basic scenario... I mean, type inference for locals is pervasive in Rust.
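For reference, the workaround described above looks something like this; the program is my own reconstruction (same shape as the snippet earlier in the comment), and the completion behavior is per the comment, not something the snippet demonstrates by itself:

```rust
struct Point { x: i32, y: i32 }

// Spelling out the type that inference would choose anyway is the
// workaround: with `: &Point` present, RLS dot-completion reportedly works.
fn first_x(v: &[Point]) -> i32 {
    let vpt: &Point = &v[0];
    vpt.x
}

fn main() {
    let v = vec![Point { x: 1, y: 2 }];
    println!("{}", first_x(&v)); // prints 1
}
```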

ericfrederich 2 days ago 4 replies      
CSDude 2 days ago 1 reply      
I tried it, and it works really nicely. If you are looking for an alternative, IntelliJ IDEA works very well once you install the Rust plugin.
alkonaut 2 days ago 0 replies      
The vscode rust support is progressing nicely and if vscode is already your go-to editor it's the obvious choice.

However if you just want to dip your toes and get going with rust with minimal fuss, I find IntelliJ community+Rust to be the best combo. Vscode+Rust is not as polished yet.

RussianCow 2 days ago 2 replies      
Does this support macro expansion of any kind? I'm currently using the plugin for IntelliJ IDEA, and it works really well aside from completely lacking support for macros, which makes its type annotations and similar features nearly useless for the project I'm working on.
rl3 2 days ago 1 reply      
Anyone here writing Rust on Windows and using WSL (Windows Subsystem for Linux) in your workflow?

I've found using WSL's "bash -c" from my Rust project's working directory in Windows to be a rather elegant way to compile and run code for a Linux target.

Theoretically it should be possible to remote debug Linux binaries in WSL from an editor in Windows, but I haven't had time to explore this yet. Both GDB and LLDB have remote debugging functionality.

demarq 2 days ago 1 reply      
Autofix doesn't seem to trigger for me, would someone confirm it's not just me?

try this in your editor

  let x = 4;
  if x = 5 {}
it should figure out you wanted x == 5.

tomc1985 2 days ago 1 reply      
I would love Rust support in Visual Studio proper...
Dowwie 2 days ago 1 reply      
Is anyone working on support in Atom?
Towards a JavaScript Binary AST yoric.github.io
306 points by Yoric  3 days ago   204 comments top 32
nfriedly 3 days ago 6 replies      
To clarify how this is not related to WebAssembly, this is for code written in JavaScript, while WASM is for code written in other languages.

It's a fairly simple optimization - it's still JavaScript, just compressed and somewhat pre-parsed.

WASM doesn't currently have built-in garbage collection, so to use it to compress/speed up/whatever JavaScript, you would have to compile an entire JavaScript Virtual Machine into WASM, which is almost certainly going to be slower than just running regular JavaScript in the browser's built-in JS engine.

(This is true for the time being, anyway. WASM should eventually support GC at which point it might make sense to compile JS to WASM in some cases.)

cabaalis 3 days ago 12 replies      
So, compiled Javascript then? "We meet again, at last. The circle is now complete."

The more I see interpreted languages being compiled for speed purposes, compiled languages being interpreted for ease-of-use purposes, desktop applications becoming subscription web applications (remember mainframe programs?), and then web applications becoming desktop applications (Electron), the more I realize that computing is closer to clothing fads than anything else. Can't wait to pick up some bellbottoms at my local Target.

apaprocki 3 days ago 3 replies      
From an alternate "not the web" viewpoint, I am interested in this because we have a desktop application that bootstraps a lot of JS for each view inside the application. There is a non-insignificant chunk of this time spent in parsing and the existing methods that engines expose (V8 in this case) for snapshotting / caching are not ideal. Given the initial reported gains, this could significantly ratchet down the parsing portion of perceived load time and provide a nice boost for such desktop apps. When presented at TC39, many wanted to see a bit more robust / scientific benchmarks to show that the gains were really there.
le-mark 3 days ago 3 replies      
Here's some perspective for where this project is coming from:

> So, a joint team from Mozilla and Facebook decided to get started working on a novel mechanism that we believe can dramatically improve the speed at which an application can start executing its JavaScript: the Binary AST.

I really like the organization of the present article, the author really answered all the questions I had, in an orderly manner. I'll use this format as a template for my own writing. Thanks!

Personally, I don't see the appeal of such a thing, and it seems unlikely all browsers would implement it. It will be interesting to see how it works out.

mannschott 2 days ago 1 reply      
This is reminiscent of the technique used by some versions of ETH Oberon to generate native code on module loading from a compressed encoding of the parse tree. Michael Franz described the technique as "Semantic-Dictionary Encoding":

SDE is a dense representation. It encodes syntactically correct source program by a succession of indices into a semantic dictionary, which in turn contains the information necessary for generating native code. The dictionary itself is not part of the SDE representation, but is constructed dynamically during the translation of a source program to SDE form, and reconstructed before (or during) the decoding process. This method bears some resemblance to commonly used data compression schemes.

See also "Code-Generation On-the-Fly: A Key to Portable Software" https://pdfs.semanticscholar.org/6acf/85e7e8eab7c9089ca1ff24...
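As a rough illustration of the flavor of such a scheme (this toy is my own sketch, not Franz's actual algorithm, which indexes grammar productions rather than raw tokens), a dictionary grown on the fly by the encoder can be rebuilt identically by the decoder:

```python
def sde_encode(tokens):
    """Encode a token stream as dictionary indices, growing the
    dictionary on the fly; a new token carries its literal exactly once."""
    dictionary, stream = {}, []
    for tok in tokens:
        if tok in dictionary:
            stream.append(dictionary[tok])             # known: index only
        else:
            dictionary[tok] = len(dictionary)
            stream.append((len(dictionary) - 1, tok))  # new: index + literal
    return stream

def sde_decode(stream):
    """Rebuild the dictionary exactly the way the encoder built it."""
    table, out = {}, []
    for item in stream:
        if isinstance(item, tuple):
            idx, tok = item
            table[idx] = tok
        else:
            tok = table[item]
        out.append(tok)
    return out

tokens = ["let", "x", "=", "x", "+", "x"]
assert sde_decode(sde_encode(tokens)) == tokens
```

Repeated tokens cost one small index each, which is where the density comes from.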

This same technique also was used by JUICE, a short-lived browser plugin for running software written in Oberon in a browser. It was presented as an alternative to Java byte code that was both more compact and easier to generate reasonable native code for.


I seem to recall that the particular implementation was quite tied to the intermediate representation of the OP2 family of Oberon compilers, making backward compatibility in the face of compiler changes challenging. I also recall a conversation with someone hacking on Oberon who indicated that he'd chosen to address (trans)portable code by the simple expedient of just compressing the source and shipping that across the wire, since the Oberon compiler was very fast even when just compiling from source.

I'm guessing the hard parts are:

(0) Support in enough browsers to make it worth using this format.

(1) Coming up with a binary format that's actually significantly faster to parse than plain text. (SDE managed this.)

(2) Designing the format to not be brittle in the face of change.

onion2k 3 days ago 2 replies      
This is a really interesting project from a browser technology point of view. It makes me wonder how much code you'd need to be deploying for this to be useful in a production environment. Admittedly I don't make particularly big applications, but I've yet to see parsing the JS code as a problem, even when there's 20MB of libraries included.
nine_k 2 days ago 1 reply      
This is what BASIC interpreters on 8-bit systems did from the very beginning. Some BASIC interpreters did not even allow you to type the keywords. Storing a trivially serialized binary form of the source code is a painfully obvious way to reduce RAM usage and improve execution speed. You can also trivially produce the human-readable source back.

It's of course not compilation (though parsing is the first thing a compiler would do, too). It's not generation of machine code, or VM bytecode. it's mere compression.
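The keyword tokenization those interpreters did can be sketched in a few lines (a toy with made-up token bytes, not any particular interpreter's actual table):

```python
# Map each BASIC keyword to a single byte >= 0x80, as 8-bit interpreters did;
# everything else stays as plain text. Detokenizing reverses the table, which
# is how the human-readable source is trivially recovered.
KEYWORDS = {"PRINT": 0x80, "GOTO": 0x81, "IF": 0x82, "THEN": 0x83}
REVERSE = {v: k for k, v in KEYWORDS.items()}

def tokenize(line):
    """Replace keywords with single token bytes; keep other words verbatim."""
    return [KEYWORDS.get(w, w) for w in line.split()]

def detokenize(tokens):
    """Recover formatted source from the token stream."""
    return " ".join(REVERSE.get(t, t) for t in tokens)

src = 'IF X THEN PRINT "HI"'
toks = tokenize(src)  # keywords become one byte each
assert detokenize(toks) == src
```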

This is great news because you got to see the source if you want, likely nicely formatted. You can also get rid of the minifiers, and thus likely see reasonable variable names in the debugger.

ryanong 3 days ago 2 replies      
This is some amazing progress, but reading this and hearing how difficult JavaScript is as a language to design around makes me wonder how many hours we have spent optimizing a language designed in 2 weeks, living with those consequences. I wish we could version our JavaScript within a tag somehow so we could slowly deprecate code. I guess that would mean browsers would have to support two languages, which would suck... this really is unfortunately the path of least resistance.

(I understand I could use Elm, cjs, Emscripten or any other transpiler, but I was thinking of hours spent on improving the JS VM.)

iainmerrick 2 days ago 1 reply      
This article says "Wouldnt it be nice if we could just make the parser faster? Unfortunately, while JS parsers have improved considerably, we are long past the point of diminishing returns."

I'm gobsmacked that parsing is such a major part of the JS startup time, compared to compiling and optimizing the code. Parsing isn't slow! Or at least it shouldn't be. How many MBs of Javascript is Facebook shipping?

Does anyone have a link to some measurements? Time spent parsing versus compilation?

vvanders 3 days ago 2 replies      
Lua has had something very similar (bytecode vs AST) via luac for a long while now. We've used it to speed up parse times in the past and it helps a ton in that area.
nikita2206 3 days ago 0 replies      
In this thread: people not understanding the difference between byte code (representing code in the form of instructions) and AST.
s3th 2 days ago 3 replies      
I'm very skeptical about the benefits of a binary JavaScript AST. The claim is that a binary AST would save on JS parsing costs. However, JS parse time is not just tokenization. For many large apps, the bottleneck in parsing is instead in actually validating that the JS code is well-formed and does not contain early errors. The binary AST format proposes to skip this step [0], which is equivalent to wrapping function bodies with eval. This would be a major semantic change to the language that should be decoupled from anything related to a binary format. So IMO the proposal conflates tokenization with changing early-error semantics. I'm skeptical the former has any benefits, and the latter should be considered on its own terms.
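The early-error point above can be made concrete. A minimal sketch, using a duplicate `let` binding as the early error: eagerly parsed code is rejected as a whole before any of it runs, while eval-style deferral only surfaces the error when the offending code actually executes.

```javascript
// Eager parse: the whole body is rejected up front, at construction.
let eagerError = null;
try {
  new Function("let x; let x;"); // duplicate binding is an early error
} catch (e) {
  eagerError = e; // thrown at parse time, before anything executed
}

// Deferred check: defining `lazy` succeeds; the same error only
// appears when the eval runs.
const lazy = () => eval("let x; let x;");
let deferredError = null;
try {
  lazy();
} catch (e) {
  deferredError = e;
}
```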

Also, there's immense value in text formats over binary formats in general, especially for open, extendable web standards. Text formats are more easily extendable as the language evolves because they typically have some amount of redundancy built in. The W3C outlines the value here (https://www.w3.org/People/Bos/DesignGuide/implementability.h...). A JS text format also means engines/interpreters/browsers are simpler to implement, and therefore that JS code has better longevity.

Finally, although WebAssembly is a different beast and a different language, it provides an escape hatch for large apps (e.g. Facebook) to go to extreme lengths in the name of speed. We don't need to complicate JavaScript when such a powerful mechanism, already tuned to perfectly complement it, exists.

[0]: https://github.com/syg/ecmascript-binary-ast/#-2-early-error...

d--b 2 days ago 3 replies      
I am puzzled by how a binary AST makes the code significantly smaller than a minified+gzipped version.

A JavaScript expression such as:

var mystuff = blah + 45

Gets minified as:

var a=b+45

And then what's costly in there is the "var " keyword and character overhead, which you'd hope would be much reduced by compression.

The AST would replace the keywords with binary tokens, but then would still contain function names and so on.

I mean, I appreciate that shipping an AST will cut an awful lot of parsing, but I don't understand why it would make such a difference in size.

Can someone comment?

kyle-rb 2 days ago 1 reply      
The linked article somehow avoids ever stating the meaning of the acronym, and I had to Google it myself, so I imagine some other people might not know: AST stands for "abstract syntax tree".
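For anyone in the same boat, here is roughly what an abstract syntax tree for the statement from a sibling comment (`var mystuff = blah + 45`) looks like, written as a plain object loosely following ESTree-style node names.

```javascript
// An AST is just the parsed structure of the code: nested nodes
// instead of a flat character string.
const ast = {
  type: "VariableDeclaration",
  kind: "var",
  declarations: [{
    type: "VariableDeclarator",
    id: { type: "Identifier", name: "mystuff" },
    init: {
      type: "BinaryExpression",
      operator: "+",
      left: { type: "Identifier", name: "blah" },
      right: { type: "Literal", value: 45 },
    },
  }],
};
```

The binary AST proposal is essentially about shipping a compact serialization of a tree like this instead of the source text.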


svat 2 days ago 0 replies      
However this technology pans out, thanks for a really well-written post. It is a model of clarity.

(And yet many people seem to have misunderstood: perhaps an example or a caricature of the binary representation might have helped make it concrete, though then there is the danger that people will start commenting about the quality of the example.)

mnarayan01 3 days ago 1 reply      
For those curious about how this would deal with Function.prototype.toSource, via https://github.com/syg/ecmascript-binary-ast#functionprototy...:

> This method would return something like "[sourceless code]".

Existenceblinks 2 days ago 0 replies      
These are random thoughts I just wrote on Twitter in the morning (UTC+7):

"I kinda think that there were no front-end languages actually. It's kinda all about web platform & browsers can't do things out of the box."

"Graphic interface shouldn't execute program on its own rather than rendering string on _platform_ which won't bother more."

"This is partly why people delegate js rendering to server. At the end of the day all script should be just WebAssembly bytecodes sent down."

"Browser should act as physical rendering object like pure monitor screen. User shouldn't have to inspect photon or write photon generators."

"SPA or PWA is just that about network request reduction, and how much string wanted to send at a time & today http/2 can help that a lot."

"Project like Drab https://github.com/grych/drab 's been doing quite well to move computation back to "server" (opposite to self-service client)"

"WebAssembly compromise (to complement js) to implement the platform. JS api and WebAssembly should be merged or united."

"VirtualDom as if it is a right way should be built-in just like DOM get constructed from html _string_ from server. All JS works must die."

"That's how WebComponent went almost a half way of fulfilling web platform. It is unfortunate js has gone far, tools are actively building on"

"I'd end this now before some thought of individualism-ruining-the-platform take over. That's not gonna be something i'd like to write (now)"


Not a complete version though. Kind of generally speaking, but I've been thinking about the details a bit. Then hours later I found this thread.

c-smile 2 days ago 2 replies      
To be honest, I (as the author of Sciter [1]) do not expect too much gain from that.

Sciter contains a source-code-to-bytecode compiler. Those bytecodes can be stored to files and loaded, bypassing the compilation phase. There is not too much gain, as JS-like grammar is pretty simple.

In principle, the original ECMA-262 grammar was so simple that you could parse it without the need of an AST: a direct parser with one-symbol lookahead that produces bytecodes is quite adequate.

JavaScript use cases require fast compilation anyway, both for source files and for eval() and similar cases like onclick="..." in markup.

[1] https://sciter.com

And JS parsers used to be damn fast indeed, until the introduction of arrow functions. Their syntax is what requires an AST.
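The arrow-function ambiguity is easy to see. Both declarations below begin with the prefix `(a, b`, and a one-symbol-lookahead parser cannot tell a parenthesized expression from an arrow parameter list until the `=>` does or does not appear.

```javascript
// Two valid statements sharing the prefix "(a, b": until "=>" shows
// up, a parser can't know which production it is reading.
const a = 1, b = 2;
const asExpression = (a, b);     // comma operator: evaluates to b
const asArrow = (a, b) => a + b; // same prefix, now a parameter list
```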

TazeTSchnitzel 3 days ago 0 replies      
It's really exciting that this would mean smaller files that parse faster, but are also more readable!
iamleppert 2 days ago 3 replies      
I'd like to see some real-world performance numbers when compared with gzip. The article is a little overzealous in its claims that simply don't add up.

My suspicion is it's going to be marginal and not worth the added complexity for what essentially is a compression technique.

This project is a prime example of incorrect optimization. Developers should be focused on loading the correct amount of JavaScript that's needed by their application, not on trying to optimize their fat JavaScript bundles. It's just lazy engineering.

mnemotronic 2 days ago 1 reply      
Yea! A whole new attack surface. A hacked AST file could cause memory corruption and other faults in the browser-side binary expander.
kevinb7 3 days ago 1 reply      
Does anyone know where the actual spec for this binary AST can be found? In particular, I'm curious about the format of each node type.
z3t4 2 days ago 0 replies      
I wish for something like evalUrl() to run code that has already been parsed "in the background" so a module loader could be implemented in userland. It would be great if scripts that are prefetched or HTTP/2-pushed could be parsed in parallel and not have to be re-parsed when running eval.
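No such evalUrl() API exists; as a hedged sketch, a userland approximation could cache the compiled function per URL so repeated runs of the same prefetched source skip recompilation (true parse-in-parallel behavior would still need engine support).

```javascript
// Cache the compiled function by URL so a second "run" of the same
// source reuses the compile instead of re-parsing.
const compiledCache = new Map();

function runCached(url, source) {
  if (!compiledCache.has(url)) {
    compiledCache.set(url, new Function(source)); // compile once
  }
  return compiledCache.get(url)(); // later runs reuse the compile
}
```

Calling it twice with the same URL returns the cached compile, which is the behavior the wished-for evalUrl() would generalize.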
malts 3 days ago 1 reply      
Yoric - the Binary AST size comparisons in the blog - was the original JavaScript already minified?
limeblack 2 days ago 1 reply      
Could the AST be made an extension of the language similar to how it works in Mathematica?
bigato 3 days ago 0 replies      
Trying to catch up with webassembly, huh?
jlebrech 3 days ago 1 reply      
With an AST you can visualise code in ways other than text, and also reformat code like go-fmt does.
megamindbrian 2 days ago 1 reply      
Can you work on webpack instead?
tolmasky 2 days ago 3 replies      
One of my main concerns with this proposal is the increasing complexity of what was once a very accessible web platform. You have this ever-increasing tooling knowledge you need to develop, and with something like this it would certainly increase, as "fast JS" would require you to know what a compiler is. Sure, a good counterpoint is that it may be incremental knowledge you can pick up, but I still think a no-work, make-everything-faster solution would be better.

I believe there exists such a no-work alternative to the first-run problem, which I attempted to explain on Twitter, but it's not really the greatest platform to do so, so I'll attempt to do so again here. Basically, given a script tag:

 <script src = "abc.com/script.js" integrity="sha256-123"></script>
A browser, such as Chrome, would kick off two requests: one to abc.com/script.js, and another to cdn.chrome.com/sha256-123/abc.com/script.js. The second request is for a pre-compiled and cached version of the script (the binary AST). If it doesn't exist yet, the CDN itself will download it, compile it, and cache it. For everyone except the first person to ever load this script, the second request returns before the time it takes for the first to finish + parse. Basically, the FIRST person to ever see this script online takes the hit for everyone, since it alerts the "compile server" of its existence; afterwards it's cached forever and fast for every other visitor on the web (that uses Chrome). (I have later expanded on this to have interesting security additions as well -- there's a way this can be done such that the browser does the first compile and saves an encrypted version on the Chrome CDN, such that Google never sees the initial script and only people with access to the initial script can decrypt it.) To clarify, this solution addresses the exact same concerns as the binary AST issue. The pros to this approach in my opinion are:

1. No extra work on the side of the developer. All the benefits described in the above article are just free without any new tooling.

2. It might actually be FASTER than the above example, since cdn.chrome.com may be way faster than wherever the user is hosting their binary AST.

3. The cdn can initially use the same sort of binary AST as the "compile result", but this gives the browser flexibility to do a full compile to JIT code instead, allowing different browsers to test different levels of compiles to cache globally.

4. This would be an excellent way to generate lots of data before deciding to create another public facing technology people have to learn - real world results have proven to be hard to predict in JS performance.

5. Much less complex to do things like dynamically assembling scripts (like for dynamic loading of SPA pages) - since the user doesn't also have to put a binary ast compiler in their pipeline: you get binary-ification for free.

The main con is that it makes browser development even harder to break into, since if this is done right it would be a large competitive advantage and requires a browser vendor to now host a cdn essentially. I don't think this is that big a deal given how hard it already is to get a new browser out there, and the advantages from getting browsers to compete on compile targets makes up for it in my opinion.

agumonkey 3 days ago 0 replies      
hehe, reminds me of emacs byte-compilation..
Laaas 3 days ago 0 replies      
Why does this guy use bits instead of bytes everywhere?
FrancoisBosun 3 days ago 1 reply      
I feel like this may become some kind of reimplementation of Java's byte code. We already have a "write once, run anywhere" system. Good luck!
Laverna A Markdown note-taking app focused on privacy laverna.cc
325 points by mcone  1 day ago   168 comments top 50
edanm 1 day ago 6 replies      
I'd really love a good Evernote alternative, but the one feature that tends not to exist is full page bookmarking / web clipping. I want to be able to clip a full page easily into the program, which will also save a copy of whatever article I happen to be reading. I really wouldn't mind (and would even love) to roll my own notes system with vim/etc. But without full page clipping, it would be a problem.

Another good thing about Evernote is the easy ability to mix in images, documents, and text.

The reasons I want to leave Evernote, btw, are:

1. I worry about their future and would rather have a more open solution.

2. Their software, at least on Mac, really, really sucks. It's slow, and has tons of incredibly ridiculous bugs that have been open for a long time. E.g. when typing in a tag, if there's a dash, it will cause a problem with the autocompletion. For someone who uses tags a lot and has a whole system based on them, having dashes cause a problem is a big deal, and the fact that it hasn't been fixed in ~ a year makes me really question their priorities.

yborg 1 day ago 3 replies      
Apart from having sync capability (via Dropbox), this in almost no way, shape, or form replicates the current capabilities of Evernote. A more accurate title would be "Laverna: An open source note-taking application." This of course will not generate many clicks, since there are dozens of things like this, many of them better-looking and more mature.
zachlatta 1 day ago 16 replies      
I've given up on using any sort of branded app for notetaking. At best it's open source and the maintainers will lose interest in a few years.

When you write things down, you're investing in your future. It's silly to use software that isn't making that same investment.

After trying Evernote, wikis, org-mode, and essentially everything else I could find, I gave up and tried building my own system for notes. Plain timestamped markdown files linked together. Edited with vim and a few bash scripts, rendered with a custom deployment of Gollum. All in a git repo.

It's... wonderful. Surprisingly easy. Fast. If there's a feature I wish it had, I can write a quick bash script to implement it. If Gollum stops being maintained, I can use whatever the next best markdown renderer is. Markdown isn't going away anytime soon.

It's liberating to be in control. I find myself more eager to write things down. I'm surprised more people don't do the same.

Edit: here's what my system looks like https://imgur.com/a/nGplj

trampi 1 day ago 1 reply      
Just FYI, more than one year has passed since the last release. The commit frequency has declined significantly. I use it, but I am not sure I would recommend it in its current state. It does its job and I like it, but the future is uncertain.
mikerathbun 1 day ago 2 replies      
I am constantly looking for a good notes app. I have been a paying Evernote user for years and I really like it. The only problem is the formatting. I take a lot of pride in formatting my notes and like them to look a certain way depending on the content. Markdown is definitely the way I want to go, which Evernote has promised in the past but still hasn't delivered. That said, none of the buttons on Laverna seem to work on my Mac. Can't sign into Dropbox and can't create a notebook. Oh well.
omarish 1 day ago 0 replies      
The encryption seems very insecure. I just tried turning on encryption and it revealed my password in the URL bar. And now each time I click on a new page, it shows my password in the URL bar.


itaysk 1 day ago 6 replies      
There are so many note taking apps and yet I still can't find one I like. My requirements are simple:

- Markdown
- cross-platform with sync
- tags

I have settled on SimpleNote for now, but I'm not completely happy. Its Mac app is low quality and doesn't have markdown; it's open source but they ignore most of the issues. Bear Notes looks cool but wasn't cross-platform.

I am still looking. If this thing had phone apps (I'm on iPhone) I'd give it a go.

bharani_m 1 day ago 1 reply      
I run a minimal alternative to Evernote called EmailThis [1].

You can add the bookmarklet or browser extension. It will let you save complete articles and webpages to your email inbox. If it cannot extract useful text, EmailThis will save the page as a PDF and send it as an attachment.

No need to install apps or login to other 3rd party services.

[1] https://www.emailthis.me

mgiannopoulos 1 day ago 0 replies      
This came up on Product Hunt today as well: "Turtl lets you take notes, bookmark websites, and store documents for sensitive projects. From sharing passwords with your coworkers to tracking research on an article you're writing, Turtl keeps it all safe from everyone but you and those you share with." https://turtlapp.com/download/
ernsheong 1 day ago 2 replies      
It doesn't do web clippings though.

Incidentally, I am building https://pagedash.com to clip web pages more accurately, exactly as you saw it (via a browser extension)! Hope this helps someone.

scribu 1 day ago 1 reply      
Would be interesting to do a comparison with Standard Notes, which seems to offer the same features.
trextrex 1 day ago 0 replies      
Last I checked Laverna, they had really serious issues with losing data after every update or so. I stopped using it after encountering one of these. Looks like a lot of these issues are still open:







Edit: Formatting

kepano 1 day ago 0 replies      
Recently went through the process of evaluating every note taking tool I could find. Settled on TiddlyWiki which is slightly unintuitive at first but very well thought out once you get it customized to your needs. Fulfills most of the needs I see people requesting on this thread, i.e. flat file storage, syncable via Dropbox, markdown support, wiki structure.
devinmcgloin 1 day ago 0 replies      
I've been using Notion (https://www.notion.so) for a while and have nothing but good things to say.

- It's incredibly flexible. You can model Trello task boards in the same interface as writing or making reference notes.
- They've got a great desktop client and everything syncs offline.
- LaTeX support
- Programmable templates
- Plus there seem to be pretty neat people behind it

I switched to it 8 months ago or so and haven't really looked back.

yeasayer 1 day ago 2 replies      
One of the biggest use cases of Evernote for me is OCR notes with search. All my important checks, slips and papers go there. It seems that Laverna doesn't have this feature, so it's not an alternative for me.
tandav 1 day ago 0 replies      
I use plain .md files in a GitHub "Notes" repo. I don't even render them, just use Material Theme for Sublime Text.


macawfish 1 day ago 1 reply      
For notes, I use a text editor and Resilio Sync/Syncthing.

It's great!

jasikpark 23 hours ago 0 replies      
A ridiculously simple, but good notes app I've found is https://standardnotes.org
twodave 20 hours ago 0 replies      
I tend to use Workflowy.com for anything hierarchical/simple/listy and then Trello for anything bigger.

For instance, recently did some CTO interview screenings via phone. It was really easy to set up a Trello board with a card per candidate, drop them in the list matching their current position in the pipeline, attach a resume, recruiter notes, due dates etc. The interview itself I threw as a bulleted list into Workflowy and just crossed things off as they were covered. Took notes in notepad and uploaded to the Trello board at the end. Invited stakeholders to view the board and sent out a daily email with progress. Interviewed 8 candidates this way in a total of about 10 hours, including all the time spent prepping and scoring and communicating with the hiring team.

barking 1 day ago 0 replies      
What are the main concerns people have about using Evernote: data protection, the company going out of business, the code being closed and proprietary? I can understand all those, but sometimes it also feels like everyone (me included) expects every piece of software to be free now.

I have a free Evernote account and don't use it very much, but I find it handy for some things such as cooking recipes and walking maps. I think it would also be great for David Allen's GTD technique if I could ever be disciplined enough.

If Evernote removed the free tier I think I would pay up; the pricing for the personal plans is very reasonable. I'd probably make more use of it too. Humans don't tend to value free stuff. For someone like me, I think they'd have had a better chance of turning me into a paying customer if their model was an initial free period followed by having to pay up. But I will never pay up if I can get away with paying nothing.

ziotom78 1 day ago 0 replies      
I used to use org-mode to take down notes when I attended seminars or meetings (I'm an astrophysicist). However, a feature I missed was the ability to quickly take photos to insert into my notes, in order to capture slides or calculations/diagrams done on the blackboard.

Thus, last year I subscribed to Evernote (which provides both features), and I must say that I am extremely satisfied. Moreover, Evernote's integration with Firefox and Android allows me to quickly save web pages for later reading (this might be possible with org-mode, but not as handy as with Evernote, which requires just one tap.)

I think that Laverna is interesting for users like me: it provides a web app with a nice interface, it implements the first feature I need (easy photo taking), and if an Android app really is on the way, integration with Android services might allow saving web pages in Laverna with one tap, like Evernote.

dade_ 1 day ago 2 replies      
I recently tried it again; Laverna is very buggy, and I just received an email from Dropbox noting that the API they used is being deprecated. The app isn't really native, just a Chromium window running a local web app.

So if it needs to be mobile, I am using OneNote, but have to use the web app on Linux, and search is useless in the web app. So for desktop only, I use Zim. Cross-platform, lots of plugins, stores everything in the file system with markdown. I haven't been able to get SVG to render in the notes though, which would be awesome; then I could just edit my diagrams and pictures with Inkscape. I can read the notes on mobile devices as they are just markdown, but a mobile app really is needed.

bunkydoo 1 day ago 2 replies      
I'm still using paper over here, nothing seems to do it for me on the computer. Paper is great, and paper is king.
LiweiZ 22 hours ago 0 replies      
Notes are data. We need ways to input and store them fully under the user's control. And we need a much better way to get insight from our own notes.
perilunar 1 day ago 0 replies      
I gave up on Evernote after experiencing syncing problems. Now I just use the default MacOS and iOS notes.app. Seems kind of boring but it actually works really well, and is nicely minimal. Also its free, pre-installed, no sync problems, and has web access via iCloud when I need it.

But for the love of god, why did they make the link colour orange instead of the default blue? And why can't it be changed via preferences? They had one job...

tardygrad 1 day ago 0 replies      
I'm going to give this a go.

Self hosted Dokuwiki has been my note taking tool of choice, usable on multiple devices, easy to backup, easy to export notes but markdown sounds good.

Is it possible to share notes or make notes public?

tomerbd 1 day ago 1 reply      
I found Google Keep to be the best for small notes without too much categorization, and Google Spreadsheets to be the best for larger-scoped note taking due to the tabs.
anta40 1 day ago 0 replies      
I still use Evernote on my Android phone (Galaxy Note 4), mainly because of handwriting support.

For simplistic notes, well Google Keep is enough.

Still looking for alternatives :)

paulsutter 1 day ago 1 reply      
What I really really want is a tool that keeps notes in github, therefore an open/standard/robust way to do offline, merge changes, resolve conflicts.

I've lost so much data from Evernote's atrocious conflict resolution that it's my central concern. I don't see any mention of that here.

Use case: edit notes on a plane on laptop, edit notes on phone after landing, sometime later use laptop again and zap.

djhworld 1 day ago 0 replies      
org-mode works well enough for me. It's a bit awkward at first and requires you to remember a lot of key combinations and things, but it does the job.

It doesn't work so well across devices (especially mobile), so I tend to carry around a small notebook, and then when I'm back at my computer I type anything useful that I'd captured in my notebook into org mode.

Sometimes I just take a picture of my notes in my notebook and then use the inlineimages feature to display the image inline, that works pretty well too although there's no OCR.

It seems to work OK.

mavci 17 hours ago 0 replies      
I exported my content and found it in plain text. I think exported content should be encrypted too.
snez 21 hours ago 1 reply      
Like what's wrong with the macOS Notes app?
chairmanwow 1 day ago 0 replies      
The online editor on Android with Firefox is essentially unusable. It feels almost like Laverna is trying to do autocorrect at the same time as my keyboard. Characters appear and disappear as I type, which makes for a really confusing UX.
pacomerh 20 hours ago 0 replies      
Bear notes is free if you don't sync your devices and it supports markdown well. Very clean app.
Skunkleton 20 hours ago 1 reply      
We have had this application for a long time. It is called a text editor or a word processor.
jusujusu 1 day ago 0 replies      
Title is making me post this: http://elephant.mine.nu

Cons: no mobile app, no OCR for docs, no web clipper

devalnor 1 day ago 0 replies      
I'm happy with Inkdrop https://www.inkdrop.info/
pacomerh 1 day ago 0 replies      
I'm very happy with Bear notes. Will give this a shot though.
nishs 1 day ago 0 replies      
The macOS and web application don't look like the screenshot on the landing page. Is there a theme that needs to be configured separately?
pookeh 1 day ago 0 replies      
I have been using Trello. To save a screenshot, I Ctrl+Cmd+Shift+4 the screen, and paste directly into a card. It's fast.
znpy 1 day ago 0 replies      
Very cool!

Just wanted to say that the Notes app in Nextcloud is very handy too!

Actually, if Nextcloud could embed this Laverna somehow... that would be awesome.

ehudla 1 day ago 0 replies      
The two must-haves for me are integration with org-mode (as was mentioned in the thread) and with Zotero.
5_minutes 1 day ago 0 replies      
I love Evernote for its ocr capabilities, so I can go paperless. But it seems this is not implemented here.
4010dell 1 day ago 0 replies      
I like it. Better than Evernote. Evernote was like trying to win a marathon running backwards.
Brajeshwar 1 day ago 1 reply      
"laverna.app can't be opened because it is from an unidentified developer."


nodomain 1 day ago 0 replies      
Last release 1 year ago... seems dead, right?
krisives 18 hours ago 0 replies      
Download no thanks
lewisl9029 1 day ago 0 replies      
It's really cool to see another app using remoteStorage for sync! I built Toc Messenger a few years ago on top of remoteStorage for sync as well, and it was a pleasure to work with (https://github.com/lewisl9029/toc, the actual app is no longer functioning since I took down the seed server quite a while ago). Unfortunately, it seems like the technology hasn't gained much traction since I last worked with it. The only 2 hosts listed on their wiki that offer hosted remoteStorage are the same that I saw two years ago: https://wiki.remotestorage.io/Servers

The other alternative sync method offered is Dropbox, and if it's also using the remoteStorage library as the interface as I'm assuming, it would have to depend on their Datastore API, which has been deprecated for more than a year now AFAIK (https://blogs.dropbox.com/developers/2015/04/deprecating-the...). Is that aspect of the app still functional? If anyone knows any other user-provided data storage APIs like Dropbox Datastore or remoteStorage that's more actively developed and supported, I'd love to hear about them.

The concept of apps built on user-provided and user-controlled data-sources, envisioned by projects like remoteStorage and Solid (https://solid.mit.edu/), has always been immensely appealing to me. If users truly controlled their data, and only granted apps access to the data they need to function (instead of depending on each individual app to host user data in their own locked-off silos), then switching to a different app would be a simple matter of granting another app access to the same pieces of data. Lock-in would no longer be a thing!

Imagine that! We could have a healthy and highly competitive app ecosystem where users choose apps by their own merit instead of by the size of their moat built on nothing but network effects. Newcomers could unseat incumbents by simply providing a better product that users want to switch to. Like a true free-market meritocracy!

Sadly, this is a distant dream because both newcomers and incumbents today realize the massive competitive advantage lock-in and network effects afford them. Incumbents will never give up their moat and allow the possibility of interop without a fight, and newcomers all end up racing to build up their own walled-off data silos because they have ambitions to become an incumbent enjoying a moat of their own one day. Even products that are built on top of open protocols and allow non-trivial interop tend to eventually go down the path of embrace, extend, extinguish, once they reach any significant scale.

I'm starting to think strong legislation around data-portability and ownership may be the only way a future like this could stand to exist, but the incumbents of today and their lobbying budgets will never let that happen.

loomer 1 day ago 0 replies      
>Laverna for android is coming soon

I'd probably start using it right now if it was already available for Android.

rileytg 1 day ago 0 replies      
While the demo worked well, under the hood it looks like a somewhat aging codebase.
YouTube admits 'wrong call' over deletion of Syrian war crime videos middleeasteye.net
239 points by jacobr  2 days ago   138 comments top 15
alexandercrohde 2 days ago 7 replies      
I think YouTube needs to consider backing off from regulating political content.

The fact is politics and morality are inherently intermingled. One can use words like extremist, but sometimes the extremists are the "correct" ones (like our founding fathers who orchestrated a revolution). How could any system consistently categorize "appropriate" videos without making moral judgements?

itaris 2 days ago 7 replies      
I'm as much a proponent of automation as anyone else. But I think right now Google is trying to do something way too hard. By looking for "extremist" material, they are basically trying to determine the intention of a video. How can you expect an AI to do that?
molszanski 2 days ago 1 reply      
Let's look at the bigger picture. First, in March some newspapers find an extremist video. It has ~14 views and YT advertising all over it. They make a big deal out of it. As a result YouTube loses ad clients and tons of money.

Then, as a response, they make an algorithm. They don't want people to call them a "terrorist platform" ever again. Hence they take down the videos.

Now, this algorithm is hurting bystanders. IMO the real problem is the public and business reaction to the initial event.

And this piece of news is an inevitable consequence.

RandVal30142 2 days ago 0 replies      
Something people need to keep in mind when parsing this story is that many of the affected channels were not about militancy; they were local media outlets. Local outlets that only gained historical note due to what they documented as it was unfolding.

In Syria outlets like Sham News Network have posted thousands upon thousands of clips. Everything from stories on civilian infrastructure under war, spots on mental health, live broadcasts of demonstrations.


Including documenting attacks as they happen and after they have happened. Some of the affected accounts were ones that documented the regime's early chemical weapons attacks. These videos are literally cited in investigations.

All that is needed to get thousands upon thousands of hours of documentation going back half a decade deleted is three strikes.

Liveleak is not a good host for such outlets because it is not what these media outlets are about. Liveleak themselves delete content as well so even if the outlets fit the community it would not be a 'fix.'

jimmy2020 2 days ago 0 replies      
I really don't know how to describe my feelings as a Syrian, knowing that some of the most important evidence of the regime's crimes was deleted because of a wrong call. And it's really confusing how an algorithm gets confused between what is obviously ISIS propaganda and a family buried under the rubble, and this statement makes things even worse. Mistakenly? Because there are so many videos? Just imagine this happening to any celebrity's channel. Would YouTube issue the same statement? I don't think so.
ezoe 2 days ago 0 replies      
What I don't like about these web-giant services is that getting human support requires starting social pressure like this.

If they fuck something up through automation, reaching human support is hopeless unless you have a very influential SNS presence or something.

tdurden 2 days ago 2 replies      
Google/YouTube needs to admit defeat in this area and stop trying to censor, they are doing more harm than good.
balozi 2 days ago 2 replies      
Well, the AI did such a bang-up job sorting out the mess in the comment section that it got promoted to sorting out the videos themselves.
osteele 2 days ago 0 replies      
HN discussion of deletion event: https://news.ycombinator.com/item?id=14998429
DINKDINK 2 days ago 0 replies      
What about all the speech that's censored that doesn't have enough interest or political clout to make people aware of the injustice of its censoring?
williamle8300 2 days ago 0 replies      
Google (YouTube's parent company) already sees itself as the protector of the public's eyes and ears. They might be contrite now, but they behave like a censoring organization.
norea-armozel 2 days ago 1 reply      
I think YouTube really needs to hire more humans to review flagging of videos rather than leave it to a loose set of algorithms and swarming behavior of viewers. They assume wrongly that anyone who flags a video is honest. They should always assume the opposite and err on the side of caution. And this should also apply to any Content ID flagging. It should be the obligation of accusers to present evidence before taking content down.
pgnas 2 days ago 1 reply      
YouTube (Google) has become the EXACT thing they said they would never be.

They are evil.

miklax 2 days ago 0 replies      
Bellingcat account should be removed, I agree on that with YT.
762236 2 days ago 5 replies      
Automation is the only real solution. These types of conversations seem to always overlook how normal people don't want to watch such videos. Do you want to spend your day watching this stuff to grade them?
Wekan: An open-source Trello-like kanban wekan.github.io
302 points by mcone  3 days ago   92 comments top 11
tadfisher 3 days ago 10 replies      
If you want to do Kanban right, double down on making it possible to design actual Kanban workflows. Pretty ticket UI with checklists and GIFs must be secondary to this goal.

Things that most actual Kanban flows have that no one has built into a decent product[0]:

 - Nested columns in lanes
 - Rows for class-of-service
 - WIP limits (per lane, per column, and per class-of-service)
 - Sub-boards for meta issues
The actual content of each work item is the least important part of Kanban; it could be a hyperlink for all I care. Kanban is about managing the flow, not managing the work.

[0] Please prove me wrong if there is such a product out there!

bauerd 3 days ago 4 replies      
I thought for a second my touchpad had just broken. You might want to make the landing page look less like there's content below the fold
nsebban 3 days ago 5 replies      
While I like the idea of having open-source alternatives to popular applications, this one is a pure and simple copy of Trello. That's a bit too much, IMO.
tuukkah 3 days ago 0 replies      
Gitlab needs a better issue UI and perhaps this could be integrated.
Fej 3 days ago 4 replies      
Has anyone here had success with a personal kanban board?

Considering it for myself, even if it isn't the intended use case.

anderspitman 3 days ago 0 replies      
I think lack of an OSS alternative with a solid mobile app is the only thing keeping me on Trello at this point.
thinbeige 3 days ago 1 reply      
Trello has become so mature, has a great API, is well integrated with Zapier and hundreds of other services, AND is free (I still don't know why one should get the paid plan; even with bigger teams, the free version is totally fine) that it must be super hard for any clone or competitor to win users.
number6 3 days ago 3 replies      
Does it have authentication yet? Last time I checked there were no users, administration, or permissions.
alinspired 3 days ago 2 replies      
What's the storage backend for this app?

Also shout out to https://greggigon.github.io/my-personal-kanban/ that is a simple and offline board

onthetrain 3 days ago 1 reply      
Is it API-compatible with Trello? That would rock, being able to use Trello extensions.
yittg 3 days ago 2 replies      
What I really want to know is why the Chinese-like name: kanban ^_^
A Tutorial on Portable Makefiles nullprogram.com
300 points by signa11  19 hours ago   94 comments top 15
erlehmann_ 18 hours ago 5 replies      
An issue I have with make is that it cannot handle non-existence dependencies. DJB noted this in 2003 [1]. To quote myself on this [2]:

> Especially when using C or C++, often target files depend on nonexistent files as well, meaning that a target file should be rebuilt when a previously nonexistent file is created: If the preprocessor includes /usr/include/stdio.h because it could not find /usr/local/include/stdio.h, the creation of the latter file should trigger a rebuild.

I did some research on the topic using the repository of the game Liberation Circuit [3] and my own redo implementation [4]; it turns out that a typical project in C or C++ has lots of non-existence dependencies. How do make users handle non-existence dependencies except by always calling make clean?

[1] http://cr.yp.to/redo/honest-nonfile.html

[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[3] https://github.com/linleyh/liberation-circuit

[4] http://news.dieweltistgarnichtso.net/bin/redo-sh.html (redo-dot gives a graph of dependencies and non-existence dependencies)
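For concreteness, here is a sketch of what compiler-generated dependency output can and cannot express (the file names are hypothetical):

```make
# Output of `gcc -MM -MP main.c` for a hypothetical main.c that
# does `#include "config.h"`:
main.o: main.c config.h
config.h:

# The empty `config.h:` rule (added by -MP) keeps make from erroring
# out if the header is later deleted. But there is no way to write
# "rebuild main.o if a new config.h appears earlier on the include
# path" -- the non-existence dependency described above simply has
# no make syntax.
```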

carussell 16 hours ago 2 replies      
> Microsoft has an implementation of make called Nmake, which comes with Visual Studio. It's nearly a POSIX-compatible make, but necessarily breaks [...] Windows also lacks a Bourne shell and the standard unix tools, so all of the commands will necessarily be different.

What I've been mulling over is an implementation of make that accepts only a restricted subset of the make syntax, eliding the extensions found in either BSD or GNU make, and disallowing non-standard extensions to the commands themselves (and maybe restricted even further). In theory, a make that does this wouldn't even need to depend on a POSIX environment: it could treat the recipes not as commands but as a language. It wouldn't even take much to bootstrap this; you could use something like BusyBox as your interpreter. Call it `bake`.

Crucially, this is not another alternative to make: every bake script is a valid Makefile, which means it is make (albeit a restricted subset).

0x09 18 hours ago 5 replies      
The problem comes in as soon as you need conditionals, which is likely when attempting to build something portably. There may be some gymnastics that can be done to write around the lack of their presence in standard make, but otherwise your options are:

- Supply multiple makefiles targeting different implementations

- Bring in autotools in all its glory (at this point you are depending on an external GNU package anyway)

- Or explicitly target GNU Make, which is the default make on Linux and macOS, is very commonly used on *BSD, and is almost certainly portable to every platform your software is going to be tested and run on. The downside being that BSD users need a heads up before typing "make" to build your software. But speaking as a former FreeBSD user, this is pretty easy to figure out after your first time seeing the flood of syntax errors.
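One way projects work around the missing conditionals while staying in the portable subset is to push the varying definitions into an included fragment that a small configure step writes out. A minimal sketch (file names are illustrative; note that `include` itself is not in POSIX make as of 2017, though every major implementation supports it):

```make
# Makefile -- no ifeq/ifdef anywhere; all variation lives in config.mk,
# generated beforehand by e.g.:  ./configure   or   echo 'CC = cc' > config.mk
include config.mk

program: main.o
	$(CC) $(CFLAGS) $(LDFLAGS) -o program main.o $(EXTRA_LIBS)
```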

brian-armstrong 14 hours ago 2 replies      
Honestly, just use CMake. It is far easier to make it work cross-platform and, better yet, cross-compile. There's no good reason to write a Makefile by hand, and no large projects do it anyway
kccqzy 18 hours ago 3 replies      
No one wants to manually do dependency management in even a moderately sized project. I really haven't found an ideal way to have these -MM -MT flags integrated into Makefiles; I've tried having an awk script automatically modify the Makefile as the build is happening, but of course the updated dependencies will only work for later builds, so it's only good for updating the dependencies. Any other approaches HNers used and really liked?
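One common GNU Make pattern (so, not strictly POSIX) sidesteps editing the Makefile entirely: have the compiler write a `.d` fragment as a side effect of each compile and pull the fragments in on the next run. A sketch, assuming gcc or clang:

```make
SRCS = main.c util.c
OBJS = $(SRCS:.c=.o)

program: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes main.d alongside main.o; -MP adds phony rules for each
# header so deleting one doesn't break the build.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

# The leading '-' silences the include on a clean build, before any
# .d files exist. Dependencies are exact from the second build onward.
-include $(OBJS:.o=.d)
```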
ainar-g 17 hours ago 2 replies      
Doesn't cmake take care of most of this? Is there any reason not to use cmake on middle to large scale projects?

I am genuinely curious. I've only recently started looking at cmake, and it seems like they should generate portable Makefiles, or at least have an option to generate them.

c3d 7 hours ago 0 replies      
This article barely addresses what really causes trouble in practice, namely non-portable tools. sed, for example, has different switches on macOS and Linux. MinGW is another world.

Also check out https://github.com/c3d/build for a way to deal with several of the issues the author addresses (but not posix portability)

thetic 12 hours ago 0 replies      
> The bad news is that inference rules are not compatible with out-of-source builds. You'll need to repeat the same commands for each rule as if inference rules didn't exist. This is tedious for large projects, so you may want to have some sort of configure script, even if hand-written, to generate all this for you. This is essentially what CMake is all about. That, plus dependency management.

This isn't a case for CMake. It's a case against POSIX Make. The proposed "portability" and "robustness" of adherence to the POSIX standard are not worth hamstringing the tool. GNU Make is ubiquitous and is leaps and bounds ahead of pure Make.

majewsky 18 hours ago 1 reply      
Wait... "%.o: %.c" is nonstandard?!?
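It is: `%.o: %.c` is a GNU pattern rule. POSIX only specifies suffix-based inference rules, which spell the same idea like this:

```make
# POSIX-portable equivalent of the GNU pattern rule `%.o: %.c`
.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c -o $@ $<
```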
JepZ 17 hours ago 0 replies      
It's been a while since I wrote a makefile, but as far as I remember it was very easy to create a full-featured cmake file if the project used the layout cmake assumed (easy for new projects).

However, porting existing projects from traditional make files to cmake could be next to impossible.

kayamon 9 hours ago 0 replies      
I love that their definition of "portable" is software that runs exclusively on UNIX.
git-pull 16 hours ago 1 reply      
More nifty portable Make facts:

- For portable recursive make(1) calls, use $(MAKE). This has the added advantage that BSD systems, which can electively install GNU Make as gmake, can pass in the path to gmake to run GNU Makefiles [1]

- BSDs don't include GNU Make in the base system. The BSD ports and build systems use make extensively, in a different dialect [2]

- In addition, you will likely invoke system commands in your Makefile. These also have GNU-specific features that won't work on BSDs. So keep your commands like find, ls, etc. POSIX-compliant [3]

- Part of the reasons tools like CMake exist is to abstract not only library/header paths and compiler extensions, but also the fact POSIX shell scripting and Makefile's are quite limited.

- Not only is there a necessity to use POSIX commands and POSIX compatible Make language, but the shell scripting must also not use Bash-isms and such, since there's no guarantee the system will have Bash.

- POSIX Makefiles have no conditionals as of 2017. Here's a ticket from the issue tracker suggesting it in 2013: http://austingroupbugs.net/view.php?id=805.

- You can do nifty tricks with portable Makefiles to get around limitations. All major dialects can still run commands to grab piped information and put it into a variable. For instance, you may not have double globs across all systems, but you can use POSIX find(1) to store the file list in a variable:

 WATCH_FILES= find . -type f -not -path '*/\.*' | grep -i '.*[.]go$$' 2> /dev/null
Then access the variable:

 if command -v entr > /dev/null; then ${WATCH_FILES} | entr -c $(MAKE) test; else $(MAKE) test entr_warn; fi
I cover this in detail in my book The Tao of tmux, available for free to read online. [4]

- macOS comes with Bash, and if I remember correctly, GNU Make comes with the developer CLI tools as make.

- For file watching across platforms (including with respect for kqueue), I use entr(1) [5]. This can plop right into a Makefile. I use it to automatically rerun test suites and rebuild docs/projects. For instance https://github.com/cihai/cihai/blob/cebc197/Makefile#L16 (feel free to copy/paste, it's permissively licensed).

[1] https://www.gnu.org/software/make/manual/html_node/MAKE-Vari...

[2] https://www.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...

[3] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/fi...

[4] https://leanpub.com/the-tao-of-tmux/read#tips-and-tricks

[5] http://entrproject.org

cmm 16 hours ago 2 replies      
Where, except for Windows, is requiring GNU Make a problem?
susam 5 hours ago 0 replies      
We invoke shell commands in a Makefile, and if we are concerned about POSIX conformance in the Makefile syntax, we need to be equally concerned about POSIX conformance in the shell commands and shell scripts the Makefile invokes.

While I have not found a foolproof way to test for and prove POSIX conformance in shell scripts, I usually go through the POSIX.1-2001 documents to make sure I am limiting my code to features specified in POSIX. I test the scripts with bash, ksh, and zsh on Debian and Mac. Then I also test the scripts with dash, posh and yash on Debian. See https://github.com/susam/vimer/blob/master/Makefile for an example.

Here are some resources:

* POSIX.1-2001 (2004 edition home): http://pubs.opengroup.org/onlinepubs/009695399/

* POSIX.1-2001 (Special Built-In Utilities): http://pubs.opengroup.org/onlinepubs/009695399/idx/sbi.html

* POSIX.1-2001 (Utilities): http://pubs.opengroup.org/onlinepubs/009695399/idx/utilities...

* POSIX.1-2008 (2016 edition home): http://pubs.opengroup.org/onlinepubs/9699919799/

* POSIX.1-2008 (Special Built-In Utilities): http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3...

* POSIX.1-2008 (Utilities): http://pubs.opengroup.org/onlinepubs/9699919799/idx/utilitie...

The editions mentioned in parentheses are the editions available at the mentioned URLs at the time of posting this comment.

Here is a list of the commands specified in POSIX:

Special Built-In Utilities: break, colon, continue, dot, eval, exec, exit, export, readonly, return, set, shift, times, trap, unset

Utilities: admin, alias, ar, asa, at, awk, basename, batch, bc, bg, c99, cal, cat, cd, cflow, chgrp, chmod, chown, cksum, cmp, comm, command, compress, cp, crontab, csplit, ctags, cut, cxref, date, dd, delta, df, diff, dirname, du, echo, ed, env, ex, expand, expr, false, fc, fg, file, find, fold, fort77, fuser, gencat, get, getconf, getopts, grep, hash, head, iconv, id, ipcrm, ipcs, jobs, join, kill, lex, link, ln, locale, localedef, logger, logname, lp, ls, m4, mailx, make, man, mesg, mkdir, mkfifo, more, mv, newgrp, nice, nl, nm, nohup, od, paste, patch, pathchk, pax, pr, printf, prs, ps, pwd, qalter, qdel, qhold, qmove, qmsg, qrerun, qrls, qselect, qsig, qstat, qsub, read, renice, rm, rmdel, rmdir, sact, sccs, sed, sh, sleep, sort, split, strings, strip, stty, tabs, tail, talk, tee, test, time, touch, tput, tr, true, tsort, tty, type, ulimit, umask, unalias, uname, uncompress, unexpand, unget, uniq, unlink, uucp, uudecode, uuencode, uustat, uux, val, vi, wait, wc, what, who, write, xargs, yacc, zcat
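A minimal harness along those lines might look like this (a sketch; the shell list and the test script body are placeholders to adapt):

```shell
#!/bin/sh
# Run a small POSIX script under every shell we can find, to catch
# accidental bash-isms early.
script=$(mktemp) || exit 1
cat > "$script" <<'EOF'
i=0
while [ "$i" -lt 3 ]; do i=$((i + 1)); done
printf '%s\n' "$i"
EOF

for shell in sh dash bash ksh zsh yash posh; do
    command -v "$shell" > /dev/null 2>&1 || continue
    if out=$("$shell" "$script" 2>&1); then
        echo "$shell: OK ($out)"
    else
        echo "$shell: FAILED: $out"
    fi
done
rm -f "$script"
```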

adamstockdill 9 hours ago 0 replies      
target [target...]: [prerequisite...]
Mastodon is big in Japan, and the reason why is uncomfortable medium.com
278 points by keehun  2 days ago   217 comments top 22
coldtea 2 days ago 13 replies      
"Uncomfortable" as in "offends my American puritan-inspired sensibilities".

"Pardon him, Theodotus: he is a barbarian, and thinks that the customs of his tribe and island are the laws of nature". George Bernard Shaw, "Caesar and Cleopatra".

(Slightly off topic: Feynman had a nice story in one of his books about how the maid in the Japanese guesthouse he stayed at walked in while he was naked and having a bath. She didn't flinch and just went about her business like nothing had happened, and he was thinking what a fuss/embarrassment it would have caused if it had happened in a hotel in the US -- when it's just an adult being naked with another adult present. It's not like everybody hasn't seen genitals before, or like it's a big deal.)

Animats 1 day ago 3 replies      
At last, something that could potentially challenge Facebook's world domination. Somebody gets a federated social network running with a substantial user base, and it runs into this.

The US position on child pornography comes from the Meese Report during the Reagan administration.[1] The Reagan administration wanted to crack down on pornography in general to cater to the religious base. But they'd run into First Amendment problems and the courts wouldn't go along. So child pornography, which barely existed at the time, was made the justification for a crackdown. By creating ambiguous laws with severe penalties for child pornography and complex recordkeeping requirements, the plan was to make it too dangerous for adult pornography to be made commercially. But the industry adapted, filling out and filing the "2257 paperwork" as required.[2] After much litigation, things settled down, porn producers kept the required records, and DoJ stopped hassling them about this.

So that's how the US got here. That's why it's such a big deal legally in the US, rather than being a branch of child labor law. Japan doesn't have the same political history.

Federated systems are stuck with the basic problem of distributed online social systems: anonymity plus wide distribution empowers assholes. That's why Facebook has a "real name" policy - it keeps the jerk level down.

[1] https://babel.hathitrust.org/cgi/pt?id=mdp.39015058809065;vi...

[2] https://en.wikipedia.org/wiki/Child_Protection_and_Obscenity...

rangibaby 2 days ago 5 replies      
I have lived in Japan since I was quite young (late 20s now) and don't see what the problem with lolicon is. It's not my thing, but if someone enjoys it that's their business, they aren't hurting anyone. That's just my gut feeling on the matter, I'm interested in hearing others' thoughts.
kstrauser 1 day ago 0 replies      
I own a Mastodon instance and love its federation options. For instance, I could decide to outright disconnect from that instance (in Mastodon speak, to "block" it) so that my users don't see it (and vice versa). I chose in this case to "silence" it, which means:

- My users can still talk to its users and see posts from people they follow.

- Posts from that instance don't show up on my "federated timeline" (which is a timeline of all posts made by my users and by the people they follow on other instances; great way to find new interesting people).

- I don't cache any media sent from that instance. The default is to cache images locally: if a user on a tiny instance has 10,000 followers on a busy one, the busy one doesn't make the tiny instance serve up 10,000 copies of every image.

So again, my users can talk to their users just like normal, but no one on my instance sees anything unless they specifically opt in to, and any content I dislike never travels through my network or gets stored on my server. I'm happy with that arrangement.

xg15 2 days ago 2 replies      
I'm all for decentralized communication but I don't think the example of the article is particularly convincing and I wonder if the article is asking the right questions.

So the uncomfortable reason why Mastodon is so popular in Japan is that Pixiv operates a large Mastodon node which is used to share/discuss questionable images.

Discussions about lolicon aside, does any of this actually have anything to do with the detail that Mastodon supports federation?

The article states that decentralisation is important to allow different rules for different communities. However, if, e.g., Pixiv disabled federation or switched from Mastodon to something proprietary, would that change anything? Similarly, Reddit is highly centralized technically but - currently - provides freedom for each subreddit to define its own moderation rules (within the restrictions of Reddit, the company).

I feel there is a difference between "decentralisation" on the social layer and on the technical layer, and that difference should be kept in mind.

CurtMonash 2 days ago 3 replies      
Images of all sorts of criminal acts are deemed acceptable, as long as no harm is done to actual individuals during those images' creation.

I've never seen why child porn should be an exception.

That I would think poorly of somebody for enjoying certain categories of child porn is beside the point.

jancsika 2 days ago 1 reply      
> Its a constant struggle for Tor to recruit everyday users of Tor, who use the service to evade commercial surveillance.

That doesn't seem to be a struggle at all. All kinds of users leverage Tor for all kinds of reasons.

The struggle is to recruit everyday users who have the inclination, technical expertise, and rhetorical skill necessary to defend the technology against all kinds of fearmongering tactics.

There is a general lack of such people. If the same set of interests bent on defeating Tor set their sights on TCP, you can bet that technologists would be struggling to find ways to defend it that could resonate with the general public.

klondike_ 2 days ago 0 replies      
This really shows the advantages to a federated social network. People have all sorts of sensibilities about what is acceptable content, and a one-size-fits-all moderation approach like on Twitter will never work for everybody.
SCdF 2 days ago 2 replies      
On the topic of Mastodon, I wonder if the reason it hasn't caught on so much (outside of this use case) is precisely because it's federated.

When a new social network comes along, I often sign up ASAP just to try to grab SCdF, because I'm a human and vain. I will usually give it a bit of a crack once I've done that, but the need to squat my username is a big (and I realise, stupid) driver for me.

I've known about Mastodon for awhile now, but I don't feel any pressure to sign up and check it out because there is no danger of someone else taking my username. Worst case I could just host my own instance against my domain.

emodendroket 2 days ago 0 replies      
Lolicon can also refer to live action stuff where the model is of age but looks younger. Also, the rules on this stuff in the US are quite murky and vary by state, rather than being simply illegal across the board as this article wants to suggest.
bryanlarsen 2 days ago 0 replies      
Porn is too ubiquitous and accepted on the common web to really drive technologies the way it used to.

For example, bittorrent started with porn, but that's not what drove its growth or made it successful. If the credit card companies didn't allow porn transactions on their networks, bitcoin would probably be much larger today. Tor is a similar story, I assume.

nihonde 2 days ago 0 replies      
Saying something with a few hundred thousand users is "big in Japan" is a stretch, at best. There are 130MM+ people in Japan.

I mean, I have an iOS app that has about that many MAU, and I consider it to be basically a failure.

codedokode 1 day ago 0 replies      
> lolicon drawings are prohibited

> gory, bloody and violent pictures are allowed

They must have something wrong with their head.

SCdF 2 days ago 2 replies      
The big surprise to me is that Deviant Art is supposed to be about photography!?
ygaf 1 day ago 0 replies      
>Its a constant struggle for Tor to recruit everyday users of Tor, who use the service to evade commercial surveillance. Those users provide valuable cover traffic, making it harder to identify whistleblowers who use the service, and political air cover for those who would seek to ban the tool so they can combat child pornography and other illegal content.

Wait - I thought people weren't meant to use Tor (thus its bandwidth) if they didn't need it. Or are they recruiting not just any people, but those who will contrive to browse all day / not download heavily?

fundabulousrIII 2 days ago 0 replies      
Thought they were talking about the band and the decibel level.
Eridrus 2 days ago 0 replies      
Most people are on Twitter because of network effects.

Twitter made this a non-issue for lolicon users by banning them, but it's also interesting to note that it sprang up due to support from an existing website.

Most people (myself included) who are dissatisfied with aspects of Twitter are not motivated enough to try to fix them.

mirimir 2 days ago 3 replies      
Well, it's not just pictures.

> After the enforcement, there will still be high school girls out there who are going to want to earn pocket money, and the men who target these girls wont disappear, either, said an official from the Metropolitan Police Department.

> The police come inside, so there are no more real JK girls at the shop. Most of the business is being arranged over the internet, through enko (compensated dating) services.


Global Internet morality is unworkable.

reustle 2 days ago 2 replies      
I was expecting to read about the heavy metal band, Mastodon.
coldtea 2 days ago 2 replies      
amelius 2 days ago 1 reply      
Sounds similar to the story of BetaMax versus VHS.

Edit: sorry for the brevity, pfooti below explains it well.

whipoodle 2 days ago 1 reply      
Child porn and Nazi stuff have long been really bright lines in user content. Recent events have revealed more acceptance of Nazis and adjacent groups in our society than previously thought, so I guess I could see the taboo against child-porn easing up too. Very sad and scary.
GitHub CEO Chris Wanstrath To Step Down After Finding Replacement forbes.com
262 points by ahmedfromtunis  3 days ago   49 comments top 13
andygcook 3 days ago 5 replies      
Random story about Chris...

I saw him speak at a startup event in 2010 at MIT called Startup Bootcamp. It was probably my first startup-related conference, and he was the first talk in the kickoff slot at 9am. He gave a great talk recapping the origin of GitHub and how it grew out of another project called FamSpam, a social network for families.

After the talk I had to run to the restroom and happened to run into Chris out in the entryway. I introduced myself and we started chatting. As we were talking, people started walking into the event late. They saw us standing in the entrance and started asking questions about where to go.

Instead of deferring responsibility to someone working at the event, Chris sat down at the empty welcome table and started checking people in by giving them schedules and helping them create name tags. We ended up checking in a few dozen people together while we talked more. No one knew who Chris was when they walked in, and just assumed he was a member of the event staff. I think had they known he was the co-founder of GitHub they probably would have paid more attention to him.

I ended up sending him a t-shirt and he took the time to shoot me back an email saying thanks. The subject line was "Dude" and the text was "Got the shirt. It's so awesome. You rock. Tell your brother yo, too!"

Anyways, I just thought it was kind of cool he took it upon himself to help out with checking people in at the event even though he had volunteered to travel all the way to Boston to speak for free to help out young, aspiring entrepreneurs by sharing his learnings. It always kind of stuck with me that you need to stay down to earth and pay it forward no matter how successful you get.

matt4077 3 days ago 0 replies      
GitHub is among the best things that ever happened to OSS. Compared to anything that came before, it is a pleasure to browse, it is intuitive, and it has managed to corral millions of people with vastly different backgrounds into a golden age of OSS productivity.

In the 10+ years before GitHub, I never even tried to contribute code: each project had its own workflow, and sending an email somehow felt intimidating. Today, spending an hour here or there to improve it slightly has almost become a guilty pleasure.

So, I guess what I'm saying is: Thank you!

forgingahead 3 days ago 2 replies      
Forbes is ad-infested hell, here is an Outline link:


jdorfman 3 days ago 0 replies      
When I was at GitHub Satellite last year in Amsterdam, I saw Chris walk in to the venue and look around at the amazing production and smile. You could tell how proud he was of his team and the brand he helped create. I am glad to see he is staying with the company, I'm sure the new CEO will need his advice from time to time to keep GitHub great for the next 10 years.
DanHulton 3 days ago 0 replies      
Why the title change? As far as I can tell, it's factually incorrect, as well.

Wanstrath is planning on stepping down and hasn't stepped down yet.

geerlingguy 3 days ago 8 replies      
Any other way of viewing this story? On my iPad with Focus, I just get a blurred out screen when I visit Forbes.com now. I remember it used to show a 'please turn off your ad blocker' dismissible splash screen, but that seems to not be the case any more.
tdumitrescu 3 days ago 0 replies      
I've never met the guy but have a ton of respect for his work - his open source projects like Resque and pjax were awesome for their time. I imagine GitHub has benefited a lot from having real coders at the helm for so long.
nodesocket 3 days ago 0 replies      
> Wanstrath plans to focus on product strategy and the GitHub community after stepping down from the CEO role, working directly on products and meeting with customers.

Just a theory, but perhaps they're bringing in a new professional CEO for an IPO?

jbrooksuk 3 days ago 0 replies      
> GitHub may seek to become more of a marketplace that can help developers show off their work and take on additional projects, with GitHub taking a portion as a fee, says Sequoia investor Jim Goetz.

They already have a Marketplace offering.

grandalf 3 days ago 0 replies      
Chris is one of the few well-known developers who conveys a deep love of software engineering. Looking forward to reading some of the code he writes in the coming months.
ShirsenduK 3 days ago 0 replies      
The title is misleading. He plans to, but hasn't!
amgin3 3 days ago 1 reply      
PHP_THROW_AWAY1 3 days ago 1 reply      
Wrong title
Flutter A mobile app SDK for iOS and Android flutter.io
273 points by Mayzie  2 days ago   97 comments top 16
thinbeige 1 day ago 3 replies      
Nowadays the issue with app development is not only having two OSes to develop for (and I doubt that Flutter is a real help here); the much bigger problem is user acquisition.

User acquisition has gotten so insanely expensive for apps that there are few to no business models where you can justify or break even on the user acquisition costs.

PascalW 2 days ago 7 replies      
This looks pretty neat, Dart is a nice language.

Flutter looks pretty different from React Native on one side and Cordova/webview-based frameworks on the other. Flutter is not based on webviews, but it doesn't use the native widgets either; instead it renders its own custom widgets.

To me, this is a little weird. One of the downsides of webview-based apps is that it's harder to align with the native OS look and feel. React Native solves this problem, but Flutter clearly shares it.

sathis 1 day ago 3 replies      
The real downside of using flutter is that you can't embed (or inline) any native widgets like video or maps.
JamesSwift 1 day ago 1 reply      
> We test on a variety of low-end to high-end phones (excluding tablets) but we don't yet have an official device compatibility guarantee. We do not offer support for tablets or have tablet-aware layouts.

That's a pretty serious, and surprising, limitation.

ziggzagg 2 days ago 5 replies      
Why is it that Flutter does not have a web target? Everything about it is nice and fast; it's a shame that after building cross-platform mobile apps, you'd have to start the web app from scratch using another platform.
grey-area 2 days ago 2 replies      
Anyone using this and have experiences to report? I'm thinking of using it for a project soon. Specifically, how does it compare with building two native apps for Android and iOS? How was Dart as a language, and how are the bindings to the different native SDKs? What problems did you encounter when building apps on both platforms?
victor106 2 days ago 0 replies      
Xamarin.Forms is another option to consider in this space.
devdoomari 1 day ago 1 reply      
I'll jump to Flutter when:

1) Scala supports a Dart backend (just my preference)

2) Flutter solves 'calling native libraries/SDKs' better (the current 'message passing' seems so weak - I want to do video processing etc.)

but for other use cases, flutter seems nice (for 90% of app use cases?)

zanalyzer 1 day ago 0 replies      
Flutter is also the name of a company doing vision based gesture UI that Google bought in 2013 and hasn't been heard of since.


mwcampbell 1 day ago 1 reply      
The FAQ says Flutter has basic accessibility support. I wonder what's missing. If there's a Flutter-based app on the iOS App Store that uses some non-trivial widgets, I'd like to try it out with VoiceOver.
rhubarbcustard 1 day ago 1 reply      
Do you think this is a better option than Apache Cordova? I've been starting to look at Cordova to build some pretty simple apps for business applications.

Does anyone have an opinion on whether Flutter would be a better choice? Why?

I also looked at Xamarin but that seems a little in-depth for what I need, which is basically some data-input screens (using standard Web-style controls) and then to upload the data to an API.

natch 2 days ago 2 replies      
From the FAQ:

>We are aware of apps built with Flutter that have been reviewed and released via the App Store.

Which apps? I'd like to try them out and see how they look and feel.

mk89 1 day ago 0 replies      
Looks really promising!
tomerbd 1 day ago 1 reply      
If it supported the web as well, I would have clicked the link and checked it out.
0xbear 1 day ago 2 replies      
Stop trying to make Dart happen. It's not going to happen.
Poland's oldest university denies Google's right to patent Polish coding concept pap.pl
307 points by Jerry2  22 hours ago   78 comments top 18
CalChris 21 hours ago 3 replies      
I don't understand how Google and its employees can claim to be the original inventors here.

Each inventor must sign an oath or declaration that includes certain statements required by law and the USPTO rules, including the statement that he or she believes himself or herself to be the original inventor or an original joint inventor of a claimed invention in the application and the statement that the application was made or authorized to be made by him or her.


How is Google+Co the original inventor?

nxc18 21 hours ago 2 replies      
Wow, fuck Google. They really should consider re-adopting "Don't be evil", for PR purposes at the very least.

(This isn't to say other companies don't pull the same shit; fuck them all just as much)

wmu 20 hours ago 0 replies      
Side note: when I was preparing biographies of Abraham Lempel and Jacob Ziv (the inventors of LZ77 and LZ78), I read an interview with Lempel. He was asked why they hadn't patented their algorithms, and he replied along these lines: we're scientists; our goal is to improve the world, not to get rich. His answer surprised me. They clearly knew the invention was remarkable and would be profitable, but deliberately made it free.
willvarfar 21 hours ago 1 reply      
(For those interested in data compression, https://encode.ru is very active. This thread covers the rANS patent problems: https://encode.ru/threads/2648-Published-rANS-patent-by-Stor... )
aaimnr 20 hours ago 0 replies      
I stumbled upon this edit war concerning Huffman Coding article on Wikipedia [1], where the ANS algorithm author (Jarek Duda) justifies his edits back in 2007 as a way to "shorten the delay for its [ANS] current wide use in modern compressors, leading to unimaginable world-wide energy and time savings thanks to up to 30x speedup."

Sounds dramatic, but today it seems like he had a point. The other guy (guarding Wikipedia against self promotion) has a point too, though.

[1] https://en.wikipedia.org/wiki/Talk%3AHuffman_coding

alecco 16 hours ago 2 replies      
#3 168 points 7h IOCCC Flight Simulator

#72 280 points 6h Poland's oldest university denies Google's right to patent Polish coding concept

(had to scroll to middle of 3rd page)

(and it's #1 on Algolia 24hs top)

Makes sense, perfectly explainable.

woranl 21 hours ago 1 reply      
Today's Google is a sugar coated evil corporation. "Don't be evil"... pathetic.
agsamek 19 hours ago 1 reply      
This post had 251 points within two hours. It was the no. 1 post for some time, and now it has been downgraded to 42nd position in the list, 2 hours after posting with 251 points. How is that possible?
Cpoll 20 hours ago 2 replies      
Can anyone explain Google's rationale here?

As I understand the US patent system, patent trolls can and do make these sorts of patent filings all the time, and the legitimacy doesn't matter, because their victims can't afford to defend themselves in court.

Isn't it irrational not to file patents like these?

Or is Google planning to use this patent "offensively?"

informatimago 22 hours ago 0 replies      
Google, the universal evil company.

(That's where you realise emojis lack a pinky finger, that could become google's logo).

RandomInteger4 20 hours ago 0 replies      
I don't understand how companies can be so bold as to file for patents on things that are already in industry use by more than the filer of the patent.
654wak654 21 hours ago 1 reply      
Does the article mean ENcoding and not just coding?
userbinator 20 hours ago 0 replies      
It's interesting that arithmetic compression and its variants seem to be a favourite of those looking for something to patent. From the description of ANS, it looks very similar to the QM/Q-coder for JBIG/2, JPEG, and JPEG2000, which was patented by IBM a long time ago (since expired.)
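For readers who haven't seen ANS before, the range variant (rANS) at the heart of the patent dispute can be sketched in a few lines. This is a toy of my own, not anyone's production coder: the symbols, frequencies and message are invented for the demo, and a real implementation renormalizes the state into a byte stream instead of letting it grow as a Python big integer.

```python
# Toy rANS (range Asymmetric Numeral Systems) coder.
FREQS = {"a": 3, "b": 1}           # made-up symbol frequencies
M = sum(FREQS.values())            # total frequency (here 4)

CDF, acc = {}, 0                   # cumulative frequency table
for sym in sorted(FREQS):
    CDF[sym] = acc
    acc += FREQS[sym]

def encode(message):
    x = 1                          # coder state, starts at 1
    for s in message:
        f = FREQS[s]
        # Push symbol s onto the state: frequent symbols grow x slowly.
        x = (x // f) * M + CDF[s] + (x % f)
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        r = x % M                  # the "slot" x mod M identifies the symbol
        s = next(t for t in FREQS if CDF[t] <= r < CDF[t] + FREQS[t])
        # Pop the symbol back off the state.
        x = FREQS[s] * (x // M) + r - CDF[s]
        out.append(s)
    return "".join(reversed(out))  # rANS decodes last-in, first-out

msg = "aababa"
assert decode(encode(msg), len(msg)) == msg
```

The big speedups mentioned elsewhere in this thread come from table-driven variants (tANS) and renormalization tricks, not from this arithmetic as written; this only shows the core push/pop symmetry.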
kuschku 21 hours ago 0 replies      
This is related to the ANS patent of Google, which was previously discussed at https://news.ycombinator.com/item?id=14751977
master_yoda_1 21 hours ago 1 reply      
Someone should stop Google's monopoly in computer science and AI. Otherwise it's going to be dangerous.
mirekrusin 19 hours ago 1 reply      
Why has this news item, posted 2 hours ago, slid from the front page to 40th position in about 2 minutes? It's got 251 points and 52 comments, which is way more than anything on the front page.
Sylphine 21 hours ago 2 replies      
e-beach 20 hours ago 1 reply      
Sorry, but I wouldn't trust an article written by the Polish state media. The title of the article, labeling the idea a "Polish coding concept", clearly presupposes that Google's claim was baseless.
Introducing WAL-G: Faster Disaster Recovery for Postgres citusdata.com
214 points by craigkerstiens  2 days ago   63 comments top 10
drob 2 days ago 2 replies      
This is great. Can't wait to be using it.

We've been using WAL-E for years and this looks like a big improvement. The steady, high throughput is a big deal: our prod base backups take 36 hours to restore, so if the recovery speed improvements are as advertised, that's a big win. In the kind of situation in which we'd be using these, the difference between 9 hours and 36 hours is major.

Also, the quality of life improvements are great. Despite deploying WAL-E for years, we _still_ have problems with python, pip, dependencies, etc, so the switch to go is a welcome one. The backup_label issue has bitten us a half dozen times, and every time it's very scary for whoever is on-call. (The right thing to do is to rm a file in the database's main folder, so it's appropriately terrifying.) So switching to the new non-exclusive backups will also be great.

We're on 9.5 at the moment but will be upgrading to 10 after it comes out. Looking forward to testing this out. Awesome work!

kafkes 2 days ago 9 replies      
Hello everyone, I'm the primary author for WAL-G and would be happy to answer any questions.
sehrope 2 days ago 2 replies      
I've used WAL-E (the predecessor of this) for backing up Postgres's DB for years and it's been a very pleasant experience. From what I've read so far this looks like it's superior in every way. Lower resource usage, faster operation, and the switch to Go for WAL-G (v.s. Python for WAL-E) means no more mucking with Python versions either.

Great job to everybody that's working on this. I'm looking forward to trying it out.

upbeatlinux 2 days ago 1 reply      
Wow, great work! I am definitely going to test this out over the weekend. However AFAICT the `aws.Config` approach breaks certain backwards compatibility w/how wal-e handles credentials. Also wal-g does not currently support encryption. FWIW, I would love to simply drop-in wal-g without having to make any configuration changes.
jfkw 2 days ago 1 reply      
Will WAL-G eventually support the same archive targets as WAL-E (S3 and work-alikes, Azure Blob Store, Google Storage, Swift, File System)?
craigkerstiens 2 days ago 0 replies      
For those interested in the repo directly to give it a try you can find it here: https://github.com/wal-g/wal-g
jarym 2 days ago 1 reply      
"WAL-E compresses using lzop as a separate process, as well as the command cat to prevent disk I/O from blocking."

Good to see people sticking to the unix philosophy of doing one thing well and delegating other concerns - cat and lzop are both fine choices!
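The pattern described above — separate processes glued with a pipe so reading and compressing overlap — is easy to reproduce. A sketch of my own, not WAL-E code: `gzip` stands in for `lzop` since it is more commonly installed, and the paths are hypothetical.

```python
import subprocess

def compress_segment(src_path, dst_path):
    """Stream src_path through `cat` into a compressor.

    Each stage is its own OS process connected by a pipe, so disk
    reads and compression run concurrently and neither blocks the
    caller -- the same division of labor WAL-E gets from cat + lzop.
    """
    with open(dst_path, "wb") as dst:
        cat = subprocess.Popen(["cat", src_path], stdout=subprocess.PIPE)
        comp = subprocess.Popen(["gzip", "-c"], stdin=cat.stdout, stdout=dst)
        cat.stdout.close()  # so cat receives SIGPIPE if gzip exits early
        comp.wait()
        cat.wait()
```

Closing our handle to `cat`'s stdout after wiring the pipe is the small but important detail: it lets the upstream process die cleanly if the compressor fails, instead of hanging.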

gigatexal 2 days ago 0 replies      
I wonder where Python will end up in the next five or so years if Go keeps being chosen for concurrent or high-performance code like this.
mephitix 2 days ago 0 replies      
Fantastic intern project, and fantastic work by the intern!
X86BSD 1 day ago 0 replies      
Why would this be a better option than a simple zfs snapshot, zfs send/recv backup and recovery strategy?
Google's stance on neo-Nazis 'dangerous', says EFF bbc.co.uk
253 points by dberhane  3 days ago   360 comments top 8
sctb 3 days ago 0 replies      
corobo 3 days ago 22 replies      
While I definitely don't support the people they're booting off I do have to agree with the EFF here.

For example, "And music streaming services offered by Google, Deezer and Spotify have said they would remove music that incites violence, hatred or racism."

Now that these services have put it out there as policy, someone has to define what's violent, hateful or racist in music. Racism? Ok, nobody's really going to bat an eye at that disappearing.

Violence and hatred, though? As an off-again-on-again heavy metal listener, almost literally every track could be described as violent or hateful. That's the genre. The same could be said for other genres and their sub-genres; rap comes to mind. Is Eminem next on the chopping block?

Music was the easy example; there are other examples available for the other services (registrar, DNS, hosting, CDN) as to why making this policy is a bad idea. Now all anyone needs to do is convince someone at the corresponding target that a site is similar enough that it should be taken down.

South Park had a two-parter that addressed this exact problem [1][2]

[1] https://en.wikipedia.org/wiki/Cartoon_Wars_Part_I [2] https://en.wikipedia.org/wiki/Cartoon_Wars_Part_II

tgb 3 days ago 3 replies      
I think the hypothetical that people like me on the left need to consider is the following. Our current vice president is extremely anti-abortion. It's no stretch of the imagination to see that portion of the country growing in strength to become the dominant view in power within ten years. In their view abortion doctors are literal baby killers, and websites arguing the benefits of abortion are literally advocating the killing of babies. In their eyes, this is literally as bad as Hitler. If you set the standard at "ban everything that the populace deems to be as bad as Hitler", then today we get rid of Nazi sites and tell KKK members they can't use our gyms, but tomorrow who will be condemned? (Note that this isn't even a slippery slope argument: it's saying that who gets to define the slope changes.)

The other argument is that if Google and co have never ever bowed to political pressure to remove something except as required by law, that gives them a great argument to push back against some of the less progressive governments which they must work with. If Assad starts demanding that internet companies in Syria ban his political opponents, then Google could reply "we didn't even ban Nazis, why the hell would you expect us to ban anyone for you?"

And in case this all seems hypothetical, remember that the current US government recently requested all visitor logs for an anti-Trump website.

bedhead 3 days ago 6 replies      
It's increasingly uncomfortable to realize that a handful of tech companies are in many ways more powerful than the government. I don't like the direction anything is headed in.
meri_dian 3 days ago 2 replies      
This is how extremism spreads:

1. A Reasonable Position is expressed, in this case - 'Nazi's are very bad'. The Reasonable Position often involves an Enemy that must be stopped. Most reasonable people will agree with the Reasonable Position.

2. The Reasonable Position becomes the overriding factor in any situation that involves it. All other factors and considerations are dwarfed by it and forgotten.

3. Because the Reasonable Position comes to dominate the thinking of the Extremist - who often means well - they come to believe one can only ever be for or against the Reasonable Position. There is no room for moderate positions that try to balance the Reasonable Position with other important considerations and values - in this case, freedom of speech.

4. In order to show support for the Reasonable Position, third parties are forced to action in accordance with the world view of the Extremist. If they try to balance other considerations against the Reasonable Position, they are seen by the Extremist as sympathizing with the Enemy.

5. The fervor of extremism charges through society, trampling on other values and considerations.

apatters 3 days ago 0 replies      
It seems to me that the right to exclude certain types of speech from your privately owned platform is in itself a form of expression, and important to preserve. Where we get into trouble is when one entity obtains monopoly or near-monopoly control over a means of spreading information, and thus gains the power to tell everyone what they can and can't know.

And Google is not that far off. They have a monopoly in at least one market and the EU has already found them guilty of anti-competitive practices. The US government has not brought an anti-trust case against Google, and you could argue it's failing to do its job--the ties between Google and the US government run disturbingly deep, with Google allegedly serving as an arm of US foreign policy in many ways: https://wikileaks.org/google-is-not-what-it-seems/

Either way, the most important point is simply that monopolies are dangerous. And the best solution is to weaken them, whether through regulatory action, consumers voting with their feet, or other companies introducing competition. I think the most interesting project in this space is Searx, which allows me to aggregate results from Google and other search engines, and flip a switch to turn each engine on or off. Searx is a great step in the direction of breaking Google's monopoly and thus hindering its ability to severely limit free speech. https://github.com/asciimoo/searx

undersuit 3 days ago 2 replies      
I don't think private-sector companies have any obligation to host anything. The problem is we in the tech community have watched with only minor concern as the web grew increasingly centralized and handed the power to these companies. The Daily Stormer has no right to a domain name or search results or ad revenue; no site does. The Daily Stormer has every right to exist, but it doesn't have a right to be served fast and conveniently (no, I'm not advocating against net neutrality; any host for the Daily Stormer should treat it exactly as they treat all their other customers). I think despicable sites like the Daily Stormer have a right to exist, but I'd rather they be hosted on a personal computer with a non-static address, where every now and then the dial-up connection gets interrupted when the site admin has to call David Duke about when the next Klan rally is.
nxsynonym 3 days ago 13 replies      
>"Because internet intermediaries, especially those with few competitors, control so much online speech, the consequences of their decisions have far-reaching impacts on speech around the world."

Maybe it's just me but I think enabling hate-speech and bigotry is much worse than failing to maintain 100% neutrality.

There's nothing stopping these maniacs from starting their own intermediaries to host the content (trash) they want to peddle.

Show HN: Product Graveyard Commemorating the most memorable dead products productgraveyard.com
255 points by ndduong  3 days ago   125 comments top 30
ndduong 3 days ago 12 replies      
Hi, I'm the creator of Product Graveyard, a fun way to keep track of and commemorate our favorite products that are with us no more.

I worked on this as a side project during my summer internship at Siftery. For building the site, I used a bootstrap grid for front-end structure and node.js to help with filtering and inserting the data.

Please join in by contributing a funny story or eulogy for one of the featured products.

mmanfrin 3 days ago 6 replies      
I'm still sore about Google Reader. I haven't found another reader that has quite found the right UX to replace it.
onion2k 3 days ago 0 replies      
What I find really interesting about lists like this one is that many of the entries are really great ideas that only failed due to poor timing or bad luck or a single error. The fact that someone failed to build something huge the first time around is not evidence that copying one of the entries wouldn't work now. It's just really hard to know which idea might work if it launched today instead of two years ago.
tradersam 3 days ago 2 replies      
Funny story: Lync[1] still exists. Actually, it was updated to Skype for Business, at least on our systems at work.

I'm using it right this minute: http://imgur.com/a/qQ648


bicx 3 days ago 3 replies      
It's sobering to look at all these products and think about how developers poured thousands of hours into something that no longer exists.
mfrommil 3 days ago 3 replies      
Missing one of my all-time favorite dead products: Google Wave
wingerlang 3 days ago 1 reply      
If you had a "new dead website of the month" newsletter that went out whenever something died, I'd sign up.

I also would appreciate a gallery of screenshots for each product, to get a feel for what it was.

AndrewKemendo 3 days ago 2 replies      
I feel the worst for Meerkat. They basically had a few weeks between blowing up huge at SXSW and then getting effectively shut down by Twitter with the launch of Periscope.

No justice in this world.

akeruu 3 days ago 2 replies      
I really like the tone and the execution.

Just one small thing that bothers me: on my desktop machine, the second column is not aligned as neatly as the others (due to two-line descriptions, maybe?)

dspillett 3 days ago 0 replies      
If you are going that far back how about including the original LapLink (and intersvr in MSDOS6 that implemented similar features). Pushing files over null-modem or the parallel port equivalent for extra speed was a godsend back when proper networking was relatively rare at home (or in small offices) so floppy-net was the main alternative.

The company still exists (it was "Travelling Software", now renamed to "Laplink Software") but obviously that specific product is pretty meaningless in today's environment unless you are playing with museum-piece hardware for nostalgia/shits/giggles.

ghostly_s 3 days ago 2 replies      
I love everything about this except the name. To laypeople, "product" != "software product", and it's revealing your bias. Why don't you just call it Software Graveyard?
franciscop 3 days ago 1 reply      
From the feedback here in HN it seems clear that you need a "suggest product" button. Maybe it could even be a Disqus on the bottom on the home page, which would automatically give you up/down votes functionality (;
CM30 2 days ago 0 replies      
Congrats on making such a neat site! It's quite interesting to see all the dead products and services that (often) never quite achieved their full potential.

That said, one thing does bother me here, and I'm not sure whether it's a mistake or not.

Basically, the "all products" list doesn't seem to link to the individual pages for the closed products. In most cases that's likely fine (since I doubt you have separate pages for every single product listed), but it would be convenient to have them link to the product's page for more details when one is available.


Other than that, it looks pretty good.

arscan 3 days ago 2 replies      
Great job, this is a fun concept that is well executed.

I was going to add Geocities, but I was surprised to find out that it is still available in Japan. Anyone have insight as to why Yahoo kept it alive in that market?

hasselstrom1 3 days ago 0 replies      
Upvoted on PH as well - You did a great job mate. Well done on the UI and the concept.
daxfohl 3 days ago 0 replies      
Huh, I knew I had a zombie bitcasa account that I assumed I'd been paying for but was too lazy to cancel, and was surprised to see it on your list! I think there's a market for a product that individually curates a person's miscellaneous accounts (say it watches your bank/credit card accounts or whatever) and alerts them when fees increase or the company goes bankrupt or it looks like a zombie account (and maybe offer to close it for $10 ($30 for comcast)).
roryisok 3 days ago 1 reply      
Great site, brings back memories. A few little issues I found

1. On mobile I have to scroll past all the featured products to get to "all products". A link at the top or a hamburger menu would be great!

2. No search?

3. "all products" doesn't appear to include "featured products"? For instance Picasa and Google reader are in featured but not in all.

Other than that it's a lovely design and a good concept. Well done.

snth12oentoe0 2 days ago 1 reply      
Love the site! However, it looks like you have some apps listed on the main page, but not in the list of all apps. For example, I submitted Aperture because I couldn't find it in the "A" section of the list of all apps. But I see that it is actually there on the main page near the bottom.
srcmap 2 days ago 0 replies      
I love google desktop search (RIP 2011.)

Last year I found a Windows version of it online and found it still usable even in Windows 10. Very unsafe, I know - I did use Process Explorer + VirusTotal to check its binary signatures against 60+ scanner sites.

It's still much better/faster than the native Win10 Cortana search.

Would love to know if there's any open source clone of it.

SippinLean 2 days ago 0 replies      
>Fireworks was not a unique child. It was not different from Photoshop or Illustrator so Adobe shut it down.

That's not true: it was notably different from the two, and it was replaced by Adobe XD.

protomyth 3 days ago 0 replies      
I still miss Lotus Improv (I think it's not on the list).
warrenm 3 days ago 2 replies      
Code Warrior
tolgahanuzun 3 days ago 0 replies      
Wow, I remember the times I used LimeWire. It was a nice service compared with the alternatives on offer.
leoharsha2 3 days ago 1 reply      
They should've added Orkut. Met my first girlfriend on that platform.
unixhero 3 days ago 1 reply      
No submit button?

Ok here then:


Great peer to peer file sync tool.

Acquired by Microsoft and shut down. Slowly shaking head

quickthrower2 2 days ago 0 replies      
Mtgox? Liberty Reserve?
prabhasp 3 days ago 1 reply      
Google wave!
warrenm 3 days ago 0 replies      
Microsoft Bob
paxy 3 days ago 2 replies      
Bit premature to list Flash..
warrenm 3 days ago 1 reply      
Making Visible Watermarks More Effective googleblog.com
288 points by runesoerensen  3 days ago   96 comments top 17
FTA 3 days ago 6 replies      
This reminds me a bit of the common argument for locks: it keeps regular people honest. Watermarks are designed really to deter you from casually copying an image and pasting it on your site, unless you don't care at all about a watermark showing (e.g. a blog site with a few followers).

I'm sure a mixture of the original computer vision technique plus some smearing on the original image could wash away even these randomly perturbed watermarks.

Just with many other security matters, you will always be trying to stay one step ahead of someone attempting to circumvent protections. So really the best protection against someone removing watermarks is to file a copyright infringement against the infringing party. DMCA (fixed acronym order :]) is a powerful tool in the USA.

In either case, this was a neat article.

mseebach 3 days ago 0 replies      
I wonder if this approach can be combined with "deep photo style transfer" to make a watermark that is clearly visible to human eyes but is specifically permuted and adapted to the target image, in a way that both appears to "belong" in the image (and thus is less disruptive for the legitimate purpose of evaluating images for suitability) and destroys enough of the original image data to be impossible to remove without significantly and visibly altering the image?


molmalo 3 days ago 2 replies      
Honest question:

Recently I took a 6-week vacation. One of my cameras had a dust particle that resulted in several thousand pics having a semi-translucent, watermark-like impression.

I think that the process shown here to defeat those watermarks would be ideal to batch-correct my pictures. (As of now, I have to manually correct the ones I love and leave the others as they are). Does anyone happen to know a tool that would allow me to do something like that?

As I remember, the same thing happened at a friend's wedding. The photographer they hired had dust on one of his lenses, spoiling a lot of pictures... It would be a really nice tool for those situations if they release that code.

dtech 3 days ago 2 replies      
It seems the problem here is that the watermark is transparent, and thus still contains information about the original image.

Similar to how if you want to censor a part of an image, you should always use a single solid color because e.g. a blur can be inverted.

But I guess that would degrade the quality of the watermarked photo too much.
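The point that a transparent overlay preserves the original's information can be made concrete with the standard alpha-compositing model. A minimal sketch of my own (all values invented for illustration): if the published image is `alpha*W + (1-alpha)*I` and an attacker knows the watermark `W` and opacity `alpha`, the original `I` comes back exactly wherever `alpha < 1`.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(8, 8))    # stand-in "original" image
wm = np.zeros((8, 8))
wm[2:6, 2:6] = 1.0                          # toy watermark pattern
alpha = 0.35                                # uniform watermark opacity

# What gets published: a transparent overlay (alpha compositing).
watermarked = alpha * wm + (1 - alpha) * img

# With W and alpha known, the blend is exactly invertible:
# no information about the original was destroyed.
recovered = (watermarked - alpha * wm) / (1 - alpha)
assert np.allclose(recovered, img)
```

An opaque region (alpha = 1), by contrast, zeroes out the original underneath it, which is exactly the single-solid-color censoring advice: the division blows up and there is nothing left to recover.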

cs702 3 days ago 2 replies      
The researchers identify the watermarks to be removed by finding pixel patterns that persist across a large number of images, as shown on this animation: https://1.bp.blogspot.com/-cJwNoUxIBzM/WZTDpw3ru6I/AAAAAAAAB... . Their proposed solution is to randomly warp the watermarks.

Unfortunately, their solution could be quickly defeated with image-to-image generative adversarial convnets trained to... remove watermarks from image pairs. (That is, instead of training a model to change, say, image style or resolution, train it to remove artificially added watermarks.)
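A cartoon of the consistency idea the parent describes, using a purely additive watermark for simplicity (the paper actually works on image gradients and solves a matting problem; the sizes and values here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
wm = np.zeros((8, 8))
wm[1:4, 1:7] = 0.2                 # faint additive watermark, identical everywhere

# Many unrelated images, each stamped with the same watermark.
images = rng.uniform(0.0, 1.0, size=(2001, 8, 8))
watermarked = images + wm

# Per-pixel median across the collection: the varying image content
# (median 0.5 for uniform noise) cancels out; the constant watermark survives.
estimate = np.median(watermarked, axis=0) - 0.5
assert np.max(np.abs(estimate - wm)) < 0.1
```

This is also why the proposed countermeasure is to make the watermark not constant: warp it slightly per image and the consistent component that the median recovers is smeared out.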

ouid 3 days ago 1 reply      
Seems like it is saying the following:

We can find the watermark in images and subtract it from the image. If we distort the original watermark, but subtract the average watermark, then you will not recover the original image.


jansho 3 days ago 0 replies      
Nice. I wonder though, will the randomly placed watermarks distract the legit viewer, and affect their judgment of the images?

Example: When I go through Adobe stock photos, although I find the watermark initially annoying, I quickly learn to "unsee" it in the next photos because I know what it looks like and where it is on the photo.

With varied watermarks, I'm not sure if the same mental technique can be applied. Shrugs, I may just be overthinking it.

bwang29 3 days ago 3 replies      
I think the question is: why would we need protective watermarks at all if stock photo companies are already crawling, and sometimes phishing, for use of licensed stock photography on the web and then directly sending out a DMCA notice or a charge?

I've been in many situations where copyright owners reached out for damage fees after someone downloaded a full-res, un-watermarked photo from a free stock photo site for use in blog posts, so I'm sure the tech is all there already.

firefoxd 3 days ago 1 reply      
Sometimes the problem is that you just can't know the copyright on an image.

I'll push for this again, https://github.com/ibudiallo/imgcopyright

HTML has a lot of meta attributes; why not one for copyright?

kuschku 3 days ago 3 replies      
This is just adding a warping effect, and I'm quite sure that if this technology had already existed, then the same team at Google that did this research would have, with a similar amount of work, been able to circumvent this technique, too.

I mean, de-warping warped imagery is something that Google's image stabilization software used on YouTube can already do very well. Adapting it for this purpose should be possible.

whipoodle 3 days ago 0 replies      
Heck, I was just impressed by the clever method of removing watermarks. (I mean, if you think about machine learning more than I do, it probably doesn't seem very clever. But I thought it was.)
Ajedi32 3 days ago 2 replies      
I wonder if removing watermarks from images would be something a machine learning algorithm would be well suited for. Admittedly my understanding of machine learning is rather limited, but it seems like it'd be pretty easy to generate a large training set of watermarked and unwatermarked images for the algorithm to train on. No idea how effective it'd be though.
frandroid 3 days ago 1 reply      
Add a warped new MD5 for each watermark and they'll never be removable.
miheermunjal 3 days ago 0 replies      
Could we transform the watermark in randomized distances (X and Y) to avoid the subtraction? If the training on the pattern itself isn't as robust, the method starts to fall apart a bit more.
user5994461 3 days ago 1 reply      
Why do they use transparent watermarks????
droidist2 3 days ago 2 replies      
Didn't Google invent this watermark defeating mechanism? So they're proposing protection against an attack they created?
nxsynonym 3 days ago 4 replies      
Thinking out loud here - but would this be a good use case for blockchain technology?

The problem with visible watermarks is that they detract from the image. Nobody wants to look at photos or digital art pieces with huge ugly watermarks on them. Could blockchain tech help establish ownership in a way that would make watermarking obsolete?

edit: cool - downvotes for asking a question. Real nice guys.

Startups should not use React medium.com
291 points by nbmh  15 hours ago   136 comments top 32
pluma 3 hours ago 10 replies      
Nerds shouldn't write opinion pieces about subject domains they don't understand.

Seriously, stop this. Sometimes you just need to admit you have no idea what you're talking about and shut up.

The author honestly thinks using Preact or Inferno could protect them from patent lawsuits. Oh, wait, maybe "Facebook holds any software patents on the Virtual DOM or the React APIs" so better use Vue and Cycle.

Unless you actually know

1) which patents Facebook holds and

2) which patents are relevant to each framework/library (i.e. React and its various alternatives)

stop giving people legal advice about which library they should be using.

The cosmic irony would be if Facebook didn't hold any patents covering React to begin with but DID hold patents covering parts of Angular, Ember, Vue and Preact, over which they could sue whomever they like because Facebook never gave them a patent grant for those. Sounds far-fetched? It isn't, because we don't know which actual patents these could be and who holds them.

Or for all you know Google might sue you. Or Apple.

This isn't a discussion, this is literally just a bunch of nerds ranting on the Internet about problems they don't sufficiently understand, playing Three Blind Men with the elephant that is Software Patents.

cbhl 9 hours ago 4 replies      
It's worth noting that this "you can't sue us for violating your patents if you use our non-free open source software" clause is working as designed.

Facebook claims that if every company adopted a React-like license, software patents as we know them would basically die. It's worth noting that both Google's and Facebook's patent lawyers are generally of the opinion that software patents are a net bad, but they differ in their opinions of how to express that intent without exposing their companies to additional risk from patent trolls.

If you want to be acquired, then this is the opposite of what you want. You file patents for every part of the product you can; you audit your dependencies to avoid copyleft (AGPL and GPL) and React-like licenses, so your software can be folded into a 100% closed source product or shut down or whatever your acquirer wants.

If you run a start-up, and you're worried about the React license, you should be speaking to your own legal counsel about the best way forward.

scandox 3 hours ago 1 reply      
Trust. Trust. Trust and Trust again. My brain becomes exhausted within seconds of reading a licence. Not just because I'm lazy, but because I know that however closely I think I'm reading it, I probably won't be reading it closely enough to be 100% sure of my conclusions (viz. the differences of opinion here from people that actually have read this thing).

So what do I do? I trust certain organisations and I don't trust others.

No-one in their right mind can trust Facebook. You might as well trust the Ocean.

franciscop 1 hour ago 1 reply      
The author is making assumptions about what Open Source is and what it should or shouldn't be. While many developers would like Open Source to be about "creating communities to build better software together" (myself included), open source just means that everyone can read the code.

Different developers and companies might use Open Source for different reasons, including but not limited to: reducing QA costs, brand relevance, increasing hiring power, strategic positioning, the ideal that code should be _libre_, etc. Some companies and devs might even want several of those!

In this vein, Facebook is a private corporation, and I think we all agree its main reason for releasing React.js, or any code at all, doesn't seem to be purely idealistic. I would say strategic positioning (having the best tool in the dev world, notably against Angular) and increased hiring power rank really high among its reasons for releasing Open Source.

It is patently absurd to tell companies what to do and patronizing to tell developers what to do. Also, something that I don't see anyone arguing for or against: why can so many big companies, even ones competing with Facebook, use React.js freely and without worries? It's a point that anyone arguing against React conveniently ignores, but I'd love to hear about it.

pluma 2 hours ago 1 reply      
Aside from the validity of the article's claims about patents (see my other tirades about that) I'm not sure the point even makes sense.

React, the library, is at its core a glorified templating system. It provides plenty of escape hatches that make migration as well as inclusion of foreign UI components and libraries a breeze. It's stupidly simple to migrate away from.

If you are a high valuation startup looking to get acquired for your technology (rather than acquihired) I find it extremely unlikely your valuation hinges on your frontend code. And even if it does I find it extremely unlikely your frontend is tied so closely to React you won't be able to spend, say, 1MM replacing React with Vue or what have you (maybe at the cost of a little pizzazz).

If your frontend is animation-heavy, that likely doesn't live in React land. If your frontend is mostly static, it should be trivial to replace React as well.

If your startup is valuable, being sued over some frontend library is probably the least of your concerns. If the company looking to acquire you has enough cash in the bank to sue Facebook, they have far more than enough cash in the bank to replace React.

williamle8300 17 minutes ago 0 replies      
Facebook is like the Disney of the tech world. They want to be that trove of intellectual property.

They take free-to-use stuff (Disney is a cheap ripoff of Hans Christian Andersen's fables) and create "magical" stuff that they protect with their arsenal of lawyers.

If Facebook is able to pull the wool over our eyes this time... OSS is gonna be in a bad place in the next century just like how Disney single-handedly lobbied to change public domain laws in America.

matthewmacleod 4 hours ago 4 replies      

There are, AFAIK, no known patents on React. This means you can go ahead and sue Facebook for patent violations to your heart's content. The license they granted to you to use any of their patents applied to React (of which there are none) is terminated, and you can merrily continue using React.

If this is incorrect, and Facebook actually do hold patents on React, then all of the popular alternatives almost certainly infringe on them as well. So, the worst-case scenario is no different.

sheetjs 8 hours ago 2 replies      
There was a time when React was Apache v2! https://github.com/facebook/react/blob/3bbed150ab58a07b0c4fa... shows that license.

Has anyone seriously explored forking React from the last Apache v2 version?

jasonkester 9 hours ago 3 replies      
I really like the idea behind this license.

They want to see a world where software patents no longer exist. So they write a term into their licensing that makes it really difficult for people who do like software patents to use their stuff.

I think I will move my projects over to a similar license. The only thing I would change would be to broaden it so the grant is invalidated if your company sues anybody over any patent.

If everybody did that, maybe software patents would finally go away.

vim_wannabe 6 hours ago 1 reply      
Does this mean I should primarily use services from startups that use React, so that they won't get acquired and the service shut down?
chrisco255 8 hours ago 1 reply      
Do most software startups even have patentable technology? I'm rather curious about this. Most consumer and SaaS apps I know of are built on non-patented software so I generally question this advice.

The fridge example was a case in point of how ridiculously low the odds of any company getting into patent litigation with Facebook are. To go to battle with FB you're gonna need millions and it's going to take years. That's not a light decision.

danielrhodes 9 hours ago 1 reply      
Are companies getting asked about React in M&A due diligence, or has any lawyer recommended this? Because otherwise this post is pure clickbait.
thomyorkie 9 hours ago 2 replies      
> If all giants agreed to open source under the BSD + patents scheme, cross-adoption would grind to a halt. Why? If Google released Project X under BSD + Patents, and Amazon really liked it, rather than adopting it and losing their right to ever sue Google for patents, they would go off and build it on their own.

This seems like a reasonable argument, but it doesn't seem to have deterred several big-name companies from using React: Airbnb, Netflix, and Dropbox, for example.

afro88 1 hour ago 1 reply      
There were a lot of people in the older thread about the patents stuff saying things like "well, are you ever going to sue Facebook?? You don't need to worry about the patents stuff".

But consider this: Facebook does something disastrous, like leaking a bunch of private or financial data, and it affects you really badly. There's a class action against Facebook. Now you can't join it, because you don't wanna rewrite your app without React just to ensure Facebook can't countersue over a patent that may or may not exist on React.

codingdave 2 hours ago 0 replies      
Even if everything in this article were 100% correct, which is clearly arguable, think about how this would truly play out. Company X would sue Facebook. Facebook would sue them back for using React... and then... lawsuits would ensue. Attorneys would do their things. Cases would be argued out of court. Lots of legal stuff would be going on, and plenty of time would be had for the engineers to select and move to a new framework.

Yes, I think there are problems with the license, and I'm not using React. But do I really think those problems will result in some scenario where you have an overnight show-stopper of your business because of it? Extremely unlikely.

Startups need to stop fearing the law and start understanding it.

npad 6 hours ago 1 reply      
What happened to the "software patents are ridiculous and should never be granted" argument?

Now it seems that the same sort of people advancing the anti-patent argument are angry about FB's licence. This seems like pretty muddled thinking.

blackoil 1 hour ago 0 replies      
Someone with knowledge should bring clarity to all this noise!

My understanding is: if I sue FB over some patents, they can sue me back with any patents they may hold on React. We do not know of any such patents they own. So practically I am no safer if I use Preact/Vue or even Angular, since they may own some patents that cover those technologies.

tl;dr: Do not sue FB unless you have muscle.

hoodoof 7 hours ago 0 replies      
"So you've sewn up the market eh? Here's your check for $500million."

"But don't you want to know what technology we built it with?"


CityWanderer 3 hours ago 3 replies      
What makes the PATENTS file legally binding? If I install React via NPM/Yarn, or even as a dependency of another project, I will not see this file.

LICENSE is a pretty common convention and you could argue I should seek out this file in every one of my dependencies' dependencies - but how would I know to look for PATENTS?

Are all statements in the code base legally binding? Could they be hidden in a source file somewhere?

amelius 5 hours ago 0 replies      
I'm not using React for another reason. I don't agree with the way they treat their users (i.e., as a product).
tchaffee 2 hours ago 0 replies      
I wonder if Facebook's claims that they are doing this in order to make patents useless would have legal standing. In other words, if they become "evil" about this patent clause at some point in the future and try to enforce this in the bad ways that people are imagining might happen, then doesn't Facebook's clearly and publicly stated intentions hurt any claim they would make which goes against those intentions?
vladimir-y 5 hours ago 1 reply      
Can the title be generalized? Like don't use anything from FB?
skrebbel 7 hours ago 0 replies      
This is a badly written article full of FUD. It's written by an angry backend engineer, not a lawyer, and it shows.

He goes from this:

> The instant you sue Facebook, your patent rights for React (and any other Facebook open source technology you happen to use) are automatically revoked.

To this:

> If you use React, you cannot go against Facebook for any patent they hold. Full period.

"Full period", really? Because the first does not imply the second. This is not how patent law works.

Now, I'm not a lawyer either, but broad assertions like these should tell you that there's emotion at work here, not reason. In his fourth update, he made a list of companies that add something about patents to their open source licenses, implying that somehow that proves something.

So the thing that people confuse here is patents and copyrights. The BSD license grants you the right to use works copyrighted by Facebook people and contributors. The patents clause, further, promises that should Facebook hold any patents that cover the OSS, they won't use them against you, unless you sue them first.

There is the whole idea floating around the internet that a BSD license somehow ensures that nobody will sue you for patent infringement. I really don't understand where this comes from. Hell, Android is Apache Licensed (which includes a patent grant) and still anyone who makes an Android phone has to pay license fees to all kinds of patent trolls (Microsoft most notably). These things are totally separate.

So first, if you sue Facebook for patents, you lose their patent grant (so they can sue you back, which everybody always does anyway - it's the only defense companies have in patent wars). But you don't lose the BSD license or anything. That's not how it works. All you lose is Facebook's promise not to sue you because you use React.

Secondly, and this is the core point, patents don't cover code, they cover ideas. Any patents that Facebook might have that, right or wrong, cover React, will surely be written broad enough that they also cover Preact, Inferno, Vue.js probably, and I bet also Angular. Not using React but one of these other libraries therefore makes no difference - in both cases, Facebook can use their React-ish patents to sue you.

To my understanding, patent lawsuits rarely get to the nitty gritty details of actual patents in reality. It does not matter whether a Facebook patent written broadly actually covers Vue.js or not - in practice, more often than not, companies will compare the height of the patent stacks they have, and agree on a settlement based on that.

All this patent grant says is that Facebook gets to use their patents that cover OSS to make their stack of paper a bit higher. Like they would if they hadn't made a patent grant at all.

So, repeat after me: using open source does not shield you from patent infringement lawsuits.

BukhariH 9 hours ago 3 replies      
Can someone please share what patents cover React?

Because if they're revoking patent grants that don't cover React, then there should be no problem continuing to use React, right?

bitL 4 hours ago 1 reply      
It truly seems less mature businesses should stop relying on open source with "baggage" and use only free software (AGPLv3+) that offers dual licensing for commercial use with support, as e.g. Qt does, unless you are 100% sure that for your product's lifecycle you won't come into direct business collision with the "baggage" author.
k__ 8 hours ago 3 replies      
What is the safe alternative here?

I mean, FB probably has patents.


Probably they have at least one that covers things React can do.

Almost every framework moved to components and virtual DOM.

So there is a big chance that any framework out there could infringe some of these React patents.

So they can either:

revoke your React license when you sue them, or

sue you over patent infringement if you don't use React.

guelo 8 hours ago 1 reply      
This doesn't convince me. As a consumer, patents and patent lawsuits are almost always bad. Patents reduce options in the market, lawsuits between companies waste resources, and startups being acquired reduces market options. The only real argument is that it will prevent communities from forming. But I don't buy it. Open source needs competition too; monolithic ecosystems are bad. As an example, Apple didn't want to contribute to GCC, so they created LLVM, which is a boon to everybody.
hoodoof 8 hours ago 1 reply      
"Look, we were going to buy you for $500million but our thorough due diligence has turned over a rather nasty stone that you probably wished we didn't look under. You know what I mean don't you? YES - we found out your dirty little secret that you're using ReactJS. Due to this, we have decided to pull the deal in favor of your competitor who uses AngularJS. What you need to understand is that although you've cornered the market with your superb software and business model, we are dead serious about never buying companies that have built on ReactJS. We have a deep, and we think entirely valid, concern that Facebook will, at a point in time, suddenly pull the carpet from under you and Mark Zuckerberg will be laughing at us saying 'suckers... we sure got you with the whole ReactJS ruse didn't we!'"

"We're also not very enthused about you building on Amazon - surprised you'd take a risk like that, it doesn't indicate much business sense."

"Sorry to say, but your business, due to the ReactJS decision, is worth $0."

jlebrech 5 hours ago 0 replies      
my reason is that your app doesn't need the whiz-bang reactiveness of React or any other frontend framework just yet. it's just extra overhead.
halfnibble 10 hours ago 0 replies      
I've been saying this for months. Don't use React!
notaboutdave 9 hours ago 2 replies      
Easy workaround: Install Preact. No code changes required, at least not for me last year.
dimillian 6 hours ago 1 reply      
Yeah, because small startups will totally go after Facebook. Makes sense. Wow.
What it feels like to be in the zone as a programmer dopeboy.github.io
262 points by dopeboy  1 day ago   109 comments top 36
sktrdie 1 day ago 4 replies      
I get this too, but it's very draining - similar to doing an intensive workout or giving a talk at a conference.

The negatives are obvious; less sociable, more easily irritated, wanting to be by yourself. After you've spent a day in the zone, you're not really "party material".

The positive (apart from being very productive) is that I use it to get my negative feelings out of the way - anything that is bothering in my life is somehow gone when I am in "the zone" - it is truly a zen feeling as the author explains.

It's also important to mention that you can't force yourself to be in the zone. It comes and goes, with very little control on your behalf. People that try to force themselves in the zone by working harder, are not truly in the zone. It happens seamlessly without you even knowing or wanting it.

For instance, I'm hardly ever in the zone. It happens probably once every two weeks, if not less - it also depends on what I'm working on; if it's something new and exciting I'm more predisposed to get in the zone.

Being in the zone is like getting an adrenaline rush - you can force yourself to do it more often (go skydiving for instance), but if you do it too often you'll quickly drain out and not enjoy it as you used to.

tekromancr 21 hours ago 2 replies      
I haven't been in the zone for months. It's mostly general dissatisfaction with my job, but it's gotten worse of late.

On an average day, there will be 4 hours of calls spread an hour or less apart for the first half of the day, with the potential for surprise calls for the rest of the day. The irony is that a lot of these calls are about why things aren't getting done.

The surprise calls are the worst. Even if I might have 2 hours of uninterrupted time at the end of a day (when I am most tired and frustrated), it is impossible to get focused when there is always a looming threat of interruption. It's gotten so bad that I only get anything done late at night or over weekends, but then I am tired during weekdays and resentful that I had to throw away my free time in order to move a project forward.

tastyface 1 day ago 3 replies      
Speaking of The Zone:

I often see programmers on HN talk about building mental castles of their programs, but I feel like I don't really code the same way. Instead, my thinking seems more "functional". For a given problem, I can often make out the faint outline of an optimal solution, but there's a lot of cruft and misplaced bits in the way. Most of my work involves mentally simulating the consequences of different options and then bending the architecture into such a shape that the whole thing just sort of assembles on its own. I'm only "in the zone" when I have to make that final leap. There's very little castle-building along the way.

As a result, I feel like I'm somewhat incapable of working on massive, multi-part architectures, since I just can't see the running state in my head. Once I zoom in to work on a single component, the rest fade from memory and I lose the big picture. On the other hand, I have no problem working in open-office environments: I don't mentally deal with a lot of program state, so I'm able to just dive right back in. This also influences my code to be more functional, as I know I can rely on e.g. idempotent methods to keep doing what they're supposed to regardless of any finicky global state.

I wish I could get better at building those "mental castles" since it's a huge barrier to designing complex architectures (like games). I don't want to be stuck forever working on the leaves of the tree. Might be related to OCD: I've had the disorder for a long time and I've sort of conditioned myself to avoid keeping running thoughts in memory before the "OCD daemon" distorts them into something horrible. As a result, much of my thinking is necessarily spontaneous and intuitive, or at the very least wordless.

Can anyone else relate?

rhizome31 1 day ago 9 replies      
I don't think I've ever experienced anything like this. As I practice TDD, my usual workflow is think -> test -> code, code usually being the easiest part. Things like "problems break down instantly", "everything becomes effortless" sound strange and exciting. I wonder how that relates to the concepts of maintainability, cowboy programming and 10x engineer.

I've met a few programmers in my career who were able to write a huge amount of code doing wonderful things without testing at all. One guy I think of would spend days coding without even trying to compile his code and apparently, except for minor typos he could quickly fix, his code was working when he decided to compile and test it. He impressed bosses and colleagues with amazing features developed in a very short time but, on the other hand, nobody on the team was able to maintain his code. This was explicitly stated and accepted by team members, we knew we couldn't maintain his code but we were ready to accept it given the productivity of the guy. It was a trade-off.

This way of working is completely alien to me. I can't think things in my head out of nothing and write working code. I need to start building something and get feedback from the computer to go to the next step. That's why when I was introduced to TDD it immediately made a lot of sense to me. It matched the way I was already operating. If I didn't have this workflow I think I would be unable to write even mildly complex code.

It's interesting how people can operate differently. In a way I'm a bit jealous of those "zone" programmers who can produce amazing things very quickly. But, on the other hand, I can see that I'm also useful because companies hire me and want to keep me. I've seen many times people taking over my code, maintain it and develop it further. I've even been explicitly told a few times that my code was very easy to understand and maintain. Seeing people taking over my code and develop it further is one of the most satisfying things in my work.

xaedes 1 day ago 1 reply      
The Dexterous Butcher

Cook Ting was cutting up an ox for Lord Wen-hui. At every touch of his hand, every heave of his shoulder, every move of his feet, every thrust of his knee, zip! zoop! He slithered the knife along with a zing, and all was in perfect rhythm, as though he were performing the dance of the Mulberry Grove or keeping time to the Ching-shou music.

"Ah, this is marvelous!" said Lord Wen-hui. "Imagine skill reaching such heights!"

Cook Ting laid down his knife and replied, "What I care about is the Way, which goes beyond skill. When I first began cutting up oxen, all I could see was the ox itself. After three years I no longer saw the whole ox. And now, now I go at it by spirit and don't look with my eyes. Perception and understanding have come to a stop and spirit moves where it wants. I go along with the natural makeup, strike in the big hollows, guide the knife through the big openings, and follow things as they are. So I never touch the smallest ligament or tendon, much less a main joint.

"A good cook changes his knife once a year, because he cuts. A mediocre cook changes his knife once a month, because he hacks. I've had this knife of mine for nineteen years and I've cut up thousands of oxen with it, and yet the blade is as good as though it had just come from the grindstone. There are spaces between the joints, and the blade of the knife has really no thickness. If you insert what has no thickness into such spaces, then there's plenty of room, more than enough for the blade to play about it. That's why after nineteen years the blade of my knife is still as good as when it first came from the grindstone.

"However, whenever I come to a complicated place, I size up the difficulties, tell myself to watch out and be careful, keep my eyes on what I'm doing, work very slowly, and move the knife with the greatest subtlety, until, flop! the whole thing comes apart like a clod of earth crumbling to the ground. I stand there holding the knife and look all around me, completely satisfied and reluctant to move on, and then I wipe off the knife and put it away."

"Excellent!" said Lord Wen-hui. "I have heard the words of Cook Ting and learned how to care for life!"

Translated by Burton Watson (Chuang Tzu: The Basic Writings, 1964)


QuercusMax 17 hours ago 1 reply      
I find that doing TDD makes it much easier to get in the zone; more importantly, it helps to get back into the zone if I get off track.

I typically stub out a bunch of tests (just empty methods named based on what I plan to test), then go one by one and fill in the tests and write the implementations.

In the codebase I work in, we use a lot of mocks / fakes, so I typically write my tests "in reverse" - first the verification of results, then the mock / fake expectations for what methods should have been called. Then I'll write the actual implementation, and then fill in the mock inputs.

This way, if I get interrupted, it's very easy to transition back into what I was working on, as I make sure to always leave a breadcrumb trail for the next piece (when I run my test, the failure will give me a hint as to the next step to take). And since I have a bunch of stubbed out test methods, once one bit is finished, I can move onto the next one and repeat the process.
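That workflow can be sketched with Python's unittest, with a hypothetical Notifier as the unit under test and its transport as the mocked collaborator (all names here are invented for illustration):

```python
import unittest
from unittest import mock

# Hypothetical unit under test: formats a message and hands it
# to a transport object (the collaborator we mock).
class Notifier:
    def __init__(self, transport):
        self.transport = transport

    def notify(self, user, event):
        message = f"{user}: {event}"
        self.transport.send(message)
        return message

class NotifierTest(unittest.TestCase):
    # Stubbed-out placeholders, named for what they will eventually test.
    def test_send_failure_is_retried(self):
        self.skipTest("TODO")

    def test_empty_event_is_rejected(self):
        self.skipTest("TODO")

    # One filled-in test, written "in reverse": the result verification
    # and the expectation on the mocked collaborator came first; the
    # mock's inputs were filled in last.
    def test_notify_formats_and_sends(self):
        transport = mock.Mock()
        notifier = Notifier(transport)
        result = notifier.notify("alice", "deploy finished")
        self.assertEqual(result, "alice: deploy finished")
        transport.send.assert_called_once_with("alice: deploy finished")
```

The skipped stubs are the breadcrumb trail: running the suite lists them as pending, pointing at the next piece to fill in.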

stevewillows 19 hours ago 0 replies      
Years ago I went through some neurotherapy with a local doctor [1]. One part of the process had clips on my ears and a sensor on my head to read biofeedback (or something along those lines).

The game was simple: there's a silo on the screen with a hot air balloon on the left side. When I get into 'the zone', the balloon goes up and around the silo. This will loop for as long as I can hold it.

It took about two sessions with minor success, then suddenly it clicked. Now I can easily enter that state on demand.

This might sound odd, but the neurotherapy helped eliminate a lot of the negative parts of ADHD without losing the edge that a lot of medications take away. I still have a lot of energy, but I can always sit and focus on the task at hand when I need to.

[1] http://www.swingleclinic.com/about/how-does-neurotherapy-wor...

Excluse 19 hours ago 0 replies      
Contrary to what a lot of people here have said about boring tasks, I find that's the easiest way for me to get into the zone.

While it may not be my most economically productive part of the day (aka I'm not working on the hard, important problems) there's no doubt that for the 10-15 minutes one of those menial tasks requires, I'm in that special state.

An environmental trigger for me is to play familiar music. It doesn't have to be a special playlist; any album I've listened to >50 times will suffice.

Remaining in the zone requires incremental progress (momentum) which I think is easy to find in a boring, repetitive task that's squarely in your wheelhouse.

The real productivity sweet spot is when you're able to get that momentum going on a valuable project.

EdgarVerona 22 hours ago 2 replies      
Getting in the zone is something exhilarating for me - I experience euphoria, though I don't really notice the feeling until after it's over or if I get broken out of it.

It was actually something that as of late has started to disturb me: I notice that I live for moments like that, when I get in the zone and the whole world melts away. I feel like a junkie seeking a high, and thinking back on my youth and how destructive I was to my body and my interpersonal relationships in pursuit of "code all the time," I wonder whether that analogy is even more accurate than I'd like to believe.

I look back on my life and wonder if I've actually been a lifelong addict who is lucky enough to have a productive output of his addiction rather than a functioning member of society.

I don't know if others feel or have felt this way, or if it's just a phase that I'm going through. But these are my thoughts at the moment.

robotmay 1 day ago 0 replies      
I find that isolating myself from my surroundings can actually help encourage me to get into the zone. I work from home, usually by myself, but if I want to get into the zone I'll chuck on a pair of headphones; something about that helps focus me, prevents distractions, and pulls me into the zone more easily. I suggested this to a friend recently whilst he was writing his dissertation, and he found that it worked well for him too and really helped him get through it.

However I can't have folk music, as I stumble across a great tune too often and get up to play it instead :)

It is much easier to stay in the zone when there's a physical barrier between you and other people. Even so much as being asked if you want a cup of tea is enough to pull you out of it. I recently asked my boss to stop other people from phoning me if they want me to be productive, and that really helped. I don't think most people understand what it's like; that amazing feeling you get when you're in the zone when programming, and it can be difficult getting them to understand why it's so frustrating being pulled out of it when a simple message would have sufficed.

EliRivers 1 day ago 2 replies      
Oh yeah, here comes the zone, here comes the zone, it's taken an hour of intense study of all this code but now I can see all the pieces at once inside my head and I can feel exactly how to thread this code right through the middle of all of it and -

COLLEAGUE LEANS OVER FROM NEARBY DESK IN OPEN OFFICE: "Hey buddy, what's the password for the - oh no, wait, I remember."

Wait, what was I doing? What's all this code on my monitors? Why was I looking at any of this?

gggdvnkhmbgjvbn 1 day ago 0 replies      
Used to get this feeling from video games; now I get paid for it as a programmer. As a result I've found it hard to go back.
cpayne624 1 day ago 2 replies      
I'm a Fed engineer and spent a 4 mo. assignment on an integrated team with Pivotal pairing exclusively. It was a long 4 months for me. There was no "zone." I'm not built for pairing.
Uptrenda 1 day ago 1 reply      
Good luck ever getting in the zone if you do anything with modern blockchains (especially Ethereum). All of the documentation is terrible, and you waste hours trying to find a bug only to realise it was a problem with the library all along... assuming, of course, that you don't give up after seeing the "developer tools." The few tools you have for solving problems make it feel like you're trying to carve a delicate ice statue with a giant hammer while wearing clown gloves.

How do you deal with the related stress of having to struggle against needlessly difficult tools, libraries, documentation, and bugs caused by other people?

luckydude 19 hours ago 0 replies      
Back when I worked at Sun and got in the zone all the time, I worked on a ~32 hour daily clock, because some of the work I was doing would take me about 8 hours just to get back to the state of mind I was in the day before. So instead of working 8 hours I would work for about 16, so I actually made 8 hours of forward progress. The 32 hour "day" was so I could have the rest of a normal day to eat, sleep, etc.

This got to be common enough that someone made a clock where you could move the hands; it said "Larry will be in here" and they stuck it on my door. I think it was sort of a joke, but I think some people actually used it.

I couldn't come close to doing anything like that now. And at 55 years old, I can tell you that the days where you get in the zone, for me at least, are few and far between. I used to be able to just go there, now it sort of happens to me and I have to drop everything else and ride it before it fades away.

atom-morgan 1 day ago 0 replies      
What it feels like to be in the zone as a programmer is what it feels like to be in the zone doing any task that can put you into flow state.
dghughes 1 day ago 0 replies      
I'm trying my hand at programming and I'm surprised at my progress so far.

But as a person who is very unfocused and poor at math, programming has got to be the worst thing on earth for me. But I like it, and math.

As with anything, learning to focus takes effort, and it's different for each person. But a clean desk, a calm environment, goals, lots of sleep, eating well, and (I find) exercising beforehand all help. Not just with learning to program, but with any task.

amelius 1 day ago 0 replies      
I notice that I can be in the zone while programming, but then when I need to research something (do real thinking rather than work by reflexes), I pop out of it.
fnayr 1 day ago 2 replies      
This is almost disturbingly accurate to how I feel in the "zone" as well. A consequence of this is it's hard to have a healthy life as a self-employed programmer. If I want the app I'm working on finished faster (and I do or I'll run out of money), I must stay in the zone as long as possible. Which means I must ignore people as long as possible and put off eating/exercising as long as possible as well.
depressedpanda 21 hours ago 0 replies      
What a great article; it concisely and succinctly describes what's going on, and does so much better than I could.

I shared it with my significant other, in order for her to better understand the grumpy responses she sometimes gets when asking seemingly innocuous questions like "would you like some tea?"

twodave 20 hours ago 0 replies      
I disagree with the premise of this article (though I haven't always). I generally find that I'm always in the zone for _something_, and after more than a decade writing code I've found that often when I'm feeling less productive at it, it's because there is some deficiency in my life, be it social interaction, nutrition, fitness, over-exertion, etc. Over the years I've come to know myself better, which allows me to take better care of myself holistically in order to be not just more productive at work, but more content with life in general. Keep everything in its proper place and all that.
engnr567 17 hours ago 1 reply      
When I was single, for most of my big projects I used to get 70-80% of the work done in 4-5 days of being in the zone. And then spend months on changing the bells and whistles. Now I have to be home at a reasonable hour and hopefully in a good mood. So I have become hesitant to even get into the zone, because getting out of this state of high efficiency would make me extremely irritable.

How do married people, or those with kids, balance such bursts of creativity with personal commitments to their family?

d33 1 day ago 3 replies      
How does "being in the zone" compare to being in the state of "flow" [0]? Are those synonymous?

[0]: https://en.wikipedia.org/wiki/Flow_(psychology)

djhworld 1 day ago 0 replies      
I sometimes find myself in this situation too. I often find that I feel most productive when I reach this state. But it's quite rare; a lot of my day is interrupted by colleagues, meetings, a noisy office, etc.

It's cool, but you can see the downsides. A few weeks ago I basically disappeared for a few days writing some code. Great fun for me, but not exactly boosting growth opportunities for the team.

NicoJuicy 1 day ago 0 replies      
This happens a lot to me, although I always go out on Friday and Saturday evenings. It's just hard sometimes switching it off, and it takes a reasonable effort...

Sometimes I'm more quiet the entire evening and sometimes it's easier. In my mind I'm constantly thinking about code, and it's hard to be social.

All around, I'm a very social guy. Just when I leave the zone, I'm not.

Tepix 22 hours ago 0 replies      
Recommended related reading: Zenclavier - Extreme Keyboarding by Tom Christiansen


subwayclub 18 hours ago 0 replies      
I try not to stay in flow state. It means the problem I'm working on is too familiar, and I should automate the programming of it so that I'm grinding on something hard again.

Edit: but it's okay if it's a prototype

tiku 19 hours ago 0 replies      
I've done a minor in Flow, the theory of getting in the zone. Very interesting. It manifests itself mostly when the challenge is hard enough and your knowledge of the subject is also good. Boring tasks won't trigger flow, etc.
chrisfinne 1 day ago 0 replies      
Well articulated and very concise. This could have been laboriously drawn out into a 10 page article.

"Half as long" writing lesson from "A River Runs Through It": https://www.youtube.com/watch?v=7vRhOdf-6co

astrod 23 hours ago 0 replies      
I started using guarana tablets to help stay 'super' focused, but only when required. I find it helps me a lot with productivity, often giving 3+ hours of optimum output. No side effects; the only other supplement I take regularly is fish oil, and I don't drink coffee or energy drinks.
kolari 21 hours ago 0 replies      
I guess when a programmer is in the zone, he/she is much more effective at communicating with and instructing the machines (in the language defined between humans and machines) than at communicating with other humans (programmers or not).
amiga-workbench 1 day ago 0 replies      
I haven't been getting this much for the last few months, but I think that's due to my scattered workload. I'm about to start a new project build and am looking forward to falling back into the flow.

It's a wonderful feeling; it's like the fog in my mind has been lifted.

bcrisman 23 hours ago 0 replies      
I get there as well, but it takes a bit. Generally, the zone hits me when I'm in crunch time and I know that I won't have any meetings for a while. My ideas all work together and if I get stuck on something, it's not long before I can figure it out. I can generally get a ton of work accomplished.

But then, someone knocks on my cube to say, "do you know where the elevators are at?"

macca321 21 hours ago 0 replies      
Then the next day you realise how to achieve the same thing in a tenth of the code...
orthoganol 19 hours ago 0 replies      
Prerequisites for 'the zone' (why not call it flow? isn't it the same thing?):

a) You have to be interested and eager to get started. If you're not happy with the project, or if anything else going on in your life is taking your attention, you will not experience it.

b) When you experience it, you feel like you're a 'real' engineer, like that is your true identity now; your imposter syndrome disappears. So ultimately, if you don't identify as a programmer, but rather as someone who programs because it pays well, or who views it as just a temporary phase of your career until you move into management or become a startup CEO or something, you may never experience it.

c) After you experience it, your brain goes "whoaaa" and needs to recover. You won't be able to experience it for at least another 2-3 days, in my experience.

klarrimore 17 hours ago 0 replies      
You mean when you sit down at your keyboard 45 minutes after you popped those Adderall?
Marko An isomorphic UI framework similar to Vue markojs.com
259 points by jsnathan  1 day ago   138 comments top 16
alansammarone 1 day ago 15 replies      
I've focused on backend for the last 7 years or so, so I've been kind of out of contact with the frontend world. Recently I started working on a personal project, and I thought it would be a good time to learn some of the modern tools people have been using for frontend dev.

I was completely baffled by the myriad of options out there, how complex they look (note I've been working on very high performance, distributed backend applications, so complexity in itself is not an issue), and how it's very unclear when to use any one of them or what each one is good for. I tried Angular and React, and both feel like almost a different language. You have to learn their internals to work effectively with them, and it often looks like they create more complexity than the original complexity they were trying to reduce. I have no problem learning new things, in fact, I love it! It just feels like there are other things to learn that will stick around for longer - JS frameworks/libraries seem to be very hype-driven these days. What are your thoughts on this?

bryanph_ 1 day ago 4 replies      
Nowadays I only consider switching front-end frameworks if there is a substantial conceptual improvement. React did this for me due to its uni-directional dataflow and component-based architecture. There is nothing new here conceptually.
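For anyone who missed that shift, here is a minimal sketch in plain JavaScript (not React's actual API; all names here are illustrative) of what uni-directional dataflow means: state flows down into a render function, events flow up as actions that produce new state, and the view is always re-derived rather than mutated in place.

```javascript
// Illustrative sketch of uni-directional dataflow (not React itself).
// The view is a pure function of state; actions are the only way
// state changes, and each change produces a fresh state object.
function render(state) {
  return `<div>The current count is ${state.count}</div>`;
}

function update(state, action) {
  // The view never mutates state directly; it dispatches actions.
  switch (action) {
    case "increment":
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}

let state = { count: 0 };
state = update(state, "increment");
console.log(render(state)); // <div>The current count is 1</div>
```

The conceptual win is that data only ever moves one way, so there is a single place to look when the UI is wrong.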
tangue 1 day ago 4 replies      
I discovered Marko in one of the various react-alternative topics that emerged yesterday, and it looks like something sane, which is rare in the js ecosystem. I'm wondering if anyone on hn has used it in a real-world project and how it was.
jarym 23 hours ago 0 replies      
So many new UI frameworks, yet no one really mentioned SmartClient.com (LGPL licensed).

I've been using it for almost 10 years and some of the concepts they pioneered have only recently been discovered by the new kids.

I still use it, though some of the 'fixes' they had to put in place to support old browsers often pollute the DOM unnecessarily in modern browsers (this is something I hope they've fixed).

My favourite aspects of it are that I can define components declaratively, that it has a technique called autoChildren which allows managing a tree of components as a flat set (useful for complex components like tabsets), and the data binding layer. The documentation is top notch (which it needs to be, given the depth of stuff in there).

Again, all of this has been around since 2009, when I started using it - and I'm not sure how many years they'd been going before I found it.

jcelerier 1 day ago 5 replies      
Meanwhile in real reactive environments:

  import QtQuick 2.7
  import QtQuick.Controls 1.1

  Rectangle {
      property real count: 0
      Column {
          Text {
              text: count
              color: "#09c"
              font.pointSize: 24
          }
          Button {
              text: "Click me!"
              onClicked: count++
          }
      }
  }
Also, the so-called "60fps smooth" animation has noticeable stutters in Firefox on Linux.

znpy 8 hours ago 1 reply      
I just skimmed the page and saw that sort of coloured sine wave... then read: "The above animation is 128 <div> tags. No SVG, no CSS transitions/animations. It's all powered by Marko which does a full re-render every frame."

Well, as soon as my browser renders that thing, the browser process reaches 122% CPU usage (according to htop). And I'm using a 4th gen Core i7 processor. I can literally (literally in the literal sense of the word) hear my fan spin up. That hurts battery life so much.
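For scale, here is a rough sketch (hypothetical; not Marko's actual demo code) of the per-frame work that demo implies: every frame recomputes all 128 bar heights and re-renders every <div>, leaving the browser nothing to optimise via CSS transitions.

```javascript
// Hypothetical sketch of the homepage sine-wave demo's per-frame work.
// Each frame recomputes a height for all 128 bars; in the real page
// this would run inside requestAnimationFrame and trigger a full
// re-render of all 128 <div>s, 60 times per second - which is what
// pins a CPU core.
const BAR_COUNT = 128;

function barHeights(timeMs) {
  const heights = [];
  for (let i = 0; i < BAR_COUNT; i++) {
    // Phase-shift each bar so the wave travels across the row.
    const phase = (i / BAR_COUNT) * 2 * Math.PI;
    heights.push(50 + 40 * Math.sin(timeMs / 300 + phase));
  }
  return heights;
}

const frame = barHeights(0);
console.log(frame.length); // 128
```

The arithmetic itself is cheap; the cost is the 60-per-second layout and paint of 128 elements that CSS animations would normally let the browser skip or composite.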

brianon99 1 day ago 0 replies      
Don't abuse the term isomorphic. To prove two groups are isomorphic mathematically, you have to show there exists a bijective, product-preserving map between the groups.

Just kidding.
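(For completeness, the definition the joke leans on, in standard notation:)

```latex
% A group isomorphism: a bijective map that preserves the product.
\varphi : G \to H \ \text{is an isomorphism} \iff
\varphi \ \text{is a bijection and} \quad
\varphi(ab) = \varphi(a)\,\varphi(b) \quad \text{for all } a, b \in G.
```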

forkLding 8 hours ago 1 reply      
Quick question: is there anything conceptually interesting about Marko that's different in ideology and structure from Angular, React, or Vue?

For background on why I'm asking: I'm an iOS mobile dev and was a web dev before, and I often use web dev structures and ideas, as I find fewer structures, frameworks (unless you count RxSwift), and general philosophies in iOS mobile dev, aside from best practices and tips like avoiding Massive View Controllers, etc.

SeriousM 9 hours ago 1 reply      
I don't believe the marketing when it says that working with something is "fun". Working is maybe enjoyable sometimes, but it will stay hard work if you're doing it right.
spacetexas 1 day ago 1 reply      
Ecosystem is so important these days. There might be technical reasons for choosing this, but considering the support (knowing Stack Overflow answers will be available) and the pre-existing component ecosystems for Vue & React, I can't see a reason anyone would pick this.
pier25 19 hours ago 1 reply      
Here's a great introduction to Marko by its main dev:


stuaxo 14 hours ago 0 replies      
This looks pretty decent.
dolphone 11 hours ago 0 replies      
You had me at isomorphic.
dmitriid 1 day ago 1 reply      
Oh hi there, yet another awkward not-really-HTML-not-really-JS templating language:

  class {
      onCreate() {
          this.state = { count: 0 };
      }
      increment() {
          this.state.count++;
      }
  }

  <div>The current count is ${state.count}</div>
  <button on-click('increment')>Click me!</button>

akras14 16 hours ago 0 replies      
Yay, another front end framework! /s
       cached 21 August 2017 15:11:01 GMT