hacker news with inline top comments - 11 Aug 2017
1
Ad blocking is under attack adguard.com
252 points by tiagobraw  1 hour ago   128 comments top 34
1
anilgulecha 37 minutes ago 2 replies      
It was Admiral that did this: https://blog.getadmiral.com/dmca-easylist-adblock-copyright-...

They even clearly state they used the only tool available to them, the DMCA. From all the current summaries on this, the DMCA does not apply to a line entry in EasyList. A domain can be trademarked.

This should be added back in. And if GitHub cannot stand up to DMCA abuse, then, well, EasyList and all other developers should be giving clear, hard thought to their continued use of the GitHub platform.

2
nilved 53 minutes ago 1 reply      
The DMCA only applies to the US. Fork the repo and move on. It's time to forget about American hosting.
3
nkkollaw 13 minutes ago 0 replies      
I didn't get what the hell happened.

So, Admiral (an anti-adblocker company) contacted EasyList and told them to remove a domain from their list. This domain was a server they needed for their anti-adblocker platform to work.

EasyList told Admiral that they would only do it if GitHub agreed, so Admiral contacted GitHub and the domain was removed from the EasyList list after GitHub told EasyList they should comply.

The "attack" is that any company can tell lists to remove their website via using a DMCA violation, so lists become useless.

I have two questions:

1. How would a domain name on a list violate copyright?
2. Why aren't lists hosted anywhere else but the US, so that they can't be controlled by DMCA requests?

4
mnarayan01 42 minutes ago 3 replies      
Via https://github.com/easylist/easylist/commit/a4d380ad1a3b33a0... by one of the Easylist maintainers:

> If it is a Circumvention/Adblock-Warning adhost, it should be removed from Easylist even without the need for a DMCA request.

Anyone know why he's saying this?

Edit: I guess he's referring to the following via https://easylist.to/2013/05/10/anti-adblock-guide-for-site-a...:

> Anti-Adblock should only be challenged if the system limits website functionality or causes significant disruption to browsing

5
richardknop 50 minutes ago 6 replies      
There has been a growing trend of websites that will ask me to either:

a) whitelist their site in my adblocker, or
b) take out a monthly subscription and keep reading their site with the adblocker on

6
cft 15 minutes ago 1 reply      
It's amazing that they are so uneducated about the DMCA. This is called a defective DMCA notice, and it should be ignored. The sender of the notice can now file a suit, since the publisher of the list is no longer under safe harbor. But they won't, since they know it is defective. And if they did, it would be thrown out on summary judgment.
7
superflyguy 38 minutes ago 1 reply      
Store a hash of the URLs instead of the URLs themselves.
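
A minimal sketch of the idea in Python (illustrative only; this is not how EasyList actually works):

  import hashlib

  def digest(domain: str) -> str:
      return hashlib.sha256(domain.encode()).hexdigest()

  # The published list would contain only one-way hashes...
  blocklist = {digest("functionalclam.com")}

  # ...and the blocker hashes each requested domain to test membership.
  def is_blocked(domain: str) -> bool:
      return digest(domain) in blocklist

  print(is_blocked("functionalclam.com"))  # True
  print(is_blocked("example.com"))         # False

One trade-off: hashes only support exact-match lookups, so wildcard and substring rules would no longer work.
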
8
aphextron 47 minutes ago 2 replies      
>"This domain accepts GET and POST requests over standard HTTP ports (TCP 80 and TCP 443) via strictly web based traffic originating from browsers"

lol

9
spilk 40 minutes ago 0 replies      
Am I the only one who thinks it's laughable that adding their domain to a blocklist "circumvents" their protection technology? If I were a content provider, I don't think I'd want to rely on something so flimsy.
10
pbhjpbhj 16 minutes ago 1 reply      
The only way I see DMCA applying here is contributory infringement in enabling modification of the page.

With that in mind I think the question to be answered is does a user/viewer have the right to modify a page in their own web browser or must they view the page as intended by the creators?

You can then get into technicalities, like whether Links' creators are committing contributory infringement, but I'm not sure that's helpful.

Should a user be able to purposefully block parts of a page?

11
K0nserv 9 minutes ago 0 replies      
Surely if you were truly concerned about the copyright of your work, you wouldn't freely send the content to the user's device before issuing some sort of challenge, e.g. a login, at which point circumventing it becomes a crime. Websites shouldn't be allowed to dictate how the user agent decides to execute the content received. This seems like a case of wanting to have your cake and eat it too.
12
dhimes 46 minutes ago 4 replies      
Is it illegal to claim something under DMCA when it does not in fact represent a DMCA claim?
13
tyingq 41 minutes ago 1 reply      
Just encode the list of URLs with htmlentities, base64, rot13, whatever. Then the DMCA can't be used to attack it.

Edit: Disagree? Please comment. It's attackable via the DMCA because the word/brand is in the file in its copyrighted form. If you encode it so that it isn't in that form, it's protected from that approach.
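
A quick Python sketch of what that could look like (whether it would actually defeat a takedown is a legal question, not a technical one):

  import base64, codecs

  domain = "functionalclam.com"

  # Reversible encodings: the brand name no longer appears in cleartext.
  b64 = base64.b64encode(domain.encode()).decode()
  rot = codecs.encode(domain, "rot13")

  # Ad blockers would decode entries back to cleartext at load time.
  assert base64.b64decode(b64).decode() == domain
  assert codecs.decode(rot, "rot13") == domain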

14
eli 44 minutes ago 0 replies      
This seems a bit hyperbolic. If you believe there's a battle between ad servers and ad blockers, it's obvious the ad blockers are winning by a lot.

IANAL but this seems like a questionable use of DMCA Takedown notice which, in any event, isn't actually binding. EasyList could probably file a counternotification and risk a lawsuit.

15
delegate 16 minutes ago 0 replies      
As I understand it, they're using copyright law not to prevent me from copying content, but to prevent me from not consuming content.

In other words, the copyright law is used to force users to consume content.

16
zb3 24 minutes ago 1 reply      
If in the future I am forced to view ads, I won't click on them and I won't buy any products advertised via forced ads.
17
EFruit 9 minutes ago 0 replies      
uBlock and uMatrix have a "page blocked" function for when the entire page matches a filter; could something similar be implemented for these situations?

Say, if a page requests content from these blocked domains, the ad blocker could display an SSL-warning-esque page that says something along the lines of "This page contains ad content protected by the DMCA. These ads may contain malicious material, and cannot be safely viewed. Please alert the site owner to resolve this issue." If you want to make it really effective, you may even be able to pull contact email addresses from WHOIS data and make the contact line a mailto: URL.

18
SoMisanthrope 20 minutes ago 1 reply      
Personally, I think this could be a real problem if ad services decide to pry open the doors on our ability to deny them our screen real estate. I block ads primarily because I don't want my web surfing history and behaviors captured by some uncontrollable mob of ad-servers. Further, most advertisements are akin to the dreaded <flash> tag of old: animated, ugly, and obtrusive.

We are coming to a point in the Internet's genesis where technology should evolve to allow the "information consumer" to directly reward the "information producer"... without the need of an intermediary like Google AdWords, etc.

19
wnevets 47 minutes ago 1 reply      
A DMCA request for listing an inherently public domain name? Am I the only one confused?
20
_jal 10 minutes ago 1 reply      
127.0.0.1 www.functionalclam.com functionalclam.com
21
tomschlick 11 minutes ago 0 replies      
They should just base64-encode domains. If the name isn't in cleartext, can they issue a DMCA for trademark infringement?
22
pmoriarty 24 minutes ago 0 replies      
The DMCA itself should be taken down. Where is the organizing against it?
23
gremlinsinc 44 minutes ago 4 replies      
Wonder if we could create a git in the blockchain... where DMCA just can't exist/happen since there's no central authority to ask to take stuff down...
24
rnhmjoj 53 minutes ago 1 reply      
Can't we host EasyList somewhere safer?
25
troyvit 28 minutes ago 0 replies      
  grep functionalclam.com /etc/hosts
  0.0.0.0 functionalclam.com
26
firefoxd 29 minutes ago 2 replies      
I use Privacy Badger, and unless I'm mistaken, it doesn't use a list to block content; it just blocks any 3rd-party request that sends cookie information.

Why are we relying on a list?

27
tomc1985 15 minutes ago 0 replies      
Sounds like it's time to relocate to a self-hosted git repo
28
natch 23 minutes ago 0 replies      
ROT-13 the entries in the list. Update ad blockers to read that format. Solved.
29
guelo 13 minutes ago 0 replies      
This seems like GitHub's fuck-up.
30
fnkcjjvjjv 13 minutes ago 0 replies      
I guess my approach of blocking domains in the DNS server turned out to be a win after all.
31
Dolores12 44 minutes ago 1 reply      
Is it safe to type this blocked URL in a browser? Will I get sued?
32
ehxcaet 49 minutes ago 2 replies      
What's to stop people from forking off that list and transitioning to another one? Like what if ad blockers just switched to use EasyList2 instead of EasyList?
33
mtgx 48 minutes ago 0 replies      
This is what you get when companies have all the incentive to file DMCA takedown requests and there's no punishment for filing a bogus one. And modern technology is making this problem increasingly worse, with all the automated takedown tools.
34
such_a_casual 35 minutes ago 2 replies      
2
Crafting plausible fantasy maps mythcreants.com
111 points by fanf2  2 hours ago   15 comments top 9
1
danarmak 31 minutes ago 0 replies      
If you're making a fantasy world (which is not a past or future Earth in disguise), you don't have to stick to plate tectonics and volcanism as they exist on Earth. Geology varies a lot between planets, even in our solar system!

That's without even going into whatever makes your world "fantasy". Un-Earthly geology could be plot relevant (The Fifth Season, N. K. Jemisin). Or it could be a bit of colorful background that's still scientifically valid. Maybe you live on a tidally locked moon, or a geologically dead world with a cold core, or an ocean world with an ice crust (not of water necessarily). Maybe the locally important geological phenomena come not from inside the planet (plates, volcanoes) but outside (asteroids, meteors). Maybe your planet's liquid core has currents so strong that islands float around at meters per second and bounce off each other.

Fantasy only needs to be consistent with itself - not with our Earth. Otherwise where's the fun in worldbuilding?

Relevant article from the same site: https://mythcreants.com/blog/should-your-fantasy-world-resem...

2
kibwen 14 minutes ago 0 replies      
Fun read. This reminds me of Rich Burlew's (of Order Of The Stick) series of articles on designing fantasy worlds: http://www.giantitp.com/articles/xO3dVM8EDKJPlKxmVoG.html

> And the fractal pattern of rivers is directional: rivers will always merge as they flow toward the coast, never split.

Interestingly, it is possible for rivers to split, sort of. Rivers naturally meander, and it is possible for two adjacent rivers to meander such that they intersect each other. Most of the combined flow will end up along the steeper path, true, but that doesn't mean the shallower path will dry up completely. For a real-world example, this is basically the catastrophe that Louisiana's Old River Control Structure has been staving off for decades.

3
seanalltogether 1 hour ago 0 replies      
> Humans (especially pre-industrial, agricultural humans) like to settle near rivers.

Apropos of nothing, back during the rush to build the transcontinental railroad, the Union Pacific thought Cheyenne would become the new hub of the west and Denver would turn into a ghost town. But a quick look at a map shows the abundance of rivers and waterways flowing through Denver while Cheyenne has just a small river flowing through. Not even a railroad was enough to overcome the lack of resources around Cheyenne.

4
hownottowrite 54 minutes ago 0 replies      
"Here Dragons Abound"[0] is another interesting take on this topic. The guy's been working for about a year now to beef up Martin O'Leary's work[1]. Fascinating read through if you start at the first post.

[0] https://heredragonsabound.blogspot.com

[1] http://mewo2.com/notes/terrain/

5
mastax 58 minutes ago 0 replies      
If you like this, you'll love https://reddit.com/r/worldbuilding
6
serhei 42 minutes ago 0 replies      
The reasoning on this guy's website is often pretty fascinating: http://www.worlddreambank.org/P/PLANETS.HTM -- these are entire fantasy planets. For a warm-up exercise, he started out by tilting the Earth in different ways and working out how that would affect climate and biomes.
7
clarkmoody 1 hour ago 0 replies      
Here's a Twitter account that generates interesting procedural maps: https://twitter.com/unchartedatlas
8
petewailes 57 minutes ago 0 replies      
If you're interested in fantasy maps, I'd recommend https://watabou.itch.io/medieval-fantasy-city-generator as an interesting tool.

Also, if you're into tabletop RPGs, I'm going to give a shout out to my own blog - https://wail.es/

9
logfromblammo 1 hour ago 4 replies      
I wish I could take all that advice back in time and give it to JRR Tolkien. I have lost count of the number of times I have looked at a map of Middle-Earth and thought, "mountains don't do that".
3
MIT team's school-bus algorithm could save $5M and 1M bus miles wsj.com
58 points by frostmatthew  1 hour ago   33 comments top 9
1
payne92 44 minutes ago 1 reply      
I worked with a startup years ago that did these kinds of optimizations (for deliveries). The logistics & transportation market in the US is about a TRILLION dollars -- small optimizations can make a HUGE impact.

The challenge for practical implementations is that there are lots and lots and lots of optimization factors that are very hard to account for. Often, you don't even know what they are until you try to automate the process.

For example: preferring small buses on side streets, as the article mentions. Or, knowing about a major work project in an area (to avoid for the coming year), or specific left turns that are tough for a bus. Also, highly optimized schedules usually have (by definition) much less "slack" in the overall system, and are less resilient to real world changes and variations.

My point: in my experience, there can often be a significant gap between enthusiastic technologists (myself included) and real-world implementations.

2
chrisbennet 1 hour ago 4 replies      
Since this forum is for founder/entrepreneur types, let me inject a bit of caution from my own experience dealing with government organizations:

If you save a business organization money, they appreciate it.

If you save a government organization money, the next year the "savings" is likely to be deducted from their budget. They don't get to benefit from the savings and thus their motivations aren't what you might expect if you are used to dealing with businesses.

It makes sense but when I encountered this the first time it caught me by surprise. I mean who wouldn't want to save money, right?

3
towndrunk 1 hour ago 6 replies      
Kind of off topic but... why is no one converting school buses to electric? Seems like a perfect opportunity. Lots of space for batteries. High torque. Short runs in the morning and afternoon. Think of how many diesel school buses there are across just the USA every morning.
4
jmull 1 hour ago 0 replies      
It will be interesting to see how it works out in real life.

As the article notes, they've tried this before and the algorithmically generated route map didn't work well in real life. Assuming the algorithm was good, the issue was probably that it didn't incorporate significant real-life factors.

Perhaps they are getting more sophisticated though, by having the people who have been doing this work by hand review the results and make adjustments (as the article says, e.g., at first the algorithm allowed big buses on narrow streets, which apparently doesn't work well).

Speaking of the people who have been doing the route maps manually... they seem to have been doing an impressive job since the algorithm is expected to be only 4% better.

5
vineet 1 hour ago 3 replies      
I am glad that people are using algorithms to increase efficiency. But, this seems to only save $5M out of a $120M budget - I was expecting and am hoping for more.

I feel like it would be an awesome project to dissect a large public institution's funding and to discuss ways that algorithms can help reduce costs without reducing the quality of service and agility of the organization.

The site would not allow me to read the full article, so I might have missed something.

6
rce123 40 minutes ago 0 replies      
How do they arrive at only 20,000 lb of carbon emissions (presumably CO2) from 1M bus miles traveled? According to the EIA, 1 gallon of diesel produces ~22.4 lbs of CO2. Even assuming ridiculously optimistic MPG ratings on the buses, these numbers don't seem consistent with each other.
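
A rough sanity check in Python (the ~7 mpg figure below is an assumption about typical diesel school-bus fuel economy, not from the article):

  miles = 1_000_000
  mpg = 7                      # assumed school-bus fuel economy
  lbs_co2_per_gallon = 22.4    # EIA figure cited above

  gallons = miles / mpg
  print(f"{gallons:,.0f} gal -> {gallons * lbs_co2_per_gallon:,.0f} lbs CO2")
  # ~142,857 gal -> ~3,200,000 lbs CO2, vs. the quoted 20,000 lbs
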
7
yedpodtrzitko 52 minutes ago 1 reply      
Locked article, so I am just curious: is this a different algorithm than the one with the "no turning left" rule that saves money for UPS?
8
cropsieboss 1 hour ago 1 reply      
Pickup-and-delivery optimization algorithms have been optimizing stuff in private businesses for years.

What kind of algorithm do you think makes schedules for trash-bin collection, newspaper delivery, soda-pop machines, etc.?

If you aren't on-demand and there are multiple drivers, there is someone who can optimize it to (almost) optimality.

This is a pickup & delivery problem, and the algorithms have existed and worked in practice for decades.

9
Sir_Cmpwn 17 minutes ago 0 replies      
Flagged, paywalls without a workaround are not permitted on HN.

https://news.ycombinator.com/newsfaq.html

4
We fucked up: why we had to switch from Braintree to Stripe deekit.com
25 points by blakenomad  55 minutes ago   7 comments top 5
1
0x0 0 minutes ago 0 replies      
Halfway through the article I got a huge popup with a button "Create your whiteboard". When I clicked it, I ended up on a 404 page.
2
thoughtpalette 15 minutes ago 0 replies      
Great write-up drawing comparisons between the two. I've looked into Braintree, as I have some colleagues who work there (easy tech-support channel), and the Ignition program (first $50k of transaction fees waived) seemed like a no-brainer for a lean startup implementation. Had no issues with the Node API, and the documentation was very helpful.

That being said, I've seen nothing but praise for Stripe payments. The dev and design team are very solid and supposedly have great documentation as well.

I wonder, since Braintree is owned by PayPal, if companies have experienced the same pain points with getting locked out of funds for arbitrary reasons on that platform, or if Braintree is completely agnostic to PayPal's TOS.

3
buf 16 minutes ago 1 reply      
Funny, I just moved https://www.castingcall.club from Stripe to Braintree. I find Stripe to be superior, but the users wanted subscription PayPal payments rather than using credit cards. Having it in one place made sense.

Follow the users.

4
bpchaps 4 minutes ago 0 replies      
I truly do not trust Braintree as a secure payment service.

As a company, they host an asinine number of public events at their 100+ person meeting hall at Chicago's Merchandise Mart. To get to that hall, you have to walk about 200ft past all of their workstations and meeting rooms. Infrastructure diagrams and unlocked workstations are pretty much everywhere.

Sure, they do a few things to mitigate the risks of people who come in by requiring sign-ins and presumably cameras everywhere, but it still feels very surreal to be a few meters away from having potential access to a large payment processor's infrastructure. I've seen at least one person there who was very clearly using a burner laptop.

5
galonk 8 minutes ago 1 reply      
Off topic, but yet another startup where the main page (which is actually labelled "Product"!!!) is blank and doesn't tell you anything about the product.
5
Critical security updates for Git, Subversion and Mercurial marc.info
262 points by mnw21cam  6 hours ago   82 comments top 16
1
jjnoakes 2 hours ago 2 replies      
It's interesting to me that the fix was pattern matching the ssh hostname and banning a starting hyphen, rather than (say) passing "--" to ssh to signal the end of the intentional options so a hostname of "-oProxyCommand=whatever" is interpreted by ssh properly (as a hostname which can't be reached, instead of as a rogue argument).

I thought this was a fairly well known way to pass arbitrary strings to commands and ensure they aren't interpreted as options (for commands which honor "--", like ssh does).

2
icc97 4 hours ago 7 replies      
Kudos to Chocolatey on Windows, they immediately updated their Git package [0] to v2.14.1, so a simple `choco upgrade -y git` gets me up to date. If only life on Windows had always been this hassle free.

[0]: https://chocolatey.org/packages/git

3
skj 3 hours ago 0 replies      
Report from the person who discovered the vulnerability: http://blog.recurity-labs.com/2017-08-10/scm-vulns
4
jomar 3 hours ago 1 reply      
A "ssh://..." URL can result in a "ssh" command line with a hostname that begins with a dash "-", which would cause the "ssh" command to instead (mis)treat it as an option.

It's a shame, because the Git dispatching code ought to be able to invoke the ssh command via

 ssh -p 22 -etc -etc -- <hostname>
to prevent interpreting options in <hostname>, thus defusing the in-band signalling causing this. But I suppose it can't depend on every ssh implementation understanding this "--" POSIX utility syntax guideline.
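
A minimal sketch of that idea (a hypothetical Python wrapper, not Git's actual fix; it assumes an OpenSSH-style client that honors "--"):

  import subprocess

  # Attacker-controlled "hostname" from a crafted ssh:// URL
  host = "-oProxyCommand=touch /tmp/pwned"

  # Without "--", ssh parses the string as an option and runs the
  # ProxyCommand. With "--", option parsing stops and the string is
  # treated as a hostname (which simply fails to resolve).
  subprocess.run(["ssh", "--", host])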

5
innocenat 4 hours ago 1 reply      
Just a note for everyone on Ubuntu, the fixed version for Ubuntu 16.04 is git v2.7.4-0ubuntu1.2 [0].

[0]: https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2...
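
(To pick that up, assuming the standard Ubuntu repos: `sudo apt-get update && sudo apt-get install git`, then verify with `git --version`.)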

6
bburky 3 hours ago 0 replies      
Wow, git's url bugs always seem to become easily exploitable due to .gitmodules.

I found CVE-2015-7545 a few years ago: a malicious URL using the ext:: scheme could cause code execution. It was only easily exploitable because you can ask the client to fetch any URL you want via git submodules. (This vulnerability was fixed, and since then the entire ext URL scheme has been disabled by default.)

7
aroberge 1 hour ago 0 replies      
This may be of interest to people who use git on Windows and have Malwarebytes installed.

I tried to install a new version and found I could not, as another version was present. Git did not appear anywhere as a program to uninstall. I tried to delete it (from an admin account) and it failed with an access-denied error and no other information.

The solution was to use LockHunter (https://lockhunter.com/), which informed me that Malwarebytes was the program preventing me from deleting it. Using LockHunter, I removed the lock and successfully removed the old version of git.

8
rubayeet 4 hours ago 1 reply      
At the time of writing, there is no official binary release for Mac [0]; you have to build from source.

[0]: https://git-scm.com/download/mac

9
HeadlessChild 3 hours ago 0 replies      
Fix has landed for Debian Jessie/Stretch in the security repo

https://security-tracker.debian.org/tracker/CVE-2017-1000117

10
djstein 1 hour ago 0 replies      
For macOS users: simply use `brew install git`, then restart the terminal instance.
11
VrEdxxx 1 hour ago 0 replies      
I wonder how I can do it on OSX without brew or port?
12
captn3m0 4 hours ago 0 replies      
Arch Linux has the new package in testing: https://www.archlinux.org/packages/testing/i686/git/
13
cyphar 3 hours ago 0 replies      
This was already submitted earlier today (https://news.ycombinator.com/item?id=14984044).

It's actually quite easy to reproduce (the RCE aspect comes from the ProxyCommand and LocalCommand SSH options that you can set from the SSH command-line).

14
ing33k 3 hours ago 2 replies      
15
numbsafari 2 hours ago 1 reply      
I wonder how many "git client libraries" in various languages are also impacted by this.
16
diegoperini 3 hours ago 2 replies      
How can I do it on OSX without brew or port?
6
Sundar Pichai Should Resign as Google's C.E.O. nytimes.com
145 points by smaddali  45 minutes ago   103 comments top 30
1
Shank 17 minutes ago 11 replies      
Honestly, I think he made the right move just from a PR perspective. Firing the author makes sense: the CEO and HR are both acting to protect the company. The author of the manifesto caused Google a PR disaster and created a huge state of internal conflict, valid or not. The shareholders are probably really happy that their CEO removed a person who managed to get Google so much bad press in so little time.

This is the reality of business. One does not simply create a PR disaster for the company, with no positive return, and still maintain their job.

2
lisper 10 minutes ago 2 replies      
The primary thesis of Damore's memo [1] was not that women are biologically unsuited to STEM careers. The primary thesis was that, at Google, you cannot even advance the hypothesis that biology might be a factor without putting your career at risk. Ironically, by firing Damore, Pichai proved him correct.

[1] https://assets.documentcloud.org/documents/3914586/Googles-I...

3
EduardoBautista 16 minutes ago 7 replies      
Convincing women to focus on a career in STEM is telling them that their choices for careers in nursing, teaching, and any other career dominated by women are wrong choices. I don't believe that, they are essential to our society and are arguably more important than helping create better ads at Google and Facebook.
4
codegeek 0 minutes ago 0 replies      
Regardless of who is right or wrong here, I don't get what the point of that memo was. I mean, there are other ways of voicing your opinions on controversial topics like these. If I work for a large organization and I send a memo that discusses a sensitive and controversial topic, I am really creating trouble. I am not supporting his firing over this, but he could have just avoided doing it. Companies exist for only one reason: profit for their shareholders. That is the hard fact, whether we like it or not. If a company gets bad PR because of one employee, they will fire the employee.

Even if there was some merit in his argument, he could have chosen to do it differently. Bad choice, which blew up pretty fast.

5
Overtonwindow 0 minutes ago 0 replies      
How would this have played out in other countries? Is there someone from the U.K. or broader EU who could weigh in? I seem to recall that other countries have greater protections for speaking one's mind like this, which would have protected the author from termination.
6
plinkplonk 0 minutes ago 0 replies      
Sundar Pichai does seem to be off balance and a bit out of his depth.

That said, why should he resign? In the overall scheme of things, this isn't much of a crisis. If this were the standard to fire CEOs by, not many companies would have CEOs left.

7
hliyan 17 minutes ago 0 replies      
Isn't this author's reaction to the firing the same as Google's reaction to the memo -- over-outrage?
8
onebot 3 minutes ago 0 replies      
I also believe it was the correct move. But not because he wrote it, but because what he actually said in it. He made many generalizations that contribute to gender inequality and never backed any of those statements up with any scientific data. Thus, it could be reasoned that he clearly displayed his own biases and showed that he is a contributor to gender inequality.

Furthermore, the PR backlash against his firing is far more newsworthy if it was "wrongful", and he has gone on a media tour acting as the "victim". I am not sure news outlets would approach the firing objectively.

9
peoplewindow 4 minutes ago 0 replies      
I am minded to agree. And that's a shame, because Pichai has done good things for Chrome and Android when he was leading those.

The article doesn't really touch on Pichai's biggest mistakes here.

Mistake one: Damore's memo alleged discrimination, both against men and conservatives. Gender and political affiliation are both protected classes in California and they just fired him for whistleblowing. He has now filed a complaint with the NLRB. This seems like a legal headache that a better CEO could have avoided by not firing the guy. Put him on the roof or something, wait for things to blow over, find some other solution but the moment they fired him, they set themselves up for this.

Mistake two: Google shareholders asked at the last shareholder meeting if it was true that Google was a hostile work environment for conservatives (or words to that effect). They assured shareholders that this wasn't true. Clearly that answer has problems. Employees are leaking like crazy to Breitbart of all places that Google is extremely hostile to conservatives. I don't know what happens if leadership misleads shareholders in these sorts of questions, maybe nothing. But it can't be good.

Mistake three: Google managers have been publicly announcing within the firm that they are blacklisting employees for not being sufficiently pro-feminist or even for just questioning the policies or the mob reaction to it. There are screenshots of this along with interviews, again, on Breitbart. This seems like a fantastically unhealthy culture that Pichai has allowed to grow on his watch. I have heard from other Googlers that in one incident, a manager claimed he'd blacklist anyone who was subscribed to an internal mailing list for discussion of conservative viewpoints, and then when people objected, that he'd blacklist them too (so they couldn't transfer to his team). Again this seems like a cut/dried case of discrimination against people of certain political affiliations.

Mistake four: this debate is happening because Googlers are furiously attacking each other through leaks to the press. This is happening in both directions: the original leak was clearly intended to get Damore fired and publicly shamed, now others are leaking screenshots of internal communications and Pichai's emails. Pichai has quite clearly lost control of his own workforce to a staggering degree.

How much more of Google's guts spilling out onto the street will shareholders tolerate?

10
Torai 37 minutes ago 3 replies      
> There are many actors in the whole Google/diversity drama

Says the guy who has just claimed its CEO should resign.

11
ynniv 2 minutes ago 0 replies      
I must have read a different memo than everyone else. The one I read said that the massive gender imbalance in software could be due to genetics, and because of that we shouldn't try to fix it with policy. But the reality is that there is a gender imbalance in software because of the way women are treated, and the best way to correct that is through policies that treat them better.

It shouldn't be a stretch to see that arguing against treating people respectfully is, by itself, offensive.

12
thrill 0 minutes ago 0 replies      
Actually the Board should make this decision for him, but looking at who the members are, I don't expect any displays of leadership in this matter.
13
moneytalks 0 minutes ago 0 replies      
Let's take bets on how long this survives on the front page.

I'm going with 15-20 minutes from now.

Who wants the over/under?

14
visarga 20 minutes ago 0 replies      
I read that as "Sundar Pichai Resigns as Google's C.E.O." and was shocked for a second.

I think he made a moral mistake by firing the memo guy, but he has to worry about potential legal ramifications, should an employee sue for "condoning discrimination" or something. Too bad for diversity of opinion, can't have that under threat of lawsuit.

15
mankash666 16 minutes ago 0 replies      
This is a surprisingly accurate, well-written, neutral article, save the clickbaity headline.

Everyone here should treat suggestions of resigning with a grain of salt; the author just wants your attention.

16
samfisher83 7 minutes ago 0 replies      
Whether or not he resigns, would it really make that much of a difference in the grand scheme of things? He got 200 mil; he can do whatever he wants, and he doesn't have to worry about what his next job is.
17
tootie 19 minutes ago 0 replies      
I think that's even more nuts than the media response to the actual incident. Pichai is a product guy who handled a personnel issue poorly. I think this is still a minor issue, and one where he has hopefully learned some lessons and we can move on.
18
wudangmonk 4 minutes ago 0 replies      
Going by this piece, from the NYTimes no less, it seems like the US university social-justice warriors are starting to spill out into the mainstream population.
19
hugh4life 8 minutes ago 0 replies      
Completely nuts... and I generally agree with the guy who was fired. First, this issue has little to do with the end product of the corporation. Second, CEOs have less control of the socio-political environment they find themselves in than say the "Newspaper of Record".
20
sounds 13 minutes ago 2 replies      
Please be aware this is an opinion piece by David Brooks. This does not represent a majority opinion of the New York Times shareholders.
21
dvfjsdhgfv 13 minutes ago 2 replies      
Wow, I'd never have imagined reading this piece in the NY Times.
22
j45 8 minutes ago 0 replies      
I can't seem to locate articles by Mr. Brooks calling for the resignation of other CEOs who have recently had worse issues.

Happy to read anything from the NYTimes or this opinion writer on other tech CEOs.

https://www.google.com/search?q=NYTimes+Travis+Kalanick+Davi...

23
nikolay 8 minutes ago 0 replies      
I never found him fit for this role anyway.
24
aaron-lebo 15 minutes ago 0 replies      
lol what a clickbait obnoxious headline.

What was he supposed to do? He's got a workforce of tens of thousands, lots of whom were offended, rightly or not. To keep the guy around would've been just as bad. You publish something like that, you accept the consequences. You make yourself obnoxious to your bosses, who expect you to be a quiet cog in the machine, and that's what happens. Duh.

It's kind of funny that Damore dropped out of Harvard to work at Google and he's now been fired from there. Kind of throwing away his advantages.

25
alexashka 12 minutes ago 2 replies      
If we had CEOs resigning over every hissy fit an employee throws - we'd have no CEOs left.
26
petraeus 13 minutes ago 1 reply      
But the question is, why would he do that? He made the best possible choice out of many bad choices forced by the anti-diversity memo guy himself.
27
jalayir 18 minutes ago 1 reply      
The big assumption in David Brooks' piece is that the so-called "manifesto" was Damore's one and only "transgression". Were there other, previous internal complaints about him, and this was the straw that broke the camel's back?
28
systems 11 minutes ago 1 reply      
I don't understand why this Google employee sent a memo. Why didn't he just write a blog about it, like normal people do?

I honestly think this was his biggest mistake (the Google employee, not the CEO): he didn't understand boundaries.

It is sad that someone gets fired for his opinion, even if this opinion is offensive... I do believe in one's right to be offensive.

But I think the medium he used to express his opinion, and the boundaries he chose to do so within, were erratic.

29
xrd 10 minutes ago 1 reply      
This is the comment in the memo that changed the consequences: "Stop restricting programs and classes to certain genders or races." He's advocating restricting programs that reduce the massive gap in distributions. If he had left his arguments alone and asked people to draw their own conclusions, he might still have a job. Brooks is wrong here.
30
zellyn 11 minutes ago 2 replies      
It seems disingenuous not to mention the many debunkings of the memo's Science by Scientists.

For example: https://www.quora.com/What-do-scientists-think-about-the-bio...

7
Negative Result: Reading Kernel Memory from User Mode cyber.wtf
84 points by gbrown_  5 hours ago   4 comments top 3
1
titzer 16 minutes ago 0 replies      
I think the answer to this is that the processor doesn't speculate past dependent loads, and either the MMU does its access checks before ever issuing a load to the cache hierarchy, or the access check is required to complete before the result of a load is available for further speculative instructions. It's almost certainly the latter case, because even though the L1 is virtually indexed (allowing a result to be found and returned), to correctly handle aliasing, the physical tag check must also pass, and address translation has to happen for that.

Which means that basically any instruction dependent on the load has to wait until after the access check passes. No value available means no further speculation (e.g. using that value to issue further value-dependent loads).

2
dom0 4 hours ago 1 reply      
An (unfairly short) tl;dr could be that not only the memory subsystem itself can be used as a side-channel or leak, but also speculative and OoO execution itself. Speculative execution in this instance also does not consider (likely for performance reasons) certain dependencies (such as between a load setting an exception and a subsequent instruction depending on the load). While the result is still correct after settling, this introduces timing deltas.

This basically adds another set of tools to the architectural-level attack toolbox. From reading this I expect we'll see some interesting developments in the future.

3
FrozenVoid 2 hours ago 0 replies      
For reading memory, wouldn't the idea be to test the N bits of a kernel int, each in a loop: for each bit in IntX, try speculative execution on the bit's data, get the bit, and compute something that switches on 1/0 but with different timings:

  if (bit == 1) { expensive execution } else { do nothing }

Record all timings and reconstruct the IntX bits one by one. I assume something of that sort will allow reading memory without access.
8
The world in which IPv6 was a good design apenwarr.ca
472 points by dbenamy  12 hours ago   143 comments top 26
1
hueving 8 hours ago 2 replies      
>They have to be special, because an IP node has to be able to transmit them before it has an IP address, which is of course impossible, so it just fills the IP headers with essentially nonsense

Not nonsense! The global IP broadcast is specified as 255.255.255.255 and is used by other protocols. The source IP address for the initial discovery is indeed 0.0.0.0, which is not intuitive, but the rest of the DHCP exchange is handled with real IP addresses like normal IP traffic. DHCP is very much an IP protocol (see DHCP relay for how it transits IP networks).

>Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.

Ugh, come on! RARP doesn't provide you with a route to get out of the network or other extremely useful things like a DNS server.

>and DHCP, which is an IP packet but is really an ethernet protocol, and so on.

No, it's not an ethernet protocol. It's a layer-3 address assignment protocol that runs inside of IP, which is normally encapsulated in ethernet frames. You can have a remote DHCP server running any arbitrary L2 non-ethernet protocol and if it receives a relayed DHCP request it will reply with IP unicast perfectly fine with no ethernet involved.

2
hueving 8 hours ago 1 reply      
>In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is just a pointless intermediate step.

This is completely wrong, it's not pointless.

First, this can be used to easily swap out routers in a network without reconfiguring any clients or even incurring downtime. Without the intermediary gateway IP representation, this would mean you would either have to spoof the MAC on the second router or reconfigure all of the clients to point to the new gateway.

Second, ethernet addresses are a layer-2 construct and IP routes are a layer 3 construct. Your default gateway is a layer-3 route to 0.0.0.0/0. There are protocols for exchanging layer-3 routes like BGP/RIP/etc that should not have to know anything about the layer-2 addressing scheme to provide the next-hop address.

Third, routers still need to have an IP address on the subnet anyway to originate ICMP messages (e.g. TTL expired, MTU exceeded, etc).

Fourth, ARP is still necessary even for the router itself to know how to take incoming IP traffic from the outside and actually forward it to the appropriate device on the local network. Otherwise you would have to statically configure a mapping of local IP addresses to MAC addresses on the router.

So ARP is critical for separation of concerns between L2 and L3. We don't live in an ethernet-only world.

>excessive ARP starts becoming one of your biggest nightmares. It's especially bad on wifi.

Broadcast can become a nightmare. Excessive ARP is a drop in the bucket compared to other discovery crap that computers spew onto networks.

The pattern of most computers now is to communicate with the external world (from the LAN perspective) and not much else. So on a network of 1000 computers (an already excessively large broadcast domain), your ARP traffic is going to be a couple of thousand ARP messages every few hours. If this is taking down your WiFi network, you have much bigger problems considering all of those are about a modern webpage load of traffic.

3
Animats 10 hours ago 3 replies      
What he's really arguing for is a circuit-switched network, so that connections can be persistent over moves. He just needs a unique connection ID.

One amusing possibility would be to do this at the HTTPS layer. With HTTPS Everywhere, most HTTP connections now have a unique connection ID at the crypto layer - the session key. If you could move an HTTP connection from one IP address to another on the fly, it could be kept alive over moves. HTTPS already protects against MITM attacks, and if the transfer is botched or intercepted, that will break the connection.

I'm not recommending this, but it meets many of his criteria.

The trouble with low-level connection IDs that don't force routing is forgery. You can fake a source IP address, but that won't get you the reply traffic, so this is useful only for denial of service attacks. If you have connection IDs, you need to secure them somehow against replication, playback, etc.

4
djrogers 1 hour ago 0 replies      
> In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is just a pointless intermediate step.

Bollocks. The abstraction allowed by using an IP address instead of a MAC address is essential, considering that IP addresses are dynamic (even when statically configured, devices can and do get replaced) and MAC addresses are set at the factory. Can you imagine updating the routing table of every device in your network because you had to replace a core router and the MAC address was different? It's the equivalent of publishing your website on an IP address instead of a DNS hostname...

* Yes, I know MAC addresses can be configured by software in many devices, but that's even more of a hack than using ARP to determine a MAC address.

5
hueving 8 hours ago 3 replies      
>And nowadays big data centers are basically just SDNed, and you might as well not be using IP in the data center at all, because nobody's routing the packets. It's all just one big virtual bus network.

The opposite trend is true in large data centers. L3 fabrics where everything is routed have become extremely popular because BGP (or custom SDN setups) can be used to migrate IPs and you get to utilize multiple paths (rather than the single path offered by STP convergence).

6
Hikikomori 4 hours ago 0 replies      
Interesting article, but it contains some weird statements.

>It is literally and has always been the software-defined network you use for interconnecting networks that have gotten too big. But the problem is, it was always too hard to hardware accelerate, and anyway, it didn't get hardware accelerated, and configuring DHCP really is a huge pain, so network operators just learned how to bridge bigger and bigger things.

IP forwarding (longest-prefix match) is more complicated than MAC forwarding, yes, but it has been done in hardware (ASICs, typically NPUs today) for a long time now.

Operators (I assume ISPs) do not build large bridged networks, as they need their networks to scale as they grow, or they will hit a breaking point where their network collapses. ISPs typically use centralised DHCP servers (as opposed to configuring their access routers) and configure their routers to use DHCP relay. DHCP server configuration is easily automated by just reading your IPAM data; it's a non-issue.

7
noahl 1 hour ago 4 replies      
This was a very informative article for me, but there was one thing I didn't understand. At the end he made the case that mobile routing needed essentially two layers: a fixed per-device (or per session) identifier, and then a separate routing-layer address that could change as a device moved. QUIC has session identifiers, and that's great and could solve the problem.

But earlier in that very article, he already pointed out that every device already has a globally unique identifier used in layer 2 routing ... the ethernet MAC address.

Would someone please explain to me why we can't use MAC addresses as globally unique device IDs?

(Is MAC spoofing the issue?)

8
okket 8 hours ago 1 reply      
> Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.

Actually, no. You can only set an IP address with RARP, not even a netmask (RARP comes from pre-CIDR age) or other important stuff like default gateway, DNS server, etc like you can with DHCP.

9
tyingq 11 hours ago 4 replies      
>One person at work put it best: "layers are only ever added, never removed."

Find this in the software world as well. Something about the java culture seems especially fascinated with multiple layers of abstraction.

Edit: Ok, some factions of the culture. "Convenient proxy factory bean superclass for proxy factory beans that create only singletons"

10
hueving 8 hours ago 0 replies      
>Network operators basically choose bridging vs routing based on how fast they want it to go and how much they hate configuring DHCP servers, which they really hate very much, which means they use bridging as much as possible and routing when they have to.

Very rarely does a network operator use bridging to avoid configuring DHCP. All modern protocols are built on IP, so you still need an addressing scheme, and most people want the Internet, so 169.254 auto-addressing is out. So even in big bridged networks, you still have a DHCP server. In fact, you configure less DHCP in a big bridged network than for a ton of tiny networks.

The advantage of big bridged networks is that you have to set up very little routing (just the router to get in and out). If you routed between every port on the network, there would be an excessive amount of configuration involved to set up prefixes on every single interface.

11
anilgulecha 7 hours ago 6 replies      
One big UX mistake of IPv6: it was not made backward compatible with IPv4, e.g. (v6) 0.0.192.168.1.10 == (v4) 192.168.1.10.

This simple design would have meant the networking stack could be incrementally updated to also support v6. Now it turns out v4 and v6 are completely different, and no one has a big enough reason to make the change until everyone else makes the change. A hard chicken-and-egg problem.
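
For what it's worth, an embedding of roughly this shape does exist at the address level (the IPv4-mapped range ::ffff:0:0/96), though it never provided on-the-wire compatibility. A quick illustration in Python:

  import ipaddress

  v6 = ipaddress.IPv6Address("::ffff:192.168.1.10")
  print(v6)              # ::ffff:c0a8:10a
  print(v6.ipv4_mapped)  # 192.168.1.10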

12
akshayn 10 hours ago 1 reply      
"If, instead, we had identified sessions using only layer 4 data, then mobile IP would have worked perfectly."

Mobile IP can still work with the current infrastructure -- https://en.wikipedia.org/wiki/Mobile_IP

This proposal was basically a service which would host a static IP for you (similar to the LTE structure but with IP underneath instead of L2), and forward to whatever your "real" IP was using IP-in-IP encapsulation.

As the author states, layers are only ever added :)

13
ktRolster 11 hours ago 2 replies      
We'll switch to IPv6, and every service will still go through port 80.
14
fundabulousrIII 2 hours ago 0 replies      
This article was some of the most egregious nonsense I've read in a while.
15
mjevans 11 hours ago 4 replies      
Ok, so QUIC or some other common layer 4/4+5 'modern TCP over UDP for network compatibility' solution.

Let's just throw away the concept of 'addresses' for authentication and actually use a cryptographic authentication identifier of some kind, combined with some mux iteration ID.

16
femto 9 hours ago 0 replies      
The "Internet Mobile Host Protocol" (IMHP) was written as a draft RFC in 1994. As far as I know it was never adopted, but is it still relevant, even as an inspiration for IPv6?

[1] https://www.cs.rice.edu/~dbj/pubs/draft-johnson-imhp-00.txt

Edit: Its official entry at the IETF: https://datatracker.ietf.org/doc/draft-johnson-imhp/

17
Aloha 11 hours ago 2 replies      
Part of the difficulty here is that you're not just upgrading the whole stack; you're instead layering on whatever stack is already there. That's a needed part of deploying any new technology without replacing everything from the basement up. I'm not sure what this guy would do instead, however; as someone with a decent networking background, I got completely lost in the end.
18
mirimir 5 hours ago 0 replies      
I found the piece informative and entertaining. But I'm not technical enough to comment much. I would have liked to see what he thought of MPTCP as a replacement for TCP.
19
davidreiss 1 hour ago 1 reply      
Is anyone else shocked at the low level of adoption of IPv6? I remember how in the late 90s people were saying we were going to run out of addresses and everyone need to migrate to IPv6 ASAP. Now, it seems that IPv4 is going to be around for a long while.
20
undoware 10 hours ago 0 replies      
Easily the best technical document I've ever read. Holy heck. "Now I see with pulse serene, the heart of the machine"
21
peterburkimsher 9 hours ago 2 replies      
That is a beautifully-written article.

The IEEE hardware and IETF software guys have been busy adding complexity to the networks, with so many legacy protocols (when everyone just uses TCP/IP) and extra ports (when everything happens on port 80 - seriously, even email is now on cloud services).

I can't get LTE because of political problems. So I just gave up trying to be online, and started caching everything possible.

Meanwhile, storage is getting larger capacity, smaller size, and cheaper. I've got a 512GB SD card in my pocket all the time, with a backup of my laptop in case my bag gets stolen.

My phone does everything offline if possible. Offline MP3 music. Offline maps. Wikipedia. StackOverflow. Hacker News. FML. UrbanDictionary. XKCD. The few YouTube videos I actually want to see again.

The only thing I need Internet for is communication. To send a message, I walk around looking for open WiFi and type my message to them on Facebook Messenger. If they need to reach me urgently, they can just use my phone number (which keeps changing every 6 months for the same political problems).

What if access points had large caches with mirrors of the content people want? Instead of asking Google's server in the US to send me a map tile, what if I could just get it from the local WiFi AP's web server? It would be much faster, and save so much trouble with networking.

Sure, there are some things that people need the network for (e.g. new content, copyrighted material). But so much else is free of licenses, and would be possible to mirror locally everywhere.

22
_pmf_ 4 hours ago 0 replies      
> To save on electronics, people wanted to have a "bus" network

It was also to save sanity and avoid having to rip apart every office building to install hundreds of cables.

23
marasal 9 hours ago 0 replies      
This was a great read.
24
killjoywashere 5 hours ago 0 replies      
Bookmark
25
tardo99 7 hours ago 2 replies      
What if the server needs to send you a packet while you're mobile but you haven't sent it a packet yet so it can update its cache? That packet will be lost in his scheme. Nice try.
26
beagle3 7 hours ago 2 replies      
I am very glad IPv6 didn't catch on. The world in which it was designed was not a world in which everyone (NSA, Google, Facebook) was trying to document and correlate every tiny thing you do, whether it is related to them or not.

If IPv6 eventually becomes widespread, I hope it comes with ISPs that will let you replace your prefix, and phones/hardware that will randomize your suffix - otherwise, the internet becomes completely pseudonymous.

9
DMCA, Easylist, Adblock, Copyright Access Control and Admiral getadmiral.com
27 points by bitshiffed  1 hour ago   3 comments top 3
1
gergles 1 hour ago 0 replies      
The thing I found most hilarious about this is that the author of this post previously worked at Grooveshark, a company that was only remotely successful because they flouted copyright the entire time they operated.

That he's now using the DMCA (incorrectly) to protect his business model is the height of ridiculousness in my book.

2
seretogis 1 hour ago 0 replies      
It looks like the EFF has offered assistance, and I really hope they follow through to prevent this from setting a precedent.
3
bitshiffed 1 hour ago 0 replies      
Follow up from yesterday's EasyList takedown https://news.ycombinator.com/item?id=14978228 .

EDIT: Summary of issue up to now http://telegra.ph/Ad-blocking-is-under-attack-08-11 .

10
Show HN: A Node.js open source library to send transactional notifications github.com
38 points by bdav24  2 hours ago   14 comments top 6
1
richthegeek 21 minutes ago 1 reply      
We just wrote something similar, using RabbitMQ (https://github.com/richthegeek/beatrix) as the main manager for things.

Seems like retries and delays are not part of this, but I guess they can be added easily enough in the individual queue system used. Any desire to build them into it? Retries should be easy enough if you're happy to modify the bodies, although delays are not so easy (we're using the 'delayed message exchange' plugin for Rabbit to do this currently).
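
For reference, a rough sketch of publishing a delayed message through that plugin (this assumes the rabbitmq-delayed-message-exchange plugin is enabled; the exchange name and payload are made up):

  import pika

  conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
  ch = conn.channel()

  # The plugin adds the custom "x-delayed-message" exchange type;
  # x-delayed-type sets how messages route once the delay expires.
  ch.exchange_declare(
      exchange="notifications.delayed",
      exchange_type="x-delayed-message",
      arguments={"x-delayed-type": "direct"},
  )

  # The per-message delay (in ms) goes in the x-delay header.
  ch.basic_publish(
      exchange="notifications.delayed",
      routing_key="sms",
      body=b'{"to": "+15555550123", "text": "hi"}',
      properties=pika.BasicProperties(headers={"x-delay": 5000}),
  )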

Maybe some built-in/example queue integrations (Rabbit, Redis, etc)?

Also, it's not super clear to me how you might host the SMS/push/email/whatever provider on a different process/system from the publisher.

But it looks lovely and clean! Major props!

2
marksomnian 31 minutes ago 1 reply      
The logo looks like a recoloured Telegram logo [0]. I'd recommend you change it before this gets big to avoid trademark shenanigans.

[0]: https://telegram.org

3
michaelmior 55 minutes ago 1 reply      
What is a "transactional" notification? This doesn't seem to be explained anywhere in the README.
4
squaro 38 minutes ago 0 replies      
fallback + round-robin: <3 Awesome idea.

fallback: could be very useful if your main provider is down.
round-robin: send more free emails using multiple providers.
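
A minimal sketch of the two strategies combined (hypothetical provider callables, not this library's actual API):

  import itertools

  class ProviderDown(Exception):
      pass

  def make_sender(providers):
      # Rotate through providers to spread volume (round-robin); on
      # failure, keep trying the next one in the ring (fallback).
      ring = itertools.cycle(providers)

      def send(message):
          for _ in range(len(providers)):
              provider = next(ring)
              try:
                  return provider(message)
              except ProviderDown:
                  continue  # fall back to the next provider
          raise ProviderDown("all providers failed")

      return send

  # send = make_sender([mailgun_send, sendgrid_send])  # hypothetical callables
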
5
garysieling 1 hour ago 1 reply      
Are there any well-tested Node.js libraries to do scheduling alongside something like this? E.g., if I wanted to send the notifications once a week, etc.
6
hitgeek 2 hours ago 1 reply      
From the docs this looks like a really practical library. The testing and dev tools look great.

What's the backstory for this library?

11
UX brutalism uxbrutalism.com
220 points by takinola  10 hours ago   120 comments top 28
1
SwellJoe 6 hours ago 15 replies      
This is funny, but I think also misses the point of Brutalism entirely.

Brutalist architecture is functional first (but often beautiful, too). The examples of Brutalist web design almost entirely miss out on being functional by being hard to read, abstract (not merely containing abstract elements, but abstract at its core), and lacking in the clean hard lines commonly seen in Brutalism.

There are some examples in the linked gallery (http://brutalistwebsites.com/) that I think can be fairly compared to Brutalist architecture, but nowhere near the majority of them. Most are just ugly and gimmicky. Brutalism may have been ugly and gimmicky at times, but it wasn't the core motivating force for any Brutalist work of note.

2
whack 2 hours ago 1 reply      
Ironically enough, WaPo has an entire article about the brutalist trend in web-design, and the very first example they give is Hacker News.

https://www.washingtonpost.com/news/the-intersect/wp/2016/05...

3
JackC 3 hours ago 3 replies      
I like brutalist websites because they remind me of those weird passion projects you find on the web from time to time ...

Netochka Nezvanova [1]

Ted's Caving Page, with the story of his discovery in a local cave [2]

TempleOS [3]

Jon Bois' Future of Football [4]

They have the vibe of one person with a keyboard and a strange dream, and that's what I love about the web.

I feel weird about a UX design shop critiquing this aesthetic, because they're such different worlds, and I don't want the professionalized web shutting down the personal web. But maybe that's the point: web brutalism is a cool, freeing thing in projects that really are personal, but risks becoming selfish or self-indulgent when applied to the wrong project.

[1] http://web.archive.org/web/20121023110850/http://www.salon.c...[2] http://www.angelfire.com/trek/caver/page1.html[3] http://www.templeos.org/[4] https://www.sbnation.com/a/17776-football

4
deepakkarki 6 hours ago 4 replies      
Heh, it took me a while to figure out that this class of design was called "brutalist" by the design folk. I had seen such designs before and wanted to make a similar themed webpage for my side project. I was frantically searching the web for "minimalist design", "black and white design", "newspaper/magazine like web design" etc.

Web link for those interested https://discoverdev.io

5
have_faith 6 hours ago 6 replies      
What comes after brutalism in web design? romanticism? new-sincerity? Button labels like "I would appreciate it if you clicked me because I generate ad revenue".
6
jccalhoun 1 hour ago 1 reply      
Funny. I just ran across the Brutalist Framework the other day: http://www.brutalistframework.com/
I still can't tell if it is a joke or not.
7
gandutraveler 6 hours ago 1 reply      
Reddit or HN are best examples of Brutalism.
8
bane 3 hours ago 0 replies      
It's interesting how the site really reminds me of the GUI design of late 80s early 90s professional software.

http://www.guidebookgallery.org/screenshots/win203

http://www.guidebookgallery.org/screenshots/geosapple

http://toastytech.com/guis/gem11.html

9
Numberwang 6 hours ago 1 reply      
I don't understand the need for this. Material Design is the pinnacle of human creation. Everything should be Material Design forever.
10
artur_makly 4 hours ago 0 replies      
what goes around comes around i guess. Myspace i feel was the epitome of this so-called "brutalism". worked wonders for them in terms of product differentiation at a time when the first semblance of coherent UX design standards were appearing on the web.

it's great for fringe underground magazines, gamers, music aficionados, and artists of all kinds.

personally i hope it trends as the general web has become mind numbingly boring / predictable as fuk.

11
geff82 6 hours ago 0 replies      
Brutalism is meant to expose the function in a "brute" way, not necessarily to hurt the eye (yet it does not care). Bad user design might be brutal, but not Brutalism.

I think "UX/WEB brutalism" should be based on the minimalism that has spread through the western style web, then taking away some of the beautifying/graphical elements, yet adding "brutally clear" navigation/interaction elements (and not hiding them in a hamburger menu, for example). It should be a pure focus on what is presented (from a content perspective) and how you can use it. Any form of beauty for beauty's sake (like nice background pictures, too much graphics, fancy effects...) should be avoided.

12
chris__butters 5 hours ago 0 replies      
You can see brutalism in a lot of late modernist graphic design, which to me is quite ironic seeing as it is usually applied to architecture to be function first rather than "let's just make this look brutal" - as with everything this comes down to the audience.

Brutalism in graphic design is (from my perspective) mainly targeted at the graphic design elite, although when used digitally it targets that same audience as well as those who can see past the prettiness and get what they're looking for without any issues.

13
pc86 1 hour ago 0 replies      
> The secret to great brutalist UX is contrast.

Said on a page with dark grey text on a light grey background.

14
mhz 2 hours ago 0 replies      
That website had 18 JS warnings on load :/ Isn't it supposed to be highly functional?
15
jansho 3 hours ago 0 replies      
I adore UX brutalism.

It's a weird paradox that I enjoy design, but at the same time feel that MOST (caps necessary here) of them are actually faff and pretentious. I keep coming back to brutalist style, as the one I feel the least uncomfortable with, and I'm so glad it's now officially recognised.

(Or maybe give it two years before I start complaining again.)

16
muzani 3 hours ago 0 replies      
After a career of developing MVPs, this really clicks with me on a deep level. This site isn't it though. I do think websites and apps need to be built really fast and easily, but they still have to look better.
17
tmaly 3 hours ago 0 replies      
I liked how the site gave me the feeling that it is like a lean startup type method of design.

I signed up for the newsletter to just see, and plenty of the previous content they make available seems intriguing.

18
nkkollaw 6 hours ago 0 replies      
This sums it up perfectly: "framework for designing brutalist experiences that your design peers will love".

This style is made for other designers, not users. I often hit the back button because besides Bloomberg (after they tweaked it to be less extreme) brutalist websites are absolutely unusable and a nightmare to use. Also, they just seem broken, and that reflects badly on the company.

Designers should design for users, not themselves.

19
retube 5 hours ago 0 replies      
Way better than material
20
Bromskloss 3 hours ago 0 replies      
If only brutalism would have confined itself to website design, I would be satisfied.
21
thefuzz 2 hours ago 0 replies      
Would you say https://www.technologyreview.com/ is brutalist design?

If so, I'd say it's one of my favourite brutalist websites!

22
werber 1 hour ago 0 replies      
"UX brutalism is a relatively new concept, and we don't expect it to last too long."
23
hatsunearu 2 hours ago 0 replies      
some (all?) of it isn't brutalism, it seems postmodernist.
24
stillhere 2 hours ago 0 replies      
Regarding brutalism: I think Prince Charles said it best, "You have to give this much to the Luftwaffe. When it knocked down our buildings, it didn't replace them with anything more offensive than rubble."
25
werber 3 hours ago 0 replies      
This reminds me of Butt magazine's aesthetic
26
frik 2 hours ago 0 replies      
UX and UI brutalism meet in the MetroUI/ModernUI of Win8/10, with ugly-as-hell color schemes and designs. It reminds me of the ugly brutalism/"modern architecture" that was so common in the 1960s (prefab buildings).

The Win3x/9x/XP/Vista/7, iOS6 & current macOS/iOS, Android 5+ UI and UX are so nice.

27
digitalengineer 5 hours ago 1 reply      
>The brutalist persona document aggregates all the assumptions you have about your users into a single place.

This can't be serious right? Assumptions? What about data & research?

28
ianyang 5 hours ago 2 replies      
I can't think of anything I hate more.
14
Almost All of FCCs New Advisory Panel Works for Telecoms thedailybeast.com
39 points by croon  1 hour ago   6 comments top 4
1
zero_one_one 2 minutes ago 0 replies      
I've been involved in standardisation within my industry for some time, and it's interesting to see how many people representing vendors are able to sway proceedings away from that which is technically neutral (and non-proprietary), to that which is financially beneficial to one or more implementers of the standardisation procedure's results.

I see the same thing in play here - the airwaves are being treated like tracts of land, and those who currently own the larger tracts don't want anyone using the smaller plots of no-man's land on their borders, in case there are accidental (or otherwise) trespasses onto 'owned territory', so to speak.

I'm not advocating one way or another, but it's interesting to see how an invisible physical space is being monetized so aggressively.

2
maxxxxx 17 minutes ago 0 replies      
That's the thing that scares me the most about the current administration. They seem to think that the only players in the economy are businesses, especially big ones. Workers don't count, the environment doesn't count, science that doesn't benefit business doesn't count. I think that's the area where they will do the most long term damage.
3
solotronics 37 minutes ago 2 replies      
This is a perfect example of how the silicon valley echo chamber can have implications in DC. We all want cheap, fast, and open internet. Instead of the right people advising the current administration, I feel like the recent polarization politics in the US is contributing to these advisors being picked. For example, if the elites in silicon valley presented themselves publicly as more moderate and impartial they would have more say in the current admin. It's a bad idea to alienate yourself from half of the country no matter which side you are on.
4
HillaryBriss 55 minutes ago 0 replies      
given that and the relatively hot economy, it's the perfect time to raise broadband rates. open your wallet...
15
The ghostly radio station that no one claims to run bbc.com
40 points by rubenv  4 hours ago   9 comments top 4
1
11thEarlOfMar 16 minutes ago 0 replies      
Any speculation that it's as mundane as a test tone for a specific radio device? Is your radio working, is it calibrated properly...
2
wichert 57 minutes ago 1 reply      
For the lazy who were hoping to find an audio clip in the article: you can find some here: http://www.sigidwiki.com/wiki/The_Buzzer_(ZhUOZ_MDZhB_UZB76)
3
ajmarsh 1 hour ago 0 replies      
Neat, I used to listen to machine-sent morse code number stations as a kid on my shortwave.

https://lifehacker.com/5961035/how-to-listen-to-real-spy-bro...

4
criddell 58 minutes ago 4 replies      
I read this article last weekend and after reading it, I put batteries in my shortwave radio to see what was out there. Turns out, there's pretty much nothing.

I couldn't hear anything on 4625 kHz. I let my receiver scan and when it was done I checked out what it detected and found absolutely nothing interesting. The most powerful signals were Christian broadcasts that had a kind of doomsday feeling about them.

It seems like there's a lot of underutilized spectrum.

16
Why our outdated brains are making us unhappy medium.com
92 points by wllchng  2 hours ago   57 comments top 20
1
theEXTORTCIST 18 minutes ago 0 replies      
In society we have an endless amount of social warnings, "Don't drink too much, you'll get ill. Don't do drugs, you'll become an addict. Don't drive without your seatbelt, you can die. Don't watch too much TV, it's not good for you."

But we still lack any sort of warning in the greater contemporary society about the risks of overuse of the hyper stimulus that comes along with social media. I am definitely beginning to see this take shape in our society (with people rejecting social media applications, articles like this, the way people speak to the overuse of such platforms)

2
thewarrior 2 hours ago 4 replies      
What I've noticed is that Instagrams recommendation algorithms are creepily accurate.

If you like browsing pictures of butts you'll get lots of butts and of the exact kind that you'd like. I'm not sure how they do it. I don't even post on Instagram. And yet I'm addicted now to the Explore tab. They've genuinely managed to build something addictive in a way that other networks are not.

Also Instagram is completely sanitized of politics and any other hot button stuff. So its just a space where you zone out.

Instagram is the crack cocaine of social media. A cocktail of addiction and narcissism refined into its most potent form. I predict that Instagram will continue to grow and might one day rival Facebook itself.

3
ejlangev 1 hour ago 0 replies      
Seems the author's intentions are good but it's such an odd way to frame it. He outlines that many companies are explicitly designing products in such a way as to exploit ("hack") people's brains at great detriment to those people. But his response is to frame it as that we should place blame on our own brains rather than on the people doing these things or the social structure that rewards them. Feels like he could think a little bigger here.
4
jathu 1 hour ago 1 reply      
I've been reading a lot about this "ancestral hijacking" lately and have tried to cut out on a lot of them. My phone now only has two possible notifications: phone calls and Things app. I have 0 social apps or games. I am also slowly cutting out sugars and extra carbs. I think you guys should read about supernormal stimulus [1].

There is this comic [2] about supernormal stimulus that shows how man, and man alone, has the ability to overcome it.

[1] https://en.wikipedia.org/wiki/Supernormal_stimulus

[2] http://www.stuartmcmillen.com/comic/supernormal-stimuli/

5
graphitezepp 2 hours ago 1 reply      
Nobody will even bother to listen to me talk about social media anymore, because I am always telling people it is unhealthy and they should get off it (and yet I'm here). It is the worst fast food imaginable for our human need to socialize, transient and shallow.
6
suresh70 2 hours ago 0 replies      
Instagram follows the same path as Facebook when it comes to curating content to the core of the user's likes and dislikes. When content that is tailored to the user's preferences starts to surface more on the feed, it tends to keep the user engaged passively even if not actively, generating ad revenue for the company. Once out of the platform and exposed to different, if not opposing, views, people find it tough to handle, and that can lead to depression.
7
zitterbewegung 25 minutes ago 0 replies      
Why are people treating engagement as a new thing?

AdTech companies use the same strategies that newspapers used to use. Yellow journalism has become fake news. Tabloids are now blogs fueled by rumor mills. Newspapers trying to promote celebrities are the same as having a twitter / snapchat / instagram account.

A rose by any other name is still a rose.

8
overcast 2 hours ago 2 replies      
This goes for just about all "social media", and just one of the factors for my removal from them. The other being toxic communities.
9
zzalpha 1 hour ago 1 reply      
Didn't add much to the discussion, unfortunately.

The post is just material copied from Hooked (which is a creepy book worth reading!), plus observations that have already been made (social media causes us to compare ourselves to everyone else's highlight reel), followed by a bunch of generic, currently popular fads/lifestyle advice (keto diet, exercise, meditate, stoicism).

A far simpler solution is available to all of us: stop using Instagram.

10
visarga 1 hour ago 0 replies      
Every week, news articles and people here on HN say that neural nets can be easily fooled by adversarial images. Yet human behavior has the same flaws. Apparently there are adversarial triggers for unintended/undesired human behavior.
11
Boothroid 1 hour ago 0 replies      
It's difficult to want to be too critical about this since I think the author's motives are only good, in that he seems to genuinely be trying to impart good advice; but so what, I'll criticise anyway. This reads like parody of a certain time, peppered as it is with trendy buzzwords and ideas; it just happens that the time it is unintentionally parodying is right now. There's a faint air of desperation about the whole thing: the banality of a life punctuated by 'cheat days' rather than one lived with dietary equilibrium; the tacit assumption that the world is right and we are primitive; the unspoken message that we are effectively living in a world that is hostile to us, but that the problem is not the world, and we should rather learn to adapt to it instead of bending it to serve our interests as human beings.
12
justzisguyuknow 2 hours ago 4 replies      
> Today, we live in a completely different world as our ancestors. Yet, our biology is still exactly the same.

Is that really true? I've heard this kind of argument so many times that it seems right, but I've rarely seen any scientific evidence to back it up. Surely our bodies have evolved in some subtle and diverse ways over many thousands of generations?

13
jonmc12 1 hour ago 0 replies      
For more on cultural evolution and the impact on evolution of our brain, check out Joseph Henrich's "The Secret of Our Success: How Culture Is Driving Human Evolution". There are many youtube videos of the author discussing the book as well. The ideas in the book give deeper context to cultural evolution discussed in the Wait But Why Neuralink article.
14
1337biz 2 hours ago 1 reply      
To be honest I found instagram much more real than other social media platforms. Especially the story modus seems to make people more open to show their real emotions even when they are unhappy.
15
deltafreq369 1 hour ago 0 replies      
I tried 80% fat diet and blood ketone monitoring for a few months. I felt more energetic and happy. My girlfriend tried the same thing, she felt lethargic. I think individual customization based on genetics and lifestyle is necessary if you want to do any effective biohacking.
16
anotheryou 2 hours ago 1 reply      
also important: your friends always have more friends/likes than you.

edit: https://en.wikipedia.org/wiki/Friendship_paradox

17
davidreiss 2 hours ago 0 replies      
It also applies to media and websites like medium.com that use clickbait to make money for themselves.

Anything in excess is terrible. It's why so much of the news industry works this way, and why people who watch too much of it get depressed.

People who religiously watch Fox News, and members of NPR or subscribers to the NYTimes, are just as depressed.

Ultimately, this is an article that says nothing. It's a clickbait form of water is wet. And yes, too much water is also bad for you.

18
graycat 45 minutes ago 0 replies      
To read the original post (OP), I had to look up the meaning of the acronym PPL -- "push, pull legs".

I never did see what "the 'gram" meant.

Uh, we have the Roman alphabet, the English language we can write using the Roman alphabet, and dictionaries where we can look up words in the English language written with the Roman alphabet. Commonly we can also use Google to look up such meanings.

Then acronyms such as PPL and abbreviations such as "the 'gram" are not in an English dictionary and, thus, are obscure and poor means of communications. Similarly for icons.

So, in the interest of clear communications, ease of use, and a good user experience (UX), the Web pages for my startup are in English using the Roman alphabet and have no acronyms, abbreviations, or icons, obscure or otherwise.

Doing the same, the OP would be easier to read.

19
whipoodle 1 hour ago 0 replies      
Maybe what's inside our brains isn't the problem. Maybe it's what's outside.
20
chairmanwow 2 hours ago 1 reply      
YOU WON'T BELIEVE WHAT SHADY TACTICS SV ENTREPRENEURS ARE USING TO GET AHEAD. NUMBER 8 WILL MAKE YOUR JAW DROP!
17
The Man Behind AMD's Zen Microarchitecture: Jim Keller wikipedia.org
47 points by geezerjay  2 hours ago   13 comments top 6
1
powercf 16 minutes ago 1 reply      
Zen is the work of a huge team of talented engineers. To single one out as "the man behind Zen" seems very wrong. I don't know what Jim Keller's contribution to Zen was (and without a blog or autobiography or similar from someone well placed inside the team, then neither do most commentators), but if he did work on the Zen architecture, it's hard to believe that he would have accomplished much without the help of a good team. Keller is the main AMD engineer singled out for praise on The Internet, while the hard work (and given that Zen is such a success, it's surely the result of a mountain of hard work) of everyone else is mostly ignored.
2
kirse 1 hour ago 2 replies      
Arguably his A64/x86-64 work has been more impactful (so far). Imagine a young Jim Keller from Penn State University, makes you wonder how many SV firms today would toss out that resume in favor of a less... generic... institution. Even worse, he works for DEC, which is a boring big company. This guy is like vomit in the mouth of startup culture.
3
tbrock 57 minutes ago 1 reply      
Too bad he went to work at Tesla. It's nice to see that people are interested in the self-driving car thing but, as in the case of Chris Lattner, isn't it more fun / more leverage to work on the stuff that enables all this?
4
geezerjay 2 hours ago 0 replies      
I've submitted this thread after stumbling on this HN thread on High End CPUs

https://news.ycombinator.com/item?id=14986105

Among the comments, a user pointed out Jim Keller's contributions to the CPU industry. Truly fascinating read.

5
Hikikomori 35 minutes ago 1 reply      
Afaik Michael Clark was the chief architect for Zen, Keller was above him though.
6
agumonkey 45 minutes ago 1 reply      
K7, K8 (x86_64), A4, Zen... Only that.
18
Confession of a so-called AI expert huyenchip.com
40 points by parinvachhani  7 hours ago   2 comments top 2
1
Dzugaru 15 minutes ago 0 replies      
> Even though I'm one of the beneficiary of this AI craze, I can't help but thinking this will burst.

I don't think it will. Level off - maybe.

I've started my work in Computer Vision with classical algorithms (SIFT features, geometry, correlation filters and things alike people were researching for decades). These really worked like garbage, it was a nightmare.

Then we jumped on the DL bandwagon - and CV just clicked for me. Now I see it working, not perfectly, not at human level yet, but it works, it's better than everything else and it certainly brings value - not just in CV! Maybe there will be some expectations delayed or even ruined (AGI, fully self-driving cars, dunno), but the tech isn't going anywhere.

And it requires at least some experience and a specific mindset, slightly unusual for a generic programmer. So I don't see a problem with experts, courses, degrees and the like.

2
frgtpsswrdlame 23 minutes ago 0 replies      
The article is an interesting mix of imposter syndrome and bubble speculation. I guess a good question is: if you know you're in a bubble and you feel like an imposter, are you right?
19
Filecoin Suspends ICO After Raising $186M in One Hour financemagnates.com
156 points by ianopolous  5 hours ago   206 comments top 28
1
joeblau 4 hours ago 6 replies      
I listened to this YC podcast[1] twice with Juan Benet. Listening to Juan talk about a lot of historical information and the correlations he drew was fascinating. When he tried to explain the need for Filecoin to Dalton, I just didn't hear it. Dalton kept asking him over and over what the point was and he kept giving non-answers (at least in my opinion which doesn't mean much to be honest). I understand the goal of the protocol and the incentive structure behind the protocol, but I don't see a clear vision with the product.

[1] - https://blog.ycombinator.com/ipfs-coinlist-and-the-filecoin-...

2
sktrdie 3 hours ago 9 replies      
What's important to note about ICOs is not necessarily whether the idea is actually practical and that it will work.

What matters right now is whether the idea can stir our intellect and get the crowds excited.

This is because behind the idea there's a new token which value is completely driven by the excitement of the crowds (it's pure speculation).

If I invest N amount of money in a token which gets people excited, I will surely get back N + <crowd excitement>.

So it's sort of like a snake that bites its own tail, and we see these crazy prices specifically for this reason. People are not investing hoping for the project to be developed - they're investing because they know others will as well, which will drive the prices up.

ICOs are a new breed in economy where people can speculate, anonymously, using tokens that are distributed to the crowds in various ways (PoW, initial distribution, etc.). What does this mean for our future economy? I'm not sure, but it surely looks exciting (even though most projects behind ICOs are complete crap).

If our future economies will be driven by these tokens, we might think back at this period of time, looking at perhaps the people who will be the %1 of the future.

3
Torai 5 hours ago 3 replies      
So many people with so much money trying to find new ways to make money out of money. That says a lot about the current state of the economy.
4
fpgaminer 31 minutes ago 0 replies      
$186M ... that's crazy.

And here I've been sitting wondering if anyone would bother investing in my idea for a Bitcoin mining co-op. I want to create a U.S. based mining farm that allows anyone to buy-in and have access to a portion of the hashrate. The idea being that, today, the average Bitcoin user has no way to reasonably voice their opinion through the hashrate because they can't compete with the efficiency of the big miners. We fix that by being a big miner; build our own ASIC, hardware, everything, and sell that back to average users at market rate. Level the playing field.

But the level of investment needed is in the tens of millions (custom ASIC, custom hardware, custom datacenters, etc). It's not an incredible amount, and I'm not worried about the technicals (I previously co-founded a company making FPGA miners and my previous company was a hardware startup), but it's enough to give me pause and sit on the idea for now. But if Filecoin got $186M through an ICO ...

5
wscott 4 hours ago 2 replies      
I decided not to invest after reading this:https://medium.com/token-economy/the-analysis-filecoin-doesn...
6
runeks 2 hours ago 1 reply      
Can anyone tell me how they arrived at the $186m figure? Are they just multiplying the last traded price by total supply, and forgetting about market depth/slippage?

I mean, if I sell a plastic cup to someone for 10 cents, and then go produce 100 million of these plastic cups, do I have $10m in plastic cups? Or could it be that I'm unable to find 100 million buyers each willing to pay 10 cents for a cup, even though that's the last traded price?

As far as I'm concerned, the amount of USD they've raised equals what they could earn by executing a limit sell order with a price of zero into the Filecoin/USD market (thus eating up all bids).
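
A toy illustration of this point, with entirely made-up numbers: the headline figure multiplies supply by the last traded price, while the proceeds of actually selling into the order book are bounded by bid depth.

    type Bid = { price: number; quantity: number };

    // Headline number: total supply times last traded price.
    function naiveMarketCap(totalSupply: number, lastPrice: number): number {
      return totalSupply * lastPrice;
    }

    // What selling the entire supply into the book would actually fetch.
    function proceedsFromSellingAll(totalSupply: number, bids: Bid[]): number {
      let remaining = totalSupply;
      let proceeds = 0;
      for (const bid of bids) { // bids sorted best (highest) first
        const filled = Math.min(remaining, bid.quantity);
        proceeds += filled * bid.price;
        remaining -= filled;
        if (remaining === 0) break;
      }
      return proceeds; // leftover units fetch nothing once bids run out
    }

    const supply = 100_000_000;
    const book: Bid[] = [
      { price: 0.10, quantity: 50_000 },
      { price: 0.08, quantity: 100_000 },
      { price: 0.05, quantity: 200_000 },
    ];
    console.log(naiveMarketCap(supply, 0.10));         // 10,000,000
    console.log(proceedsFromSellingAll(supply, book)); // 23,000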

7
prepend 4 hours ago 5 replies      
I don't understand why they would do the coin offering now. The network is inoperable and won't be for quite a while.

The main advantage to me seems to be a more of a community-driven storage approach. S3 is really cheap and really reliable and really old. A distributed file store would be cool, but as a community/free network like P2P. The $200M seems really high for a community project.

8
manigandham 4 hours ago 1 reply      
So instead of on-premise SANs or cloud provider block storage, store your files all over random devices protected by a complicated crypto that requires miners and tokens to stay active?

Fantastic. I'm sure the enterprise sales are going to go really well.

This fallacy that everything needs to be distributed is really getting out of hand.

9
moe 4 hours ago 3 replies      
This is probably a money laundering/tax evasion operation.

I don't believe these "big name Silicon Valley investors" just flush such amounts of money down a toilet without a hidden agenda.

10
fav_collector 16 minutes ago 0 replies      
So are the developers of Filecoin just going to retire in Hawaii now after becoming rich?
11
ckastner 4 hours ago 1 reply      
After glancing over the whitepaper, I get the motivation, but I just don't see how this scheme is viable, financially.

Participants with little storage to give, have little to earn, but must remain connected to the network. Why bother with that?

Participants with large amounts (petabytes) of storage to give are competing with Amazon S3, Google Cloud Storage, etc. At this scale, it's hard to imagine competing financially with these and other giants. (Furthermore, if you have petabytes of storage idling around, you might have a business problem anyway.)

12
zero_one_one 1 hour ago 0 replies      
This is what's good (and bad) about the cryptocurrency / blockchain (etc.) state at the moment.

It's all pure speculation, and makes a fantastic study from an anthropological and economic standpoint into how the perception of value (and the belief in that value in something) can increase the value of that 'thing'.

I think there's a lot of hyper-salesmanship at play at the moment - my real worry is that the hype catches up to the technology which is (at the base level at least), proven. When this happens, I believe a lot of these ICOs are going to crash and burn due to a lack of understanding of the base technologies they're implementing in order to generate revenue.

Abstraction is not always a good thing.

13
HashThis 2 hours ago 1 reply      
This is pointing out something profound. Investing is powerful when you can have liquidity right away.

AN ASIDE: The valuations in the ICOs are too high. But the following profound point still holds true...

Imagine if an angel investor could invest in a company early, like Dropbox. Then when it grows some, he could sell 10% of his shares on a highly public and liquid market (like an Ethereum coin exchange). He then invests that in AirBnB when it was tiny. He may invest in a startup that goes down after the Series A. Oh well, he may sell for 30% of the valuation he invested at. The power lies with those who make big gains in the big winners, and who can then move money out very soon and into new places.

14
pavlov 5 hours ago 3 replies      
I don't get it. Competing with Amazon S3 seems like a very unpromising business idea, blockchain or not.
15
richardknop 1 hour ago 2 replies      
Is this a joke? Regardless of whether the concept has any merit, at this stage of an idea/startup we would normally be talking about a seed-level funding round, i.e. small sums of money to get a small team and build a product.

Almost 200 million is obscene as seed funding. If this isn't proof that this is tulip mania then I don't know what is.

16
Lon7 1 hour ago 3 replies      
I have a (not very well supported) prediction that Filecoin will be the first ICO company to incur the wrath of the Securities and Exchange Commission.

Last month the SEC released a report [0] that said in many cases these ICO tokens can be classified as securities, and therefore, going forward, could be subject to all the same laws and regulations. These are not laws you want to be breaking. This Filecoin ICO has been highly publicized, and importantly, came after the SEC's report.

It doesn't matter if it all only exists on a blockchain. When the SEC decides to make an example of one of these companies it will be a bloodbath.

Matt Levine has been covering this topic recently. [1]

[0] https://www.sec.gov/news/press-release/2017-131

[1] https://www.bloomberg.com/view/articles/2017-07-26/tokens-va...

17
bhouston 3 hours ago 1 reply      
For most storage, one needs it to be local to compute or easy to put behind a CDN - i.e. in a datacenter or on a very fast network that is connected to the datacenters.

If not, you are cut out from the main commercial storage market.

What is left is:

- People who want to store stuff that cannot be legally stored on existing networks, such as illegal drug or porn stuff.

- People who need secure stuff stored in a distributed fashion.

- People who want to have a way of backing up their PC? But will this be faster and cheaper than using Backblaze-like solutions that buy storage in bulk and optimize for costs?

Is there enough redundancy to ensure that no data is ever lost?

Could someone actually attack the network to cause data loss?

What are the costs?

- Inbound, new storage per GB.
- Outbound, access per GB.
- Deleting storage.
- Static storage on a per GB/hour basis.

What is the performance of this?

Can Backblaze make more money being a provider to Filecoin, or should Backblaze use Filecoin for its storage? Just like Bitcoin mining, Filecoin should, if successful, be dominated by people like Backblaze.

18
joosters 4 hours ago 0 replies      
I assume that the VC investors actually bought a stake in the company? It's misleading to lump them together with the ICO investors, who are buying nothing more than a vague, non-enforceable promise (see any ICO terms & conditions, you have absolutely no rights or ownership as a coin holder)
19
crypt1d 3 hours ago 0 replies      
I'm kind of amazed how Filecoin managed to create such a hype, even though it's nothing more than a whitepaper right now.

I mean, there are already 2 or 3 coins that are doing something very similar and already have at least a working prototype (e.g. Siacoin, Storj), or have an interesting algo like Proof of Storage from Burst. Is there anything that makes Filecoin superior compared to the others, or is it just that it's VC-backed?

20
noway421 4 hours ago 2 replies      
Their Filecoin Token Sale Economics says that 10% of all tokens is allocated to investors. Not sure how much they sold, but even if it's almost all out, does it make it a $1.8bn capitalisation?
21
brianwawok 1 hour ago 3 replies      
So if I am personally bearish on the entire "alt-money" movement, how do I turn this into a potentially profitable bet? Any tips on where I go to "short" coins?
22
memossy 1 hour ago 0 replies      
A lot of money for what's basically Pied Piper without middle-out.

Seriously though these coins won't be worth much given a competitor could either a) dominate mining or b) fork it.

23
bhouston 3 hours ago 0 replies      
The end result of Filecoin will be extreme centralization, just like Bitcoin mining, unless there are extreme legal issues that make everyone scared of being part of the network (e.g. child porn.)

Thus there will be a few ultra large cost optimized providers probably similar to BackBlaze that handle 90% of all storage on the network.

Does that extreme centralization fulfill the vision of Filecoin?

24
progx 4 hours ago 0 replies      
What is the difference compared to storj?
25
tyingq 4 hours ago 2 replies      
Not completely thought out, but I believe parasitic storage might be a better approach to scaling out a censorship resistant distributed object store. Some kind of cryptocurrency could help fuel it as well.

A parasitic storage approach would leverage existing "free" tiers of services like google drive, onedrive, dropbox, yandex.disk, etc. One of the hard parts of scaling that would be creating the fake accounts to hold the data, so a cryptocurrency+contract could fuel the reward needed for people to create these by hand. You use the reward to buy storage space, so there's some incentive for the people creating them.

I'm less clear on how you would combat the various providers finding the fake accounts and deleting them along with the data. Would clearly need duplicate storage of objects to work around that, and some central reference table to map keys to the various copies, and provide the high level "api" for creating buckets, etc.
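
A rough sketch of the kind of central reference table speculated about above, mapping each logical key to several replicas across providers so losing any one copy is survivable; the provider names and URLs are placeholders, not a real API:

    type Replica = { provider: string; location: string };

    class ReferenceTable {
      private objects = new Map<string, Replica[]>();

      // Store an object's key with its replica locations; duplicates guard
      // against any one provider deleting its copy.
      put(key: string, replicas: Replica[]): void {
        if (replicas.length < 2) throw new Error('need at least 2 copies for redundancy');
        this.objects.set(key, replicas);
      }

      // A real system would probe each location and re-replicate when
      // copies are found deleted; here we just return what we know.
      locate(key: string): Replica[] {
        return this.objects.get(key) ?? [];
      }
    }

    const table = new ReferenceTable();
    table.put('bucket1/photo.jpg', [
      { provider: 'drive', location: 'https://example.com/drive/abc' },
      { provider: 'dropbox', location: 'https://example.com/dropbox/xyz' },
    ]);
    console.log(table.locate('bucket1/photo.jpg').length); // 2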

26
ico_skeptic 4 hours ago 0 replies      
Half the money these ICOs are raising is going to end up being paid to the SEC in fines. Maybe not Filecoin because they have at least made an attempt to pretend to comply with the rules - although Christ knows what will happen when Filecoins start trading on exchanges! But there are a LOT of companies out there who are going to be totally screwed.
27
andy_ppp 4 hours ago 0 replies      
Great, another Ponzi scheme I can get in at the start with... how do I invest?
28
gaetanrickter 4 hours ago 1 reply      
These could go big too... "3 ICOs Set To Out-Perform Long Term Benchmarks In The Cryptocurrency Markets" https://medium.com/@alexanderwestin/3-icos-set-to-out-perfor...
20
100-year old fruit cake found in Antarctica's oldest building nzaht.org
66 points by kensai  8 hours ago   55 comments top 17
1
michelb 2 hours ago 1 reply      
Just as well-conserved as the Huntley & Palmers website: http://www.huntleyandpalmers.org.uk/ixbin/hixclient.exe?a=fi...
2
y-haminator 0 minutes ago 0 replies      
They should send a piece of it to Steve MRE
3
dzaragozar 4 minutes ago 1 reply      
Hahaha I read the title with fruitcake as the slang for lunatic: really old lunatic found in Antarctica's oldest building :)
4
leephillips 3 hours ago 1 reply      
Amusing that the cake was in excellent condition, while the metal box it was stored in was partly decomposed. Once again, reality has one-upped all the jokes.
5
warvair 2 hours ago 3 replies      
Amazing that after all this time it remains inedible.
6
gideonparanoid 3 hours ago 2 replies      
My girlfriend has done some work in Antarctica before, she said that it wasn't uncommon to find food ten or twenty years past its use-by-date. This is kind of the next level though.
7
throwanem 2 hours ago 2 replies      
Still every bit as edible as the day it was forged, I'm sure. Those things are terrifying. But apparently they make very durable emergency rations! On the one hand, they can last a century, and on the other, you can be sure no one will molest them absent desperate necessity.
8
DannyB2 1 hour ago 2 replies      
It's just as good as any fruitcake passed around at Christmas. It never gets opened. It stays perfectly preserved. When you receive the fruitcake at Christmas, you leave it wrapped, and gift it to someone else next Christmas. Only a finite number of fruitcakes need be manufactured. Thus resources are conserved. Only a finite number of fruitcakes ever had to be purchased. Again, resources and economics. It's economical because when you receive the fruitcake as a gift, you have a free gift to give someone next year without spending money. It can be passed around almost indefinitely. Best if used by August 9, 2047.
9
arethuza 3 hours ago 1 reply      
It does rather remind me of the cakes that seemed to be popular in the UK in my youth (1970s) that only seemed to be given as presents - nobody ever seemed to eat them.
10
aunty_helen 3 hours ago 0 replies      
The weirdest part about all this is that they have to keep it frozen the whole time and then at the end of it all fly it back down to Antarctica, trek it out to the hut and then leave it there.

They did the same with a crate of whiskey they found a couple of years ago: https://www.nzaht.org/pages/shackletons-whisky

11
mixmastamyk 1 hour ago 0 replies      
I once visited a whaling station down there and it was quite cool to see all the "steampunk" tech laying around, though rusting.

Also there was a British (I think) outpost that had canned food on the shelves from the late 1950s, was a bit surreal.

12
retox 3 hours ago 1 reply      
13
grondilu 3 hours ago 0 replies      
It looks better than some of the stuff in my fridge.
14
wsgeek 1 hour ago 0 replies      
Apparently nobody liked Fruitcake 100 years ago either
15
obilgic 2 hours ago 1 reply      
Why does it come in a metal box?
16
ChrisRR 3 hours ago 1 reply      
I'll wait for the video of Ashens eating it
17
defterGoose 4 hours ago 0 replies      
In before the cake rots.
21
Body Count of the Roman Empire necrometrics.com
109 points by Red_Tarsius  14 hours ago   41 comments top 9
1
Red_Tarsius 2 hours ago 2 replies      
If the numbers are true, Genghis Khan killed more people in a few decades than the Romans did in a thousand years. It's estimated that 15M died in the Mongols' five-year invasion of central Asia. https://www.washingtonpost.com/archive/entertainment/books/2... Rome used to turn the defeated tribes into slaves (though it was not a permanent status) or tax-paying citizens.
2
DanAndersen 9 hours ago 2 replies      
Relevant: Dan Carlin just came out with a new episode of Hardcore History called "The Celtic Holocaust," which deals with the subject of the Roman subjugation of Gaul and the Celtic people.

http://www.dancarlin.com/product/hardcore-history-60-the-cel...

3
FlashGit 2 hours ago 1 reply      
The Romans weren't afraid to win wars by attrition. Some of their generals were definitely good but they could just swarm any opposing force with highly disciplined, well armed troops.

Hannibal crushed the Romans several times, it didn't matter. Carthage still got wiped off the map.

4
meanonme 9 hours ago 0 replies      
For anyone interested in historical atrocities, "The Great Big Book of Horrible Things: The Definitive Chronicle of History's 100 Worst Atrocities" by Matthew White hardcover is on sale for $7. Oddly, the Kindle version is $10 and paperback $11.

http://amzn.to/2voMw7Y

5
beloch 11 hours ago 0 replies      
The author of this seems to have quit on the Eastern (Byzantine) empire several hundred years early...
6
afterburner 10 hours ago 0 replies      
This is (oddly enough) a part of Matthew White's "Historical Atlas of the 20th Century", one of my favourite early history sites.
7
dsfyu404ed 1 hour ago 2 replies      
It took them nearly a millennium to do only a fraction of what communists accomplished in about a century.

If anyone wants to crunch numbers and determine a per capita "death as a result of state action" for various empires over time I think it would be really interesting.

How does a bad year in the Roman empire compare to a purge year in the USSR? Average year vs average year? What about the Great Leap Forward?

8
fastball 7 hours ago 1 reply      
1000 casualties averaged a year?

That's actually amazing.

9
Animats 12 hours ago 3 replies      
Smaller than Russian losses during WWII.
22
HyperCard On The Archive archive.org
285 points by dogecoinbase  15 hours ago   91 comments top 25
1
rhencke 15 hours ago 7 replies      
Hypercard is largely responsible for my love of programming. As a kid, I would work with my sisters on making games in it. It was a beautiful combination of half painting program, half drag'n'drop GUI creation that we used to make adventure games. Each card represented a room, and my sisters would draw on it using the paint tools, and I would follow up after and add invisible buttons over doors and the like to allow for 'moving' through rooms. We'd then use the built-in MacInTalk speech stuff to make characters say things, too. Granted, they were silly little games without much point to them, but... as a kid, man. It was like magic, learning you could have computers do this.

I was sad when Hypercard fell out of general distribution with the Mac, but I'm happy to see it here.
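
For readers who never used HyperCard, here is a loose model of the card-and-invisible-button mechanic described above, written as a TypeScript sketch purely for illustration (HyperCard itself scripted this in HyperTalk):

    // Each card is a room; invisible buttons placed over doors navigate
    // between cards, roughly like HyperTalk's "go to card ...".
    type Btn = { rect: [number, number, number, number]; goTo: string };
    type Card = { name: string; buttons: Btn[] };

    const stack = new Map<string, Card>([
      ['hallway', { name: 'hallway', buttons: [{ rect: [200, 80, 40, 120], goTo: 'kitchen' }] }],
      ['kitchen', { name: 'kitchen', buttons: [{ rect: [10, 80, 40, 120], goTo: 'hallway' }] }],
    ]);

    let current = 'hallway';

    function click(x: number, y: number): void {
      for (const b of stack.get(current)!.buttons) {
        const [bx, by, bw, bh] = b.rect;
        if (x >= bx && x <= bx + bw && y >= by && y <= by + bh) {
          current = b.goTo;
          console.log(`now in ${current}`);
          return;
        }
      }
    }

    click(210, 100); // the invisible button over the hallway door -> "now in kitchen"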

2
tedmiston 14 hours ago 2 replies      
Hypercard in a lot of ways is what the Web could have been.

If never became as easy to create high quality freeform sites and apps as it was multimedia Hypercard decks and games. Hypercard changed my life in childhood, even without learning about its scripting features.

HyperStudio was pretty good too. We used that in school quite a bit.

3
jamestnz 9 hours ago 4 replies      
In addition to echoing all the dev-related stories in this thread, I have very fond memories from my childhood of playing a 1988 HyperCard-based game called The Manhole (on our SE/30, and later Power Mac 8100).

It was an immersive and extensive visual world, where the main point was just to explore. It was implemented as a series of linked HyperCard stacks, each sized to fit on a floppy disk. You'd come to remember the exact points of the game that would throw up a modal dialog prompting the insertion of the next required disk.

And, it happens to have been made by the brothers Rand and Robyn Miller, who later went on to create Myst (which itself is very reminiscent of The Manhole).

https://en.wikipedia.org/wiki/The_Manholehttps://www.youtube.com/watch?v=YyOTq1EpV5o

Unrelatedly, I wonder if anyone in this thread remembers SuperCard, a third-party knock-off of HyperCard that offered such amazing innovations as color graphics. (I also seem to recall some kind of hack where you could use ResEdit to get colour images into HyperCard stacks even though it wasn't officially supported, but the details are fuzzy).

4
pmarreck 13 hours ago 4 replies      
Bill Atkinson apparently kicks himself for not being the first to realize that simply making the stacks work over a network could potentially have been the first "web browser" (or at least, internet hypertext engine)

HyperCard was totally awesome at the time.

5
CaliforniaKarl 15 hours ago 5 replies      
HyperTalk was the first programming language I ever learned. Now I understand how each card was an object that I was manipulating.

It was so good! It's too bad it never became more popular and disappeared.

6
eligundry 47 minutes ago 0 replies      
Justin Falcone has a fantastic talk about the importance of HyperCard. He did all the slides in HyperCard and he gets super into it.

https://www.youtube.com/watch?v=8i60_REoeIY

7
sthielen 10 hours ago 0 replies      
Hypercard was before my time; I heard about it recently when someone compared it to what we're building with Metaverse [0].

When you so dramatically reduce the friction required to create that anyone, especially non-technical folks, can do it, all kinds of amazing things happen. I watched an 11 year old build the "Not Hot Dog" app from Silicon Valley, using Google's Vision API, in ten minutes (from never having seen the Metaverse Studio to having her app deployed on device, cross-platform, and sending it to her friends; this is how creation SHOULD be for 99% of people!).

[0] - http://gometa.io (also check out how easy it is to create apps that integrate with IoT devices: https://www.youtube.com/watch?v=SPrBLPG3Smk -- Hypercard for the modern age!)

8
dustingetz 13 hours ago 0 replies      
My project is a bit of a spiritual successor to HyperCard: https://github.com/hyperfiddle/hypercrud.browser
9
zopf 14 hours ago 0 replies      
HyperTalk was my first programming language!

I helped a friend build a choose-your-own-adventure murder mystery game called Blood Hotel, and found myself obsessed with the feeling of inventive power that programming enabled.

I ended up building an animated space invaders game, and even tried my hand at writing a "virus" in HyperTalk that would infect other stacks with its code.

Ah, the good old days. Lovely to see this at the top of HN!

10
brentjanderson 12 hours ago 0 replies      
HyperCard was my first foray into programming at the Christa McAuliffe Space Education Center - Apparently Starfleet runs on HyperCard, [here's a video][1] showing the program in action. Most of the software in the video is built in HyperCard.

[1]: https://www.youtube.com/watch?v=XG2lSb1xrNM "Four Hours: A Space Trip"

11
bsclifton 9 hours ago 0 replies      
HyperCard was amazing. My first online experience was with AOL in 1993 and they had a HyperCard section where you could upload/download your stacks. I racked up huge bills hanging out in that area (pay by the minute)
12
oso2k 9 hours ago 0 replies      
I love the stuff the Archive keeps coming up with. I'm glad I finally started donating to their cause last year.
13
kylestlb 10 hours ago 0 replies      
My 7th grade 'computers' class was basically a HyperCard course. It was amazing and I made a cool choose-your-own-adventure game.
14
setori88 12 hours ago 0 replies      
My project http://www.fractalide.com is looking to build out a new hypercard type environment
15
coldcode 14 hours ago 1 reply      
I loved Hypercard for prototyping UI back then. UI designers didn't really exist and programmers like me typically designed stuff, don't laugh, having artists involved was a web era thing for the most part. Being able to prototype and animate quickly was incredibly useful for explaining an idea to a product manager, or showing another programmer what you had in mind. Today there are great tools but they are clearly meant for a different audience.
16
hsivonen 10 hours ago 0 replies      
I did my first programming in HyperTalk, which I learned from the HyperCard 2.0 manual.

Back then, software came with well-written paper manuals, and the translation quality (into Finnish in my case) was very good, too. I feel like Apple manuals peaked with HyperCard 2.0.

17
ontouchstart 13 hours ago 2 replies      
I can even play it on iPhone:

https://twitter.com/ontouchstart/status/895833140467572736

Moving cursor with touch is kind of challenging though.

18
spiderjerusalem 7 hours ago 2 replies      
Any oldies here who can recount why exactly Hypercard was killed? Seems like such a wonderful piece of software.
19
jacquesm 13 hours ago 3 replies      
So, who will do a hypercard for the web? Or better still: a hypercard based alternative to the web?
20
samgranieri 14 hours ago 2 replies      
I loved hypercard and wish it was still around. I learned how to code in that and in TI-Basic
21
twsted 8 hours ago 1 reply      
I loved HyperCard.

As an Amiga user, I remember a good clone named CanDo. It was really interesting.

22
smegel 13 hours ago 0 replies      
These are some really nice web-based emulators. It's almost certainly 100% nostalgia but there is a certain charm to these early Mac games... simple but somehow deeply detailed monochrome graphics... easy to use point and click interface... it's great to see them spring back to life in a web browser.
23
watersb 12 hours ago 1 reply      
I will dig up some old projects...

Does anyone know if XCMDs are supported?

24
Kristine1975 9 hours ago 0 replies      
Fun fact: The game Myst was created in HyperCard (at least the original Mac version was).
25
poisonarena 14 hours ago 1 reply      
if you are a real sicko you can actually emulate it on basilisk and keep making stacks..
23
Four Earth-sized planets detected orbiting the nearest sun-like star ucsc.edu
598 points by mrfusion  1 day ago   279 comments top 14
1
ExactoKnight 20 hours ago 23 replies      
I am flabbergasted that as a society we aren't rushing to build a 100 metre wide telescope mirror large enough for us to directly image the spectra of the potentially habitable exoplanets around us.

A telescope this large could tell us whether any of these potentially habitable planets contain oxygen, and thus, biological processes.

Yet thanks to funding cuts in science the biggest telescope we have in the pipeline right now is one with a 30 metre mirror. This telescope won't be big enough, and as a result, our failure to push now for bigger sizes is almost certainly going to push back for decades humanity's ability to answer one of the most important questions we face:

Why are we here, and are we alone.

2
semaphoreP 23 hours ago 1 reply      
This title is a bit imprecise. They detected four planets with lower bounds on their masses, down to 1.7 Earth masses. Because these planets don't transit, there are no direct measurements of their radii. They can use mass-radius relations to infer the radii of these planets, but the key finding is their masses (actually lower bounds on their masses).
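
For context (a standard property of radial-velocity detections, not something spelled out in the comment): the method measures the star's wobble along our line of sight, so the mass it yields is the true mass scaled by the sine of the unknown orbital inclination i, making it a lower bound:

    $m_{\min} = m_{\text{true}} \sin i \le m_{\text{true}}$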
3
kilroy123 20 hours ago 2 replies      
I really really want project Starshot to become a reality. I think this is our best bet for scoping out these near by star systems. At least within our lifetime.

If we could hit 50% speed of light we could do a fly-by mission in ~25 years. Then another 12 years waiting for the data. Honestly, ~37-40 years isn't bad for an interstellar mission. Remember the Voyager program has been going on for that long! So we already have experience with long space missions.

https://breakthroughinitiatives.org/Initiative/3
https://en.wikipedia.org/wiki/Breakthrough_Initiatives#Break...
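
The round numbers behind this estimate, taking tau Ceti's distance as roughly 12 light years and ignoring acceleration and deceleration:

    $t_{\text{travel}} = \frac{12\,\text{ly}}{0.5c} = 24 \text{ yr}, \qquad t_{\text{signal}} = \frac{12\,\text{ly}}{c} = 12 \text{ yr}, \qquad t_{\text{total}} \approx 36 \text{ yr}$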

4
baron816 22 hours ago 16 replies      
Ok, let's assume we find a warm, watery planet like Earth within ~20 light years, and we figure out a way to travel >= 50% the speed of light, making it somewhat reasonable to get there. If the planet's gravity is more than 10% different from Earth's, or its day/night cycle is much different from Earth's, wouldn't it still be a nightmare to live on?

Anatomically modern humans have lived on Earth for 200,000 years, and the creatures we descended from have lived on Earth for 541 million years. Stuff as dumb as the moon cycles affect us. How are we going to live somewhere that isn't exactly Earth?

5
deanCommie 23 hours ago 4 replies      
Key line to mitigate disappointment:

"The outer two planets around tau Ceti are likely to be candidate habitable worlds, although a massive debris disc around the star probably reduces their habitability due to intensive bombardment by asteroids and comets."

6
deepGem 12 hours ago 0 replies      
Unlike more common smaller stars, such as the red dwarf stars Proxima Centauri and Trappist-1, they are not so faint that planets would be tidally locked, showing the same side to the star at all times.

In such planets, the most habitable zone is around an equator like region where the light and dark regions kind of merge to produce a reddish sunset like hue all through the day. I think one of the planets that Kepler discovered is like that. Life would evolve to absorb these light wavelengths. So for instance plants would all look black. Nova has a great episode on these exoplanets. https://www.youtube.com/watch?v=5HZsFMqqGJo&t=793s

7
u801e 11 hours ago 0 replies      
I wonder if an observer 12 light-years away with similar technology to us would be able to tell the difference between Venus and Earth in terms of whether they are potentially habitable.
8
chrismealy 23 hours ago 3 replies      
The fastest spacecraft ever built would take 4000 years to travel one light year.
9
frgtpsswrdlame 23 hours ago 4 replies      
Is there any benefit to the planets being earth-sized? I would think the important part is that they're in the habitable zone.
10
RandomedaA 17 hours ago 1 reply      
I feel like something similar to this is announced every year, and nothing ever comes of it.
11
mbfg 16 hours ago 0 replies      
Would there be any value in putting a telescope on the moon? You wouldn't have the atmosphere problem, and I'd expect servicing it would be mildly easier than having it out at L2 or something?

I suppose the fact that the moon is tidally locked would be something of a problem for full sky observation. Is that the main issue?

12
arkainW123 22 hours ago 0 replies      
Hearing distances like 12 light years makes you wonder if it is ever possible to travel there. However, when you start to think about it, nihilistic thoughts start to kick in.
13
dnprock 13 hours ago 0 replies      
We need to send bacteria to those planets. That'd make life multi-planet. Maybe, that's how life arrived on Earth.
14
SilverPaladin 19 hours ago 1 reply      
I wonder if the Mormons will be starting their ship construction now?
24
Bootstrap 4 Beta getbootstrap.com
243 points by jdorfman  11 hours ago   112 comments top 36
1
inertial 9 hours ago 5 replies      
I've been using bootstrap 3 for most of my public projects for a few years and bootstrap 4 for some internal projects. I make it a point to try new & popular CSS frameworks every few months and ultimately thank God for bootstrap. It has stood the test of time for all these years.

Things that I like about bootstrap are: stability, performance, easy custom builds, community support (almost every problem has been addressed on Stackoverflow), ecosystem (themes, extensions, tools). The design out of the box might look cookie cutter, but if you know basic CSS, it is very easy to customize it with minimal effort. Just look up examples on codepen/stackoverflow.

Some people complain about bloat but this is amazing : https://getbootstrap.com/docs/3.3/customize/ . I have public facing websites with thousands of visitors per day. The performance rating is better than 90% of the websites.

Truly want to thank @mdo @fat @cvrebert and the rest of the team for this amazing project.

2
shubhamjain 9 hours ago 8 replies      
I highly recommend UIKit [1] for anyone looking to pick a front-end framework. It solved tons of design problems that I faced with Bootstrap. Here are a few

1) Every class name is prefixed with "uk-": a wise decision to avoid conflicts and make it easy recognise the framework classes.

2) Useful generic classes: UIKit has many helper classes to avoid adding another rule for removing padding, introducing a small margin, rounding the borders, or adding a box shadow.

3) Clean theme: The default UIKit theme looks an order of magnitude better than Bootstrap's.

4) Additional helpful components: Loading spinner, Cards, Notifications, Sortable list. Shipping them by default saves a lot of time in searching and integrating them in your application.

5) Default Icons Support: The icons look clean and beautiful, and because they are SVG, they don't require a font file. (Although requiring JavaScript can be a turn-off for some; a minimal setup sketch follows after this list.)

6) Much better components: Try creating an input box with an icon in bootstrap; with UIKit it only takes a little markup.
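
To make point 5 concrete: the SVG icons ship as a separate plugin bundle that has to be registered in JS. A minimal setup sketch based on UIKit's documented plugin mechanism (exact paths depend on your bundler):

  // UIKit's SVG icons live in a separate bundle
  import UIkit from 'uikit';
  import Icons from 'uikit/dist/js/uikit-icons';

  UIkit.use(Icons); // after this, markup like <span uk-icon="icon: check"></span> works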

Whenever I develop with Bootstrap, I have to spend half the time muting the existing styles and introducing new ones. With UIKit, it never felt like a problem.

[1]: https://getuikit.com/docs/

3
mdo 8 hours ago 1 reply      
It's past midnight by me and I'm heading to bed, but I'll pop back here tomorrow to answer questions and follow up with folks. Thanks for the kind words so far, and any and all feedback coming our way :).

Huge thanks to the contributors and our team for pushing so hard recently. Feels amazing to have a beta out finally.

4
dgut 1 hour ago 0 replies      
I've been using BS 4 Alpha for a while now; this is great news. A million thanks to mdo and the rest of the team.
5
reificator 5 hours ago 1 reply      
I think Web Reflection covers my concerns with Bootstrap more elegantly than I can myself. It's 2017, the browser targets are very modern, so why does it need jQuery at all?

https://www.webreflection.co.uk/blog/2016/04/22/about-jquery

6
harel 7 hours ago 3 replies      
I cannot recommend Semantic UI enough, and in particular the React port of Semantic. It's a very complete, themable, quality toolkit. After using it I could not understand why I'd been using Bootstrap before.

http://semantic-ui.com
http://react.semantic-ui.com

7
lewisl9029 5 hours ago 4 replies      
A few questions for those who might be using component-oriented CSS-in-JS libraries like styled-components [0] or glamorous [1]:

1. Do you still use a global, class-based, cascading CSS framework like Bootstrap, Semantic UI, etc, to provide baseline styling/theming for your pages and components? And if not, do you just essentially roll your own custom CSS framework in JS? Or are there styling frameworks out there that don't cascade and can compose more naturally with component-oriented approaches to styling?

2. If you do use cascading CSS frameworks in addition to something like styled-components, what are your approaches to limiting the unpredictable side-effects of cascading styles, dealing with specificity and load order issues, and ensuring proper style isolation inside your components?

It seems like using a cascading styling framework could defeat much of the purpose of using a well-isolated component-oriented styling library.

Although on the other hand, the alternative of writing all styling on my own seems like a futile exercise in reinventing a vastly inferior wheel, since so much designer talent has been poured into projects like Bootstrap to the point where an amateur designer like myself could never even approach the same level of polish, flexibility and consistency in their designs.

I'm hoping someone here could enlighten me on a better approach that takes the best of both worlds, or at least share a middle ground they're happy with?

[0] https://github.com/styled-components/styled-components

[1] https://github.com/paypal/glamorous

9
tannhaeuser 9 hours ago 2 replies      
It's probably not the right time and place to ask, but has there been any consideration of post-Bootstrap 4 yet? I could imagine CSS grid and CSS variables will reduce the need for sass/less in the future. So it might make sense to look into whether the CSS abstractions built into BS 4 will hold up well in the presence of CSS grid/vars, or have their design informed by/aligned with these upcoming features.
10
_Codemonkeyism 7 hours ago 1 reply      
Switched from Bootstrap to Semantic some time ago. I like it more, but development seems to be very very slow.

Took a look at Foundation, and the new XY Grid is really really nice and much better than the current Semantic grid.

https://www.youtube.com/watch?v=Xl5DjEzKn1g

I especially love

1. It also works for Y.
2. How you specify responsiveness: "small-4 medium-6 large-2" means 4 wide on small devices, 6 on medium, and 2 wide on large. This makes responsive layouts much easier.

11
huskyr 6 hours ago 0 replies      
I heavily used Bootstrap 2/3 in projects before, but for the last two years or so I really haven't needed such a large CSS framework anymore. Especially with all major browsers supporting Flexbox, grid-like layouts have become so much easier. Nowadays I just use SASS and a really small mixin library (1) and I'm perfectly happy.

1: http://hay.github.io/valenski/

12
tannhaeuser 10 hours ago 1 reply      
A big thumbs up and thank you to the Bootstrap team.

Building Bootstrap 4 from source (which is what you want to do for your own themes and customizing) requires a somewhat hefty npm install due to babel, autoprefixer/postcss and clean-css, and in particular due to node-sass, which uses libsass node bindings and thus requires gyp and Python. I'd imagine a leaner setup based on a precompiled sassc binary for SCSS processing could be useful.

13
DigitalSea 10 hours ago 0 replies      
For anyone wondering, the beta comprises 392 GitHub issues: https://github.com/twbs/bootstrap/milestone/41?closed=1 with some more interesting tidbits in this issue here: https://github.com/twbs/bootstrap/issues/21568 - good to see Bootstrap finally leave alpha after so many years; the grid system is fantastic. Congratulations to everyone who contributed to Bootstrap to make this happen, solid work.
14
knodi 23 minutes ago 0 replies      
$99 for such simple themes is overpriced
15
ShirsenduK 7 hours ago 0 replies      
Have been using Bootstrap 4 on production at https://www.maplenest.com for almost a year now.

It has Flexbox!!! Although our team has added additional classes to get some of the UI done, I am sure we could have done it using only Bootstrap. We get lazy :P.

The primary reason I like Bootstrap is the semantics. Columns are col, Tables are table, Cards are card, etc. Primary is the main action, Danger is an error, etc. As they say in UX, "Don't make me think!"

16
Matheus28 9 hours ago 1 reply      

 <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" integrity="sha384-rwoIResjU2yc3z8GV/NPeZWAv56rSmLldC3R/AZzGRnGxQQKnKkoFVhFQhNUwEyJ" crossorigin="anonymous">
The css given in their docs has an invalid integrity hash

This is the correct hash:

 sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M
Edit: someone has already filed an issue, beating me by 3 minutes: https://github.com/twbs/bootstrap/issues/23284
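
For anyone wanting to check a hash like this themselves: an SRI value is just the algorithm name plus the base64 of the raw digest, so it can be recomputed locally. A minimal Node sketch (assuming the stylesheet was saved as bootstrap.min.css):

  const crypto = require('crypto');
  const fs = require('fs');

  // SRI format: "<algo>-<base64 digest of the raw file bytes>"
  const body = fs.readFileSync('bootstrap.min.css');
  const digest = crypto.createHash('sha384').update(body).digest('base64');
  console.log('sha384-' + digest);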

17
cunningfatalist 6 hours ago 0 replies      
I like how minimal Bootstrap 4 is. I also like the SCSS core itself. It's very well written and I learned a lot from reading it.

I worked with Foundation 5 + 6, Semantic UI, Spectre, Bulma and Bootstrap 2 + 3 (and some more lesser-known frameworks) and I must say that the Bootstrap 4 team really nailed it. All the other frameworks are good, but this is just great to work with.

18
calgaryeng 2 hours ago 1 reply      
Bootstrap is great and I used it on a couple projects. Unfortunately I gave up waiting on a V4 release and started using Semantic UI. This has been a long time coming!
19
pan69 10 hours ago 0 replies      
Love the new revamped form layout and possibilities. Can't wait to start using BS4.

Congrats and a massive thanks to anyone who has been working on this fantastic project.

20
champagnepapi 2 hours ago 0 replies      
I haven't used bootstrap 4, I've actually kinda moved away from Bootstrap (which I've used in a couple production products) for Semantic-UI.
21
yuchi 3 hours ago 0 replies      
That version dropdown in the navbar will change my life: I give lessons on a framework whose previous version still used v2.3.2, and now I'm sure my students will not look at the wrong documentation.
22
sadlion 1 hour ago 0 replies      
What does HN think about zurb foundation?
23
tmaly 3 hours ago 0 replies      
I originally used Bootstrap 3 for my food side project. I did have some issues with customization and its sheer size.

I eventually moved over to bourbon.io with the help of one of the core contributors.

24
zspitzer 3 hours ago 0 replies      
Anyone got a working RequireJS example? Does it need a special shim?

I'm getting a "Bootstrap dropdown require Popper.js" error

25
danvoell 2 hours ago 1 reply      
Can anyone comment on what is new? What to try in 4?
26
k__ 5 hours ago 2 replies      
With Flexbox and Grid I found fewer and fewer reasons to use Bootstrap.
27
jaequery 8 hours ago 2 replies      
does bs4 now have an easy way to add spinning icons to a button?

in bs3, there was no easy way, you had to basically use javascript to achieve it.

<button class="btn btn-default btn-spin">Click Me</button>
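
For reference, the BS3-era JavaScript looked roughly like the following (a sketch only - the btn-spin class above is the wished-for API, not a real Bootstrap class, and the spinner icon here comes from Font Awesome):

  // Illustrative sketch: disable the button and prepend a spinner on click
  $(document).on('click', '.btn-spin', function () {
    $(this).prop('disabled', true)
           .prepend('<i class="fa fa-spinner fa-spin"></i> ');
  });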

I've been waiting for something like this to be achievable for some time now.

Bulma has this perfected and it really is handy.

28
fzaninotto 3 hours ago 0 replies      
The advent of utility classes signals the doom of Bootstrap. Going down this road leads directly to universal.css (https://github.com/marmelab/universal.css).
29
microkernel 8 hours ago 0 replies      
I'm frequently prototyping sites and am always amazed at how much value Bootstrap adds. Thank you for all the work done!
30
matude 7 hours ago 0 replies      
Personally prefer Foundation but glad to see Bootstrap maturing from the eternal alpha version. :)
31
Kiro 9 hours ago 2 replies      
Is it safe to start using in production?
32
ak39 7 hours ago 2 replies      
Many folks use Bootstrap for its awesome grid system. But now that CSS Grids are finally here and are way way cleaner to use, Bootstrap's days are numbered.
33
davidgatti 8 hours ago 0 replies      
Nice redesign! :)
34
aaronbrethorst 10 hours ago 0 replies      
awesome, congrats to everyone who's worked on this. It's been a long time coming, but I imagine it will be very well worth it.
35
rawoke083600 10 hours ago 0 replies      
Nice :)
36
jbob2000 3 hours ago 0 replies      
I love how you have ads in the documentation! So helpful!

What's next, ads in the source code?

25
Timescale, an open-source time-series SQL database for PostgreSQL timescale.com
312 points by pbowyer  18 hours ago   78 comments top 25
1
daurnimator 14 hours ago 1 reply      
Could you contrast this with the approaches mentioned in the series of blog posts starting here: https://grisha.org/blog/2015/09/23/storing-time-series-in-po...

That blog post grew to be tgres: http://github.com/tgres/tgres
https://grisha.org/blog/2017/03/22/tgres-0-dot-10-dot-0b-tim...

2
mnutt 15 hours ago 2 replies      
A project I work on has time series stats in postgres--it's essentially an interval, a period, a number of fields that make up the key, and the value. There's a compound index that includes most of the fields except for the value. It works surprisingly well, for tens of thousands of upserts per second on a single postgres instance. Easy app integration and joins are a huge plus. I'm really curious to check this out and see how it performs in comparison.
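
A rough sketch of that kind of setup via node-postgres (table and column names here are hypothetical, and the upsert syntax needs Postgres 9.5+):

  const { Pool } = require('pg'); // node-postgres
  const pool = new Pool();

  // Assumes: CREATE UNIQUE INDEX ON stats (period_start, bucket, metric);
  async function recordStat(periodStart, bucket, metric, value) {
    await pool.query(
      `INSERT INTO stats (period_start, bucket, metric, value)
       VALUES ($1, $2, $3, $4)
       ON CONFLICT (period_start, bucket, metric)
       DO UPDATE SET value = stats.value + EXCLUDED.value`,
      [periodStart, bucket, metric, value]
    );
  }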
3
buremba 13 hours ago 1 reply      
Why do you usually advertise the write performance? Let's say that I have "100+ billion rows" (the number on your landing page); how long does it take to run a simple GROUP BY query?

The benchmark repo doesn't actually include the performance comparison between Timescale and Postgres: https://github.com/timescale/benchmark-postgres#benchmark-qu...

This blog post (https://blog.timescale.com/timescaledb-vs-6a696248104e) has some query benchmarks, and the main benefit is that the hypertable will partition the data smoothly; if we query the table by filtering on the timestamp column, it will be fast, since Timescale uses partitioning as an indexing method.

4
craigkerstiens 16 hours ago 2 replies      
Great to see what you all are doing.

Are there any plans to move Timescale to be an extension as opposed to a fork? We've found at Citus that maintaining an extension lets us more easily stay up to date with current releases. Would love to see the same applied to Timescale.

Edit: Looks like it is already one, just was unclear in the docs on the setup steps to me. Well done all.

5
joaodlf 9 hours ago 1 reply      
I've come to rely heavily on Cassandra, but I miss good old SQL and ad-hoc functionality. Systems like Cassandra bring other requirements when you need flexible data (Spark, for example); technical debt is always a worry for me.

I want to give this a go for sure!

6
koffiezet 3 hours ago 2 replies      
While nice, it suffers from the same problem as storing time series in any SQL database: you have to predefine your tables. For a fixed and known set of metrics that's all fine, but if you look at the possible outputs of, for example, Telegraf, it becomes a bit tricky to pre-create all the tables/schemas...
7
lurker456 7 hours ago 1 reply      
Further evidence of how PostgreSQL is eating NoSQL. Every good concept first implemented in a custom NoSQL solution eventually becomes an extension in Postgres.
8
MightySCollins 15 hours ago 1 reply      
Please stop tormenting me. This looks like exactly what we need (I was looking into manual partitioning the other day); it's just so annoying that there is no Amazon RDS support yet.
9
atemerev 6 hours ago 0 replies      
Whoa, fantastic!

I have managed to design a vanilla PostgreSQL solution, with partitions and BRIN indices, but there are too many hoops to jump through. I am excited to check if this will work out of the box. 100 billion rows per server sounds exciting!

10
_Codemonkeyism 7 hours ago 0 replies      
We have been using Postgres for a smaller event time series database (millions of rows) with good success. Tables are partitioned.

Some user reports (aggregations) take ~5 secs, so we batch pre-generate them currently.

Eager to look into this to replace pre-generated reports with real-time reports.

11
Throaway786 14 hours ago 2 replies      
We have a requirement of saving 100 million data points every 5 minutes. What options should we explore for a real-time system covering the last 15 days of data and an archival system covering the last 3 years of data?
12
overcast 12 hours ago 0 replies      
Alright, I'm excited to check this out. Been teetering on InfluxDB for a while, but not something I wanted to just introduce into corporate. Great work guys!
13
Tostino 16 hours ago 1 reply      
This is something I've been meaning to look into for a personal project that has a lot of time series data. It'll be interesting to see what they eventually come up with to make time series data not take quite as much space.
14
gaius 2 hours ago 0 replies      
How does this compare to Vertica?
15
shady-lady 6 hours ago 1 reply      
What is the extra size on disk as a result of using this? I'm guessing there's some overhead?
16
odammit 14 hours ago 1 reply      
This looks cool. I love things that get rid of extra dependencies. Influxdb is nice but then I have to support it, get stuff into it and get stuff out of it.

Timescale isn't currently supported by RDS/Aurora though, so it looks like more influx for me wooohooooo!

17
hof 14 hours ago 1 reply      
How would this work together with something like Stolon? https://github.com/sorintlab/stolon
18
ericb 13 hours ago 1 reply      
Is the business model to charge for the clustering release?
19
hotwagon 14 hours ago 1 reply      
At a higher level, is this the same concept as Datomic?
20
continuations 15 hours ago 1 reply      
So this is based on PostgreSQL. How does it compare to other solutions that are written from scratch to be a time series DB, like InfluxDB?
21
anemic 11 hours ago 1 reply      
Can this be queried with Grafana or some other visualization tool?
22
riku_iki 12 hours ago 1 reply      
How does Timescale fit Postgres maintenance patterns (replication, backup)?
23
dpen2016 7 hours ago 0 replies      
Why no redirect to https here?
24
freestockoption 15 hours ago 1 reply      
Any support for RDS? :)
25
manigandham 15 hours ago 1 reply      
Any SQL database can do time series well, with more functionality than specialized stuff like InfluxDB, which doesn't really have much reason to exist at this point.

Citus is another good alternative, and SQL Server and MemSQL also have in-memory and columnstore options if you need the performance and scalability.

26
High End CPUs Intel vs. AMD cpubenchmark.net
171 points by bhouston  13 hours ago   84 comments top 16
1
axaxs 12 hours ago 6 replies      
I don't mean to diminish the efforts of anyone involved, but I truly feel one man more or less moves the direction of the CPU industry: Jim Keller.

He, among others, invented x86_64 at AMD during its previous glory days, when AMD dominated the competition. He led Apple's chip design starting with the A4, and Apple chips then and now dominate the mobile competition.

He came back to AMD and helped create Zen, to obvious results. Apparently he now works for Tesla.

In any event, this guy seems to have the Midas touch wrt CPUs; it's a shame there isn't more written about him or, more importantly, by him.

2
xmichael99 12 hours ago 3 replies      
Wow! Hard to believe I would ever, ever, ever see AMD on the top of that list! Amazing! Forget about the price, just amazing to see AMD at the top, now factor in the price and wow Intel is screwed. Not because of this one release, but because they are getting pounded from all angles.

Intel's strategy of limiting PCI throughput to hold GPU manufacturers back is over, and these AMD CPUs are going to pair super well with GPUs, mostly Nvidia but of course their own ATI, which is really going to make Intel look sad soon. Boatloads of major players have been irked by Intel holding back PCI throughput; AMD let it rip, and "thread ripped" too!

3
0xbear 7 hours ago 4 replies      
In case someone from AMD is reading this: guys, you need to fix stability issues on Linux, or your Epyc is DOA. People are having some serious lock-up trouble with Ryzen, even with the latest AGESA updates and kernels. This is the only issue preventing me from recommending to purchase threadripper workstations at work. We do need that pcie bandwidth for GPU, but we absolutely can't tolerate instability.
4
fulafel 11 hours ago 4 replies      
What makes PassMark a representative CPU benchmark? These one-company CPU benchmarks tend to be quite problematic (cf. GeekBench).

SPEC just came out with CPU2017. In SPEC there's at least a bunch of peer review, transparency and attention from academics.

Here are Anandtech's AMD vs Intel CPU2006 numbers:
http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-...
http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-...

5
rgbrenner 12 hours ago 2 replies      
1 user reported the score: https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadrip...

Edit: just noticed the title was changed. Originally it said something about Ryzen beating Intel's processors.

6
cosmolev 5 hours ago 0 replies      
7
rbanffy 2 hours ago 0 replies      
I'd love to see how POWER8/9, SPARC64 XII and SPARC M7 stack up.

If someone could throw in some z14 PU benchmarks, I'd be more than happy. Are AMD's server-grade EPYC parts available?

(edit: I get it. These are mostly desktop processors with some low-end server parts thrown in. It's not a comprehensive high-end CPU benchmark, as it misses the whole E7 family)

8
sp332 1 hour ago 0 replies      
Why is the "AMD Ryzen 7 PRO 1700X" testing significantly higher than the 1800X?

And why don't any of the AMD chips have clock speeds listed?

9
arca_vorago 3 hours ago 0 replies      
One thing I think is that AMD has even more room to shine as software gets better at parallelization. I saw this trend back during my days managing 250GB+ of data generation/day at a genetics company, and eventually got to build a 4-CPU, 64-core AMD Opteron system for physics computation. I am super excited about the new server line of CPUs, because the Opteron line wasn't perfect and I expect they learned a lot from it. Also, I only need to get 2 CPUs to get that 64-core count again! I dream about Supermicro or someone doing a 4-CPU board for the new line... 128 cores... (hey, I can dream!)
10
convery 11 hours ago 0 replies      
So, based on that performance test, AMD should have the same performance as my two E5-2660v2s at half the price. That's pretty impressive.
11
rocky1138 11 hours ago 7 replies      
The top CPU is almost 4 times faster than my CPU (i5 6600K). Is it just down to the GHz and number of cores?

Besides those two elements, what makes this processor so much faster?

12
jrs95 4 hours ago 1 reply      
Single core performance on the i9 is still going to be significantly higher though, so it depends on your use case. Testing I've seen so far has had the i9 getting about 20% higher framerates in games, for example.
13
akerro 5 hours ago 0 replies      
I would like to point out that even if in some benchmarks AMD is not on top, it doesn't mean it's not the best-buy product. There are other factors like heat production and power consumption, where Ryzen is as much as 3 times lower than the equivalent Intel CPU.
14
eatbitseveryday 12 hours ago 1 reply      
Why aren't Intel Xeon E7 processors considered in these benchmarks? Clearly they are "high-end" and carry a $5k+ price tag to show for it.
15
O5vYtytb 11 hours ago 0 replies      
Wonder what Epyc will look like.
16
cafxx 10 hours ago 0 replies      
That explains the name. They ripped Intel a new one.
27
The Myth of the Full Stack Developer themartec.com
54 points by aholdo  1 hour ago   71 comments top 35
1
_lex 1 hour ago 3 replies      
>"most full stack developers have not truly mastered front end and back end"

We need to do something about the default of developer bashing prevalent in our culture. There's no true Scotsman; nobody is 100% perfectly attuned to the latest developments on any surface.

Instead engineers develop along competencies that are required in their work. If you need to deep dive into a backend problem, you'll get better at that problem space. Same with frontend.

Yes there are separate stacks beneath the problem being solved, and yes there's discovery and learning as people spend years in a certain focus - but does focusing only on 1 thing mean that you've attained competency in that 1 thing? Does focusing on both the frontend and backend mean that you can't have attained competency in both?

I personally work with fullstack engineers that are better at frontend than some frontend engineers, AND better at backend than some backend engineers - so the answer here is clear to me.

2
purple-again 1 hour ago 3 replies      
This again. Is a doctor in his first year of being a full-fledged doctor any less of a doctor than one with 20 years of experience? The more experienced doctor is almost assuredly better and preferable, but both of them are doctors.

This assumption that you have to be a master of the front end and a master of the back end to be a full stack developer is flawed from the start. Have you created economic value with a completed project that you built the front end and back end for (no matter how spaghetti)? Congratulations, you are a full stack developer. A shitty one maybe, but one no less.

3
thiagooffm 1 hour ago 0 replies      
Didn't really like the article. There's no myth: there are full stack developers, and of course they are not masters of anything, but who is?

Even a person with a DBA job title who has been working with DBs for decades won't know everything about DBs.

I do pretty well in web development in general: backend (ruby/clojure/go) and react/ember.js on the FE. I can optimize queries well, and I know a lot about computer architecture, OSes, etc. And I work with people who are pretty much the same. There's a lot of people who can do the same.

Also I find it funny when he tries to put figures in years, when it generally doesn't take more than a year to know a framework inside-out (be it a BE or FE framework), given that you work with it full-time, already know another one well, and have enough experience in software development.

One might say "ah, but then if you do FE and BE it will always be sort of half-assed". Not really: you can test well, write very organised code and even manage the infrastructure using containers. Nowadays it's very easy to pretty much do everything, given that you work in a programming language with an extensive amount of libraries; in the end, web development nowadays is mostly about glueing stuff together while writing good code, everything well-tested and perhaps split up into different services.

But in the end...

Does it matter? No. What matters is if you enjoy working in the whole stack (or not) and if the company has a role available for you given what you want to do.

5
ropman76 1 hour ago 1 reply      
I recently moved into a full stack developer role. Not because I am a rock star ninja coder, but because I am the only developer employee in the entire company. The only difference now from some of my more specialist dev roles is that I spend a lot more time researching things on Google/Stackoverflow etc. So when I hear "full stack developer" I can't help but think "So you spend a lot of time on Google as well?" :)
6
jnordwick 1 hour ago 6 replies      
Why do you only ever hear about full stack developers in web developer context?

I work in FinTech and trading systems - very high performance and low latency systems. I deal with databases and huge amounts of data too. And I (as well as coworkers) have hacked together trading GUIs and simple CGI to throw together data visualizations. But none of us would ever call ourselves full-stack, and we'd never hire a front-end person who thought they were going to go mucking around the infrastructure. They just aren't good enough. This goes equally for the small startup-like firms I've been a part of as well as banks.

From what I've seen "full stack" seems to be entirely a web notion. Is it just that the backends are simpler - requiring little more than glueing together some Spring components for CRUD operations?

It seems to be a very new concept.

7
bryanlarsen 54 minutes ago 0 replies      
I can and have contributed to and/or written bootloaders and kernel drivers and web servers, as well as all the other stuff people generally consider part of the full stack. Does this make me even more "full-stack"?

Nah, it just means that, like every other experienced developer, I've got a "paint drip"[1] distribution of skills. My drip is just a little wider than other full stack developers, but the # and depth of those drips is probably comparable.

And that's what full stack means: you have exposure to the entire range of skills necessary to do front and back end, and are expert in more than one of those skills. But you aren't an expert in all of them, because nobody is.

1: https://www.facebook.com/notes/kent-beck/paint-drip-people/1...

8
Raphmedia 1 hour ago 0 replies      
My title is "full-stack developer". Does that mean that I'm the perfect developer? That I am senior in all the technologies that I touch? No.

It simply means that I aim to learn all parts of our stack and do my best to master them. It's not a chore to me. I don't have to kick myself to keep up with design trends and programming trends. I love both of those fields and can't even imagine not keeping up with both sides of the coin.

In my team there are some very senior back-end programmers. There's also some very senior front-end developers and very senior designers. Do I consider myself above them? Like some sort of unicorn rockstar? Nope. I'm simply a good support player.

I believe that a good full-stack developer is simply someone who is able to work on all parts of a project and help facilitate the communication between the different fields. This last part is the most important of all.

9
blackkettle 51 minutes ago 0 replies      
> On the other hand, if youre working in an agency/consultancy where its important for security/reliability/maintainability reasons to keep up to date with various areas of front end tooling, database types, AI, serverless, etc, then the answer veers closer to never.

Well I mean, the fact that they mention AI in the same breath, as though it entails the same scope as frontend tooling, seems a bit absurd. People with PhDs in 'AI' spend much of their professional lives keeping up to date with a very narrow subfield: "keyword-spotting for automatic speech recognition".

The idea that a 'full stack' developer somehow even encompasses that certainly takes it to the realm of complete fantasy.

But "competent to be usefully productive across the development spectrum", is surely possible.

10
atbentley 30 minutes ago 0 replies      
Value is only realised by completed features, so a developer who can deliver completed, value-producing features by themselves is a good thing. Teams should aim for compositions of experts in single fields alongside developers who know enough in all fields. This way features can be owned by a single developer, pairing when needed.
11
pbhjpbhj 38 minutes ago 1 reply      
I used to do some part-time (second job) lone web dev; this is why I don't anymore:

>"Maintaining a deep knowledge of both front end tools, libraries, and techniques (down to browser-specific quirks), as well as backend architecture [...] requires years of experience dedicated to each in addition to the time to keep up to date with how those areas are changing." //

Basically the time to keep up to date with all the tech is increasing; like a growing snake, it's hard to keep track of both ends, and the middle. Also, customers' expectations increase as familiarity with the web increases.

My definition of full-stack web dev would be everything front-end, including things like icon design and optimisation, handling hdpi, logo design, responsive page layout, SEO, font choice and optimisation, DOM scripting, etc.; through to in-flight issues such as browser caching, CDNs, DNS, security (certs, etc.); through to back-end reverse proxies, varnish, caching, failover, optimisation, actual production of the HTML (PHP in my case) [edit: not forgetting the DB, and its optimisation and management; I never got as far as sharding or anything tricky], and on to keeping servers updated and running securely (SSH key management, firewalls, backup, etc.).

(And in your spare time you do sales and marketing!)

I think it's near impossible to handle the full web stack now; I'd imagine splitting it into at least 5 roles.

Their definition of full-stack web dev appears to be just LAMP (or similar). So then you've got at minimum a 3-person team, adding a graphic designer and a server manager.

Full-stack elsewhere would be something like back-end, GUI design, UX, _and_ packaging/installers for distribution?

12
british_india 29 minutes ago 0 replies      
What a ridiculous article. It just takes a lot of experience to become an actual Full Stack Developer. Most developers begin in the front end and then have to claw their way out of that to become back-end developers. But that doesn't mean a back-end developer suddenly forgets everything they used in the past.

Also, there are developers working in smaller companies who have no choice but to be a full-stack developer. It just takes a lot of work. Disparaging the concept just because you've not mastered both ends does not in any way disparage the concept itself.

13
thegayngler 47 minutes ago 1 reply      
Have to agree with the article somewhat. It's very difficult to be great at every part of the stack. Big companies require this as their applications last longer than the usual startup's. Backend developers think they are full stack if they can throw HTML and JavaScript on the page. The current state of things is that the front end is way more complex, and having to consider user interactions ends up being as hard as throwing CRUD together.

On the other side of things, I know frontend developers who wouldn't dare touch the backend for any reason. So a full stack developer for most companies is someone who is willing to do back- and front-end development if they have to in order to ship a product.

For myself, if I don't like the backend stack I'll stick to the frontend, as that is what I specialize in. If the backend stack looks interesting to me then I'll want to work in it. Fortunately, I got a job at a company that can offer me a stack I'm interested in on both the front and back ends.

14
tyurok 1 hour ago 2 replies      
"Full stack developer" is just a new word for "programmer". "Programmer" could be a role for any part of the stack, you just have to solve the problem in hand and you're good.

If you're good or not, specialist or generalist, that's another discussion.

15
pawelkomarnicki 55 minutes ago 0 replies      
As a person that can get the product from scratch to production and scale it, I can say I'm a full-stack developer. Can I feel mythical now?
16
jondubois 1 hour ago 0 replies      
I've worked as a software engineer for major companies as a back end developer and as a front end developer at different times over the past few years (though big companies rarely give you the option to do both at the same time, which is a waste). With 14+ years of experience, anyone can be a full stack developer, and it can really speed things up if the company is willing to leverage your skills.
17
polote 54 minutes ago 0 replies      
> Depending on the scale of technologies in question, one could even argue that true mastery of both frontend and backend is impossible.

lol, saying that is equivalent to saying that mastering frontend OR backend is also impossible.

The reason of that is the term mastery. Mastery, probably doesn't exist, or at least doesn't mean anything.

Useless article

18
l5870uoo9y 1 hour ago 0 replies      
To me a full stack developer doesn't mean mastery of every technology out there or necessarily deep experience within a limited field, but a person with broad enough knowledge to independently and successfully execute a development process. This stands in contrast with the backend developer who rejects any UI work, possibly out of a lack of aesthetic sense.
19
wyldfire 51 minutes ago 1 reply      
> Only an individual who has had exposure to and experience in each of the elements of a stack can truly call themselves a full stack developer.

This term has always struck me as pretty nebulous. Where does it end? Do I have to have written an ISR, or a bootloader? Probably not, but the term "full stack developer" is unqualified, so how can I tell? No doubt this term is very context-sensitive. "Please guess what this means from what software you can tell our business is probably based on."

20
dc2 55 minutes ago 1 reply      
I've been developing for 10 years. I have never had a team, so I have had to build each stack from scratch, myself, including the research and decision making of each tool to use in the stack.

I've put together about 4 generations of systems in this time, each with entirely fresh stacks. The first was pre-build-tools, so I had to write my own module loaders and bundlers from scratch.

The latest web stack uses containerized deployment in a micro service architecture, sql, nosql, rest, graphql, a jwt-based authentication gateway and a modern front end stack.

Do I qualify as a full stack dev?

21
jlebrech 15 minutes ago 0 replies      
Full stack is someone who knows how to use just enough of each technology; perfect for starting an MVP.

I sometimes regret starting projects with distinct frontends and backends when something like rails would have sufficed.

22
strictnein 1 hour ago 1 reply      
23
zitterbewegung 1 hour ago 0 replies      
A "Full stack developer" is either one of two people.

1. A person starting a startup and they are the tech lead on their website.

2. A person who liaises with other departments but is still the owner of the project. They know enough to perform the business duties and may ask for help, or the system is simple enough.

If your web app becomes big enough it will be extremely difficult to have engineers who know the whole stack, but they could know enough to fix most problems and figure out when to escalate.

24
jrimbault 1 hour ago 2 replies      
What's your definition of a "Full Stack Developer"? I'd like to hear the opinions of people here.

Reading on Wikipedia, it seems that for someone living on Linux, knowing a full LAMP stack is... trivial. Configuring a debian box is like riding a bicycle. Once you have some good apache confs in your personal library, configuring apache is a breeze. Configuring MySQL (|mariadb) might need some googling if you want to do it right. Writing some php or python is just like writing software. Maybe adding on top of that some iptables and ssl certs.

I'm hoping for some discussion. I know there are more complicated architectures.

25
lojack 52 minutes ago 0 replies      
"There is no such thing as a full stack developer for our specific definition of full stack developer that happens to also support our claim"
26
killjavascript 1 hour ago 0 replies      
Is there only JS frontend these days? No

Most frontend is light work except for JavaScript. Can someone please kill this ugly monster? A few years ago, one wouldn't call themselves a developer if all they could do is string together a UI. JavaScript took us backwards. UI should be the simplest part of putting together an app.

Also, people are still writing server-side HTML, WPF, WinForms, Java, Android, and iOS front ends. So people should stop assuming frontend is only JavaScript.

27
neilwilson 32 minutes ago 0 replies      
Does this mean a full stack developer hasn't got one?

I'll get my coat.

28
wu-ikkyu 56 minutes ago 0 replies      
There are generalists and there are specialists. It just depends on the scope of your job duties within your team.
29
megamindbrian 1 hour ago 0 replies      
Takes too long to load. I assume this is because no one can do everything? There just isn't enough time.
30
kakarot 29 minutes ago 0 replies      
Isn't this just an argument of generalization vs specialization?

Of course the specialists will know more about their domain.

But the generalist is more useful in many situations. Both play their role in an effective team.

31
kerkeslager 1 hour ago 0 replies      
POPOVER WARNING: Page opens a popover which froze my phone's browser for 20 seconds.
32
zepolen 58 minutes ago 1 reply      
I'm a full stack developer. Myth busted.

Naysayers, ask me any question you like.

33
innocentoldguy 1 hour ago 1 reply      
I used to be a full-stack developer, back when the front-end was jQuery. Once the JavaScript community started going insane with new front-end frameworks every two weeks, and none of the companies I was interviewing with used jQuery anymore, I decided to only be a backend engineer. I don't like working with JavaScript anyway.
34
hairysc 1 hour ago 1 reply      
uh... i thought i am a full stack dev... there's something above tcp/ip?
35
mmjaa 1 hour ago 1 reply      
I've come to the conclusion that the only 'real' "Full Stack Developer" is one who, indeed, knows what the stack is, and how to use it.

Also, the heap.

Think about it - those who don't know these things, and don't care - usually gravitate around a singular technology that lets them ignore the details. Those who do know these things, and how to use them properly, usually don't have any particular focal gravity, and are prone to be more flexible, in terms of tooling and methodology.

To me, the "Full Stack" developer is simply someone who cares about whats happening under the hood. The "millennial developers" simply don't.

We had this same issue in the 70's, 80's and 90's, but of course the tools were moderately different. Where once you had Visual Basic guys who just drag and drop things around to get things done, now you have 'npm monkeys' who, for the most part, manage dependencies and the interconnections between.

If you don't know what a stack/heap/allocator is, there is no better time to learn! The world is so rich because of these things - and if you do get an understanding of your runtime environment, well .. there's always another execution environment for your pleasure. Have at it, hacker!

(Also, its been 40 years: do you know where in the OSI model your application lives?)

28
Small functions considered harmful medium.com
110 points by grey-area  5 hours ago   74 comments top 33
1
whack 2 hours ago 8 replies      
During my earlier years, I would get into all types of dogmatic debates, such as "DRY-considered-harmful" or "small-functions-considered-harmful". With experience, I've realized that such abstract debates are generally pointless. Any principle can lead to bad results when taken to an extreme, or badly implemented. Thus leading to people declaring that-principle-considered-harmful, swinging the pendulum to the opposite extreme, and beginning the cycle all over again.

Now, I find such discussions valuable, but only in the context of concrete examples. Devoid of concrete and realistic examples, the discussion often devolves into attacking strawmen and airy philosophizing. If this article had presented realistic examples of small functions that should have been duplicated and inlined, I think we can then have a much better discussion around it.

That said, I do have to offer a word of warning. It's possible that the author is a good programmer who knows how to incorporate long functions in a way that is still clear and readable. Unfortunately, I've had the misfortune of working with horrendous programmers who write functions that are hundreds of lines long, duplicated all over the place, and a pain to understand and maintain. Having short functions and DRYness is indeed prone to abuse, but it still works as a general guideline. Great programmers may be able to ignore these guidelines, but at least it prevents the mediocre ones from shooting themselves (and others) in the foot.

2
agentultra 1 hour ago 1 reply      
It seems like the author's idea of the term abstraction is limited to substituting procedures with functions (or worse, class methods). The claims that "all abstractions leak" and that adherence to DRY makes code "hard to follow" are what give me this impression. This line of thinking happens if you think of code in procedural terms.

And if your sense of abstraction is to hide procedural side-effects behind function applications then yes... I can see where you might get the idea.

A real abstraction like lambda doesn't leak. Using Monads to compose side effects doesn't leak. These are mathematical abstractions and we use them all of the time: even in programming and even if you don't identify as someone who's any good at maths. Learning the lambda calculus, algebra, and first-order logic will take you much farther than thinking in procedural steps.

Composing more interesting functions from smaller ones removes so many errors that procedural code has: removal of superfluous bindings, a more declarative style, and it makes code much easier to reason about during refactoring: using familiar tools from mathematics we can manipulate expressions and types. This is where abstractions really shine: you can manipulate higher-level expressions without caring about the details of those at the lower level. This only really happens if you care about purity in languages that don't do it for you and can reason using such tools.

3
blunte 2 hours ago 1 reply      
The goal is not to make small functions for the sake of making small functions, but it's to compartmentalize some functionality into a nice, easier to reason about thing (function).

Then you compose these easy to reason about things into more complex, but yet still easy to reason about things.

For people like me who struggle to maintain multiple layers of complex abstractions in our minds, being able to see a small function and say, "Ok, I trust this one - it does X." makes it easier to navigate up and down through the abstractions.

Perhaps part of my appreciation comes from living in Clojure and Elixir (after many years of several OOP languages).

4
coroxout 5 hours ago 3 replies      
I find a lot to agree with here.

It's all very conceptually neat and (if you're lucky) easy to read from the top down, where you enter one function and read off a list of other functions which are called in order.

But then if you look into any of those other functions they also call more functions and so on, several levels deep. And when you have to debug someone else's code because the data after function 15 of 17 isn't quite right, and you have to unpick all the places it's been passed through in slightly different versions and slightly different lists of parameters, it can be a nightmare.

Same with my linter telling me to close a file within a few lines of opening it. Personally I'd rather keep all the file-munging code in one place rather than scatter it down a rabbithole of nested functions as an exciting Alice In Wonderland story for future developers.

I try to come to a compromise on these things when working in a team, though...

5
dotdi 4 hours ago 1 reply      
Aside from the clickbait-y title, I find it quite disturbing to base that whole piece on what even the author describes as problems in "codebases inherited from folks who'd internalized this idea to such an unholy extent".

Indeed, small functions can be bad if you completely and utterly overdo it. But wait, that's true of nearly everything else.

6
moxious 5 hours ago 1 reply      
Shortness in a function is correlated with quality in design, but it doesn't cause quality in the design.

When we simply follow formulaic advice (keep all of your functions short) we lose sight of the wisdom behind why this was wanted in the first place.

The goal is to develop the wisdom that makes you a great engineer, not to "follow all of the rules".

7
hbt 13 minutes ago 0 replies      
>>> If you have to spend effort into looking at a fragment of code to figure out what its doing, then you should extract it into a function and name the function after that what.

Fowler is right about smaller functions and OP misinterpreted his statement.

This is what Fowler means https://gist.github.com/hbt/3e71146454a2d6388338af1d76394a13

Abstract the fragment of code into a function, keep it within the enclosing function until you need it elsewhere, and do not needlessly pollute the API with functions that are poorly named and used only one time.

8
throwanem 3 hours ago 1 reply      
Obsessive decomposition of the sort Fowler, for example, is cited preferring, very quickly becomes pathological - I suspect that the codebase Fowler describes in that tweet, to the developer as yet unfamiliar with it, reads like one of those old IBM field engineer manuals where a giant circuit diagram is spread across 800 letter-size pages, all bordered with dozens of numbered arrows each referencing the page on which a given trace continues.

But I sort of feel like Sridharan throws the baby out with the bathwater, too. I mean, in the CRUD example, carefully chosen abstractions make the code easier, not harder, to read - if I'm working to comprehend how the application handles UI state changes, I don't want to have that effort complicated by a bunch of user-creation-related database interaction; I'd much rather that be in a method call so I can deal with it it when I care about user creation, and ignore it when I don't. Same for email messaging and event log injection.

And I have to say that my experience gives me to think the idea of preferring duplication over abstraction is just completely, wildly off base. I mean, sure - any given abstraction is likely to change over time as feature requests and bug reports come in. That's just the job. But if the stuff that needs to change is abstracted, it only has to change in one place. If it's not abstracted, then it has to change in N places across the entire application, not all of which are guaranteed to be easy to find - after all, you probably don't have distinctive method or function names. Hope you've got good tests! Except you don't. Or maybe you do - I never have, at any point in my career where I've worked with a codebase in which copy-pasted code was prevalent, because such a codebase is a sign of an engineering culture that's far too weak to support investment in automated testing.

9
d--b 4 hours ago 3 replies      
> The idea that functions should be small is something that is almost considered too sacrosanct to call into question

Errr... Really?! I thought we all agreed that the first rule of programming style is that "it depends"...

When they say "small functions", they mean "not the 5000 loc VBA macro that has 50 Boolean arguments, and 30 side effects".

Breaking a function that does 1 thing into sub functions just so that each of them is smaller is not a good thing. And I think people realize that fairly quickly.

10
laythea 1 hour ago 0 replies      
Function borders should be drawn by defining abstractions, not by counting lines of code. Sometimes, the most useful abstractions involve large functions.
11
alexandercrohde 42 minutes ago 0 replies      
I think the author doesn't understand the small-function-philosophy (at least not the same way I do). Let me clarify how I see it:

- Build a ton of small functions that are reusable across any project. You are essentially making useful concepts. (i.e. a library). Bottom-up.

- Once you have those, a business problem often will only be about 3 or 4 of those powerful, reusable functions.

So something like sending out a newsletter might end up being:

 function sendNewsletter(letter) {
   database.getAllUserRowsAsIterator().forEach((row) => {
     sendmail(row.address, letter);
   });
 }

Now if we want the whole newsletter not to fail if there's a single exception, we can make another reusable construct "count exceptions" that's a wrapper function that catches all exceptions and builds a hashmap.
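
A sketch of what such a wrapper might look like (names are made up for illustration):

 // Wraps a per-item callback; failures are tallied instead of aborting the loop.
 function countingExceptions(fn, counts = {}) {
   const wrapped = (...args) => {
     try { fn(...args); }
     catch (err) { counts[err.message] = (counts[err.message] || 0) + 1; }
   };
   wrapped.counts = counts;
   return wrapped;
 }

You'd then pass countingExceptions((row) => sendmail(row.address, letter)) into the forEach and inspect .counts afterwards.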

If you want this to work in a larger project, this requires having reliably unit-tested code and doc-blocks so that other people can reuse your abstractions, and then having roughly comparable coding skill to you.

12
maxxxxx 16 minutes ago 0 replies      
How about "following a rule blindly without understanding its rationale and limitations considered harmful?". People need to understand what they are doing and why and make judgement calls.
13
nabla9 1 hour ago 0 replies      
Functions have several functions.

* Functions can be like new words in a language. If there is a new concept that is used frequently, it should be named or abbreviated and maybe listed in an index.

* Functions can sometimes be like chapters or sections. They are entry points listed in table of contents (API description)

* Functions are sometimes used like paragraphs. Used only once to make very long section easier to read. Not really functions. Way to structure text.

For paragraph use we might want blocks of code with a function-like interface. Code editors could collapse and expand them.

 let a, b, c = 100
 let p = 0
 paragraph "Intro to foo" (constant a, modify p) {
   let k, l, m, ...
 }
 assert (p < a)
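
For comparison, a bare block in a language with block scoping already gives the collapsible-scope half of this, though not the declared read/write contract - a hypothetical JS rendering:

 let a = 100, b = 0, c = 0;
 let p = 0;
 { // "Intro to foo": reads a, modifies p; k and l never escape
   let k = a / 2, l = k + 1;
   p = l;
 }
 console.assert(p < a); // passes: 51 < 100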

14
jstimpfle 4 hours ago 0 replies      
In a program, there's a lot to optimize for

 - minimize depth of call stack. It's easy to lose track in a deep call stack.
 - minimize function overhead:
   - names that have to be invented
   - function calls and arguments that have to be written (and read again)
   - many small functions: "ravioli code" where it's really hard to distinguish functions by their "function"
 - minimize/localize state - if it's easy to separate significant state into a function, do it. (Not making a statement about objects. They are long-lived and their state doesn't go away after the first call.)
 - DRY - multiple identical or similar code blocks are an opportunity to make a function. We can roughly categorize into essentially (conceptually) and accidentally similar code, and the latter case is not an indication for a new function.
 - obvious thread of control - simple code has the nice property that it's easy to understand by following or mapping it out sequentially. By contrast, highly abstracted or callback-heavy code is hard to understand at a global level.
 - consistency - ... helps understanding, but can also cause an implementation to be 5 or 10 times longer if applied dogmatically.
It's good to read all these articles ("avoid state", "avoid long functions", "avoid objects", "avoid functional programming", "avoid abstraction") and to deeply understand them. Which probably means making all the mistakes on one's own.

In the end it's important to know in which situations a particular style works well, and to not be dogmatic when choosing a style.

I would say a good program tends to have both large and small functions, and the large functions tend to be at the top of the abstraction stack.

15
mnarayan01 57 minutes ago 0 replies      
1. Functions which are "longer than they should be" decrease code quality.

2. Measuring the "number of lines per function" is extremely easy.

Code quality tools have a bad tendency to equate (1) and (2) for the same reason that the proverbial drunk searches for his keys under the streetlight. This has legitimately bad consequences.

This is further compounded by the way that small functions ease mock-based testing. While certainly attractive in the abstract, when a code base is overly influenced by this I find that it is substantially more difficult to understand via inspection.

All that said...I find the whole "X considered harmful" formulation almost unbelievably annoying. Here it doesn't even make any sense.

16
jondubois 1 hour ago 0 replies      
The loss of locality argument is a very good one. Having to jump around different files whilst holding layers upon layers of abstractions at the back of your mind to figure out the source of a bug is overwhelming and extremely distracting (you literally need to keep a call stack inside your brain to pull that off).

DRY is all fine until you need more information about the function than what the function name and documentation can provide - Sometimes you actually need to peek inside the code itself.

I don't think that code can ever be a 100% black box, especially as you move up higher in the chain of abstraction. This is particularly true for dynamically typed languages - these days in JavaScript I often find myself peeking inside a function's code before I invoke it. It's very easy to make false assumptions about the behaviour of a function based on its name alone, and often the documentation isn't enough and doesn't tell you anything about the performance of the function (is it O(1), O(n) or O(n^2)? You wouldn't want to call an O(n) function whilst in a loop over n, because then you'd get very crappy O(n^2) performance).
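
To make the parenthetical concrete, a plain JS sketch of that trap:

 const items = ['a', 'b', 'a', 'c', 'b'];

 // includes() is O(n), so this loop is O(n^2) overall:
 const seen = [];
 for (const item of items) {
   if (!seen.includes(item)) seen.push(item);
 }

 // The same loop with a Set is O(n), since has()/add() are O(1) on average:
 const seenSet = new Set();
 for (const item of items) {
   if (!seenSet.has(item)) seenSet.add(item);
 }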

17
andybak 2 hours ago 0 replies      
This was interesting:

> an awful lot of times, pragmatism and reason is sacrificed at the altar of a dogmatic subscription to DRY, especially by programmers of the Rails persuasion.

I'm not making this a tribal thing but it does sometimes feel like there's a cultural tendency in the Ruby and Javascript communities to... well... preach a little bit. All advice is flawed and most maxims are only partially true.

If a community bounces from one "one true way" to another all the time then it's probably not a particularly healthy environment for those who are learning - as they tend to lack the experience to put advice into the correct context.

18
euroclydon 1 hour ago 0 replies      
If you want to have an easy time refactoring code later, forgo OO patterns that have properties and methods in the same class. Instead, make classes for your data, and make sure to give each class a deep clone method.

Then, your logic goes in static functions that do not alter the input, but rather spit out new or cloned versions of the data in the output. Then you can reason and refactor at the method layer and not worry about hidden side effects.
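
A small sketch of that split (names hypothetical): the data class owns its deep clone, and the logic lives in static functions that return new values instead of mutating their inputs:

 // Data class: state plus a deep clone, nothing else.
 class Order {
   constructor(items) { this.items = items; }
   clone() { return new Order(this.items.map(i => ({ ...i }))); }
 }

 // Logic: static, and pure with respect to its inputs.
 class OrderLogic {
   static applyDiscount(order, pct) {
     const next = order.clone();
     next.items.forEach(i => { i.price = i.price * (1 - pct); });
     return next; // the original order is untouched
   }
 }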

19
deckiedan 4 hours ago 0 replies      
I wrote something myself trying to figure out my thoughts about this a few years ago (note: I was sleep deprived when writing it...):

https://madprof.net/nerdy/refactoring-really/

There certainly has been a over-emphasis in some quarters on 'each function should only do one thing' and even 'most of my functions are only one or two lines long'. Possibly because it makes it easier to write tests for, and be certain you've covered all of the possible options. You just then end up writing 4 million tests.

It's all about clarity, I think, but different people find different things clearer in different situations.

20
moocowtruck 2 hours ago 0 replies      
I feel like an article such as this should be full of code examples. Just talking about this or that, where there are so many situations in which "it depends", makes me sleepy.
21
blickentwapft 2 hours ago 0 replies      
"Considered harmful"?!? Why, this must be both IMPORTANT and LEGITIMATE, and the author has AUTHORITY. When a negative opinion carries such an academic, aloof, and professional sounding headline then I am inclined to give it great credence. This guy didn't just say "here's something I think is bad", he said it in fancy and upmarket words. Let's closely follow what they say.....

Seriously, HN should have code that auto flags anything including a subject line of "considered harmful".

22
enqk 2 hours ago 0 replies      
My theory is that the small functions motif appeared with practitioners of languages where the function is the only tool introducing a new scope / block for variable definitions. Languages like Python or Javascript. What about Smalltalk?

In languages where blocks can be introduced at will (stricter Algol descendants like Pascal/C/C++/Java) or are introduced by control structures, such a rule is much less necessary, and only the harm done (friction, readability) by fragmenting and obscuring logic remains.

23
alkonaut 2 hours ago 0 replies      
One problem with the example of functions

  A()
  B()
  C()
called in sequence is that they smell of imperative code: a recipe of some kind, of steps performed in sequence. That kind of code, when it occurs, should probably just be performed in a single function. That is, until either A, B or C can be re-used by other code without creating an unnatural abstraction. If the steps as separate functions can be tested separately - great. But if they are only ever used in this sequence - do you really need to test them individually? What good does the breaking up into functions do, compared to just some comment text in a longer method?

  // Step A: ... 3 lines
  // Step B: ... 4 lines
  // Step C: ... 3 lines
Answer: very little. And it forces the reader to scroll to see the relevant code. Some languages allow the use of local functions - which is basically just some trickery to help with variable scope and not have to use a comment line to call the sub-step something. Can be quite useful.

A better example of 3 functions is when you have

 y = A(B(x))
and then turn it into

 y = A(B(C(x)))
This works if C can have some kind of semantic meaning (e.g. C just fetches the price of item x before the rebate is applied by B). In this kind of functional code there is usually very little harm in making more and smaller functions. Not sure where I'm going with this, but I suppose it's kind of an argument for avoiding procedural code to begin with, and aiming to make actual functions.
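
A sketch of both shapes (all names invented; the step bodies are placeholders):

  // Imperative steps kept in one function, named with local functions
  // instead of comment lines.
  function buildReport(data) {
    function parse(d)    { /* ... 3 lines ... */ return d; }
    function validate(d) { /* ... 4 lines ... */ return d; }
    function render(d)   { /* ... 3 lines ... */ return d; }
    return render(validate(parse(data)));
  }

  // The functional variant, where each small function carries semantic
  // meaning of its own.
  var basePrice   = function (item)  { return item.listPrice; };  // "C"
  var afterRebate = function (price) { return price * 0.9; };     // "B"
  var withTax     = function (price) { return price * 1.2; };     // "A"
  var finalPrice  = function (item)  {
    return withTax(afterRebate(basePrice(item)));                 // y = A(B(C(x)))
  };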

24
imron 1 hour ago 0 replies      
For me, I generally try to break into smaller parts any function whose logic extends over one screen in length.

There are of course always exceptions to this, but having logic take up roughly a single screen size makes it easy to reason about.

25
realharo 2 hours ago 0 replies      
This old article explains it perfectly https://whathecode.wordpress.com/2010/12/07/function-hell/

Over-optimizing for local readability can hurt global readability in the end.

26
matttproud 4 hours ago 0 replies      
Better Stated: (overly) DRY considered harmful.

This is one thing about Go programming culture and its prevalent idioms I appreciate: low context switching cost.

27
Sergesosio 4 hours ago 0 replies      
Functions exist to prevent code duplication, not to comment code; that's what comments are for.
28
Newtopian 2 hours ago 0 replies      
A function should be X Lines long... but no longer

where X is any number of lines necessary for implementing the ONE thing that function should be doing

Any attempt to replace X with a concrete number will invariably sacrifice simplicity for the sake of that number.

29
DanHulton 1 hour ago 0 replies      
Anything, taken too far, considered harmful.
30
rhinoceraptor 4 hours ago 0 replies      
I think you can avoid a lot of these problems by using small inner functions.
31
calafrax 3 hours ago 1 reply      
This discussion is predicated on the concept that function size is calculated by the number of lines, which is completely wrong.

Function size (function complexity, actually) is measured primarily by indent levels, not length; when there are multiple indent levels with nested branches and loops, that is when you are supposed to create functions. Length is not really an issue in most cases.
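
A sketch of what extracting by indent level (rather than by line count) looks like; all names are invented for illustration:

  // Three indent levels of nested loops and branches: the signal to extract.
  function bigOrderLines(users) {
    var lines = [];
    for (var i = 0; i < users.length; i++) {
      if (users[i].active) {
        for (var j = 0; j < users[i].orders.length; j++) {
          if (users[i].orders[j].total > 100) {
            lines.push(users[i].name + ': ' + users[i].orders[j].total);
          }
        }
      }
    }
    return lines;
  }

  // Extracting the inner loop flattens the nesting; line count was
  // never the criterion.
  function linesForUser(user) {
    return user.orders
      .filter(function (o) { return o.total > 100; })
      .map(function (o) { return user.name + ': ' + o.total; });
  }
  function flatBigOrderLines(users) {
    return users
      .filter(function (u) { return u.active; })
      .reduce(function (acc, u) { return acc.concat(linesForUser(u)); }, []);
  }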

32
_pmf_ 2 hours ago 0 replies      
I blame "Clean Code"; it's recommended reading, but at its core it dumbs down refactoring to mechanically factoring out common code without actually crafting abstractions. I'm still in the process of unlearning this. The prime directive should be "craft sensible abstractions", not "avoid duplication at any cost"; even more so when we're talking about actually modular software, where duplication is much easier to tolerate than blurring responsibility lines. (I'm generally not a big fan of Uncle Bob.)
33
twii 3 hours ago 1 reply      
So, the author thinks this is harmful?:

 var area = (width, height) => width * height;
If not, it is just clickbait for me.

29
A rising sentiment that IBM's Watson can't deliver on its promises gizmodo.com
469 points by artsandsci  1 day ago   252 comments top 38
1
filereaper 9 hours ago 4 replies      
I'm quite late to this thread, but I worked on Watson very briefly (not on the core development, but overall system performance improvements).

I think there's a major misunderstanding of Watson which isn't helped by IBM's Marketing efforts. IBM Marketing has been slapping the "Cognitive" label on everything and is creating unrealistic expectations.

The Jeopardy-playing Watson (the DeepQA pipeline) was a landmark success in Information Retrieval. Its architecture is built largely on Apache UIMA and Lucene, with proprietary code for scaling out (performance) and for filtering & ranking. I'm not an expert on IR so I won't comment further. This is very different from the Neural Nets that are all the rage in ML today.

I'd like to point to the following links from David Ferrucci [1], the original architect of Watson, and this technical publication at aaai.org [2].

The DeepQA pipeline wasn't fluff; the intention was to take this question-answer pipeline and apply it to other verticals such as Law and Medicine, essentially replacing the Jeopardy-playing Watson's corpus of Wikipedia, Britannica etc. with legal and medical equivalents.

Given its runaway PR success, the Watson brand was applied to many other areas which haven't been successful but I'd like to point out what the original product was here.

[1] https://www.singularityweblog.com/david-ferrucci-on-singular...
[2] https://www.aaai.org/Magazine/Watson/watson.php

2
ChuckMcM 23 hours ago 5 replies      
When I worked at IBM, I expressed concern that the television commercials depicting a HAL9000-level interactive dialog system were dangerously overselling what Watson could do.

The challenge, as I saw it, was that no matter how good the tools and products used to help companies improve their operations through data analysis were, once customers realized they can't talk to a cube and joke with it about misusing colloquial phrases, their disappointment would overshadow all the 'good' stuff it was doing for them.

No relationship works well if it starts with a lie and as this article shows, people do take those ads at face value and assume there really is a talking AI inside of IBM. Then they are hugely disappointed when they find out it doesn't exist.

3
tangue 1 day ago 5 replies      
Crédit Mutuel (a French bank) has adopted Watson [0] and it's not encouraging: it was supposed to help with answering emails, but they had to manually describe the concepts in the emails and create topics, in what looks a lot like decision trees (and reminds me of this 1985 ad for Texas Instruments' Lisp AI: https://www.scientificamerican.com/media/inline/blog/File/De... scroll to see the ad)

Indeed the whole thing looks like a database with basic AI as a sales argument...

[0 - in french] http://www.silicon.fr/credit-mutuel-non-ia-watson-magique-17...

4
slackingoff2017 1 day ago 6 replies      
IBM is a dying giant; I've seen it languishing for years. Their massive screw-up was a decade ago, when they decided shareholder value was more important than having good engineers. They've since gutted their R&D departments, and all that's left are duds and underpaid, undereducated consultants rented from places like Accenture.

The only good thing to come out of IBM in years is their Hyperscan regex library and unsurprisingly they don't market it at all or build practical applications with it

5
laichzeit0 9 hours ago 0 replies      
I had problems with Watson to the effect that not even the documentation matches reality. There are some fairly basic things missing from their NERC offering. I can tell you that the missing functionality is so basic (e.g. negation) that, without a doubt, no one at IBM has ever used this offering in practice beyond a toy example.

The sales pitch seems to be that IBM Watson is some uniform AI in a box with a bunch of REST APIs to "expose" its intelligence. It's not. It's just a bunch of acquired products (you can see this when e.g. Watson Knowledge Studio breaks and you see the Python scripts that glue everything together in the backend) that are poorly integrated, probably because the left hand has no idea what the right hand is doing.

Caveat emptor!

https://stackoverflow.com/questions/44796501/ibm-watson-know...
https://stackoverflow.com/questions/44800879/ibm-watson-know...

6
chisleu 3 hours ago 0 replies      
I'm late but have something to add.

Until last week I was on a 6-month contract as a senior DevOps engineer for IBM/Watson. I was responsible for one of the huge real-time data ingestion pipelines that feed Watson. I left to work elsewhere in spite of being offered an excellent position. (If you guys are reading this, hi.)

I went to IBM not expecting much more than working as a cog in a lumbering giant.

Watson is the fastest-growing part of IBM. If IBM has all of its eggs in one basket, it is the Watson basket. There were lots of jokes about "cognitive" in the office pool.

That said, it was by far one of the best managed companies I've seen. They have some fantastic data engineers and scientists. They are backing most of the open source projects related to AI and next generation tech. Spark, VoltDB...

The ads might seem sensational, but the concept of a black box that orders preemptive maintenance for an elevator isn't far-fetched...

Moreover, Watson has so many current customers because it is valuable. The technical advisors that buy products don't put faith in ads any more than we do.

7
notfromhere 1 day ago 1 reply      
The dirty secret is that IBM Watson is just a brand for their army of data consultants, and their consultants aren't very good. In my experience working for a competitor in this space, IBM Watson was widely agreed to be smoke and mirrors without much going on.
8
peteretep 1 day ago 4 replies      
A couple of years ago I was given a project that was essentially "Evaluate Watson APIs to see if there's anything there we could make use of", and came away with the distinct impression that it was largely smoke and mirrors, and there was very little that was either effective or interesting there.
9
blueyes 1 day ago 2 replies      
IBM has almost zero credibility in deep learning and AI. They haven't hired anyone of note. They haven't produced any novel or influential research in the field in years. And yet they air these cheesy Dylan ads and the rubes fall for it. Watson is a Theranos-scale fraud, and it's finally coming out.
10
throwaway_ibm 1 day ago 0 replies      
I know someone who is intimately involved with IBM Watson; they are highly educated and constantly diss the system, calling it "just a large database". If Watson were a true breakthrough, it should be gaining market share throughout its specialities, but it's not. Google is leading the industry with DeepMind; Facebook and Microsoft aren't far behind. I'd encourage others to be very skeptical of the PR that IBM is pushing about their Watson product.

disclosure: I haven't read the article but wanted to share a related story.

11
ams6110 1 day ago 0 replies      
What? A brand name which is just a word meaning "IBM Enterprise Products and Services" doesn't really live up to the marketing hype? I can't imagine such a thing.
12
strict9 1 day ago 0 replies      
Many years ago when I worked for a company that decided our existing ecommerce app was too terrible to fix and would be too much effort to rebuild, we talked to a number of vendors, including IBM. The marketing materials and salespeople made a compelling case, but deeper dives into the app itself and the support engineers behind it convinced even the most enthusiastic internal cheerleaders to look elsewhere.

In recent years, as news articles appeared heralding the future of Watson for various industries (including healthcare and supply chain), I predicted a similar path: an amazing product in a very narrow environment, designed specifically for marketing and selling purposes, and not very adaptable.

FTA: "And everybody's very happy to claim to work with Watson," Perlich said. "So I think right now Watson is monetizing primarily on the brand perception."

This is painfully obvious, as this has been IBM for a very long time.

13
scottlocklin 22 hours ago 1 reply      
Yeah, well, "duh."What boggles my mind is people will read this, nod sadly, and continue not to notice that a whole bunch of what they think they know about machine learning, autonomous vehicles and so on is also marketing department hype.
14
jjm 11 hours ago 0 replies      
They had so much time to contribute, but instead they chose marketing and pushed into areas they didn't really have a handle on yet. As in, management didn't understand.

I mean all the datasets, dozens of libraries, stunning NN demos and training sets, TPUs (multiple versions at that!) all could've come out of the company.

Think if Keras and TensorFlow were from IBM. Or all those cars now running Nvidia Jetson, or mega datacenters running NV100s or Google TPUs.

Shoot they even had a chance to enhance PowerPC ICs for NNs.

Alas but nope.

15
simonh 1 day ago 2 replies      
This goes to show just how tragically far away we are from even beginning to build the rudiments of a strong general-purpose AI. For all the fantastic achievements of systems like Watson and AlphaGo (and they are amazing achievements), they are radically optimised special-purpose systems, fine-tuned to solve one extremely specific and narrow problem, and that problem only.

Watson is a case study in this, but I know Google has big plans for applying the tech behind AlphaGo in medicine. I wish them every success, but I'm concerned they will hit similar specialisation issues.

16
dpflan 1 day ago 1 reply      
I like how IBM does very elaborate marketing ploys to hype its wares, like Deep Blue competing against Kasparov and Watson competing against Jennings to showcase IBM's engineering prowess. It does sell the idea pretty well, I think, but perhaps the idea is too grand/too far ahead of the present.
17
speeder 1 day ago 2 replies      
I actually love the idea of Watson being used for healthcare...

Sadly I think it is being used wrong...

IBM is focusing on using Watson to cure very specific diseases, like certain types of cancer.

I think a far better use for Watson would be initial diagnosis. For example, my life got massively delayed because I got hypothyroidism as a teenager, but only by using internet data could I self-diagnose and self-treat (doctors are still unwilling to help, not trusting the data; and before someone comes to berate me for self-treatment: it is working...). As an adult I could finally get my life 'started' (hypothyroidism affects physical and mental development, and slows down the metabolism and the brain).

During my quest I met many, many, many people on the internet who had self-diagnosed with something using the internet as a tool. All of us would have been diagnosed properly if Watson were being used in the doctor's office, with its data-crunching capabilities and our symptoms as input to find out what problem we had. (In my case: I have Hashimoto's disease.)

18
ghostly_s 16 hours ago 0 replies      
I overheard a good-ol'-boy businessman at a hotel bar a few months back. He bore an eerie likeness to Bosworth from Halt and Catch Fire, and was telling a younger gentleman about a project he worked on: "...so Watson comes in and they Algorithm the whole thing..."

I'm pretty sure he thought Watson was a person.

19
tCfD 1 day ago 1 reply      
Obvious fix is for IBM to put Watson on a blockchain /s
20
Probooks 22 hours ago 0 replies      
The problem is deeper (and simpler). IBM does not look for clients, but rather victims. We clients end up caught in an internal upselling fight. Nobody cares which is the best solution IBM as a whole can offer you (their own people do not even know all their available tools!), but rather how much suboptimal stuff each salesman can load onto you. I'm on my way out of IBM...
21
megamindbrian 1 hour ago 0 replies      
Everything to do with IBM is too expensive for the average user.
22
dislikes_IBM 22 hours ago 2 replies      
IBM has a toxic culture. They are the vendor lock-in gods. Every company I've ever worked for has cringed at the mention of IBM, never suggested them as a new solution, and always regretted whatever, if anything, they locked themselves into.

They are the only company that charges you to sample their APIs. They are the absolute worst, an infection that needs to be cured.

23
crsv 1 day ago 1 reply      
Replace IBM's Watson with anything branded with "AI" right now, and the themes in the article still hold up.
24
batmansmk 1 day ago 4 replies      
You can try it by yourself: https://alchemy-language-demo.mybluemix.net/

Imagine analyzing product reviews to determine whether each one is positive or negative. Type "I like it", and see the inaccurate targeted sentiment (neutral instead of positive).
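
For reference, calling the underlying service from code looked roughly like this with the Node SDK of the era. This is a sketch from memory: the credentials are placeholders and the exact method and parameter names are assumptions, so check the current docs before relying on them.

  var NaturalLanguageUnderstandingV1 = require(
    'watson-developer-cloud/natural-language-understanding/v1.js');

  var nlu = new NaturalLanguageUnderstandingV1({
    username: 'YOUR_USERNAME',   // placeholder credentials
    password: 'YOUR_PASSWORD',
    version_date: '2017-02-27'   // assumed version date
  });

  nlu.analyze({
    text: 'I like it',
    features: { sentiment: { targets: ['it'] } } // request targeted sentiment
  }, function (err, response) {
    if (err) return console.error(err);
    console.log(JSON.stringify(response.sentiment, null, 2));
  });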

25
ExactoKnight 14 hours ago 0 replies      
Watson's Natural Language Classifier, in particular its categorization API, is actually pretty impressive...
26
etiam 1 day ago 0 replies      
It's tempting to start whispering "winter is coming", but I think one may reasonably hope that the current fashions at large have enough nuance to differentiate between this particular marketing gimmick and the broader developments in ML.

Personally, I'd be happy to see the paragraphs/minutes at the beginning of far too many interviews about "intelligent" machines repurposed: instead of straightening out the misconception that Watson is an example of this hot new "Deep Learning" thing and one of the pinnacles of achievement in the field, leading researchers could give some more valuable type of commentary.

27
dboreham 1 day ago 4 replies      
Bundle up for the second AI Winter...
28
outside1234 1 day ago 0 replies      
You don't say! This is IBM consulting ware? Who would have guessed!
29
et2o 1 day ago 1 reply      
I saw a very humorous Twitter exchange between a bioinformatician and IBM Watson's Twitter account. The scientist asked them to provide any peer-reviewed ML publications, and the best they could do was an abstract at a regional conference no one has heard of. And it was a terrible abstract.

It's completely marketing. IBM still has a good name among people who don't know much about technology. They're trading on this and the current saturation of 'machine learning' in the popular press.

30
currymj 1 day ago 0 replies      
it's just a brand name at this point, which they attach to any machine learning they develop or acquire, and they should stop trying to sell it as a distinct technology.
31
ceedan 1 day ago 1 reply      
Does IBM itself even "use" Watson?
32
PaulHoule 1 day ago 0 replies      
This was my opinion when they started running these ads. My opinion has actually softened a little.

Some of the cognitive services they are offering today are not half bad; also I can say their salespeople are doing a gangbusters job in places.

33
diego_moita 1 day ago 1 reply      
> "In the data-science community the sense is that whatever Watson can do, you can probably get as freeware somewhere, or possibly build yourself with your own knowledge"

Any suggestions about the freeware?

34
d--b 23 hours ago 0 replies      
Watson's mistake was to go the chatbot route. Promising natural-language input for all underlying problems simply discredits everything else...
35
moomin 1 day ago 3 replies      
Completely off topic, but didn't IBM have a system called Watson in the 1990s that was used by the police? Try as I might I can't find a reference for it anywhere.
36
riku_iki 1 day ago 0 replies      
Any first firings for choosing IBM?..
37
iamleppert 1 day ago 2 replies      
The real crime is using kids with cancer to sell your product. I mean, who even does that? Even if you could cure cancer for kids, I find it incredibly tacky to go around making commercials about it, commercials that aren't targeted at those who are actually in a position to use the technology but are instead used to market to other, tangential industries where the real money is. It's just despicable, and you can tell right there that it's smoke and mirrors.

There's a special place in hell for anyone working at IBM or involved in the Watson project who is supporting this thing. It's damaging the legitimate deep learning/machine learning industry, generally making a fool out of IBM, AND giving children with cancer false hope... just so IBM can try to stay relevant and make money?

38
potatoman2 1 day ago 2 replies      
Did they really need to stick an insult at Trump in there?
30
Implementing Font-Size in Servo: An Unexpectedly Complex CSS Property manishearth.github.io
236 points by kibwen  17 hours ago   31 comments top 10
1
tannhaeuser 7 hours ago 1 reply      
What an amazing article. Thanks for sharing.

I've been wondering for some time now whether CSS should have a formal semantics, both to help implementations and for posterity (e.g. not leaving behind a mess of specs that makes it infeasible to implement browsers from scratch for generations to come).

For an example of what I'm after, see [1], which uses attribute grammars/logic grammars for a small fragment of CSS box layout and is one of the precious few attempts at a formal semantics for CSS.

[1]: https://lmeyerov.github.io/projects/pbrowser/pubfiles/extend...

2
Illniyar 5 hours ago 3 replies      
These kinds of complexities are why, as much as I would love it to, the web as it is cannot compete with the performance of native apps.

What is natively a simple number (maybe two) becomes dozens of pages of specification, which results in an insane amount of work to deduce the value of a single variable.

Layout and rendering is probably even worse in this regard.

3
christophilus 7 hours ago 0 replies      
I'm always amazed by how performant complex CSS can be. Really complicated pages render in a few hundred ms, even though so much complicated computation is happening under the hood. Browser developers are true wizards.
4
leeoniya 9 hours ago 1 reply      
> A lot of the web platform has hidden complexities like this, and it's always fun to encounter more of them.

"fun" is an odd choice of words here; () seems more apt.

Also, what's the future of MathML? What's the point of continuing to support a standard that Chrome has decided not to implement? It means 50% of web users won't see MathML, so web authors will never bother writing it.

There was this a while ago: https://news.ycombinator.com/item?id=11444830

5
mynewtb 9 hours ago 2 replies      
What an interesting and educational post! Thanks!

It's baffling just how complex CSS has become over the years. I wonder: is the extra work involved in all the relative tracking something to consider for website performance?

6
irrlichthn 9 hours ago 0 replies      
I wrote a free responsive website editor (named "RocketCake") and was surprised by how complex CSS and these HTML rules are to implement, even though I didn't need all of them. When I told my fellow programmers, they didn't believe me. I'll send them a link to this. Nice article!
7
rhythmvs 6 hours ago 2 replies      
> monospace fonts tend to be wider, so the default font size (medium) is scaled so that they have similar widths

> the base size depends on the font family and the language [...] Default system fonts are often really ugly for non-Latin-using scripts.

"Often", "tend to be": I am worried. I think it's a really bad idea to deflect default behavior based on such assumptions, certainly when the deviations are a blind process triggered by proxies (like language tags and some vaguely statistical rules of thumb for dealing with generic font family names). That is: without even looking at the actual design and metrics of the actual font involved.

What happens if the monospaced font in question has a normal x-height and/or an advance width equal to that of its serif counterpart? What if the CJK and Devanagari fonts have characters drawn already big on the body? Then such hard-coded default moonshot fixes, which try to cater for the lowest common denominator, will make things needlessly hard to debug and still force the designer to ad-hoc size-adjust font per font, but now also trying to fix the browser's fixes. (Too bad: no `normalize.css` will help.)

And yet, all of the needed data is available in the font file. There's even a dedicated CSS property for dealing with fonts' varying metrics: `font-size-adjust`. Not that browser makers care to implement it, but since the OP's post concerns Firefox (which does support `font-size-adjust`, though the article does not discuss it), I wonder: is it a matter of performance that retrieving the actual font metadata and metrics is left out of the equation? Surely the fact that local font files, base64 data-URI-embedded ones, or externally hosted ones can all be used makes implementation all but trivial...

At Textus.io we're going to great lengths to solve typographic issues such as these. Case in point: for each font we read out the `xHeight` value, then calculate the actual font-size relative to the font's UPM (unitsPerEm), so we have consistent apparent font-sizes, c.q. aspect ratios.
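
A sketch of that calculation using opentype.js (linked below); the font path and target ratio are invented, and treating the OS/2 `sxHeight` field as the x-height source is an assumption about the fonts involved:

  opentype.load('/fonts/SomeFace.woff', function (err, font) {
    if (err) throw err;
    var upm = font.unitsPerEm;           // design units per em
    var xh = font.tables.os2.sxHeight;   // x-height in design units
    var target = 0.48;                   // desired apparent x-height ratio
    // px size whose rendered x-height matches 16px * target:
    var px = 16 * target / (xh / upm);
    console.log(px.toFixed(2) + 'px');
  });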

I think it all boils down to a separation of concerns: proportion and interrelated sizes (ascender, caps, descender, x-height, stem width, etc.) are at the discretion of the font designer; the overall aspect size is the business of the typesetter (CSS stylesheet author); and the browser ought always to draw consistently, regardless of generic font family name, language and/or Unicode code range.

https://opentype.js.org/font-inspector.html
https://medium.com/@xtianmiller/your-body-text-is-too-small-...
https://github.com/h5bp/html5-boilerplate/issues/724
https://developer.mozilla.org/en/docs/Web/CSS/font-size-adju...
http://caniuse.com/#feat=font-size-adjust
http://www.textus.io

8
LoSboccacc 5 hours ago 0 replies      
He's missing sizing in viewport dimensions, though that is probably reducible to the pixel dimensioning adjusted for zoom.
9
_pmf_ 8 hours ago 0 replies      
Would one really expect anything at all in CSS to be simple?
10
p0nce 7 hours ago 0 replies      
Thanks, I had always wondered how font-size was implemented.
       cached 11 August 2017 16:02:01 GMT