Hacker News with inline top comments - 17 Sep 2016
1
Google's HTML Style Guide: Omit Optional Tags google.github.io
105 points by franze  1 ago   56 comments top 19
1
kgtm 48 ago 4 replies      
Maybe it will make more sense once it fully sinks in, but I think in general it is a mistake to make developers think about when and where certain things can be omitted. It's more straightforward to simply do one thing, consistently, following the "explicit is better than implicit" mantra.

What happened to optimizing for mental overhead instead of file size? This should simply be a build step, part of your minification and concatenation dance, rather than something that forces me to weigh all of this when deciding whether to close my <p> tag or not:

A p element's end tag may be omitted if the p element is immediately followed by an address, article, aside, blockquote, details, div, dl, fieldset, figcaption, figure, footer, form, h1, h2, h3, h4, h5, h6, header, hgroup, hr, main, menu, nav, ol, p, pre, section, table, or ul element, or if there is no more content in the parent element and the parent element is an HTML element that is not an a, audio, del, ins, map, noscript, or video element, or an autonomous custom element.

2
tedmiston 41 ago 2 replies      
Just to put the code sample here...

  <!-- Not recommended -->
  <!DOCTYPE html>
  <html>
    <head>
      <title>Spending money, spending bytes</title>
    </head>
    <body>
      <p>Sic.</p>
    </body>
  </html>

  <!-- Recommended -->
  <!DOCTYPE html>
  <title>Saving money, saving bytes</title>
  <p>Qed.
Does the <head> tag really not matter anymore?

3
the_mitsuhiko 51 ago 1 reply      
That's the style guide that Flask and the other Pallets docs have had for many years already. People keep opening pull requests to change it and are always surprised when I point out that it's not only not wrong but actually per the spec.

If you consider how the parser for HTML5 actually works, many of the closing tags you would encounter don't actually add any value unless you have some trailing text that should be attached to the parent node.

4
jap 4 ago 0 replies      
I've been omitting optional tags for a while.

One thing I've noticed is that Bing webmaster tools will report "The title is missing in the head section of the page" when there is a title but no <head>. Maybe Bing can't properly crawl pages without a <head>. Another service I've used had the same problem, but I can't remember which.

So it might be worth being careful about omitting <head> - and maybe other tags; I'm reconsidering whether it's a good idea.

5
niftich 17 ago 2 replies      
I know that HTML5 deliberately throws out the SGML heritage (to say nothing of XHTML) and makes all of this valid, but this just feels like another micro-optimization that Google promotes because at their scale, every little bit helps.

Besides, isn't this "visual redundancy" (not to be confused with semantic redundancy) what compression is supposed to solve, and has been solving since, effectively, forever? So that we can code to reduce our (and the 'view source'-reader's) cognitive load, and let gzip or brotli or whatever new scheme work its compressive magic before it squirts our payload across a newfangled binary HTTP/2 protocol?

6
Pxtl 57 ago 3 replies      
As somebody who does a lot of XML, I'm weirded out by the idea that the root tags are optional. I mean, I get certain child elements and attributes being optional, but the parent ones? That's... hard.
7
tangue 57 ago 1 reply      
Ok, I've just discovered that in the HTML5 specification you can omit tags. I've always been reluctant to push Jade to my coworker, but it makes much more sense now.
8
keeganjw 52 ago 1 reply      
However weird it feels, this makes sense. Why did we ever have to do things like use an HTML tag immediately after declaring the DOCTYPE as HTML anyway?
9
LethargicStud 8 ago 0 replies      
One thing I don't quite understand is omitting the protocol. If you don't know the protocol, fine, it makes sense to omit it. However, if you know a resource can always be loaded via HTTPS (e.g. from a CDN), isn't it safer to force HTTPS?
10
sanketsaurav 23 ago 0 replies      
It looks counter-intuitive, though -- even if it is the spec. Especially for beginners, who might feel completely out of place. As other people have pointed out here, this is better implemented as a step of the build process if you really want to save on those bytes. Counter-intuitive patterns are nightmares for devs.
11
franze 36 ago 0 replies      
Here is an edge case: the <head> tag might be optional, but HTML elements do have different behavior when placed in the <head> section versus the <body> section.

Namely, the always beloved <noscript> tag https://developer.mozilla.org/en/docs/Web/HTML/Element/noscr...

which is a flow content element in the body section,

but if used in the <head> it may include link, style and meta tags, and then it should not be treated as a content element.

As the <head> element therefore changes the behavior of its child elements, does this make it non-optional?

P.S.: I think DOMParser.parseFromString() in Chrome gets this <noscript> behaviour wrong in some cases (it closes the <head> section because it treats the <noscript> tag as a content element, even though it is in the <head> with just link & style children, so it shouldn't close the <head>...)

12
innatepirate 55 ago 1 reply      
In-spec or not, I don't like it
13
dom96 49 ago 1 reply      
Hrm, is anybody else's browser not navigating to the "Optional Tags" anchor?
14
hooph00p 59 ago 1 reply      
I can get behind this.
15
reimertz 47 ago 1 reply      
Scripts in the <head> tag work perfectly as well, as long as you position them above elements you would put in the <body>. I will start doing this!

demo: http://jsbin.com/duqonahiyi/1/edit?html,console,output

16
mschuster91 6 ago 1 reply      
It's screen-scraping protection, too.

Documents malformed this way cannot be parsed, e.g. with PHP's DOM functions, without significant headaches.

17
mozumder 52 ago 0 replies      
The rules are really complicated here. It's best to do this using a minifier instead of through hand-coding.
18
Kenji 49 ago 1 reply      
Uhh... I wonder how large the percentage of users is that see a broken site if you strictly adhere to that principle.
19
ebbv 31 ago 2 replies      
Boo. Maybe it's over the top but for me the fact that something this awful made it into Google's official style guide tells me the nuts are really running the asylum over there. Was nobody in charge doing web development in the 90s or even the early 2000s? Has nobody there ever been put in charge of a legacy site that was written this way? There's a reason we all agreed to stick to standards and make our HTML verbose in the mid-2000s.
2
In India, a Rich Food Culture Vanishes from the Train Tracks: The Salt npr.org
18 points by kposehn  51 ago   3 comments top
1
rayiner 9 ago 2 replies      
Sounds . . . pungent.

I used to take the Amtrak every day from Union Station in DC to Penn Station in Baltimore. Friday evening trains were always full of people who brought dinner onboard. The food smells were awful--there is a reason Amtrak's Cafe Car only sells bland food. Being in a train without air conditioning and the smell of curry permeating the air sounds like my own personal hell.

3
The basic neurobiology behind the 12-dot illusion theneurosphere.com
96 points by neurosphere  4 ago   10 comments top 7
1
Sharlin 1 ago 1 reply      
The fovea, the area of high-resolution vision in the middle of the field of view, is surprisingly small, just a few degrees across. The resolution falls rapidly outside the fovea [1]. A lot of the detail we perceive in the periphery is actually the brain filling in blanks based on "cached" data.

The resolution drop could in principle be taken advantage of in computer graphics, especially in VR applications with robust enough eye tracking [2].

[1] https://en.wikipedia.org/wiki/Fovea_centralis

[2] https://en.wikipedia.org/wiki/Foveated_imaging

2
chakalakasp 52 ago 1 reply      
The way the eye interfaces with the brain never ceases to amaze me. Another fun fact is that it even has its own error correction mechanisms, one of which you can intentionally miscalibrate in order to see colors that aren't there for days, weeks, and sometimes even months after doing the calibration (which is achieved by looking at a very specific image pattern for a long time). I would not recommend actually doing this, as those who have have reported back that the illusory colors become quite distracting and actually cause a bit of emotional distress after many months, but it's crazy that the brain has this kind of chromatic aberration error correction programmed in in the first place. https://en.wikipedia.org/wiki/McCollough_effect
3
kevinalexbrown 44 ago 0 replies      
I'm not sure the central argument fully explains the illusion.

If you can simultaneously see several stars in the sky with your peripheral vision using averted gaze, why not several dots?

I suspect uncertainty plays a part, but image completion from higher-order feedback that completes the lines might drive the illusion more. Put another way, I believe that if you remove the gray lines, the illusion ceases to work.

4
dharma1 26 ago 0 replies      
I was looking into this when the 12-dot illusion came out, and found an interesting refutation of the receptive field theory regarding a related illusion - the original Hermann grid. I'm not sure if it also applies to the 12-dot illusion, where the illusion seems to be more about foveal/peripheral accuracy.

You can read about it here, and play with a demo:

http://www.michaelbach.de/ot/lum-herGridCurved/index.html

http://web.mit.edu/bcs/schillerlab/research/A-Vision/A15-2.h...

5
dahart 21 ago 0 replies      
It's pretty surprising to learn how fast visual acuity falls off outside the fovea. Our brains are amazing at making us think we can see a wide field of view when we really can't.

This article was great, fun to read. I think this chart summarizes the whole thing:

https://goo.gl/images/e4JKt4

If you fixate on the dot in the middle, all letters are equally legible to your eyeballs. This lets you see directly the difference in resolution between your fovea and your peripheral vision.

6
erelde 1 ago 1 reply      
I found that once I zoomed in to have only 6 dots on the screen and zoomed out, the illusion disappeared and I was able to see the 12 dots simultaneously.

I tested it on a friend and saw the same thing, but he had to stay zoomed in longer than me; for me it was instant, while he had to stay focused for 5 or 10 seconds.

And now, even 48 hours later without being re-exposed to the image, I can't not see the 12 dots.

7
tomrod 1 ago 0 replies      
This was a fascinating read about the neurobiology of the visual system. Loved it!
4
Mark Zuckerberg Helps in Publicising 11-Year-Old Nigerian Tayo's Spike Rush App naijafixer.com
19 points by abula  1 ago   7 comments top 3
1
hardwaresofton 1 ago 2 replies      
Maybe I'm too much of a pessimist/cynic, but while I certainly love that Zuck is doing things like this, I hope he's not just trying to butter Nigeria up for the Free Basics anti-open-internet bullshit that he tried to pull on India.

Nigeria and Africa are definitely the next big frontiers for the internet, and I honestly doubt that whatever Facebook's got planned is mutually beneficial enough. Though if FB decides to establish and maintain a persistent, competent power company in Nigeria, it might actually be worth it; NEPA is shit.

2
cmarschner 1 ago 1 reply      
Great to see Mark promoting talent from Nigeria. Africa has yet to tap the potential of its people, and Nigeria will play a central role.
3
vegabook 31 ago 0 replies      
Facebook's transparent attempt to own the internet in Africa is scary, distasteful, but strangely tinged with inevitability due to Mark Zuckerberg's unstoppable hunger and relentless competence. I hope Africa is as savvy as the West (has belatedly become) to his self-serving PR stunts, and gives him an almighty neo-colonial kick in the butt.
5
Ultrasound Haptic Technology Could Revolutionise Man-Machine Interaction theengineer.co.uk
5 points by M_Grey  32 ago   1 comment top
1
visarga 12 ago 0 replies      
Ultrasound haptic porn site in 3..2..1..
6
Consistency is Consistently Undervalued kevinmahoney.co.uk
167 points by kpmah  8 ago   71 comments top 18
1
vidarh 5 ago 7 replies      
My opinion is exactly the opposite: Consistency is overvalued.

Requiring consistency in distributed systems generally leads to designs that reduce availability.

Which is one of the reasons that bank transactions generally do not rely on transactional updates against your bank account. "Low level" operations as part of settlement may use transactions, but the bank system is "designed" (more like it has grown by accretion) to function almost entirely by settlement and reconciliation rather than holding onto any notion of consistency.

The real world rarely involves having a consistent view of anything. We often design software with consistency guarantees that are pointless because the guarantees can only hold until the data has been output, and are often obsolete before the user has even seen it.

That's not to say that there are no places where consistency matters, but often it matters because of thoughtless designs elsewhere that end up demanding unnecessary locks and killing throughput, failing if connectivity to some canonical data store happens to be unavailable, etc.

The places where we can't design systems to function without consistency guarantees are few and far between.

2
brandur 5 ago 3 replies      
Amen. Whether or not the article's example is a good one, in a world without consistency you need to worry about state between _any_ two database operations in the system, so there's nearly unlimited opportunity for this class of error in almost any application found in the real world.

The truly nefarious aspect of NoSQL stores is that the problems that arise from giving up ACID often aren't obvious until your new product is actually in production and failures that you didn't plan for start to appear.

Once you're running a NoSQL system of considerable size, you're going to have a sizable number of engineers who are spending significant amounts of their time thinking about and repairing data integrity problems that arise from even minor failures that are happening every single day. There is really no general fix for this; it's going to be a persistent operational tax that stays with your company as long as the NoSQL store does.

The same isn't true for an ACID database. You may eventually run into scaling bottlenecks (although not nearly as soon as most people think), but transactions are darn close to magic in how much default resilience they give to your system. If an unexpected failure occurs, you can roll back the transaction that you're running in, and in almost 100% of cases this turns out to be a "good enough" solution, leaving your application state sane and data integrity sound.

In the long run, ACID databases pay dividends in allowing an engineering team to stay focused on building new features instead of getting lost in the weeds of never ending daily operational work. NoSQL stores on the other hand are more akin to an unpaid credit card bill, with unpaid interest continuing to compound month after month.

3
matt_wulfeck 10 ago 0 replies      
Maybe I'm crazy, but I never see atomic libraries that are called like this:

  bank_account2.deposit(amount)
  bank_account1.deposit(amount)
Isn't this kind of thing always called in some atomic batch operation?

  transact.batch([
    account[a] = -8,
    account[b] = 8
  ]).submit()
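For comparison, here is a minimal sketch of that two-step update as a single transaction, using Python's built-in sqlite3 module; the accounts(id, balance) table is assumed for illustration. The point is the same as the batch above: both updates commit together or not at all.

  import sqlite3

  conn = sqlite3.connect("bank.db")

  # Using the connection as a context manager commits on success and rolls
  # back automatically if either UPDATE raises, so the two balance changes
  # are applied together or not at all.
  with conn:
      conn.execute("UPDATE accounts SET balance = balance - 8 WHERE id = ?", ("a",))
      conn.execute("UPDATE accounts SET balance = balance + 8 WHERE id = ?", ("b",))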

4
phamilton 26 ago 0 replies      
This profile example is missing the better approach: avoid the dependency of creating the user before creating the profile.

Create the profile with a generated uuid. Once that succeeds, then create the user with the same uuid.

If you build a system that allows orphaned profiles (by just ignoring them) then you avoid the need to deal with potentially missing profiles.

This is essentially implementing MVCC. Write all your data with a new version and then as a final step write to a ledger declaring the new version to be valid. In this case, creating the user is writing to that ledger.
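A minimal sketch of that ordering in Python; profile_store and user_store are hypothetical key-value stores standing in for whichever services hold each object:

  import uuid

  def create_account(profile_data, user_data, profile_store, user_store):
      # Write the dependent object first, under a freshly generated id.
      profile_id = str(uuid.uuid4())
      profile_store.put(profile_id, profile_data)

      # Creating the user is the "ledger" write: only once it succeeds
      # does the profile become reachable from anywhere.
      user_store.put(profile_id, dict(user_data, profile_id=profile_id))

      # If the user write fails, the orphaned profile is never referenced
      # and can be ignored or garbage-collected later.
      return profile_id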

5
olalonde 7 ago 2 replies      
And in case you think there's a general solution to that problem, there isn't: https://en.wikipedia.org/wiki/CAP_theorem

Still, it's funny how banking seems to be the canonical example for why we need transactions given that most banking transactions are inconsistent (http://highscalability.com/blog/2013/5/1/myth-eric-brewer-on...).

6
toolslive 2 ago 0 replies      
The problem is that when the system does not guarantee consistency, you force the application developer using the system to solve that problem. Each application developed will have to solve the same problem. Besides the fact that the same effort is done over and over again, you are also forcing application developers to solve a problem for which they probably do not have the right skill set. In short, that strategy is wasteful (replicating work) and risky (they'll make mistakes).
7
mettamage 3 ago 0 replies      
Before I read this article, the following question popped into my mind and it miiiiiight be tangentially related -- yea probably not, blame the title ;) When taking the concept of consistency, does consistency have an effect that is akin to compound interest?

For example, imagine someone doing the same thing year after year diligently. (S)he'd increase his or her skill by, say, 10% a year (I have no clue what realistic numbers are). Would that mean that the compound interest effect would occur?

I phrased it really naively, because while the answer is "yes" in those circumstances (1.1 ^ n), I'm overlooking a lot and have no clue what I'm overlooking.

I know it's off-topic, it's what I thought when I read the title and I never thought about it before, so I'm a bit too curious at the moment ;)

8
acjohnson55 2 ago 0 replies      
I sort of agree. The examples in the article are ways in which people play fast and loose with consistency, often using a NoSQL store that has poor support for atomicity and isolation. This is a helpful message, because I've definitely seen naively designed systems start to develop all sorts of corruption when used at scale. The answer for many low-throughput applications is to just use Postgres. Both Django and Rails, by default, work with relational databases and leverage transactions for consistency.

Then, there is the rise of microservices to consider. In this case, I also agree with the author that it becomes crucial to understand that the number of states your data model can be in can potentially multiply, since transactional updates are very difficult to do.

But I feel like on the opposite side of the spectrum of sophistication are people working on well-engineered eventually consistent data systems, with techniques like event sourcing, and a strong understanding of the hazards. There's a compelling argument that this more closely models the real world and unlocks scalability potential that is difficult or impossible to match with a fully consistent, ACID-compliant database.

Interestingly, in a recent project, I decided to layer in strict consistency on top of event sourcing underpinnings (Akka Persistence). My project has low write volume, but also no tolerance for the latency of a write conflict resolution strategy. That resulted in a library called Atomic Store [1].

[1] https://github.com/artsy/atomic-store

9
xanton 5 ago 1 reply      
Transactions can have different isolation levels. And sometimes the problem at hand can be implemented using transactions with weak isolation levels, which are not that hard to implement using your favorite NoSQL database that supports a CAS operation. I recommend this article: http://rystsov.info/2012/09/01/cas.html
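To illustrate the kind of CAS-based update the linked article builds on, a small sketch; store.get and store.compare_and_set are hypothetical primitives that return and check a version number alongside the value:

  def cas_increment(store, key, delta, max_retries=100):
      """Atomically add delta to a value using optimistic compare-and-set."""
      for _ in range(max_retries):
          value, version = store.get(key)
          # The write only succeeds if nobody changed the key since we read it.
          if store.compare_and_set(key, value + delta, expected_version=version):
              return value + delta
          # Someone else won the race; re-read and try again.
      raise RuntimeError("too much contention on key %r" % key)
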
10
xarien 1 ago 0 replies      
Depends on your POV. Startups undervalue it and corporations overvalue it. At the end of the day, it's just risk management.
11
ah- 2 ago 1 reply      
You can use event logs and eventual consistency to solve this problem.

Basically you make the transfer of money an event that is then atomically committed to an event log. The two bank accounts then eventually incorporate that state.

See http://www.grahamlea.com/2016/08/distributed-transactions-mi...

But I agree that often life is easier if you just keep things simpler. If you require strong consistency, like with the user/profile example, don't make that state distributed. If you do make it distributed, you need to live with less consistency.
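A toy sketch of that shape, where appending the transfer event is the only operation that has to be atomic and balances are just a projection of the log (a plain in-memory list stands in for a durable, replicated log):

  from collections import defaultdict

  def record_transfer(event_log, src, dst, amount):
      # One atomic append captures the whole transfer.
      event_log.append({"type": "transfer", "src": src, "dst": dst, "amount": amount})

  def project_balances(event_log):
      # Each account's balance is eventually derived by folding over the log.
      balances = defaultdict(int)
      for event in event_log:
          if event["type"] == "transfer":
              balances[event["src"]] -= event["amount"]
              balances[event["dst"]] += event["amount"]
      return dict(balances)

  log = []
  record_transfer(log, "account_a", "account_b", 8)
  print(project_balances(log))  # {'account_a': -8, 'account_b': 8}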

12
muteor 5 ago 0 replies      
If anyone is interested in this sort of thing, I found this a great article: http://www.grahamlea.com/2016/08/distributed-transactions-mi...
13
calind 6 ago 1 reply      
I think it's a bad example because this should not be the way to develop in these kinds of (microservices) systems.

In these environments you atomically create objects in your application's "local" storage and have a reconciliation loop for creating objects in other services or deleting these orphan "local" objects.

14
fagnerbrack 6 ago 2 replies      
Why isn't the OP using Event Sourcing "commands" for the "Bank Accounts" example?
15
morgo 6 ago 0 replies      
Good article.

I've stopped using bank transfers as an example for ACID transactions, and instead talk about social features:

- if I change a privacy setting in Facebook or remove access to a user, these changes should be atomic and durable

- transactions offer a good semantic with which to make these changes. They can be staged in queries, but nothing is successful until after a commit.

- without transactions, durability is hard to offer. You would essentially need to make each query flush to disk, rather than each transaction. Much more expensive.

16
sz4kerto 2 ago 1 reply      
The universally hated JEE can do distributed transactions by default. Yes, with pitfalls, but it can. (It is usually hated by devs who have never used it properly.)
17
fagnerbrack 6 ago 0 replies      
In case anyone is wondering what an "atomic change" means in database terminology: https://www.gnu.org/software/emacs/manual/html_node/elisp/At...
18
bullen 6 ago 5 replies      
But consider this:

You are using MySQL, and you make a transaction with, say, a deposit and a withdrawal.

What happens on the MySQL machine if you pull the plug exactly when MySQL has done the deposit but not the withdrawal?

The ONLY difference between SQL transactions and NoSQL microservice transactions is the time between the parts of a transaction.

Personally I use a JSON file with state to execute my NoSQL microservice transactions, and it's a lot more scalable than having a pre-internet-era SQL legacy design hogging all my time and resources.

7
A statement on online course content and accessibility berkeley.edu
90 points by wfunction  7 ago   48 comments top 13
1
jasode 5 ago 6 replies      
Fyi ... a related (not duplicate) HN discussion from Feb 2015:

https://news.ycombinator.com/item?id=9039798

In both this case about UC Berkeley and the lawsuit against Harvard & MIT, the legal attacks seem very wrong.

What I see is:

- Scenario A: spend $X to release free courses that benefits most of humanity

- Scenario B: spend $X+$Y to release free courses that also benefits the disabled population

(The $Y is extra costs to close-caption, transcribe to braille, etc.)

That these lawsuits obstinately insist you must spend that extra $Y to fulfill Scenario B, or humanity can't have the knowledge at all, is nonsensical to me.

E.g. the school budget has finite money. Let's say it can release 15 free courses by spending just $X, but to avoid lawsuits it can only release 10 courses by spending $X+$Y ... or release none at all because it's not worth the legal minefield. Why is the 2nd scenario more optimized for humanity?

Sure, we should encourage the universal accessibility of the video courses but to formalize it into the nastiness of lawsuits? It doesn't seem right.

2
Houshalter 5 ago 0 replies      
The biggest concern is not even that the lectures get taken down. It's that other universities see this and don't put their content up in the future. Many organizations are extremely risk averse when it comes to lawsuits, and our crazy legal system has trained them to be that way.
3
cheriot 40 ago 1 reply      
A lot of this sounds like there's an engineering solution that edX and Coursera can offer as a service. Course creators would have SOME work to do, but limited to uploading the charts and files used in their video.

From the DOJ letter:

> 1. Some videos did not have captions.

> 2a. many videos did not provide an alternative way to access images or visual information (e.g., graphs, charts, animations, or urls on slides)

> 2b. videos containing text sometimes had poor color contrast, which made the text unreadable for those with low vision

> 3. Many documents were inaccessible ... [HTML and PDF stuff]

> 4. Some links were not keyboard accessible

> 5. Websites and materials that were integrated into the course material were not fully accessible

Before we jump to debating the ADA, let's see if there's a sensible solution!

4
malloryerik 6 ago 3 replies      
U.C. Berkeley has some fantastic classes online. It would truly be a shame to lose them. Many were clearly produced on shoestring budgets.

The Department of Justice seems to have received complaints mainly regarding absence of transcripts for audio or video content. Can't speech-to-text tech help?

Other complaints include poor formatting of pdf's. If the courses are indeed taken down, I can only hope that at least this starts a conversation about how to create smarter regulations -- regulations that cause less collateral damage.

5
anotheryou 2 ago 0 replies      
I feel like it's about time to ditch all that accessibility stuff and join forces to create a more human screen reader.

Though sadly it will be a while until we will hear: "You are visiting Hacker News, in a simple look and without any images, it features a list of links to articles and a menu at the top. Do you want me to read the articles titles, continue with accessing the menu or a more detailed description of the page?"

7
hacker314159 5 ago 1 reply      
Thanks DOJ for the Kafka-esque move that will most likely deprive everyone of free educational materials. Glad to know justice was served.
8
Normal_gaussian 4 ago 0 replies      
Are any of these courses paid?

Do any of these courses or the means through which they are made available advertise paid options, paid alternatives, or other paid courses?

If both of those are 'no' then this is disgraceful. If either is 'yes' then it is completely justified.

9
Mao_Zedang 5 ago 0 replies      
Berkeley should just incorporate a non-profit in another country, transfer the IP and serve it from there. Outside the jurisdiction of the DOJ.
10
reustle 5 ago 1 reply      
Sorry for sounding naive but isn't UCB a private company? I hate to say but "this is why we can't have nice things". We are trying to move into an age of more digital education, but if there are a bunch of hoops to jump through in order to even attempt it, won't the barrier to trying be too high for some institutions?
11
jimiz 4 ago 1 reply      
Unfortunate to see this free content be lost. In the same line of thinking, it is also sad that in this day and age there are not easy tools/systems to help transcode and make content ADA-approved. (At least not cost-effective ones.)
12
DanielBMarkham 5 ago 2 replies      
"...the Department of Justice has recently asserted that the University is in violation of the Americans with Disabilities Act because, in its view, not all of the free course and lecture content UC Berkeley makes available on certain online platforms is fully accessible to individuals with hearing, visual or manual disabilities..."

Wow. Talk about harming hundreds of people for no reason at all.

IANAL, but I believe the ADA would be applicable for students of the college, not necessarily online content consumers viewing the college's courses.

Of course, the problem here is that you have to start playing expensive lawyer games to get that clarified -- and it might take years. So even if Berkeley has done nothing wrong, they still might end up having to pull the courses.

13
bluesign 4 ago 1 reply      
Can someone explain how they can keep the same material available to enrolled students but not to the public, considering ADA requirements?
8
Bottleneck Bandwidth and RTT ozlabs.org
206 points by 0123456  12 ago   43 comments top 11
1
Animats 12 ago 3 replies      
This is useful mostly for long-lived connections. As Google moves away from the many-short-connections HTTP model to the persistent connections of HTTP/2, connections live long enough that both bandwidth and delay can be discovered.

This is much better than trying to estimate bandwidth from packet loss.

2
kev009 11 ago 0 replies      
This is the same Van Jacobson who was instrumental in working through TCP congestion collapse growing pains over 30 years ago https://en.wikipedia.org/wiki/Network_congestion#Congestive_...
3
jnordwick 11 ago 7 replies      
Intra-network, TCP slow start is often turned off to minimize latency, especially on links where you have to respond very quickly to initial data or where traffic is bursty.

Google's BBR seems to use the same exponential probing that slow start does, so I wonder how it will perform when you are staying in-network, don't often have to worry about packet loss or congestion, and want the link to start off at full throttle.

Once BBR enters its steady state it intentionally cycles faster and slower, but this seems like it is creating additional latency when you don't want it. Think of a traffic burst that happens just as the link decides to cycle slower.

It also seems like the protocol intentionally runs slower than possible so as not to create buffer pressure on the receiving side, if I'm understanding this quick description properly: "then cruising at the estimated bandwidth to utilize the pipe without creating excess queue".

This line just scares me: "Occasionally, on an as-needed basis, it sends significantly slower to probe for RTT (PROBE_RTT mode)."

Google is going to make patches that work for them, but that doesn't always mean it will work for everybody else. This seems very closely tailored to Google's traffic issues and serving HTTP over persistent connections, and not a general purpose feature; think of games, intra-network low-latency applications, etc.

4
otoburb 12 ago 0 replies      
Many TCP optimization algorithms report their performance improvements using CUBIC as their baseline. Will be very interesting to see how TCP optimization vendors adapt to the new Bottleneck Bandwidth & RTT patch.

From an industry viewpoint, I wonder how this will perform over traditionally higher-latency and higher-loss wireless networks.

As an aside, I love how small the patch is, weighing in at 875 LOC including comments.

5
stephen_g 11 ago 1 reply      
This is very exciting, and I can't wait to see some performance data from it. Bufferbloat is a huge problem so it's awesome to see work being done in this area. It's really cool also that it can improve things just by patching servers!

How does this interact sending traffic through routers using algorithms like fq_codel to reduce bufferbloat? Is it better to just have one or the other or do they work well together?

6
wscott 5 ago 0 replies      
Very interesting, but I suspect that like many of these it works best if you are only competing with connections using the same approach. Which is probably why Google is talking about using this inside their backbone. This estimates the queues in the network and tries to keep them empty. TCP ramps until packets start dropping, so it spoils things for the BBR connection. Perhaps combined with CoDel to drop TCP packets early, the two could play nicely together.

Hmm, reading the code it says it does play well with TCP, but "requires the fq ("Fair Queue") pacing packet scheduler." In fact, later it says it MUST be used with fq. Hmm.

BTW the code is very readable and well commented.

7
falcolas 1 ago 1 reply      
Is anyone able to speak to the compute and memory overhead this requires, in comparison with the loss-based algorithm? I ask on behalf of firewalls and layer 4 routers everywhere.

Can this really just be patched in, with no changes to specialized hardware?
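On the "patched in" part: congestion control is a sender-side choice on Linux, selectable per socket once the algorithm's module is available. A sketch of what that looks like from Python, assuming a kernel built with BBR and Python 3.6+ (which exposes socket.TCP_CONGESTION); per wscott's comment above, BBR also wants the fq pacing qdisc configured on the sender:

  import socket

  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  # Ask the kernel to use BBR for this connection instead of the
  # system default (often cubic); raises OSError if bbr isn't available.
  s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
  s.connect(("example.com", 80))

The system-wide default lives in the net.ipv4.tcp_congestion_control sysctl; since only the sender's pacing changes, the wire format is ordinary TCP.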

8
_RPM 12 ago 1 reply      
This is really interesting work. I wish I was smart enough to do this kind of stuff.
9
mhandley 7 ago 0 replies      
The linked article doesn't provide enough information to understand how they're using ACKs to probe bottleneck bandwidth, but they have prior work on this. If it's similar to Van Jacobson's pathchar, I would have thought there might be "interesting" interactions with NICs that do TCP TSO and GRO (using the NIC to do TCP segmentation at the sender and packet aggregation at the receiver), as these will mess with timing and, in particular, how many acks get generated. Still, the authors are very well known and respected in the Networking community, so I expect they will have some way to handle this.
10
nieksand 8 ago 0 replies      
Has anybody managed to find a preprint of the ACM queue paper floating around?
11
quietplatypus 10 ago 0 replies      
The name is quite misleading for those not familiar with previous work.

How's this different from TCP Vegas and FAST TCP which also use delay to infer the bottleneck bandwidth?

9
From Kafka to ZeroMQ for real-time log aggregation janczuk.org
144 points by janczukt  10 ago   70 comments top 18
1
asasidh 0 ago 0 replies      
So you used Kafka for something that should have been handled by MQTT or ZeroMQ in the first place?
2
thomaslee 7 ago 0 replies      
I used to be on a team responsible for a single small-ish Kafka cluster (between 6-12 nodes) doing non-trivial throughput on bare metal. Without commenting on whether ZeroMQ is the right alternative: I can understand being scared off. Our hand was forced such that we had to go the other way and understand what was going on in Kafka.

The kicker is that Kafka can be rock solid in terms of handling massive throughput and reliability when the wheels are well greased, but there are a lot of largely undocumented lessons to learn along the way RE: configuration and certain surprising behavior that can arise at scale (such as https://issues.apache.org/jira/browse/KAFKA-2063, which our team ran into maybe a year ago & is only being fixed now).

Symptoms of these issues can cause additional knock-on effects with respect to things like leader election (we wound up with a "zombie leader" in our cluster that caused all sorts of bizarre problems) and graceful shutdowns.

Add to that the fact that the software is still very much under active development (sporadic partition replica drops after an upgrade from 0.8.1 to 0.8.2; we had to apply some small but crucial patches from Uber's fork) & that it needs a certain level of operational maturity to monitor it all ... it's easy to get nervous about what the next "surprise" will be.

Having said all that, I'd use Kafka again in a heartbeat for those high volume use cases where reliability matters. Not sure I'd advise others without similar operational experience to do the same for anything mission critical, though -- unless you like stress. That stress is why Confluent is in business. :)

3
buster 7 ago 2 replies      
To me it sounds like Kafka was not understood in full detail (maybe because of missing documentation or the high complexity) and they switched to a system they built themselves. Naturally they know in full detail what is going on and can set up the system as needed.

I am wondering if working on solving the actual problems with Kafka would have been the better route. I've never used Kafka and I find ZeroMQ great, but reading that their logging solution does drop log messages is a huge no-go for operations. How can you claim to run a serious business and say "babies will die" when you can't be sure to be able to find problems?

Because, when will you lose logs? Not in normal operation, but when weird things happen. When networking has a hiccup. When load on the system is too high, so most likely when many people are using your service. Exactly when shit hits the fan. And you just made the decision that it's ok to drop log messages in such cases? That's not good.

I think you should either dive into Kafka/Zookeeper and fix your problems or switch to another logging solution. You should probably just drop that nonsense "streaming and real-time logs" requirement, live with a log delay of a few seconds, and build something really stable instead of building something inherently unstable. Honestly, just collecting syslogs on the core VM and sending them to a central server would have been the better solution. Better than looking into fancy real-time, streaming logs on a Sunday night because the system is having a breakdown and you can't even be sure that you are not missing essential logs.

4
agentgt 4 ago 2 replies      
I don't understand why people need such ridiculously fast systems when we are using RabbitMQ and crappy Apache Flume and we generate more than 5k messages/second with spikes of 50k. Please, author of the article, tell me your metrics.

And our log messages are ridiculously big at times (15k to as big as 50k).

Our pipe never has problems. What fails for us is Elasticsearch. In fact at one point in the past we did 100k messages/s when we embarrassingly had debug turned on in production, and RabbitMQ did not fail but Elasticsearch did, and sadly Flume did as well (I tried to get rid of Flume with a custom Rust AMQP to Elasticsearch client, but at the time the libraries had some bugs... Maybe I will check out Mozilla Heka again someday).

There is this sort of beating of the developer chest at a lot of tech companies... that hey, listen, we are ultra important and we are dealing with ridiculous traffic and we need ultra high performance. Please tell/show me these numbers... Or maybe stop logging crap you don't need to log.

Or maybe I'm wrong and we should log absolutely everything and Auth0 made the right choice given their needs (let's assume they have millions of messages a second), but I still think I could make a sharded RabbitMQ go pretty far.

This goes for other technology as well. You don't need to pick hot glamorous NoSQL when PostgreSQL or MySQL and a tiny bit of engineering will get the job done just fine, particularly when mature solutions give you so many things free out of the box (RabbitMQ gives you a ton of stuff like a cool admin UI and routing that you would have to build in ZeroMQ).

5
wcdolphin 8 ago 1 reply      
Did you ever try running 5 ZK's in the ensemble? 3 is the absolute minimum to survive a single machine failure. If you are having trouble with availability, it seems natural to increase your safety factor there.

I was surprised by the contrasting sense of importance of delivery guarantees in the article. At the start, losing a message was akin to the death of a child. At the end, shrug. Now every single machine failure (or even an mq process restart) will lose you log messages stored in memory :(.

Glad to hear you found a solution that worked for you though! Would love to hear about difficulties you had with the new system, in particular adding brokers.

6
TheHydroImpulse 9 ago 0 replies      
FYI, Kafka doesn't need to fetch from disk every time as it caches the logs pretty aggressively, as long as you have enough memory.

Running Zk and Kafka on the same nodes is likely not the best thing.

7
htn 9 ago 3 replies      
FWIW, you can get Kafka packaged as a fully managed and HA service from https://aiven.io on AWS and also Azure, GCE and DigitalOcean.

But if the Auth0 runs their entire operations on AWS, maybe Kinesis would have been a more natural transition.

8
StreamBright 5 ago 0 replies      
The author correctly points out that he is comparing apples to oranges.

Kafka gives you features that certain systems cannot live without, like on-disk persistence (saved my life a couple of times) and topics. Filtering messages on the client side like ZeroMQ does is not an option in many cases; just think about security. I think Kafka has a long way to go before it can be used as a general message queue (many features are not there yet, like visibility timeout for example), but if you can manage Zookeeper and have means to work with it (somebody understands it and knows its quirks) it can provide a reliable platform for distributing a large number of messages with low latency and high throughput, just like it does at LinkedIn.

9
markpapadakis 2 ago 0 replies      
Maybe TANK ( https://github.com/phaistos-networks/TANK ) would have been a good alternative here. No feature parity with Kafka, but setting it up is a matter of running one binary and creating a few topics, and it is faster than Kafka for produce/consume operations. (disclosure: I am involved in its development)
10
efangs 2 ago 0 replies      
Anyone use collectd + rrd for this purpose? Still trying to understand at what level it's worth to move to something else.
11
bachback 7 ago 3 replies      
With ZeroMQ I had the worst possible results and experience. Honestly much of what it claims is bogus. It is highly optimized for certain cases and utterly useless for distributed systems. Try and find out in PUB/SUB what the IP addresses of the subscribers are. Not possible. In many cases you will be much better off learning TCP/IP yourself. In the mentioned case you simply iterate over the vector of subscribers - much more powerful and the sane default. It seems at some point people confused internal networking solutions with the Internet.
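For readers who have not used the pattern being criticized, a minimal pyzmq sketch of PUB/SUB; note that the publisher only ever sees the endpoint it bound, never who is subscribed, which is the opacity the comment objects to (the sleep is needed because messages sent before a subscription propagates are silently dropped):

  import time
  import zmq

  ctx = zmq.Context()

  # Publisher: binds an endpoint and sends to whoever happens to be connected.
  pub = ctx.socket(zmq.PUB)
  pub.bind("tcp://127.0.0.1:5556")

  # Subscriber: connects and filters by topic prefix on the client side.
  sub = ctx.socket(zmq.SUB)
  sub.connect("tcp://127.0.0.1:5556")
  sub.setsockopt_string(zmq.SUBSCRIBE, "logs")

  time.sleep(0.2)  # give the subscription time to reach the publisher
  pub.send_string("logs worker-1 request completed")
  print(sub.recv_string())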
12
jpgvm 8 ago 0 replies      
Probably should have been running ZK and the Kafka queues separate from the CoreOS/container shenanigans.

If deployed using the Netflix co-processes both are very durable.

13
siscia 9 ago 0 replies      
Did you consider MQTT? Sound to me a more natural choice.
14
weitzj 9 ago 1 reply      
Did you look at nsq.io or NATS?
15
k__ 5 ago 2 replies      
I'm a total message queue noob. What are the usecases for them?

I used MQTT but only as a message bus.

16
jvoorhis 10 ago 0 replies      
2015
17
Nimimi 9 ago 3 replies      
You can deploy Kafka using DC/OS and it takes care of HA for you. DC/OS is quickly becoming the go-to solution for database deployments. ArangoDB even recommends it as the default.

Now about Kafka vs ZeroMQ: you want Kafka if you cannot tolerate the loss of even a single message. The append-only log with committed reader positions is a perfect fit for that.

18
bdowling 9 ago 1 reply      
But why ZeroMQ and not nanomsg?
10
Embraer Phenom 300 yaw damper failure due to loss of GPS signal [pdf] faa.gov
54 points by a-no-n  7 ago   26 comments top 8
1
dfsegoat 2 ago 0 replies      
This is a known issue - and an old one.

These jets cannot fly without GPS.

When they (the military) intentionally knocked out GPS around China Lake NAS a few months back (for testing aircraft in GPS-denied environments) -- all Embraers were told to avoid the area:

THIS NOTAM APPLIES TO ALL AIRCRAFT RELYING ON GPS. ADDITIONALLY, DUE TO GPS INTERFERENCE IMPACTS POTENTIALLY AFFECTING EMBRAER PHENOM 300 AIRCRAFT FLIGHT STABILITY CONTROLS, FAA RECOMMENDS EMB PHENOM PILOTS AVOID THE ABOVE TESTING AREA AND CLOSELY MONITOR FLIGHT CONTROL SYSTEMS DUE TO POTENTIAL LOSS OF GPS SIGNAL.

https://www.faasafety.gov/files/notices/2016/Jun/CHLK_16-08_...

2
0xcde4c3db 16 ago 0 replies      
Anyone have a sense of what the primary failure is here? Is there some Kalman filter that gets out of whack when an error term can't be calculated, or what?
3
dogma1138 1 ago 0 replies      
How does an aircraft get certified without an INS for stability control?
5
mmanfrin 6 ago 2 replies      
Can someone provide context?
6
smegel 7 ago 0 replies      
> VENTRAL RUDDER FAIL, YAW DAMPER FAIL, AUTO PILOT FAIL, AND CAS MESSAGES ASSOCIATED WITH UNEXPECTED ROLLING AND YAWING OSCILLATIONS (DUTCH ROLL) AT HIGH AIRSPEEDS

Damn.

7
rsync 7 ago 1 reply      
Translation: we never tested flight with the GPS off.
8
throwaway_exer 7 ago 2 replies      
I have a commercial rating. The HN title is wrong.

What this is saying:

1) If you don't have a yaw damper, then you'll have a rough ride

2) because the autopilot will induce oscillation (dutch roll) after loss of GPS

11
Learning Web Audio by Recreating the Works of Steve Reich and Brian Eno teropa.info
155 points by Fr0styMatt88  13 ago   16 comments top 9
1
gtani 3 ago 2 replies      
For those interested in online/soft synths, sequencers and samplers (pretty deep rabbit hole), look up past discussions of Max for Live, ChucK, SuperCollider, Impromptu, CSound. Max for Live is part of the Ableton Suite license for around $700; I think the others are open source or freely licensed. Also the Clojure Overtone and Haskell Tidal live coding libs. I think there were also threads about writing your own VST/AU plugins.

Snoman's "Dance Music Manual" and Shepard's "Refining Sound" are good books to start with.

(I'm old school about synths and sequencers; I think it's best to start with a knobby hardware synth - microbrute, ms2000, Minilogue, bass station II, sh201, mopho - and understand how it's designed to work and how it glitches/fails gracefully)

https://news.ycombinator.com/item?id=9178933

https://news.ycombinator.com/item?id=9518601

https://news.ycombinator.com/item?id=9573471

https://news.ycombinator.com/item?id=9640218

https://news.ycombinator.com/item?id=9635037 (that particular K-S synth isn't online anymore but somebody else put up a demo)

https://news.ycombinator.com/item?id=10177716 (supercollier, alda et al)

Big fan of Terry Riley, La Monte Young, Philip Glass, Arvo Pärt also.

2
eggy 20 ago 0 replies      
I have already made some comments below about livecoding, but after having read the full post - Amazing!

You gave me a lesson in modern JavaScript too, which quite frankly I have avoided for the longest time.

The exposition, history, coding - the whole package - is just great. Thank you for this!

3
socmag 2 ago 1 reply      
This, along with the other articles by the author, is a beautiful body of work, and forms a kind of "Rosetta Stone" for a new generation of musicians, developers and artists to use as a launchpad into the fascinating hockey stick of modern generative music.

I am convinced that with modern Machine Learning algorithms thrown into the pipeline it will only be a few short years before we are able to conjure up whatever type of music our context dictates. It's already happening.

It will be interesting to see how the RIAA respond to a new world where we can say

"Alexa, please play me some Led Zeppelin remixed with Rihanna in the style of Skrillex" (and you fill in the blanks)

Or even better, after a one-time analysis of your and your partner's entire Spotify collections, it could just start making up new works perfectly in tune with the moment.

Really exciting times, and a stunning article.

4
bch 7 ago 0 replies      
On reading the title of this (before reading the article or comments) my first thought was "learning web audio by recreating the work of John Cage"... but this is too lovely to joke about. I see recommendations for various pieces of work, and would like to add Reich Remixed [0], which had more modern (c. 1999) DJs taking on Reich with great success, in my opinion.

[0] http://www.nonesuch.com/albums/reich-remixed

5
fenomas 11 ago 0 replies      
This is insanely cool.

That said, for me many of the later demos clip whenever 3+ sounds are playing - e.g. the "Cor Anglais" one. I expect that adding a compressor at the end of the audio chain would fix it.

(It might be platform-dependent though - when I've experimented with Web Audio, it seems like sometimes a demo will clip on Windows but not on Mac, or such. I guess OSes sometimes compress outgoing audio automatically?)

6
KyleBrandt 4 ago 0 replies      
Related: mind blown that Kurzweil made programmed music in this presentation in 1965. https://youtu.be/X4Neivqp2K4

He was ~17 in that video, and he built the computer as well.

7
jchimienti 2 ago 0 replies      
Great article! I've been writing shit code for almost a year now and playing guitar for 2. I've been wanting to do a fun project that involves both of them. Thanks for the inspiration.
8
bane 12 ago 3 replies      
This is an incredibly beautiful tutorial. Art, deconstruction, minimalism and some of the greatest musical artists of the 20th century! What an amazing way to explore the mechanics and mechanisms behind these artists.

I had heard of other minimalist artists before, but until a couple years ago somehow had missed Reich.

I'm now an addict, especially to his later pieces when he really started growing his work into larger and larger themes. "Music for 18 Musicians" has become one of my favorite pieces of music of all time. I say this as somebody who finds a great deal of modern art fairly deplorable -- the first time I heard some of Reich's pieces I stayed up the entire night finding everything of his I could put into my ears.

Reich has a knack for finding incredibly beautiful and urgent patterns and sounds and exploring them to a kind of amazing fullness. I never had the pleasure of playing any of his pieces when I was attempting to become a musician; I understand that the practice for a piece can take months to a year because of the difficulties of maintaining your part of the phase. Listening to his music is, to me, a very intense activity, because I desperately want to notice when the music starts changing and because of the phasing it never does. There are certain parts of the phasing that I find particularly enjoyable, but I've also found that you can't just jump to them, you have to encounter them in the context of the phases that come before and after. Once you get quite familiar with his music you'll find elements of his influence all over the place (for example, careful listeners will probably recognize this piece as the core of a much later EDM hit https://www.youtube.com/watch?v=Miu19QHBQiw).

I don't enjoy his earlier, very intense explorations into phase music, but he manages to develop the concepts into a very full and beautiful music:

https://www.youtube.com/watch?v=ZXJWO2FQ16c (music for 18)

https://www.youtube.com/watch?v=zLckHHc25ww (another performance of the same)

https://www.youtube.com/watch?v=edKE10Yz_zs (six pianos)

https://www.youtube.com/watch?v=TbC5zhFX7Kw (Octet Eight Lines)

https://www.youtube.com/watch?v=O5qOtXql-oI (Desert Music)

https://www.youtube.com/watch?v=YgX85tZf1ts (sextet)

https://www.youtube.com/watch?v=Udn9cZYWmIk (Music for a large ensemble)

And this insanity, a solo performance of one of his early phase pieces https://www.youtube.com/watch?v=AnQdP03iYIo

If you liked the visualizations in this here a nice one for "Music for Pieces of Wood" https://www.youtube.com/watch?v=gy2kyRrXm2g

and then humans doing the same https://www.youtube.com/watch?v=5LbmvD7ytDc

and another good visualization of the phase music approach https://www.youtube.com/watch?v=lzkOFJMI5i8

9
sjclemmy 6 ago 0 replies      
This gets an up vote from me just for the title. Now to read the article...
13
How morality changes scientificamerican.com
25 points by bangda  6 ago   11 comments top 3
1
jondubois 2 ago 6 replies      
Regarding the Trolley problem: it doesn't make sense to push someone off a bridge in order to save 5 people; this is very different from just flicking a switch, for several reasons:

- There is a chance that this plan might fail and 6 people would get killed instead of 5.

- Maybe there is a reason why the 5 people are tied to train tracks - Honest people don't usually end up like this - Maybe they're in the mafia and their deaths would be an expected consequence of their high-risk criminal lifestyle. On the other hand, the guy standing on the bridge is more likely to be a regular person who did nothing wrong.

- You would go to jail for manslaughter.

- You would psychologically damage yourself by pushing the person off a bridge.

- Maybe you have an undiagnosed case of schizophrenia and the 5 people on the tracks are not real. The odds of it being an illusion (and that you are crazy) are probably higher than it being real - It's quite arrogant to trust your own senses (to the point of killing someone) when you're confronted with such an incredibly unlikely situation.

2
throwanem 2 ago 0 replies      
Dr. Sapir, Mr. Whorf, please call your offices...
3
known 2 ago 0 replies      
"How Morality Changes in a Foreign Language" is the better title
14
To do in San Francisco this weekend: the first-ever roguelike celebration roguelike.club
122 points by jere  13 ago   37 comments top 16
1
JauntTrooper 5 ago 0 replies      
Nice, I wish I could go!

My username is inspired by my favorite rogue-like growing up, JauntTrooper: Mission Thunderbolt. It was released in 1992.

Here it is, if folks are interested in playing it: http://www.old-games.com/download/3974/jaunttrooper-mission-...

2
Merem 45 ago 0 replies      
For those with knowledge of the genre: What are the better, traditional roguelikes that you can recommend? I've only played Elona so far and quite liked it.
3
britta 8 ago 1 reply      
Aw, thanks for posting this and being excited about it! This is a labor of love organized by my friend Noah with help from a few other friends and me, just for fun since we love playing roguelikes.

We did sell out of tickets; our very graciously donated venue (thanks to Eventbrite) has an attendee limit since it's basically an office rather than a large venue. I hope you all will watch the streams! The talks will also be recorded so you can watch them later.

4
jere 13 ago 2 replies      
The speaker list on this thing is absolutely mindblowing.

I really wanted to go, but having just gone to a roguelike conference last month and this one being on the opposite coast, I just couldn't swing it. But it will be streamed!

5
TheAceOfHearts 13 ago 1 reply      
Propose changing the link to the event page [0]. This "blog post" doesn't add anything.

[0] https://roguelike.club/

7
vanderZwan 3 ago 0 replies      
Josh Ge will be speaking at this event. I have been closely following the devlog of his modern RL game Cogmind since the start and I can highly recommend it, as well as the game itself:

http://www.gridsagegames.com/cogmind/

http://www.gridsagegames.com/blog/

8
forkandwait 12 ago 2 replies      
The only computer game I play is DCSS. Once got to the vaults as a troll beserker, but I can't get any farther than that. (I know, I know .... slow down, just don't get killed...)
9
jmspring 12 ago 0 replies      
If they want CA history, hands down reach out to Joseph and his Emperor Norton tour -- http://www.emperornortontour.com/index.html

Full of history and comedy.

10
qwertyuiop924 5 ago 1 reply      
This is so cool that it almost (but not quite) makes me wish I lived on the west coast.

But coming from the east coast, the west coast is weird.

11
Grue3 8 ago 0 replies      
Holy shit, the developers of Rogue, Thomas Biskup, Tarn Adams, and even the creator of Kingdom of Loathing (for some reason) all in the same place?
12
kqr2 12 ago 1 reply      
Looks like they are sold out as ticket sales are ended. How much were the tickets originally?
13
gragas 10 ago 1 reply      
So can I not go since tickets are already sold out?
14
SubiculumCode 8 ago 0 replies      
angband, zandband, sangband. I died in them all.
15
renownedmedia 10 ago 1 reply      
I like how the page doesn't mention the date.
16
dang 11 ago 0 replies      
15
Arts Council to impose quantitative measures of arts quality artsprofessional.co.uk
16 points by panic  4 ago   20 comments top 9
1
mjburgess 2 ago 1 reply      
This is a very poorly written article that provides very little information about the method or its motivation. And the consultants they're quoting sound like idiots.

The reality, I believe, is this: Arts Council funding is limited and decisions need to be made about which projects will be funded over others.

So they need some transparent system based on info from: the artists, their peers and the public.

Without any information whatsoever, it's not clear on what basis they could be making any decisions. The whims of some Art Director who happens to prefer one thing over another?

2
f_allwein 4 ago 1 reply      
Oh dear. Even if we assume that some of the metrics can be useful ("Enthusiasm: I would come to something like this again"), they are deeply subjective, so any measurement would lead to good metrics for mainstream, lowest-common-denominator work. Pretty much the opposite of what art should be, given that many great artists were mocked or ridiculed in their time.
3
fhars 3 ago 0 replies      
Does that remind anyone else of the "understanding poetry" scene from Dead Poets Society? https://www.youtube.com/watch?v=LjHORRHXtyI
4
anotheryou 3 ago 2 replies      
lol, if they succeeded it would be revolutionary. We could put that formula into a fitness function for an AI and wouldn't need artists anymore.

Seriously: who comes up with that? We finance culture because it needs to stay clear of the markets so it doesn't become a dull product. At best they will aim at some mainstream, not at all supporting any advances in art.

5
ThomPete 1 ago 0 replies      
I did some work for a startup that is applying algorithms to picking up-and-coming artists.

http://tondoart.net/

I think the headline misrepresents what it's about.

It does not sound like a measure of arts quality, but like an attempt to create a consistent set of metrics to judge against as a baseline.

Nothing wrong with that, and everyone who doesn't believe in it doesn't have to use the system.

6
graeham 2 ago 0 replies      
I think it's now well accepted that there is an 'art' to math/science/code - why should the inverse not be true? Not to say that art should become some sort of applied math, but is it a crime that a publicly-funded body should have some methods, a strategy, and follow-up on where its grants end up?
7
jkot 2 ago 0 replies      
Some criteria are very easy to quantify: the author's race, age, gender, sexual orientation, disability status. It reflects on the quality of art and its culture.

The Arts Council is already using these criteria for grants and for hiring. It is only logical to use them on art itself.

8
ryan-allen 1 ago 0 replies      
Is there a hobo bonus?
9
nxzero 3 ago 0 replies      
Sounds toxic. Imagine this was created by someone who sees art as a collection of objects.

Art is about the artist, not the art - and no one wants to be told directly by some system that they are not producing "quality art".

16
Norway plans to cull more than two-thirds of its wolf population theguardian.com
36 points by eloff  7 ago   22 comments top 10
1
dazzawazza 2 ago 0 replies      
For thousands of years humans have farmed with wolves nearby. The Japanese used Akitas to fend off wolves, which look for the easiest, lowest-risk source of calories. It seems to me that the Norwegians should ask the Sami people of the region how they lived with wolves for so long while herding reindeer, and maybe look at using the Lapphund to guard their herds.

Of course, that may mean they would have to go out and shoot the wolves after the dogs warn them, but that would have a positive effect on the wolves, teaching them that they get shot if they go near the sheep. Culls don't teach the wolf packs anything.

That may also mean the farmers have to live with their herds, which doesn't blend well with modern life, so maybe we should scrap the sheep farming and not the wolves?

2
2trill2spill 1 ago 0 replies      
It's amazing how low the wolf population is in Norway: 68 wolves in the entire country! Here in Minnesota, by contrast, we had nearly 2,300 wolves across 439 wolf packs in mid-winter 2016[1], and even that puts them at only 3.2 wolves per 100 km2 of occupied range[2]. That must make the wolf density and total range in Norway absurdly low - why would they "cull" the wolves when there are so few?

Plus, the wolves are not really a problem here in Minnesota; a few people's pets have been killed, which sucks, but that's all you really hear about.

[1]: http://news.dnr.state.mn.us/2016/08/22/minnesotas-wolf-popul...

[2]: http://files.dnr.state.mn.us/wildlife/wolves/2015/survey_wol...

3
jackgavigan 1 ago 1 reply      
This is sad.

Although the gray wolf as a species is classified as "Least Concern" by the IUCN (International Union for Conservation of Nature), the Scandinavian wolf population is classified as 'Endangered'.

Instead of killing wolves, we should be trapping them and redistributing them to help ensure genetic diversity in isolated populations like Scandinavia's.

4
smackay 1 ago 1 reply      
This decision is probably for political reasons, in favour of two groups within Norway: hunters, who probably want to shoot more deer and elk (or perhaps moose), and the Saami, who get paid by the state based on the number of reindeer they have roaming the tundra in the mountains and the north of the country.

Not sure if the latter is still the case. When I last visited the far north the effects of overgrazing by reindeer were starting to be severe in places. The densities of reindeer are pretty high so wolves would have a spectacularly easy time and would do very well as a result.

5
0x07c0 2 ago 0 replies      
> That population would be below the level necessary to maintain genetic diversity. At least in NZ, the Kiwis have the decency to say that they want a species eliminated outright.

The Norwegian wolf population is shared with the Swedish population; the total is something like 400 wolves. (For the record, I support a robust wolf population.)
6
labster 2 ago 2 replies      
Who made Anders Breivik the environment minister?

That population would be below the level necessary to maintain genetic diversity. At least in NZ, the Kiwis have the decency to say that they want a species eliminated outright.

And before you downvote, ask yourself: Senseless violence to stop outsiders who are just trying to survive: is it really any less bad when the outsiders are animals?

7
known 2 ago 0 replies      
Highly deplorable :( Norway is a wealthy country; https://en.wikipedia.org/wiki/Sovereign_wealth_fund#Largest_...
8
musha68k 1 ago 0 replies      
I'm disappointed in Norway; they could do so much better than that.

It's strange, as they usually do better - remember how humanely and thoughtfully they handled the terror incidents of 2011.

9
guard-of-terra 2 ago 1 reply      
It's weird that they have just 70 wolves in such a huge country. Heck, how could they even count them, when I would expect them to roam freely to Finland and Murmansk and back? Wasn't Norway mostly huge wilderness, actually?
10
tn13 1 ago 3 replies      
Why not allow people to buy these wolves to be kept in private forests?
17
The Python Paradox Is Now the Scala Paradox (2009) kleppmann.com
51 points by tomrod  7 ago   46 comments top 10
1
labrador 1 ago 0 replies      
If I were hiring a JavaScript programmer and he or she had not at least learned the basics of Dart and TypeScript and done a weekend project in one of them, I would consider them incurious. Considering that all the best developers are very curious people, this would be an effective filter. For C++ programmers, if they hadn't done something in D it would be a red flag.
2
noir-york 56 ago 1 reply      
What are you hiring programmers for? A science project, or a company which has to meet business goals?

If the former, by all means pick some esoteric language. If it's the latter, more mundane concerns take over: the existing codebase, the team's skillset and, I don't know, perhaps using the right tool for the job? C# can do things Python cannot and vice versa.

Learning Scala to show off and fit in Graham's paradox is silly and doesn't prove that you are a smarter programmer. A smart programmer is someone who comes up with smart solutions; you can write crap code in any language.

3
crazygringo 2 ago 4 replies      
"Smarter" programmers are very rarely what a company needs, in my experience -- it's extremely rare for a startup's success to depend on that. (I mean, I love clever algorithm programming, but it's just so rarely needed, sadly.)

On the contrary, programmers with the ability to deliver and iterate on working code quickly are what is needed -- and especially without overengineering, or inadvertently creating huge future technical debt due to inexperience.

And if there are only 2 or 3 or 5 programmers on my team, and one of them gets sick or leaves, it's far more important to have a big pool of potential replacements who can get up to speed in a couple of weeks, not months -- which means a popular language and a straightforward codebase.

4
cortesoft 1 ago 3 replies      
I think there are some serious flaws in the author's logic. Having the free time to learn the 'fashionable' new language doesn't mean you are smarter or a better programmer; it just means you have more free time.

In my professional experience, people who are always learning new languages or constantly using new libraries and frameworks are never the ones who accomplish the most. They are simply the ones who are always seeking novelty; they never become great at a language, because they always want to try something new. They start lots of projects, but don't finish them.

I don't think those are the best qualities to look for in employees.

5
TheMagicHorsey 41 ago 1 reply      
I would have said Go rather than Scala, but now Go is pretty mainstream.

I think my choice today for a Python Paradox would be Erlang or OCaml. OCaml + MirageOS is a system-level simplification that makes application correctness/security a much more tractable task than [any other language + Linux].

Erlang is the most common-sense way to build resilient, distributed applications. But there is a high barrier to entry in learning both Erlang and the OTP framework that makes it useful.

Having said that, I've never worked at a company where choosing Erlang or OCaml was a possibility. I think there are companies that make bold technical choices, but I haven't had the good fortune of being given the chance to work at such a place.

6
kozikow 20 ago 0 replies      
I was there, and worked at a company that hired for Scala. I've seen teams that went back to Java 8 be more productive than teams that went hardcore Scala, including Scalaz.

If you filter for Scala, you are more likely to pick academic purists trying to show off with their code. If you filter for Go, you are more likely to pick pragmatists, who will be aware that every choice is a tradeoff and will value maintainable code over clever code.

I already made a choice.

7
zerr 2 ago 0 replies      
These paradoxes come and go, but there is only one paradox that remains the same: C++ :)
8
JohnnyConatus 1 ago 0 replies      
I used Scala for one company I founded and found the Python Paradox to be true. People who already knew Scala were generally much more accomplished programmers, and for the people who didn't, learning it was a great test of their ability. Here's why: as a hiring manager, I know it's possible to learn Scala. So if someone couldn't learn it and be productive within 30 days, I knew they weren't sharp. (FYI, everyone did learn it quickly, but frankly, the best programmers can learn a new language in a weekend.)
9
ludamad 2 ago 1 reply      
Can't you just bias for people with personal projects in any language?
10
interrrested 2 ago 9 replies      
What language would it be in 2016? Go? Rust?
18
Beyond webhooks: webtasks janczuk.org
62 points by janczukt  10 ago   13 comments top 9
1
Rygu 8 ago 1 reply      
It's always great to explore application connectivity for the next generation. Webhooks have done a lot for the web, but the lack of standardization is starting to show.

However, external deployment of code somehow doesn't feel like an improvement to me. From a DevOps perspective, having webtask code run in a closed third-party environment is a big risk. It makes continuous integration, testing and error reporting more difficult, or even impossible if the webtask service isn't well thought out.

The way forward for webhooks in my opinion should be standardization of push and pull protocols, with concerns like cryptographic signatures, metadata headers, and failure recovery through event sourcing.
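To make the signature part concrete, here is a minimal sketch of the kind of check such a standard could mandate - Python, assuming a shared secret and a hypothetical X-Webhook-Signature header carrying a hex-encoded HMAC-SHA256 of the raw request body (not any existing standard):

 import hashlib
 import hmac

 def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
     # Recompute the HMAC over the raw body and compare it in constant
     # time against the value the sender placed in the header.
     expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
     return hmac.compare_digest(expected, signature_header)

A receiver that performs this check can reject forged deliveries with nothing more agreed upon out of band than the shared secret.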

2
spdustin 1 ago 0 replies      
I've linked it before, and I'll link it again.

Huginn [0] is an amazing self-hosted, dev-friendly Zapier/IFTTT/MS Flow alternative that, more relevant to this topic, supports acting arbitrarily on webhooks.

[0]: https://github.com/cantino/huginn

3
Juliate 5 ago 1 reply      
* How is this different from deploying your own (micro)services on different hosts where you control the code?

* If you don't control the service running the webtask, how do you trust that the returned result is the actual execution of the code you submitted?

4
fiatjaf 2 ago 0 replies      
Many times you just want the data sent in a webhook to be posted elsewhere (not something that is crucial to your app's behavior, but something that would be nice if it worked). It isn't worth maintaining a server just for that.

For these cases I've just finished implementing https://requesthub.xyz/, which translates webhooks from one service into calls to another service, all controlled by a jq filter.
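The general shape of such a relay is small. A rough Python sketch (not the actual requesthub implementation - the transform is hard-coded here instead of being driven by a user-supplied jq filter, and the target URL is made up):

 import requests
 from flask import Flask, request

 app = Flask(__name__)
 TARGET = "https://example.com/incoming"  # hypothetical destination

 @app.route("/hook", methods=["POST"])
 def relay():
     incoming = request.get_json(force=True)
     # Reshape the source service's payload into whatever the target
     # expects; requesthub expresses this step as a jq filter instead.
     outgoing = {"text": incoming.get("message", "")}
     requests.post(TARGET, json=outgoing)
     return "", 204

The appeal of a hosted version is exactly that you don't have to run even this much yourself.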

5
funshed 2 ago 0 replies      
Interesting and very useful, until it isn't. The idea of outsourcing business logic to a third-party hosted platform is one I'd consider high risk up to the point where it can be easily exported and hosted elsewhere. WebTask provides no independent backup - be it for an outage or a company failure.

It's a shame, as ideally I'm looking for a business-logic system (with database access, mind) that would allow development with Vue.js.

6
falcolas 1 ago 0 replies      
So, XMLRPC, recast in new clothing? The concept is quite sound, imo, which is why it's been done before. Fear of the wire protocol should not be cause to avoid mature technologies and protocols.
7
endergen 2 ago 0 replies      
This is basically serverless programming, as per Parse, Amazon Lambda, Google Cloud Code, etc.
8
sigill 1 ago 0 replies      
> Customization is what distinguishes platforms from applications.

What does this even mean?

9
lowglow 9 ago 1 reply      
"Webtasks do this by allowing secure, isolated, and fast execution of custom, untrusted code directly over HTTP, with no prior provisioning."

> Isolated using containers

20
A Two-Mile Beer Pipeline Carries Belgiums Lifeblood to Be Bottled nytimes.com
57 points by jstreebin  14 ago   17 comments top 6
1
M_Grey 41 ago 1 reply      
It's really hard not to immediately picture Homer Simpson with a hacksaw and dreams of intercepting the flow. Then I thought... cartoons aside, you could probably tap and divert a very small portion of the beer. Then I wondered what kind of threshold any monitoring systems might have before being alerted to a drop in pressure, and therefore how much beer one could swindle before alerting the BierKops... mit their hops.

Sorry, I'm a little tired.

2
toomanybeersies 6 ago 1 reply      
Somewhat related: Carlsberg had a beer pipeline running into Niels Bohr's house after he won the Nobel Prize: https://blog.adafruit.com/2012/11/11/niels-bohr-had-the-best...
3
p_eter_p 1 ago 0 replies      
Now, if only we could get Brugse Zot in the states...
4
0xdada 12 ago 3 replies      
Won't these pipes get clogged over time by residue that beer leaves behind? I imagine there'd be a lot more residue than with water.
5
mschuster91 2 ago 0 replies      
They're not the first. The Oktoberfest in Munich, which starts today, sports:

1) Winzerer Zelt: 250m (http://www.abendzeitung-muenchen.de/inhalt.oktoberfest-winze...)

2) Bräurosl: 240m (http://www.abendzeitung-muenchen.de/inhalt.wiesn-bier-pipeli...)

3) Hackerzelt: 250m (http://www.oktoberfest-live.de/festzelte/bier-pipeline-fuers...)

I believe that this year the workers installed more beer pipelines (kinda makes sense, given the terrorism threat and the heavy restriction on vehicular traffic), but can't find any numbers.

6
Odenwaelder 8 ago 2 replies      
It's not the first beer pipeline. Oettinger Bier in Germany has had one for many years.
21
Muni ISP forced to shut off fiber-to-the-home Internet after court ruling arstechnica.com
294 points by johnhenry  16 ago   141 comments top 18
1
grahamburger 16 ago 6 replies      
I've spent most of my career (15+ years now) building and maintaining private regional ISPs that compete with big telcos, with considerable success. It's surprisingly feasible to start an ISP in your garage with a few thousand dollars and grow it to a few hundred customers just by providing decent customer service and a working product. If you've ever been curious about what it takes to get started with something like that, I'm happy to answer questions - here or via the email in my profile.
2
lucaspiller 11 ago 3 replies      
> There are laws in about 20 states that restrict municipal broadband, benefiting private ISPs that often donate heavily to state legislators.

I'm not that clear on how US politics and 'lobbying' work, but why don't you just call it what it is - a bribe? In this case 200 families will be back to slow speeds and a poor-service ISP (which no doubt will be putting its prices up) just because said ISP has enough spare cash to bribe the politicians. How is that fair?

3
openasocket 16 ago 2 replies      
It should be noted that it's not like this ISP is shutting down, it's just being barred from serving customers outside its county. This action shuts off service for about 200 people, but the ISP will continue to serve over 7,000. Still pretty bad, but not as bad as the headline makes it seem.
4
jrowley 16 ago 2 replies      
This seems like a fairly clear-cut case of corruption. It's amazing how external money can drive legislation and political action. Why would these politicians ever try to block this of their own accord? Do they really fear a government monopoly that much? Or maybe they just love small government (with the exception of the military/military contractors, which of course need to be bigger).
5
bcheung 16 ago 1 reply      
One thing to point out, which I think people are missing, is that the ruling was that a government could not operate a business in a different jurisdiction.

It has nothing to do with free market or net neutrality concerns.

6
xupybd 13 ago 3 replies      
How is lobbying still legal in the USA? Isn't it clear as day corruption, where you pay for influence over the government? I thought Americans valued freedom?
7
johnhenry 16 ago 1 reply      
Unfortunately, the idea of government-assisted monopolies rarely makes it into the net neutrality debate. :(
8
dpark 16 ago 2 replies      
What stops Pinetops from forming a municipal broadband corp that simply subcontracts everything to Greenlight?
9
ams6110 12 ago 3 replies      
I call BS on this:

The Vick Family Farms' predicament was described in a recent New York Times article. The business has used Greenlight's faster Internet to support a high-tech packing plant that automatically sorts sweet potatoes by size and quality, with each spud tagged with its own bar code. "We're very worried because there is no way we could run this equipment on the Internet service we used to have, and we can't imagine the loss we'll have to the business," farm sales head Charlotte Vick said.

Potato-sorting and tagging does not require internet access.

10
beached_whale 16 ago 2 replies      
I wonder if these municipal broadband networks can be sold to a new not-for-profit that performs the same function.
11
Animats 9 ago 0 replies      
This shutdown increases the "economic liberty" index slightly.[1]

[1] https://news.ycombinator.com/item?id=12518783

12
plandis 14 ago 1 reply      
This seems almost like the definition of government working for corporations over its citizens. What a sad day.
13
vpeters25 16 ago 1 reply      
I don't know how the actual law is written, but maybe the muni could just supply dark fiber and allow private ISPs to provide the actual broadband service.
14
michaelbuddy 16 ago 1 reply      
Followed by a more expensive, subpar service. I typically lean towards getting government out of most business, but these muni ISPs have always struck me as a grassroots, democratic, very American bootstrap sort of thing - and commercial ISPs who fail to serve their customers as exhibiting very anti-American behavior.
15
lifeisstillgood 9 ago 0 replies      
"""There are laws in about 20 states that restrict municipal broadband, benefiting private ISPs that often donate heavily to state legislators."""

Aha! And all became clear.

16
DasIch 16 ago 2 replies      
I wonder if the necessary prerequisites for a free market will ever become common knowledge. This approach of just not regulating markets in the hope that a free market magically appears seems insane to me. It is like hitting a screw with a hammer and hoping the screw turns into a nail before the hammer connects. It never happens, and you always get a huge fucking mess that everyone somehow is surprised about.

Health care is the best example. The only thing unexpected and worrisome about it is that the executives at pharma companies have just now realized that they can increase prices this way. Doesn't exactly speak well for their knowledge of economics.

17
duncan_bayne 8 ago 0 replies      
Good. The fewer coercively funded projects like these the better.

To describe this as "community" broadband as some commentators do is really propaganda. Consider how absurd it'd sound if someone spoke of a "community Air Force".

18
20yrs_no_equity 16 ago 3 replies      
So long as there are levers of control, people will attempt to exploit them. Government, at every level, should have no power to prohibit entities, whether governmental or not, from providing internet service.

Freedom of transaction is a basic human right (whether the Bill of Rights talks about it or not; read the Preamble to the Bill of Rights and you'll see that, by its own account, it doesn't create rights - it creates limitations that keep the government from violating those rights).

Even if you disagree with the above, the First Amendment is unquestionably part of the Constitution, and thus this is a violation of freedom of speech (the internet is speech).

       cached 17 September 2016 16:02:01 GMT