What happened to optimizing for mental overhead instead of file size? This should simply be a build step, part of your minification and concatenation dance; I shouldn't have to weigh all of the following when deciding whether to close my <p> tag:
A p element's end tag may be omitted if the p element is immediately followed by an address, article, aside, blockquote, details, div, dl, fieldset, figcaption, figure, footer, form, h1, h2, h3, h4, h5, h6, header, hgroup, hr, main, menu, nav, ol, p, pre, section, table, or ul element, or if there is no more content in the parent element and the parent element is an HTML element that is not an a, audio, del, ins, map, noscript, or video element, or an autonomous custom element.
    <!-- Not recommended -->
    <!DOCTYPE html>
    <html>
      <head>
        <title>Spending money, spending bytes</title>
      </head>
      <body>
        <p>Sic.</p>
      </body>
    </html>

    <!-- Recommended -->
    <!DOCTYPE html>
    <title>Saving money, saving bytes</title>
    <p>Qed.
If you consider how the HTML5 parser actually works, many of the closing tags you encounter don't actually add any value unless there is trailing text that should be attached to the parent node.
One thing I've noticed is that Bing Webmaster Tools will report "The title is missing in the head section of the page" when there is a title but no <head>. Maybe Bing can't properly crawl pages without a <head>. Another service I've used had the same problem, but I can't remember which.
So it might be worth being careful with omitting <head>, and maybe other tags; I'm reconsidering whether it's a good idea.
Besides, isn't this "visual redundancy" (not to be confused with semantic redundancy) what compression is supposed to solve, and has been solving since, effectively, forever? So that we can code to reduce our (and the 'view source' reader's) cognitive load, and let gzip or brotli or whatever new scheme work its compressive magic before it squirts our payload across a newfangled binary HTTP/2 protocol?
Namely, the always beloved <noscript> tag: https://developer.mozilla.org/en/docs/Web/HTML/Element/noscr...
It is a flow content element in the body section,
but if used in the <head> it may contain link, style, and meta tags, and then it should not be treated as a content element.
Since the <head> element therefore changes the behavior of its child elements, does this make it non-optional?
P.S.: I think DOMParser.parseFromString() in Chrome gets this <noscript> behaviour wrong in some cases (it closes the <head> section because it treats the <noscript> tag as a content element, even though it is in the <head> with just link and style children, so it shouldn't close the <head>...).
Documents malformed this way cannot be parsed, e.g. with PHP's DOM functions, without significant headache.
I used to take the Amtrak every day from Union Station in DC to Penn Station in Baltimore. Friday evening trains were always full of people who brought dinner onboard. The food smells were awful--there is a reason Amtrak's Cafe Car only sells bland food. Being in a train without air conditioning and the smell of curry permeating the air sounds like my own personal hell.
The resolution drop could in principle be taken advantage of in computer graphics, especially in VR applications with robust enough eye tracking.
If you can simultaneously see several stars in the sky with averted gaze using your peripheral vision, why not several dots?
I suspect uncertainty plays a part, but image completion from higher-order feedback that completes the lines might drive the illusion more. Put another way, I believe that if you remove the gray lines, the illusion ceases to work.
You can read about it here, and play with a demo:
This article was great, fun to read. I think this chart summarizes the whole thing:
If you fixate on the dot in the middle, all letters are equally legible to your eyeballs. This lets you see directly the difference in resolution between your fovea and your peripheral vision.
I tested it on a friend and got the same thing, but he had to stay zoomed in longer than me; for me it was instant, while he had to stay focused for 5 or 10 seconds.
And now, even 48 hours later and without being exposed to the image again, I can't not see the 12 dots.
Nigeria and Africa are definitely the next big frontiers for the internet, and I honestly doubt that whatever Facebook's got planned is mutually beneficial enough. Though if FB decides to establish and maintain a persistent, competent power company in Nigeria, it might actually be worth it; NEPA is shit.
Requiring consistency in a distributed system generally leads to designs that reduce availability.
Which is one of the reasons that bank transactions generally do not rely on transactional updates against your bank. "Low level" operations as part of settlement may use transactions, but the bank system is "designed" (more like it has grown by accretion) to function almost entirely by settlement and reconciliation rather than holding onto any notion of consistency.
The real world rarely involves having a consistent view of anything. We often design software with consistency guarantees that are pointless because the guarantees can only hold until the data has been output, and are often obsolete before the user has even seen it.
That's not to say that there are no places where consistency matters, but often it matters because of thoughtless designs elsewhere that end up demanding unnecessary locks, killing throughput, failing if connectivity to some canonical data store happens to be unavailable, etc.
The places where we can't design systems to function without consistency guarantees are few and far between.
The truly nefarious aspect of NoSQL stores is that the problems that arise from giving up ACID often aren't obvious until your new product is actually in production and failures that you didn't plan for start to appear.
Once you're running a NoSQL system of considerable size, you're going to have a sizable number of engineers who are spending significant amounts of their time thinking about and repairing data integrity problems that arise from even minor failures that are happening every single day. There is really no general fix for this; it's going to be a persistent operational tax that stays with your company as long as the NoSQL store does.
The same isn't true for an ACID database. You may eventually run into scaling bottlenecks (although not nearly as soon as most people think), but transactions are darn close to magic in how much default resilience they give to your system. If an unexpected failure occurs, you can roll back the transaction that you're running in, and in almost 100% of cases this turns out to be a "good enough" solution, leaving your application state sane and data integrity sound.
In the long run, ACID databases pay dividends in allowing an engineering team to stay focused on building new features instead of getting lost in the weeds of never ending daily operational work. NoSQL stores on the other hand are more akin to an unpaid credit card bill, with unpaid interest continuing to compound month after month.
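For illustration, that rollback pattern is only a few lines with an ACID store; a minimal sketch using Python's stdlib sqlite3 (toy schema, standing in for whatever database you actually run):

    import sqlite3

    conn = sqlite3.connect("bank.db")
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (id TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT OR IGNORE INTO accounts VALUES (?, ?)", [("a", 100), ("b", 0)])
    conn.commit()

    try:
        with conn:  # one transaction: commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - 8 WHERE id = 'a'")
            conn.execute("UPDATE accounts SET balance = balance + 8 WHERE id = 'b'")
    except sqlite3.Error:
        pass  # the partial update was rolled back; application state is still sane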
    transact.batch([debit(account[a], 8), credit(account[b], 8)]).submit()
Create the profile with a generated uuid. Once that succeeds, then create the user with the same uuid.
If you build a system that allows orphaned profiles (by just ignoring them) then you avoid the need to deal with potentially missing profiles.
This is essentially implementing MVCC. Write all your data with a new version and then as a final step write to a ledger declaring the new version to be valid. In this case, creating the user is writing to that ledger.
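A minimal sketch of that ordering, with sqlite3 standing in for whatever store is involved (schema and names invented for illustration; the point is the write order, not the engine):

    import sqlite3, uuid

    conn = sqlite3.connect("app.db")
    conn.execute("CREATE TABLE IF NOT EXISTS profiles (id TEXT PRIMARY KEY, bio TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY, profile_id TEXT)")
    conn.commit()

    profile_id = str(uuid.uuid4())

    # Step 1: write the profile first; if we crash after this, it is just an ignorable orphan.
    conn.execute("INSERT INTO profiles (id, bio) VALUES (?, ?)", (profile_id, "hello"))
    conn.commit()

    # Step 2: creating the user is the "ledger" write that declares the profile valid.
    conn.execute("INSERT INTO users (id, profile_id) VALUES (?, ?)", (str(uuid.uuid4()), profile_id))
    conn.commit()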
Still, it's funny how banking seems to be the canonical example for why we need transactions given that most banking transactions are inconsistent (http://highscalability.com/blog/2013/5/1/myth-eric-brewer-on...).
For example, imagine someone doing the same thing year after year diligently. (S)he'd increase his or her skill by, say, 10% a year (I have no clue what realistic numbers are). Would that mean that the compound interest effect would occur?
I phrased it really naively, because while the answer is "yes" in those circumstances (1.1^n), I'm overlooking a lot and have no clue what I'm overlooking.
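For what it's worth, a quick sanity check of the naive 1.1^n model:

    rate = 1.10  # the assumed 10% skill growth per year
    for years in (5, 10, 20, 40):
        print(years, round(rate ** years, 1))  # prints 1.6, 2.6, 6.7, 45.3

So yes, under the naive assumptions the effect compounds dramatically; the open question is everything the model leaves out.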
I know it's off-topic, it's what I thought when I read the title and I never thought about it before, so I'm a bit too curious at the moment ;)
Then, there is the rise of microservices to consider. In this case, I also agree with the author that it becomes crucial to understand that the number of states your data model can be in can potentially multiply, since transactional updates are very difficult to do.
But I feel like on the opposite side of the spectrum of sophistication are people working on well-engineered eventually consistent data systems, with techniques like event sourcing, and a strong understanding of the hazards. There's a compelling argument that this more closely models the real world and unlocks scalability potential that is difficult or impossible to match with a fully consistent, ACID-compliant database.
Interestingly, in a recent project, I decided to layer in strict consistency on top of event sourcing underpinnings (Akka Persistence). My project has low write volume, but also no tolerance for the latency of a write conflict resolution strategy. That resulted in a library called Atomic Store.
Basically you make the transfer of money an event that is then atomically committed to an event log. The two bank accounts then eventually incorporate that state.
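A rough sketch of that shape (tables invented for illustration; the append to the event log is the only atomic step, and balances are derived from the log):

    import sqlite3

    conn = sqlite3.connect("ledger.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events "
                 "(seq INTEGER PRIMARY KEY AUTOINCREMENT, src TEXT, dst TEXT, amount INTEGER)")
    conn.commit()

    # The transfer is one atomically committed event, not two account updates.
    with conn:
        conn.execute("INSERT INTO events (src, dst, amount) VALUES (?, ?, ?)", ("a", "b", 8))

    # Each account eventually incorporates the state by folding over the log.
    def balance(account):
        row = conn.execute(
            "SELECT COALESCE(SUM(CASE WHEN dst = ? THEN amount ELSE -amount END), 0) "
            "FROM events WHERE src = ? OR dst = ?",
            (account, account, account)).fetchone()
        return row[0]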
But I agree that often life is easier if you just keep things simpler. If you require strong consistency, as with the user/profile example, don't make that state distributed. If you do make it distributed, you need to live with less consistency.
In these environments you atomically create objects in your application's "local" storage and have a reconciliation loop for creating objects in other services or deleting these orphan "local" objects.
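Sketched with entirely hypothetical client objects, that loop has roughly this shape:

    class RemoteRejected(Exception):
        pass

    def reconcile(local_store, remote):
        # local_store/remote are made-up stand-ins for your storage and the other service.
        for obj in local_store.pending():
            try:
                remote.create(obj)             # finish the half-done creation...
                local_store.mark_complete(obj.id)
            except RemoteRejected:
                local_store.delete(obj.id)     # ...or garbage-collect the local orphan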
I've stopped using bank transfers as an example for ACID transactions, and instead talk about social features:
- if I change a privacy setting in Facebook or remove access to a user, these changes should be atomic and durable
- transactions offer a good semantic with which to make these changes. They can be staged in queries, but nothing is successful until after a commit.
- without transactions, durability is hard to offer. You would essentially need to make each query flush to disk, rather than each transaction. Much more expensive. (See the sketch below.)
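A minimal sketch of that stage-then-commit semantic, with Python's sqlite3 as a stand-in database (schema invented for illustration); both changes become durable with a single commit, i.e. one flush:

    import sqlite3

    conn = sqlite3.connect("social.db")
    conn.execute("CREATE TABLE IF NOT EXISTS privacy "
                 "(user TEXT, setting TEXT, value TEXT, PRIMARY KEY (user, setting))")
    conn.execute("CREATE TABLE IF NOT EXISTS access (owner TEXT, grantee TEXT)")
    conn.commit()

    # Stage both changes, then make them atomic and durable together on commit.
    with conn:
        conn.execute("INSERT OR REPLACE INTO privacy VALUES (?, ?, ?)",
                     ("alice", "posts", "friends-only"))
        conn.execute("DELETE FROM access WHERE owner = ? AND grantee = ?",
                     ("alice", "mallory"))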
You are using MySQL, and you make a transaction with, say, a deposit and a withdrawal.
What happens on the MySQL machine if you pull the plug exactly when MySQL has done the deposit but not the withdrawal?
The ONLY difference between SQL transactions and NoSQL microservice transactions is the time between the parts of a transaction.
Personally I use a JSON file with state to execute my NoSQL microservice transactions, and it's a lot more scalable than having a pre-internet era SQL legacy design hogging all my time and resources.
In both this case about UC Berkeley and the lawsuit against Harvard & MIT, the legal attacks seem very wrong.
What I see is:
- Scenario A: spend $X to release free courses that benefit most of humanity
- Scenario B: spend $X+$Y to release free courses that also benefit the disabled population
(The $Y is extra costs to close-caption, transcribe to braille, etc.)
That these lawsuits insist you must spend that extra $Y to fulfill Scenario B or humanity can't have the knowledge at all is nonsensical to me.
E.g. the school budget has finite money. Let's say it can release 15 free courses by spending just $X, but to avoid a lawsuit it can only release 10 courses by spending $X+$Y... or release none at all because it's not worth the legal minefield. Why is the 2nd scenario more optimized for humanity?
Sure, we should encourage the universal accessibility of the video courses but to formalize it into the nastiness of lawsuits? It doesn't seem right.
From the DOJ letter:
> 1. Some videos did not have captions.
> 2a. many videos did not provide an alternative way to access images or visual information (e.g., graphs, charts, animations, or urls on slides)
> 2b. videos containing text sometimes had poor color contrast, which made the text unreadable for those with low vision
> 3. Many documents were inaccessible ... [HTML and PDF stuff]
> 4. Some links were not keyboard accessible
> 5. Websites and materials that were integrated into the course material were not fully accessible
Before we jump to debating the ADA, let's see if there's a sensible solution!
The Department of Justice seems to have received complaints mainly regarding the absence of transcripts for audio and video content. Can't speech-to-text tech help?
Other complaints include poor formatting of PDFs. If the courses are indeed taken down, I can only hope that at least this starts a conversation about how to create smarter regulations -- regulations that cause less collateral damage.
Though sadly it will be a while until we hear: "You are visiting Hacker News. In a simple look and without any images, it features a list of links to articles and a menu at the top. Do you want me to read the article titles, continue on to the menu, or hear a more detailed description of the page?"
Do any of these courses or the means through which they are made available advertise paid options, paid alternatives, or other paid courses?
If both of those are 'no' then this is disgraceful. If either is 'yes' then it is completely justified.
Wow. Talk about harming hundreds of people for no reason at all.
IANAL, but I believe the ADA would be applicable for students of the college, not necessarily online content consumers viewing the college's courses.
Of course, the problem here is that you have to start playing expensive lawyer games to get that clarified -- and it might take years. So even if Berkeley has done nothing wrong, they still might end up having to pull the courses.
This is much better than trying to estimate bandwidth from packet loss.
Google's BBR seems to use the same exponential probing that slow start does, so I wonder how it will perform when you are staying in-network, don't often have to worry about packet loss or congestion, and want the link to start off at full throttle.
Once BBR enters its steady state it intentionally cycles faster and slower, but this seems like it is creating additional latency when you don't want it. Think of a traffic burst that happens just as the link decides to cycle slower.
It also seems like the protocol intentionally runs slower than possible so as not to create buffer pressure on the receiving side, if I'm understanding this quick description properly: "then cruising at the estimated bandwidth to utilize the pipe without creating excess queue".
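For reference, the published BBR description pins that cycling to a fixed pacing-gain schedule applied to the bottleneck-bandwidth estimate; a rough sketch of the steady-state (PROBE_BW) behavior as I read it:

    # BBR's PROBE_BW pacing-gain cycle, one phase per round trip (numbers from the BBR paper):
    # probe up at 1.25x to find spare bandwidth, drain at 0.75x, then cruise at 1x.
    GAIN_CYCLE = [1.25, 0.75, 1, 1, 1, 1, 1, 1]

    def pacing_rate(btl_bw_bps, round_trips):
        """Pacing rate for the current phase, given the bottleneck-bandwidth estimate."""
        return GAIN_CYCLE[round_trips % len(GAIN_CYCLE)] * btl_bw_bps

So the "slower than possible" cruising is the long run of 1x phases, and the 0.75x phase is what drains any queue the 1.25x probe may have built.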
This line just scares me: "Occasionally, on an as-needed basis, it sends significantly slower to probe for RTT (PROBE_RTT mode)."
Google is going to make patches that work for them, but that doesn't always mean it will work for everybody else. This seems very closely tailored to Google's traffic issues and serving HTTP over persistent connections, and not a general purpose feature; think of games, intra-network low-latency applications, etc.
From an industry viewpoint, I wonder how this will perform over traditionally higher-latency and higher-loss wireless networks.
As an aside, I love how small the patch is, weighing in at 875 LOC including comments.
How does this interact sending traffic through routers using algorithms like fq_codel to reduce bufferbloat? Is it better to just have one or the other or do they work well together?
Hmm, reading the code, it says it does play well with TCP, but "requires the fq ("Fair Queue") pacing packet scheduler." In fact, later it says it MUST be used with fq. Hmm.
BTW the code is very readable and well commented.
Can this really just be patched in, with no changes to specialized hardware?
How's this different from TCP Vegas and FAST TCP, which also use delay to infer the bottleneck bandwidth?
The kicker is that Kafka can be rock solid in terms of handling massive throughput and reliability when the wheels are well greased, but there are a lot of largely undocumented lessons to learn along the way RE: configuration and certain surprising behavior that can arise at scale (such as https://issues.apache.org/jira/browse/KAFKA-2063, which our team ran into maybe a year ago & is only being fixed now).
Symptoms of these issues can cause additional knock-on effects with respect to things like leader election (we wound up with a "zombie leader" in our cluster that caused all sorts of bizarre problems) and graceful shutdowns.
Add to that the fact the software is still very much under active development (sporadic partition replica drops after an upgrade from 0.8.1 to 0.8.2; we had to apply some small but crucial patches from Uber's fork) & that it needs a certain level of operational maturity to monitor it all ... it's easy to get nervous about what the next "surprise" will be.
Having said all that, I'd use Kafka again in a heartbeat for those high volume use cases where reliability matters. Not sure I'd advise others without similar operational experience to do the same for anything mission critical, though -- unless you like stress. That stress is why Confluent is in business. :)
I am wondering if working on solving the actual problems with Kafka would have been the better route. I've never used Kafka and I find ZeroMQ great, but reading that their logging solution does drop log messages is a huge no-go for operations. How can you claim to run a serious business and say "babies will die" when you can't be sure you'll be able to find problems?
Because, when will you lose logs? Not in normal operation, but when weird things happen. When networking has a hiccup. When load on the system is too high, so most likely when many people are using your service. Exactly when shit hits the fan. And you just made the decision that it's OK to drop log messages in such cases? That's not good.
I think you should either dive into Kafka/Zookeeper and fix your problems or switch to another logging solution. You should probably just drop that nonsense "streaming and real-time logs" requirement, live with a log delay of a few seconds, and build something really stable instead of building something inherently unstable. Honestly, just collecting syslogs on the core VM and sending them to a central server would have been the better solution. Better than looking into fancy real-time, streaming logs on a Sunday night because the system is having a breakdown and you can't even be sure that you are not missing essential logs.
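That simpler path is basically stdlib territory; e.g. in Python (host and port are illustrative):

    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("app")
    # Forward to a central syslog collector instead of a streaming pipeline.
    logger.addHandler(SysLogHandler(address=("logs.internal.example", 514)))
    logger.error("payment worker crashed")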
And our log messages are ridiculously big at times (15k to as big as 50k).
Our pipe never has problems. What fails for us is Elasticsearch. In fact, at one point in the past we did 100k messages/s when we embarrassingly had debug turned on in production; RabbitMQ did not fail, but Elasticsearch did, and sadly Flume did as well (I tried to get rid of Flume with a custom Rust AMQP-to-Elasticsearch client, but at the time there were some bugs with the libraries... maybe I will check out Mozilla Heka again someday).
There is this sort of beating of the developer chest with a lot of tech companies: hey, listen, we are ultra important, we are dealing with ridiculous traffic, and we need ultra high performance. Please tell/show me these numbers... Or maybe stop logging crap you don't need to log.
Or maybe I'm wrong and we should log absolutely everything, and Auth0 made the right choice given their needs (let's assume they have millions of messages a second); I still think I could make a sharded RabbitMQ go pretty far.
This goes for other technology as well. You don't need to pick hot, glamorous NoSQL when PostgreSQL or MySQL and a tiny bit of engineering will get the job done just fine, particularly when mature solutions give you so many things free out of the box (RabbitMQ gives you a ton of stuff, like a cool admin UI and routing, that you would have to build yourself in ZeroMQ).
I was surprised by the contrasting sense of importance of delivery guarantees in the article. At the start, losing a message was akin to the death of a child. At the end, shrug. Now every single machine failure (or even an mq process restart) will lose you the log messages stored in memory :(.
Glad to hear you found a solution that worked for you though! Would love to hear about difficulties you had with the new system, in particular adding brokers.
Running Zk and Kafka on the same nodes is likely not the best thing.
But if Auth0 runs their entire operations on AWS, maybe Kinesis would have been a more natural transition.
Kafka gives you features that certain systems cannot live without, like on-disk persistence (saved my life a couple of times) and topics. Filtering messages on the client side like ZeroMQ does is not an option in many cases; just think about security. I think Kafka has a long way to go before it can be used as a general message queue (many features are not there yet, like a visibility timeout for example), but if you can manage Zookeeper and have the means to work with it (somebody understands it and knows its quirks), it can provide a reliable platform for distributing a large number of messages with low latency and high throughput, just like it does at LinkedIn.
If deployed using the Netflix co-processes, both are very durable.
I used MQTT but only as a message bus.
Now about Kafka vs ZeroMQ: you want Kafka if you cannot tolerate the loss of even a single message. The append-only log with committed reader positions is a perfect fit for that.
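A sketch of that consume-then-commit loop with the kafka-python client (topic, broker, and handler are hypothetical):

    from kafka import KafkaConsumer

    def process(payload):
        print(payload)  # stand-in for your real handler

    consumer = KafkaConsumer(
        "events",                        # hypothetical topic
        bootstrap_servers="localhost:9092",
        group_id="worker-group",
        enable_auto_commit=False,        # commit positions only after processing
    )

    for message in consumer:
        process(message.value)           # if this throws, the position is not committed
        consumer.commit()                # reader position advances only after success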
These jets cannot fly without GPS.
When they (the military) knocked out GPS intentionally around China Lake NAS a few months back (for testing aircraft in GPS denied environments) -- all Embraers were told to avoid the area:
THIS NOTAM APPLIES TO ALL AIRCRAFT RELYING ON GPS. ADDITIONALLY, DUE TO GPS INTERFERENCE IMPACTS POTENTIALLY AFFECTING EMBRAER PHENOM 300 AIRCRAFT FLIGHT STABILITY CONTROLS, FAA RECOMMENDS EMB PHENOM PILOTS AVOID THE ABOVE TESTING AREA AND CLOSELY MONITOR FLIGHT CONTROL SYSTEMS DUE TO POTENTIAL LOSS OF GPS SIGNAL.
What this is saying:
1) If you don't have a yaw damper, then you'll have a rough ride
2) because the autopilot will induce oscillation (dutch roll) after loss of GPS
Snoman's "Dance Music Manual" and Shepard's "Refining Sound" are good books to start with.
(I'm old school about synths and sequencers; I think it's best to start with a knobby hardware synth - MicroBrute, MS2000, Minilogue, Bass Station II, SH-201, Mopho - and understand how it's designed to work and how it glitches/fails gracefully.)
https://news.ycombinator.com/item?id=9635037 (that particular K-S synth isn't online anymore but somebody else put up a demo)
https://news.ycombinator.com/item?id=10177716 (SuperCollider, Alda, et al)
Big fan of Terry Riley, La Monte Young, Philip Glass, and Arvo Pärt also.
The exposition, history, coding - the whole package - is just great. Thank you for this!
I am convinced that with modern Machine Learning algorithms thrown into the pipeline it will only be a few short years before we are able to conjure up whatever type of music our context dictates. It's already happening.
It will be interesting to see how the RIAA respond to a new world where we can say
"Alexa, please play me some Led Zeppelin remixed with Rihanna in the style of Skrillex" (and you fill in the blanks)
Or even better, after a one-time analysis of you and your partner's entire Spotify collections, just start making up new works perfectly in tune with the moment.
Really exciting times, and a stunning article.
That said, for me many of the later demos clip whenever 3+ sounds are playing - e.g. the "Cor Anglais" one. I expect that adding a compressor at the end of the audio chain would fix it.
(It might be platform dependent though - when I've experimented with webaudio, it seems like sometimes a demo will clip on windows but not on mac, or such. I guess OSes sometimes compress outgoing audio automatically?)
He was ~17 in that video, and he built the computer as well.
I had heard of other minimalist artists before, but until a couple years ago somehow had missed Reich.
I'm now an addict, especially to his later pieces when he really started growing his work into larger and larger themes. "Music for 18 Musicians" has become one of my favorite pieces of music of all time. I say this as somebody who finds a great deal of modern art fairly deplorable -- the first time I heard some of Reich's pieces I stayed up the entire night finding everything of his I could put into my ears.
Reich has a knack for finding incredibly beautiful and urgent patterns and sounds and exploring them to a kind of amazing fullness. I never had the pleasure of playing any of his pieces when I was attempting to become a musician; I understand that the practice for a piece can take months to a year because of the difficulties of maintaining your part of the phase. Listening to his music is, to me, a very intense activity, because I desperately want to notice when the music starts changing, and because of the phasing it never does. There are certain parts of the phasing that I find particularly enjoyable, but I've also found that you can't just jump to them; you have to encounter them in the context of the phases that come before and after. Once you get quite familiar with his music you'll find elements of his influence all over the place (for example, careful listeners will probably recognize this piece as the core of a much later EDM hit https://www.youtube.com/watch?v=Miu19QHBQiw).
I don't enjoy his earlier, very intense explorations into phase music, but he manages to develop the concepts into a very full and beautiful music:
https://www.youtube.com/watch?v=ZXJWO2FQ16c (music for 18)
https://www.youtube.com/watch?v=zLckHHc25ww (another performance of the same)
https://www.youtube.com/watch?v=edKE10Yz_zs (six pianos)
https://www.youtube.com/watch?v=TbC5zhFX7Kw (Octet Eight Lines)
https://www.youtube.com/watch?v=O5qOtXql-oI (Desert Music)
https://www.youtube.com/watch?v=Udn9cZYWmIk (Music for a large ensemble)
And this insanity, a solo performance of one of his early phase pieces https://www.youtube.com/watch?v=AnQdP03iYIo
If you liked the visualizations in this, here's a nice one for "Music for Pieces of Wood": https://www.youtube.com/watch?v=gy2kyRrXm2g
And then humans doing the same: https://www.youtube.com/watch?v=5LbmvD7ytDc
And another good visualization of the phase music approach: https://www.youtube.com/watch?v=lzkOFJMI5i8
- There is a chance that this plan might fail and 6 people would get killed instead of 5.
- Maybe there is a reason why the 5 people are tied to the train tracks. Honest people don't usually end up like this; maybe they're in the mafia, and their deaths would be an expected consequence of their high-risk criminal lifestyle. On the other hand, the guy standing on the bridge is more likely to be a regular person who did nothing wrong.
- You would go to jail for manslaughter.
- You would psychologically damage yourself by pushing the person off a bridge.
- Maybe you have an undiagnosed case of schizophrenia and the 5 people on the tracks are not real. The odds of it being an illusion (and of you being crazy) are probably higher than of it being real; it's quite arrogant to trust your own senses (to the point of killing someone) when you're confronted with such an incredibly unlikely situation.
My username is inspired by my favorite roguelike growing up, JauntTrooper: Mission Thunderbolt. It was released in 1992.
Here it is, if folks are interested in playing it: http://www.old-games.com/download/3974/jaunttrooper-mission-...
We did sell out of tickets; our very graciously donated venue (thanks to Eventbrite) has an attendee limit since it's basically an office rather than a large venue. I hope you all will watch the streams! The talks will also be recorded so you can watch them later.
I really wanted to go, but having just gone to a roguelike conference last month and this one being on the opposite coast, I just couldn't swing it. But it will be streamed!
Full of history and comedy.
But coming from the east coast, the west coast is weird.
The reality, I believe, is this: Arts Council funding is limited and decisions need to be made about which projects will be funded over others.
So they need some transparent system based on info from: the artists, their peers and the public.
Without any information whatsoever, it's not clear on what basis they could be making any decisions. The whims of some art director who happens to prefer one thing over another?
Seriously: who comes up with that? We finance culture because it needs to stay clear of the markets so as not to become a dull product. At best they will aim at some mainstream, not at all supporting any advances in art.
I think the headline is misrepresentative of what this is about.
It does not sound like a measure of arts quality, but like creating a consistent set of metrics to judge with as a baseline.
Nothing wrong with that, and everyone who doesn't believe in it doesn't have to use the system.
The Arts Council is already using these criteria for grants and for hiring. It is only logical to use them on art itself.
Art is about the artist, not the art - and no one wants to be directly told they are not producing "quality art" by some system.
Of course, that may mean they would have to go out and shoot the wolves after the dogs warn them, but that would have a positive effect on the wolves, teaching them that they get shot if they go near the sheep. Culls don't teach the wolf packs anything.
That may also mean the farmers have to live with their herds, which doesn't blend well with modern life, so maybe we should scrap the sheep farming and not the wolves?
Plus, the wolves are not really a problem here in Minnesota; a few people's pets have been killed, which sucks, but that's all you really hear about.
Although the gray wolf as a species is classified as "Least Concern" by the IUCN (International Union for Conservation of Nature), the Scandinavian wolf population is classified as "Endangered".
Instead of killing wolves, we should be trapping them and redistributing them to help ensure genetic diversity in isolated populations like Scandinavia's.
Not sure if the latter is still the case. When I last visited the far north the effects of overgrazing by reindeer were starting to be severe in places. The densities of reindeer are pretty high so wolves would have a spectacularly easy time and would do very well as a result.
That population would be below the level necessary to maintain genetic diversity. At least in NZ, the Kiwis have the decency to say that they want a species eliminated outright.
And before you downvote, ask yourself: is senseless violence to stop outsiders who are just trying to survive really any less bad when the outsiders are animals?
It's strange, as they usually do - remember how humanely and thoughtfully they handled the terror incidents of 2011.
If the former, by all means pick some esoteric language. If it's the latter, more mundane issues matter, like the existing codebase, skill set and, I don't know, perhaps using the right tool for the job? C# can do things Python cannot and vice versa.
Learning Scala to show off and fit in Graham's paradox is silly and doesn't prove that you are a smarter programmer. A smart programmer is someone who comes up with smart solutions; you can write crap code in any language.
To the contrary, programmers with the ability to deliver and iterate working code quickly are what is needed -- and especially without overengineering, or inadvertently creating huge future technical debt due to inexperience.
And if there are only 2 or 3 or 5 programmers on my team, and one of them gets sick or leaves, it's far more important to have a big pool of potential replacements who can get up to speed in a couple of weeks, not months -- which means a popular language and a straightforward codebase.
In my professional experience, people that are always learning new languages or are constantly using new libraries and frameworks are never the ones who accomplish the most. They simply are the ones who are always seeking novelty; they never become great at a language, because they are always wanting to try something new. They start lots of projects, but don't finish them.
I don't think those are the best qualities to look for in employees.
I think my choice today for a Python Paradox language would be Erlang or OCaml. OCaml + MirageOS is a system-level simplification that makes application correctness/security a much more tractable task than [any other language + Linux].
Erlang is the most common-sense way to build resilient, distributed applications. But there is a high barrier to entry in learning both Erlang and the OTP framework that makes it useful.
Having said that, I've never worked at a company where choosing Erlang or OCaml was a possibility. I think there are companies that make bold technical choices, but I haven't had the good fortune of being given the chance to work at such a place.
If you filter for Scala, you are more likely to pick academic purists trying to show off with their code. If you filter for Go, you are more likely to pick pragmatists, who will be aware that every choice is a tradeoff and will value maintainable code over clever code.
I already made a choice.
However, external deployment of code somehow doesn't feel like an improvement to me. From a DevOps perspective, having webtask code run on a closed third-party environment is a big risk. It makes continuous integration, testing, and error reporting more difficult, or even impossible if the webtask service isn't well thought out.
The way forward for webhooks, in my opinion, should be standardization of push and pull protocols, covering concerns like cryptographic signatures, metadata headers, and failure recovery through event sourcing.
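For the signatures piece, the usual shape is an HMAC over the raw request body that the receiver recomputes; a minimal sketch (header name and secret are illustrative, not any one provider's convention):

    import hmac, hashlib

    SECRET = b"shared-webhook-secret"  # illustrative; exchanged out of band

    def sign(body):
        return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

    def verify(body, signature_header):
        # compare_digest avoids leaking the expected value through timing differences
        return hmac.compare_digest(sign(body), signature_header)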
Huginn is an amazing self-hosted, dev-friendly Zapier/IFTTT/MS Flow alternative that, more relevant to this topic, supports acting arbitrarily on webhooks.
* if you don't control the service running the web task, how do you trust the returned result to be the actual execution of the code you submitted?
For these cases I've just finished implementing https://requesthub.xyz/, which translates webhooks from one service into calls to another service, all controlled by a jq filter.
It's a shame, as ideally I'm looking for a business logic system (with database access, mind you) that would allow development with Vue.js.
What does this even mean?
> Isolated using containers
Sorry, I'm a little tired.
1) Winzerer Zelt: 250m (http://www.abendzeitung-muenchen.de/inhalt.oktoberfest-winze...)
2) Bräurosl: 240m (http://www.abendzeitung-muenchen.de/inhalt.wiesn-bier-pipeli...)
3) Hackerzelt: 250m (http://www.oktoberfest-live.de/festzelte/bier-pipeline-fuers...)
I believe that this year the workers installed more beer pipelines (kinda makes sense, given the terrorism threat and the heavy restriction on vehicular traffic), but can't find any numbers.
I'm not that clear on how US politics and 'lobbying' work, but why don't you just call it what it is - a bribe? In this case 200 families will be back to slow speeds and poor service from their ISP (who no doubt will be putting their prices up), just because said ISP has enough spare cash to bribe the politicians. How is that fair?
It has nothing to do with free market or net neutrality concerns.
The Vick Family Farms predicament was described in a recent New York Times article. The business has used Greenlight's faster Internet to support a high-tech packing plant that automatically sorts sweet potatoes by size and quality, with each spud tagged with its own bar code. "We're very worried because there is no way we could run this equipment on the Internet service we used to have, and we can't imagine the loss we'll have to the business," farm sales head Charlotte Vick said.
Potato-sorting and tagging does not require internet access.
Aha! And all became clear.
Health care is the best example. The only thing unexpected and worrisome about it is that the executives at pharma companies have just now realized that they can increase prices this way. Doesn't exactly speak well for their knowledge of economics.
To describe this as "community" broadband as some commentators do is really propaganda. Consider how absurd it'd sound if someone spoke of a "community Air Force".
Freedom of transaction is a basic human right (whether the Bill of Rights talks about it or not, read the Preamble to the Bill of Rights and you'll see the Bill of Rights doesn't create rights, according to the Bill of Rights, it creates limitations on government from violating those rights.)
Even if you disagree with the above, the First Amendment is unquestionably part of the constitution and thus this is a violation of freedom of speech (internet is speech.)