Want a filter clause? Got it.
Need a weighted distinct count? Now it's part of the language.
Also, you can rephrase a lot of these SQL features as subqueries. It's surprising how many database bugs you can find when you do it. Not so much in Postgres, but I probably found a dozen in Redshift. I mean rephrasing this:
    select sum(x) filter (where x < 5), sum(x) filter (where x < 7)
    from generate_series(1,10,1) s(x);

as this:

    with t as (select * from generate_series(1,10,1) s(x))
    select (select sum(x) from t where x < 5),
           (select sum(x) from t where x < 7);
My guess is that such extensions, while useful, are somewhat marginalised features in terms of usage. Thus no one ever learns them formally; people just Google for what they need -- if it comes up -- and get the CASE solution, in this case (pun unintentional), hence perpetuating that pattern. Also, of course, the CASE solution is a lot more powerful, as the expression that gets fed into the aggregate function can be basically anything.
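For anyone who hasn't seen the CASE pattern spelled out, here is a minimal sketch of the rewrite (run through Python's stdlib sqlite3 purely so it's self-contained; the CASE form works on basically any engine, while FILTER needs e.g. Postgres 9.4+ or SQLite 3.30+):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE s (x INTEGER)")
    conn.executemany("INSERT INTO s VALUES (?)", [(i,) for i in range(1, 11)])

    row = conn.execute("""
        SELECT
            sum(CASE WHEN x < 5 THEN x END),  -- same as sum(x) FILTER (WHERE x < 5)
            sum(CASE WHEN x < 7 THEN x END)   -- same as sum(x) FILTER (WHERE x < 7)
        FROM s
    """).fetchone()

    print(row)  # (10, 21)

The CASE branch can return any expression, not just x, which is where the extra power comes from.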
Combine a system like that with a requirement that N (some reasonable number of randomly chosen people) must agree when combining pieces or splitting pieces before the change is applied to the master board. And add the ability of the project admins to scan the board and lock in place obvious good solutions. That might work.
Or maybe the stuff around anonymous crowds will always contain destructive assholes.
People might be interested in the previous discussion of this article (only 25 comments) https://news.ycombinator.com/item?id=8499452
Here's a different post about the 2011 challenge, with some interesting comments: https://news.ycombinator.com/item?id=9021383
And here's another one with 50 comments: https://news.ycombinator.com/item?id=3164466
You cannot just give trust away like hugs. Trust must be earned. Trust is a social construct, and if that's not built into your system, your system is bound to fail.
Almost everyone will be honest and genuine, but that's not enough. It takes just one asshole in a thousand to burn down the building, if they have nothing to lose.
The latest work, MathBoxes, uses the recognition engine from the starPAD SDK: http://graphics.cs.brown.edu/research/pcc/research.html#star...
It's a great toolkit for building pen-centric computing tools (especially math recognizers and tools), but unfortunately it is heavily tied to the old Windows 7 tablet APIs and so isn't easily generalized. I've been hoping to port it to work on newer hardware for the past few years, but have not yet found the time. If anyone wants to take on that project it would be incredibly useful (especially since there seems to be a resurgence of pen-centric computing on the near horizon).
You can find more pen-math work on Brown's website: http://cs.brown.edu/research/ptc/FluidMath.html
Stop doing all these slightly-better-in-some-way-but-not-really things (Zeppelin, etc). You've lost.
But I'm so frustrated trying to use Jupyter on a tablet. The compute model is perfect for using my tablet, but the UI just doesn't work that well.
It seems to me that using these tools is enough to arouse suspicion, thereby having the opposite effect to what they are intended for.
So essentially, using Chrome on Windows, though perhaps less secure, makes you less likely to be targeted than using Tor on Windows or on Tails.
Case in point: attribution based on the skill of these attacks doesn't dox the attacker, only the end result of their attacks. Meaning these may not have been sponsored attacks, but someone farming intel to capitalize on.
This is happening to us here. I think they're doing it by using the "No true Scotsman" informal fallacy and other methods. We'd blame the hundreds of thousands of new users. I believe they've found a way to unwind this community. Nothing revolutionary will ever come from here (how could it?). I think the community has been compromised and that we won't find out until 50 years from now when most of us are dead.
This is how I feel after years of watching the community. I have no stake in it either way. I'm open to opposing views.
If we're going to interpret and strengthen the moves we make, it might be smarter and safer to leave the interpretation of movement out of the suit. So wear a light, easy-to-change-into/out-of suit with sensors that controls a machine in front of you. This also saves space, as you won't have to make room for a pesky, fleshy, squishy human.
(Just thinking out loud here)
Has anyone actually benchmarked this recently? It's such a trivial optimization for a modern compiler.
I have trouble believing that they'll produce different code when the return value isn't used.
I found the next posts/chapters on Marshalling and Encoding even more interesting, with comparisons and pros/cons of different encoding techniques. I was not aware of the FlatBuffers library; I see why the author likes it and may have to look into using it in future projects.
Furthermore, the unforgiving drive for consistency is a reason why people don't update their beliefs when new evidence comes to light. In Superforecasting (a book about people with an unusual ability to forecast the future), the author says that one common trait among superforecasters is that they have a larger capacity for tolerating cognitive dissonance. The drive to avoid cognitive dissonance shackles you to your existing beliefs (see confirmation bias).
It's a weird thing, alarm bells going off everywhere in your head, but you still tell yourself: it's alright, I'm OK, this is what I have to do, it's the best life choice, it's not that bad, everyone else is wrong, ...
For me personally, the subconscious reason why I acted that way is that I knew the repercussions if I tried to leave. My whole family, friends, everyone I cared for would start shunning me. I would've been kicked out of the house by my parents, completely on my own, no contact at all. That's a scary thought when you've been taught the world is a wicked and evil place. This year, the group has become even more aggressive when it comes down to shunning, showing emotional propaganda videos at their conventions.
If success is defined from an economic point of view, in absolute terms, it makes sense, as the best predictor of your wealth is your parents' wealth.
"Intelligence" is harder to add to the equation, as it is more difficult to measure than parents' wealth, and there are known biases (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect) that make people think they are smarter/more competent than they really are. So it is even possible that a superiority complex or insecurity is relevant even when the participants in the survey don't think so. (https://en.wikipedia.org/wiki/The_Triple_Package)
These results show the consequence of diminishing socio-economic mobility. The same traits and skills in a high-mobility society are going to have a big weight in achieving "success". In a stagnant society where the system is rigged to make the poor stay poor and the rich stay rich, "success" becomes an inherited trait, making "socioeconomic background" the best and only predictor.
The reasoning seems to go like this: Asian-Americans make more money. Chinese-Americans are Asian-Americans. Therefore, raise your kids the way Chinese parents do. But if you look at the stats, Chinese-Americans don't do especially well.
You'd be better off raising your child in the "Filipino style" if such a thing exists.
Indian American : $127,591
Taiwanese American : $85,566
Filipino American : $82,389
Australian American : $76,095
Latvian American : $76,040
British American : $75,788
European American : $75,341
Russian American : $75,305
Lithuanian American : $73,678
Austrian American : $72,284
Scandinavian American : $72,075
Serbian American : $71,394
Croatian American : $71,047
Japanese American : $70,261
Swiss American : $69,941
Slovene American : $69,842
Bulgarian American : $69,758
Romanian American : $69,598
Chinese American : $69,586 (including Taiwanese American)
Awesome game. Hard as hell, though.
Every time you start the game all the levels are randomly generated. Each level has a treasure room which contains an item that may give a boost to your status or give you a special ability. Some items make the playthrough a lot easier, some can make it even harder. The items you get are basically down to luck.
The game is still hard anyway. There's no item that guarantees you'll win and, fortunately, there are strategies that help you make the most out of what you got. If you're good enough at the game the items may not even matter, you're just that good.
Unfortunately we don't have infinite tries at life like we do in a video game to learn how to get better. You have to learn while you play the only run you've got.
That's how I see the whole issue of luck vs. merit.
Also if you say "my group succeeded because we believe in our group" then you need to not analyse one generation but several generations together, because that statement is not about a single generation. E.g. the question is not why is the Jew John Doe successful, but why are Jews on average more successful now despite having faced a lot of trouble in the past?
While this article seems logical in itself, it actually gave me the idea that the book may very well have a point, precisely because of common sense.
If you identify that 56% of successful people have Trait X you have no idea if Trait X is associated with success unless you also discover how many unsuccessful people have Trait X. Trait X can be "belonging to a given subculture".
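To make the base-rate point concrete (the numbers below are entirely made up):

    # 56% of successful people have Trait X -- sounds impressive until you
    # check the unsuccessful group and find exactly the same rate.
    successful_with_x, successful_total = 56, 100
    unsuccessful_with_x, unsuccessful_total = 5600, 10000

    p_x_given_success = successful_with_x / successful_total      # 0.56
    p_x_given_failure = unsuccessful_with_x / unsuccessful_total  # 0.56

    # Identical rates: Trait X tells you nothing about success here.
    print(p_x_given_success, p_x_given_failure)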
Taking the entire population and subtracting the successful people does not leave you with the unsuccessful people. Unless you are prepared to define someone who is in the process of attempting something as unsuccessful. That's apart from the pointlessness of defining "success" for a large population of individuals (happiness? wealth? freedom? connectedness?).
Given the self-selection bias that's involved, that doesn't sound very reassuring, granted the paper probably has a better discussion of methodology. Still, as it stands, it's pretty much impossible to guess the significance of the result from the news article.
Most of the time what I see is the Texas Sharpshooter fallacy. "The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the tightest cluster of hits and claims to be a sharpshooter." https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
Does it just mean that people can root all devices using this chipset? Or something worse?
(sorry if this is obvious, I'm just not in the know)
In particular this line in the screenshot hints at what he did: "Overwriting syscall_table_5 pointer"
The issue likely applies to particular Qualcomm devices, and a software patch should be possible.
If it's not, well, uhh, yeah this is kind of a problem.
(... unless you are one)
A little context wouldn't have hurt anyone.
The only real sin of RSS (beyond the holy wars and format bikeshedding and committee madness and and and...) is that it's too honest a format. It's a format for stuff that matters, for content that deserves to be read; it's too pure to survive in a world of content silos, stalking analytics and inaccessible material-designs. Its innocence doomed it in a very poetic way.
Perhaps there's a better way of handling this. I can't read it directly on my Kindle because the browser there - optimistically described as "experimental" - is shocking. Pocket is great because it does a good job of producing a page which consists of just the typing without the usual horrific web fluff (although sometimes it gets it wrong and the graphics go missing).
It seems a shame that, when most of what I'm interested in started life as someone else essentially entering text into a document, there's no way of obtaining it in that form but instead it has to be manipulated into something sensible. I don't want an "experience"; I just want to read what you've typed.
Would it help if I gave you my email address?
I agree it's interesting to look at your content when loaded in an RSS reader. IMHO most feeds are actually more readable when loaded in a clean uncluttered RSS reader than in the original webpage. If the content is good, the reading experience should not be harmed by focusing just on its text and images and removing extra styling.
Shameless plug: the RSS reader I maintain is https://www.feedbunch.com , comments are welcome.
For the Reuters feeds, this works out fine. The content is text, not markup. There are few or no HTML tags. The Reuters feeds are headlines and a sentence or two. The Associated Press also has RSS feeds, and it's very similar. The Voice of America's feeds are much wordier; they often have the whole article.
Space News has an RSS feed. The Senate Democrats have an RSS feed covering what's happening on the Senate floor. (The GOP discontinued their feed.) The House Energy and Commerce Committee has a feed with markup in embedded JSON. Not sure what's going on there. Even The Hollywood Reporter has an RSS feed.
So for real news, RSS is in good shape. RSS seems to be doing fine for sources that have something important to say.
http://feeds.reuters.com/reuters/topNews
http://spacenews.com/feed/
https://democrats.senate.gov/feed/
http://www.gop.gov/static/index.php
https://energycommerce.house.gov/rss.xml?GroupTypeID=1
http://feeds.feedburner.com/thr/news
In a more general sense than RSS, I also have to install extensions to format JSON. Considering how much browsers are targeting developers these days, might they consider rendering JSON, XML, etc in some standard way that is useful to developers (as an option at least). I am talking about syntax highlighting as well as some basic interactive features like expanding/collapsing.
Bonus: the @code attribute can be substituted into the URL for an image to visually identify the weather condition referred to by the code:
Just replace "26" part of "26.gif" with another value.
Edit: I mean for the purposes of spam, not for the purposes of making an app like the one OP linked to.
Almost every iMessage user who activated with a Chinese mobile number has been receiving 3~10 spam iMessages every day about online gambling etc. since ~2012, when the iMessage service went live in China. The content of those messages has been pretty much the same, with some minor variants in wording.
With the scale of the spam, I believe it's likely not sent through UI scripting; more likely the iMessage protocol has been reverse-engineered and exploited by spammers. So disabling UI scripting won't help anything; it will just cause trouble for developers with legitimate usage.
If you want something more open, that exists too, which is great. It doesn't make sense to complain that iMessage isn't what you want it to be. Choice means that open and closed solutions exist in the market.
Perhaps the OP really is talking about the Messages.app UI, but the screenshot on GitHub implies that this is some sort of alternate UI (curses-based). Perhaps I'm misunderstanding.
There were people selling apps that brought iMessage to other platforms by using VMs on servers running the desktop app and using the scripting functionality.
Still technically works if you disable SIP, but that's a big ask just to be able to use my little toy app.
It is understandable that Apple doesn't want malware to easily read people's messages.
Yes, guilt is the means by which so much bad stuff gets installed in our minds. Ideas that get passed easily from mind to mind regardless of truth content are called memes. I propose the term remes for ideas that get recalled easily in an individual's mind. These get rehearsed more frequently than other ideas regardless of their truth content and so they persist; they achieve this by generating guilt.
Good starting point: http://geneticgenie.org/
They even have an implementation of methylation analysis on this website.
And this excellent talk: https://www.youtube.com/watch?v=yFIa-Mc2KSk
- Two different diet treatment regimes: fasting-like for 3 months followed by Mediterranean diet (FMD), and ketogenic diet (KD).
- Control diet was simply telling people to eat the same way as usual. So there's no real accounting for how well a placebo would do here.
- Measurements included a 54-question survey, adverse event counts, and various lab measurements. These measurements were taken at start, 1-month, 3-month, and 6-month intervals.
The problem then is that what was reported was:
- Results of the first half of the first treatment (fasting-like for 3 months) for a subset of the measurements. What if things only worked in the second half? Or if things worked only for KD? So many implicit comparisons here.
- Comparison against the control group at 3 months, with reported p-values. Even though one of the reported measurements was the overall survey results, all of the values reported are p-values without any mention, that I can find, of multiple hypothesis testing across all measurements (the kind of adjustment sketched below). This comes despite the fact that for all of the mouse results, they explicitly state they used Bonferroni correction.
- Baseline performance which involved no placebo. How many people would have improved if simply given some bullshit diet? Or if they had simply been given a diet that was vegetarian, or something that gave them the impression it was a treatment? Especially in surveys, the placebo effect is a huge thing to look out for. Their harder metrics, like lab results, show a more mixed bag, with WBC dropping for fasting subjects. Sure, it returns once the 3 months are over, but then the supposed quality-of-life scores drop; so you can't have it both ways, though their writing makes it sound that way.
I'm not saying it isn't a great result from a bio standpoint. I'm sure the Cell reviewers found the mouse model results compelling. I just don't see any way to conclude the broad sweeping title of the article from the actual content of the paper, and it's unethical to do so without strong evidence.
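For reference, this is the kind of adjustment I mean; a minimal sketch, with p-values invented purely for illustration:

    # Bonferroni correction over m simultaneously reported measurements.
    def bonferroni(p_values):
        m = len(p_values)
        return [min(1.0, p * m) for p in p_values]

    raw = [0.01, 0.04, 0.20, 0.03]   # four hypothetical endpoints
    print(bonferroni(raw))            # [0.04, 0.16, 0.8, 0.12]
    # A raw p = 0.04 that looked "significant" no longer clears 0.05 once
    # you account for having tested four endpoints at once.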
"All disease begins in the gut," as someone wise has said in the past.
Incidentally, the antidote to the Pauli Effect is to tape a raw sausage to your circuit. Everyone knows that your circuit always works when you put your finger on it, and the sausage emulates your finger. (There is actually some truth to this joke, as the sausage/finger provides a little parasitic capacitance, which can make an unstable RF circuit become stable.)
If we have a guy like that again to suspect of such an effect, then we could try isolating parameters to see whether it was in fact just a bias, or whether it could lead to deeper issues. And interference from quantum biology might not be completely out of the question: https://en.m.wikipedia.org/wiki/Quantum_biology
"When an angry Wien asked how a storage battery works, the candidate was still lost. Wien saw no reason to pass the young man, no matter how brilliant he was in other fields"
edit: Plus, I'm not too fond of Waldorf's philosophy, and even less of Anthroposophy.
Come to think of it, this was very well written - it had a nice flow to it, building up to some profound observation, and then starting over with some different viewpoint. My only complaint is over those weird "repeat myself in giant text surrounded by quotation marks" things scattered throughout the article, but that seems to have become an acceptable/recommended practice?... (Really, when did this become a thing? I'm really curious about the history of it, but I don't know which term to search for.)
I will say that I used to read only nonfiction texts because I enjoy learning about history and the sciences, and these subjects seemed more important than what can be found in fictional stories. But then I began reading more stories, especially slice-of-life type things, and I realized that I was wrong. Reading these stories lends me insight into my own social life - how to be a better friend, etc - and really makes me contemplate which values I want to live by and how I can uphold those in my day to day life.
I liken this to the distinction between classroom schooling and life learning. There's a decent-sized class of subjects that are more effectively taught through experience and self-discovery than via instruction. Interestingly, this class of subjects seems to be the most foundational, as they tend to lend insights into things like what makes an individual feel fulfilled, whereas the subjects taught in school are usually more along the lines of tools (that could potentially be applied to the former). But what use is it to learn a tool if you have no sense of what to apply it to? Engagement increases when a student is seeking knowledge of their own accord, usually to satisfy some goal, curiosity or creative drive - none of which are likely to be conceived within a classroom. Certainly some balance is needed.
(Previously linked and discussed here: https://news.ycombinator.com/item?id=8486440)
Most of the "wildness," or more to the point, "hatefulness" clearly came from the parents. The racism, the homophobia, the Disneyesque self-absorption, the intolerance for anything outside of the present iteration of pop culture and regional sport -- all seeded, fostered, and fed by the family. The kids were barely old enough to even understand any of this stuff, let alone hate someone for it, yet they did!
I doubt that the Thoreau experience is going to have much effect on damage inflicted outside of the classroom, and this is a big thing. It's why we have "good" school districts and "bad" school districts within the same county -- expenditure per pupil is the same, buildings are the same, teachers are mostly the same, but the upbringing is not.
Can education be improved? Certainly! However, mass home schooling is not the answer if you want to keep any semblance to current society. Most people have to work to make enough $s, for instance.
1. In dynamic languages, simple type mismatches, wrong variable names etc. are now caught in "top level system level test". Yes these are bugs that should have been caught by a compiler had we had one.
2. There's no documentation as to how something should work, or what functionality a module is trying to express.
3. No one dares to refactor anything ==> code rots ==> maintenance hell.
4. Bugs are caught by costly human beings (often used to execute "system level tests") instead of pieces of code.
5. When something does break in those "top level system tests", no one has a clue where to look, as all the building blocks of our system are now considered equally broken.
6. It's scary to reuse existing modules, as no one knows if they work or not, outside the scope of the specific system with which they were previously tested. Hence re-invention, code duplication, and yet another maintenance hell.
Did I fail to mention something?
UT cannot assert the correctness of your code. But it will constructively assert its incorrectness.
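To make point 1 concrete, a toy sketch (the names are hypothetical): the kind of slip a compiler would reject in a typed language, caught here by a one-assertion unit test rather than by a failing system-level run three weeks later.

    import unittest

    def total_price(items):
        # In a dynamic language, a typo like item["prize"] or a string
        # quantity only blows up when this line actually executes.
        return sum(item["price"] * item["qty"] for item in items)

    class TotalPriceTest(unittest.TestCase):
        def test_two_items(self):
            items = [{"price": 2.0, "qty": 3}, {"price": 1.5, "qty": 2}]
            self.assertEqual(total_price(items), 9.0)

    if __name__ == "__main__":
        unittest.main()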
The initial people who came up with the idea thought about writing down the execution of a use case, or a small part of one, as a test. Then they ran their code against it while developing it. That gave them insight into the use case as well as the API and the implementation. This insight could then be used to improve the tests, API and implementation.
But most professionals aren't about making quality. They are about paying their rent. So when they started to learn unit tests, they just wrote their code as always, and then tried to write tests, no matter how weird or unreasonable, to increase the line coverage of their test suite. The proudest result for them is not to have a much more elegant implementation, but to find the weird test logic that moved them from 90% coverage to 91%.
I believe that's how you get a lot of clutter in your unit tests. However, what is described in the document is sometimes an example of people really trying, who are just early in their development. Of course, when you learn to do something by a new method you will first do crappy, inefficient stuff. The question is how much you listen to feedback. If the team that broke their logic to get higher coverage learned that this was bad, then they probably adapted after some time, and then they did exactly what unit tests are there for.
Writing code is expensive. If you have more test code than real code it means you value correctness over features. If I can skip unit tests entirely and have a 95% functioning system, I'm not sure that 5% is worth an extra 500% or so lines of code needed in unit tests for 100% code coverage.
Unit tests are not free, as they are also code; that much is obvious. Coplien, however, also delves into less obvious aspects of the impact of unit tests on design, as well as the organizational aspects. Ultimately, coding patterns are going to reflect the incentives that govern the system.
Software development is a lot about trade-offs, and there is plenty to be learned here about how to do it. An addendum by him can be found here: http://rbcs-us.com/documents/Segue.pdf but the meat is in the 2014 article.
For example, I'm working on an internal project that creates VMs with some provider (be it VirtualBox, AWS, etc.) and then deploys a user-defined set of Docker containers to them. I've found that I don't have bugs in situations I would typically test using mocking/stubbing/etc. in traditional unit tests. I usually need to have the real AWS service with the Docker service running to get any value out of the test. And at that point it's more work to mock anything else than it is to just start up the app embedded and do functional testing that way.
I'm becoming more of a fan of verifying my code with some good functional tests in areas that feel like high risk and then some contract testing for APIs other apps consume. Then if I find myself breaking areas or manually testing areas often I fill those in with automated tests.
Does it make more sense for a human to do all aspects of the testing by hand? Of course not. Nobody has the budget for that. It's much better to automate as much testing as possible so testers can focus on higher-level tasks, like the risk assessment involved in marking a build as releasable.
Then, unit testing encourages people to construct their software for verification. This software construction paradigm in itself is enough of a benefit even if unit tests are absent.
Construction for verification diminishes coupling, and encourages developers to separate deterministic logic from logic depending on unreliable processes that require error handling. Doing this frequently trains you to become a better developer.
Unreliable processes can be mocked and error handling can be tested in a deterministic way.
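A minimal sketch of what that separation looks like in practice (the names are hypothetical, not from any particular codebase): the unreliable fetch is injected as a parameter, so the deterministic error-handling path can be exercised without a network.

    def load_profile(user_id, fetch):
        try:
            return {"id": user_id, "data": fetch(user_id)}
        except ConnectionError:
            return {"id": user_id, "data": None, "degraded": True}

    def test_degrades_when_fetch_fails():
        def failing_fetch(_):
            raise ConnectionError("backend down")
        assert load_profile(42, failing_fetch) == {"id": 42, "data": None, "degraded": True}

    def test_happy_path():
        assert load_profile(42, lambda uid: "ok")["data"] == "ok"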
Testing spoils the fun, as now I need to write another piece of code for each single thing that my original piece of code is doing.
I am no longer a wizard casting fireball into a room. I'm also the guy that has to go over the corpses and poke each one with a stick strong enough and for long enough to check if they are absolutely totally dead.
Which may very well be true! But I am amazed at the conclusion: That because tests are badly written, writing tests is a bad thing. No! Any code can be badly written, it doesn't mean that writing code is a bad thing. Tests, like any other piece of code, also need to be designed and implemented well. And this is something you need to learn and get experience with.
As to whether well-written unit tests are worth it, I cannot imagine how someone could efficiently maintain a codebase of any size without unit tests. Every little code change is a candidate to break the whole system without them, especially in dynamic languages.
Unit tests are not albatrosses around the neck of your code; they are proof that the work that you just did is correct and you can move on. After that they become proof that any refactor of your code was correct, or, if the test fails and doesn't make sense, that the expectations of your test were incorrect. When you go to connect things up after that, and they don't work, you can look at the tests to verify that at least the units of code are working properly.
I am no TDD fan, but I do believe that writing your code in a way that makes it easy to test generally also improves the API and design of the entire system. If it's unit testable, then it has decent separation of concerns; if not, then there may be something wrong (and yes, this applies to all situations). I use this methodology for client/server interactions as well, where I can run the client code in one thread and the server in another, with no sockets, to simulate their functioning together (thus abstracting out an entire area of potential fault that can be tested in isolation from network issues).
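Roughly what that looks like, as a sketch (the transport and handlers here are made up, not my real code): client and server run in threads and talk over in-memory queues, so the interaction logic is exercised without any sockets.

    import threading, queue

    def server(requests, responses):
        while True:
            msg = requests.get()
            if msg is None:              # shutdown signal
                break
            responses.put(msg.upper())   # stand-in for real request handling

    def client(requests, responses, out):
        requests.put("ping")
        out.append(responses.get())
        requests.put(None)

    requests, responses, out = queue.Queue(), queue.Queue(), []
    t = threading.Thread(target=server, args=(requests, responses))
    t.start()
    client(requests, responses, out)
    t.join()
    assert out == ["PING"]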
The article/paper raises good points about making sure that tests are not being written just for the sake of code coverage, but to say they are useless is just sloppy. Utilize the testing pyramid; if you adhere properly to it, everything about your system will be better.
I have a serious question: given that this was written by a consultant, is it possible that tests get in the way of completing a project in a timely manner, thus causing a conflict of interest in terms of testing?
 - http://martinfowler.com/bliki/TestPyramid.html
I became a 'convert' after having to clean up a fairly large mess. Without first writing a bunch of test code there would have been no way whatsoever to refactor the original code. That doesn't mean I'm a religious test writer and that there is 150% test code for each and every small program I write. But unit testing, when done properly, is certainly not wasteful, especially not in dynamic languages and in very low-level functions. The sooner your code breaks after making changes, the quicker you can fix the bug and close the black box again. It's all about mental overhead and trust.
Unit tests are like the guardrails on the highway: they allow you to drive faster, confident that there is another layer that will catch you if something goes wrong, rather than you ending up in the abyss.
I like Haskell because I can skip most of the unit tests. Integration tests are still good, and some unit tests like "confirm that test data are equal under serialization and then deserialization" help with development speed. But I can usually refactor vast swathes of code all I want without having to worry about breaking anything.
If you do write unit tests and your test passes on the first try, make sure you change the output a little bit to ensure it fails. It's more common than you'd think to accidentally not run a test.
Why unit tests are good:
- You get well-tested parts that you can use in your integration tests, so that the integration tests truly catch the problems that couldn't be caught at a lower level. This makes troubleshooting easier.
- Decoupled design - one of the key advantages of TDD
- Rapid feedback. Not all integration tests can be run as quickly as unit tests.
- Easier to set up a specific context for the tests.
There are more details in the blog post I wrote as a response.
It might make sense if you're working for a huge corporation with a LOT at stake. Unit tests then become a form of risk management - they force employees to think REALLY LONG AND HARD about each tiny change that they make. Basically, it's good if the company doesn't trust its employees.
I MUCH prefer integration tests. I find that when you test a whole API/service end-to-end (covering all major use cases), you are more likely to uncover issues that you didn't think about, also, they're much easier to maintain because you don't have to update integration tests every time you rename a method of a class or refactor private parts of your code.
About the argument regarding using unit tests as a form of documentation engine; that makes sense but in this case you should keep your unit tests really lightweight - Only one test per method (no need to test unusual argument permutations) - At that point, I wouldn't even regard them as 'tests' anymore, but more like 'code-validated-documentation'; because their purpose then is not to uncover new issues, but rather to let you know when the documentation has become out of date.
I think if you're a small startup and you have smart people on your team (and they all understand the framework/language really well and they follow the same coding conventions), then you shouldn't even need unit tests or documentation - Devs should be able to read the code and figure it out. Maybe if a particular feature is high-risk, then you can add unit tests for that one, but you shouldn't need 100% unit test coverage for every single class in your app.
They informed me that they had written their tests in such a way that they didn't have to change the tests when the functionality changed.
A lot of people seem to miss much of Kent's subtly but intentionally phrased advice. Unit tests are a liability, so use them responsibly and as little as possible, but not at the expense of removing confidence in your software.
Also, delete tests that aren't doing you any favors.
I have written a very large amount of Java code in my career, but after having spent a lot of (personal) time on a Common Lisp project (web application) I can safely say it's still possible to build modern applications using a bottom-up approach. I recommend people try it, it can be quite refreshing.
Any links related to it will be helpful.
The ideal case is that your codebase is entirely made up of code that never fails and tests that always pass. Obviously, sometimes you are going to write tests that fail, and introduce bugs that cause tests that used to pass to fail. But that's the reason you write those tests: to find those problems.
The author gives the silly example of a method that always sets x to 5, and a test that calls it and makes sure x is now 5. That seems like a bad test, but anyone who's actually done work as a developer understands why it isn't. If you skip the tests that are simple and straightforward and seem like a waste of time, and only write more complicated tests, then you will have a hard time reasoning about what failed when the complicated test fails. Was your x = 5 method faulty? You don't think so, but you don't have proof, since it wasn't tested. Having the test, as silly as it seems, lets you know that method is working.
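Spelled out (a paraphrase, not the author's actual code), the whole thing is a few lines, and that's exactly the point: when a bigger test fails, this one tells you where not to look.

    class Thing:
        def __init__(self):
            self.x = 0

        def set_x(self):
            self.x = 5

    def test_set_x_sets_five():
        t = Thing()
        t.set_x()
        assert t.x == 5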
Anyone who has been on a team that skips easy/simple tests knows what a mistake it is. And if you don't, you will eventually.
The debate about whether UT or system tests or something in the middle is better is missing the point. A test should be understandable at any level. 5+ mocks per test generally doesn't help the next guy understand what you are trying to test.
If you can abstract your system behind an API to drive and test it, you'll have much longer lasting tests that are more business focused and importantly are clearer for the next person to understand.
I can see great value in identifying the slow and rarely failing tests and running them after the quick / more information-producing tests. Is there any CI support for such things? I know TeamCity can run failing tests first...
"... Large functions for which 80% coverage was impossible were broken down into many small functions for which 80% coverage was trivial. This raised the overall corporate measure of maturity of its teams in one year, because you will certainly get what you reward. Of course, this also meant that functions no longer encapsulated algorithms. It was no longer possible to reason about the execution context of a line of code in terms of the lines that precede and follow it in execution,"
Unit tests which break code are stupid. Refactoring is good, but just splitting a large function into smaller pieces does nothing to improve the value of code unless it's done so that there is an understanding of the algorithm available and communicated.
Everything can be abused if not used carefully.
For blackbox mode, I am not that convinced that it is the proper strategy, especially when the product is being built incrementally. The typical example is when an entity's state is being modified in a use case and the function to test the state is not yet developed. I'd prefer in that case to have the test verify directly in the DB that the state has been properly updated.
Sure I believe that there are people who can do this and get the code right by just thinking it through. But I have a question for even these people (and their organizations).
- What happens when these people leave and the new junior dev becomes the maintainer?
- What happens when the code is migrated/reused somewhere else?
- How do you make sure your code works at least the same as it did before after you:
  * modify it
  * update some dependent component somewhere, say some open source library that your code uses internally. These have bugs too, you know...
Simply put, you don't, unless you write the tests. The real value of testing (unit testing, regression testing, system testing) comes once you have that nice test suite and you can automate it and make sure that on every change nothing breaks. This is a beautiful thing to have, because simply no human can understand these software systems fully enough to make sure that some innocent-looking change doesn't break something. These things unfortunately happen.
Sure, testing is hard if you make it so. I said before that I get an average ratio of about 2/5 of actual code to unit-testing code. This doesn't mean "this is so bothersome, let's not do it"; what it should mean is that you design your system with testability in mind. A primary architectural goal in any design should be testability: how easy it is to write unit tests for your classes and methods. The easier it is, the smoother the unit tests are to write and maintain, and the better code quality you will have in the end.
Also note that when you design your software to be testable you get a few other abilities for free such as reusability. One of the most important starting points is to find the right separation of truly orthogonal components and define their interfaces right. Way too often I see code where the engineer didn't understand this and clumped several unrelated concepts together. This really makes the testing hard and painful.
Yes, I've seen thousand line files of boilerplate unittests that don't actually say anything useful about the system. I've also written unit tests that tell me in 2 minutes rather than in 3 weeks that somebody has broken my code.
If your standard for a system of testing is that it guarantees that people can only write good code, you're insane.
It's the most bizarre reasoning I've seen for a while. Well, yes, of course, and my string-trimming function will always have coverage of essentially 0% no matter what I do, since strings can be of any length, and there are way more possible strings than atoms in the universe... we clearly have a different concept of "functionality", though.
"When developers write code they insert about three system-affecting bugs per thousand lines of code. If we randomly seed my clients code base which includes the tests with such bugs, we find that the tests will hold the code to an incorrect result more often than a genuine bug will cause the code to fail! Some people tell me that this doesnt apply to them since they take more care in writing tests than in writing the original code. First, thats just poppycock."
Of course, because in reality it's not about being more careful or attentive. It's about the fact that tests don't (or aren't supposed to) contain any logic! (Meaning conditional statements of any sort, loops, etc.) It's logic that breeds an overwhelming majority of bugs. If your tests contain any, it means they're written badly.
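A hypothetical illustration of the difference: the first test re-implements the behaviour it is checking (a loop and a conditional), so it can share the implementation's bugs; the second is straight-line data and assertions.

    def classify(n):
        return "even" if n % 2 == 0 else "odd"

    def test_with_logic_bad():
        for n in range(10):
            if n % 2 == 0:                 # mirrors the implementation
                assert classify(n) == "even"
            else:
                assert classify(n) == "odd"

    def test_logic_free_good():
        assert classify(2) == "even"
        assert classify(7) == "odd"
        assert classify(0) == "even"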
"even if it were true that the tests were higher quality than the code because of a better process or increased attentiveness, I would advise the team to improve their process so they take the smart pills when they write their code instead of when they write their tests"
No, it's not about "smart pills" (this condescending tone is so annoying); it's about the fact that - unlike with my tests - I can't keep logic out of my production code.
"Most programmers believe that source line coverage, or at least branch coverage, is enough. No. From the perspective of computing theory, worst-case coverage means investigating every possible combination of machine language sequences,ensuring that each instruction is reached..."
...oh God, enough.
I don't know the author, but it's quite clear he's some CS professor rather than a real-life, full-time software dev.
Remember this article? http://blog.triplebyte.com/who-y-combinator-companies-want
https://phaven-prod.s3.amazonaws.com/files/image_part/asset/... - that's who YC startups want. When I saw it for the first time, I wondered why "academic programmers" rated the worst.
I think the corollary is that if you cannot afford to test something you cannot rely on it.
I almost got fired from a teaching position in Sweden because I told my 3rd-year bachelor students that many did not bother to add their names or the assignment number, or to use paragraphs, and in some cases even basic punctuation, on their assignments. And that this was well below the level required to pass secondary school, and that I expected better from them.
This was apparently too confrontational, and a few upset students later I got chewed out and almost fired. Meanwhile, from my point of view, I was just doing my job and already sugarcoating it by Dutch standards.
Having said that, yes, even by Dutch standards I would say that Dijkstra liked to troll people a bit.
PS: I really like the following insight from Dijkstra's review: But whereas machines must be able to execute programs (without understanding them), people must be able to understand them (without executing them).
- Alan Kay
I only skimmed it but it is definitely an interesting read. I didn't interpret their stance as arrogant, though. In my book both are enthusiastic about their subject and think the other party is misguided. Both take some effort not to hurt the other's feelings while still bringing their point across.
> The profound significance of Dekker's solution of 1959, however, was that it showed the role that mathematical proof could play in the process of program design. Now, more than 40 years later, this role is still hardly recognized in the world of discrete designs. If mathematics is allowed to play a role at all, it is in the form of a posteriori verification, i.e. by showing by usually mechanized mathematics that the design meets its specifications; the strong guidance that correctness concerns can provide to the design process is rarely exploited. On the whole it is still "Code first, debug later" instead of "Think first, code later", which is a pity, but Computing Science cannot complain: we are free to speak, we don't have a right to be heard. And in later years Computing Science has spoken: in connection with the calculational derivation of programs and then of mathematical proofs in general the names of R.W. Floyd, C.A.R. Hoare, D. Gries, C.C. Morgan, A.J.M. van Gasteren and W.H.J. Feijen are the first to come to my mind.
Backus as quoted in EWD692 (Dijkstra attacks this):
> One advantage of this algebra over other proof techniques is that the programmer can use his programming language as the language for deriving proofs, rather than having to state proofs in a separate logical system that merely [sic!] talks about his programs.
From the perspective of a PL person like myself, these ideals are very compatible. Without correctness by construction, there is not enough proof reuse, so formality will forever be doomed to niche applications (aka computers embedded in dangerous things). Likewise, after trying out intuitionistic-type-theory-based things (e.g. Agda), separating the program and proof languages just seems clumsy. Overall, separating programming and proof, whether spatially (separate languages) or temporally (correctness proved after the fact), is bad, and the reasons hardly depend on the dimension of separation.
Dear John, ... But when you open your letter with: I am quite sure that you have never read any paper I've sent you before it is my pleasure to inform you that - although quite sure - you were very wrong.
I found internaut's comment to be insightful, well-conceived and on-topic. What's wrong with this comment?
EDIT: Moreover, who downvoted me so quickly for asking this question? It happened almost immediately after I posted. Are there some nasty bots at play here?
My only fully successful prints aside from the calibration were the "hello world" roctopus and a toy car.
I expect (hope) that this will improve with time, but so far it's been a little disappointing.
Finally, being incredibly proud, I handed my dad the tool, and expected success along with much congratulations from my father for solving his problem.
It was about a millimeter too small to fit around the filter. I had failed. My printer had failed. He went and bought a new oil filter wrench at a car parts store.
One of the problems with STLs is that they're unit-independent: they don't care if the part is measured in millimeters or inches. (They also ignore the internal structure of the part, but that's a rant for another time.) And one problem with the 3D printer community is that with all these different printers, there is a wide variety in the tolerances/accuracy of the printed parts. Even with well set-up printers, changing one setting, such as the amount of infill, may change the size of the final part. And even if you have a precisely calibrated printer, the person who made the model you're about to print may not.
In this particular case, I don't believe the model was off; rather, to make the tool incredibly strong, I printed with ABS plastic and included extra support and extra shell layers, which I believe may have produced too much material and effectively over-extruded plastic, making the part just a bit too small. But I took the part and slammed it into the driveway as hard as I could, and the part didn't even think about breaking; it was quite strong, even if it was useless.
I'm a big fan of Bondo, fiberglass/fiberglass resin, and my trusty dremel tool. Recently a plastic part to a dryer cracked, and those approaches didn't seem strong enough and would probably make it not fit any more, so I heated up some little nails on the gas stove and pressed them into the plastic, which melted around them, making a very strong part that was no bigger than the original.
And I say this as someone who has a long, long history with 3d CAD modeling.
It's not been easy, and I laugh at my earlier attempts, but it's been a fun learning experience. The fun thing is that I've had to re-learn a bit of trigonometry because of my choice in CAD tools (openscad).
I have a repository of the household items I've printed: https://github.com/elliotf/reprap-household-misc
I never learned autocad or ME, but I do remember my ME buddies from college taking their required autocad class and taking many hours to design a simple part like a pulley, and then being oh so proud of their creation. But, hey, I am sure it was much more fun than their Fortran class which they also had to take.
My point is it takes special skills, training and a lot of patience to be able to design a part and get all the dimensions right.
It would have been more interesting if he had been able to glue the broken part together, then scan it, and then print a copy of the scanned part. Is that something that is easily doable nowadays?
Any suggestions regarding tools for _capturing_ 3D (or at least 2D) models from photos or videos of existing objects? I imagine if there are multiple reference objects (of known size and shape) visible in the scene together with the object-to-capture, the photogrammetry should be tractable.
A good start would be an app able to produce a 2D CAD file from a prototype shape (e.g. cut from cardboard) photographed on a background of squared paper. Does such a beast perhaps already exist?
However, I do feel he could have achieved the same result in a simpler way.
1. Repair the original part in a basic fashion.
2. Make a silicone cast of the repaired original part. The original will have been injection molded, so try to use its mold separation lines as a guide for where your separation line should be.
3. Split the cast to make a mold.
4. Pour in resin to make a very exact replica. The place where the original was broken will be replaced with functional resin.
5. De-mold and put in place.
This guy makes his own Rubik's-cube-style puzzle via this method and documents it well: https://m.youtube.com/watch?v=i-HXU4cfvdc
I then had a moment of silence for all of the super-glue messes of my childhood that never fixed anything and a touch of envy for my own kids growing up in what to my 10yo self would seem a Star Trek future.
But if you said "We value reliable software over meeting deadlines", then you'd actually have said something.
The power of the Agile Manifesto was that it clearly identified tradeoffs they were willing to make.
While we are currently able to build self-healing systems of a sort, fault-tolerant Erlang systems being a good example, it doesn't seem we can build systems that are truly antifragile on their own; that would require strong AI. Otherwise, human monitoring and intervention are required to improve systems under duress.
This manifesto does include "team" and "organization" to account for that, but a truly autonomous antifragile system is a ways off.
Unfortunately, for me, whenever I see the word "manifesto" in relation to computers, this is the one I immediately remember...
Back in 2014 I wrote a blog post "Antifragile Software Ecosystems" that discusses how IMO antifragility relates to how software is developed.
Quality, Time, Cost
Personally, I think this is actually a useful conceit when considering something like this (which I believe I've actually seen on HN multiple times in the past) where a connection is being made that perhaps one had never considered before.
Tag-along clauses for corporate investors are advisable, to prevent them from blocking acquisitions etc.
Possibly due to the described desire for consensus, I found the organisation to be incredibly bureaucratic with incredibly lengthy processes. Releasing a paper usually involved around a dozen rounds of review with various groups, often arguing for days about linguistic style more than Physics content.
The lack of clear top-down control makes resource allocation very challenging. There were frequent complaints that Higgs analyses had too much manpower while less "sexy" tasks were chronically understaffed.
The lack of clear assignment of responsibilities also leads to lots of nasty internal politics between institutes. Especially in the Higgs analysis, where people were eternally engaged in land grabs so they could claim responsibility for bits of the eventual discovery.
Overall, I enjoyed working there a lot. It is a unique structure and the sense of teamwork and lack of hierarchy is very nice. But this article is a bit of a whitewash. I don't think it should be lauded as some incredible model, it has at least as many problems as any other organisation of its size.
Any alternative management system proposal needs to answer questions like:
1. How do people get hired? Who creates job postings, how are interviews conducted, who handles negotiations and approvals, and how is talent attracted and retained?
2. Are there differentiated performance reviews? If so, who exactly conducts and signs off on these? Is there a curve? Is there an expected distribution? Who approves promotions/pay raises? Who sets up these rules?
3. Are there minimum expectations for performance? Who determines firing, and how?
4. If there is no real manager and everything gets decided by committees, who sets up these committees? How is work assignment done? Who is accountable for tracking progress, success or failure? Who has the final say in ties when conflicts occur?
5. What options do employees have when they want a change? How do transfers happen? Who approves these, and what are the official rules?
As a point of reference, starting salary for a teacher in the Palo Alto Unified School District is $57.5k. Someone who has taught in the district for 30 years with significant post-credential grad school makes double that.
(Of course, because these two sensible policies would not be in favour of landowners. Subsidies demand is.)
Everything I see seems to indicate that the price of housing expands to fill the amount of money available.
Cheap interest rates managed to change the base price of homes such that the mortgage payment is roughly the same. Now it is individuals & condo developers collecting that difference rather than banks.
Or am I totally off base here?
Increasing property taxes shifts the burden of ownership to the people who have money... though I suppose that would get passed through to renters as well.
You'd think that just having applications not use user-provided filenames during the conversion process would prevent much of what's been disclosed lately, but no: the MVG decoder exposes all those features to a user that merely controls the content of the input file.
The mitigations from the last big vulnerability will still work against this, but people who merely updated to an ImageMagick that fixed the curl command escaping would be in trouble.
Is it just me or is this the wrong way to tackle this? The question to me is why the file name is being interpreted in some way in the first place, not why popen is being used on the result of the interpretation.
Also, why are pipes even being allowed in the file name in the first place? (I'm asking about POSIX/*nix here, not about ImageMagick.)
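To illustrate the kind of interpretation I mean, a generic sketch (not ImageMagick's actual code) of the pattern where a leading pipe in a user-supplied name gets treated as a command to run:

    import subprocess

    def open_image(name):
        if name.startswith("|"):
            # the dangerous convenience: the rest of the "file name" is a command
            return subprocess.run(name[1:], shell=True, capture_output=True).stdout
        with open(name, "rb") as f:
            return f.read()

    # open_image("|id")  # user-controlled "file name" becomes code execution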
I'm trying to figure out a way to patch this that doesn't involve recompiling ImageMagick. Like most people I get it from the distro; compiling it is kind of a PITA.