And then you could add scripting...
I think it's really cool, and I hope you guys like it. I would appreciate any kind of feedback!
What is the most advanced application you have created with this? What functionality do you think is still lacking?
If you were to try to create a full-featured app, I imagine you'd find that working in Swift is the better option.
What this reveals to me is that the App Store submission and update process is so time consuming you would rather write your logic in JSON than in native Swift/ObjC.
If Xcode let you instantaneously push app binary updates would this be as useful?
If you had enough JSON-to-native "modules", basically anyone could write a native app in a few hours (since functionality in most apps is pretty much standard stuff)!
Hell, if you pushed this further you could create an app store for code, letting you buy additional modules to plug into your app like lego.
Now if only it were possible to place a generic version of this app on the App Store and allow users to load in the JSON for whatever app they want. Sadly I very much doubt Apple would allow it.
But it can easily lead to the inner-platform effect. Unneeded complexity. And in JSON you can't write JS code (needed for e.g. event handlers).
So I switched to JS. Still a data-driven approach (simple hierarchical objects with data), but with the power of JS (and if you use Babel, you could use JSX for your DSL).
So: I don't think JSON is the best format for what you're trying to achieve. It's just so limited.
Besides, what is the added value of Jasonette? When I look at Jasonette I have the feeling that it just reinvents ReactJS, but in JSON instead of JSX. Not sure there is much benefit in the JSON format alone.
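To make the distinction concrete, here's a minimal sketch (hypothetical shapes, not Jasonette's actual schema): the same button described as pure JSON data versus as a plain JS/TS object that can carry real code.

```typescript
// Hypothetical illustration, not Jasonette's actual schema.

// Pure JSON: behaviour can only be named by a string and resolved elsewhere.
const jsonView = {
  type: "button",
  label: "Save",
  action: "save_form", // just a string; the runtime must map it to real code
};

// Data-driven JS/TS: the same hierarchical shape, but handlers are real functions.
interface ViewNode {
  type: string;
  label?: string;
  onTap?: () => void;
  children?: ViewNode[];
}

const jsView: ViewNode = {
  type: "button",
  label: "Save",
  onTap: () => console.log("saving..."), // impossible to express in JSON itself
};

console.log(jsonView.action, typeof jsView.onTap); // "save_form" "function"
```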
This one is because of this comment: https://news.ycombinator.com/item?id=12872108
That said, if you actually get the standard and look at the section on the intra prediction modes, how to compute them is spelled out in very detailed pseudocode.
10 minutes is an eternity and 5×10^(-22) seconds is closer to what I'd consider 'very unstable'
I can recommend pjsip though, very reliable so long as you read its docs before writing a script to leverage it.
I'm sorry, I've attempted to read it. Now I need a dose of Looney Tunes to wash the feeling away.
It introduced me to the idea that 'is' should be treated very carefully. Any assertion outside of strict formal languages that uses it is a half-truth at best. It also heightens the emotional tone of a discussion. If you say "John is foo" you tend to create the impression that John always will be and always has been foo. Foo-ness is a taint on his soul. Contrast that with reformulations that make it explicit that John's foo-ness is a fleeting association related to his present situation, your current perception of it, and the current socially accepted meaning of foo along with all its implied baggage.
I realise I might be rather off-topic :-)
CETA has ISDS as well, and on that point alone CETA is objectionable. This argument comes off as disingenuous; the similarities between the deals are not imagined. ISDS isn't even the only similarity; CETA also contained objectionable new copyright provisions (though apparently those are mostly gone now), for example.
You should scrutinize your own thoughts and opinions, and those of others, to see whether you're just believing something because it was true in the past.
Lastly, I've found it best not to be so opinionated about everything. Sure, having some opinions is great, but otherwise you develop too many biases. So what if something sucks, use it anyway. You might learn something new, or maybe you can help improve it if it has potential.
I'm finding it helpful to view every signal my body encounters as a chance to choose how to process it, including what I do, taste, or hear.
Since adopting this view, I've effortlessly enjoyed eating foods I've hated my entire life (tomatoes, olives, CILANTRO?!), listening to country music, and doing things like chores that used to bore me to tears.
If anyone sees danger in learning to view the world that way by default, I'd love to hear about it.
Brinkmanship rarely serves to get anything done, and burns bridges when it does actually accomplish something.
I have lost almost every major argument I've had in a corporate environment.
Once someone has reached this point, logic simply cannot reach them. Successfully defeating their arguments will only strengthen their resolve (the backfire effect), because they're being driven by the amygdala, which only understands threat response. They will grab onto any argument, no matter how flimsy, and be completely unaware of how little sense it makes. Any further argument with them will at best do nothing, at worst make you look as much a fool as they are.
The wise man learns to recognize this state and back off.
So that when it's time for Ruby 3, the transition will be pretty painless.
More info: https://wyeworks.com/blog/2015/12/1/immutable-strings-in-rub...
(Frozen string literals allow a string to live in memory only once instead of being reallocated each time, so it's a pretty big memory and CPU optimization.)
(Also, for instance, RuboCop already recommends adding the magic comment to all Ruby files.)
"_Not_only_was_it_already_a_much_improved_agreement_from_the_start_,but it kept being modified from the initial public version of it to the one that was finally sent to national parliaments."
Either the writer of this is an expert on the topic, well-known in the field and the weight of this judgement on its own is a valuable primary source; or, the writer is referring to such an analysis conducted by other experts but has not bothered to include a citation/link; or, the writer has their own critique but instead of presenting _that_ has just stated an opinion which they know to be controversial.
All of the above possibilities contribute substantially to the noise around any discussion.
It's why we like to have e.g. "(2013)" added to anchor texts on HN.
After all, an opinion is just an answer to the question, "What do I think of this?"
So with programming languages, we have to pick a few and become good at them. It's one thing to take another hard look when applying for a new job, for example, but we cannot keep track of all programming languages and their evolutions.
Highly worth a read: http://paulgraham.com/identity.html
And of course, try to have as wide and deep an understanding of the subject as possible before forming a strong publicised opinion in the first place.
If the argument is presented as if something is and will always be a certain way (or even if the argument is presented without admitting that something may change) it can probably lead a lot faster to groups of people assuming the argument will be valid forever.
EDIT: Or can be misinterpreted that someone presenting an argument believes the argument will remain valid forever.
Btw, I never saw the talks the author cites, and have not followed the trade agreements very closely, so I'm only speaking generally here.
As I see it, the issue is less about parroting others' outdated technical opinions; it's about not being vocal enough about the change of heart.
List of some of the things I don't like for which I have to occasionally take another peek to see if I'm finally wrong:
1. (In languages) Garbage collection and the idea of "safe code". I didn't like it then and still don't.
3. (Relational) Data models with compound keys flying around as FKs everywhere.
4. The idea of self service BI (like PowerBI etc in the hands of a business user)
Also TIL that "orthodox" file manager is an actual term, and there's even a huge online book about them. That's funny, we used to call them just "file managers"... ;)
The live example isn't working either.
DeepMind's latest triumph (beating the best human Go players with AlphaGo) is impressive, but Go is a great fit for neural networks as they stand today; it's stateless, so you can fully evaluate your position based on the state of the board at a given turn.
That's not a good fit for most real-world problems, where you have to remember something that happened in the past, e.g. the fog of war in a strategy game.
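A toy sketch of the contrast (not how AlphaGo or any DeepMind agent is actually built): with full observability the policy can be a pure function of the current state, while fog of war forces the agent to carry memory between steps, which is what recurrent architectures do internally.

```typescript
// Toy illustration only; the game logic and names here are made up.

type Action = string;

// Go-like: everything relevant is visible in the current board state.
function actStateless(board: number[][]): Action {
  return board.flat().some(v => v === 0) ? "place_stone" : "pass";
}

// StarCraft-like: only part of the map is visible, so remember what was seen.
interface Memory { lastSeenEnemyBase?: [number, number]; }

function actWithMemory(
  visible: { enemyBaseAt?: [number, number] },
  memory: Memory
): { action: Action; memory: Memory } {
  if (visible.enemyBaseAt) memory = { lastSeenEnemyBase: visible.enemyBaseAt };
  const action: Action = memory.lastSeenEnemyBase ? "attack_remembered_base" : "scout";
  return { action, memory };
}

// The memory has to be threaded through every step.
let mem: Memory = {};
({ memory: mem } = actWithMemory({ enemyBaseAt: [40, 12] }, mem));
console.log(actWithMemory({}, mem).action); // "attack_remembered_base"
```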
This is a big open problem in AI research right now. I saw a post around the time of AlphaGo challenging DeepMind to tackle StarCraft next, so it is very cool that they have gone in this direction.
When Google's AI can beat a human at StarCraft, it's time to be very afraid.
I'm curious if a startup can be built from this.
I'm not sure how familiar people are with StarCraft II, but at the highest levels of the game, where players have mastered the mechanics, it's a mind game fueled by deceit, sophisticated and malleable planning, detecting subtle patterns (having a good gut feeling on what's going on), and, on the pro scene, knowledge of your opponent's style.
Allowing researchers to build AIs that operate on either game state or visual data is a great step, IMO. Being able to limit actions-per-minute is also very wise. The excellent BWAPI library for StarCraft: Brood War that is referenced (https://github.com/bwapi/bwapi) provides game state - and was presumably used by Facebook to do their research earlier this year (http://arxiv.org/abs/1609.02993). To my mind, the significant new challenges here not present in Go are the partial observability of the game and the limited time window in which decisions need to be made. Even at 24 frames per second, the AI would only have about 40 milliseconds to decide what to do in response to each frame. This is more relevant to online learning architectures.
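To make those two constraints concrete, here is a toy harness (made-up names, not BWAPI or the SC2 API) that enforces a per-frame decision budget at 24 fps and an actions-per-minute cap:

```typescript
// Toy sketch only; no real StarCraft API is used here.

const FPS = 24;
const FRAME_BUDGET_MS = 1000 / FPS; // ~41.7 ms to react to each frame
const APM_CAP = 300;                // maximum actions per minute allowed to the bot

let actionsThisMinute = 0;
setInterval(() => { actionsThisMinute = 0; }, 60_000); // reset the APM window

function onFrame(observation: unknown, decide: (obs: unknown) => string | null) {
  const start = Date.now();
  const action = decide(observation); // the bot's (possibly learned) policy
  const elapsed = Date.now() - start;

  if (elapsed > FRAME_BUDGET_MS) {
    console.warn(`decision took ${elapsed} ms, frame budget blown`);
  }
  if (action !== null && actionsThisMinute < APM_CAP) {
    actionsThisMinute++;
    // a real harness would issue the action to the game here
  }
}
```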
The open questions here are how freely this will be available - and in what form. Will I need to buy a special version of the game? Clearly there will be some protection or AI detection - to ensure that competitive human play is not ruined either by human-level bots, if they can truly be developed, or by sub-par bots. StarCraft 2 (presumably the latest expansion, Legacy of the Void, will be used here) does not run on Linux, whereas most GPU-based deep learning toolkits do, so having a bridge between the game and an AI server may be necessary for some researchers.
Besides being great for AI researchers, this is probably good for Blizzard too, since it will bring more interest to the Starcraft series.
2017 can't come soon enough.
Ontañón, Santiago, et al. "A survey of real-time strategy game AI research and competition in StarCraft." IEEE Transactions on Computational Intelligence and AI in Games 5.4 (2013): 293-311.
StarCraft is a really fun game, and I think it's enough to engage kids a little more than something like Minecraft, where there's plenty of room for some cool ML hacking but not enough stimulation from it. Instead of just seeing blocks here or there or whatever, StarCraft has hard goals that will force them to use critical thinking skills, their knowledge of the game, their own personal strategic insights, and the ML skills they accrue.
So exciting! Love the feature layer idea also, well done!
Kudos to both Blizzard and DeepMind. Anticipating a lot of fun with this. StarCraft 2 could indeed become the AI pedagogy standard.
I think the worst possible outcome for society would be if we ended up with capable AI but with the algorithms only accessible to a handful of "AI-as-a-service" companies.
The second concern is understandability of the algorithms: from what I've read, it's already hard to deduce "why" an ANN in a traditional problem behaved like it did. An AI that is capable of coping with dynamic, unpredictable situations like an SC game (with only pixels as input) is impressive but it seems less useful to me if it's not understood how that is done.
I sometimes play strategy games and I always find the AI disappointing. Any game with a great AI would be my favorite for years. Heck, I would even pay a few dozen cents/hour to be able to compete against a proper AI.
Or maybe the answer is never, other companies are supposed to do the hard work? We only play games?
But I've never done it because of the risk of bans. I'm glad that Blizzard has opened it up for people to experiment with this. I wonder how it will interact with any sort of anti-cheat systems in place, etc.
This is awesome. I've only ever reached the Platinum league in Starcraft II (1v1), but I'd almost feel more driven to create bots to (hopefully) surpass that skill level, than actually playing the game.
Why not give it lots of data to solve real problems? Training it on useless games will have no benefit.
I hope some of the advances in SC2 AI can be integrated into the in-game AI. e.g: a trained neural network that plays better than the "hard" AI, but can run on a consumer box and not on a massive cluster.
Because yeah, that's where this is going to be used first and foremost.
Most people are not intelligent enough to understand how to secure their internet banking, and now we're going to bake in hosting TCP-connectable servers?
These security prompts better have some real clear language and require giving permissions every time.
Now I can see some good things for this too, start a flyweb from your desktop and easily transfer some stuff from your phone for instance (something that still sucks in 2016)
I just think that most of its use will be malicious.
To me, the power here is in using the technology to foster local, human-scale interaction.
Intranets are totally underutilized. How many people do you know who can reliably transfer personal files over a local area network? Not nearly as many as those who know how to use google or send an email... that's absurd to me, given how ancient of an application file sharing is.
It's my opinion that the survival of the internet may very well rest on p2p webs like this.
One interesting thing to figure out is the combination of local and global. When I have an iot device and I'm away from home, or someone collaborating with me from a different location, the same app needs to fall back to using standard internet based interfaces. Not sure if that disqualifies it from being a potential use case of this.
Edits: speling correctoins
I understand why they hide the real IP addresses behind UUIDs, but I think there should be an option to also convert it to the real IP/host address. Because often you want to share the address of the embedded device with your coworker, use the address in another tool, and so on.
However, I'm not sold on the idea and state of the webserver-in-the-browser API. It just leaves a lot of questions open: e.g. pages are often reloaded, so how will this impact the experience? Or: HTTP request and response bodies are possibly unlimited streams, but the simplified API does not expose this. What attack vectors are enabled through that, and how will it limit use-cases?
Secondly, the FlyWeb server gives you access to a really flexible API for serving just about any content.
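If I remember Mozilla's announcement correctly, publishing content looks roughly like this; treat it as an illustrative sketch of the experimental API rather than a reference, since the surface may have changed:

```typescript
// Rough sketch of FlyWeb's experimental API as shown in Mozilla's announcement.
// Works only in a FlyWeb-enabled Firefox Nightly page context.
(navigator as any).publishServer('My FlyWeb Page').then((server: any) => {
  server.onfetch = (event: any) => {
    // Serve whatever you like, much as a service worker's fetch handler would.
    event.respondWith(new Response('<h1>Hello from FlyWeb!</h1>', {
      headers: { 'Content-Type': 'text/html' },
    }));
  };
});
```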
It feels like federated content, we just need to question whether it should be locked to the local network.
[edit: ok, so it is cool, but I'm not sure it's secure, and I'm not crazy about web pages from other domains being able to set up local discovery on my network. Seems like a massive security problem. UUIDs sound like obfuscation, not security. ]
[edit: ok, well at least they've started thinking about it: https://wiki.mozilla.org/FlyWeb#Security
Would like to see this fleshed out some more. ]
I think this technology is intriguing and has some real use cases (more peer to peer), but the API seems disorganized. I can't tell if it wants to be another web standard or be something different.
A part of me wants to dislike this and consider it a distasteful competitor to pre-existing technologies that have learned to survive without "the web". Another part realizes that sandboxing these technologies protects and enables the average user in regards to awesome tech. This certainly won't replace torrents, WebRTC or other existing p2p technology. But I certainly think it's a cute way of opening up the field.
The first seems useful. The second seems to need a more compelling use case. Also, opening the browser to incoming connections creates a new attack surface in a very complex program.
Along that track, it would be nice to see native DHT support in the browser, for global server-less discovery.
Unfortunately, just using WebRTC is not a great fit for a DHT, because of connection costs. Also it makes more sense to have DHT persist between app sessions.
I lean towards using bluetooth as a discovery mechanism rather than wifi. Google's "Physical Web" I think does something along these lines, though I am not sure whether or not they are thinking about web servers on these local devices. I think that is a key part of the idea.
Meshnetwork torrent trackers with DHT anyone?
In the desktop FF Nightly the FlyWeb menu must be picked from the customization menu (Menu, Preferences, drag the FlyWeb icon to the toolbar). I think Mozilla forgot to mention this on their page.
Another important bit of information is how to install Nightly alongside the current FF: http://superuser.com/questions/679797/how-to-run-firefox-nig...
My take on this: interesting, especially the server-side part. The server inside the browser, on the other hand, could be at best a way to drain batteries and at worst a security risk because of the increased attack surface. I wonder how locality applies to phones on the network of a mobile operator vs on a smaller WiFi network.
Anyway, if we have to rely on browsers to implement the discovery mechanism I'm afraid that it won't fly (pun intended). I'd be very surprised if Apple, Google and even MS included this in their browsers. I get the feeling that they might want to push their own solutions to sell their own hardware. I hope to be surprised.
Maybe there will be apps autodiscovering those services or other servers acting as bridges to a "normal" DNS based discovery service.
Btw: Mozilla should test their pages a little harder. I had to remove the Roboto font from the CSS to be able to read it. The font was way too thin in all my desktop browsers and FF mobile. Opera mobile was OK, it probably defaulted to Arial.
Apple have hidden it behind flags in Preferences -> Advanced in recent versions, but when enabled, you get a "Bonjour" item in the favourites menu, which will show the internal settings websites of compatible printers etc. that are on the LAN.
That's not all it opens up. "Enabling web pages to host servers"--who thought this was a good idea?
To top it off, later in the page, they tell users how to upgrade Node by running `curl ... | sudo bash -`. Good grief, the anti-patterns!
This FlyWeb site has me seeing red.
Unlike the author, I think I still like computers, but only in their essence. I like programming, the detective game of debugging, learning new paradigms, getting lost in abstraction, the thrill of watching powerful automation doing its thing.
But I don't like what computers and the internet have become. Without constant mindful adjustment, all my devices inevitably become attention grabbing pushers of just-so packaged bits of media. I don't let that happen, but that's clearly their essential inclination. Keeping this at bay feels like swatting away the tentacles of some persistent deep sea creature.
I feel everyone's attention span eroding. I feel people packaging themselves for social media, opening their self-image and self-worth to the masses. I see a flood of undifferentiated information, the spread of hysteria and belligerence, the retreat of quietude, humility, and grace.
This is all downside, but lately I'm losing the upside. While I still love the technology underneath it all, more and more I feel like I'm working in the service of something that's driving humanity collectively insane.
You see the difference is that I was much more patient and tolerant then. Now, thanks to the Internet - I have become very impatient, anxious and my attention span has dropped almost to zero.
I hate these things. The way technology has changed me. This is why I have grown more and more disinterested in technology and all its promises. Even though, if I were being honest, we have never had it better in terms of the range, options and diversity of the field.
I think technology has made me a worse person. More informed but less interested. It's given me more opportunities at a time when I feel most exhausted and apathetic. Perhaps this is normal considering we are going through the "internet" revolution. A lot of changes. Many of which I don't like within myself and society in general.
I don't buy gadgets. I own tablets to watch the odd movie and for device testing, otherwise I would only have one; my phone is a 4-year-old Nexus 4 which I broke the back on and covered in black electrical tape (I could replace it but I don't care enough to do so); I use a 17" Vostro for working the odd time I can't be at my office or at home, and it's dented and has stickers stuck all over the scratches. I'm not even sure I remember what the stickers were for.
I'm just not excited by new hardware like I used to be. I only care when it'll have a demonstrable impact on my enjoyment of programming. Where once I'd have lusted after the latest and greatest, now I couldn't even name the best model of i7 or whatever; I only care about that stuff when I'm building a new desktop.
What does excite me is how technology is having a meaningful impact on people's quality of life.
I think in a way that's just part of getting older (I'm 36).
That, and every time I interact with technology that isn't one of my Linux machines I come away feeling like I should hunt down whoever wrote the software with a bat and some bad intentions. One of the downsides to being a programmer is that the deficiencies of everything are so much more obvious.
Prime example: I bought a LiFX 1000 bulb (WiFi/IoT bulb) to put into a ship's lamp as a Christmas present for the GF. It took me 45 minutes to set the thing up; I followed all the instructions to the letter, and nothing. Then I thought "I wonder if changing the wireless channel might work", and lo and behold, changing from channel 13 to channel 9 made it work.
Nowhere was that documented in the instructions (which I read), and had I not been a techie I'd have never thought to try it. My point being, where once I'd have thought "This is cool", now I just resent the 45 minutes I won't get back.
The few things I find interesting are things I keep to myself because every time I've tried to make a discussion about them, nobody else seems to be interested. Or it may even be met with hostility.
I still enjoy programming, but only when I get to program my own hobby projects and focus on the parts and problems that I find worth solving and doing. I don't enjoy the SW dev work at work: doing stuff that I don't care about (or the world doesn't care about), solving problems like fixing build files, having a shitty tool that crashes, or dealing with all these stupid useless (meta) problems and the general nonsense prevalent in the IT/tech industry. Just as an example of what I mean, the other day I was having a problem with automake (WARNING: 'aclocal-1.14' is missing on your system.) when building protobuf (not going to get into details, it's very obscure). My motivation for this kind of (nearly daily) crap is about absolute zero. I'm sick and tired of it all.
The only reason why I'm still doing this is because I haven't figured out what would be a feasible alternative job for me or which direction to go in.
Overall I feel like this job has changed me as a person as well. I'm extremely cynical these days about anything related to tech/IT. But hey at least I have a great taste for cynical and sarcastic humor now (for example Dilbert)!
Besides the internet, one thing that's changed is that computing has become a much less solitary activity: in the 90s and 2000s we were still seeing the tail end of the microcomputer era which was very much built by individuals hacking away on stuff at home or in tiny businesses -- and when larger businesses hired "microcomputer" (and to some extent PC, and web) people, they still worked in very much the same way.
Today, the IT workplace is all about "teams and practices", and even if you're working on something intensely personal as a side project, there's still a degree of expectation that if you want it to amount to anything you need to get it out there as a collaborative, open source project. Or a company with other people involved.
At least for introverts, computing used to seem like something of a refuge. That's definitely less true today unless you deliberately do something that's totally personal.
I don't like games. I don't like VR. I don't like AR. I don't like television. Also reading HN too much makes me feel empty.
However, I do like smart things that do stuff for me and get out of my way. I really like waking up in a warm bedroom while the rest of the house is allowed to be cold. I like the convenience of telling Alexa, "play something relaxing" when I come home from work. I like having to clean a little less thanks to a Roomba. I like not having to switch off stuff because it's done automatically. I like having an AI schedule my appointments.
Every computer that minimizes my interactions with computers or gives me time, the most precious resource, I like!
But that's why I say I share the feeling: what drew me to computers when I was little was the tinkering with this fascinating machine that did as you told it (so you'd better tell it the right things, or else it would end up in a mess without mercy). It was a time where you felt you could still reach a point where you're actually in control; computers were still simple enough that one person could pretty much understand all of it.
This is no longer the case today. The complexity of the modern IT landscape is just intimidating. You couldn't possibly feel like you could one day be in control or on top of things anymore. Everything's changing, everything's growing at too fast a pace to keep up.
Therefore, if what drew you to computing in the first place was a personal connection and interaction between yourself and the machine, it's no wonder that that magic has gone now.
The secret to human interests is that they have an arc. A beginning, a middle and an end. Are you still doing the same things you were doing when you were ten? Maybe, but maybe not. I'm certainly not. There were no computers when I grew up. Well maybe a few ;)
It's natural to be bummed out when your interests (work interests, love, play, etc) change. It feels weird and uncomfortable, like we are losing something. It feels bad. You wonder if you are in a deeper funk...like real depression. Will it return? Is it a phase? You don't know.
The best way I have found to deal with this is just to watch. Observe. Hmm. I'm really not feeling this today and haven't for a while. That sucks. Don't get too caught up in it. Let the feelings rise and fall. Keep noticing. What is it that I do get turned on by? Well, I'd really like to be reading right now. So make time and do it. Let your urges take you where they will. Trust them. Let them lead you towards something that does it for you. The author seems to have that covered. He (she) is aware of things that are interesting. Keep doing these things. Let the things that interest you reveal themselves. Have faith in this cycle. It does eventually resolve itself.
I realize that this whole deal is tough due to responsibilities. Family, etc. People are counting on you. You have bills to pay. Appointments to keep. Keep them. Stick to the routine while you explore. This is important, because learning about yourself is easier when the external drama levels are low.
You will know if this course works, because you will feel better. If you still have angst and it is getting worse, then you may need to talk to a real person (a whole other kettle of fish).
My advice: listen and watch. Do what you need to while exploring what makes you happy.
There's been one small bright spot - I tried learning Haskell and loved the way functional programming stretched my brain but there's an awful lot to learn to do anything useful. But Elm, wow, do I love Elm. I feel the excitement I felt when I saw Ruby on Rails for the first time ten years ago. It's finding something interesting and useful to build with Elm that I'm struggling with now.
I wonder if part of the general malaise is the message that if you're not building a product that will become a unicorn company, then it's not interesting.
It was great fun before it all got so serious. Very funny and true ;)
Am I just old? I'm in my mid 30s. According to Douglas Adams, that's kind of the age at which new things are just perceived as being against the natural order of things. Kids these days are being raised thinking that talking to Alexa and having it bring back accurate results is completely normal and natural.
Are there young people out there who think modern pocket computing is just plain wrong? Do they have any second thoughts about putting their entire life online under the control of 3rd parties?
A child is intrinsically motivated to play; you lose this as an adult. No biggie, but your shit just needs to get the job done, and the job is not learning as much about the shit as you can. Such is life, you have other things to do now, like raising a kid, and getting enough sleep while doing it.
As with life I learned a lot when young, taking the time to learn the stuff that I still use now. Perhaps computers extend the playing age because they are intellectually satisfying for much longer than other forms of play, but eventually you're done playing.
I know you say you've been watching your hours, but maybe burnout doesn't just come from hours.
For me, the first signal that I need to do something is when I start to feel there's no food that really tastes good anymore, there are no games I like playing, and there's no job or person in the world that could possibly make me happy.
I get that's not the issue of the post, but maybe it's something to think about for all of us?
I can happily spend hours immersed in the past, and when I'm done, returning to modern digital life is somehow refreshing.
See also, the Computer Chronicles YouTube channel: https://www.youtube.com/user/ComputerChroniclesYT
- read your Marcus Aurelius
- listen to some Alan Watts
- you are not alone, or the first person to get existential, many before you did and many after you will. Detach from anything technology from time to time, and spend some serious time reading about who people think they are, and what all this is about.
Heck, there are so many research projects out there completely changing what it means to compute (e.g. Bret Victor), let alone the rediscovery of what the founding scientists (e.g. Turing) had as their original vision (did you know Turing generated music from his computer, decades before the first synthesizer?!). Or Bell Labs, or PARC.
There is so much to know and so very little time to even scratch the surface. Maybe I'll get bored later, but right now there are things to do!
I care about things that remind me why I got into computing in the first place: For the sheer joy of it.
If you listen/read closely to people that work in all sorts of fields, this feeling is quite common.
You made a career out of a hobby you really enjoyed. And after a few years it became your work and you no longer enjoy it. You now find joy in some other activity. That new thing? That's your hobby now.
I got this impression after years of thinking of throwing myself into video-game journalism or bicycle mechanics as a profession (2 of my favorite hobbies). When I started speaking to actual video-game journalists and bicycle mechanics, I immediately noticed that I couldn't find a single one that still enjoyed his respective activity anymore.
I'm not going to try to play "psychology expert" here, but for me the reason seems to be pretty simple: those people could no longer spend their time playing the video games they liked or riding and fixing their own bikes. They now had to play all the games they were "told" to play and on top of it take notes and write meticulously about them; the bicycle guy now had to work on a bunch of strangers' bikes he didn't care about, keep up with a bunch of new bike tech he actually thought was needless bullshit, and sell bullshit Lycra shorts and stuff like that.
To this day (37yo) it's one of the decisions I think I got "the most right" in my life: not turning one of my hobbies into my job. (Curiously, this runs right against the common advice "Take what you are passionate about and make that your life's work.")
There is a generation of people that got into computers because they were a tool for empowerment and creativity. When I was a child, my younger sister would create movies editing frame by frame in MS Paint while I would learn Pascal to make a sequencer to play "melodies" using the PC speaker bell commands. Her friend would learn HTML to create a manually updated blog where she would post fantasy short stories. On the Internet, we all hung around with nicknames in chat rooms and learned to make flashy websites and got through the chain emails from relatives. We needed no Netflix or Facebook to share stuff, we had P2P and email and IRC. Then we learnt about GNU/Linux: the ultimate tool to get control of our machines. It was all organized chaos, instant communication that no one could control, limitless creativity, the ultimate dream of a post-capitalist anarchist society...
At some point, some got to believe that if only these tools would become mainstream, the mainstream would adopt these values. A techno-revolution!
This overestimated the transformative power of technology. What happened was otherwise: technology is now mainstream and has become a tool for social control and the ultimate frontier of consumerism. Tech didn't change society, society changed tech...
I still want to believe in these utopian values. But I understand that it is a long way traveled in little steps whose significance is hard to see while at it. In the meantime it's often tiring and lonely to live in the computing underground. One has to explain to people why you don't have a smartphone (and it gets harder to reach people without having WhatsApp and so on), one has to explain to relatives why you don't want to work for BigTechCorp; while trying to stay "up to date" one has to go through the angry rants of Apple users on HN or the celebration of the new Micro$oft facelift, and the collective systemic submission in the startup world in this new gold rush...
The hardest part for me is to find stuff that I can do well and that I find valuable to the world... and still get paid for it. And I am a Software Engineer, the profession of the future! How can I be so obnoxious as to have plenty of well-paid jobs around me and not be interested in them? This makes me very sad and makes me feel deeply alienated...
You are not angry because of the design of a computer; you are angry at the realization that you are so personally invested in a technology that you have no control of, but that has control over you!
But make no mistake: this is a luxury that you need to be able to afford. If you don't have rich parents, or saved enough money to live without the internet, you must must must find a place in the internet world that you can stay at (e.g. some FOSS 1990s-style mailing list), at least find some way to use social media (GNU social or G+, anyone?) in some reasonable way, and have some kind of internet presence (e.g. a GitHub page and some FOSS projects you supply commits to).
Really, try if you can't afford the luxury to ignore it. Politics always talks about the gap between rich and poor that gets bigger and bigger. But the same is true for the gap between people who take part in the internet and use it to their advantage, and those who ignore it. Both these gaps already overlap to some degree, and that overlap will continue to grow!
It seems like you have only one real hobby, but no one should have just one real hobby.
Welding is really rewarding. You're working with a really dangerous machine to turn 2 pieces of metal into 1.
Working in the world of the 1st-class courier is great too. A lot of time driving or traveling around where you live.
I didn't realize how fun working with motors is until I tried fixing something in my car. I'm trying to get my hands on a motorcycle that's broken so I can rebuild the engine to further learn how they work.
Also, if you're fed up with the internet and still want to communicate with people, become a ham and learn all about RF propagation and other important things. Really fun, one of my favorite hobbies.
My advice to the OP: go work for a company that uses your computer skills to do something good, something meaningful to you. It will change your perspective.
I liked video games. I wanted to make video games. To do that requires programming a computer. Ok, how do you do that? Let's go down that rabbit hole. 30 years later and I'm still going down the rabbit hole. I haven't made a video game yet, only bits and pieces and some mods, but at this point, I don't really like video games much anymore. So now what?
I've been doing a lot of hardware, electronics, arduino and general maker stuff. I still like making stuff, but it doesn't have to all be on the computer, and it doesn't have to be a game. I'm more interested in how a HAM radio transmitter works than the latest js framework these days.
Other advancements, like the automobile, also changed society, but at least once you leave your car and are having a real conversation with someone, your car won't suddenly take your attention away.
But Twitter, Facebook, Netflix, Spotify, Snapchat, or Uber, have nothing to do with tinkering or creating something.
I also don't want to surround myself with the internet of things because I know how insecure and broken everything is. I'd rather be buying appliances that I can leave for my children when I am gone, rather than buy new ones every two years.
I'm still perfectly happy with my 5-year-old MBP. I hope it will last another 5 years - even more with luck.
After work you should do what you want to do. If that includes sports, going out, eating nice food etc., good for him. That is a balanced lifestyle, and worth the effort to make.
I am kind of the opposite though when it comes to computers & IT and wanting to retreat a little. I started my coding career at 43 years old. I have worked in tech all my life, so the industry is nothing new to me, but I was never in engineering / software development. I was more of a Linux / network admin / systems integration engineer or run of the mill network architect (lots of time in powerpoint, visio (yuck!)). I kind of always had a healthy envy of developers, as I knew they were working within the real guts of computers and creating things. I was always the one trying to mop up the mess of a less bright developer who managed to get something dire into production. All this made me even more curious to get into that area myself.
With the advent of cloud, namely OpenStack and all the other devops'y type applications in the ecosystem such as kvm, containers, vagrant, ansible, puppet etc etc, I found my nix skills could be reinvented and started learning Python, brushing up my shell scripting, learning about serialising data, RESTful APIs, messaging, models, views, controllers yada, yada, and then in turn learning lots of new tools including git, gerrit, travis etc.
I am now loving what I do and I am super keen to learn more and more, so I do spend lots of spare time now absorbed in writing code and getting up to speed on different tooling available to developers.
Right now my spare time is spent learning rust as I would really like to get into systems programming and work with the kernel space for networking based apps.
It's weird, in that now is the time when I should be just specialising and not being so absorbed (a lot of senior guys do this at my firm; they are happy just to sit looking at some spreadsheet or project plan until 5pm and then go home), but instead I really want to develop a new career as a programmer over the next 10 - 20 years, and I love the idea of that.
I now have a laptop covered in stickers, have grown a big beard and I go all gooey at the sight of some new snazzy framework. My wife jokes about it being a mid-life crisis.
I don't seem to be slowing down either, but in fact going quicker than ever before.
I am with him on Instagram though, and I have no idea what Alexa is.
Here's what I've learned since:
Part of what you are experiencing is real. This will never leave you and will transform you. It is part of maturation. It is natural to start seeing that what matters in the world are its life, its people, its wonder, and its love, and that you have human failings which over and over again will leave you feeling guilt for not reaching a potential. Or perhaps you will transcend this and just be ok with everything, or to devote your life to doing everything the best way you can, and accepting you will fail along the way in a way that limits self-pity.
Part of what you are experiencing is due to your health and circumstance. This is something you can affect. If you are tired, maybe you need more sleep and exercise. Maybe CPAP or an oral appliance from your dentist could help with sleep apnea. Maybe you shouldn't drink before you go to bed as often. Maybe you could see a recommended psychiatrist and get some medication. Maybe yoga, a martial art, tai-chi, or guided meditation would help. Maybe you should read more.
Computers in their many forms, but particularly mobile computers ("phones") are way too distracting. So is streaming entertainment. Too much of our lives are wasted on them. Go buy a bicycle, or some running/walking clothes and shoes, and get out into nature. Buy a tent, camp stove, ramen, sleeping bag, inflatable mat, and backpack and go camping.
Feel like what you are doing is B.S.? If you're smart, join Geekcorps and travel to another country doing something cool: http://www.iesc.org/geekcorps . Even the Peace Corps has jobs in dev/IT: https://www.peacecorps.gov/returned-volunteers/careers/caree... and https://www.glassdoor.com/Jobs/Peace-Corps-software-engineer... Or if you're an engineer: http://www.ewb-usa.org/
I'm not actively trying to be a luddite or thinking I need to stick it to the man. But I can't shake the feeling that many technologies coming out simply don't care enough about humans to warrant actually being used. That's not to disregard side projects and such. Most of the time the creation out of those projects is out of pure intentions. A lot of those same intentions get thrown out the window when money and company survival are thrown in.
I've had an iPad for 2 years that I haven't used, almost ever (I got it as a present). I don't want a smart fridge. I run without monitoring myself all the time. I don't play on my (otherwise high-end) smartphone, I only use 6-7 apps.
The reason: I realized that this stuff is not _that_ smart yet. When I use its 'smartness', it consumes more time than the non-smart things. We use a simple post-it for grocery lists with my wife because opening Trello is much more complicated than just picking up the pen when I realize that we don't have any more garlic in the fridge.
I still enjoy hacking things for the sake of hacking, but that activity is not 'sold' as something smart that will save me time. It doesn't save any time, just makes me feel good.
The signal-to-noise ratio of the modern internet has changed for the worse, western/global culture has lost its manners, and what signal there is left shows leaders have either lost their culture or their clothes.
None of these global trends have anything to do with you personally... those trends are external!
Thus, even if you look after yourself, if you avoid burnout from today's relentless pace, if your hardware is ready and able to be inspired...
Then, to feel that inspiration again, you must really appreciate and nurture the inspirations you find amongst the noise.
Personally I believe the next frontier is hacking and implementing political/social/power cultures and social mores inspired by Libre Values.
Maybe I learned to deal with my own cynicism, but the turning point was probably when I started looking at my work (and computing) less as a goal/ultimate meaning and more as just another piece in peoples' lives; a way for them to accomplish non-tech goals.
"You know, I see the look on your faces. You're thinking, 'Hey Kenny, you're from America; you probably have a printer. You could have just gone on the internet and printed that bitch.' Yeah, you know what? I could have, 'cept for one fact: I don't own a printer. And, I fucking hate computers. All kinds. I come here today, not just to bash on fucking technology, but to offer you all a proposition. Let's face it, y'all fucking suck."
I find too though that when I am left to pursue my own projects at home, on weekends, some of the magic comes back a bit. Perhaps it is just Corporate America that has sucked the life out of my soul when I am at the workplace.
Fortunately I do enjoy every new day and can still become excited about new technology, which is emerging all the time. And I truly believe that computers and the internet have become much better and ever more interesting. You just have to be very selective in the vastness of things out there.
Instead I started on other hobbies, I repair physical things (mechanical, electrical, electronic), I enjoy photography, I work on my car.
I used to make the joke that if ever computers were no longer a thing for me that maybe I'd move to New Zealand and make violins for a living... that time isn't here yet, but I can feel it.
Sitting at a computer all day actually is quite uninteresting and boring. Realizing that is the light at the end of the tunnel, the big ah-ha moment many programmers have.
It's simply more fun to socialize all day.
Many big projects, say self-driving cars or FB or whatever, don't actually require that many antisocial, introverted engineers; only tens of thousands, so things like that will always get built anyway.
It pays well, but it's not good for your health to sit at a computer all day, nor is it fun to socialize only through a chat screen all day.
Time to take a break.
The meaningless Internet bullshit used to be meaningless, but mean a lot to me; the big news would be a 2% drop in Firefox users or something and everyone would lose their minds. Now, the meaningless Internet bullshit is some site with no vowels and no revenue selling for twenty billion dollars, and it actually means something because twenty billion dollars is a lot of money in the real world; for a sense of perspective, read this  and realize that twenty billion dollars could supply all of those things yearly for _a decade_. And yes, I know that 15 years ago there was a bubble too, but it was a lot smaller. In 1999 there was barely north of ten billion total invested in software by VCs.
I don't know exactly when it was, but at some point I stopped excitedly tracking the latest versions of distros, googling "shareware" and installing whatever I could find, getting wrapped up in flamewars, and formatting my hard drive every week (and it was a hard drive, not an SSD; in 2010, the price of a 120 GB SSD dropped from $420 to $230).
I think that point was systemd. Six years ago, the initial version was released. That was 2010, and things seemed different. No way I would accept a complex, huge init system on MY carefully tuned $distro_of_the_week.
Today? I'm 100% in support of systemd. It makes my life easier. I have zero desire to tweak a complex mess of init scripts. And sure, I run Arch Linux, but that's mostly because it Just Works (tm) and I'm used to Linux. If someone gives me a Windows box, I won't lecture them on how they're contributing to the downfall of humanity, I'll take the damn machine and write the code they're paying me to write. I shudder at the thought of googling "shareware" and just randomly installing programs, and it looks like I'm not the only one; it seems that trend died... yep, around 2010. 
I no longer give friends USB drives of "cool software", and if they gave me one I'd think it's a strange joke. I no longer read stuff like WinSuperSite; I'm sure Paul Thurrott is still churning out the same quality content as always but I have no interest in reading about the latest features of whatever.
 Turns out WinSuperSite is gone. Or, technically it's there but it's not Paul. "SuperSite Windows is the top online destination for business technology professionals who buy, manage and use technology to drive business"... wow, that's depressing. The old URLs even 404. :(
Wait, no. Operating Systems are broken.
Wait, no. It should all just be the Web.
But wait, no .. the Web is broken.
Ah well, I guess it's time for something new. Something, not-broken...
> But Apple's immune system was suppressed. It allowed a disruptor to emerge from within. Apple gave birth to its future by suppressing the reaction to that new seemingly parasitic organism. It took an immense willpower to allow this to happen.
It's strange that they're criticizing Apple for successfully making a product, the iPhone, that can take off a lot of your PC work, browse/emails/basic things.
Clearly Apple is trying to forge a distinctly MacBook Pro path for themselves without diving into phone/tablet territory. Until we have more reviews and impressions from the Touch Bar out in the wild, it's a bit early to write off that effort.
Before disagreeing, ask yourself what every single piece of iPhone software was written on.
Someone has to make the software and you ignore developers at your peril.
These profits are not available, they are MADE. Mostly by Apple.
It's weird to hear people waxing poetic about touch nowadays, because now that we've all been using touch interfaces for several years, we all should be realizing that they aren't necessarily that awesome. We've all by now used a map app that, as nice as the touch interface was, just didn't quite gel in terms of whether we were zooming, panning, twisting, or trying to select a single location. We should all be able to observe that the act of using a touch interface also means that we must block the line-of-sight to that same interface, meaning both that the touch thing we are interacting with can't feed back in the most natural place to do so, and that touch interfaces must be very large to accommodate the lack of precision. We must all have dealt with the accidental dialing brought on by the touch interface that can't distinguish between fingers and buttocks, or the stray emails we deleted in the act of simply picking up our phone, or the accidental order placed when we tried to clean a drop of water or a speck of dust off the screen. And I don't care how good you are with that touch keyboard, you'd be better on something that could have custom-built haptic feedback, and ideally, haptic feedback you can "feel" without triggering inputs, too. Touch screens are still the best solution for mobile, but they are clearly not "all that".
From this point of view, where mobile is the future but touch just happens to be along for that ride, bodging touch on to a laptop isn't that impressive and there's no reason to believe it's the wave of the future. I've actually got "touch" on my laptop, in the form of my trackpad, and for a 2D slab, I get rather a lot of distinct inputs out of it. Remember that everything I can use that for comfortably is one more opportunity that I don't need a "touchbar" for. (That's been my real criticism of the touchbar; it's not necessarily intrinsically a bad idea or fundamentally useless, it's that the list of things for which it is the best solution is rather short. And I'm not saying it's empty... just short. In particular, it's much shorter than the list of things that it could be used for, but for which it's not the best solution.)
Further, despite not particularly wanting it the laptop I'm typing this on has a full touchscreen. Mostly I remember this when I go to clean the screen of some bit of dust and my mouse cursor starts jumping around. I don't need it for much. Even when acquiring a button on the screen, the touchpad is faster than removing my hand from the keyboard and clicking. My touchpad already has several useful gestures, draining away the marginal utility of other touch interfaces. It's not that useful, even though it's sitting right in front of me. Now, it's not a 2-in-1, where I can at least see the use case, but, neither is the new Mac, right? It's just not that useful. Not useless, but not very useful.
It's not touch that's the future, it's mobile. The still-not-yet-mature-but-still-inevitable "mobile phone that docks to a desktop and provides its guts" will be a hybrid device, and the "touchiness" is irrelevant.
(Actually, now that I think of it this way, the Nintendo Switch may be the closest thing to a successful implementation of that I've seen in a while. Perhaps if that succeeds, it'll open the doors to computer versions of that idea.)
But there's some goodness in there. The principle of using diagramming as a descriptive documentation and communication solution is highly worthwhile, but again it should be limited to pieces of the system that need such things. And in addition, the level of detail should be just as much as is sufficient to communicate what's necessary -- don't "prematurely optimize" by trying to document every bit of the system in excruciating detail.
There are also often better, simpler ways to document many aspects of a system; a few boxes and arrows work well for many things. Lightweight versions of the Archimate style work well for describing complete systems. Protocols are well described by a lightweight treatment of sequence diagrams, etc.
They'll often go out of date as quickly as you make them, so keeping them up to date and well versioned turns into a challenge.
Because it's free and provides cross platform compatibility (and the diagrams are supposed to be communication devices), we tend to use yEd for most things.
It is useful to draw ideas as graphics for people whose brains are wired visually. And it can make nice figures for books and articles explaining structures and concepts. But in neither case does the value predominantly depend on the depictions being adherent to a standard, as much as on other qualities, like focusing on the right part of a larger system, or leaving out unimportant detail, etc.
So nonstandard diagrams offer the author or user more creative flexibility, which is often very important.
I do see value in loosely following UML notation, for the obvious reason that one can immediately see if someone tries to show classes, states, requests, systems parts, and so on. That was probably the original goal behind UML all along, even if people lost sight of it during the fad phase.
A bonus comment for the youngin's ... When you hear that some new system will allow "the common man" to write his own software without developers, smile and agree with them because they'll come back when it doesn't go as planned and you can charge a higher rate for the resulting expedited project.
I should also admit that I liked (like?) the idea of writing code using diagrams. In the '80s I wrote a program I called "Flo-Pro" in Turbo C that never quite became self-compiling. It wasn't at all OOPsish or FP. In the '90s I wrote several tools in Prograph (now known as Marten) but was stymied by the fact that I was the only one in the company using the tool. In the early aughts, I tried UML tools that promised to write my code from the diagrams - it worked for very simple code but I never saw round-tripping work.
I love drawings in general - my coworkers joke that it's not a meeting unless I have a dry-erase marker in my hand. But those diagrams are invariably system-level, architectural drawings. As others have noted, I also appreciate ERD as a way to visualize relationships in RDBMS. So as much as I like the idea, development stays in the world of text - I'm not holding my breath for some magic bullet.
I won't comment on the pros and cons of UML. Instead I'll invite you to ask yourself a couple of questions.
1. What other clients do you support who have similar characteristics as this client (and may therefore also benefit from UML support)? If the number is significant in terms of impact to your bottom line versus the time you'd have to spend implementing it, then you should consider it worth your time, and view it as an opportunity to up-sell (if you can) or keep existing customers.
2. Do you intend to attempt to move into supporting large enterprise, and especially government contractors? If so, you might consider UML support just because it is ubiquitous there.
Originally it was mostly because it was the default setting in my Enterprise Architect tool, but it's proven more useful than Archimate (and other notations) because people without architecture knowledge understand it much better.
On the business side it's mainly the system integrations, dependencies and information flows that are of value and you could honestly do them in Word if you wanted. Because it's very easy to build upon it, it's also easy to turn the business specs into something I can hand to an actual developer or use as a common language when talking features and requirements.
I wouldn't use UML if I were doing the systems engineering from requirement specs handed to me, and it is very rare that we use more than 10-25% of its full functionality, but it has enormous value when you are trying to get future system owners to understand their own business processes and what they actually need from their IT.
Of course, other people's experience may differ - but I largely thought it was a big con.
I used Rational Rhapsody for a few years. We used it for use case diagrams, sequence diagrams, class diagrams, object model diagrams, statecharts+code generation.
Many folks scoff at and draw the line at code generation. By default, tools like Rhapsody seek to box you in to a certain way of doing things. It's not difficult at all to customize, but it requires effort to opt out of some defaults. I felt like I experienced significant pros and cons. On one hand it was awkward to use their IDE to edit the code. OTOH it helped encourage a level of organization in the code. Statecharts are very expressive and very clear; I really liked them. There's no limit to the expressiveness of the code you can write. But the vocabulary used to describe the widgets I was working with was new to me, so it took a good deal of time to look up and understand the customizations required.
In the absence of the code generation features of UML, the diagramming features are really great. Developers are too quick to treat it like a religion and (on both sides) become inspired to pray at the altar or preach about the evil that lies within. But really, it's just a glossary of visual representations mapped to software design concepts. That's all it needs to be -- conventions like the ones used in other engineering disciplines' diagrams. Diagrams with "boxes and arrows" are just fine, but there's always the implicit question: "does that rectangle represent the process executing the 'flabtisticator executable' or the 'flabtisticator class'?"
When starting out a project, I tend to lean more towards well labeled conceptual diagrams. I will also use activity, sequence and state diagrams.
While I have often read about people designing class diagrams beforehand and then writing or generating code, I never use class diagrams before code is written. I use class diagrams to document an existing system.
It's a tool, if you are willing to be flexible and realize it for what it is, then it's useful.
I use my own subset/version of UML, which uses a simplified "grammar", and allows you to express basically only the following:
- Class with attributes
- Parent/child relationship
- One-to-one relationship
- One-to-many relationship
- Many-to-many relationship
If I have some other need (rare), then I improvise. Usually I'm the only one who looks at these, but if a client or someone else needs to, the language is simple enough to understand that I can explain it in a few examples/sentences.
For architectural diagrams, I just use basic boxes, arrows, cans, etc. UML also tends to feel complicated to the point of being hard to read.
In both of the above cases, I think my not using UML is because its goals differ from mine. UML seeks to capture how a system comes together as completely and accurately as possible. I tend to think that the code should suffice for that (and if it doesn't, it's time to have a long hard talk about technical debt). I prefer diagrams to just be a gloss that helps to explain how things come together at a high level.
For understanding protocols and suchlike, though, UML sequence diagrams are my go-to. That's a rare spot where I really do want the diagram to capture a whole lot of fine detail, and the UML standard provides a pretty clear, intuitive and uncluttered visual language for the job.
State and sequence diagrams are really cool for discussing dynamic flows and identifying potential logic holes. UML-like diagrams are way better than coming up with your own representation of this every time.
Maybe it's because I work in front-end, which "traditionally" is a bit less strict (JS is dynamically typed etc.), but I also think that typically the codebase changes too rapidly and the fancy graphs can't keep up with it; they get outdated in a few months, and no one bothers to update them or even look at them anymore (they might be useful in the beginning of the project though).
And then there's the domain specific UMLs, such as Operations Management and BPMN, where the diagram can be programmatically "powered up" to analyze operational efficiency. If you work in a hierarchical organization where you need deliverables that filter to other departments, and there is a perceived value, then someone is going to be tasked to make it. But in a flat organization in startup mode, it's a waste of money.
If you're working across organizations, in public/private partnerships; if your government organization needs to be accountable at diverse levels, then UML is a visual language that communicates a lot of information at once--in one artifact. Tax dollars going for new transportation infrastructure in New York City? Maybe there's a need to get diverse groups on board. But if you're going to pave potholes in Levittown, NY--who cares? Get it done; stop wasting money.
And finally, there is the language-cultural dimension. Europe is multilingual, so it's no surprise the Open Education Resources offering UML-like education materials are from European universities, not American universities. That's not our language problem (yet).
If you have a customer asking for UML, you need to understand their problems. Once you do that, then you can decide if the problem vector they present is a profitable sector for your company.
To put all this in other words, UML is a tool and a visual language. Use it or not, it's not going away--ever.
Easy to use, allows you to "code" the UML structure in a simple template language, and the output looks rather nice.
Other than for explaining particular design patterns I don't find class diagrams all that useful, certainly not for giving you a complete picture of a system that consists of more than a handful of classes.
Sequence diagrams, state diagrams, use case diagrams, basically anything that involves or describes activities: I think those are tremendously useful.
I can definitely see an argument for certain types of projects (libraries and frameworks). If you have diagramming capability, and you are in the enterprise Windows market, I think this is a no-brainer. I'd be curious what diagramming support you had if it were not UML....
Having said that, I wouldn't try to implement a full object modelling solution. It's not the kind of thing that help files need. Actor diagrams and sequence diagrams would make more sense to me.
I used to write a lot of UML diagrams in Mechatronics Engineering for a big company, where specs were not supposed to change often. For projects with a long lifetime and a slow change velocity, UML is totally justified.
UML diagrams are not worth it for Software Engineering: code evolves too fast to keep your diagrams up to date. In that case, I replace UML diagrams with simple sketches / mockups and simple tables.
UML diagrams can be very useful to represent a system to someone who is not technical enough to understand code, but can understand the basics of the diagrams.
Personally, though, it's usually more of a waste of my time.
I tried to keep it up to date for the public docs, but that can be an uphill battle.
A picture is sometimes worth a hundred words, and this applies to UML as well.
When I take notes, some concepts are better/faster materialized as relationships between objects, actors, or activities.
Also, reasoning about the topology of a schematic is useful and enlightening when a problem is large.
It's just a tool for modelling. You can pick and choose the diagrams you need, and at the abstraction level you desire.
I do miss Booch's fluffy clouds a bit, though.
Question to HN: What tools do you guys use to draw UML diagrams?
The class diagrams that everyone is really thinking about when they say "UML" are imho kind of useless. They reflect a kind of OO purism, and a taxonomical obsession, that was quite trendy in the late 90s and early 2000s.
But it turns out in most cases looking at a class diagram doesn't really tell you much about what software does or how it works. And in any case I personally find it easier to look at header files or source files to get a picture of how things fit together. Class diagrams don't really help.
Once in a while I'll fire up Visio and sketch out a state machine or sequence diagram, but all I'm really doing is throwing down some bubbles or rectangles, drawing some arrows between them, and tacking on some labels. It's nowhere near as formalized as UML, but it works well enough.
In 2009 a scathing report was released by the National Academy of Sciences that essentially says that blood spatter, handwriting, hair, fingerprints, and bite mark analysis are all junk science. If two "experts" can look at the same evidence and come to entirely different conclusions, how is this science? It's opinion wrapped up as scientific fact. Who knows how many people are innocently convicted. It's terrifying.
An excerpt from Wikipedia about hair analysis:
The outcry from defense attorneys has forced the FBI to open up on disputed hair analysis matches since 2012. The Justice department began an "unprecedented" review of old cases involving hair analysis in July 2013, examining more than 21,000 cases referred to the FBI Lab's hair unit from 1982 through 1999, and including as many as 27 death penalty convictions in which FBI experts may have exaggerated the reliability of hair analysis in their testimony. The review is still in progress, but in 2015, it released findings on 268 trials examined so far in which hair analysis was used. The review concluded that in 257 of these 268 trials, the analysts gave flawed testimony that overstated the accuracy of the findings in favor of the prosecution. About 1200 cases remain to be examined.
She served 6 years before parole, not 25 years behind bars
I'm far more concerned with Cameron Todd Willingham. Governor Perry had this evidence, that much of the state evidence being used was junk science, and did nothing while an innocent man was put to death. Shameful.
Prosecutors like to pick the same people to testify as "experts" and their top qualification is that they have testified before as "experts". I imagine many have optimized putting up an act and throwing around fancy terms to make it seem really precise and scientific. Their future employment depends on that.
Because I'm sure we are using some 'junk science' we just don't understand at the present time.
But good news, while the Great Barrier Reef is now 90% bleached, the Seychelles are forming a new underwater ecosystem!
Is it possible to create technology that can suck out the CO2 from the atmosphere? If so, what would the back of the envelope costs for this be like?
If this is a viable option, why continuously try to have these large scale agreements instead of building and selling technology to counter the CO2 emissions?
It's worse than no agreement because countries can point to it and declare they did something, whereas they really aren't obligating themselves to more than a small tax. China has, what, said their emissions may slow by 2030?
Oracle is a 150 billion dollar company, similar in market cap to Intel. Once you've read histories on these people, it suddenly makes sense that Warren Buffett and Bill Gates hang out, while Steve Jobs and Larry Ellison were BFFs. Like with Jobs and Apple, Ellison and Oracle are aggressively polarizing.
I see a future of declines for Oracle; Salesforce.com is redefining and taking their market (corporate IT budgets). There is simply no way other than lock-in and being dicks that they'll be able to see the profitability they once had, let alone sales growth (unless by acquisition). You can see it already with their patent suits; they're out of ideas. But if you work in the business of software, Oracle's history is worth knowing.
> The pressure was initially determined to ~88 GPa by ruby fluorescence using the scale of Chijioke et al (20); the exciting laser power was limited to a few mW. At higher pressures we measured the IR vibron absorption peaks of hydrogen with a Fourier transform infrared spectrometer with a thermal IR source, using the known pressure dependence of the IR vibron peaks for pressure determination (see SM).
> Photos were taken with a smartphone camera at the ocular of a modified stereo microscope
> Moreover, SMH (solid metal hydrogen) is predicted to be metastable so that it may exist at room temperature when the pressure is released. If so, and superconducting, it could have an important impact on mankind's energy problems and would revolutionize rocketry as a powerful rocket propellant.
> The principal limitation for achieving the required pressures to observe SMH in a DAC (diamond anvil cell) has been failure of the diamonds.
> The sample was cryogenically loaded at 15 K and included a grain of ruby for pressure determination.
> As of the writing of this article we are maintaining the first sample of the first element in the form of solid metallic hydrogen at liquid nitrogen temperature in a cryostat.
> A few months later, Silvera's group squeezed hydrogen hard enough to make it nearly opaque, though not reflective - not quite a metal. "We think we're just below the pressure that you need to make metallic hydrogen," Silvera says. His findings are consistent with Eremets' new phase, but Silvera disputes Eremets' speculations of metallicity. "Every time they see something change they call it metallic," Silvera says. "But they don't really have evidence of metallic hydrogen."
> All this back and forth may seem chaotic, but it's also a sign of a swiftly progressing field, the researchers say. "I think it's very healthy competition," Gregoryanz says. "When you realize that somebody is getting ahead of you, you work hard."
Standard atmospheric pressure ~= 100 kPa
So they increased the pressure in that chamber by roughly 4.95 million times.
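A quick sanity check on that ratio (a rough sketch, assuming the reported pressure of about 495 GPa and standard atmospheric pressure of about 101 kPa):

    \frac{495\ \mathrm{GPa}}{101\ \mathrm{kPa}} = \frac{4.95 \times 10^{11}\ \mathrm{Pa}}{1.01 \times 10^{5}\ \mathrm{Pa}} \approx 4.9 \times 10^{6}

So on the order of five million atmospheres.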
And then they hypothesise that the hydrogen metal could be stable at room temperature.
Oh. My. Days.
I've been using Lambda quite a bit; I think it's SO amazingly useful. Tasks that are highly parallelized and CPU intensive can effectively be infinitely scaled out. I find it weird that their poster child use case is still always a reactive event like watching S3 and formatting images. There are so many use cases for invoking a lambda directly from your code.
Imagine a case where you had to parse a million documents with a relatively expensive computation, let's say 250 ms per document. Maybe you have a solid machine with a few cores that's running your server, but even then you can't have the server CPU locked for so long, so naturally you'd need some sort of worker server set up. With a good machine and multiple cores, maybe you get 8 running at once. With Lambda, you can forego the worker server altogether. Just invoke a million lambdas directly from your application server, completely parallelized.
Theoretically, you've taken something that would take 70 hours and had it run in 250ms without having to set up any additional infrastructure.
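A minimal sketch of that fan-out in Python, assuming boto3 and an already-deployed Lambda; the function name and payload shape here are made up for illustration:

    import json
    import boto3

    FUNCTION_NAME = "parse-document"  # hypothetical name; use your own Lambda's

    client = boto3.client("lambda")

    def fan_out(document_keys):
        """Kick off one asynchronous Lambda invocation per document."""
        for key in document_keys:
            client.invoke(
                FunctionName=FUNCTION_NAME,
                InvocationType="Event",  # async: return immediately, don't wait
                Payload=json.dumps({"document_key": key}),
            )

In practice you would batch these calls and mind the account-level concurrency limit rather than firing a million invokes from one loop, but the shape of the idea really is that simple.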
I've considered the possibility of having students do things on AWS (beyond web dev), including Lambda, and just expensing the costs. It seems feasible to quickly set up every student with controlled access via IAM...but is there a way to set up rate-limiting, ideally through a policy? That is, shut an IAM down if a student accidentally invokes a million processes? Or, for that matter, limiting the storage capacity of a S3 bucket?
Such a model would allow infrastructure developers to abstract away most of the concerns around networking, collisions, security, etc., and let game developers concentrate their efforts on simply making the game.
I currently have a game server cluster written in Golang, where the locations are instantiated with an idempotent request operation. It doesn't matter if a particular location-instance exists at a particular moment. It's sufficient for the "master control" server to only approximately know the loads of the different cluster server processes. My experience leads me to believe that something like AWS Lambda, but optimized for implementing game loops would work well, so long as game developers could get their heads around pure functional programming and implement with soft real-time requirements in mind. (John Carmack already advocates the use of pure functions, and game devs in general already do the latter.)
But what's the solution for structured data? DynamoDB is the obvious main candidate, but provisioned capacity is billed by the hour and high throughput is very expensive, requiring complicated temporary increases and decreases of capacity that are hard to predict.
Is there a good solution for running massively parallel lambdas on structured data?
Is anyone using it in production that can comment?
Disclosure: I work on Google Cloud, but I'm just pointing out a fact ;).
P.S. Quoting the author: "As you can see for these queries, the reference implementation performs reasonably well; it's nowhere near Redshift performance for the same queries, but for the price it really can't be beat today"
It's like saying deathless meat, because someone else killed the animal you are consuming.
I wonder if there are any companies who spot this sort of content with the aim of getting in on tv screens.
1. A large woven cone
2. A smaller woven cone without a tip
The smaller cone is placed in the larger one; shrimp swim into the small cone to explore, but then get caught in the space between the two cones when they try to get out (presumably because it's difficult to find the single entrance).
He mentions that the only skill necessary is basketweaving -- I wonder if it would be possible to carve something similar (two interlocking geometric shapes) or if the trap being woven is essential to its function, for example, for allowing flowing water in to entice shrimp.
One of the parts that stood out for me was
> In practice, a long stretch of creek might have several traps collecting food each day without any effort on the part of the fisherman.
If he were to go whole hog long-term, shrimp traps would free up his time for doing other crafts in ways that spear fishing or actively hunting wouldn't, though I suppose the local yield of shrimp would factor into that (whether he could collect enough calories consistently to fund his other efforts).
Is this common / natural or is there some PR at work?
How do we know this?
Maybe he could get a budget and go to Alabama or somewhere with a wild hog problem and get a legal sign-off on hunting something properly with stone age tools. I mean, that's actually what cavemen were doing most of the time. Fire alone doesn't make a meal.
Remind me why this is on HN?
> Ah snap! There was no error compiling the test kernel or running it, but the numbers don't check out. Please report your browser (+ version) and hardware configuration to us, so we can try to fix this. Deviation was 0.33731377391086426.
I'm on linux 4.7 + chromium 53 with Intel 4000 graphics
Thanks, I could tell when my fan spun up when I loaded the page and my browser lagged.
Chrome on Ubuntu on Intel NUC
> Ah snap! There was no error compiling the test kernel or running it, but the numbers don't check out. Please report your browser (+ version) and hardware configuration to us, so we can try to fix this. Deviation was 0.673000000262012.
Latest Chrome on a macbook air. Looks like at least one other person has had this issue, I'll report my experience as well. https://github.com/turbo/js/issues/1
if (!frameBufferStatus) throw new Error('ERROR: (fatal): ' + frameBufferStatus.message);
Chrome on Windows 10 + AMD R9/280.
Anyone got some example results they can share with us, including hardware/OS/software you're running and what kind of speed up was found?
It's even more hilarious because the reason NZ is a desirable place to live is because there is little money in politics and there are few class distinctions. How will that change when all the billionaires move there and want to get their way (and are used to the world conforming to the bank account)? Just another place for the wealthy elite of the world to slowly ruin, maybe the next step is the moon?
The system is rapidly growing out of control and no one seems to be able to stop or halt the insanity. It's like we're all marching towards oblivion and everyone knows it but no one has the will to turn the ship around because the system incentivizes short term greed.
He and his partner managed to come over here when the pound was worth $3 NZD, rent was cheaper, save well and move back to Auckland when houses were depressed (2009-2010). Since then they rode the wave and now have a beautiful home with a view straight down the Waitemata harbour.
My partner and I? We came here when the pound was at $2.10ish, not so bad I thought, not as good, but y'know, we make good money, we can deal with it. Then BREXIT. BOOM. Now the pound is $1.70, our salaries are, of course, still the same. Also a shithole house in Auckland in a bad suburb now costs at least $700k NZD - the first house that my partner's brother bought cost $650k NZD, in a nice suburb on transport links.
The world is mad.
Judging by all those new houses, money does not buy style, by the way...
As the Arctic continues to melt I expect there to be some nice openings in northern Canada that are even more remote.
The mega rich should buy sailing vessels and live on those and leave the housing prices affordable to the rest of us.
 Or at least not own a house.
It's funny how this article's impression is "prices are returning to normal" when in fact the government's new token rules haven't had a chance to take effect and certainly haven't had time to be analyzed.
Sadly, these new rules are just for show (an election is nearing here) and until effective reforms are put in place the bubble will continue to rise.
Isn't that essentially anywhere on the planet?
Please everyone stay away.
It will probably ruin his day, but the good thing with nuclear fallout is its global nature. https://en.wikipedia.org/wiki/On_the_Beach_%281959_film%29
i guess "unlikely" probably isn't the word i would use in this situation.
If any of you mega-rich are reading this. Please don't fuck it up.
Consider your impact - and I'm not just talking about the environment.
Most unusual. /s
'Suppose you have some strange coin - you've tossed it 10 times, and every time it lands on heads. How would you describe this information to someone? You wouldn't say HHHHHHHHHH. You would just say "10 tosses, all heads" - bam! You've just compressed some data! Easy. I saved you hours of mindfuck lectures.'
This is a really great, simple way to explain what is otherwise a fairly complex concept to the average bear. Great work.
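A minimal sketch of the same idea in code (run-length encoding, about the simplest compression scheme there is):

    def run_length_encode(s):
        """Collapse runs of repeated characters into (char, count) pairs."""
        if not s:
            return []
        runs = []
        current, count = s[0], 1
        for ch in s[1:]:
            if ch == current:
                count += 1
            else:
                runs.append((current, count))
                current, count = ch, 1
        runs.append((current, count))
        return runs

    print(run_length_encode("HHHHHHHHHH"))  # [('H', 10)] -- i.e. "10 tosses, all heads"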
The key idea is to encode differences; even in an I-frame, macroblocks can be encoded as differences from previous macroblocks, and with various filterings applied: https://www.vcodex.com/h264avc-intra-precition/ This reduces the spatial redundancies within a frame, and motion compensation reduces the temporal redundancies between frames.
You can sometimes see this when seeking through video that doesn't contain many I-frames, as all the decoder can do is try to decode and apply differences to the last full frame; if that isn't the actual preceding frame, you will see the blocks move around and change in odd ways to create sometimes rather amusing effects, until it reaches the next I-frame. The first example I found on the Internet shows this clearly, likely resulting from jumping immediately into the middle of a file: http://i.imgur.com/G4tbmTo.png That frame contains only the differences from the previous one.
As someone who has written a JPEG decoder just for fun and learning purposes, I'm probably going to try a video decoder next; although I think starting from something simpler like H.261 and working upwards from there would be much easier than starting immediately with H.264. The principles are not all that different, but the number of modes/configurations the newer standards have --- essentially for the purpose of eliminating more redundancies from the output --- can be overwhelming. H.261 only supports two frame sizes, no B-frames, and no intra-prediction. It's certainly a fascinating area to explore if you're interested in video and compression in general.
For example if you replace H.264 with a much older technology like mpeg-1 (from 1993) every sentence stays correct, except this:
"It is the result of 30+ years of work" :)
> The only thing moving really is the ball. What if you could just have one static image of everything on the background, and then one moving image of just the ball. Wouldn't that save a lot of space? You see where I am going with this? Get it? See where I am going? Motion estimation?
Reusing the background isn't motion compensation -- you get that by encoding the differences between frames so unchanging parts are encoded very efficiently.
Motion compensation is when you have the camera follow the ball and the background moves. Rather than encoding the difference between frames directly, you figure out that most of the frame moved, and you encode the difference from one frame to a shifted version of the blocks from a previous frame.
Motion compensation won't work particularly well for a tennis ball because it's spinning rapidly (so the ball looks distinctly different in consecutive frames) but more importantly because the ball occupies a tiny fraction of the total space so it doesn't help that much.
Motion compensation should work much better for things like moving cars and moving people.
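For anyone curious what "figuring out that most of the frame moved" looks like in code, here is a toy block-matching sketch (numpy, exhaustive search over a small window; real encoders use far smarter search strategies and sub-pixel precision):

    import numpy as np

    def best_motion_vector(prev, cur, y, x, block=16, search=8):
        """Find the (dy, dx) offset into `prev` that best matches the block
        at (y, x) in `cur`, by minimising the sum of absolute differences."""
        target = cur[y:y + block, x:x + block].astype(np.int32)
        best, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                py, px = y + dy, x + dx
                if py < 0 or px < 0 or py + block > prev.shape[0] or px + block > prev.shape[1]:
                    continue
                candidate = prev[py:py + block, px:px + block].astype(np.int32)
                sad = np.abs(target - candidate).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
        return best  # the encoder stores this vector plus a (hopefully small) residual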
This is a great overview and the techniques are similar to those of h264.
I found it invaluable to get up to speed when I had to do some work on the screen content coding extensions of hevc in Argon Streams. They are a set of bit streams to verify hevc and vp9, take a look, it is a very innovative technique:
I wanted to recreate this for the home page of my file manager . The best I could come up with was . This PNG is 900KB in size. The H.264 .mp4 I now have on the home page is only 200 KB in size (though admittedly in worse quality).
It's tough to beat a technology that has seen so much optimization!
Don't know in photoshop, but in Gimp there's a plugin called "wavelet decomposer" that does that.
Sadly, this is what makes video encoders designed for photographic content unsuitable for transferring text or computer graphics. Fine edges, especially red-black contrasts, start to color-bleed due to subsampling.
While a 4:4:4 profile exists a lot of codecs either don't implement it or the software using them does not expose that option. This is especially bad when used for screencasting.
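As a rough illustration of why sharp red-black edges bleed, here is a sketch of a 4:2:0-style chroma round trip (BT.601-ish coefficients; the exact math and filtering differ between codecs):

    import numpy as np

    def chroma_420_roundtrip(rgb):
        """Convert RGB to YCbCr, keep only 1/4 of the chroma samples,
        then reconstruct. Sharp chroma edges come back smeared."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.331 * g + 0.500 * b
        cr =  0.500 * r - 0.419 * g - 0.081 * b
        # Subsample chroma 2x in each direction, then upsample by repetition.
        cb_up = np.repeat(np.repeat(cb[::2, ::2], 2, axis=0), 2, axis=1)[:cb.shape[0], :cb.shape[1]]
        cr_up = np.repeat(np.repeat(cr[::2, ::2], 2, axis=0), 2, axis=1)[:cr.shape[0], :cr.shape[1]]
        r2 = y + 1.402 * cr_up
        g2 = y - 0.344 * cb_up - 0.714 * cr_up
        b2 = y + 1.772 * cb_up
        return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 255).astype(np.uint8)

Feed it a black background with one-pixel-wide red text and the red smears into neighbouring pixels, which is exactly the bleeding described above.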
Another issue is banding, since H.264's Main and High profiles only use 8-bit precision, including for internal processing, and the rounding errors accumulate, resulting in banding artifacts in shallow gradients. The High10 profile solves this, but again, support is lacking.
Ehm, what?! The image on the right looks really bad and the missing holes was the first thing I noticed. No zooming needed.
And that's exactly my problem with the majority of online video (iTunes store, Netflix, HBO etc). Even when it's called "HD", there are compression artefacts and gradient banding everywhere.
I understand there must be compromises due to bandwidth, but I don't agree on how much that compromise currently is.
We can conclude that 64KiB demos are at least 48 times as magical as H.264.
How does DCT work:
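A bare-bones sketch of the 2D DCT-II that JPEG/H.264-style codecs apply to small blocks (the textbook formula with orthonormal scaling; real codecs use fast integer approximations):

    import numpy as np

    def dct2(block):
        """Naive orthonormal 2D DCT-II of a square block. Energy piles up in the
        low-frequency (top-left) coefficients, which is why quantising away the
        high frequencies costs so little visually."""
        n = block.shape[0]
        k = np.arange(n)
        # basis[k, m] = cos(pi * (2m + 1) * k / (2n))
        basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        scale = np.full(n, np.sqrt(2.0 / n))
        scale[0] = np.sqrt(1.0 / n)
        d = scale[:, None] * basis
        return d @ block @ d.T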
The fair coin flip is also an example of a process that cannot be compressed well at all, because (1) the probability of the same event happening repeatedly is not as high as for unfair coins (RLE is minimally effective) and (2) the uniform distribution has maximal entropy, so there is no advantage in using different code lengths to represent the events. (Since the process has a binary outcome, there is also nothing to gain in terms of code lengths for unfair coins.)
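In symbols (standard Shannon entropy, not specific to this article):

    H = -\sum_i p_i \log_2 p_i, \qquad
    H_{\mathrm{fair}} = -(0.5\log_2 0.5 + 0.5\log_2 0.5) = 1\ \mathrm{bit}, \qquad
    H_{p=0.9} \approx 0.47\ \mathrm{bits}

A heavily biased coin's tosses can therefore be coded in well under one bit per toss on average, while a fair coin's cannot be compressed at all.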
I would expect also the edges in the image to become more blurred, as edges correspond to high-frequency content. However, this only seems to be slightly the case in the example images.
So instead of the reported 916KiB we're looking at 584KiB.
This doesn't change the overall point, but details matter.
$ wget https://sidbala.com/content/images/2016/11/FramePNG.png
--2016-11-04 22:08:08--  https://sidbala.com/content/images/2016/11/FramePNG.png
Resolving sidbala.com (sidbala.com)... 188.8.131.52, 184.108.40.206, 2400:cb00:2048:1::6819:1112, ...
Connecting to sidbala.com (sidbala.com)|220.127.116.11|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [image/png]
Saving to: FramePNG.png
FramePNG.png    [ <=> ]  622.34K  --.-KB/s   in 0.05s
2016-11-04 22:08:08 (12.1 MB/s) - FramePNG.png saved

$ pngout FramePNG.png
In:  637273 bytes   FramePNG.png /c2 /f5
Out: 597850 bytes   FramePNG.png /c2 /f5
Chg: -39423 bytes ( 93% of original)
I apologize if this is trivial. What does the 1920 in the above equation represent?
BPG is an open source image format that uses HEVC under the hood (it has both lossy and lossless modes), and is generally better than PNG across the board: http://bellard.org/bpg/
For a runner-up lossless image format unencumbered by H265 patents (completely libre), try http://flif.info/.
A video on youtube led me to Joofa Mac Photoshop FFT/Inverse FFT plugins  which was worth a try. I was unable to register it, as have others. Then I came across ImageJ , which is a really great tool (with FFT/IFFT).
Edit: if anyone checks out ImageJ, there's a bundled app called Fiji  that makes installation easier and has all the plugins.
If anyone has other apps/plugins to consider, please comment.
Anyway super interesting subject.
In other words, why is H.264 in particular magical?
The article discusses lossy compression in broad terms, but have we reaped all the low hanging fruit? Can we expect some sort of saturation just like we have with Moore's law where it gets harder and harder to optimize videos?
Simplistic as it is, it touches on all the main differences. The only problem with H.265 is the higher requirements and time needed for encoding and decoding.
> Even at 2%, you don't notice the difference at this zoom level. 2%!
I'm not supposed to see that major streakiness? The 2% difference is extremely visible, even 11% leaves a noticeably bad pattern on the keys (though I'd probably be okay with it in a moving video); only the 30% difference looks acceptable in a still image.
"Okay, but what the freq are freqX and freqY?"
First of all, I think he meant "you would NOT even notice".
Second of all, that's the first thing I noticed. That PNG looks crystal clear. The video looks like overcompressed garbage.
What a terrible introduction to lossy compression. This would mean that if I empty the trash bin on my desktop, it's lossy compression.
The concept of going through all compression ideas that are used is pretty neat though.
Aldous Huxley, Brave New World Revisited (1958)
But I'd much rather have it than any system of forced social roles, where there is one person or small cabal of people who make the decisions and everyone else knows their job is simply to obey.
I'm in no way suggesting that one vote should be valued more than another one, but that people should be doing their homework and researching properly the benefits and consequences of their decisions...
Having a majority of the population voting with the gut can lead us to disastrous results...
Or you can look back on history, and look at countries who have done things differently, and realize it is by far the best system we have had.
If you want to live a full lifetime in peace and security and health. Be born in a democratic and capitalist country somewhere in the past 70 years, and you maximize that chance. Any other time and place and the chance of a good life drops quickly.
To other commenters in this thread describing democracy or capitalism with words like "disastrous" or "complete failure" or "elite rule": please, some perspective.
It is worth reflecting that, although a lot of people complain that people are 'voting for a slogan' or similar, there is evidence (mainly the outrageous success of democracies vs. non-democracies) that the average voter does actually have some idea what is going on.
It is also a subtle and interesting fact that if a large group of people are voting essentially randomly, then they will cancel out. In this way, a person voting with no thought for policy will probably cancel out another person voting with little thought. A 52-48 type margin can mean that 96% of the population had no idea, and the 4% that knew what was going on voted unanimously in favour. The point being that a vote can be sliced up theoretically so that ignorant voters have less influence than might be expected - and again, the practice of democracies suggests this tends to happen more often than intuition suggests.
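The arithmetic behind that example (a sketch, assuming the 96% split exactly evenly and the 4% vote as a bloc):

    0.96 \times 0.5 + 0.04 \times 1 = 0.52 \qquad\text{versus}\qquad 0.96 \times 0.5 + 0.04 \times 0 = 0.48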
Democracies throw out some cruel decisions, but that usually means the interests of the voters are being served rather than democracy failing.
> We have been so accustomed to hear from infancy eulogies of the wisdom which shaped our Constitution, praises of its perfection, hymns to its symmetry and strength, that to doubt its fullness of all excellence has come to sound like sacrilege.
Some things never change.
This is, to me, perhaps one of the greatest strengths of our Republic. Different ideas and values can thrive in different areas at the same time, and we can test and experiment with what's true and right.
It's still better than nondemocratic systems I guess. But that's a terribly low bar to pass. That's not something to be proud of.
Everyone always says that it's the best system of government that has been tried. Well maybe we aren't trying hard enough! There are other systems, and here are a few that are my personal favorites.
My ideal system of government has no politicians. It forms a parliament or congress just like normal, but the representatives are sampled randomly from the population. Ideally they would be filtered for IQ or education, but this is optional. And then they would debate and vote on issues, without having party loyalty, and without having to pander to the general population. It's sort of like direct democracy, but the random sampling lets it scale to much larger populations.
I also really like the model of the supreme court. I have to say that every supreme court decision I've looked into, they seem remarkably rational and competent. They aren't perfect of course, but it seems so much better than congress. Statistics show that even biased judges tend to become much less biased by the time they retire.
I'm not sure how they accomplish this. My guess is the lifetime appointments, and the structure of the court being to debate issues extensively, and for the judges to at least try to weigh them objectively. I would love to try a system of government modelled after something like the Supreme court.
There is futarchy, proposed by Robin Hanson. The idea is to use prediction markets to make predictions about the future, like whether policies will actually work. Then voters vote on values ('I approve of Brexit, conditional on it being predicted to increase median wages') but bet on beliefs.
Another idea I like is the "Ideological Turing Test". In this case representatives can vote on policy just like normal. But they have to pass a test that proves they fully understand the other side's point of view. By writing arguments for the other side of the argument, and blinded reviewers not being able to tell if it's authentic or not. This would be complicated to implement without people gaming the system, but I think it's worth a try.
There is also alternative voting systems. These are just small modifications of regular democracy. They modify the voting system so you can vote for third parties without being punished for splitting the vote.
- is representative democracy a failure (in contrast with direct democracy)
- is a two-party system a failure (in contrast with a multiparty system)
Also "democracy as a failure" is a common trope that is used by those who perceive the election isn't going according to how they planned. "They are not voting the way I like, therefore democracy has failed" or if the election or polls go the expected way then "democracy and clear minds prevailed again!".
One interesting thing I found about the current election is the role the media plays. To control people in a dictatorship is easier: you just make criticism and dissent punishable, nationalize all the media, and it is all simple and easy. In a democracy, controlling is a bit harder, but it is still done over the media using sophisticated and not-so-sophisticated methods. Related to that, my favorite quote so far comes from CNN's Chris Cuomo talking about the emails: "it's illegal to possess these stolen documents. It's different for the media. So everything you learn about this, you're learning from us." It is as if there was a tiny crack in the matrix and the underlying code was exposed for a moment.
Democracy provides useful data: which topics society agrees on, which topics they disagree on. For the ~51/49 (controversial) cases, instead of enlightening ourselves, we just blindly take the majority choice. This is not the way a scientific society should be approaching government.
When everyone is upset and there are many points of view, there is no common way to move forward, but there are many hot heads who are willing to shoot first and ask questions later. Imagine a million hot heads without a common goal but with the willingness to fight, and we can see chaos without results.
People get upset at the "do nothing congress" because they can't get X done, but people aren't willing to admit that the reason is that voters have sent individuals with very diverse ideas to try to get things done. Voters are the ones pushing them not to compromise on any idea, or else be punished by being voted out. I can see a future where one person who can use social media very well can push people to vote in ways that we consider distasteful now. What will happen then? Groups will erupt with opposing views and many will be ready to fight.
We've heard allegations that the voting system is rigged, but that's very unlikely. We have laws and watchdogs that prevent that in any significant way. We don't have the same for social media, but we know that it's possible to manipulate it, even by foreign powers, and that's not illegal; worse yet, it's hard or impossible to prevent. It's hard to even contemplate how that affects a democratic system.
The founding fathers created a representative government because they knew that rule by majority can be as distasteful as government by a monarchy or emperor. They thought a functioning government needs representatives who can sort out what's needed. With everyone having a voice, that's going to get extremely difficult. Social media is about to let the US test out its governmental system; let's hope it can pass the trouble ahead. Can the US stay together as a nation?
We have the technology to make congress obsolete.
Dictatorship is what came most naturally to our animal ancestors, so we tend to rationalize it while condemning the "Zumutung der Komplexität" (the imposition of complexity).
But are they stable? Dictatorships tend to embrace the conservative point of view, which usually produces overpopulation, raging nationalists (the bottled-up anger redirected against the guys next door) and religious fanatics. So if this economic surplus, aka innovation, is not imported (or, even more dangerous, constantly produced), they unravel rather fast, usually by the very forces they called upon to stabilize them.
There is no inbuilt rejuvenation without weapons. So every new app - any new product or production method disturbing the equilibrium - can blow up such a social powder keg.
The separate problem usually associated with democracies is that the fulfillment of all wishes in the West's way of life is rather self-destructive for society. <Anecdata Begin>I know several couples who really looked forward to having grandkids after raising (quite large) families. And in the West this just doesn't happen any more. Nothing more depressing than seeing those baby boomers and their bottles all in tears about "What went wrong with their kids?"<Anecdata End> It doesn't make the situation better that democracies have a tendency to import large swaths of people from dictatorships - mostly for economic and sociological reasons.
I really hate an idea that minorities must live as majority wants. The assumption that majority is always smart and able to make wise decisions is completely wrong.
Unlike others, I actually think that some form of democracy exists even in authoritarian countries. I lived 22 years in Uzbekistan, then 9 years in Russia. I can say for sure that almost every dictator appeals to the masses. Mediocre people (the masses) are always their primary audience. For example, in Russia, tzar Putin perfectly represents the mentality of the majority of people in Russia. People actually love the style in which he speaks and acts. A dictator won't last long if he loses support from the majority. I wrote about this here:
(I was surprised this answer got a lot of upvotes)
Also, I noted that even when the masses don't like their current government's ideology, they jump to another mediocre idea.
For example, the mob in Uzbekistan is attracted to radical Islam as opposition to the current secular dictatorship. So if the current secular regime in Uzbekistan falls, the masses will choose to go back to the 15th century as an alternative. The mob in Uzbekistan certainly won't choose a liberal market economy with a highly developed technology sector attracting international capital. The backward silly ideas of Islamic clerics are much, much, much closer to the mob.
Another example: next after Putinism in the priority queue of ideologies in Russia is communism, and right after communism is national-socialism. So there are a lot of people who oppose Putin because he is not a true communist or does not fully support national-socialism. Again, there is no "liberal market economy with highly developed technology sector attracting international capital" in their queue of ideas.
I can't even imagine the masses go to the streets demanding relaxing regulations for businesses, reducing government spending, attracting international capital.
I guess in the US the Republican Party is relatively popular because of religion. Remove the strong support of religion in the GOP and their popularity would probably drop tenfold.
In Europe, the masses are a bit smarter than in Uzbekistan and Russia, but they are still demanding a nanny state, taking money from high earners.
I spent a lot of time and effort to escape poor government policies supported by the masses. I was born and lived in Uzbekistan, then moved to Russia, then to Sweden, then to the Netherlands. So I'm not afraid to say to an entire society - "fk off, you are all wrong, I'm leaving!". I already did it 3 times!
For example, I left Sweden because of ridiculously high taxes and really big nanny state.
I see decentralized democracy as a solution. For example, I would support the idea of a small federal government and pretty independent states, so that voters can vote for laws only in their own states (with rare but inevitable exceptions). There would be competition between states, and eventually people with certain ideas would concentrate in particular states. Some states would be more socialist, some more capitalist. The head of the federal government should not be a single person but rather a group of persons from each party.
I think Switzerland is closest example to this.
In such a country, you can easily move between states with different laws, taxes, and ideologies. It's far easier than moving between countries if you disagree with the prevailing political sentiment (which is what I'm doing right now).
Fortunately we have options that aren't the failed states of the 20th century: we need a democratic economy and a democratic workforce. Those are the only solutions to this problem, and if you spend enough time looking at the problems and their causes, it becomes readily apparent why this is so.
Democracy is not a failure, if you believe democracy is a failure, you're saying that your ability to control your own life is a failure. The problem is the structure on which modern democracy has been built.
Playlist (2012 lectures from MIT OpenCourseWare):
But right now as a programmer, I am using data-structures more on an as-needed basis. Sometimes it is difficult to find the right data-structures for the job. So then it would be nice to have a resource that provides the functionality of data-structures as a black box. Learning these would make more sense than learning also all of the gory technical details upfront.
(spent 2 min trying to find them)
At the other end of the spectrum - accessible and brief, I find Dasgupta et al.'s Algorithms a refreshingly engaging read.
I've learned so much and am really impressed with their depth of knowledge and how they are able to convey complex ideas in a very easy to understand way, I can't wait to start the next courses.
Anyone would recommend resources for learning fundamental of data structures?
Book, video, or courses are welcome. I don't care the programming languages that are used for implementations. I am OK with C.
Quite the pre-reqs...
I think a lot of programmers have a good understanding of many data structures. But I think hashes and dictionaries are still taken for granted. What they really need is to think of hashes as many magical black boxes, with the hashing function directing each key to its magical bucket. :)
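A toy version of that mental model (nothing like a production hash table, just the bucket-routing idea):

    class MagicBuckets:
        """A handful of 'magical black boxes'; the hash function decides
        which bucket each key goes into."""
        def __init__(self, n_buckets=8):
            self.buckets = [[] for _ in range(n_buckets)]

        def put(self, key, value):
            self.buckets[hash(key) % len(self.buckets)].append((key, value))

        def get(self, key):
            for k, v in self.buckets[hash(key) % len(self.buckets)]:
                if k == key:
                    return v
            raise KeyError(key)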
It was nice knowing y'all.
He also says some of his constituents suspect the sound is being generated on purpose by Greenpeace to scare wildlife away from the rich hunting ground. The organization has a tense past with Inuit stemming from its opposition to the seal hunt in the 1970s and 1980s.
"Military plane investigates mystery 'ping' near IgloolikSearch turns up nothing"
(Sorry, got carried away with the other 'discovery of a broken arrow' thread here yesterday!)
I would love to see what those guidelines entail. For example, how thoroughly do systems need to be retested after major and minor updates? And how are we going to enforce those safety guidelines, given the scandals with emission guidelines (which is a much less complicated territory) we've seen lately (e.g. VW).
P.S.: Not to mention that I appreciate using my thumb for the primary meta key instead of my little finger.
Homebrew has made things even easier and has been adopted as the one right way to install things in a lot of projects and companies. And the fact that it is a rolling release package manager means you can always get the latest and greatest or use homebrew/versions to stick with an LTS version.
I have always found installs of the same Linux distro by different people to be almost incompatible, let alone installs of different distros. Different hardware, different desktop environments, different applications and configurations. On the one hand everyone can have a tailor made experience, but it makes it hard to debug or come up with common configurations and instructions.
Elementary is making some simple and familiar choices that make it easier for everyone to start at the same place. It looks and feels good, but is different enough that I can't just switch without feeling all the rough edges.
If developers are serious about migrating to a linux distro and PC hardware, I think a hybrid rolling release for devtools and versioned releases of the base system might be needed to capture a lot of the success of macOS. I'm not even sure if that's really possible.
that's a bit dangerous; Ctrl-V is normally used to "escape"/make literal the following keypress, or do block select in vim.
The notification-on-long-running-process looks very handy though (I've been using https://gist.github.com/unhammer/01c65597b5e6509b9eea , but of course clicking it doesn't put me back in the right tmux window). And the "energy-sucking apps" indication mentioned in http://blog.elementary.io/post/152626170946/switching-from-m... looks very handy. (I've been considering creating a wrapper for Firefox that Ctrl-Z's it when it's minimized.)
Is anyone running the Elementary DE (or parts of it) on Ubuntu? Does it work OK, or do you have to run the whole OS for it to be worth it?
- a modern full featured client for email, with an efficient and pretty UI, with good shortcut support (at least as good as the Fastmail and Gmail web interfaces)
- a fast and full featured PDF viewer that supports annotations properly -- anything based on Poppler unfortunately does not cut it
- friendly software to create pretty presentations -- Keynote still seems to be king
Development tools are the least of my worries.
[ https://apricityos.com/download ]
Anyway, Geany beats Scratch.
If I wasn't dualbooting I might have spent more than a day to figure out what happened - but I was too lazy and scrapped dualbooting.
When you start sending the perps to jail for violating others' constitutional rights, they'll fall in line pretty quickly.
Until then, these "scandals" are no different than Kim Kardashian's nip-slip or that Paris Hilton video.
We also heard that this reporter was 'encouraged' to write this by someone from a competing accelerator with an obvious agenda. Seems like manufactured controversy otherwise.
All that said, we like hardware and we like Luke, and we're excited he's in the new batch! And we're thankful for the work he did to get our hardware program set up.
Uhh, they clearly didn't do their homework if they thought Altman co-founded YC.
Fitbit's and GoPro's downturns are fueled more by mismanagement and poor decisions than by hardware being intrinsically hard.
The hard part of hardware is building something that works well enough and is in demand. When you have your device on people's wrist or mounted on their helmet, you have overcome the "hardware is hard" part. (for the most part).
After that it's building on your original success, expanding your market and creating value for your customers; all of which are true for any other business, not strictly hardware. For fitbit and GoPro, the latter has proven more difficult.
Doesn't sound very acrimonious.
The development cycles are much less predictable compared to software, you never know how many prototypes you have to burn through until you reach the production stage. Small changes become hugely expensive once you've finalized the designs, and even a minuscule error like a tolerance mismatch can cause hundreds of thousands in damage and ruin a startup almost instantly. I'm always wary of a HW startup, unless the lead engineer is well known and has experience.
(I do not know if they already do this);
Should it not be a sound idea to work with companies that make the THINGS that your users would be on to bundle a device with the things...
Rossignol branded gopro when you buy a pair of skis
[SkateBoard] branded gopro bundle, or a tony hawk edition
Water-proof versions bundled with every [insert water product]
Specialized/Brooks branded versions...
Try to get the companies already selling the transport mechanisms the users of go-pro would be using to boost sales. Lower margins, higher(???) volume?
It's cool that YC experiments with shit like this, but it didn't work out and now he's doing something else.
hardware is hard.
Seems like pretty good evidence YC is now great at helping hardware startups that he's choosing to go through the program himself.