clicks link on Hacker News
"Hmm, not sure what this is about. I'll just hit the 'Start' button"
Hits start button. 'Add a button' dialog pops up
"That seems pretty straightforward. I'll just drag that thing labelled 'Button' from the left pane onto what looks like a smartphone"
Drags button over. Gets 'Congratulations' box
"Wow, this is really intuitive so far. With a little effort, something like this would be a real game changer in the mobile space!"
New dialog pops up: 'Add a randomcat component to your app'
Looks around for anything labelled 'randomcat'
"Huh, that's strange. I wonder if it's labelled something else and I'm just missing it"
Looks for 'random', 'cat', 'Cat.random()', and any other possible combination
Gives up and leaves
As bmoskowitz pointed out, we have some rough words about the project. For this group, I'd particularly point out the roadmap and CONTRIBUTORS.md documents on the GitHub repo:
At the highest level, we're exploring whether it's possible to make a tool that lets non-devs (_not_ you folks!), who currently see their phones as a pure engine of consumption, use them as a place where they can create something fun or useful.
It's very, very early software, and it's public mostly because a) we kinda don't know how to do anything else, and b) we're going to use early and frequent user feedback to correct the aim on the product.
If people are interested, we're more than happy to entertain questions either here or on github, irc, the mailing list, etc.
Oh, and yeah, many of the components are broken, brittle, etc. This is still just a prototype.
That said, we're getting positive reactions from people close to our target audience, such as high school teachers, people teaching others how to make their first app, etc.
I'm sure we have loads of x-browser compatibility bugs, as well as known issues with respect to accessibility, absent localization, no great mechanism for contributing new components, and many more.
Oh, and the gamification bits in particular were really just testing the gamification APIs -- the levels we have in place are deeply unuseful =).
- Opt out of the levels thing on first load; it's not ready.
- Instead, jump straight into the designer and run through these few steps:
- Drag & drop a button, click on it, and notice it sends out messages on the blue channel.
- D&D a counter; notice that it listens to a blue channel, and that the button clicks cause it to increment (that's how we "program" these components).
- D&D another button, make it emit on a different color, and configure the counter to "count down" on that color. That way one button increments, the other decrements.
- D&D the fireworks component, configure its "shoot this many rockets" to a third color (and clear "shoot rocket"), and make the counter emit on that color. Enjoy the fireworks show.
- Other components that work well for understanding things are:
- ratings widget
- input widget connected to a map widget will center the map on a place name (although HN will likely exceed the limits on our usage of the OSM server; need to set up another one =()
- flickr widget can do both topic and location searches
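The color-channel wiring above is essentially publish/subscribe: components emit on a color, and any component listening on that color reacts. A minimal sketch of the idea in plain JavaScript (names are made up for illustration; Appmaker's actual component model lives in the repo):

```javascript
// Toy publish/subscribe broker standing in for Appmaker's color channels.
// All names here are hypothetical, for illustration only.
function makeChannels() {
  const subscribers = {}; // color -> list of callbacks
  return {
    subscribe(color, fn) {
      (subscribers[color] = subscribers[color] || []).push(fn);
    },
    publish(color, msg) {
      (subscribers[color] || []).forEach((fn) => fn(msg));
    },
  };
}

// Recreate the tutorial wiring: one button counts up, another counts down.
const channels = makeChannels();
const counter = { value: 0 };
channels.subscribe("blue", () => { counter.value += 1; }); // increment
channels.subscribe("red", () => { counter.value -= 1; });  // decrement

channels.publish("blue"); // first button clicked
channels.publish("blue");
channels.publish("red");  // second button clicked
console.log(counter.value); // 1
```

This is why "programming" by matching colors works without writing code: the channel color is the only coupling between components.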
Let us know if you have ideas for components we should build (or submit a PR!).
The publish button will create "hosted apps" which can be installed on FirefoxOS, Firefox for Android, and, incidentally, recent builds of Firefox desktop (although the focus for Appmaker is very much mobile apps).
The gamified GUI is a little bit confusing, and you end up with a resulting 'app' (that sometimes takes a few reloads to work, oddly) with an Install button that doesn't seem to do anything yet -- but which I suspect will be a link to save an 'as an app' icon to your mobile phone's home screen.
It's not fully baked at the moment, and it appears that some of the widgets aren't loading, but it's definitely a neat proof of concept that was either leaked early, or is for some reason swamped under load, or something.
This is an open source project, community-built from the beginning. It's pre-alpha. Pre-pre alpha, even. There has been no public launch or fanfare. That's why you'll see no blog posts or explanatory text yet, why many of the components don't work, and why the tutorials aren't built.
But it's cool to see that it's already found its way to HN.
It's meant to be like Hypercard for mobile apps.
If you want to learn more, check out the repo:
or the vision stub / wiki:
I understand that there are some interfaces that aren't meant for mobile. There are some that aren't meant for desktop. But at a minimum, you have to make some small effort to give a message to those visiting from unsupported platforms. It shows that you care.
If it doesn't seem like you care about the experience I'm having with your product then I have no motivation to go back and try it again.
I lost interest fairly quickly, and half of the components seemed broken.
That'd seal the deal for me.
It would be a little nutty to suggest that Golang 1.1 is going to give optimized C code a run for its money. Nobody could seriously be suggesting that.
What is surprising is that the naive expressions of an "interesting" compute-bound program in the two languages are as close as they are.
Most C/C++ code --- the overwhelming majority, in fact --- is not especially performance sensitive. It often happens to have performance and memory footprint demands that exceed the capabilities of naive Python, but that fit squarely into the capabilities of naive C.
The expectation of many C programmers, myself included, is that there'd still be a marked difference between Go and C for this kind of code. But it appears that there may not be.
This doesn't suggest that I'd want to try to fit Golang into a kernel module and write a driver with it, but it does further suggest that maybe I'd be a little silly to write my next "must be faster than Python" program in C.
That was my thought while reading the article; Rust seems like the answer here. I'm coming from the opposite direction from the OP: I'm unwilling to give up the expressiveness of Ruby and friends in order to write micro-optimized C++ code, and I'm hoping Rust will give me the best of both worlds.
I don't see why people feel that C++ needs to be replaced. When I write C++ I have many levels of scope, and while it's dangerous, it's not impossible, and the empowerment makes me feel like a god.
Programming is not incremental. If we spend all day writing a Python back-end and it doesn't deliver the performance numbers, that day was a complete waste. When I think about C++, I know that code written in C++ will take me 100% of the way, even if it takes longer to write.
So, please just use nginx to host some static HTML files for your blog, and fetch your discussion boards asynchronously.
Presumption: Writing Go code is more fun than C++ code.
Demonstration: You can write performant Go code that's not too far from C++ code.
Result: Cool, here's a language more fun than C++ that I can use as a step down the complexity path when I need performance.
Or like tptacek said.
But really, the T-Mobile CEO was helping -- the author wanted out of his contract (well, out of the $200 fee).
(EDIT: The original title was "I emailed the CEO of T-Mobile and he killed my contract, no joke")
I once received an apology letter after helping my mom bring her complaint to the governing board of Deutsche Bank. Because God forbid she was right and that douchebag bank worker wasn't.
Indeed just wow. I wish people would complain more, when there is a need.
What do I mean by "when there is a need"? That's the thing. We're not supposed to be machines (even though a lot of people wish for the opposite). We're supposed to evaluate the choices given to us and act accordingly.
And for all of you running a small business and thinking of the douchebag client you don't want. I apologize, because I know exactly who you're talking about and you're right.
Still, I'd say it's how it's supposed to be. In the Netherlands it'd be illegal to upgrade contracts like this. You can't start charging more without giving the user an option to quit the contract for free (or continue the old contract for the old price). Also after the contract period (one or two years), consumers have a right to cancel the contract each month, also for free.
Instead of "getting frustrated" and taking the issue to the CEO, you could have spent some time and effort to resolve it yourself.
Also, this post doesn't provide enough information about your issue and why you had a misunderstanding. If it did, then it would be more meaningful.
Usually, phone upgrades are offered starting two years into a three-year plan. They don't want to let the contract expire before they try to reel you back in with an upgrade; that'd be incredibly dangerous for retention. They want to offer you the phone while you're still good and legally bound to them, but when you feel like you're almost out.
Does the browser just pay attention to whether each line of JS updates the DOM and queue up its updates until it encounters one that doesn't? Doesn't fit my model for how the JS engine fits into the browser. I guess I don't really know, but I always assumed it just reflowed on a fixed timeout.
Edit: Nevermind, I get it: it's that the intervening statement reads from the DOM, thus triggering a flush. I just missed that in the article.
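The pattern generalizes: any read of layout state (offsetHeight, getComputedStyle, etc.) while styles are dirty forces a synchronous flush, so batching all reads before all writes avoids repeated reflows. A toy model of that rule (the flush counter stands in for the browser's reflow; this is the idea behind read/write-batching libraries like fastdom):

```javascript
// Toy model of forced synchronous layout: a read of layout state while a
// write is pending forces a flush (reflow). Real browsers apply this rule
// to DOM reads like offsetHeight; here 'read'/'write' are abstract ops.
function countFlushes(ops) {
  let flushes = 0;
  let dirty = false; // styles changed since last layout
  for (const op of ops) {
    if (op === "write") {
      dirty = true;                 // invalidate layout
    } else if (op === "read" && dirty) {
      flushes += 1;                 // forced synchronous reflow
      dirty = false;
    }
  }
  return flushes;
}

// Interleaving writes and reads pays for a flush on every read...
console.log(countFlushes(["write", "read", "write", "read"])); // 2
// ...while batching reads before writes forces none mid-frame.
console.log(countFlushes(["read", "read", "write", "write"])); // 0
```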
We had a rule: If you think you need a callLater(), you don't need to use callLater(). If you still need a callLater(), you need to get someone to come and look at your code now to tell you that you don't need to use callLater(). If you both agree that you need to use a callLater(), you've still got to justify it at code review time.
The biggest difference I can see at the moment is that Flex doesn't recompute layout until the end of the frame, even if you do read from it. JS does recompute, so you need to defer for performance rather than (as in Flex) correctness. In either environment, the sane thing to do is to avoid having to defer your calls at all. It may be more work now, but your sanity will thank you later.
As an example of how bad things can get, Adobe's charting components would take more than 13 frames to settle rendering, because of all the deferred processing. This is a good example of how deferring your calls can actually cost you quite a lot of performance.
Good explanation of how this process works on Android: http://www.youtube.com/watch?v=Q8m9sHdyXnE
Anyone want to offer the net's very first ever explicit definition of this term?
> Nimrod is a statically typed, imperative programming language that tries to give the programmer ultimate power without compromises on runtime efficiency. This means it focuses on compile-time mechanisms in all their various forms.
> Beneath a nice infix/indentation based syntax with a powerful (AST based, hygienic) macro system lies a semantic model that supports a soft realtime GC on thread local heaps. Asynchronous message passing is used between threads, so no "stop the world" mechanism is necessary. An unsafe shared memory heap is also provided for the increased efficiency that results from that model.
Here are some examples of web apps which use Jester in production for those interested: Nimrod forum (http://forum.nimrod-code.org) and Nimbuild (http://build.nimrod-code.org). The source code for both is also available on Github: https://github.com/nimrod-code/nimforum and https://github.com/nimrod-code/nimbuild.
It would cover all my current expenses handily. Of course, I'm young and single but by no means frugal. (I find that the little costs involved in worrying about my expenses easily outweigh the money saved.) So this is quite an income.
One of the main questions about something like this is about who would do boring, low-paid work with this sort of basic income. What I would really hope is that people would still do many of those jobs, but for far fewer hours--largely as a way to get money for incidental expenses and luxuries beyond the basic income. One problem I find with most jobs is that it's much easier to get more pay than less hours, even if I really want the latter. There is a large drop-off between full-time and part-time work.
Beyond a certain level, I would value having more free time far more than making more money. Unfortunately, mostly for social reasons, it's hard to express this preference. A basic income could make this much easier to do.
While I suspect this might not pass, I think it would be very valuable for the entire world. One of the unfortunate realities in politics is that it is really hard to run experiments; small countries like Switzerland can act as test subjects for the entire world, or perhaps like tech early adopters for modern policies.
Either way, this passing would be very interesting.
For me, this is not quite as simple. In reality, there are plenty of jobs where I would be happy to work relatively long hours. But this stops being a question of pay, or even "work": after all, I'm happy to spend hours and hours programming for free. Being paid to do something I really like is wonderful, but it really changes the dynamics in ways that probably do not apply to most people.
On the left, you have some unions saying this is going to be counter-productive and that it will reduce the leverage of employees in negotiation ("You've already got 2500, stop complaining"). Some other unions say it's going to give employees more leverage ("If you don't pay me more, I leave").
There are some people (including right-wing "economy-friendly" politicians) who think this is a boost for innovation. By letting people work on what they want, without the risk of becoming homeless if it fails, you'll have more people trying to become independent / create companies.
And finally, you have what is still the majority reaction when told about this idea, which is that this is encouraging laziness.
More info: http://en.wikipedia.org/wiki/Basic_income
The model I imagine would also:
* Be paid to all citizens from age zero, which means it can replace many existing systems, from child support payments to old age pensions.
* Child salaries from ~3 onwards could come in the form of vouchers with limited scope, e.g. accredited education providers, accredited child care services.
And you need to combine it with some further reforms, e.g:
* No minimum wage.
* Pretty much all existing welfare scrapped.
* Reduced work rights (e.g. less onerous unfair dismissal rules)
The underlying goal of such a system would be to dramatically simplify the role of the welfare state, and put the responsibility back on the individual to manage their own welfare.
Switzerland has some of the strictest immigration rules in Europe.
So I'll just leave it there: http://paulgraham.com/inequality.html
The basic income vote followed this process, and though it gathered enough interest to warrant people voting on it, it has little chance of passing.
Interestingly, a similar vote recently passed which limited the income spread in a company to a factor of 12 (i.e., the CEO cannot make more than 12 times the lowest salary in his company), which wasn't expected of Switzerland (a rather liberal and conservative country).
It is like student loans in the USA: everything will rise to the maximum price for which people can obtain money.
Rough translation into English:
Federal People's Initiative 'For an unconditional basic income'
The federal constitution shall be amended as follows:
Art. 110a (new) Unconditional Basic Income
(1) The Confederation introduces an unconditional basic income.
(2) The basic income shall allow the whole population a decent life and participation in public activities.
(3) The law defines funding and amount of the basic income.
In the US, one can look at how the Section 8 housing program serves a similar pressure-relief function in the housing rental market. By giving essentially free rent to those who cannot afford current market-rate rents, it relieves political pressure to reform housing policies that keep rental rates high, while also inflating rents and property values and heavily distorting the rental market. I think one can easily view the Section 8 program more as a welfare program benefiting property owners than lower-class renters.
A basic income would have a similar effect on the general cost of living, inflating values and benefiting the wealthy. Again, like the Section 8 program, this will be a welfare program benefiting the wealthy because this basic wage will simply flow upward and concentrate at the highest economic rungs.
This vote will get something like 80% no votes because people are afraid this will change how people think about work.
The main thing it would do is to remove the welfare trap - whereby you can earn less from starting work. Suddenly, every Franc you earn adds something onto your income. And you get rid of a whole tranche of bureaucracy at the same time.
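A toy calculation of the trap being removed (the 2,500 Fr figure is from the proposal; the franc-for-franc withdrawal is an assumed, simplified means test, not any specific Swiss scheme):

```javascript
// Compare take-home income under a means-tested benefit (withdrawn
// one-for-one as you earn) versus an unconditional basic income.
// Simplified illustration; ignores taxes and real withdrawal schedules.
const BENEFIT = 2500; // Fr per month

function meansTested(earned) {
  // Benefit shrinks by every franc earned, until it is gone.
  return earned + Math.max(0, BENEFIT - earned);
}

function basicIncome(earned) {
  // Unconditional: earnings stack on top.
  return earned + BENEFIT;
}

// Under means testing, earning 1000 Fr leaves you no better off...
console.log(meansTested(0), meansTested(1000)); // 2500 2500
// ...under a basic income, every franc earned adds to your income.
console.log(basicIncome(0), basicIncome(1000)); // 2500 3500
```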
Not to mention that, in combination with the other measures being passed by the Swiss, their country could have some major problems coming up.
Also lots of alternative art shows, post-feminist poetry readings, etc.
Should be fun
I would love to see what that spike would look like.
Fewer workers, higher wages, more money in circulation... what an inflationary mess that would be.
This is a peculiar initiative. Surely, a plot by the commies, or is it not?
What happens if nobody has a job?
OK, that's a little extreme. Let's see, a family of five would get 12500 F per month unconditionally. That's probably a pretty good chunk of money for doing nothing.
I see images of five to ten people living together to collectively earn 25000 F per month.
In the same story they talk about limiting executive pay to 12x the salary of the lowest paid employee. Again, I just don't see it. In a global market I just don't see intelligent and capable people not looking past their borders seeking better compensation for what they have to offer.
How can you build a sustainable and competitive society this way? Again, I'll admit to not being mentally equipped to comprehend how this can work. Perhaps someone can educate me.
This is not a "let's apply this now!" thing, but a petition to study the ways and means of how it could be applied.
Yes, I guess Los Angeles could do something like that, but..
They've even voted on whether to abolish the military.
Those who choose not to work enough might have to face inflationary effects in housing etc. needing to catch up to the median (not average) population income levels.
Either way, if the Swiss get this wrong, their system of voting is flexible enough to allow for a change back.
That seems to be the result of a fairly open asylum policy. Some people find it's too open and complain about that.
With the concept of a basic income like this, I suppose their asylum policy would have to become more restrictive.
The great thing about the Google Reader interface was that you could subscribe to 300 blogs and see which ones had new posts in the left-hand column. You read the interesting ones every few days, and perhaps scroll through the headlines of the boring ones once per month.
This way no post goes missing because it is too old, a frequent poster doesn't take priority over an infrequent but more interesting one, and the user decides how to "filter" content based on every visit.
All I really want is the ability to designate certain follows as "important", which would give me their tweets in something like the format of an RSS reader at the top of the page. Then dump the rest into the current "river of news" style which I can wade through, or not.
Speaking of lists, I disagree with the author's claim that they are hard to set up. Twitter (and many Twitter apps) have made lists quite easy to set up and monitor. Doing this is one way to handle the unfiltered feed problem.
He brings up an interesting point, though, regarding negative behaviors such as being less likely to follow new people. I agree that this can happen, but the flip side is people may be far more selective about who they follow as time goes on. Assuming that such users are regularly unfollowing bad or spammy accounts, their Twitter experience should improve over time.
So for me this works fine, but I know many others who follow for social reasons, not because the person they start following is interesting. And if you do that, yes, indeed your timeline will be overflowing all the time.
I've never not followed anyone for having too many tweets in my timeline already. And if someone starts spamming shit I don't care about (accounts with lots of followers sometimes abuse it for political or other things), I unfollow them without second thought. And I let them know why I unfollowed (like with downvotes here and on stackoverflow, I always try to comment if I do negative actions).
Just a list of people I follow and their most recent tweet, either ordered alphabetically or by date last tweeted. If I hover over one of the people, then I can look at their past X tweets.
On Facebook I can easily set people to three settings which are private to me - read everything they write, read only significant things they write (things which get more than the ordinary likes/replies, or whatever), or ignore them. This is great for me; otherwise my entire Facebook feed would be filled with my 4-5 chattiest friends or current/former coworkers, and I'd have to scroll through their collective 10-12 posts a day to see anyone else's.
Twitter does not have this option where I publicly follow someone, due to them pressuring me to follow them on Twitter, but can actually privately ignore them. Maybe there is an option buried somewhere where I can do this, but it's definitely not obvious like on Facebook. Which just means I use Twitter a lot less, because all it is for me is a constant stream of 4-5 chatty acquaintances with the dozens of others I follow popping in once in a while.
There are ways to deal with this on Twitter, but none of them are that good. I can create another account, but then there are problems from that - accounts with private tweets I no longer have access to or have to ask for two adds from everyone etc. I can create a private list without these people on it, but then I have to go to the trouble of maintaining two lists - the main list and the private list. Facebook just makes this a lot easier.
I think the only way out of this is for Twitter to put out one lens or a series of lenses with which to view your feed. People still want to follow all of their friends as well as celebrities, athletes, bands, news aggregators, etc.; however, they don't need to see all of that crap in one feed. I think it'll be the redditization of Twitter that solves this problem.
Twitter doesn't provide any easy way to manage who you follow while it keeps suggesting people to follow, because promoted accounts would be unfollowed too easily.
I am one of the users on the official popular-users list for the China region, but most of my tweets are in Japanese. This kind of careless mistake would not happen if they did a simple review before selecting me. I've asked them to remove me from the list, but they never reply.
They seem to encourage users to keep following, increasing the quantity, not the quality.
Good work and thanks for sharing!
(also, obligatory "now the NSA can estimate if they can chase you on foot")
Sort of relatedly, I bought the Withings Wifi scale about a year ago, which does the same thing. I really like it. It is easy to go back and look at weight trends without having to obsess too much about the number each day.
The logic used, based on pattern detection, is pretty clever. Reminds me of cellular automata algorithms.
Even if you are not interested in puzzle games, reading the "first steps" part of the site is quite interesting and enlightening.
In the "make a game" part, one thing is not clear enough, though: in order to launch the games, you'll have to hit the "x" key.
I have some puzzle game ideas that I've tried to build several times, but I always get frustrated with how tricky it is to express the rules in code. This seems like such a clever way to do it. Fun!
Nice works btw :)
(And yes, I get the irony that they probably picked this headline out of fear that "not your dad's..." was sexist.)
* I don't participate in the Bitcoin community, so I'm sorry in advance if this is an ignorant question! Congratulations to the authors - I think this is a pretty significant accomplishment regardless.
I salute the amount of effort that's gone into this though.
And just because I know someone is going to ask: Motion prediction in H.264 is based on luma, but there's also a chroma option. I've done some quick searches and found a patent for HEVC that covers using the correlation for motion prediction.
xiphmont_: huh demo4:CfL is on HN already
jobstijl0: I posted it, didn't saw you where still working on it
xiphmont: ah, it wasn't actually due to be up for real until I got back from vacation in two weeks
xiphmont: no harm, just 'agh, nothing is tested yet!'
> And I'll let you in on a dirty little secret: you can even match on interface in your sshd config for things like these
I don't get the secret.
I once worked with an admin who would set up a script to enable SSH passthrough on the firewall (also OpenBSD) at a specific time of day, but never the same time. Once he connected in the allowed window and finished his business, he would reset the timer for another time of day (or perhaps several days to a week later if he was going to be away for a while).
It's a bit like the timed bank vault where even the manager couldn't open it until the timer on the door allowed it.
The whole thing (http://www.fantasticmetropolis.com/i/division/full) seems to be temporarily offline, but here's a pretty good summary: http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumbe...
Lounesto was a controversial and sometimes reviled figure, but I (purely as a non-mathematician and sometime reader of his posts) was very sorry to hear of his death in a swimming incident about 10 years ago now. Nice that his website is still available though.
I tend to agree with his idea that it's valuable to search for counterexamples to newer proofs - a sort of application of Popperian principles in the mathematical domain I suppose. I've no idea on the validity of his claims about specific counterexamples, though.
"That equates to 1 vehicle fire for every 20 million miles driven, compared to 1 fire in over 100 million miles for Tesla. This means you are 5 times more likely to experience a fire in a conventional gasoline car than a Tesla!"
Americans drive an aggregate of 3 trillion miles every year, while Tesla drivers have driven 100 million total (and they don't cite this number; are they including test drives?). That's well over an order of magnitude difference. Plus, the average Tesla driver is currently probably a superior driver (if for no other reason than that they have a brand-new expensive car) and has taken better care of their car (since it's within 2-3 years old, tops). In theory, Teslas will eventually become more mainstream over the years -- resold, price drops, lower-end models, etc.
Again, I don't think their conclusion about Teslas being safer overall is wrong. However, their conclusion of the likelihood of a Tesla catching on fire seems off, and the exclamation mark makes this press release seem glib.
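One way to see how thin one fire in 100 million miles is as evidence: model fires as a Poisson process. If Teslas actually burned at the conventional rate quoted in the press release (1 per 20 million miles), you'd expect 5 fires in 100 million miles, and seeing 1 or fewer would happen about 4% of the time -- suggestive, but a shaky basis for a "5 times" headline. A quick sketch, using only the rates as quoted:

```javascript
// If the true Tesla fire rate equaled the conventional-car rate
// (1 fire per 20M miles), how likely is observing at most one fire
// in 100M Tesla miles? Model fire counts as Poisson.
const teslaMiles = 100e6;
const conventionalRate = 1 / 20e6;            // fires per mile
const lambda = teslaMiles * conventionalRate; // expected fires = 5

// P(X <= 1) for Poisson(lambda): e^-lambda * (1 + lambda)
const pAtMostOne = Math.exp(-lambda) * (1 + lambda);
console.log(lambda);                // 5
console.log(pAtMostOne.toFixed(3)); // 0.040
```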
It takes a LOT of force to pierce 1/4 inch plate. My boat weighs 12 tons, and it hardly has a dent from the collisions I and previous owners have been in.
My conjecture: The Model S is a heavy car. Hit something pokey at speed and you've got an awful lot of forces channeled to a point.
I also thought it a bit much that Musk tried to compare this to severing "fuel supply lines" in a gas car. The likelihood of a 3 inch puncture severing a fuel line or entering the gas tank is vastly lower than compromising a battery pack that runs the length of the underside of the car.
The Tesla's underbelly vulnerability zone is vastly larger than fuel tanks and lines.. and a punctured battery doesn't need an ignition source to start a fire, either.
I would only point out that 25 tons of force isn't really a lot - I mean, the small jack that you use to lift your car can be a 5 or 6 ton device.
You have a vehicle traveling at a decent rate of speed; for it to strike or run over anything at all will involve tons of force.
Neat explanation of the sort of math involved, with both SI and US units: http://hyperphysics.phy-astr.gsu.edu/hbase/carcr.html. In the example, a car going 30 mph (50 km/h) striking a tree will hit with about 48 tons of force.
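That figure falls out of the work-energy relation F = m*v^2 / (2*d): the car's kinetic energy is dissipated over the crush distance. With assumed inputs (a 1,500 kg car at 30 mph crushing over 0.3 m -- not necessarily the hyperphysics page's exact numbers), the average force lands in the same tens-of-tons range:

```javascript
// Average impact force from the work-energy theorem: F = m*v^2 / (2*d).
// Input numbers are assumptions for illustration only.
const m = 1500;  // car mass, kg
const v = 13.4;  // 30 mph, in m/s
const d = 0.3;   // crush distance, m
const g = 9.81;  // m/s^2, for converting newtons to tonnes-force

const forceN = (m * v * v) / (2 * d);
console.log(Math.round(forceN));               // 448900 N
console.log((forceN / (g * 1000)).toFixed(1)); // 45.8 tonnes-force
```

A shorter crush distance (hitting something rigid and "pokey") raises the force further, which is the parent comment's point about forces being channeled to a point.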
"A fire caused by the impact began in the front battery module (the battery pack has a total of 16 modules) but was contained to the front section of the car by internal firewalls within the pack."
If the fire was able to jump cells, does this make the battery pack "fundamentally unsound," as Elon has described the Boeing battery? Not necessarily. However, merely puncturing the gas tank of an ICE car in this way is not guaranteed to set the gasoline on fire. The ignition temperature of gasoline is over 500 F, and the gas tank itself is most likely plastic. Gasoline vapor is explosive, but the car was traveling fairly rapidly and there's a fair amount of wind to dispel vapor. The ignition source would have to be heat from the metal of the debris striking the metal of the auto frame itself, through both layers of plastic, and that spark would have to find some gasoline, which pools at the bottom of the tank, not near the top. I suppose it's possible. Car crashes do produce burning gasoline, though usually it's a very severe crash that mixes gas vapor with the heat of the engine.
(Knowing that Elon is the Sun God among many here, I want to say this: I do not particularly care about karma; I'm saying 100% of what I am thinking.)
Otherwise it looks like making excuses and that is bad.
1. I have had my gasoline car catch on fire in my lifetime. (That was the end of the car.) However, it was having a lot of trouble at the time and we had just taken it to the mechanic. (That's right, it caught on fire at the mechanic's shop. We were still waiting to talk to the mechanic before going back home when it caught on fire.) This was not the only on-fire incident among our friends. One had his minivan catch on fire in a gas station.
But both of them were old cars. What is to say that most of the cars that catch on fire aren't much older than the Tesla? What is to say that the Tesla won't have more trouble as it gets older?
Oh yeah. The batteries will have to be replaced before the car is run down as much as our old beaters were. And Tesla owners will have the money to maintain their cars better than we did as teenagers.
So, what I'm saying is that the real test will be in a decade. More fires will probably happen, just like regular cars do.
Either way, it's probably not dangerous enough to be worth avoiding buying a new one.
All cars: 1 fire per 116 million vehicle miles (per year)
Tesla: 1 fire per 113 million vehicle miles (since inception)
Obviously, the Model S, being a newer Tesla model, does not have the full historical amount of "Tesla" miles as the denominator.
Furthermore, only 2% of non-deliberate fires start in the fuel line or fuel tank of a normal vehicle.
He seemed to skip that last bit.
AFAIK, all lithium-ion battery electrolytes are flammable (and they are pressurized in the battery container, too). Depending on the chemistry of the lithium-ion battery the Model S uses, some (e.g. LFP) are safer than others, but still, 1% potential?
EVs like the Chevy Volt and Fisker Karma, and even the Boeing 787 Dreamliner and UPS/FedEx freight flights, have caught lithium fires before.
- All Tesla cars are new and almost all of them have superior drivers.
- They drive their cars only on certain roads, whereas gasoline cars are almost everywhere.
- You cannot compare a 100-million-mile sample set with a 2-trillion-mile sample set.
We all know there is negative rhetoric bouncing around about this incident. It seems to me that, precisely because of this, there really isn't a need to write in such a manner, trying to block all possible avenues of attack as if one is afraid of what will be written in response.
The tone, to me, betrays insecurity, and this seems something at odds with the bullish, innovative nature of the non-PR aspects of the business.
And who would have thought a side-effect of disrupting the automotive industry would be training fire-fighters on the correct techniques for battling a lithium fueled fire?
A diesel-powered car would be much safer still, since diesel requires something like a wick to burn. It's hard to argue with Tesla's statement, since the argument is true; but it doesn't include this issue in the electric vs. ICE vehicle comparison.
I LOVE THE COMPANY. I DON'T ENJOY OR APPRECIATE THE LAME MARKETING ATTEMPTS THEY SOMETIMES MAKE. Just like that whole business of jumping through hoops to make it seem like there was some new magical way to finance a Tesla, this is wrong.
Trying to create a safety metric by comparing the number of fires to the number of miles driven per vehicle type is pure nonsense. You have to look at the causes and mechanisms of the fires and dig a lot deeper than that in order to even hope to generate a meaningful metric.
Here's an imperfect analogy (numbers made-up): One million people run marathons every year world wide. 1000 have heart attacks and die. Ten thousand people have run marathons with our shoes and only one had a heart attack and didn't die. You are far less likely to have a heart attack and die if you run marathons with our shoes.
Almost anyone would look at that and recognize it as a poor attempt to create a nexus where one does not exist. I think it's bad marketing.
Now, if we started to dive into the statistics and identified location, weather conditions, age, physical conditioning, pre-existing conditions (heart problem they did not know about), etc. we might actually be able to attempt a comparison between people wearing the new shoes vs. the other brands. Even then, the nexus would be tenuous at best.
A similar exercise would be needed to compare car fires between brands and types with any degree of validity. I don't have the time to dive into the stats, but it was easy enough to Google and do a quick scan:
It is easy to see that young males are more likely to be involved in a car fire.
There are statistics about different brands having different fatality rates (not necessarily related).
Lots of fires are caused by running equipment. Lots of fires originate in the engine compartment. Mechanical and electrical failures seem to account for over 60% of fires.
The point is simple: Far more extensive and detailed statistical work needs to be undertaken before anyone can conclude absolutely anything on the merits of any particular car or design as it pertains to potential to cause fires.
Elon and his team are very smart. They know this. And this is why some of their marketing of late feels really dirty and beneath them. This is Tesla reacting to news that affected their stock price and, potentially, buyer sentiment, with marketing rather than the truth.
Are Teslas safer than all gasoline cars? That question is probably not an easy one to answer at all.
There's the potential for a theoretical sort of an answer based on design. For example, there are no fuel lines to rupture. Does that mean it is safer? Hard to say. What do you compare that to? Perhaps you can list all the potential sources of ignition and sort them by probability and MTBF? Not sure.
Of course, then you have the real-life probability. Once you get a million cars on the road with all kinds of people, driving in all conditions, roads and levels of maintenance and neglect, things can change dramatically. If I remember correctly Tesla has somewhere in the order of twenty thousand. There's a reason we see major car companies recall hundreds of thousands of cars every so often. Shit happens. Design errors are made. And it can take time and a massive installed base to discover them.
THE TRUTH OF THE MATTER is these are the kinds of tests electric cars will have to endure over a period of time in order to reach wide adoption. Despite what's been said here, a full tank of gasoline is far safer than a fully charged battery pack with enough energy to go 300 miles.
Before anyone mauls me, consider how many gasoline cars have been driven and, yes, crashed, world-wide since gasoline cars came into mass production. Not last year. Since forever.
There have probably been millions of accidents without fires, even with fuel leaks. There's probably no imaginable way to compare the two at this time. We simply don't have enough data. And, no, linking to a horrible crash video on youtube involving gasoline igniting does absolutely nothing to support arguments on either side.
The one issue with electrics that is not spoken of is the fact that you have a several-hundred-volt high-energy system that could very well electrocute passengers. I fully expect that to happen one day (in general, not necessarily in a Tesla). If and when that happens you can bet it will put the brakes on electrics for a while and relevant stocks will plummet.
I still believe electric cars are the future. We simply need to go through the evolutionary process that will make them really safe for hundreds of millions of electrics to share the road. What happens when you have a pile-up of ten or twenty electric cars on a fog-covered highway? A pile of mangled wrecks with 400-volt high-energy systems is unimaginably dangerous. I can think of a few horrific scenarios under those conditions.
At some level part of me thinks that fuel cells are the future, not batteries. Having something relatively benign that can leak out would be a good thing.
A few months ago there was a horrific crash in my neighborhood. This 18 year old kid decided it was OK to go 100 miles per hour on this avenue. He lost control and plowed into a bunch of cars parked by the side of the road. He absolutely destroyed seven of them before coming to a stop. Most of the cars were mangled beyond recognition. He was driving an SUV with a lot of mass. His SUV was nearly cut in half and impaled into one of the cars to a degree that made it difficult to see where one car started and the other ended. Almost like taking two lumps of play-doh and mixing them together.
No fire. Gasoline all over the place but no ignition at all. He hit the first car, fused into it and the "ball" formed by the two cars proceeded to destroy the other six. Absolutely amazing display of how much kinetic energy was dissipated.
Had this been eight fully-charged electric cars I am almost certain there would have been a horrific fire as well as the potential for absolutely impossible-to-describe electrocution of some of the passengers. And, to make matters worse, it would have taken the rescue crew far longer to remove the victims as they would have to be worried about electrocuting themselves and the victims (at the very least).
Until there are enough electric cars on the road to have a massive pile-up accident where most cars are electric, we will not really understand the practical reality of a world where every car on the road is electric. Imagine having to walk out of a hundred-car pile-up of mangled wrecks where every car has a battery pack storing enough energy to drive 300 to 400 miles, wired to produce hundreds of volts. I can't imagine anyone who understands electronics and electricity telling me all would be well after looking at the pictures from this accident if all the cars were electric. Look at pictures 1, 8 and 11. No fires. Gasoline isn't all that bad in this regard.
Elon Musk and Tesla will review the design and possibly add protection against this type of accident.
Will any other gasoline car manufacturer be willing to subject its cars to the kind of crash the Model S encountered? I doubt anyone will.
crafty writing. read it as "high speed"
That's wrong, it's insanely easy to make. You don't need a "popper", just a pot with a lid. Any pot. And what kitchen doesn't have oil, butter and salt around?
But since TV wasn't around yet, what were you going to eat it to? ;)
A "small" at Regal has 670 calories and 34 grams of saturated fat. Thats about as many calories as a Pizza Hut Personal Pan Pepperoni Pizzaexcept the popcorn has three times the saturated fat. Even shared with another person, that size provides nearly an entire days worth of the kind of fat that clogs arteries and promotes heart disease. And every tablespoon of "buttery" oil topping adds another 130 calories. Asking for topping is like asking for oil on French fries or potato chips, according to CSPI.
The introduction of popcorn may have been one of the first steps movie theaters took to "open up to wider audiences", but as another thread points out, the artificial flavors and smells that surround the product today have not won it only friends. Movie theaters, in their fight for customers, have had to lower their standards so drastically to attract new moviegoers that others turned away in bewilderment. With the advent of home entertainment technology, both for audio and video, a fair amount of people now prefer the quiet, clean, comfortable, distraction-free screening in their own living room to a night at the movies.
At the same time, we're witnessing a big cultural landmark of the 20th century die out. It already has in some forms, which had to make space for the mega-multiplexes and super-blockbusters.
In this context, the introduction of popcorn may have marked the beginning of a development in which the original attraction, the movie, became just one factor among many in the "movie-going experience", thereby being devalued. In the end, movie theaters will have to answer the question of why they expect their customers to pay premium prices for these factors.
The business model of movie theaters, with or without popcorn, is not sustainable any more. Whether the disappearance of the cultural entity "movie theater" in its present form would still constitute a big cultural loss, or whether that loss has already happened long ago, is certainly worth debating.
Haha. I have a big coat with inner pockets where I can fit two 1.5 L sodas. I always sneak in snacks when going with my friends, since, at least in my country, every snack is ridiculously overpriced (around 5 to 10 times the normal price).
Considering the storylines of many silent films, I'm rather amused to contemplate what that implies. Seriously, "Keystone Cops" is kind of like the "America's Got Talent" of its day - "Othello" it ain't.
Not everything needs a blog post.
I've taken a jaundiced view of "liberation tech" efforts in the past and this is as good an illustration as any of why. Among "amateur" libtech projects, Tor is about as good as you get --- an active community, extremely widespread use, technical people with their heads screwed on right and as much humility as you can reasonably expect of people whose projects are (candidly) intended to thwart world governments.
If Tor can't provide meaningful assurances (here, there's a subtext that Tor actually made NSA's job easier), you'd need an awfully convincing reason for how you're going to do better than they are before "liberating" the Chinese internet, especially given that it is your users who assume the real risks.
EDIT: The above is somewhat hyperbolic and unclear. The NSA's capabilities may have legitimate uses. Similarly, there may be legitimate military uses for nuclear weapons. But building nuclear weapons creates the risk of worldwide nuclear destruction. Similarly, building this kind of highly efficient exploit system creates the risk of destroying all Internet security. The potential destruction far outweighs whatever good the weapons might accomplish. That is why I said they belong in the same category.
Tor, including hidden services, was never designed to protect against someone who could observe all or almost all traffic in the Tor network. Given that data, it's rather easy to correlate timing information. Indeed, Tor fundamentally allows this since it aims to be a low latency network.
Given the NSA's extensive tapping of key fiber lines, we should assume they can actually observe the necessary traffic. From the original paper announcing Tor: "A global passive adversary is the most commonly assumed threat when analyzing theoretical anonymity designs. But like all practical low-latency systems, Tor does not protect against such a strong adversary." --- Tor: The Second-Generation Onion Router  https://svn.torproject.org/svn/projects/design-paper/tor-des...
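To make the timing-correlation point concrete, here is a toy sketch (all names and numbers are made up): an observer who sees traffic both entering and leaving a low-latency network can match a flow by finding the candidate whose packet timings are a near-constant shift of the entry timings.

```ruby
# Toy timing-correlation attack: pick the exit flow whose per-packet
# delays relative to the entry flow have the least jitter. A low-latency
# network preserves inter-packet timing, which is what makes this work.
def correlate(entry_times, exit_flows)
  exit_flows.min_by { |_name, times|
    deltas = times.zip(entry_times).map { |t_out, t_in| t_out - t_in }
    deltas.max - deltas.min          # low spread => consistent latency
  }.first                            # return the best-matching flow's name
end

entry = [0.00, 0.31, 0.95, 1.40]     # packet times seen entering the network
exits = {
  "flow_a" => [0.52, 0.81, 1.61, 1.88],  # unrelated timing pattern
  "flow_b" => [0.20, 0.51, 1.15, 1.60],  # entry pattern shifted by ~0.2s
}

correlate(entry, exits)              # => "flow_b"
```

Real attacks work on noisy, high-volume traffic, but the principle is the same: a low-latency design cannot afford to destroy timing patterns, so an adversary who sees both ends can match them.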
- always have an up-to-date version of the Tor bundle!
- compile the bundle yourself from source
- run it virtually, and always roll back to a clean snapshot (taken before installing Tor) when done
- if possible use from a network that is not your own (open wifi, public wifi, etc.)
- spoof your mac address
- do not run JS, Java applets, etc.!
I know this seems extreme, but from what I read, it's the best you can do to protect yourself.
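For the MAC-spoofing tip, the usual approach is to generate a random locally administered address and assign it with your OS's tools (e.g. `ip link set dev wlan0 address <mac>` on Linux; the interface name is just an example). A sketch of the generation step:

```ruby
# Generate a random locally administered, unicast MAC address.
# Setting bit 0x02 of the first octet marks the address as locally
# administered; clearing bit 0x01 keeps it unicast (non-multicast).
def random_local_mac
  octets = Array.new(6) { rand(256) }
  octets[0] = (octets[0] | 0x02) & 0xFE
  octets.map { |o| format("%02x", o) }.join(":")
end

random_local_mac  # e.g. "06:3f:9a:c2:10:7b"
```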
It would be nice if somebody could honeypot them to find out the vulns and malware types they are using.
It looks like they had some trouble picking out users 5 years ago... lord only knows how easy it must be for them now.
It blocks non-anonymized traffic and makes permanent changes difficult. OTOH, privilege escalation bugs happen frequently on Linux.
However, for those with more limited resources, Ryan Barnett is working on an open-source monitoring system for BeEF (https://vimeo.com/54087884).
> But the documents suggest that the fundamental security of the Tor service remains intact. One top-secret presentation, titled 'Tor Stinks', states: "We will never be able to de-anonymize all Tor users all the time." It continues: "With manual analysis we can de-anonymize a very small fraction of Tor users," and says the agency has had "no success de-anonymizing a user in response" to a specific request.
So only with "manual analysis" can intel agencies have any success, and that appears to be with a small subset of users who have other vulnerabilities. But when targeting a specific user, the NSA appears to have had no success in de-anonymizing them.
Instead of having just a Tor/browser bundle, build a Vagrant machine specification that installs the Tor bundle. This virtual machine would be destroyed and recreated from time to time. Now put the machine specification on GitHub and let anyone use it.
Tor's biggest vulnerability is that the risk associated with operating exit nodes keeps their number relatively low, at ~1000 worldwide. If hundreds of thousands of exit nodes started popping up all over the globe, it would be very hard to counter.
I'm also curious if enough governments unhappy with what is happening could go as far as hosting many tor nodes outside the control of the NSA. Is the Global Passive Adversary threat still valid if there are many of them that are non-cooperative with one another (i.e. China can't monitor US and Russian tor nodes, Russia can't monitor US and Chinese nodes, and the US can't monitor Chinese and Russian nodes)? My intuition tells me that the global passive adversary would have to be able to monitor most of the nodes, but if others came on the scene doing the same, they would dilute the percentage of nodes that any single global passive adversary could monitor.
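That intuition can be put in rough numbers. If a passive adversary observes a fraction p of the entry/exit population (with the two ends of a circuit chosen independently), it sees both ends, which is what correlation needs, with probability about p². Every node added outside its reach therefore pays off quadratically. A back-of-the-envelope sketch, assuming independent, uniform selection:

```ruby
# Chance that a passive observer sees both ends of a circuit, assuming
# the entry and exit are drawn independently from the node population
# and the observer covers `observed_fraction` of it.
def correlation_chance(observed_fraction)
  observed_fraction**2
end

correlation_chance(0.5)   # => 0.25
correlation_chance(0.25)  # => 0.0625
```

Real Tor path selection is bandwidth-weighted and uses entry guards, so this is only the crude shape of the argument, not a real estimate.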
Anyone know what these tags look like?
It's also great that all these technical details are being released about how the intel agencies collect data. It's all fascinating.
Why is speed a factor in MITM attacks? The slide shows a proper MITM diagram... or is this QUANTUM thing exploiting a packet arriving before the honest response? And why would they need to do that if they are in a position to do a proper MITM attack, without exposing themselves to anyone monitoring for man-on-the-side attacks?
App looks good; I was thinking about creating something like this for my own use. My use case would be monitoring a forum's 'What's New' page so we can be alerted if a thread is started about our business. But we would need the alert much more frequently (every hour at least), and wouldn't want the same alert over and over again.
do you have a specific target audience for this?
Thanks to Matz, DHH, Evan Phoenix. Thanks to the hundreds of contributors. Thanks to Engine Yard for the $$$. Thanks to the community.
Rubinius 2 will target Ruby 2.1. We're bringing Ruby into the future! We will not support multiple Ruby language versions moving forward.
Version releases are changing with the release of Rubinius 2.0. There will be a new release roughly once a week, rather than on a pre-determined release schedule. The master branch will be kept extremely stable. New versions will be X.Y.Z+1. Please post issues; hopefully they will be fixed quickly.
The goal is to semantically version the Rubinius core starting with version 3.0. We've added a subdomain http://releases.rubini.us for hosting release tarballs.
About Rubinius Parts:
* It has a VM that runs bytecode produced by the Ruby compiler. Every Ruby method gets its own interpreter.
* The generational garbage collector (GC) has a very fast young generation collector, usually pausing for less than 15 ms to complete a collection.
* Rubinius implements native operating system threads for concurrency and has no global interpreter lock (GIL). Ruby code can run in parallel on multi-core or multi-CPU hardware.
* The Rubinius just-in-time compiler (JIT) turns Ruby bytecode into machine code. The JIT thread is mostly independent of the Ruby threads so the JIT operation doesn't impact the running code's performance.
* The Rubinius core libraries (e.g. Array, Hash, Range, etc.), as well as Rubinius tools like the bytecode compiler, are written in Ruby. The Rubinius systems treat them just like Ruby application code.
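A small illustration of the no-GIL point above (plain Ruby that runs on any implementation; only the parallelism claim is Rubinius-specific): on MRI the two threads below take turns because of the global lock, while on Rubinius they can genuinely occupy two cores.

```ruby
# Two CPU-bound threads. With a GIL (MRI) they are serialized; without
# one (Rubinius, JRuby) they can run in parallel on separate cores.
def busy_sum(n)
  (1..n).reduce(:+)   # CPU-bound work, no I/O to release a lock around
end

threads = 2.times.map { Thread.new { busy_sum(1_000_000) } }
results = threads.map(&:value)   # => [500000500000, 500000500000]
```

The results are identical either way; what changes is wall-clock time on multi-core hardware.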
Commenter's note: There's a whole section called "Plans, Meet Future" which seems to say Ruby hasn't kept pace with the "SaaS revolution." Honestly, I'm not sure what's going on there, so I'll punt on summarizing it.
Plans for improvement:
* Significantly improve concurrency coordination in the system. Some operations require stopping all threads; we're working to get rid of this.
* Provide more efficiency by using more modern lock-free concurrent data structures.
* Make the GC more concurrent and parallel.
* Make the JIT even faster and expose more of it to regular Ruby code.
Gems as Components:
* Major components, like the bytecode compiler, Ruby parser, debugger, etc. have been moved to gems. These components can be updated easily and quickly without requiring a Rubinius release.
* In Rubinius 2.0, the Ruby standard library has also been converted to gems.
The post then reflects on how Rubinius has inspired other projects: RubySpec, Topaz, Opal, Puma, etc.
And it ends with: "Ruby is an excellent language. Rubinius is dedicated to providing Ruby developers with excellent tools and technology competitive with these other languages. Developers who are happy writing Ruby shouldn't be forced to leave it because of technical limitations."
As someone who has been working with MRI since 2006-ish, I feel this statement is accurate. MRI is stagnating around the GIL, and Rails has lost some of its edge, which is partly responsible. Since about 2011, I've invested in Erlang (Elixir) and, mostly, Node.js. They're communities that are truly evolving well, applying themselves to new problems inherent in changing application-building demands.
Does anyone have any pointers on how to keep the latest RBX up to date in RVM?
Will heroku be supporting RBX's new weeklyish release cycle?
I wonder what this means for RubySpec?
Now, Rubinius 2.0, Ruby 2.1, Topaz, JRuby.
Exciting times for Ruby implementations.
This could also describe Sony's behavior with the Playstation 3. Early-model PS3's contained a feature called OtherOS which allowed you to run Linux on the console in a way officially supported by Sony.
Sony later decided to remove OtherOS support from the console in a firmware update. While technically optional, the firmware update was required to play online games on the console, or play subsequently produced game discs.
IMHO it's fine if a hardware manufacturer chooses to remove features from newer models. OTOH reaching out through the cloud to remove features and cripple models that people have already bought should constitute deliberate fraud against the consumer. You bought something which was advertised to do X, Y, and Z, the manufacturer deliberately removes the capability to do X after you've purchased it -- it seems like it should be totally illegal for them to do that.
If a car dealership owner decides to sell only cars without radios, that's their business decision, and it's not illegal for them to do business that way. If a car dealership owner decides to drive around town, breaking into cars people have already bought from him and removing their radios, he's going to jail and rightfully so.
Why should PS3's be any different?
Apparently, they are -- AFAIK Sony has suffered no legal consequences for its policy whatsoever.
I can't fathom why vendor locks are covered by the DMCA, but they are (at least, that's what the Library of Congress says), so maybe this is covered too. Or not.
Maybe a bunch of customers could get together and launch a class action lawsuit?
> Unfortunately, this would be very difficult for several reasons including the fact that wireless subscribers are no longer allowed to sue their carriers as part of a class lawsuit.
> The problem is the U.S. Supreme Court's 2011 decision in Concepcion v. AT&T Mobility, in which the Court upheld the validity of class action waivers and arbitration clauses in consumer contracts, according to Michael Ashenbrener of Aschenbrener Law, a consumer advocacy law firm based in Chicago.
> "As a result of the Concepcion case, it is essentially impossible to sue a U.S. cell phone carrier in a class action," Aschenbrener explained in an e-mail. "Consequently, there is no effective check on the power of U.S. wireless companies."
Mobile phone companies suck, eh?
They have earthquake-resistant storage for the Keystone-Mast stereogram collection of master glass-plate negatives.
The Pacific Pinball Museum is also pretty cool: http://pacificpinball.org
If you have some interesting stuff taking up space in your garage, send it to one of these museums. If you miss the old junk that you've thrown away, go visit one of these museums.
I have been using Gradle for almost 3 years now and I couldn't be happier. The support is great, they roll out new features at a regular pace, it's fast, and incremental builds WORK.
If you work in Java and you are still stuck with Maven, please, take a look at Gradle.
By default, not only are you downloading a truck load of jars from the internet and running them locally, you are fetching them over an insecure channel!
So as of a few years ago I too moved on to Gradle and have loved it.
Language-based build systems offer more expressiveness, but that is also more rope to hang yourself with if someone who doesn't know Gradle well starts tinkering with your build. For example, we had someone add a custom Checkstyle report to an Android project using Gradle. After their edits it stopped working because of the way the Android plugin was designed, bringing the build down for several hours. I came in, rewrote the Gradle build file to work around the issue, and it worked. Then I documented for that person why it hadn't worked and what I did to fix it.
As with the adoption of anything that gives you more rope to hang yourself with, it's a necessary piece of Gradle adoption to document your practices and train your team well.
Fight the Maven way and you'll end up in a mess.
tl;dr Understand the tools you use.
I agree that this example demonstrates that the Maven model is broken, but why would you structure your project like this rather than using another module or a submodule?
Maybe it's just lack of experience, but I highly prefer the rigidity of Maven projects compared to some of the messes that more expressive build systems allow.
It might be, but it still has its merits. It was one of the first, and many newer, improved tools can't work without its repositories.
I still like the IDE support for Maven: e.g. IntelliJ draws a nice dependency diagram that I find useful, even if I don't use Maven for anything else in a project.
Maybe its biggest mistake was that dependencies are global by default, and not local (like Node's npm). If the dependency structure had been local to a project, then many of the pain points I encountered with Maven in the past wouldn't have existed.
Build reproducibility, which the author mentions, is another issue. When I scrub the local mirror, the outcome can be completely different. Previously, builds were working fine; after I scrub, everything breaks. This is mainly due to non-local dependencies that had been cached locally; after the scrub, more recent versions are pulled instead.
Another suspicion I have is that most build scripts are slapped together by googling until it works, which of course makes them fragile (not to mention the legal issues, because it is opaque where all the stuff is pulled from).
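For contrast, Ruby's Bundler takes the project-local approach: a Gemfile pins dependencies per project, and with the older `bundle install --path vendor/bundle` invocation the gems can even be vendored into the project tree rather than a shared global cache. An illustrative Gemfile (gem names and version constraints are examples, not recommendations):

```ruby
# Gemfile -- dependencies declared per project, not per machine.
source "https://rubygems.org"

gem "rake",     "~> 13.0"
gem "nokogiri", "~> 1.15"

# `bundle install --path vendor/bundle` then installs into ./vendor,
# so scrubbing or upgrading one project's dependencies can't silently
# change the builds of any other project.
```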
But why would any single person in this world run chromeos on a machine that runs Windows 8? The very same machine already either runs Windows (if that's your eco system) or is powerful enough to run Linux. Why would you _ever_ run Chrome OS here?
Put differently: What's the whole point of Chrome OS, unless bundled with cheap/slow hardware, as a kind of Damn Small Linux, the Web 2.0 version?
I hope they don't ruin the Chrome browser with the techniques used to promote Google+. I don't want to see a tomorrow where you download Google Chrome and it comes pre-installed with Google Drive, Google Docs, Google+, etc. Shoving things down people's throats isn't going to end well.
It's like going into a McDonald's and finding a Burger King stall inside it.
This is going to be an experience as smooth as a 1974 Land Rover with no tyres on it.
DRM-enabled Firefox would be effectively non-free software: you could not modify it and rebuild it from source while retaining the DRM functionality.
* Firefox is the only browser that can't play certain content
* Firefox is the only browser that plays all content
I would assume the first, because it should be easy for a content provider to just block a certain browser entirely (and that block could be circumvented, but the majority of people won't do that). People will blame Firefox, not the content provider.
Remember what happened to html5 video. Everyone but Firefox was pragmatic, and implemented h.264 -- primarily, but not only for hardware acceleration reasons. Years later, Webkit-based browsers are ubiquitous, and Mozilla is developing a phone OS nobody will care about, in a desperate effort to become relevant again.
Imo, Mozilla ought to spare itself another embarrassment by being the only guys in the room with the contrarian opinion. Take the issue to the W3C directly -- or for that matter vote for your local pirate party. HN and other tech news venues might be the correct places to recruit support, but you ultimately want to lobby your case directly.
- Media purchase was inconvenient and overly expensive.
- People pirated because it was convenient and cheap.
- Streaming services offered convenient, low-cost solutions.
- People 'stopped' pirating because streaming is a decent, convenient legal alternative.
"The W3Cs (and Tim Baner Lees) support of EME shows clearly that once again, the W3C has gone down a blind alley (like with XHTML) and is not interested to serve the real needs of the web. The WhatWG was the result of W3Cs stagnation on addressing real world needs. And once again the W3C is more interested in stagnation than real world needs with EME. It has to be expected that the relevancy of any W3C standard will substantially diminish in the future."
I wrote pretty much the same thing in the comments on the blog post yesterday when people were freaking out about this then. EME is a plugin spec for implementing DRM, not something that would get baked into browsers.
Everyone, put your logic pants on and stop freaking out for a second. This might be a silly spec for implementing a stupid premise (DRM), but it's not the end of the open web.
Is this just a crusade against DRM as a whole (good luck with that) from the free software movement, or do they have problems with this exact proposal from the W3C?
So I wonder if Firefox CAN even implement it?
So, why not the same answer for passive content?
I see the corruption of W3C (because that's what it is) by corporations almost as bad as the corruption of NIST and the security standards by the NSA.
And for what exactly? The apparent "convenience" of not having a 3rd party plugin, but instead a "native" plugin in the operating system, that will only work on certain operating systems and browsers? HTML and DRM are incompatible in principle, and will be incompatible in practice, too. It won't give you any convenience, and will potentially make things worse in many other ways.
And all of this because we're starting to buy into the idea that the content companies are right and piracy is hurting their sales? I guess repeating a lie long enough does make it true in the end -- even though it probably isn't.
So once again, why are you letting our Internet freedom slip away without even a fight?
 - http://www.freedomhouse.org/report/freedom-net/freedom-net-2...
 - http://torrentfreak.com/piracy-isnt-hurting-the-entertainmen...
Imagine the new world that would be open to malware/spyware if DRM is enabled: they would easily use it to hide their shitty stuff without allowing anybody to see what's going on. How is the W3C going to let that happen? :S
Hopefully Firefox won't implement this shit in their browser.
Here is the reason: if there were such a mechanism in the browser, we probably would have had Snapchat in the browser years ago instead of in Apple's safeguarded garden.
There is no evil technology; it just depends on how it's used. I'm surprised so many are blindly naive.
And even if they implement DRM, I could probably just grab the source and comment out a few ifs, and would be fine (assuming it's not just a wrapper for Windows' DRM).
As we all know, DRM is folly: if the data can be decrypted for use, then it can /always/ be stolen.
By forcing yourself to try for 15 minutes, you gain a deeper understanding of what you're troubleshooting so that, even if you don't fix it in 15, next time you're in a better position to troubleshoot than you were the last time.
And by forcing yourself to ask for help after 15, you not only limit the amount of banging-your-head time, but you also get to see how the other person solves the problem while all the details are still fresh in your mind, so that you'll more likely have a deeper understanding of why what you were doing to fix it wasn't working, and why the ultimate solution actually worked.
The person you ask can focus on the parts that you didn't figure out for yourself. And, you may have gained a different perspective and/or insight into deficiencies or additional options that is actually of interest to the person you talk to (write, IM, etc.).
Voila. You just turned a lecture into a more interesting and engaging conversation.
So now when I'm tempted to ask a question on SO, I write out the question in a text editor, giving as much detail as possible. It's not 100%, but I've found that going through the process of trying to frame a question intelligently goes a long way toward figuring it out myself.
More relevant to HN viewers: If you're doing work on a production server, ask before trying. The cost to your corporation of you failing and bringing down a mission critical service is typically greater than the context switch of one additional person to make sure you're doing it right.
Strongly recommended to hackers.
When I started, I had very little experience but a willingness to learn. My boss hired me anyway, and the job moved from pushing paper to labs, to "OK, we need to update this web application," to "I need you to learn how to deploy a very customized Windows image for 300 computers, and learn to maintain them." Since I was much younger, first as a student and then a full-time employee at uni, it was easy to ask my bosses for help and tell them everything I did. (The first boss, if you can believe this, actually wrote his own code to hide a password in the bootloader, run some admin task on the first boot after imaging, and then delete it after completion; with Windows installations and their inconsistency, it took him months to get that right. He is now a full-time lit nerd and author. Talk about a renaissance man.) Not only did that teach me to solve the problem at hand, it taught me how to approach computer problems in general (kind of like the OSI stack, but more general than networking, and not as shitty as "turn the computer off and on again"), and then how to debug stupid coding mistakes in scripts in the least time possible (answer: it might not be a production app, but make sure your scripts have good toggleable logging infrastructure or you will be sorry).
Unfortunately, I moved on from that job. And if this long-winded post is any indication, I am now seen as too chatty and annoying with this approach where I work. Some people get it, whereas the more senior infrastructure people see it as me questioning them when I ask for explanations or better tips for troubleshooting issues I could see (not ones that are there, but that I potentially could see) from my end, and I know when to leave them alone. As others pointed out, it is essential to apply this to everyone, and in many institutions that is seen as being chatty and nosy.
I learned a lot through my mentors, and I wish this could be the norm everywhere I have worked and work now, but many oppose it as questioning authority. I wish it were different, but oh well.
I'd suggest trying it with a step in between. Something like: try for 15 minutes; if you still can't find the solution, go for a quick break, like getting a coffee; and if the solution still doesn't magically appear, ask someone.
I lost count of how many times I solved a problem while getting up to get coffee, after trying hard to find the answer for a few minutes. I can't be the only one.
In the process of writing up a clear and detailed post, which often involves simplifying the problem into something reproducible on jsfiddle, I suddenly see the answer.
Instead of hitting submit I can just close my browser tab.
I've noticed that going home early and tackling the problem fresh the next morning helps more than the 2-3 hours I'd otherwise spend with no success.
I think the reason this works well is that you are forced to document the problem and make it as easy to understand as possible. Complex problems exist, but they are easier to solve once broken down into solvable pieces.
Even better: recently someone worked out how to fully root it so you can enable hardware virtualization and sign your own kernels.
You can even install all the Ruby gems, packages, and whatever else you need for the class before you distribute it, for convenience.
I have used AIDE (https://play.google.com/store/apps/details?id=com.aide.ui&hl...) with GitHub and Dropbox with some success. It works as far as I can tell, but I feel it could be a lot more polished.
A Chromebook + modern Android smartphone/tablet is powerful enough to provide a very good development environment (no need to emulate a device). It's held back by Eclipse being slow and bloated and Android SDK being unavailable.
Is there any hope for me other than getting a more powerful laptop? I feel like people around the world should be able to write great Android software with a Chromebook + Android device, but they currently can't.
The problem is that I don't want to get a laptop and spend countless hours getting it to "just work".
And 4 hours battery life? Not really enough IMO, especially for a slow Celeron :-/
$200 coding machine is not a bad idea, but I just cannot figure out where to use it.
It's interesting to think about the factors that go into this. I could imagine:
* High unit cost
* High maintenance cost, especially in remote locations.
* It is only useful in remote locations where wheeled vehicles can't go.
* It requires special training to operate.
* It doesn't have enough intelligence to avoid obstacles by itself in the remote, rough locations it would otherwise be fit for.
(Sometimes I miss Slashdot. This one deserves an 'overlords' reference).
Why don't we see practical applications of all of these robotics experiments? The answer is very simple really: Most of them are relatively pointless and add very little to the robotics knowledge-base that will be needed to really move robotics forward into real-world applied robots.
Think of something like robotic vacuum cleaners. Nothing whatsoever innovative about any of them. It's a wheeled platform that has been in use in hobby and research robotics since, well, forever. The '70s and '80s were full of robots with this basic platform. What changed? Electronics got better, batteries smaller, microprocessors more capable, manufacturing more efficient. What was retained and reused from prior research? Probably not much.
I started in college with the goal of becoming a robotics engineer: an EE with a specialization in robotics. It didn't take long for me to realize that the field wasn't as interesting and exciting as I had made it out to be in my mind. The R2-D2s and C-3POs were nowhere to be found and were easily decades away from becoming reality. If I wanted to be in robotics I would end up making industrial manipulators, or things with motors that we would all pretend were robots. That's a pet peeve of mine: Battlebots had nothing whatsoever to do with robotics. It was a bunch of remote-controlled machines. Not robots.
I digress. The point is that I was really excited about the field until I realized what I wanted to do would have to wait 50 or 100 years. I wanted to work on Commander Data, not a mindless pick-and-place machine.
And so I began to dissect things and think about what it would take to get there. Do we learn anything by making humanoid-looking little robots out of RC servos? I built a couple. It's an utter waste of time: nothing whatsoever of value other than pretending we built a humanoid. Don't get me wrong, it's a great hobby and lots of fun for kids to learn from, but it is far, far away from anything even remotely useful.
In my opinion these are the areas that need a quantum leap in development before robots like Wildcat can become useful and relevant outside the lab:
ACTUATORS / ARTIFICIAL MUSCLES
This is huge. Motors, gears, springs, pistons and bladders just don't cut it. We need a step change in the performance and capabilities of what we use to do the job of biological muscles. Machines like Wildcat can't operate for days at a time. They use internal combustion engines to power pumps and hydraulic or pneumatic end-effectors to actuate joints. This is lousy. Very little can be learned from trying to operate such machines. You end up with things like Asimo that walk like they are taking a dump, because it is nearly impossible to implement true dynamic gaits: we either can't implement enough degrees of freedom or joint actuation simply isn't up to par.
Artificial muscles that perform well and are energy efficient would revolutionize the field.
ENERGY STORAGE
Thankfully this is something robotics shares with electric cars. We need to do much better than current LiPo cells allow in terms of volumetric power density (at the very least).
ARTIFICIAL INTELLIGENCE + CONTROL SYSTEMS
This is a field that has seen advances but is nowhere near where it needs to be. I can teach a five-year-old kid how to sort and fold a pile of clothes without much effort (other than maintaining his or her attention). It would be very hard to do the same with the AI we have mastered to date. I am talking about having a couple of robot arms and a camera presented with a random pile of clothes, and having those clothes sorted and properly folded as a human would do it. No special mechanics, suction mechanisms or anything like that.
PROCESSING / NEURAL COMPUTING
The AI+CONTROL field ultimately needs far more advanced and energy efficient processing architectures than are commonly available today. Stuffing a robot with a powerful Linux PC provides nowhere near the processing bandwidth needed to perform at a level comparable to a human child. I am not sure what form this step improvement in computing will take, but we need it.
PROGRAMMING LANGUAGES / DEVELOPMENT AND SIMULATION TOOLS
We are in the dark ages. We need a serious paradigm shift in the way we program computers if we are ever going to even approach something that can compare to the fictional C-3PO or Commander Data ideas.
If you want to contribute to robotics your time and efforts would be far better spent on the above (I am sure there are other areas I have not listed) rather than making little remote-controlled gyro-actuated cubes that link to each other via magnets. I don't know what can be learned from that other than making remote-controlled gyro-actuated little cubes that link together via magnets. Cool toy. Useless for the advancement of robotics. It's almost like spending a lot of time playing chess: You become better at playing chess, a narrow skill, and virtually nothing you do can be translated or reused for other tasks outside of chess. Grandmasters are not genius thinkers, they are simply great chess players and that's it. Master little cubes with gyros and that's all you've mastered.
I have two German Shepherd dogs. I have trained both of them to search for objects I hide anywhere in the house. I show them the object, I let them smell it and then hide the object while they wait in a "sit-stay" well out of sight. Sometimes I'll hide the object deep in a drawer inside a closet in an upstairs bedroom while they wait in the garage with the door closed. These dogs are amazing to watch. They always find what I showed them. Every so often they need a little help (and they ask for it), most of the time they do it on their own. Think about all that is required for an animal to do this spanning a range of capabilities from cognition, perception, sensing, navigation, planning, communications and more.
There is no way a bunch of little blocks or a gasoline-hydraulic-powered machine is helping us advance towards even something as simple, in terms of biological beings, as finding a hidden object using smell. A better place to spend money and resources is in the areas I highlighted above and others I did not mention. Once you "ace" the above, the process of designing and fabricating a mechanical frame with the required capabilities should be an almost academic exercise for any engineer with a moderate range of experience in the electromechanical fields.
Not to minimize Boston Dynamics, but I really think a lot of what they and others are doing is simply burning tax money for no good reason. Well, there is a good reason: the government folks who shovel out the money are easily impressed by this stuff. Nothing really advances, but it is impressive as hell. Who knows how much money was burned on the GE walker in the 1960s. I don't know of anything that came out of that project that is in use today. If I gave any reasonably capable team of engineers a few million dollars to play --without a requirement to actually deliver something that works in the real world-- they could build similarly capable machines. There's nothing special about these systems other than they are impressive to the untrained eye.
General Electric built quadrupeds in 1968. The only reason they didn't perform like the Boston Dynamics rigs is that they did not have access to better computing platforms, sensors and electronics. There is nothing in the Boston Dynamics machines, in terms of mechanics or hydraulics, that was not available or could not be implemented in 1968. Just look at the video (gotta love the sound effects). This machine, all by itself, proves my point about the futility of some of this research. They all put the cart before the horse. The GE machine needed better effectors, sensors, energy storage, AI and control. The machine shows the amazing mechanical complexity that was attainable in 1968. Remember: no SolidWorks, no microprocessors, no FPGAs, no Linux, just a dude pulling levers. Amazing stuff.
We are simply focusing on and throwing money at the wrong things.
Just came across this, which is really cool (1957):
Not to mention that with a biological "robot" you also have a very low cost of producing subsequent units if you manage to create a non-sterile hybrid.
(...I'm only half joking)
What I never could wrap my head around is how to monetize it without using ads all over.
I see you are using "Buy on amazon". Is this giving you a share of the profit?
Anyway, great job!
Great ideas for vinyl links too.
(Posted my own track under the artist name, just got deleted. Guess I should have lied.)
EDIT: the site is very nice though.
Here's my submission, if anyone wants to hear it: https://www.upbeatapp.com/#/?track=662
I see you are using redis! It would be great if you could share some details about your usage like how much memory your app uses and what are your costs associated with it.
How do you deal with copyright and author rights?
A bit of feedback: people's submissions should default to 1 upvote (à la Reddit).
I remember seeing the iPhone unveiled and thinking "It's cool, but will people really buy such an expensive phone?". I think it was $600. That was pretty expensive at the time. I also remember thinking about how they wanted all apps to be web-based. A disaster for certain, I thought. The phone market was all over the place and brand loyalty was in short supply. I'd seen Compaq go from dominating PDAs to nosediving off a cliff. Motorola took their brand loyalty (remember how many people had Razrs?) and went into hiding. Time and again I'd seen phone platforms rise and fall. I was skeptical.
All I knew was one thing - I certainly wasn't going to buy one.
Years later, I now program for iOS a lot. Every day, pretty much. I'm a full-blown Mac convert and, I'll be honest, the iPhone was what caused it. I bought my first iPhone at version 4. Then I specifically bought my first Mac so I could use the SDK for that phone. I fell in love with the platform, in all its insane glory.
I might move to another platform one day, but I can honestly say I never imagined this is what I would be working on.
when the rumours started swirling about apple launching a phone people could not believe it. like at all. apple, the ipod guys, building a phone?! no way, what a joke. you had the photoshops of ipods with a dial, etc. analysts explaining why this was completely wrong, impossible and apple was doomed.
same at the launch of the iPad. same at the launch of the iPod (less space than a nomad, no wifi, lame). what the fuck is a nomad one might say today.
those great photoshops of steve holding a giant iphone to his ear, hilarious. an iPad, buhaha, bunch of retards at apple. but now the galaxy note makes perfect sense. to exactly the same neckbeards who laughed at apple's idiocy before.
apple is indeed the most frustrating company. it somehow has defied gravity in the second jobs era and proven that large swaths of the tech world couldn't define taste and style if their life depended on it.
and perfection, like the iphone launch, is a matter of style and taste.
"Compounding all the technical challenges, Jobs's obsession with secrecy meant that even as they were exhausted by 80-hour workweeks, the few hundred engineers and designers working on the iPhone couldn't talk about it to anyone else. If Apple found out you'd told a friend in a bar, or even your spouse, you could be fired."
Christ, what an asshole.
-----(regarding the presentation)
They had AT&T, the iPhone's wireless carrier, bring in a portable cell tower, so they knew reception would be strong. Then, with Jobs's approval, they preprogrammed the phone's display to always show five bars of signal strength regardless of its true strength. The chances of the radio's crashing during the few minutes that Jobs would use it to make a call were small, but the chances of its crashing at some point during the 90-minute presentation were high.
While scenic, the 280 is certainly not 'remarkably empty'. I make the commute from San Jose to SF every day and wish I shared the enthusiasm of the author. Apologies for commenting on something completely orthogonal to the point of the OP.
What is interesting is that product-demo glitches happened all the time. We went to one presentation at which Steve had to ask people not to use the Internet because there was not enough bandwidth.
But mistakes were so "naturally handled" that people just did not care.
I think it was Edison who said you will not be remembered for your mistakes, but for your successes.
"The solution, he says, was to tweak the AirPort software so that it seemed to be operating in Japan instead of the United States."
And while that suggests some pretty deep technical savvy at executive levels, they still had heartburn over seemingly simple questions like "can you put radio waves through aluminum?"
It seems to me that the genius of Jobs was 1) to envision customer experiences based on really remarkable extensions / integrations of existing tech and 2) to judge the moment when those visions had gone from "someday" to "now".
As a former Apple fan, I actually find the iPhone's hemorrhaging of market share and Apple's uncertain future extremely encouraging. I always attributed the things I liked about Apple to their struggling underdog status. They lost that with the iPhone, and they've never been the same since.
It will be fascinating to see if some of the old Apple shines through in the years to come.
What other stodgy industry is there that Apple could easily disrupt? How about this: how does it make you feel when you use the DVR box that your cable / satellite company forces you to use to watch TV? I know the answer for me. Seems like low-hanging fruit with a potentially enormous payoff for Apple.
See the press release: http://www.apple.com/pr/library/2005/09/07Apple-Motorola-Cin...
Within two years, non-iPhone smartphones will be niche players with partisan user bases, but the bulk of mobile development will be once again for iOS.