Joining Google gives him ready access to data sets of almost unimaginable size, as well as unparalleled infrastructure and skills for handling such large data sets, putting him in an ideal position to connect researchers in academic and corporate settings with the data, infrastructure, and data management skills they need to make their visions a reality.
According to the MIT Technology Review, he will be working with Peter Norvig, who is not just Google's Director of Research, but a well-known figure in AI.
It's a shame; he's made many great contributions to our field, but I fear he jumped the shark a while ago. Maybe going to Google will force him to work on solutions to problems whose correctness can be more easily assessed.
Singularity U, as far as I understand, is not really there so people can more quickly get to the point of uploading their brain to the cloud or anything - it's essentially for business strategists who want to have a better grasp of where things will be 5-10+ years out. If the Goog believes strongly in the Kurz's ability to do this, then it seems like a pretty nice score for the Goog.
A hire like this one certainly reinforces that perception.
I don't know if it's truly possible to accomplish, but it's fascinating to see a major company taking steps in that direction.
He's a visionary who can deliver a finished product. I think he must have some pretty specific ideas, and he wants to partner with Google.
A few guesses:
- New interfaces to replace keyboard/mouse/touch. Voice, gesture, face, brainwaves. Sign language with humming, blinking, and pupil pointing. Works with tablets, TVs, wearables, cars, buildings, ATMs, etc.
- SuperPets (r) that can pass the Turing test. And do the shopping.
- Surgically implanted Bluetooth. (It could literally be a tooth!)
- Hover skateboards.
- The Matrix. (Or the 13th Floor, which was a better movie in my not-so humble opinion.)
I don't think it'll have to do with life-extension though. That's just too crazy far out-there.
I see what DRF means, and The Singularity is Near did seem mostly a perfunctory literature review, with important issues not discussed, just skimmed over. (For example, he doesn't discuss the causes of accelerating returns, and doesn't support the causes with data, only the effects. Another example: is it necessarily true that we are intelligent enough to understand ourselves? We're effective when we can decompose something hierarchically into simpler concepts... but what if there isn't such a decomposition of intelligence? i.e., the simplest decomposition is too complex for us to grasp. Hofstadter asks if a giraffe is intelligent enough to understand itself.)
But I thought he supported his basic thesis, that progress is accelerating, compellingly. He really did a great job (which seems to be the result of ongoing criticism, and of him finding ways to refute it).
Read between the lines: "next decade's 'unrealistic' visions" is likely nothing less than brain-computer interfaces, with the end goal of extending life by storing the entire human mind on a machine. That's certainly not far off from Kurzweil's timelines on the Law of Accelerating Returns. I can understand why the PR does not say this, but it seems clear this is where Kurzweil would want to invest his time.
I mean even if you don't believe in the Singularity, you must believe in Google, right?
Good thing that in the brave new future world of 2013 labor and marketing are completely free of all costs, opportunity and otherwise.
Just from memory for my startup (Authy.com), initial costs were:
Depending on your skills these initial costs will vary. And although I agree you don't need external investment to cover them, you should at least plan to invest US$10,000 to cover your initial costs.
EDIT: Seeking technical co-founder to help build out idea. Must be prepared to sign NDA before equity can be discussed.
In other words, I trust this guy with knowing how to execute (and to the point, probably recognize) successful minimally-viable products.
It is not a joke.
> I have no understanding of the concept of humor...
So it is a joke... or at the very least tongue-in-cheek. (With some exceptions,) I doubt hosting is what most VC funding is spent on.
Edit: as he said, "The biggest obstacle to creating something useful is finding the time to build it and attracting an initial pool of paying customers." If you have access to those you probably have $37. I get that he is trying to say technical costs can be negligible for startups. I fail to see how the Investment Co-Prosperity Cloud helps anyone in any way. And thus, I think it's a joke (even if the funding is real).
> Participants receive almost no money, and are expected to do everything themselves.
So not really all that different from many (most?) incubators.
From Webpop we'll offer a free project and a startup template (http://app.webpop.com/themes/startup) for anybody accepted into the program.
For some people this might be enough to completely skip the Linode and buy one more beer.
($1.12 won't even cover the minimum BART or Muni or VTA fares now, I think, unless you're a child or senior or disabled or something)
I wonder if this can get the appropriate community to make it a decent alternative compared with TypeScript and CoffeeScript - or maybe I'm missing something here?
It also supports Ruby, Scheme, Lua, and others.
What made it impractical was the slowness. Emscripten seems to work fine for clang, not so much for interpreted languages. Maybe the translator path can solve this issue.
(Ignore this...)The mime-type is wrong though, there isn't an authoritative mime-type for Python, so it should use a private subtype like text/x-python or application/x-python.
But yeah. Cool stuff.
Native support of Python would be my dream (I don't see why there aren't 3-4 competing languages in the browser, though the complexity of such is a decent argument against), but this solution seems to be a great stopgap, as well as compatible with future in-browser Python implementations.
iPhone says I have to sync using the X Y or Z protocol, android can connect to Exchange and also offers up an api so I can download an app that hooks whatever-the-hell directly into my contact list (or other android system bits).
It's the same for sharing. I click the share button on Android and I get a list of all the applications that can take my android.content.Intent.ACTION_SEND intent. On iPhone, I can pick any service I want as long as it's Twitter.
I've been using Exchange exclusively instead of IMAP for Calendar and Mail "Push" since I switched to iPhone....
This is absolutely outrageous.
I should have noted that they hadn't been on the internet in the '90s when Microsoft did the same with Hotmail, and others.
Ultimately, nothing is free.
Still, I can't help but wonder if ads in emails didn't generate enough cash, so now it's time to charge.
Also, I'm doubtful that this affects the majority of users: on both major mobile platforms (Android and iOS), Google has a push solution with nearly identical functionality to Exchange.
I was under the impression that GMail uses their own version of IMAP? The built-in iOS mail client does not provide push e-mail via IMAP but instead uses P-IMAP for push e-mail via iCloud.
This kind of stings.
The one that used to go to https://tools.google.com/dlpage/gappssync/thankyou.html .
It's also not clear that there's really a valid comparison here. In order for computers to recognize objects we need to program them to learn recognition on their own (because programming them explicitly to do it would be far too hard). When we program a computer to solve a logic problem the computer isn't learning to solve that problem, it's the programmer, not the program that "knows" how to solve it.
Trying to teach a neural network to play chess is probably much harder than teaching it to recognize images (at least my very limited experiments suggest this to be true).
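The contrast being drawn, that we program computers to learn recognition rather than coding the rule directly, can be made concrete with the smallest possible learning example: a single perceptron taught the OR function. This is a toy of my own, nothing like the image or chess experiments mentioned; all the numbers are invented.

```python
# Toy illustration: "programming a computer to learn" a rule (here, logical OR)
# instead of hand-coding the rule. Purely a sketch.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few epochs of the perceptron rule
    for (x1, x2), y in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred                   # mistake-driven weight update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # the learned weights now reproduce OR: [0, 1, 1, 1]
```

OR is linearly separable, so the perceptron converges; image recognition is the same idea scaled up by many orders of magnitude, which is exactly where it gets hard.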
Boy did I leave at the wrong time. :)
I would claim that while a single logical frame is easy to simulate on a digital computer, creating and balancing multiple, not-necessarily-consistent frames is very difficult, and requires as much computer activity (or brain activity) as the similarly difficult task of raw input processing.
One might argue that neural systems began as very different systems from digital computers, but the evolution of the large human brain has allowed them to emulate discrete logic, including a computer's digital logic, while still doing the balancing of multitudes of environmental constraints that neural systems have excelled at for millions of years. And it lets "us" conceive of, and even build, computers perfecting this discrete logic. Pretty amazing.
I think there are all kinds of reasoning skills that we've never been able to test, because they depend on perception and motor skills. It seems possible those would take many more computational resources. I find it hardly surprising that feeding a program abstractions and allowing it to reason about those abstractions is simple. It's the interacting with the real world, correlating abstractions with the real world and coming up with useful new abstractions, that's hard.
I don't think we'll ever have an AI until something is built that can freely interact with the world, freely gather data, and freely modify itself to enhance all its abilities. An AI without pressure sensors, one that has never touched sand, will never understand the universe.
Besides, the distinction between "high-level reasoning" and "low-level sensorimotor skills" seems fairly weak. Checkers already starts to blur the line: the problem space can be modeled as pattern recognition and tactics (like how humans model their own gameplay), or it can be modeled as a "dumb search" through a decision tree (like how a computer algorithm might play). Then you get to something like chess, whose decision tree is prohibitively large for a "dumb search," then face recognition, then natural language processing, etc.
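The "dumb search" framing can be sketched in a few lines. This is a toy minimax over an invented two-ply tree, nowhere near a real checkers or chess engine:

```python
def minimax(node, maximizing=True):
    """Exhaustive 'dumb search' over a game tree.

    A node is either a leaf score (a number) or a list of child nodes.
    """
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Invented tree: the maximizer picks a branch, the minimizer picks within it.
# min(3, 5) = 3 on the left, min(2, 9) = 2 on the right, so the maximizer takes 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree))
```

The point of the comment survives the sketch: this search is trivial to state, and what separates checkers from chess is only that the same tree becomes astronomically large.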
Even as a non-programmer (I am a programmer) I might relate well to an ordinary desktop/laptop running my Excel spreadsheets. I can create a spreadsheet, enter data in cells, enter formulae, format the content beautifully, specify and view charts of the data I'm entering and information I'm computing. I might be able to respect and appreciate the beauty and complexity of how the spreadsheet program was implemented in an abstract sense. I might describe to another person my ideas about how the spreadsheet program was created, its major features and concerns, and its obvious complexity. What I'd be missing though, likely, is (a) the complex interface between what I see and what supports that experience behind the scenes; and (b) the 50+ years of computing technology under the hood that has evolved to support my narrow and visible relations with my Excel spreadsheets.
From the user's view, the Excel spreadsheets, Windows Explorer, the Start button, etc., are the aspects of the computer analogous to a human's thought processes. They're visible and explainable. The user might have some vague notion that files are stored on disk, that there's something called a CPU that does the computing, etc. The user has no clue, though, that the Excel spreadsheet program itself contains but a very small portion of the effort required to make its visible manifestations happen. There's an entire support system: file system, CPU, memory, buses all over the place, GPU, video display, chips, specialized interfaces to I/O and other subsystems, ASICs, semiconductor physics, electricity, magnetism, etc. The hardware, firmware, and software for the latter have had 50 years to evolve and mature. To a normal user these aspects aren't understandable. They understand Excel.
And so for us, we can understand and describe human thought and cognition in an abstract way. But most thought is below the level we're conscious of, and supporting that thought is an entire interface with the physical elements of the body, its nervous system and autonomous function, and the interface of these with the brain.
Consider this: while reading, most humans use a different part of the brain to register consonants and vowels. No matter how much we like to think we learn language in an orderly fashion, that is not the case. Our reading and speaking skills are simply built over time and experience by having neurons connect as we experience visual words and other people talking; formal language instruction probably plays a secondary role of attaching labels to already-built neural networks.
Brains are very good at fuzzy, highly parallel tasks and bad at sequential ones. Computers suck at those fuzzy parallel tasks, but are rather good at accurate sequential ones. People are easy to train individually; computers take a lot of up-front effort, but after that it's easy to replicate the result.
Making a robot hand pick up an egg and a cup of coffee and turn a wrench isn't difficult provided that you build in similar feedback loops and low-level "firmware" that the brain does for us, unconsciously.
But if you did that it would take tens of kilowatts to run a halfway decent robot. And that's clearly ridiculous! So nobody does it.
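As a rough illustration of the kind of feedback loop meant here, consider a proportional controller easing a gripper toward a target force. All the numbers are invented, and real robot firmware layers many such loops with sensor fusion on top; this is only the skeleton of the idea:

```python
def p_controller(current, target, gain=0.5, steps=20):
    """Minimal proportional feedback: repeatedly close a fraction of the error.

    Each iteration moves 'current' a fixed fraction (gain) of the remaining
    distance to 'target', so the error shrinks geometrically instead of the
    actuator slamming straight to the setpoint and crushing the egg.
    """
    for _ in range(steps):
        current += gain * (target - current)
    return current

# After 20 steps at gain 0.5 the remaining error is (0.5)**20 of the original.
grip = p_controller(0.0, 1.0)
```

The unconscious "firmware" in the human arm is doing something of this flavor continuously, which is part of why the task feels free to us and expensive to a robot.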
He was instrumental in helping me out when I was new to the game and of all the investors/angels I have dealt with, he is the most down to earth and awesome in many ways.
I feel very confident that he is one of the few guys in the valley who actually cares about helping entrepreneurs and is not in it for the cash. Angellist's success is proof of that.
What is the strategy in deciding who to follow?
How do you attract people to follow you?
Programmers certainly do it. Laughing at noobs and being mean to them is pretty much the sole purpose of IRC, unless I'm mistaken. Surfers do it. Climbers do it. I've even seen rocket scientists do it.
The interesting thing is watching which members of a given group behave this way.
It's not everybody. There's a certain skill range where you find this behavior. Generally it ranges between "reasonably good" and "better than most people I know", and it grows exponentially in that range (though, again, only in people who are given to such behavior).
But there it stops. Once you hit a threshold of "better than pretty much everybody in the world, even those who have dedicated their life to this stuff", you don't really see this sort of elitism anymore.
I live in the climbing mecca of Fontainebleau, and can watch first hand as 7a boulderers from around the world descend and act like jackasses trying to scootch their butts off the ground on problems that are hard (but not world class) while scowling with superiority at the lowly rabble that might dare touch the holds of their project. It's best to simply wait until they give up before going over and doing the problem.
But occasionally you see a guy working an 8a. That's pretty stout by anybody's definition (even at font), but he's not shouting or swearing at it. He's just calmly doing his thing, uninterested in being the center of attention, and more than happy to talk to anybody who walks up without the least hint of snoot.
I think you find the computer programming equivalent of that guy from time to time too. He's the "bourne shell" guy that another comment mentions downthread, and he's above the elite.
The cool thing is that you don't have to be as good as him to act like him. All you need do is not be a dick.
Miguel de Icaza had a blog post on this, I think. The problem with large open-source projects is that they have a lot to do, and simply don't have time to thoroughly follow up on all the small contributions that are riddled with naive errors and plain formatting issues. Not to mention the big ones that come with architectural changes without explanations.
I think it's unfair to call these charity workers jerks, just because they are trying to make light of a dire situation.
Yes, it can hurt if your contribution is coldly cast aside, and yes, it would be much better if they warmly took you in and taught you their ways. But if the OSS project leaders don't keep up the constant stream of contributions, improving the project all the time, the project will die and all the work will have been in vain.
On a side note: which project will you be contributing to this Christmas? It is charity time after all, and a bunch of them could use a commit or two from your hands :) Just be sure to read their code-style documents ;)
In my experience, most open source project leaders are very congenial and gracious that you're spending time on their project. I did some poking around and couldn't find any OSS leads that disparaged their contributors.
Not sure if the lack of examples was an attempt to not "name and blame" or if there aren't many good ones. OP, to be clear, this isn't merely leads saying "this code/feature/suggestion is inappropriate" to pull requests, but honest malice?
If you're a young developer, or a seasoned one for that matter, and the urge to put down the work of someone else tugs at you, consider this:
I recently had the pleasure of meeting Steve Bourne, inventor of the Bourne Shell (as in, /bin/sh on every Unix system ever). Here is a guy who was literally sitting next to the guys who invented Unix WHILE THEY INVENTED IT. And all this time later he's surprisingly humble, friendly, and genuinely interested in what other, younger developers are doing.
If a guy who has earned the right to be smug several times over treats people with respect, what right do we have to do otherwise?
This is why I commit (no pun intended) to try my best to not be like that with any of my projects. Now, to be fair, none of the Fogbeam projects have a lot of outside contributions to date, but every time someone has contacted me, I've tried to respond in a polite, reasonable and appropriate manner.
One thing to consider, when interacting with people you don't know, is that you don't know what you're possibly getting. We got a request once, for permission to take our code, make it work with MySql, and use it for some academic research. Now that was already allowed by the license anyway, but I took the time to respond to the guy, and had a few chat/email interactions with him as he worked on his project, even though I had no idea who he was, how important the project was, or if anything would ever come of it. A year or so later, I get an email saying "Hey, here's a pre-print of the paper we published, it's being presented at $PRESTIGIOUS_CONFERENCE, and we mention your project in the paper". That turned out to be a nice "feather in the bonnet" for us and helped get the project some visibility it would not have gotten otherwise.
Honestly, I don't see any value in being dismissive, insulting or demeaning towards anyone, just because they aren't already an expert in your project.
I consider people who are "the elite" to be folks like Larry Wall. Or Guido van Rossum. Or someone like Ian Lance Taylor (who has hacked on many things in the GCC/binutils toolchain). Their projects are known by a bit more than a mere "hundreds of thousands of people", and they are definitely not jerks.
The reality is, if you want to be very successful, especially in a project where all of the contributors are volunteers, you can't be a jerk, because then people won't want to work with you. In the very early days of NetBSD, there were quite a few people on the core team list who were quite disagreeable to be around. One of them was in my work and social circles, and it's one of the reasons I chose to work on Linux instead of NetBSD. But even NetBSD is known by more than "hundreds of thousands" of people.
And that's the key --- yes, being a jerk will probably be a strong negative factor if you want your project to be one of the really top, well-known, successful projects. But you can be a jerk and still have a moderately successful OSS project. Because at the end of the day, for better or worse, people will overlook someone being a jerk if they have a good, solid product to offer. This is true outside of the OSS world as well, of course. As far as I'm concerned, neither Larry Ellison nor Steve Jobs would win a nicest-person-of-the-year award. But their products were sufficiently good that people were willing to overlook their personality traits, and indeed even idolize them as positive examples of leaders in the tech industry.
The best way to respond to that is to politely request what you need, and then ignore them if they won't be helpful. ...but humans don't always respond in the best way. Another possible response is to answer harshly and critically in order to establish a hierarchy: "I am not here to serve you. Your patches may be accepted if it suits my fancy." Neither extreme, being high and mighty or allowing people to simply abuse you, is ideal.
Let me quote a recent example from IRC:
> XXXXX interesting how the number of new github issues went down since i started ignoring them :)
> XXXXX could be coincidence, but i suspect having a few open tickets discourages the more frivolous requests we usually got there
Nice attitude there! After reading several such comments and some diatribe on github (following a bug report), I really had difficulties justifying the use of the software developed by this guy, especially after having been warned about this earlier by a co-worker ("the project is fine, XXXXX is the only problem with it") and not taking it seriously because I thought he was exaggerating (I'm not really into personality cult etc.).
I was on the receiving end way back in 2002. I'd just written an RS232 library for the .NET Compact Framework that ended up in the Microsoft .NET Compact Framework Core Reference. It was also referenced by an MSDN article, so got a lot of attention.
I left a bug in there which broke anything that didn't use default settings. The abuse was astounding. It was the last code written on my own time that I ever published.
Constructive criticism is a great thing, but telling someone they will never work again in this industry because they make a small CSS error on their first ever post-college project is another.
I've been fired from one job for standing up in a meeting and calling the boss a self-important asshole & refusing to retract it.
Life is too short to let these types of people get you down; we are all just floating on a rock in space & going to die in a few years... what's important: http://www.reddit.com/r/pics/comments/14su4p/he_sang_to_her_...
I am glad someone is speaking out against it, because it sucks.
One of the major reasons people don't learn to be better developers is because of "elite developers" that have come before them that take pride in humiliating them.
This article is very on-point. I nearly stopped learning Rails because of the torment from #RubyOnRails on IRC. But then I remembered it is the internet and everybody is probably a dog.
I am glad I didn't stop learning - and I am very conscious of this with "noobs" now.
"stupid pull request of the day site:twitter.com" returns one result, and it isn't negative in any fashion.
The article makes a good point. But there's a lot more we can take from this: for any community to succeed, it needs to be just that, a community. A place where people help each other grow.
Exactly, and that's why they do it to others! That may be their default (not consciously-chosen) behavior because it's what they got used to.
"you are wrong about this, because I say you are wrong." [turned out: he was dead wrong!]
"stay home if you want to answer phone calls from your dad." [knowing he is in the hospital]
"today you have been all day on the phone." [after talking with dad for 3 min 35 sec]
"stop pinging google to check if the net is working."
"ping doesn't tell you anything."
"I hate those Chrome tabs -- they are affecting my search results."
"I'm a CTO - I can be rude."
"Don't work here if you have family."
"If we succeed with this project [24-month period], we may get a million-dollar bonus" [knowing perfectly well it's impossible and simply not true]
"I fixed Asia!" -"Cool!" -"What did you do?" -"At 3am? I was with my wife and kids." -"Well, I hope that helped a lot in your career." [next day, after he IT-supported Asia at 3am]
"You see my desk? Apple, Apple, Apple..."
"You are on a McDonald's French Fry Guy schedule, ha!" [after working 14 hours straight, from 7am till 9pm]
"You work long hours and are not paid for those, because you are upper management and should be proud of it." [after working 14 hours straight]
"Don't ask for that, you are not upper management!" [when something failed to work and I needed to figure out the details to troubleshoot]
"You are upper management, you should know this!" [when I didn't know something IT-related]
"You won't get the bonus, you are not upper management!" [bonus question around Christmas time]
[email provider down; on the phone with support] "Why are you calling them? chat-support is faster!"
[days later, the same issue; on the chat] "Stop wasting time on chat, just grab a phone and call them!"
[after 12 hours straight of work on an 8-hour schedule] "I completed the project, I am going home." -"Fine with me, as long as you are a Symfony Framework specialist." [next day, after staying an extra 2 hours to understand the basics of the Symfony Framework] -"Never mind, we won't use it anyway!"
Those were the perks... there were some better moments here and there, but honestly I started taking notes way too late. But my tech friends always loved to ask what's new with my CTO. They used to call him "Chief Toilet Officer", because frankly speaking he couldn't do shit right.
I'm going to come right out and say this: Some people should not contribute to a FOSS project. Whether that's because they can't deal with other people or because they're not willing to put in even a modicum of effort to work effectively with other people. If you go up to a group of people who are used to doing things according to procedure X and you blithely ignore it, you really should not be surprised when your efforts are met with derision at best and hostility at worst.
While I understand the point the author is trying to make here, and even sympathize to a point, the mindset isn't going to change, nor should it. The bar to entry is part of what makes high-quality projects high quality.
This leads me to think some aspects of personality and how we treat others are innate. Jerks can be talented and successful too, and just remain jerks. It takes all kinds.
I have had a few drinks so it's hard to properly articulate what I mean, but maybe somebody else knows what I'm talking about.
Can't we be given some specifics, so that we don't have to guess which famous OSS author actually typed "HAHAHA" at a pull request?
"While it's possible that within, say, 10 years, internet access will have reached near global ubiquity, that shouldn't stop us from actively finding ways to work around its current limitations, to reach populations in need; waiting 10 years means letting already disadvantaged communities fall another generation behind, perpetuating the global digital divide as we move into whatever its next instantiation may be."
++. Kudos for highlighting (and tackling) the compounding nature of inequality.
I mean a full-blown browser of current-generation abilities, but with the option of completely keyboard-driven interaction. Concise input of small commands (navigation, data extraction, exploring and massaging of extracted data, data uploading, etc.) that can be composed to form more complex interactions.
It should have full interoperability with the CLI; don't go re-inventing grep, sed, etc., but instead stand on their shoulders by interoperating fluently with them. In practice I expect this means the "programmer's browser" functions as a full-blown terminal emulator too.
If it were a text editor (and it should be a competent one of those, but the web is more than a textual medium so some implementations should certainly offer richer functionality than plain text editing) then the net and its many protocols would be its filesystem.
It should be extensible, but only by one language. Competing programmer's browsers could offer alternative extension languages as their differentiator.
It should be entirely open for extension. By that I mean more than plugins: I should be able to rip out core parts of it and replace them with alternative implementations (A's needs won't match B's needs, and so on).
It shouldn't go too far, though; it shouldn't be a full-blown operating system. Why re-invent the wheel? Reuse the thousands of man-years of effort in existing OSes. It should be multi-platform.
We almost have the requisite component parts available now. Who's going to get the ball rolling with the new wave of "programmer's browsers"?
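A hedged sketch of the kind of composition described above: extract links from markup and massage them with the standard tools such a browser would treat as first-class. The input is inlined here instead of fetched, and the markup is invented:

```shell
# Extract the href targets from a scrap of HTML using only grep and sed,
# the way a "programmer's browser" would let you pipe page content around.
printf '<a href="/a">A</a>\n<a href="/b">B</a>\n' \
  | grep -o 'href="[^"]*"' \
  | sed 's/^href="//; s/"$//'
# prints the two extracted paths, /a and /b, one per line
```

In a real session the `printf` would be the browser handing the current page's content to the pipeline, which is exactly the interoperability being asked for.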
Unfortunately, vmail doesn't seem to consider internationalization important at all. US users save a single press of the shift key when starring messages (,8 instead of ,*), but on a german keyboard, where the star is shift-3 instead of shift-8, the shift-less version of starring trashes the message instead. Not a typo I'm eager to make.
Problems like this are, sadly, way too common.
Why on earth would you do that?
But better than that... oh my word, the documentation. How I wish every command line app had that kind of documentation. Just one page that documents the shit out of it.
There's only one thing that would make it better: if that documentation came bundled with the RubyGem so I could read it offline with `ri`.
Alternatively, I see there are requests for using OAuth, which would be a similar approach to have a "token" that can be revoked.
I have folders with tons of photographs in them that I upload from my desktop. Whenever I'm on my iPad or iPhone, finding a picture I just uploaded is an absolute nightmare if I can't remember the name... I usually have to load up dropbox.com in my browser, do the sort there, and download the file.
I guess my particular use case doesn't occur that often :(
The navigation bar (title bar) up on top looks pretty horrible. Opacity of the icons flickers when navigating back and forth, and "very long folder names..." are truncated differently the moment animation starts, causing them to jump around and overlay buttons for the duration of the animation.
I blurted out "Don't those two things contradict each other? You can't have something that's easy and have regular expressions!".
I immediately figured I'd totally blown my chances and discovered afterwards from the recruiter that the one thing he really liked in candidates was to be challenged. Me and my big mouth got me that job.
As an example, you can do the following in Tcl as the article points out:
set a pu
set b ts
$a$b "Hello World"
Another interesting issue is the lack of types and how Tcl interprets them. Ask the user for a number and check if it's between 1 and 3, rejecting it if it isn't. Works fine until the user tries the number 0x01, which matches 1 in some places but not in others. Gave me a lot of appreciation for typed languages where an int really is an int.
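The same pitfall shows up anywhere values live as strings and only sometimes get parsed as numbers. A hedged Python rendering of the 0x01 mismatch (not Tcl's semantics, just the analogous hazard):

```python
x = "0x01"                       # user input arrives as a string

# Parsed numerically (base 0 honors the 0x prefix), 0x01 really is 1...
numeric_match = int(x, 0) == 1   # True

# ...but compared as text, it is nothing like "1".
string_match = x == "1"          # False

print(numeric_match, string_match)
```

Whether a given comparison in the program takes the numeric path or the string path decides whether 0x01 "is" 1, which is exactly the inconsistency described.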
I find the most unproductive use of my time at work is when writing TCL code. Switching to TDD has helped a lot, but I still find the language maddeningly frustrating.
I hate that this is both valid syntax with wildly different meanings:
set env(VAR) "value"
set $env(VAR) "value"
I hate that the TCL error message line numbers are RELATIVE TO THE PROCEDURE instead of relative to the file.
I hate that the language is fully interpreted and not compiled, so you can have a syntax error in a code branch that isn't triggered just lying there for months if you don't fully unit test every code path.
I do not understand how anyone could like TCL unless they were using a linting tool to give the robustness of debugging that most other languages have.
Because of our code base and the 3rd party tools I'm not able to lint our code and it is a brutally painful language to debug.
It so happens that I was getting into Lisp at the same time, and the parallels between them are obvious. You effectively get macros, so it's a pretty expressive language. There's even an object system in Tcl based on CLOS. But ultimately, it feels like a language with lots of good ideas, implemented poorly. I'm not a PL theorist or purist, but "everything is a string" constantly feels like a really poor abstraction.
Tcl is practical for small, quick projects, but you soon run into insane things like the fact that curly brackets are syntactically significant in strings - and comments.
Nowadays Lua has mostly taken TCL's place as an embedded scripting language of choice and I must say I'm a little sorry, there are some things that I miss from TCL.
I remember being annoyed by Tcl back then when I was modifying existing scripts. Although now I can see that it probably isn't the easiest language to learn for a beginner.
I'm sure a lot of people's first, and probably only, experience with Tcl was from Eggdrop bots.
I learned a lot from "Tcl for Web Nerds" and its companion "SQL for web nerds" (all links listed here: http://openacs.org/doc/ ). Back when I started Tcl, I was more like an emacs-lisp script kiddie. Because Tcl was so easy to learn and because OpenACS was such a great MVC-style framework, it didn't take long to master the framework itself and dive into the interesting things that were related more to the architecture of a web app and not just the syntax of a programming language.
The "whereas" poem, as I like to call it, that announced Miguel Sofer's inclusion in the Tcl Core team was real fun to read too - http://code.activestate.com/lists/tcl-core/1983/
Miguel Sofer had described an algorithm for representing hierarchical data in an RDBMS. What he put forward could be thought of as a different kind of nested-set representation. Whereas the nested-set approach involved keeping track of two numeric values (left & right) for each node in the tree of relational records, Miguel Sofer's algorithm used the ability to lexically sort a base159-encoded string. This way, tree operations could be implemented via sub-string matching and sorting.
His algorithm was implemented in OpenACS because it allowed an efficient implementation of the OpenACS nodes table. Each URI in OpenACS has a node record associated with it and all these nodes are hierarchical records - http://openacs.org/forums/message-view?message_id=16799
The OpenACS nodes system allows the implementation of a fine grained permissions system which enables a child node to auto-magically inherit the permissions of its parent node - this is, if the child node didn't have any specific permissions set on itself - http://openacs.org/doc/permissions-tediously-explained.html
When I was learning Ruby I implemented this algorithm using ActiveRecord - https://github.com/gautamc/hierarchical_objects The utdt.edu link hosting the pdf that Miguel Sofer created is broken now; I found a copy here: http://www.tetilab.com/roberto/pgsql/postgres-trees.pdf
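The core idea above (not Sofer's exact base159 encoding, which is in the linked pdf, but the same principle) can be sketched in a few lines of Python: give each node a key built from its parent's key plus a fixed-width, zero-padded segment, so that lexical order of keys matches tree order and every descendant of a node shares that node's key as a prefix. The function names here are my own, purely illustrative:

```python
def child_key(parent_key: str, index: int, width: int = 4) -> str:
    """Build a sort key for the index-th child of a node.

    Each level appends a fixed-width hex segment, so sibling order
    and depth-first tree order both match plain lexical string order.
    """
    return parent_key + format(index, f"0{width}x")

def descendants(keys, node_key):
    """All strictly deeper nodes share node_key as a prefix,
    so subtree queries reduce to sub-string (prefix) matching."""
    return [k for k in keys if k != node_key and k.startswith(node_key)]

root = child_key("", 1)          # "0001"
child = child_key(root, 2)       # "00010002"
```

In SQL this is what makes the subtree lookup a simple `WHERE sort_key LIKE :node_key || '%'` with an index-friendly `ORDER BY sort_key`.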
Expect can be used to automate CLI tasks, but for me it is indispensable for testing programs with text-only UIs (I get to write a lot of those).
After reading this article I wonder how Lua and Tcl compare. It seems like Tcl is more like Lisp than Lua really. Does anyone have experience with any two of those languages and would care to comment on their differences and similarities?
There are more legacy Tcl articles at "reddit.com/r/tcl".
And I myself am surprised at how many photographers have managed to write their own scripts, even non-programmers. I've been thinking about adding support for Python as the scripting language, but I can't justify it just for being fashionable.
Sadly his apparent ignorance or lack of concern regarding Tcl's numerous flaws leads me to question his engineering maturity. For instance, nowhere did he warn of how simple typo errors in Tcl's variable names and interpolated strings leads to a layer of engineering hell beyond anything Dante imagined, a place I've been and vowed never to return.
The core benefit to Tcl is also its main detriment. Tcl takes off in places like OpenOCD because its incredibly powerful simplicity makes it incredibly tiny. You can throw it in pretty much anywhere as an embedded language and it will buy you a TON of functionality. If you want it to be performant, however, you'll wind up writing a lot of piecewise optimizations which will ultimately cause you to lose the size benefit.
So no, it's not a toy language by any means. And yes, its simplicity yields incredible LISP-like power. However, it's not a language I'd _ever_ use to write the core of anything where performance is of concern.
Bane. Of. My. Life.
Later, working at our studio, I've stumbled on a couple of sound tools written in TCL, and lately had to dabble once in a while in the MacPorts land (also TCL).
But then most of the people I've asked don't know about it...
I never found a use for Tcl myself but I had no problem with it.
Perhaps the first should be called "Odin" and the second "Loki."
I don't believe we will truly understand the Higgs boson until we fully understand gravity and, more so, explain why it is weaker than it should be.
Sadly, at least in my case, most people are using Gmail on the Web and they filter the HTML in numerous mystifying ways (although this is ultimately good for us as users IMHO).
Luckily there's still a lot that most mass e-mailers, myself included, can do to make e-mail pleasant for readers without niceties like custom fonts... but one day it would be great to universally expect something a little more elaborate than HTML 3.2 ;-)
Here's a scenario that plays out in WoW all the time and it happened to me. Basically, a user quits playing WoW and their account gets hacked at some point after they quit. The hacker then turns on 2-factor auth via the WoW authenticator app. It is now impossible for the original user to log in to the account or reset passwords. To fix this you must argue and explain to customer support that the account was hacked and that the 2-factor auth is preventing you from resetting passwords and such.
So, unless you turn on two factor auth up front for all users, it's going to actually make it worse for the end user if their account gets hacked. So, like captchas, it's solving one problem and creating another for the user. I'm not sure that is the best solution.
Why are people manually typing in keys? The authenticated website could just have an API with a receiving point for a token. A press of a button in a mobile app would unlock the login form for a short period just like a normal 2FA key, only with typing from the user.
You could use the numerical codes as a backup if the mobile device wasn't network accessible, but just being able to "push button" authenticate in a mobile app would make them a lot more usable normally.
Has this already been done, and I just haven't heard of it?
Also, let administrators enforce 2fa on all users of an account, and/or see the status of all users of the account. Also being able to enforce password complexity requirements would be nice, but 2fa might be sufficient.
1) I'm surprised it didn't happen sooner. There are a few turn-key two-factor auth solutions, and I expect having this added security is a major benefit for their customers.
2) I'm surprised they chose to use Google Authenticator. The favourite in this space seems to be Authy; off the top of my head Cloudflare and DNSimple both use them. Any thoughts on the pros and cons?
A few peeps from my university started Toopher, though, which looks promising since it leverages your phone - https://www.toopher.com
From a student's perspective, most students have close to no idea what kind of company they want to work for when they graduate, and often have a very limited concept of the reality of what working at different companies would involve.
From a startup's perspective it's very hard to target students; a typical serious recruitment campaign would cost 6-7 figures to run and is really only an option for companies who are hiring a large number of candidates (as you can amortize the costs of having a presence at careers fairs, etc.) - I would imagine Google spends millions if not tens of millions on student recruitment every year.
Most startups aren't Facebook or DropBox. Look at First Round's portfolio page, chances are that as someone who's familiar with the startup scene you still won't recognize most of their consumer facing startups let alone their b2b startups.
Most students will never have heard of these startups. Even if a student cares about the domain of the startup they probably still won't have heard of the startup.
You can't be the candidate's first choice if they've never heard of you. But once you've got their CV and you've decided that you want to go after them you can sell them on your company and make it their first choice.
I'm also willing to bet that most startups would rather hire a stellar engineer with average passion about the domain (but cares about the tech) over someone with stellar passion and average talent.
The students may be better served if there was a list of tags/terms for the all the companies so the students can go through them and check/tick the ones they want to apply to. Shotgun mass spam doesn't ensure a cultural fit.
(Should +100 (for how many people in the company) even be an option? Is a company still a startup at that point?)
On the one hand, this is something I have always felt needed to exist. In today's weak job market it's important that we do whatever we can to match talent with openings.
On the other hand, I think startups are the wrong type of company for this. If I'm hiring for a small (<50 person) company I want to make sure the person wants to work on my particular product and help with my particular vision. This would be great for big tech companies, though, where they just need development talent and there is sure to be some internal project/product for which you're a good fit.
I really like this part: "our Talent Team will review your submission and if you're a fit, we'll follow up directly and connect you with relevant companies". They step in and do the work to connect you with relevant companies which is a great value add opportunity for a VC.
Think of all those times you google something just to click the first link. "twitter gem github", "ebay tickle me elmo", whatever.
Reassign Alfred's I'm Feeling Lucky (Google) hotkey to "L".
Now you can Opt+Space (or whatever brings Alfred up), "l twitter gem github" or "l ebay tickle me elmo" and it brings you directly to the webpage.
It's also nice because it lets you type where you want to go instead of wasting brain cpu cycles remembering the URL. "l hacker news". "l rails guides". Or even "l ebay". "l github".
And you don't even need to have your browser open. Just do it from any other app. It's huge.
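Under the hood this trick is just a URL pattern: Google's "I'm Feeling Lucky" search redirects to the top result when the `btnI` parameter is set. A tiny illustrative sketch (the function name is mine; the `btnI=1` parameter is the behavior the hotkey relies on):

```python
from urllib.parse import urlencode

def lucky_url(query: str) -> str:
    """Build an "I'm Feeling Lucky" search URL; btnI=1 asks Google
    to redirect straight to the first result for the query."""
    return "https://www.google.com/search?" + urlencode({"btnI": 1, "q": query})

print(lucky_url("twitter gem github"))
```

Pointing Alfred (or any launcher, or a browser keyword bookmark) at that URL template is all the workflow does.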
If I'm reading this correctly, with intelligent workflows I can populate my Alfred results list with carefully curated search results.
This looks like the push I need to pony up and support Andrew like I should've been doing all along.
Workflows also remind me of Apple's own Automator.
After losing almost 2,000 sailors and 4 Navy ships in an accident attributed to poor navigation, the British government offered the Longitude Prize - which was worth millions of dollars in today's money.
From Galileo and his method of timekeeping by tracking the moons of Jupiter, through to John Harrison and his invention of the chronometer - which ended up winning most of the Longitude Prize - the effort that went into finding a solution had many side effects for science, and the solution opened up the world to better navigation and eventual colonization.
The entire story is chronicled in the book 'Longitude', which was a best seller in 1998. It is well worth a read. Wikipedia is also a good starting point for finding out more.
It's this process that will answer other open questions in our physics engine - for example, whether or not photons degrade, or whether it is possible to remove the Higgs boson from matter, rendering it massless.
Also, why I'm absolutely loving the coursera class on astronomy: https://class.coursera.org/introastro-2012-001/.
If you have not peeked into it yet, the way Dr. Plesser explains concepts and bridges them with the historical advances leaves a lasting impression. I wish we had classes like this back in school.
Is there any kind of site/extension that displays hacker news, but gives stories the titles they were originally submitted under? If not I guess I'll do it myself.
A real renaissance (or post-renaissance) man! I love imagining a scientist heading up a bunch of police!
Remember that Descartes also authored major mathematical works. Philosophers of that day could and did do a little bit of everything. Anyway, his argument, and the other arguments of the time, were based on things like the fact that if the speed of light were finite, you would notice things like the sun, moon, and earth being out of alignment during an eclipse, since the earth's shadow would lag behind it. Since no such misalignment was observed, the speed of light must be infinite. Later philosophers pointed out, of course, that it was also possible that the speed of light was finite but very fast, and the eclipse lag time was immeasurably small. Then Roemer settled things once and for all.