hacker news with inline top comments - 23 Jul 2017
1
Computational Linear Algebra fast.ai
217 points by julianj  7 hours ago   10 comments top 6
1
fdej 2 hours ago 1 reply      
> Locality: traditional runtime computations focus on Big O, the number of operations computed. However, for modern computing, moving data around in memory can be very time-consuming

I need to nitpick here... Big O notation is a way to describe growth rates of functions. You can count data movements (or anything else) with Big O.
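To make the locality point concrete, here's a minimal sketch (my own illustration, not from the course). Both loops below perform the same Θ(n²) additions on a C-ordered array, yet the column-wise version is typically much slower because each access strides across memory - and you could just as well write a Big O bound on the cache-line transfers, which is the point about Big O counting whatever you choose to count.

    import time
    import numpy as np

    a = np.random.rand(4000, 4000)  # C order: rows are contiguous in memory

    def sum_by_rows(m):
        # Contiguous access: each slice m[i, :] is one sequential block of memory.
        total = 0.0
        for i in range(m.shape[0]):
            total += m[i, :].sum()
        return total

    def sum_by_cols(m):
        # Strided access: each slice m[:, j] touches one element per row,
        # jumping a whole row's worth of bytes between consecutive elements.
        total = 0.0
        for j in range(m.shape[1]):
            total += m[:, j].sum()
        return total

    for f in (sum_by_rows, sum_by_cols):
        t0 = time.perf_counter()
        f(a)
        print(f.__name__, time.perf_counter() - t0)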

2
kyrre 9 minutes ago 0 replies      
Why 'computational' and not 'numerical'?
3
rabreu08 6 hours ago 1 reply      
Looks like a good course. I think it would benefit from a module on implementing some basic linear-system-of-equations solvers, like gradient or steepest descent, or even GMRES/MINRES. The amount of knowledge that I gained from trying to implement these was remarkable.
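For anyone wanting to try, here is a minimal steepest-descent sketch for a symmetric positive-definite system Ax = b (my own illustration, not course material):

    import numpy as np

    def steepest_descent(A, b, tol=1e-10, max_iter=10000):
        # The residual r = b - Ax is the negative gradient of
        # f(x) = 0.5 x^T A x - b^T x, so step along it with an exact line search.
        x = np.zeros_like(b)
        r = b - A @ x
        for _ in range(max_iter):
            if np.linalg.norm(r) < tol:
                break
            Ar = A @ r
            alpha = (r @ r) / (r @ Ar)  # optimal step length along r
            x = x + alpha * r
            r = r - alpha * Ar
        return x

    # Quick check on a random, well-conditioned SPD system.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)
    b = rng.standard_normal(50)
    print(np.linalg.norm(A @ steepest_descent(A, b) - b))  # tiny residual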
4
ceyhunkazel 1 hour ago 0 replies      
The top-down approach is the best way to teach and learn. Well done!
5
stuartaxelowen 1 hour ago 1 reply      
... What part of linear algebra isn't computational?
6
ianai 4 hours ago 0 replies      
I've been wanting to do a math refresher in linear or modern algebra for a while...tempting.
2
Yellowstone Bears Eat 40K Moths a Day in August yellowstonepark.com
172 points by curtis  10 hours ago   98 comments top 10
1
dwills1 6 hours ago 1 reply      
The summer of 2017 is going to be bad for the miller-moth-eating bears of the Colorado Rocky Mountains (note: black bears only - there are no grizzlies in Colorado). It seems there was a late freezing rain/frost that killed many if not most miller moth grubs this year. I normally experience an onslaught during the migration, as I'm on their path from the eastern plains to the mountains. In normal years I see dozens per night from June 1 to July 15, with several somehow squeezing into my house each day. This year there were essentially zero miller moths.
2
d-sc 8 hours ago 1 reply      
Kinda cool to see this on Hacker News. My landlord manages bears for the greater Yellowstone area, so I have heard firsthand accounts about this.

The bears effectively scoop the moths from the rocks using their claws. It's one of the most calorie-dense foods they have.

3
almostApatriot1 9 hours ago 1 reply      
There was a good issue of National Geographic dedicated to Yellowstone that covered this last year:

http://www.nationalgeographic.com/magazine/2016/05/yellowsto...

Basically, the grizzlies there bulk up on moths and nuts. The moths are 65% body fat.

4
schwarzrules 6 hours ago 1 reply      
Could protect the moth population with a more liberal policy to provide bears access to picnic baskets...
5
WheelsAtLarge 9 hours ago 5 replies      
Findings like this are why I find it very unpleasant when people start talking about destroying whole species because they are pests to humans.

The latest one I've read about is the idea that we need to get rid of all mosquitoes. They are pests to us, but I wonder how many other species depend on them for food and how their extermination would affect the web of life.

6
Mothra555 3 hours ago 2 replies      
My question is this: why would pesticides on the farms wiping out a lot of these moths be causing a problem?

The bears were there before the farms. If the farms are creating a lot of new moths that weren't otherwise there, wouldn't the pesticides be bringing things back into equilibrium?

I am really against using pesticides and I try to eat only organic products, but I am just wondering why no one else seems to be talking about this. Those mass quantities of moths being there aren't natural in the first place.

7
briga 5 hours ago 0 replies      
I don't think it's just the Yellowstone area. In Jasper I've seen bears doing the same sort of rock-flipping scavenging. The sheer number of moths they eat is pretty surprising though; I never would have suspected bugs to be such a big food source for bears.
8
ugh123 5 hours ago 2 replies      
So the way they arrive at the 40K number: since each moth is about 1/2 calorie and the bear requires at least 20k calories a day, they must eat 40k moths/day. Hmm
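Spelled out, using the figures from the comment:

    20,000 kcal/day ÷ 0.5 kcal/moth = 40,000 moths/day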
9
BigJono 9 hours ago 5 replies      
That seems like an extraordinary claim. If the bears were eating moths for 10 hours straight, they'd have to catch and consume more than one moth every second to hit that number... I must be vastly underestimating the number of moths in these caves or the amount they move around.
10
r00fus 5 hours ago 1 reply      
"As importantly, the moths provide a crucial food source in the face of declines of other bear foods"

Seems to me that bears usually eat other things, but those aren't readily available (likely due to human activity).

3
Satellites Taking Pictures of Rockets Carrying More Satellites planet.com
148 points by privong  10 hours ago   18 comments top 4
1
oelmekki 36 minutes ago 0 replies      
Each time I'm sad that we're not exploring the galaxy Star Trek-style yet and that the conquest of space flopped, I remember how many commercial satellites are up there.

It has that familiar flavor of things you think just died after being hyped (the moon landing) and which you suddenly realize have become slowly, steadily, discreetly omnipresent without anybody noticing much (granted, satellites are not as cool as visiting other celestial bodies, but that's still growth, and in the right direction).

I would love to see a graph of the number of active satellites over time.

2
dopeboy 49 minutes ago 2 replies      
Kind of off-topic, but I figured I'd ask: how close are we to Enemy of the State-style satellites that can track movement like an overhead CCTV camera?
3
natch 9 hours ago 4 replies      
Would love to see an actual-speed version of this... what's the hurry?
4
siscia 2 hours ago 1 reply      
Do any of you have any idea how much the images from planet.com cost?
4
Open Bazaar decentralized Bitcoin marketplace openbazaar.org
318 points by amingilani  16 hours ago   207 comments top 30
1
SamPatt 10 hours ago 7 replies      
I work on OpenBazaar and I'm happy to answer questions.

This website links to the current version of OpenBazaar, but we're just about to launch a completely new version, which you can read about here:

https://medium.com/openbazaarproject/openbazaar-2-0-p2p-trad...

The 2.0 is built with Go and uses IPFS. It's open source and we welcome any developers into the project:

https://github.com/OpenBazaar/openbazaar-go

2
jasode 14 hours ago 3 replies      
Seems to be some confusion...

To clarify OpenBazaar's "No Transaction Fees", it means there are no marketplace platform fees.

To compare with eBay, to sell an item for $500 and charge $20 for shipping ($520):

- eBay fee: ~10% of final bid value + 10% of any shipping charge added on = $52

- PayPal fee: 2.9% of payment = $15.08

The bitcoin network would eliminate the PayPal fees. (The bitcoin fees would still exist but presumably would be lower than 2.9%)

The OpenBazaar platform would eliminate the eBay platform fees.

3
Dowwie 10 hours ago 1 reply      
Chris

If I understood the docs correctly, the OpenBazaar 1.0 server is written in Python, but since its release the team has worked on V2 in Go.

I'd really appreciate learning the motivation for the migration to Go, and I think others would as well. Go programs are faster than Python programs, but wasn't v1 viable - a server whose performance may have been good enough?

Please, enlighten!

4
mike-cardwell 11 hours ago 4 replies      
Installed. Run. First screen: Select your language. English already selected. Hit "Next". Nothing happens. No apparent way of actually selecting a language or proceeding. Uninstalled.
5
vit05 11 hours ago 2 replies      
People are talking about drugs or questioning whether it is really free. But my big question, from the point of view of someone who has owned a small online store, is security for the buyer. In a completely P2P transaction, the biggest difficulty lies in assuring both parties that they will be completely satisfied with the deal. The first purchase is the most important move in any market; paying a premium so that everything goes well is not so bad.
6
avaer 14 hours ago 3 replies      
> How are there no fees and restrictions?

_Someone_ is paying the TX fees. Who? It can't be a single party, especially the developers, without betraying the decentralization.

> Pay with 50+ cryptocurrencies on OpenBazaar: Bitcoin, Ethereum, Litecoin, Zcash, Dash, etc. Seller receives payment in Bitcoin

How does this work without a middleman or central party?

7
erlend_sh 1 hour ago 0 replies      
What's the business model for openbazaar.org, i.e. the core team? How are you going to sustain development?
8
hellbanner 12 hours ago 1 reply      
Hello, I am looking for a decentralized store that lets me:

* Host digital goods (like running a server, or issuing an asset ID that can be redeemed on a BitTorrent-style network for the file download)
* Receive payment > register an asset or return a download key to the account of the user who sent it.

I want automated digital good downloading, paying & receiving BTC. (Bitpay, Stripe etc pay in BTC but seller receives $Currency)

9
blairanderson 14 hours ago 0 replies      
It shouldn't need to be said: this does not remove fees from BTC/Ether transactions.

The feature is that it does not ADD fees to use the platform.

10
symlinkk 14 hours ago 2 replies      
It's not anonymous at all right now. I believe they're working on Tor support for the future however.
11
bigbass1 11 hours ago 0 replies      
Don't know why this is even being discussed - just use Syscoin with a VPN, Tor, and ZEC and you're sorted; everything is on the blockchain, with no servers to be traced. This is the setup: https://www.reddit.com/r/DarkNetMarkets/comments/5s63o3/guid...
12
mkj 3 hours ago 0 replies      
Is there currently a plan for how the OB company will make money?
13
Xeoncross 14 hours ago 2 replies      
Nothing is free, so here are my brief ideas for funding this:

- Ads

- "premium features" for sellers

- "promoted" sellers

- Fees for those that want to be "Secure Escrow and Dispute Resolution" accounts

14
dang 14 hours ago 0 replies      
15
hdhzy 14 hours ago 2 replies      
I can't find details on dispute resolution. Are they using multisig escrow transactions like Silk Road?
16
HirojaShibe 3 hours ago 0 replies      
I cannot wait for 2.0; it has been a long time coming.
17
matthewbauer 13 hours ago 6 replies      
Am I the only one here that gets a little terrified by this kind of thing? I mean, I'm fine if they're just selling recreational drugs. But we all know that there are much worse things people will be selling on these crypto markets than just plain old LSD and marijuana. Can we find a way to help law enforcement police, at the very least, the really, really bad stuff?
19
kepler 9 hours ago 1 reply      
Seems interesting, but I couldn't get past the language setup on Mac.
20
SpeakMouthWords 14 hours ago 4 replies      
How do the unit economics work here? To my understanding, BTC (even after SegWit) has non-negligible transaction fees. Is this site just eating into VC money to make things feeless?
21
omarchowdhury 14 hours ago 1 reply      
In the crypto-darknet community, the word is that this is where the drug trade is going to move to after the recent busts of the past weeks.
22
phaed 10 hours ago 0 replies      
Would like to see a sellers API / webhooks for automation.
23
optimalsolver 14 hours ago 2 replies      
What's the point if it doesn't support Tor or some other anonymizing service?
24
strictlyCrypto 6 hours ago 0 replies      
Is there a business model? Compensation for continued development?
25
acover 14 hours ago 2 replies      
How does this work? Where is the store list stored?
26
davidgerard 9 hours ago 1 reply      
Does this version still require the vendor to keep their PC online 24/7?
27
bobsgame 12 hours ago 1 reply      
It's just like Napster->Kazaa->Gnutella all over again! In 10 years there will be iTunes.
28
ultim8k 14 hours ago 0 replies      
This is huge!
29
dghughes 13 hours ago 0 replies      
Ah the good ole days when I had $5 in my wallet and paid by taking the $5 note out and handing it to the cashier.
30
notindexed 14 hours ago 1 reply      
Don't wanna be the party pooper, but Blockmarket wins the dex marketplace niche ;-)

http://syscoin.org/

5
Give in to Procrastination and Stop Prefetching (2013) [pdf] mit.edu
85 points by lainon  9 hours ago   10 comments top 5
1
wgjordan 7 hours ago 0 replies      
Abstract: Generations of computer programmers are taught to prefetch network objects in computer science classes. In practice, prefetching can be harmful to the user's wallet when she is on a limited or pay-per-byte cellular data plan. Many popular, professionally-written smartphone apps today prefetch large amounts of network data that the typical user may never use. We present Procrastinator, which automatically decides when to fetch each network object that an app requests. This decision is made based on whether the user is on Wi-Fi or cellular, how many bytes are remaining on the user's data plan, and whether the object is needed at the present time. Procrastinator does not require developer effort, nor app source code, nor OS changes - it modifies the app binary to trap specific system calls and inject custom code. Our system can achieve anywhere from no savings to 4X savings in bytes transferred, depending on the user and the app. In theory, we can achieve 17X savings, but we need to overcome additional technical challenges.
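A minimal sketch of the core idea (illustrative only - the system described in the paper works by rewriting app binaries to trap system calls, not through a Python API like this):

    class Procrastinated:
        # Defer a network fetch until the object is actually used,
        # and only prefetch eagerly when the connection is cheap.
        def __init__(self, url, fetch, is_metered):
            self.url = url
            self._fetch = fetch            # callable that does the real download
            self._is_metered = is_metered  # callable: is the connection pay-per-byte?
            self._data = None
            if not self._is_metered():     # unmetered (e.g. Wi-Fi): prefetch as usual
                self._data = self._fetch(self.url)

        def get(self):
            # On a metered plan the fetch happens here, on first real use,
            # so objects the user never looks at are never downloaded.
            if self._data is None:
                self._data = self._fetch(self.url)
            return self._data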
2
rixed 4 hours ago 1 reply      
I don't want to dismiss this work, which I find interesting, but there is still something sad about it. Estimating the cost of a connection can hardly be done without a change in the transport protocols, because the device does not know the cost of a fetch just by looking at the type of network it is connected to. Think about Wi-Fi hotspots backed by a cellular network. Think about data bundles. This is way easier to address in the transport protocol.

Why can't we spend any effort fixing the root causes of anything, and instead treat every early tech as a given and spend so much effort developing workarounds?

4
justinsaccount 7 hours ago 1 reply      
> prefetching can be harmful to the user's wallet when she is on a limited or pay-per-byte cellular data plan

Sure, but if you are not on a limited or pay-per-byte data plan, then prefetching a large block of data so the radio can go into low-power mode for a while is more helpful.

5
gjjrfcbugxbhf 3 hours ago 1 reply      
This should be based on whether the connection is marked as metered rather than Wi-Fi vs. cellular. I sometimes need to tether one device to another - a risky thing to do with a limited but high-speed data plan and a second device that will happily use hundreds of MB of data in a few minutes.
6
Using Hilbert Curves to 100% Zelda merovius.de
111 points by Merovius  11 hours ago   35 comments top 10
1
decafb 1 hour ago 0 replies      
For reference: the generic name for what OP does is "spatial indexing". There is a data structure for efficiently working with such data called the Hilbert R-tree (https://en.wikipedia.org/wiki/Hilbert_R-tree), but there are alternatives (see https://en.wikipedia.org/wiki/Spatial_database).

Many databases nowadays contain functions for these operations (e.g. https://dev.mysql.com/doc/refman/5.7/en/spatial-analysis-fun...)

2
ufo 8 hours ago 5 replies      
> There might also be better approaches than Hilbert Curves. For example, we could view it as an instance of the Traveling Salesman Problem with a couple of hundred points; it should be possible to have a good heuristic solution for that. On the other hand, a TSP solution doesn't necessarily only have short jumps, so it might not be that good?

TSP is actually very amenable to heuristics and state of the art branch-and-bound algorithms can often find optimal solutions even for instances with thousands of points.

Does anyone here know if there is a good open-source solver that we could throw this problem instance at?

3
twotwotwo 5 hours ago 1 reply      
> So I started on the onerous task of finding the last 17 locations.

A French guy (Xalikah) who did the first, manually planned 100% speedrun of the game had a similar problem; he spent a few hours with a couple folks helping him check his map for obscure place names he was missing, and when he was at the last one, someone joked "99.81% speedrun," and people were suggesting he do a slow systematic scroll over the map so they could look for missing placenames. He wound up sleeping five hours or so and in the morning remembered that he'd skipped using some bridge somewhere. 49 hours!

It's kinda wild that game worlds are now large enough that you can reasonably use algorithms not only to write games but to get 100% completion playing them as well.

4
sdenton4 3 hours ago 0 replies      
(I experimented with using space-filling curves in Cities: Skylines (a SimCity clone) a while ago; here's a writeup for anyone who might be interested: https://inventingsituations.net/2015/11/28/space-filling-cur... )
5
j05huaNathaniel 1 hour ago 0 replies      
Except that this doesn't take into account elevation.
6
bhickey 7 hours ago 0 replies      
2-opt can probably improve on the Hilbert solution (https://en.wikipedia.org/wiki/2-opt)

Is the world planar? There's an epsilon-approximation scheme for planar TSP that's linear in the graph size (but exponential in 1/ε).
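For the curious, a minimal 2-opt pass (a sketch that treats the route as an open path rather than a closed tour, which matches the walk-across-the-map use case):

    import math

    def two_opt(points, tour):
        # Repeatedly reverse any segment whose reversal shortens the path.
        def d(a, b):
            return math.dist(points[a], points[b])
        improved = True
        while improved:
            improved = False
            n = len(tour)
            for i in range(n - 3):
                for j in range(i + 2, n - 1):
                    # Edges (i, i+1) and (j, j+1); reversing tour[i+1..j]
                    # replaces them with edges (i, j) and (i+1, j+1).
                    old = d(tour[i], tour[i + 1]) + d(tour[j], tour[j + 1])
                    new = d(tour[i], tour[j]) + d(tour[i + 1], tour[j + 1])
                    if new < old - 1e-12:
                        tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                        improved = True
        return tour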

7
nwjtkjn 6 hours ago 5 replies      
Why is the Hilbert Curve any better in this situation than say lexicographical order on (x,y)?
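One way to see the difference empirically: consecutive indices along a Hilbert curve are always grid neighbors, while lexicographic (row-major) order has to jump across the whole map at the end of every row. A sketch using the standard d2xy construction from Wikipedia's Hilbert curve article:

    def rot(n, x, y, rx, ry):
        # Rotate/flip a quadrant so the sub-curves connect up properly.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        return x, y

    def d2xy(n, d):
        # Map index d along the Hilbert curve to (x, y) on an n x n grid
        # (n a power of two).
        x = y = 0
        s, t = 1, d
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            x, y = rot(s, x, y, rx, ry)
            x, y = x + s * rx, y + s * ry
            t //= 4
            s *= 2
        return x, y

    n = 8
    hilbert = [d2xy(n, d) for d in range(n * n)]
    rowmajor = [(i // n, i % n) for i in range(n * n)]

    def max_jump(path):
        return max(abs(ax - bx) + abs(ay - by)
                   for (ax, ay), (bx, by) in zip(path, path[1:]))

    print(max_jump(hilbert))   # 1: every step moves to an adjacent cell
    print(max_jump(rowmajor))  # 8: the end-of-row wrap jumps across the grid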
8
thesmallestcat 1 hour ago 1 reply      
A Zelda game that depends on Google Maps. I'm getting old.
9
nitrogen 7 hours ago 1 reply      
The returned file turns out to not actually be JSON (that'd be too easy, I guess) but some kind of JavaScript code which is then probably eventually eval'd to get the data:

 /**/jQuery31106443585752152035_1500757689075( /* json-data */)
Is that just JSONP?
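It does look like JSONP: the server wraps the JSON in a call to a callback name the client supplied in the request, and jQuery auto-generates names of exactly that jQuery<digits>_<timestamp> shape. A minimal sketch for unwrapping such a payload without eval'ing it (illustrative; assumes the body really is a single callback call around JSON):

    import json
    import re

    def parse_jsonp(body):
        # Strip the "callbackName( ... )" wrapping and parse the inner JSON.
        match = re.match(r'^[^(]*\((.*)\)\s*;?\s*$', body, re.S)
        return json.loads(match.group(1))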

10
rileytg 9 hours ago 0 replies      
Excellent! I wonder if there are more algorithms that can be used in this game. Binary search is an algorithm that can be easily done by a person (it doesn't need a ton of iterations to locate a result).
7
Sugar-sweetened drinks with a protein-rich meal affect metabolism biomedcentral.com
8 points by nreece  2 hours ago   1 comment top
1
gaius 1 minute ago 0 replies      
This is very interesting - like most people influenced by GI theory, I had assumed that the rate of absorption / avoiding an insulin spike meant that if you wanted something sweet, it would be OK, or at least least-bad, if it were mixed with slow-digesting items such as protein. Guess not :-(
8
Ask HN: What do you actually use your biohack implants for?
26 points by amingilani  2 hours ago   7 comments top 5
1
pimeys 5 minutes ago 0 replies      
Does a CGM glucose-monitoring sensor count? I get the values in real time on my Android watch and in a remote InfluxDB for later analysis.
2
michjedi 43 minutes ago 0 replies      
You could easily cut an Oyster Card chip out of the card, stick it in your hand and use it to travel in London.
3
jackhwds 13 minutes ago 0 replies      
Can you not use it to pay for things? That's the whole reason I want to get one - like an upgrade from Apple Pay.
4
cup 10 minutes ago 0 replies      
There are plenty of uses. Braces for teeth straightening. Cochlear implant for assisted hearing. Bionic eye to improve sight. Insulin pump to regulate blood glucose levels. Pacemaker to supplement failing pacemaker cells.

I've never understood why people associate "biohack implants" with sticking a magnet or NFC chip in your finger.

5
lordnacho 1 hour ago 1 reply      
Could you possibly use it as a train ticket? A number of modern commuter systems now have swipe systems that use NFC chips. Is it programmable?
9
GoblinCore-64: Scalable, open architecture for data intensive HPC tdl.org
53 points by ingve  10 hours ago   1 comment top
1
sitkack 8 hours ago 0 replies      
Another win for RISC-V - exactly what it was designed for: an open platform for anyone to innovate on processor design.
10
Rocket Propellant from Lunar Soil wickmanspacecraft.com
39 points by vpribish  8 hours ago   18 comments top 4
1
Pica_soO 2 hours ago 0 replies      
I wonder - if a planet continuously lost atmosphere over the aeons, shouldn't there be a small swirl of gas spiraling out from that planet?

If that is the case, wouldn't it be enough to have a ram-scoop satellite harvesting it?

2
52-6F-62 5 hours ago 0 replies      
I found Wickman's story pretty interesting! Thanks for posting.

There was a lot of interesting info on there besides this post. Ahem -- the post itself was really interesting -- reminds me of how little I know in spite of my interest in space. Gotta get on that!

3
sethrin 6 hours ago 4 replies      
I feel like the difficulty of obtaining rocket fuel is roughly equivalent on any given rocky planet, to within the limits of napkin math. I've never heard a particularly convincing argument for using the Moon as a springboard to the wider universe as opposed to Mars or Venus. Perhaps someone could offer some correction.
4
curtis 4 hours ago 0 replies      
You may also find LANTR (LOX-Augmented Nuclear Thermal Rocket) interesting:

http://www.nss.org/settlement/moon/LANTR.html

LANTR still assumes that hydrogen is supplied from Earth, but its performance as a rocket fuel is magnified by using it in a hybrid nuclear-thermal/chemical rocket, where the LOX is supplied from the Moon.

11
Twilio Why there's no API for crossing the chasm exponents.co
117 points by rmason  15 hours ago   53 comments top 12
1
chevman 13 hours ago 4 replies      
I work at a company within the Fortune 10 and we use Twilio in some isolated products, and they are pushing hard on a few potential deals/opportunities to expand further.

Technical foundation appears solid to all their stuff, but their enterprise sales/account folks seem very green - unsure of how to navigate large enterprises, not connecting with the right folks at the right level within the org, not understanding who or how decisions are getting made, etc.

We see this a lot - deals with tens or potentially hundreds of millions of dollars at stake where potential vendors have a couple of sales folks working the deal, not getting traction. Get a few more folks working the deal and figuring out the right angles and they could have it locked down in a matter of months, but they never seem to figure this out, and a year later they're still dicking around with the wrong people at the wrong level, not making progress.

2
bduerst 8 hours ago 1 reply      
TBH, their billboard is pretty genius: https://i1.wp.com/exponents.co/wp-content/uploads/2017/03/Tw...

Many CTOs, CIOs, C*Os, VP of product, etc. ask at least some of their developers which platforms they prefer. Most people in those positions won't admit they don't know the latest tech, but they do look to the people working for them. This billboard taps into that and establishes authority, so I don't think it's worth ripping on.

3
djyaz1200 2 hours ago 0 replies      
Twilio is wisely selling axes during the gold rush. Interesting article but I've bet my company and $50k that this is wrong. (I buy their stock on dips)

Rather than trying to deal with all the costs/conflicts and intricacies of selling their services directly to each client they are smart to empower others to handle that last mile for them. That's the true essence of a platform. When offering a platform like iOS you invite others to make money handling all the stuff that you aren't uniquely qualified to deal with and/or that isn't worth your time. I've talked to Google employees who consistently say if it's not a $100M/yr opportunity we won't even look at X.

You could argue, as I think this article does, that Twilio is leaving billions of dollars on the table with this approach, but they are getting something for that money... an ecosystem of fiercely loyal clients invested in their success (because companies like mine make money off of them).

Finally, does Twilio (or anyone) really know what the future of enterprise telephony will look like? I would argue they don't, and anyone with any technical experience knows how VERY dangerous + expensive it is to start building a client facing product without a clear vision for what it should be. Better that they provide the API and let other companies fight to figure out the rest... which of course we are happy to make money doing for them :)

4
eapotapov 12 hours ago 0 replies      
I've read the article, and it's one more argument for me that an IPO is sometimes a very toxic thing. A mass-marketing strategy / creating a commodity is a very hard long-play game. The prize is a huge stake of the market, but it's very challenging to get there.

Now assume you have your revenue, and then you get two customers that are going to pay you 25% of your current revenue (20% of the revenue after they start working with you). If you're private, you just stick to your strategy and count these two clients as some random money (as they don't fit your main strategy). Being public, now, if you lose these customers you're at risk; the market forces you to make decisions, to keep these customers, to go to the enterprise market, to hire sales people, etc.

"In Twilio's earliest days, multiple angel investors and VCs told Twilio's founders that focusing on software developers would never lead to a venture-scale business. To make their case, Twilio's skeptics pointed out that software developers have no control over budgets in the enterprise.

But Lawson, Cooke, and Wolthuis were undeterred. They believed deeply that software developers held the keys to the future - and not just of telecom, but perhaps the whole world."

Well, now, being public, you can't just do this.

Imagine Amazon listening to advice to use retail chains (system integrators in Twilio's case), or should Amazon really care about revenue coming from one specific customer (even if it's huge)?

It's really hard to keep doing what you believe you need to do when you're under the pressure of the public market. I absolutely understand why such companies go public, but it makes me sad.

5
timfrietas 12 hours ago 0 replies      
Twilio seems to be leaning a little bit in the directions of solutions #1 and #2: https://www.twilio.com/showcase

But they are also going further all-in on the developer-first market with things like their serverless offering: https://www.twilio.com/blog/2017/05/introducing-twilio-funct...

I'd be interested in the author's thoughts on Twilio's strategy in the context of AWS (which I think the author would agree has crossed the chasm).

6
jrochkind1 13 hours ago 0 replies      
I was thinking about the 'systems integrator partner' strategy before I even got to the end, without knowing that 'systems integrator' was what that was called.

After getting to the end, I realized a very common path would be to start with the 'systems integrator partner' strategy, then realize, hey, these guys took the risk and proved it could be done, but now they're taking a huge slice of our potential revenue -- let's build our own products, end our partnerships, and screw the partners who got us here. Sometimes it "works", sometimes it doesn't.

7
hayden592 6 hours ago 0 replies      
Twilio is great, and we used it on my last team at a Fortune 50 company. However, I saw the issues with having a subpar sales team. As I was leaving the communications team, "management" decided they wanted to replace Twilio with a Salesforce product. That Salesforce product uses Twilio for sending e-voice calls and text messages. Unfortunately, my arguments against the switch fell on deaf ears.
8
jaytaylor 10 hours ago 2 replies      
Page-down doesn't work right on this site; it skips over all the text and takes Chrome to the next image. Too unpleasant to read, imho.
9
hyperpallium 9 hours ago 0 replies      
"Crossing the chasm"... now there's a phrase I haven't heard for a very log time. Although I enjoyed the book, even the author says that the "chasm" no longer exists (in part, cured by his book).

Though he's right about consumer products due to direct access to consumers via internet, it may still exist for B2B suppliers, especially for the enterprise, where high-value purchases require an involved and cautious organizational process.

10
micaeked 14 hours ago 3 replies      
Side note, please don't hijack normal navigation keys to do something unexpected. In this case, page up/down navigates the sections instead of the normal behavior.
11
hartator 7 hours ago 0 replies      
3 paragraphs about why to write this essay. :/
12
rmason 15 hours ago 3 replies      
I personally disagree with this guy but felt the debate on here about what he wrote would be useful.

If you've IPO'd in general you've already crossed the chasm ;<). Sounds like a bitter former employee taking his shot.

12
Raising a Truly Bilingual Child nytimes.com
130 points by wallflower  15 hours ago   171 comments top 32
1
hkmurakami 13 hours ago 10 replies      
>But if a child grows up speaking that second language Korean, say with cousins and grandparents, attending a Saturday School that emphasizes the language and the culture, listening to music and even reading books in that language, and visits Korea along the way, that child will end up with a much stronger sense of the language.

Did this from Kindergarten through 10th grade. "Saturday School" was a full program that followed the Japanese Ministry of Education's requirements for accreditation (since in my era, most students went back to Japan after their parents' 4-6 year stint in the US office). I do notice that my Japanese is stronger than that of my peers who stopped going to these schools at a much earlier age.

>A child who is learning two languages will have a smaller vocabulary in each than a child who is only learning one; there are only so many hours in the day, and youre either hearing English or Spanish, Dr. Hoff said.

For me, this was a big problem since my English vocabulary remained quite weak compared to my peers all the way through high school (mainly because I had a strong preference for reading Japanese material at home). The general lack of confidence in a broad range of English skills is a long lasting effect that is orthogonal to my actual knowledge or skills.

I'm a "true bilingual" but I don't know if I'd want my imaginary children to be the same. If you're committed to living in the States, maximizing your English skill is imo a better investment of finite cognitive resources. My personal feeling is that being highly proficient at English and being "okay" at a second language works out well, but at the same time I know many friends of Asian descent who are self conscious of their "child-like" use of their second language. A conundrum.

2
freyfogle 13 hours ago 7 replies      
It's interesting that the article starts with the line "True bilingualism is a relatively rare thing". That's probably the case in the US or English-speaking world, but definitely not the case in many other areas. I live in Catalonia, it is very normal for locals to be fully fluent in Catalan and Spanish, I've seen the same in the Basque region. Go to India and almost every person you meet at all levels of society speaks several languages.

In many parts of the world bilingualism is the norm.

I don't recall the exact book, but I remember reading a Jared Diamond book about societies in remote Papua New Guinea and meeting indigenous peoples who all spoke several different languages. If I recall correctly he argues that being monolingual is the historic oddity.

3
ldd 13 hours ago 5 replies      
>True bilingualism is a relatively rare and a beautiful thing(...) Highly competent bilingualism is probably more common in other countries, since many children growing up in the United States arent exposed to other languages.

I acknowledge that this article is aimed at Americans, but I don't really think that bilingualism is rare in the world, or for that matter, in certain parts of the USA.

Perhaps this is not relevant to the article, but it seems that bilingualism is a forgotten issue these days in the USA. Yet it is actually a very important issue. One time a person simply told me that they couldn't see the point in Junot Díaz's work because it used so many 'foreign' words.

At the time of writing this reply, there were already plenty of other replies that suggested that we should all speak English exclusively or at least that speaking English proficiently should be prioritized. I think that they are probably at least partially right, but it surprises me that there is really not a lot of controversy on this topic.

All I am saying is that here we have a perfect topic in which humans are not entirely rational, and instead of being puzzled and talking about it, we just simply forget about it.

[edited for clarity. Like Celia said, my English is not very good looking, so I'd appreciate corrections :D]

4
wvh 7 hours ago 0 replies      
I'm raising my 3-year-old bilingually - I speak 6 languages myself, although 2 are somewhat rusty. It's hard, because my native language is the underdog: it's pretty much just me talking with her. She understands what I say, but always replies in mommy's language. It forces you to switch languages if you want to have a fluent back-and-forth conversation instead of something that just peters out. If you want true bilinguality, as the minority-language speaker you have to be willing to push and fight, and sometimes feel on the outside yourself.

When she was younger, it was sometimes heartbreaking to read a picture book where she knew some words in one language, and then the other parent comes in and all the words change.

Kids' minds are amazing though. I sometimes speak some English or French (not our native languages) with her as a game, and the speed with which those words stick is unbelievable - their minds are just sponges. Last week she threw some random French words at guests, confusing them to no end.

5
matthewaveryusa 7 hours ago 2 replies      
Adult bilinguals mix their languages all the time; it's a sign of language ability

I'm trilingual and that statement is complete BS. I would trade the decent mastery of three languages for complete mastery of one any day of the week. No one cares that I'm fluent in French or Polish except for when I pronounce French words properly or can order two beers at a bar on a biz trip. People do notice that I tend to reach for pretty basic words in English. I guess it's always a matter of perspective.

6
spraak 13 hours ago 1 reply      
Somewhat related is an experiment I've been doing with my child, where the only media I've introduced to them (since birth) is in a language other than English (we live in the US). Usually this is Swedish or German, which are the languages I know best besides English, but sometimes Spanish, French, Japanese, Korean etc. etc.

What I've found is that they have a play/babble language of their own that sounds like a mix of mostly Swedish and German. We talk together in this language at times, too. Sometimes they have even asked me for the English word for an idea/thing they've learned or experienced in German or Swedish. Another time they asked me what "zum Beispiel" and "till exempel" mean, which amazed me because they obviously connected that they both mean the same thing ("for example").

In real life (i.e. not at home watching movies) they can hold conversations with Germans and Swedes, too.

At this point they're not truly bilingual, but it's been really fun and interesting to see how much they have learned, and makes available for them in the future, as with deeper study and practice they could be truly multilingual.

7
geff82 11 hours ago 0 replies      
I also think that in all the 4 languages I speak, I exhibit a slightly different personality, as you usually not only learn to speak in a language, but also how to behave in another culture.
8
cletus 11 hours ago 1 reply      
I'm a native English speaker and not bilingual so feel like an outsider looking in here but this is an area I find interesting. For one, I kind of wish I was bilingual. I suspect this is much easier to achieve as a non-English speaker for several reasons:

- The pervasiveness of English as a first or second language

- As much as people complain about English, in the transition from Old English to Middle English, when English was not the court language of England, English lost a lot of what I like to call the grammatical bullshit (e.g. gender of nouns, cases, agreement of case, number, adjective, and article, and so on).

Anyway, this article made one claim that resonates with my observations:

> But parents should not assume that young childrens natural language abilities will lead to true grown-up language skills without a good deal of effort.

How many people do you know that have done 12 years of Italian or French or German or Spanish through school and can maybe remember how to count to 10? I know quite a few.

Another claim the article makes is that bilingual children are less fluent in each language than a child who only knows one language. I've often wondered if this is the case for a similar reason: language ability seems to largely be a function of exposure and there's only so much time to go around.

If true, I wonder how this has affected the development of English-speaking countries, which are particularly mono-lingual. Is it an advantage? A disadvantage? A bit of both?

9
arde 6 hours ago 3 replies      
> screen time doesn't count

Absolutely false. Screen time is an excellent way of exposing the child to a language and thus learning to distinguish the phonemes that make it up. It can be argued that screen time alone doesn't help too much in learning to speak a language (if by screen we mean a non-interactive one, of course). But it's great for listening comprehension.

Source: starting when she was a baby, I let my kid watch as much TV as she wanted as long as it was in English and not in our language (Spanish). She watched about 2 hours a day, on average. Her mother and I are quite fluent in English and, even though we rarely speak it between us, we encouraged our child to learn it this way, occasionally answering her questions or translating the words that she couldn't figure out by herself. She's now 6 and speaks it pretty well; she can easily maintain a conversation and she is very good at identifying the correct phonemes while listening (even in cases where she may not know the word). Actually, she's better at listening comprehension than her mother and I could ever hope to be ourselves. Differentiating phonemes is a crucial ability that is learnt optimally at an early age; past the first few years it cannot be learnt as well (I remember reading an article about this limitation in Scientific American more than 20 years ago; I don't have a link for that).

10
jasonkester 3 hours ago 0 replies      
We pulled this off by moving to France when our first kid was 2 years old. He went in the local village school at age 3 and was happily speaking French to our adult friends at age 5.

A year later, French is just the language that Kids and Strangers speak, so he pulls it out in those situations then falls back to English if that doesn't work.

It's fascinating to watch. I'm trying to gauge when will be a good time to up sticks and move to a Spanish speaking country for a year or two.

11
cheesedoodle 9 hours ago 1 reply      
"Adult bilinguals mix their languages all the time; its a sign of language ability"

I have to agree with one of the Indian commentator that this does not signal language abillity but rather the lack of vocabulary.

Me and my wife are doing our best trying to raise our children multilingual (three mother tongues). The use of drop in words only means to us that we dont know the right word in the current speaking language.

That aside, I think that you are truly bi/multi-lingual when you can express feelings effortless in any of your languages equally.

12
JohnGB 12 hours ago 2 replies      
> A child who is learning two languages will have a smaller vocabulary in each than a child who is only learning one; there are only so many hours in the day, and you're either hearing English or Spanish, Dr. Hoff said.

That is rubbish. It depends on how the child is exposed to the various languages and how much exposure they get. My daughter speaks 3 languages at home (one with each adult). We were told exactly this - that she would be behind language-wise - but we made sure that she was read stories in each of the languages and had good exposure. She has just started at a bilingual school, and her monolingual English teacher has told us that she is far ahead of the rest of her class language-wise.

Yes, this may be an outlier, but every family that I know that has bilingual or more children and who expose the children to enough of each language have children that are ahead of the norm in each of those languages.

If, however, you simply split the time that you would spend on language-related activities or conversation between multiple languages, then of course the child will be behind in each one, as you've only exposed them to half the language in each. This is true up to about 7 years old, and non-existent after that anyway.

13
hiyer 6 hours ago 0 replies      
A minimum of bilingualism is probably the norm in India and trilingualism is very common (I myself speak Hindi, Tamil, and English fluently, and a smattering of Kannada). It is not all that unusual to find people who speak 4-5 languages fluently also.
14
foobaw 4 hours ago 0 replies      
Not trying to brag, but due to my background I ended up trilingual (Korean, Spanish and English).

It all depends on circumstance - I spent a few years growing up in Korea, Latin America and the U.S - and went to school in all three countries where I was forced to learn the native language. It's been a few years but I still speak all those languages due to friends and family so I haven't lost the ability yet.

It's not a very useful skill for my job - but I just wanted to point out that I can attest that immersion as a child is one of the best ways to learn languages.

15
lordnacho 13 hours ago 1 reply      
This is really, really hard to do properly.

I'm native in two languages, neither of them my first. By this I mean native-accented speech, and being able to do a degree in those languages. The only way this was possible was that I went to school in one language and lived in a society that spoke another. So school was teaching me in English, meaning I learned all of math/science/history through high school with English vocabulary.

But I was only able to learn the local language because I kept local friends and my cousins spoke it with me. And I read the papers and watched the TV, as well as socialising with people who showed up at my parents' restaurant.

So now as an adult there are only two languages in which I could do a degree. What happened to my other languages? It's like I'm a tourist when I speak the old ones with my parents. Oddly enough there are two of those, as they were also a minority where they grew up. I can ask them for various kinds of food, but I can barely explain to them what I do for a living. I can read a paper in French or German too, and get along OK, but nothing too deep.

The absolute most that I know of (personally) is being able to do a degree in three languages - a few friends of mine who'd lived in two countries while learning English. Even so, they were identifiably non-native-accented.

I still tend to think most multilingual people are slightly deficient in one of their languages. If you expand "language" to mean "culture", even more so. You just aren't going to know all the minor celebrities of multiple language zones. For instance, I met a young guy in Switzerland who'd been taught Danish by his dad. He spoke with a native accent, but wobbled when it came to common language and cultural idioms. Like an English aristocrat who'd apologise in perfect RP for not knowing what happens at Ascot, or what "wrapping your head around something" means. Being Swiss, he spoke Danish, English, French, German, and Spanish. I'm guessing with that level of skill: high in all, but wobbly when you dig a little.

16
joshaidan 12 hours ago 2 replies      
Bilingualism between French and English is very common in Canada, and depending on what part of the country you live in, your exposure to both languages can be fairly even.

My favourite bilingual situation is seeing a child "complain/whine" to their grandparent in English while the grandparent tries to soothe the child in French, both understanding each other perfectly.

Something that blew me away once was a kid at church who spoke with a British accent - I believe he lived in Wales before moving to Canada. A week later I heard this same child speak with a perfect Québécois accent. I later found out his mom was from Montreal.

17
Cyph0n 9 hours ago 1 reply      
As a person fluent in both Arabic and English, I always wonder if Arabic dialects count as "languages". Note that basically no one speaks "written" Arabic; everyone uses a dialect based on their region.

For example, North African dialects are basically unintelligible to Middle Easterners. Even within North Africa the dialects are vastly different: for instance, Tunisians have trouble understanding the general Algerian dialect. You can go even further: people within Tunisia can have trouble understanding each other's dialects (e.g., north vs. south)! The linguistic variations are enormous.

I was raised in the UAE, so I can understand (at a high level) basically all Arabic dialects. In Tunisia, I speak Tunisian; in the UAE, I speak the dialect closest to who I'm talking to (if applicable), or a form of Emirati Arabic otherwise (e.g., in the case of Sudanese Arabic).

18
80211 6 hours ago 0 replies      
With bi-lingual parents (Yiddish, English), I grew up bilingual. In school and some courses at Yivo, I was able to finish off full literacy in Yiddish. I grew up reading/writing Hebrew, too, but it took spending a year in Israel to lose the "Yiddish" accent and speak like an Israeli.

So, if parents are bilingual and you have friends/school where both languages are spoken, it's very easy.

19
pacaro 4 hours ago 0 replies      
My sister is bilingual, but because she only lived in the U.K. as a child and teenager, there is vocabulary used more commonly in adult conversations that she has a hard time with.

So while she is bilingual by any useful definition, there are common scenarios in which she sounds like a foreigner with "very good English" rather than a native speaker.

Our grandfather had the opposite problem. He lived in the U.K. from age 18, and only had the very faintest accent, but in his 60s and beyond would be frequently complimented that his German was "very good for an Englishman"

20
ivanbakel 13 hours ago 0 replies      
>The ones who are successful bilinguals as adults are still much better in English than they are in Spanish

Sadly true. Probably the biggest challenge of a bilingual childhood (or, I imagine, bilinguality anywhere) is the lack of exposure to a massive amount of second-language vocabulary that you'd normally learn through use. At least it's good to know that not having equal proficiency isn't a failure.

Interestingly, I've had discussions in the past where the other side suggested that immigrant parents have a duty to speak the native language in the household, to encourage their children to learn it better. This article does a nice job of explaining why that's unnecessary - maintaining a second language in the face of native schooling, media, and everything else is hard enough.

One thing that helps, in my experience, is reading fairy tales. They have some of the most common use of language, and can take a conversational tone that's missing from regular reading.

21
jeena 9 hours ago 0 replies      
I was born in Poland, then at age 11 I moved to Germany, and later at age 27 I moved to Sweden. My sister is two years younger; she had an easier time learning German, but she doesn't want to speak Polish if she doesn't need to. I don't have that. I also had big problems learning English at school, because I had to learn English in German, which I didn't understand in the beginning, and later I was always behind.

I was really bad at English when I moved to Sweden, which helped me learn Swedish faster, because people wouldn't use English to communicate with me - which Swedes do all the time if you speak English but not Swedish.

I learned English later, mostly by watching TV, which is only subtitled in Sweden: because I wasn't able to read that fast in Swedish, I started listening to the English original. Later most of the studies here at university were in English (at least in Computer Science), so I was forced to use my English and improve it. And later at work the office language was English too.

Anyway, I even have an anecdote from my grandfather, who was captured by the Russians during WW2. He was in the Wehrmacht, but because he was from Silesia he spoke some Polish too, which helped him during his time in captivity in Siberia. Because he could translate, they gave him bigger portions of food, so he always said: "You never know where life leads you, and every language you speak is like an extra hand."

22
karaokeyoga 8 hours ago 0 replies      
My son spent his first six years in San Francisco. Canadian dad, Japanese mom. He went to Japanese preschool and spent close to two years (kindergarten, most of grade one) in a Mandarin immersion public school. We moved to Japan after that.

His younger sister (three years his junior) began French immersion in Japan. Actually, a truly French school, started by French parents living in Japan. Since our son was working on three languages, and there were limited options for being immersed in Mandarin here, we decided on French for her.

Fast-forward eight years, six of which were spent in Japan, and two in the UK and France. Our son is bilingual, but his English is short of native. Japanese took over as the dominant language for him. His Mandarin is as good as gone, in spite of three extended trips to China and the (intermittent) use of native Mandarin tutors while living in Japan. I gave up on Mandarin for him when I realized how much his English was deteriorating.

Our daughter is native-level Japanese and English, and just short of native in French. She speaks without an accent but is short on a lot of vocabulary, for example. (Similar to my son and his English.)

Looking back at the experience of raising these children with multiple languages, my main observation is that, although children are superior at learning languages compared to their adult selves, there are starkly different levels of language acquisition ability from child to child.

My daughter is truly a language monster. She is shockingly impressive in all three of her languages, and is constantly trying out new words and getting them absolutely right (context, etc.).

My son is, not surprisingly, much like his parents. Language is hard for us. He is stronger in other areas (music, visual art, writing ability), and we work hard to let him know that everyone has their strengths and weaknesses. He's jealous of his younger sister's French (and her English, too), but we tell him that we're jealous, too!

My second observation is that raising children like this is a substantial amount of work.

Last but not least, I have come to appreciate the joy and magic of being truly, madly, deeply native in a language. Having that final 1% of ability adds untold richness and closeness. There's nothing quite like it. Being truly bilingual is fantastic, but not the same as being dual-native, and it doesn't come close, in my opinion.

And, as an afterthought, there are all the non-spoken aspects of another culture, such as behaviour, body language, and even the volume of your voice, that are, in my opinion, as important as language fluency in terms of feeling "close" within a society.

23
noisy_boy 6 hours ago 0 replies      
I think a good mix is when your family speaks one language, you speak another in your school with friends and you still have to learn English because one must learn English. That was the setting I grew up in and it allowed enough exposure/practice for me to become very fluent in all three.
24
jayhuang 9 hours ago 1 reply      
The easiest way to learn a language is simply to put a child in that environment - an environment where they have no other option but to learn. Granted, it can be immensely stressful, but having been in a similar situation, I can attest that it works.

I grew up in Canada my whole life, but was taken to Taiwan towards the end of my elementary years and enrolled in the normal school system there without so much as knowing the alphabet. Spending 3.5 years there allowed me to pick up the language at a fluent, accent-less level (reading, writing, speaking, listening). While there I also learned "Taiwanese" at a fluent, accent-less level while visiting produce and night markets.

Now when I interact with Mandarin speakers in Mandarin, they assume I grew up in Asia, and vice versa with English.

On the flip side, many Asian friends who grew up here attended Saturday Chinese schools, and although that helps you communicate on a basic level, most hate learning Mandarin and thus fight it. They speak with classmates in English the second the teacher isn't hounding them, speak in English during breaks, etc. Many end up not even being capable of conversing.

25
shioyama 9 hours ago 2 replies      
> There is certainly no research to suggest that children need to have languages lined up with speakers or they get confused.

This goes against the mantra that each parent should speak to their child in their native language, which I hear all the time but I've always thought was complete bs. As parents we're both fairly proficient in each other's language and speak to our children in both languages depending on context.

I find the "rule" that you speak to your child in your native language to actually be very bad because it artificially splits conversations across languages, and also gives the kid the impression that you should speak your native language, whereas the whole point is that they should be comfortable speaking any language.

26
gnicholas 11 hours ago 0 replies      
> "It does take longer to acquire two languages than one," Dr. Hoff said, "and that, again, comes back to the exposure."

"A child who is learning two languages will have a smaller vocabulary in each than a child who is only learning one; there are only so many hours in the day, and you're either hearing English or Spanish," Dr. Hoff said. "The children will be fine, though," she said.

This is the first time I've seen an expert go on record saying that there is some downside (perhaps temporary) of raising a child bilingual. It's all the rage to do so (and I am currently doing so, I'll admit), and everyone loves to talk about how a study showed that bilingual speakers are better at this or that.

But it's important to consider that there is some downside, and for some children - perhaps those without a huge vocabulary to begin with - it might not be the best thing to do.

27
kutkloon7 12 hours ago 1 reply      
While I'm not exactly bilingual, I would like to teach my children (when I have them) multiple languages, just because I think that the potential for learning (and especially for learning languages) is at its maximum during childhood.

However, I'm not quite sure what the best languages would be. I want to give them some exposure to a broad spectrum of languages. I know that the best way to learn a language is to actually use it. In this light, I'm wondering what the best balance would be between casually exposing them to a huge number of languages and teaching them just one extra language really well.

28
guytpearson1 9 hours ago 0 replies      
This article is a fluff piece for the doctors interviewed. Awful research, brief interviews, and so vague you have to laugh. The entire thing screams "Yeah, no shit."
29
orless 10 hours ago 2 replies      
We're native Russians and live in Germany for quite a while (17+ years). My children (8 and 5yo) were born here and we're raising them bilingual. They went/go to normal kindergarden/school and have natural exposure to the German language. At home, we speak Russian with children, unless there is a non-Russian-speaker present. The older son additionally goes to the Russian school on Saturdays. We often talk to our parents (who live in Russia) over videochat. Me and my wife are fluent in German, my wife is even Master of Arts in German language and literature studies). We parents also speak English and some French. A few years ago we even had a habit to speak a different language (Russian/German/English) each day, just for practice. We dropped the habit when we got children, completely switching to Russian at home.

I would say that our experience largely overlaps with what I read in the article. Raising a bilingual child is hard; raising a "truly bilingual" one (whatever the measure for "truly" is) is even harder. But it is probably easier if you're living in a foreign country.

Judging from the older son - yes, his vocabulary in German is somewhat smaller compared to his German peers, but not significantly. His Russian vocabulary is probably also smaller compared to Russian kids, but more than that - it is different. He learned the language primarily from us parents, and there are almost no Russian friends around, so his language is very adult, I'd even say academic. For instance, he normally does not use the word "круто", the Russian equivalent for "cool"; he'd rather say "remarkable". His grammar in Russian is, however, heavily influenced by German. A prime example is reflexive verbs. For instance, for "I'm wrong" you'd probably say "ich habe mich geirrt" in German - literally "I have erred myself". Note the usage of the reflexive pronoun "mich" ("myself"). You don't use it in Russian, as reflexion is encoded in the verb suffix itself - "я ошибся". But my son sometimes still adds the reflexive pronoun in Russian, which is not correct and sounds pretty funny.

Over the years we've established a small set of rules for language. Speak Russian in the family, unless a non-Russian-speaker is present. Answer in the language in which you were addressed. Do not mix languages. It seems to work pretty well for us.

The article seems to take mixing of languages quite lightly. We see this as one of the biggest challenges. It is way too easy to start using German words where the Russian translations are cumbersome or unusual. (A prime example is "Termin" - "appointment". In Russian there's just no good translation for this word. The closest is probably "назначенное время" - "the appointed time", but that has a completely different connotation.) It may not sound so bad if you mix Romance/Germanic languages, but if you mix Russian and German, the result is absolutely horrible. You end up with a language which neither a "pure" Russian nor a "pure" German speaker will understand. You need to know both languages to understand the mixture.

So this is probably the rule where we are most strict and persistent. Do not, never, mix languages. If you don't know a word, ask, we'll help. If it's too complicated, say it in German, we'll help to translate.

It is hard to keep the language and it is easy to lose it. We know families which lost Russian in the next generation. Their children understand some Russian but answer in German. I think this is a pity. A second language has great value, and it takes so much effort to learn one as an adult, so it is unforgivable to lose this opportunity in childhood.

30
quickthrower2 9 hours ago 0 replies      
My wife is truly trilingual in Arabic, English and French, coming from an 80s Lebanon childhood - i.e., you wouldn't know that she isn't native in each language. It's probably less rare outside of the UK/US/etc.
31
j_s 11 hours ago 2 replies      
What are the best technologies to help an English-speaking household even introduce other languages?
32
danieltillett 12 hours ago 0 replies      
One thing I wish we native English speakers would do is work on fixing English - spelling in particular, but also the irregular grammar. English is so much harder to learn than it needs to be.
13
Fast MySQL Backup and Restore Using Mydumper/Myloader wplobster.com
42 points by lyri787w  9 hours ago   17 comments top 7
1
dexterbt1 7 hours ago 1 reply      
There are no benchmark numbers to back the claim that it's fast. At least the OP should have done some measurements; I'm interested in anything that will speed up our DB's 300+M record tables.

The article also does not link to the official maintainer page/github page.

2
whatnotests 5 hours ago 0 replies      
1. Shows password on CLI

2. Segfaults for me

I'll be using mysqldump for now... but doing a flush+lock+copy of the InnoDB data file(s) seems like it would be much, much faster.
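
For anyone curious, the flush+lock+copy idea mentioned above looks roughly like this - a minimal Python sketch, not a production backup script: the connection parameters and paths are placeholders (it assumes the mysql-connector-python package), and note that a raw copy of InnoDB files is only truly consistent if the redo logs are captured consistently too:

    import shutil
    import mysql.connector  # assumes the mysql-connector-python package

    # Hypothetical connection parameters and paths.
    conn = mysql.connector.connect(user="backup", password="...", host="localhost")
    cur = conn.cursor()

    cur.execute("FLUSH TABLES WITH READ LOCK")  # quiesce writes
    try:
        # Copy the data directory while the global read lock is held.
        shutil.copytree("/var/lib/mysql", "/backups/mysql-snapshot")
    finally:
        cur.execute("UNLOCK TABLES")
        conn.close()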

3
wanderr 7 hours ago 2 replies      
How does this compare to the percona tools?
4
lathiat 5 hours ago 2 replies      
Watch out: last time I looked, most of these tools miss some things like stored procedures, though I haven't checked mydumper specifically recently. mysqlpdump certainly did, and I think mydumper did too when I tried it.

There's also other things to potentially worry about; events, functions, etc.

5
roselan 2 hours ago 1 reply      
Note that with 5.7 there is a new tool, "mysqlpump" that can do parallel export/import and direct compression.

I haven't tested it yet because mysqldump is enough for our needs.

6
jaequery 6 hours ago 0 replies      
This is not really new, as it has been out for at least 7 years now. But it really is fast - I think at least five times faster, from what I remember.
7
boznz 8 hours ago 3 replies      
Nice for my Linux (and possibly BSD) customers but some of my customers use MySQL on Windows servers, it would be nice to give them some love every now and then :-)
14
The Vietnam of Computer Science (2006) tedneward.com
80 points by jwdunne  13 hours ago   55 comments top 14
1
plinkplonk 1 hour ago 0 replies      
Interesting how "The Vietnam of ..." assumes the US viewpoint as the default.

From the Vietnamese viewpoint, the war could (for example) be seen as underdog patriots resisting waves of invaders possessing an overwhelming material and technical superiority and utterly defeating them through sheer willpower and sacrifice, thus gaining a nation of their own.

Which would make a paper titled "The Vietnam(War) of Computer Science" a description of some kind of breakthrough in CS, attained after a tremendous struggle.

So yes, dumb analogy. The paper would read as well without the bullshit title.

2
neilk 12 hours ago 3 replies      
It's a good description of the problems of ORM and their possible resolutions.

Personally, I prefer to interact with a relational database via a good query builder. The advantage of a query builder is that you inevitably end up with dozens of slightly different queries. With a query builder you can make a query just like another, but with slightly different constraints or joins or whatever you like.

You can then populate objects with said queries when it makes sense, and do other non-objecty operations like reporting and search. The entire point of an object system is to reduce coupling, so IDK why people are so eager to tightly couple their programming model to storage.
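
A minimal sketch of that query-builder style, in Python (an illustrative toy, not any particular library's API):

    class Query:
        """Each method returns a new Query, so variants share a base query."""
        def __init__(self, table, wheres=(), joins=()):
            self.table, self.wheres, self.joins = table, tuple(wheres), tuple(joins)

        def where(self, cond):
            return Query(self.table, self.wheres + (cond,), self.joins)

        def join(self, clause):
            return Query(self.table, self.wheres, self.joins + (clause,))

        def sql(self):
            parts = [f"SELECT * FROM {self.table}"]
            parts += [f"JOIN {j}" for j in self.joins]
            if self.wheres:
                parts.append("WHERE " + " AND ".join(self.wheres))
            return " ".join(parts)

    base = Query("users").where("active = 1")
    recent = base.where("created_at > :cutoff")  # same query, one more constraint
    with_orders = base.join("orders ON orders.user_id = users.id")  # or one more join
    print(recent.sql())
    # SELECT * FROM users WHERE active = 1 AND created_at > :cutoff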

But about the Vietnam part... really, that's a shocking analogy. The overlong, painfully US-centric military history part could be completely excised. I didn't even understand the title at first. Vietnam is not solely defined by America's failed attempt to ~~napalm it into submission~~ nobly preserve democracy.

3
ThinkBeat 11 hours ago 3 replies      
The Vietnam analogy is just in bad taste and does not add much to the article.
4
PaulHoule 11 hours ago 2 replies      
This is an old article and much has changed.

ORM frameworks in many languages such as Ruby, Python, Java, etc. have improved enormously since that time. I think they'll continue to improve as metaprogramming facilities improve; it is very feasible to convert an expression like 'x>2*y-7' from conventional programming languages to SQL and other query languages.

Also there is immense interest in document databases such as CouchDB, Marklogic, Elasticsearch, DynamoDB, etc. These solve some of the problems of object-relational mapping, and we will someday see new frameworks.

5
fizixer 11 hours ago 5 replies      
ORM is the Vietnam of Computer Science, because OOP is the Vietnam of Computer Science: A 25-year failed experiment in boosting productivity of masses of mediocre programmers using this one true silver bullet.
6
sbierwagen 6 hours ago 0 replies      

> ...and the most memorable image of the war, that of streams of fleeing people seeking space on the Huey helicopter perched on the roof of the embassy.
Common misconception. The roof in question was that of an apartment building a kilometre away from the embassy: https://en.wikipedia.org/wiki/22_Gia_Long_Street

7
wcr3 9 hours ago 1 reply      
ah yes, "the vietnam of computer science." a classic. never forget. can't wait for "the holocaust of programming." heard good things about "ethnic cleansing and compiler theory" too.

keep us posted.

8
jancsika 9 hours ago 1 reply      
> Johnsons successor, Republican $g(Richard Nixon), tried several tactics to bring pressure to the NVA/VC forces to bargain [...]

One such tactic the author fails to mention was the successful effort to undermine Johnson's peace negotiations by doing illegal covert dealings with the South Vietnamese through Nixon's underlings. Nixon was undermining the negotiations between the U.S. executive branch and North/South Vietnam when Nixon was still a candidate, encouraging the South to hold out until after the election.

You can read about it and much more in Christopher Hitchens' "The Trial of Henry Kissinger." Regardless of one's opinion of Hitchens, he refers to unclassified documents and many other reputable sources in his section on this.

The only reason I mention it here is because the author has attempted to flesh out an analogy with what I must assume were supposed to be the salient details of each item being compared. To do that and leave out the fact that a presidential candidate undermined the acting president's effort to end the war is quite strange.

Also, I'd be interested to hear the author explain what "regular violations of nearby Laos and Cambodia" actually means.

9
gleenn 10 hours ago 0 replies      
Interesting article but I feel like the metaphor is a little shallow. There are plenty of troublesome topics in computer science that could be compared to Vietnam in the same way, it just feels a little excessive to compare those things to war.

I did end up getting sucked into reading a lot of history about the actual war; it turns out TFA should probably be split into two, one about the tech aspect and one about the pure history. The history section was well written.

10
andreasgonewild 9 hours ago 0 replies      
A classic, the information about Vietnam isn't really relevant but I still find it interesting. Ran into this post back when I was busy implementing ORM's in umpteen different languages myself, and I remember that it motivated me to keep looking for better ways. Lately, I got the chance to sidestep the whole issue and implement my own persistence from the ground up in Snackis (https://github.com/andreas-gone-wild/snackis/tree/master/src...). It is still based on tables and records, but having them available as first class objects helps bridge the gap considerably.
11
sgt101 11 hours ago 0 replies      
One two year spell with Hibernate was the end of all flirtations with ORM at our gaff. "No transparent persistence shall be the sum of transparent persistence."
12
est 6 hours ago 0 replies      
I think the problem with ORM these days is leaky abstraction.

Sometimes we have an RDBMS, but more often we have Redis and shit. You can't organize all sources of data under the same umbrella, and it's painful.

13
rosser 9 hours ago 0 replies      
I've said for years that using an ORM is ultimately a piece of tech debt.

Same point, far less inflammatory.

14
graycat 9 hours ago 0 replies      
At one time, I was in that object, relational, etc. stuff. Some people wanted all that stuff combined into some one, unified thingy.

My conclusion: The whole was a mess. Better just to leave the separate pieces separate.

One of the pieces was, sure, relational data base.

Then there was the international work on CMIS/P -- IIRC, common management information system/protocol. This work seemed to want to model the world and everything in it, but really the goal was how to define and specify data for real-time computer and network system management.

This work was object oriented and, thereby, open to a lot of confusion: really, what the work was talking about was just some data, say, describing a printer. So, there would be fields with data -- character strings, numbers, arrays, etc. All it was was just data; that is, there was no associated code: no subroutines or functions to call, no application programmer interfaces (APIs). The object part was that the definition of data from/for printer A could draw from, inherit from, the definition for printer B, etc.

The next part was something we called Resource Object Data Model (RODM). The idea was that in real-time management of server farms and networks, there could be several different programs trying to manage different aspects of the work, communicating with the same computer or network node, and causing conflicts. So the idea was to have a system, RODM, to present to all the programs trying to manage a single view of the servers and networks to be managed -- a single view that would handle locking, exclusive access, caching of data from the servers and networks, etc. RODM was a hierarchy, something like a file system hierarchy. At the leaves were objects that described parts of the servers or networks. The objects were active in the sense that they could run their own, if you will, threads of execution, set timers, respond to timers, and send and receive messages with other objects, the servers, the networks, programs doing the management, etc. The hierarchy and objects were dynamic in the sense that, much like in a file system, they could be created, changed, and deleted all during real-time operations. RODM had some curious, tricky stuff: it could reserve an address space just for data, and code in one address space could jump into another address space and execute code there, etc.

One role for RODM was for programs doing system management based on AI (expert systems) to have RODM to talk to as a model of the servers and networks being managed.

It flopped.

I contributed a little: part of such management is detecting and correcting problems in real time, and some of the problems are ones never seen before. So, I worked up how to detect those problems with a known, adjustable false-alarm rate, etc. That was to be a much better approach to anomaly and early problem detection than we could expect from the usual intended uses of expert systems.

But, really, as for the OP, it was all too complicated.

15
The Illuminating Geometry of Viruses quantamagazine.org
96 points by algui91  17 hours ago   5 comments top 4
1
blinry 14 hours ago 0 replies      
You might also enjoy Hamish Todd's (highly interactive) documentary on virus geometry: http://viruspatterns.com
3
tinix 12 hours ago 1 reply      
Mmmm... geometry. Resonance.

So you take this known and documented knowledge of lipid mechanics[0] and apply it to the geometry of pathogens.

Very much like electroporation[1], but with magnetic fields and RF instead of direct application of voltage. But try to research "Mortal Oscillatory Rate" or "Raymond Rife" and you'll be inundated with disinformation and BS.

Resonance is not a new or strange concept, by any means, we have no shortage of study around these ideas, take nuclear magnetic resonance[2] for example. That's an example on a very low level, then we can see this with electroporation and ultrasonification techniques at a much higher level.

It's very frustrating, however: as soon as it's applied in a field that is deemed derogatory to the pharmaceutical industry, suddenly it's woo and nonsense, apparently. We have documented evidence of these phenomena at many levels, and the high- and low-level effects are pretty well understood, but as soon as it's applied in the middle somewhere, to actual pathogens, then it's suddenly quackery, apparently.

Meanwhile, at least we do have some research being done on these things and it's VERY promising:

Targeted treatment of cancer with radiofrequency electromagnetic fields amplitude-modulated at tumor-specific frequencies[3]

Destruction of bacterial spores by phenomenally high efficiency non-contact ultrasonic transducers[4]

System for Cleansing Organisms from Water[5]

I think most of the claims of quackery are just a knee-jerk reaction and it leads to throwing the baby out with the bath water; just because some asshats have made fake devices to exploit vulnerable people doesn't mean the technology they claimed to use is fake or nonsense.

[0] https://en.wikipedia.org/wiki/Lipid_bilayer_mechanics

[1] https://en.wikipedia.org/wiki/Irreversible_electroporation

[2] https://en.wikipedia.org/wiki/Nuclear_Magnetic_Resonance

[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3845545/

[4] https://link.springer.com/article/10.1007/s10019-002-0214-2

[5] https://www.google.com/patents/US20140202961

4
noir-york 15 hours ago 0 replies      
Thanks for posting this! Super interesting
16
Ligatures in programming fonts tinyletter.com
70 points by matiasz  6 hours ago   41 comments top 17
1
panic 3 hours ago 1 reply      
The first problem isn't really a problem, since ligatures are provided by fonts, not character encodings. From the Unicode FAQ on ligatures and digraphs (http://unicode.org/faq/ligature_digraph.html):

"The existing ligatures exist basically for compatibility and round-tripping with non-Unicode character sets. Their use is discouraged. No more will be encoded in any circumstances.

"Ligaturing is a behavior encoded in fonts: if a modern font is asked to display h followed by r, and the font has an hr ligature in it, it can display the ligature. Some fonts have no ligatures, some (especially for non-Latin scripts) have hundreds. It does not make sense to assign Unicode code points to all these font-specific possibilities."

The second problem still stands, though, especially since these sequences of characters can be tokenized differently in different programming languages. IMO, if you're going to have character replacement like this, it should be a configurable editor feature like syntax highlighting.

2
mg794613 2 hours ago 1 reply      
Ugh. To answer the last line of the author: I have been using them for that long, and the upsides and downsides are very well known to me. He has a subjective opinion about them, and states it as fact, with an air of "and everyone who thinks differently is just stupid". OK, author...
3
wmil 3 hours ago 1 reply      
The article lists a bunch of non-issues. Yes, you can create confusion by abusing unicode.

But that's not new. C++ allows zero-width spaces in identifiers. There's a guy on reddit who uses characters from the Canadian Aboriginal Syllabics block to have angle brackets in Go identifiers.

Yes, they are guaranteed to be wrong sometimes. The big one is that the <<< ligature makes the merge conflict zipper look strange.

But it's incredibly obvious when they are wrong. So it's not an issue in practice.

The reality is that no one is making anyone use a ligature font, and some people like them. If it's causing a problem then you can spend ten seconds changing your font.

4
Stenzel 2 hours ago 1 reply      
I agree with the article, but the arguments the author gives are not quite spot-on. Ligatures render code unreadable: there is no way to see how to enter a particular character sequence that is shown as a ligature. They might beautify code for some individuals, but they should never be used for showing code in a public or explanatory context, like on the web. Some operators are no longer recognisable, a few just look wrong - a clear case where simplicity and functionality is sacrificed for style.
5
abritishguy 1 hour ago 0 replies      
I use ligatures in Atom and they are 100% aware of context. I disable them in contexts where they don't make sense, e.g. comments, and I disable them on the line the cursor is on.

I've not had any issues. The => ligature looking like the right arrow character is just like a cyrillic A looking like a latin A - it's a problem that never manifests itself.

The author has a very subjective opinion that they try and present as fact.

6
idle_zealot 3 hours ago 1 reply      
I use fonts with ligatures while programming because they're more expressive of intent. Many languages use a combination of characters to form a single meaningful token, such as JS with =>. This token is meant to appear similar to an arrow, and has nothing to do with = or >. In this case, I find it preferable to draw a ⇒.
7
SCdF 1 hour ago 1 reply      
Honestly, this sounds like the kind of argument you could present about how syntax highlighting is a terrible idea. It might be wrong!

I don't use ligatures, but I really don't see a problem with other people using them. It's fine, it's a style preference thing, that, like fonts and colour schemes, is as much fashion and personal preference as it is anything. But it's fine.

8
nvivo 1 hour ago 0 replies      
I work with someone who uses ligatures. Every time I need to look at his screen it's a problem: I can never recognize what exactly the characters mean. Yes, you get used to it if you use it, but if you don't, working with other people becomes a problem.

I thought it was a bad idea the first time I saw it, and after seeing it in real code I still think the same thing.

9
captainmuon 39 minutes ago 1 reply      
I think ligatures can be a great feature, but they should be decided not by the font but by the editor. That way you can distinguish between 'input >> var' and 'vector<vector<int>>' and render the ligature in one case but not the other. This and more creative text decorations can be really helpful when reading code. Other examples are rendering CSS colors inline, rendering of tables and formulas in emacs, ...
10
mrkgnao 37 minutes ago 0 replies      
> The problem is that ligature substitution is dumb in the sense that it only considers whether certain characters appear in a certain order. It doesnt have any awareness of the semantic context.

True, which is why Iosevka has language-specific ligatures that a sufficiently smart editor (I think the "JS types", like Atom/VSCode, have CSS for this) can use to ligate (?) intelligently.

11
Arcanum-XIII 3 hours ago 1 reply      
It seems there is some confusion for the author between the display part and the on disk part. text will not be saved with the ligature, it will still be pure unadorned text when saved... and for the confusion that could arise between a simple quote and a typographic one, it's not coming from the font but from the editor rendering engine (word does those kind of change, IntelliJ does not for example, but both can display correctly ligature)
12
VeejayRampay 15 minutes ago 0 replies      
Am I the only one thinking that "www" using Fira Code is unreadable?
13
frou_dh 2 hours ago 0 replies      
Ligatures in programming fonts really seem like something you'd only get into in the midst of a hardcore procrastination bout.
14
anorakoverflow 1 hour ago 0 replies      
The author's point about "dumb" ligatures doesn't really hold up: while the "fi" ligature will always mean "f followed by i", its use is not always correct.

For example, in German compound nouns, you do not set a ligature across the boundary between the two components. For instance, "Kaufläche" (Kau: chewing, + Fläche: area) should be written with the ligature, while in "Kaufleute" (Kauf: purchase, + Leute: people, = merchants) the ligature should be avoided.

15
sk0g 2 hours ago 0 replies      
Q: Should random people on the internet dictate what you do and like? A: Hell no

I quite like the ligatures that come with Fira Code, and most of the author's issues are not applicable to it.

Maybe they should spend more than 5 minutes trying things out.

16
qarioz 1 hour ago 0 replies      
Ligatures are for personal usage only. If you are doing presentation to me and use ligatures, I will judge you.
17
warrenm 5 hours ago 3 replies      
I didn't even know those were a thing

But in a monospace font? No. No. No. no. No.

Epicly bad plan to use something like that to display code

17
I hacked my body for a future that never came theverge.com
126 points by ValentineC  16 hours ago   128 comments top 23
1
Terr_ 14 hours ago 7 replies      
> Another panelist shut them down immediately: doctors and scientists aren't even close to solving the phantom pain and limited mobility that most amputees face, let alone building something uniformly better than a human limb.

To repost a slightly tounge-incheek description of features your "standard model" has:

1. Supports a very large number of individual movements and articulations

2. Meets certain weight-restrictions (overall system must be near-buoyant in water)

3. Supports a wide variety of automatic self-repair techniques, many of which can occur without ceasing operation

4. Is entirely produced and usually maintained by unskilled (unconscious?) labor from common raw materials

5. Contains a comprehensive suite of sensors

6. Not too brittle, flexes to store and release mechanical energy from certain impacts

7. Selectively reinforces itself when strain is detected

8. Has areas for the storage of long-term energy reserves, which double as an impact cushion

9. Houses small fabricators to replenish some of its own operating fluids

10. Subsystems for thermal management (evaporative cooling, automatic micro-activation)

2
xaa 10 hours ago 2 replies      
Obviously this article is full of self-modification for fairly whimsical reasons. It's easy to make fun of.

But this kind of problem is more serious and discouraging when you think about aging. You know cells in each organ are going to mutate, get protein aggregation, fibrosis, and so on. Instead of trying to figure out something to do about each of those things individually at the molecular level, it might seem simpler to just replace organs periodically. Every 20 years, just get a new heart, etc.

Nontrivial and risky, of course, but perhaps less so than the even less-developed alternatives. But as TFA points out, surgery is hard on the body, and the body likes to reject transplants, be they biological or mechanical. Integrating vasculature is hard, integrating nerves is REALLY hard.

So in short, I think even if these particular applications are frivolous, hopefully people who are doing this will help push forward knowledge on the general question of "how do you add/replace/integrate body parts safely and robustly". They are truly pioneers, in all senses of the term -- they're pushing forward the frontier at great personal risk.

3
elvinyung 2 hours ago 0 replies      
The bit about wearables reminds me of this quote from Neuromancer:

> He stared through the glass at a flat lozenge of vatgrown flesh that lay on a carved pedestal of imitation jade. ... it was tattooed with a luminous digital display wired to a subcutaneous chip. Why bother with the surgery, he found himself thinking, while sweat coursed down his ribs, when you could just carry the thing around in your pocket?

4
leggomylibro 9 hours ago 1 reply      
Yeah, I got mine out after a few years. It lost sensitivity as magnets do, and I didn't want to pinch a nerve. They have new ones that are much slimmer now, but also much more difficult to remove...I passed.

We actually do have promising 'batteries' for this sort of application as of 2017, though; lithium hybrid supercapacitors. They're roughly an order of magnitude worse than Li-Po batteries today, in energy density and cost. But the advantages over LiPos are a higher ~10C charge/discharge rate, no thermal runaway "Galaxy S7" issues, and a maximum charge retention rate on the order of 80-95% after 10,000 cycles should let them last decades at least. A very compact 100F/24mAh module is available today at 1.8-2.7V from Murata, and I think Taiyo-Yuden has offerings too.

I think we're still making progress. Anyone buying one of those magnets knew they were buying into an alpha version of a product; it's not like we asked the FDA to approve things first.

5
nextlevelwizard 2 hours ago 0 replies      
The thing is, there haven't been many (if any) technological breakthroughs in the past 7 years (since the end of 2010) that would conceivably be used for biohacking. The title is bad and the author should feel bad. How did she think a magnet in her finger would "prepare" her for the future? It was a gimmick and it worked for years. What more could she ask for?
6
Mothra555 2 hours ago 0 replies      
It's merely a future that is much further out. Scientists and doctors aren't even close to being able to interface man-made technology with nerve endings and brain waves in a way that could compete with or enhance the natural. The same is true with the AI and machine learning that is supposedly due to destroy humanity any minute now. It's all a bunch of hype.

I quit subscribing to Popular Mechanics and the like many years ago because they were always so hyped with the next big breakthrough that never happened. It was science fiction (which I do love to read, but not when it is proclaimed as reality).

People buy into these cults and do bizarre things like this all the time, people want so badly to believe the comic books.

The narcissism of man is on full display with things like this. Do you realize just how little we understand about anything? Any field of science, medicine, exploration, psychology, philosophy. All whose pillars of truth are overturned regularly. I wonder why people are so stupid as to believe the next thing the surgeon general recommends them to eat or not eat, just to have that completely changes 5 years later. People basing their child rearing on psychologists whose studies cannot be reproduces almost 90% of the time.

Here is my philosophy. Wither you believe in creation or evolution, look back to learn how our bodies should stay healthy. The things we should eat and not eat. The exercise we should do. Look back before man started screwing things up (I'm talking hunter gatherers here). Use common sense when taking care of the planet. Look how interdependent everything is and don't screw that up.

Use your head, don't listen to someone else's hype. They are just trying to make a buck or gain power and influence.

7
beambot 1 hour ago 0 replies      
Body hacking of this nature hasn't gone away... It's just getting more sophisticated with actual medical uses. For example, a paper I coauthored at RFIC 2017 based on work done at Google[x] / Verily Life Sciences:

http://ieeexplore.ieee.org/document/7969066/?reload=true

Abstract: A wireless system-on-chip with integrated antenna, power harvesting and biosensors is presented that is small enough, 200µm x 200µm x 100µm, to allow painless injection. Small device size is enabled by: a 13µm x 20µm 1nA current reference; optical clock recovery; low voltage inverting dc-dc to enable use of higher quantum efficiency diodes; on-chip resonant 2.4GHz antenna; and array scanning reader. In-vivo power and data transfer is demonstrated and linear glucose concentration recordings reported.

Edit - PDF copy of paper: http://www.travisdeyle.com/publications/pdf/2017_rfic_implan...

8
LeoNatan25 13 hours ago 2 replies      
Just the description of the nonsense these two have done gave me a shiver. Toward the MRI part, I prepared for the worst. Had he forgotten about the magnet, it could have gone really badly.

And I think to myself, what a ridiculous world.

9
FRex 12 hours ago 2 replies      
I came to the same conclusion instantly, the moment I first encountered the finger magnet idea on YouTube: "Cool, but I want a ring that does that, one I can tune, replace, and put on and take off at will...".

Alternatively: nanites like in DX1 and DX2.

10
mirimir 8 hours ago 0 replies      
I did the opposite, in a sense. I ignored my body for a future that did come, contrary to expectation. During the 60s-70s, every year without nuclear holocaust seemed like a gift. Now I deal with it. So it goes ;)
11
knolax 15 hours ago 1 reply      
I checked out the biohack.me forum mentioned in the article and it's really disappointing that the only implants currently being tried are either magnets or NFC/RFID chips. I was expecting to find a vibrant sub culture with all sorts of new ideas for implants being tried out.
12
tzs 8 hours ago 1 reply      
Would it be possible to re-magnetize an implanted magnet that has weakened over time? Or would that require exposing them to very strong magnetic fields, which would rip them out of the body? (I've seen videos of magnet manufacturing, and when they exposed the new magnets to strong fields to magnetize them, they were always firmly clamped in place so they would remain stationary).
13
im3w1l 1 hour ago 0 replies      
I feel like a magnetic piercing would be more practical than putting it fully under the skin.
14
anotheryou 13 hours ago 3 replies      
North, electromagnetic fields... Before we implant anything, we need an interesting small sensor in general.

Is there any small sensor that does anything interesting for me? Requirements: for prosthetic learning it must react fast, and the signal should change with muscle/body movement.

How I wire the signal to my body is another issue, but first show me an interesting sensor. The best I can come up with is remote temperature sensing - still boring.

15
rubatuga 15 hours ago 1 reply      
Didn't people do this to seem counterculture/edgy? I mean, it looked kind of cool the first time I read about it, but that was only because of the novelty factor. That sixth sense doesn't really help us, and I'm pretty sure that more animals would have a magnetic sensor if it were beneficial. Some pigeons have magnetic sensors to determine their orientation to the earth.
16
zitterbewegung 7 hours ago 0 replies      
I think that biohacking has lost its initial hype, but it has a chance to cross the chasm if it gets a killer app - or even creates non-invasive ways to perform the current set of tasks.
17
FullMtlAlcoholc 14 hours ago 2 replies      
I never asked for this.

If you can perform all these tasks with your smartphone cheaper and without invasive surgery, you can't really call it an augmentation or upgrade. Frankly, if implantable magnets and NFC/RFID chips are the full extent of the contributions of the biohacking community, then the community completely lacks imagination and creativity.

We can use radio waves to measure heart rate and breathing and use that data to gauge a person's emotional state. It would be interesting to see someone hack together an internal sensor that does this and have it attached to implantable LEDs, so your body could "glow" particular colors based on your emotional state. Or use electronic ink tattoos to make an animated GIF on your body.

Just copying the same two uninspired trends isn't what I would call hacking.

18
p1mrx 13 hours ago 6 replies      
Has anyone attempted a modification (electronic or pneumatic) that lets you hear your own heartbeat 24/7? That might be useful for monitoring and controlling your health and emotional state.
19
Mz 13 hours ago 1 reply      
> one futurist gushed over a visually striking prosthetic arm, musing that it might be worth amputating for the upgrade. Another panelist shut them down immediately: doctors and scientists aren't even close to solving the phantom pain and limited mobility that most amputees face, let alone building something uniformly better than a human limb.

You can upgrade your current arm by eating better and going to the gym. If that isn't enough to get the visually striking look you are after, tattoos and the right clothes can help.

I can't believe there are nutcases in the world ready to embrace becoming a Borg. This is bizarre.

20
PhasmaFelis 11 hours ago 1 reply      
Magnets wear out? I didn't know that. How come?
21
booleandilemma 12 hours ago 3 replies      
"Hacking your body" the way this article describes seems more pathological than anything.

It's like people who fetishize getting amputated.

22
analognoise 12 hours ago 0 replies      
Not many people are impressed enough with this silly shit to have it done to them.

You know, I don't often say this, but the general populace was right on this one.

23
micimize 3 hours ago 0 replies      
The author throws their uninformed and limited view of injectables against their previous empty and uninformed optimism for wetware grinding, due to their #fingermagnetennui (which, to be fair, is No Joke), and is saddened.

Includes such deep insights as:

Stitching a cellphone into your arm maybe isn't the future

Replacing body parts for fun doesn't make sense yet

biohacking is in decline because Trump was elected

18
The idiot's guide to special variables and lexical closures (2003) [pdf] flownet.com
45 points by lisper  14 hours ago   11 comments top 2
1
taeric 9 hours ago 3 replies      
I've heard that dynamic scoping was considered faster than lexical for a time. Anyone know of any papers that would justify such a stance?
2
thesmallestcat 9 hours ago 2 replies      
What are the practical differences between special vars in Lisp and dynamic vars in Clojure? The obvious ones are that `def ^:dynamic` can't be local to a function, and that Clojure uses a separate form, `binding`, rather than `let`, to set their values.
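
For readers unfamiliar with the distinction, the essence of a special (dynamically bound) variable can be mimicked in Python with a context manager - a rough sketch of the flavor only, not Lisp or Clojure semantics (thread-local rebinding, declarations, etc. are omitted):

    import contextlib

    _special = {"*out*": "stdout"}  # current dynamic bindings

    @contextlib.contextmanager
    def binding(name, value):
        old = _special[name]
        _special[name] = value      # establish the new binding...
        try:
            yield
        finally:
            _special[name] = old    # ...and unwind it on exit

    def report():
        # Sees whatever the *caller* has bound, not a lexical capture.
        print("writing to", _special["*out*"])

    report()                         # writing to stdout
    with binding("*out*", "logfile"):
        report()                     # writing to logfile
    report()                         # writing to stdout
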
19
E-Commerce as a Jobs Engine? One Economist's Unorthodox View nytimes.com
36 points by prostoalex  13 hours ago   18 comments top 8
1
Spooky23 10 hours ago 1 reply      
This sounds to me like reading statistics without understanding them, or finding a gap in BLS numbers.

I worked at CompUSA in the mid 90s, which was a big box format computer store similar in size to a Bed Bath and Beyond or Staples. We had like 75-100 employees, 5-7 were salaried, and only 15-25 were full time. Most part-timers worked 25-28 hours a week. (They wanted to avoid state health insurance requirements which kicked in at 30)

I don't think that Amazon is replacing those manhours 1:1. I'd guess that they need 1/5 of the manhours. Most of the supporting people (UPS, FedEx, LTL freight, soda machine guy, cleaners) are a wash between retail and ecommerce.

The retail->logistics displacement hurts a lot of marginal workers like kids, single mothers and others. It also creates a permanent underclass of temp workers who as contractors will always be treated worse than employees. There isn't much of that in retail.

2
jondubois 1 hour ago 1 reply      
The only real progress brought on by technology so far (in terms of human happiness) is in medicine.

Other than that it has mostly served to give people options that they don't really need.

If technology is not freeing up people from their jobs, then I think it isn't delivering the real progress that humanity needs.

Personally, I feel that the jobs have gotten worse especially in the last 5 years. I used to be an all rounder software engineer and these days I find myself more and more being either a front-end developer or a back end developer depending on the company... And the job is becoming increasingly tedious; that's why I change jobs every 6 to 12 months. I get paid more but the job gets more boring.

Most people in the West shouldn't need to work but instead of coming up with UBI and taxing rich shareholders more, the government prefers to invent increasingly narrow and meaningless jobs to keep the masses busy.

I think that's why Bitcoin is so valuable, everything in our system has become so extremely inefficient and low in value that now the only thing required for something to actually have value is the fact that it can get people's attention.

If something can get people's attention and keep them busy, then wealthy people and big companies will pour some of their infinite money into it.

3
azemetre 10 hours ago 2 replies      
E-commerce has added 178,000 jobs since 2002, while department stores have lost 448,000 in the same period, according to the US Bureau of Labor Statistics. I haven't read the paper the article is referencing, but it appears to be very liberal in how it defines e-commerce jobs.

https://www.nytimes.com/interactive/2017/07/06/business/ecom...

The job parity isn't the same. I don't honestly believe the rhetoric of "other jobs will be created over time in new sectors." You can do so much more with less, and it's only going to get worse for rural and suburban communities because it appears e-commerce is mostly growing in large metro areas.

4
moomin 1 hour ago 1 reply      
Sounds like he's not comparing like with like: he's taking the greater reach of the tech industry and comparing it with a more restricted reading of the previous industry.

That isn't even the worst thing with this reading: we know that the availability of decent jobs is going down. If, as he says, e-commerce is pushing them up, he needs to demonstrate a previously unidentified force, larger than the development of e-commerce, pushing in the opposite direction. I find it highly likely that such a force exists.

5
hn_throwaway_99 10 hours ago 2 replies      
I'm skeptical. As pointed out in the article, automation at fulfillment centers will ramp up quickly. Warehouses are pretty easy to make into "clean" environments, and automating them should be a lot easier than, say, driverless cars or fruit picking.

That said, one thing that makes sense to me is warehouse workers are a highly efficient use of labor, and as such it makes sense that it pays better than retail. Pretty much all of their time is spent doing something in service to a customer, while in retail there is a ton of standing around doing nothing, or worse, pestering customers with lots of "Can I help you with anything?"

6
baybal2 1 hour ago 1 reply      
Americans never bought into eCommerce.

Just like the US lagged in smartphone and mobile internet adoption until Apple came along with its smartphone, the US will be out of reach for the industry until eCommerce gets its "iPhone moment".

7
mozumder 4 hours ago 0 replies      
Probably a better way to figure out the real employment numbers behind eCommerce is to trace the economy behind an average person's yearly consumption with and without eCommerce. If the average person is spending more with eCommerce, then you're going to have more employment due to the bigger economy behind it. (Assume $1000 spent by the average person in both cases results in equal employment.)

You can then trace second order effects by comparing where the employment is occurring and in what industries.

8
XiaomiFan 9 hours ago 1 reply      
20
Virus, the Beauty of the Beast viruspatterns.com
8 points by dvt  4 hours ago   2 comments top
1
raphlinus 3 hours ago 1 reply      
This is excellent and I have no idea why the 5 or so previous submissions failed to gain any traction. The topic is fascinating, and the style of presentation (video coordinated with interactive 2D and 3D animated models) is absolutely of interest to hackers.
21
Bitcoin Cash Starts Trading trustnodes.com
105 points by Andrew_Quentin  18 hours ago   61 comments top 10
1
hudon 13 hours ago 7 replies      
A fork like Bitcoin Cash will be a boon for the blockchain community.

There is a pretty big subset of the community that believes Bitcoin is a panacea that will replace currencies, payment networks and so on, regardless of how technically inferior the blockchain is to other, more mature distributed databases and networks. They believe one day the blockchain will be just as efficient or even more so. This subset is vocal about the urgency of increasing Bitcoin's block size limit so that we can increase transaction throughput as much as possible and as soon as possible.

On the other hand, most developers who have worked on Bitcoin proper (either protocol development or Core node development) believe that Bitcoin is more about financial sovereignty and censorship resistance, not as an in-place replacement of PayPal or VISA. This group wants to find as many ways to scale the blockchain without increasing the block size limit because increasing the size of blocks puts at risk users' ability to validate the chain. This is because larger blocks means more resources required to transmit, validate and store blocks and if you cannot validate blocks, then you are trusting transaction validators (miners) just like you trust PayPal. Risking the ability to validate the chain is risking the financial sovereignty or censorship resistance they value so much.

Regardless of how well some claim SegWit2x is doing, the truth is that once SegWit is activated, we still have to face the 2x hard fork, which many people in the second camp will simply refuse to support.

Having said all this, Bitcoin Cash represents an earnest understanding that there are two camps in Bitcoin, and because of the differing economic visions, they will have different technical visions. So why not have two chains and evolve them independently, rather than playing tug-of-war with both parties dissatisfied?

2
TD-Linux 15 hours ago 0 replies      
For those unaware, "Bitcoin Cash" is a proposed future fork of Bitcoin. The primary miner supporting it also runs an exchange, so what they are effectively trading is "future promised coins" once they start actually mining the fork.

Because such a small % of the total coins are tradeable, I think this results in a similarly volatile "market cap" as many ICOs.

3
encryptThrow32 11 hours ago 1 reply      
The dangers of tx replay mean that HFs like this can never be safe. Beware those who would tell you that these forks are safe; they are not. This is another scam from those who would try to usurp the blockchain.

You will be able to replay original Bitcoin and ABC txs on each chain, unless you opt in to some funny new untested hash. This will hugely disrupt the minority chain, ABC, as the mempools on the Cash chain fill with other valid txs from the main chain. It's going to be a bloodbath. Steer very clear!

From: https://bitcoin.stackexchange.com/questions/56867/bitcoin-ca...

Bitcoin Cash (aka Bitcoin ABC aka UAHF) provides two methods of replay protection, both of which are opt in. If you do not create transactions which use these features, then your transactions are vulnerable to replay.

The first method is a redefined sighashing algorithm which is basically the same as the one specified by BIP 143. This sighash algorithm is only used when the sighash flag has bit 6 set. These transactions would be invalid on the non-UAHF chain as the different sighashing algorithm will result in invalid transactions. This means that in order to use this, you will need to transact on the UAHF chain first and then on the non-UAHF chain second.

The second method uses an OP_RETURN output which has the exact string:

"Bitcoin: A Peer-to-Peer Electronic Cash System" as the data of the OP_RETURN. Any transaction which contains this string will be considered invalid by UAHF nodes until block 530,000. This means that prior to block 530,000, you can split your coins by transacting on the non-UAHF chain first with the OP_RETURN output, and then transacting on the UAHF chain second.
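
As a toy illustration of that second scheme (a simplified model in Python, not real node code - the transaction structure here is made up for the example):

    MARKER = b"Bitcoin: A Peer-to-Peer Electronic Cash System"
    SUNSET_HEIGHT = 530_000

    def uahf_rejects(tx, block_height):
        """UAHF nodes treat any tx carrying the marker in an OP_RETURN
        output as invalid before the sunset height."""
        if block_height >= SUNSET_HEIGHT:
            return False
        return any(kind == "OP_RETURN" and data == MARKER
                   for kind, data in tx["outputs"])

    tx = {"outputs": [("P2PKH", b"..."), ("OP_RETURN", MARKER)]}
    print(uahf_rejects(tx, 480_000))  # True: only the legacy chain accepts it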

4
unabridged 1 hour ago 0 replies      
If you're going to fork Bitcoin, it should be to change the algorithm so that 5 chip companies don't control the entire mining process.
5
eterm 15 hours ago 7 replies      
Is there a crypto voting ring? I understand why most crypto articles reach the top 10 posts but this one has nothing going for it.

* Niche website linked / breaking the news

* Non-mainstream crypto-fork

* No novel interest / features

There's nothing to suggest why this would be upvoted, even among this crypto-friendly crowd.

6
saurik 16 hours ago 0 replies      
https://www.reddit.com/r/Bitcoin/comments/6hko7c/viabtc_will...

^ Here is some information (mostly in the form of a fragment of a video, but also to a much much lesser extent some comments) on what seems to be earlier versions of this plan, which I found useful for context (though I haven't spent the time to work this all out in my head yet...).

7
SeriousM 3 hours ago 0 replies      
As a Bitcoin newbie, I'm even more confused now than before, and I don't know if I should start with Bitcoin at all...
8
Artlav 13 hours ago 1 reply      
That sounds like a massive scam in its pumping stage.
9
ZeusNuts 13 hours ago 2 replies      
I don't think this is a joke, but it should be. Forking a chain will never result in value-added.
22
GitLab 9.4 Released with Related Issues and Web Application Monitoring gitlab.com
73 points by jfreax  17 hours ago   10 comments top 3
1
Walkman 3 hours ago 2 replies      
I was really optimistic about GitLab's vision and really liked how they shipped a ton of really good features, but they've reached feature creep, and I'm pretty sure they will never get "usably fast" because maintaining all of this is a huge burden.

Also, if you have tons of features like these, I think it's impossible to get all of them right, and in the end you have a lot of mediocre/poor features and few good ones.

The other thing is user experience. Who will know all these features, especially when setting some of them up is already hard and needs experts...

If you want to be everything for everyone, it's mostly impossible or you get a very complicated, unusable and/or slow thing.

I'm curious for counterarguments to these points.

2
josh64 2 hours ago 1 reply      
The new beta navigation is so much better than the current navigation model. The navigation context is now easier to see and I find the colour quite attractive. :)

I can't wait until it is enabled by default in future versions.

3
Eduard 2 hours ago 0 replies      
Did anyone give GitLab's metrics feature a try? Is it comparable to Nagios, Zabbix and the like?
23
The Colditz Glider wikipedia.org
19 points by todayiamme  9 hours ago   2 comments top 2
1
magicbuzz 38 minutes ago 0 replies      
Schloss Colditz apparently now has a museum. Be neat to see that.
2
locusm 3 hours ago 0 replies      
This story reminded me of a great board game in the '70s called "Escape from Colditz". Apparently there's a reprint! https://theboardgameshow.com/2016/10/18/escape-from-colditz-...
24
Cryptoeconomics: Paving the Future of Blockchain Technology hackernoon.com
99 points by doener  19 hours ago   74 comments top 6
1
pedrocr 15 hours ago 8 replies      
Before modelling cryptoeconomics, maybe someone should apply some sound basic economics to the existing blockchain. Having a cryptocurrency where the founder owns 5% of all the currency that will ever exist and deflation is built in and fixed seems to fail some fairly basic things we know from standard economics, before we even need to start modelling the impacts of cryptoeconomics. I just had an interesting discussion about this on reddit:

https://www.reddit.com/r/programming/comments/6okg5v/a_hacke...

2
Torai 16 hours ago 0 replies      
Cryptoeconomics: 150% more bs than standard economics and powered by Blockchain Theology.
3
ftlio 14 hours ago 1 reply      
What I think a lot of people will initially miss in the critique of money provided by blockchains (read: Bitcoin) is the topology of the network as helpful in modeling cryptoeconomics (read: economics).

Any meaningful definition of a blockchain should prove useful in modeling the systems of money that preceded it. I honestly don't know why people think bitcoins and dollars need separate economics.

4
EGreg 7 hours ago 0 replies      
My main question is:

Has anyone found how to solve the double-spend problem WITHOUT global consensus?

The original Ripple idea used trustlines. The nice thing about those is that you don't have the double-spend problem. Each entity issues its own currency, so to speak, making it credit-money rather than value-money. I really love the absence of a global ledger recording every transaction ever made - both from a privacy point of view and this one:

https://www.scuttlebutt.nz/stories/design-challenge-avoid-ce...

But the problem with trustlines is that it's hard for anyone to pay anyone, and the bandwidth of trustlines can be pretty small. Plus, you do have to have the permission of all the intermediaries to make a payment.

I would love to have each community issue its own currency without being able to control who pays whom, without the double-spend problem, and without a global ledger of all communities in the world. What are the latest solutions to this?

5
dreamdu5t 16 hours ago 4 replies      
Terrible article. So many suckers in this mania. 99% of people writing about, trading with, and investing in cryptocurrencies don't know how blockchain or bitcoin even works. It's worse than 1999. The hype and ignorance is astounding.
6
KirinDave 11 hours ago 0 replies      
It really stresses me out how folks have equated PoW algorithms in a chain with a general Byzantine Fault solution.

They aren't equivalent, and Bitcoin essentially lives and dies by the proposition that it is more valuable to the majority of miners alive than dead. The instant this is not true, there is the potential for a majority of miners to defraud the system and extract value until the value of the currency drops to 0.

And of course, the damage to the system just 2-3 colluding miners could do if they wanted is pretty significant, especially if the goal was DoS instead of theft.

25
The ages of distraction aeon.co
48 points by jonbaer  14 hours ago   7 comments top 2
1
Dowwie 11 hours ago 3 replies      
I find it difficult to watch many new movies, particularly of the animated variety, because they jump from one thing to the next within seconds.

Movies catering to technology-induced attention deficit disorder are junk food for the mind. Maybe it's time for information diet nutrition panels.

2
du_bing 8 hours ago 0 replies      
For a large group of people, such as a society, inattention may be good for innovation and mutation, while for a single person, attention is the best way to really do something in his life.
26
Bilingual Education: Potential Brain Benefits npr.org
59 points by curtis  15 hours ago   16 comments top 5
1
chadcmulligan 15 minutes ago 0 replies      
Anyone know if math / programming languages count?
2
ouid 14 hours ago 0 replies      
>"If it's just about moving the kids around," Steele says, "that's not as exciting as if it's a way of teaching that makes you smarter."

>Steele suspects the latter because the effects are found in reading, not in math or science where there were few differences.

This seems like evidence that bilingualism doesn't actually make you smarter with neuromagic, but the investigator chose to instead interpret it precisely the opposite way.

I think a simpler explanation is that reading is a skill which has a component that is invariant with respect to the language: identifying glyphs, pronouncing the words in your head, etc.

3
dmingoddd 14 hours ago 2 replies      
Fun fact: almost all Indians know 2 languages well - first their regional one, second Hindi. If they speak English, that's three.

Down south, Hindi is less prevalent, but 2 languages is the norm for kids there.

4
Temasik 7 hours ago 1 reply      
If you want an easy language to learn, learn Malay.

https://en.wikipedia.org/wiki/Malay_language

as proven by Chinese, Indians, Bangladeshis, Nepalese, Filipinos, Burmese, and Cambodians

5
dgut 10 hours ago 0 replies      
As with everything else, I suspect learning more languages will just make you good at exactly that: learning more languages.
27
16-Bit VM in JavaScript francisstokes.wordpress.com
73 points by mzehrer  17 hours ago   8 comments top 4
1
jasonhansel 7 hours ago 1 reply      
Congratulations! I've been working on a similar (though less well-documented) project in Rust: https://github.com/jasonhansel/kineticvm
2
mysterydip 10 hours ago 0 replies      
Something I've always wanted to do (16 bit as well, even) but never got around to. Thanks for the writeup!
3
pjmlp 15 hours ago 1 reply      
Very interesting read and props for choosing opcode D, S.
4
orionblastar 7 hours ago 2 replies      
Is this how they make emulators in JavaScript too? Might it be possible to emulate MAME and MESS machines, if they haven't already?
28
Converting floats to strings corsix.org
86 points by jsnell  18 hours ago   17 comments top 9
1
lifthrasiir 14 hours ago 0 replies      
> Often the problem is framed as "converting a floating point number to the shortest possible decimal string representation", but this framing is neither necessary nor sufficient for implementing the %e / %f / %g formats of sprintf.

This sentence is crucial for anyone wants to implement the float-to-decimal conversion. There exists a Grisu-inspired analogue for this problem but to my current knowledge it is not well described in any known literature. I had to fully analyze the analogue and recreate my (slightly improved) version of the algorithm for Rust [1]. It is not too complex once you understood but annoying enough to derive again.

[1] https://github.com/rust-lang/rust/blob/f8d485f/src/libcore/n...

2
corsix 14 hours ago 0 replies      
The end of the post alludes to part 2 ("how to adapt nd_print into something which can behave like the %e, %f, and %g formats of sprintf"), which alas I've yet to get around to writing. There'll also be at least one more part after that detailing the bag of performance tricks which can be used to turn the described algorithm into something very competitive. Alternatively, you can jump to the conclusion by just reading the implementation I wrote for LuaJIT [1], though obviously that won't give you the narrative of _why_ things are the way they are.

[1] https://github.com/LuaJIT/LuaJIT/blob/cf2dfaf3b4eef9b2de32d3...

3
to3m 12 hours ago 1 reply      
Similar approach, I think, by Russ Cox: https://research.swtch.com/ftoa

Starts with an int->string conversion, then does the rest with string manipulation - strings, of course, being bignums with a somewhat wasteful BCD representation.

4
panic 13 hours ago 0 replies      
musl libc also uses base-1000000000 bignums for float formatting: https://git.musl-libc.org/cgit/musl/tree/src/stdio/vfprintf....
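
A hedged illustration of why that base is convenient (this is not musl's actual code): each 32-bit "limb" holds a value below 1000000000, so it corresponds to exactly nine decimal digits, and converting the bignum to text reduces to one zero-padded printf per limb.

  #include <inttypes.h>
  #include <stdio.h>

  /* Print a bignum stored as little-endian base-1000000000 limbs. */
  static void print_limbs(const uint32_t *limb, int n) {
    printf("%" PRIu32, limb[n - 1]);       /* top limb: no padding */
    for (int i = n - 2; i >= 0; i--)
      printf("%09" PRIu32, limb[i]);       /* lower limbs: exactly 9 digits each */
  }

  int main(void) {
    /* 2^64 = 18446744073709551616, split into 9-digit groups from the right */
    const uint32_t x[] = { 709551616, 446744073, 18 };
    print_limbs(x, 3);
    putchar('\n');
    return 0;
  }
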
5
userbinator 13 hours ago 0 replies      
Interesting. This alternative algorithm is very reminiscent of conversion to https://en.wikipedia.org/wiki/Binary-coded_decimal .
6
nerdponx 15 hours ago 0 replies      
I love learning about all the complexity behind things we do every day and never stop to think about.
7
gumby 10 hours ago 4 replies      
I just have to whine that I have always hated the construct "convert to string".

You don't convert a car to a photograph when you take a picture, and neither do you do any "conversion" when you construct a string that represents the same value as is represented by the bit pattern of your double or float.

This sounds like some weird linguistic pedantry, and perhaps it is, but I suspect this usage causes confusion for a number of beginning programmers.

8
orionblastar 7 hours ago 0 replies      
In 1986 I did this in Turbo Pascal 3.0 for DOS. The round function had a bug, so I made my own nround that converted the floating point number to a string, parsed the string to do the rounding correctly, and converted it back to a float. I was the only person in class to get the correct answer, and I was graded down by my professor and accused of hacking.
9
khanan 16 hours ago 1 reply      
Now, in Python3... :D
29
Robots from Gurgaon-based GreyOrange techinasia.com
37 points by williswee  14 hours ago   6 comments top 4
1
payne92 13 hours ago 0 replies      
If they want to come to the US at some point, Kiva's patent portfolio will be a big hurdle to clear: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=H...
2
senthilnayagam 4 hours ago 0 replies      
Patents can be bypassed; it just takes R&D money and good patent lawyers.

Alternatively, if these can be argued to be essential patents for the warehousing industry, compulsory licensing or a royalty can be negotiated.

But both paths would take a couple of years. Let's wait and watch this space and see how it plays out.

3
0xbear 13 hours ago 0 replies      
I wonder what Bezos and Kiva/Amazon Robotics think about all this. :-)
4
rebootthesystem 13 hours ago 1 reply      
I thought Amazon had the concept of picking up shelves and moving them around locked up due to the Kiva patents. Is that not true?
30
Pascal at Apple fogus.me
153 points by janvdberg  1 day ago   119 comments top 14
1
kabdib 1 day ago 5 replies      
I joined Apple in 1987, about the time they started ditching Pascal in favor of C. C++ (in the form of CFront) was just starting to be a thing.

Apple's Pascal had been extended to the point where there were few true differences between it and C, other than

- strings with busted semantics (making the size part of the type was a huge mistake, leading to a proliferation of types like Str255, Str32, Str31, Str64, etc; see the sketch at the end of this comment). I should add that C's strings were semantically busted, too, and in more dangerous ways. No way to win :-)

- nested procedures (not terribly useful in practice, IMHO)

- an object syntax, used for Object Pascal and MacApp (a complete, though large and somewhat slow app framework).

- some miscellany, like enums and modules

Apple extended Pascal pretty extensively, adding pointer arithmetic, address-of, variant function calls, and a bunch of things I've forgotten. I could write some Pascal, then write some C, and squint and they'd look pretty much the same. Most people shrugged and wrote new code in C if they were able, and then moved to C++ when CFront became usable.
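
To make the Str255 complaint concrete, here is an illustrative C rendering of the layout (not Apple's actual Pascal declarations): byte 0 holds the length, so the maximum length is baked into the type, and every buffer size needs its own type.

  #include <stdio.h>
  #include <string.h>

  /* Pascal-style string: s[0] is the length, s[1..255] the characters, no NUL. */
  typedef unsigned char Str255[256];

  static void c_to_pascal(Str255 dst, const char *src) {
    size_t n = strlen(src);
    if (n > 255) n = 255;            /* silent truncation: the classic failure mode */
    dst[0] = (unsigned char)n;
    memcpy(dst + 1, src, n);
  }

  int main(void) {
    Str255 s;
    c_to_pascal(s, "hello");
    printf("len=%u text=%.*s\n", (unsigned)s[0], (int)s[0], (const char *)(s + 1));
    return 0;
  }

Since arrays of different sizes are different types in Pascal, a 31-byte buffer can't be a Str255 -- hence the Str31/Str63/Str64 proliferation the comment describes.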

2
thought_alarm 1 day ago 4 replies      
I'm reminded of a really great interview with Bill Atkinson where he describes (among many other things) how he initially brought Pascal to Apple and the Apple II.

https://youtu.be/6tUWoy1tJkE?t=45m

The Pascal bits are from 45:00 to about 50:00.

 ... My manager at the time said, no, we don't want to do this [Pascal], people are happy with what they got. I overrode him and went to Jobs, and Jobs said "Well, I'm not convinced. I think our users are happy with BASIC and assembly language. But you seem passionate about it. I'll give you one week to prove me otherwise." I was on an airplane within two hours down to UC San Diego and I started porting right away. ... The other thing that happened then is I had to plug in the disk routines, and their system was pretty big and that little 13-sector floppy disk didn't have a lot of capacity. Well, Woz had just come up with a different way of encoding the data on the disk so that we could get more data for the same disk size, and we needed the 16-sector disk routines. And so Woz came down, and I was there... I had never bothered to get a motel because I slept on the bench when I wasn't working. This is in the computer science lab at UC San Diego. I was busy, I didn't have time to go sleep. But Woz came down, and I got to interact with him and it was really fun because he was working on installing these 16-sector disk driver routines, and he'd go 'type type type type type' -- and he didn't type in assembly language and have it assembled. No, he'd type in 6502 machine code. Hex. -- He'd type in hex, and then, you know, watching him type and he'd go 'type type type' -- pause -- 'type type type type', and when he finished I asked him what was the pause? And he said "forward branch, seven instructions, I had to compute the offset before I continued". So, he didn't back-patch the offset, he actually looked at what he was going to be typing, knew how many bytes it would take... he was brilliant.

3
rcarmo 1 day ago 7 replies      
I wrote a fair amount of Pascal in my 680x0 Mac days, both in MPW (the Macintosh Programmer's Workshop) and THINK Pascal. Back then Modula-2 was available on VAXen and "big" machines, but Pascal was almost "portable" across Mac/PC/VAXen and was amazingly fast, so it was pretty fun.

I eventually moved to C (also using THINK C - see the retrospective link below for a sample of those heady times) and never looked back until, a couple of weeks ago, I set up Lazarus for my kid to play with (there are too many Python GUI development options, and none halfway as good).

Lazarus is _amazing_ (if somewhat odd in today's world), and I really wish we had more IDEs like it instead of all the crappy Electron/web approaches for building desktop apps. It builds fast, tiny, entirely native apps in less than a second, and is an excellent example of how far Pascal went despite falling off the mainstream wagon.

(If anyone knows of anything like it for cross-platform desktop apps, let me know, I'd love to try it out)

- Link about early C dev on the Mac, that also mentions MPW and Pascal in passing - https://retrocomputing.stackexchange.com/questions/3213/what...

4
cjensen 1 day ago 8 replies      
I really miss Pascal; it was a great, safe language for beginners, and as it was extended with Objects and Modules it became genuinely good for development as well.

But there are good reasons it was surpassed by C. In early Pascal, you got a pointer by allocating memory; you could not get a pointer to an existing variable. You'd be surprised how often that gets in the way when implementing a data structure. Just try to implement the following function in C without using the address-of operator:

  struct list *head = (void *) 0;

  void push_back(struct list *entry) {
    struct list **p = &head;
    while (*p != 0)
      p = &(*p)->next;   /* note: &(*p)->next, not &p->next -- p is a pointer to pointer */
    *p = entry;
  }
Pascal got better. But once you've switched to C, the sheer verbosity of Pascal is bothersome. Instead of "{" and "}" Pascal uses "begin" and "end". It uses "procedure" or "function" to introduce a function.

There's no going back, but I wish it were still available for learners. Java is comparable in terms of programmer safety, but has too much ridiculous boilerplate just to write "hello, world".

5
robterrell 1 day ago 3 replies      
It may seem quaint now, but Apple Pascal was a serious tool. I took AP Computer Science in 1985 and the language taught was UCSD Pascal on the Apple ][+. (In the 80's, C on an Apple ][ was impossible. The only C compiler you could get was for a card that went in the expansion slot that included a Z80 processor.)

When I went to college in 1986, Pascal was the primary language used in all entry-level courses at Virginia Tech. (Turbo Pascal on an IBM PC -- $5 at the student stores, if you brought your own floppy. I'm the weirdo who brought a Mac Plus to school and used Lightspeed/Think Pascal.)

All of the classic Mac APIs used pascal calling conventions. Pascal continued to be the language used for serious Mac development for a long time.

I can't find any references via Google, but Apple had an internal language called "Clascal", which I was told was "Pascal with classes". Eventually Think Pascal adopted this object-oriented Pascal syntax.

Just today I was thinking about how great it was coding in Lightspeed Pascal, when I was trying to get VS Code to display ligatures. Lightspeed Pascal parsed the AST and auto-formatted all your code for you. Tabs became tab stops, like a word processor. I still miss that; hard to believe today we're still fighting about tabs v. spaces.

6
dfan 1 day ago 1 reply      
In order to run Apple Pascal on my Apple ][+, I had to buy a "language card". This was bigger than an index card (maybe 3 by 6 inches) and added sixteen whole kilobytes to your computer's RAM, beefing it up to a massive 64K and rendering it capable of running such a system hog as Apple Pascal. I think it was about a hundred bucks in the early 1980s.

Meanwhile, the Apple ][+ could only display 40 columns on screen, where of course by "screen" I mean "television". (You could buy another big card to give you enough memory to display 80 columns at a time, but who had the cash to make another huge purchase like that?). Of course, 40 columns isn't enough to write in a structured programming language with indentation like Pascal, and in fact the Pascal program itself supported logical lines of up to 80 characters.

This issue was resolved as brilliantly as you might expect. You could toggle between looking at the left half of your program (cut off at the 40-character mark) or the right half. I'm not kidding.

7
jzelinskie 1 day ago 3 replies      
I've been reading a lot about Niklaus Wirth recently. I read an interesting piece about Oberon, found in an HN archive [0], that mentions Oberon usage on Macs. I'm very tempted to buy "The School of Niklaus Wirth: The Art of Simplicity" after reading a few things about him. I wish there were more instances of "computing in a vacuum" like at ETH.

[0]: https://news.ycombinator.com/item?id=10058486

8
Animats 1 day ago 0 replies      
The page isn't rendering properly with ad blocking. The original memo is being served from Storify. Where is it from? The Internet Archive. Here's the original, which reads better directly from the Archive.[1]

[1] https://archive.org/details/Apple_Pascal_History_DTC_1992

9
malkia 1 day ago 0 replies      
I started with BASIC and some assembly (CALL -151) on my Apple ][ clone (Pravetz 8C), but as soon as I got my hands on an IBM PC/XT (or AT), Turbo Pascal (the 30-40 KB turbo.com) was just the right choice. It fit on one disk with plenty of room to spare, while a Microsoft C/C++ compiler and linker each took a whole separate disk.

The thing I loved best was the .TPU files (though I'm not sure whether it was TP4 or TP5 that truly had them). There were no .h files to include, or .lib (.a) files to add; it just worked, magically well (with some limitations).

I moved to C/C++ later for, well, a stupid reason. I was writing a "File Manager"-like app for DOS (just a single column, not like Norton Commander, FAR, or Midnight Commander), and the only function in Turbo Pascal 5.0 for moving files could only rename a file within the same folder... Had I known about inline assembly and been more brave, I would've stayed in Pascal land (and I was already familiar with Ralf Brown's Interrupt List)... But hey, that stupid reason moved me to C/C++, since the builtin function there did what I needed... then again, soon after that I started using more and more inline assembly.

I love C/C++ now (especially C), and while I used to be really good at Pascal, I might have some trouble reading Pascal code today. Delphi was my last stop, and while I liked it, I switched to video game development, where Pascal was not much used (Age of Wonders, I believe, was written in some form of Pascal, and possibly some other games. Part of Xoreax's IncrediBuild might've been too, especially the part that does the C/C++ header processing; we had issues with it once, and while debugging I found something Pascal-ish in there, but I don't remember the details now).

10
SwellJoe 1 day ago 0 replies      
My first programming language was (obviously) BASIC, but my second was Pascal. I took AP computer programming in high school and it was taught with Pascal on Apple II and IIe computers. My dad later bought me Turbo Pascal for the PC (I remember a yellow box with "+ Objects" on it, so it must have been 5.5 Pro in 1989), and I used it on his machine, but never did much with it other than tinker. I finally got what I viewed as a real programming setup when I got DICE (Dillon's Integrated C Environment) for my Amiga a couple years later. Still didn't do much more than tinker, though, until I got a Linux box a couple years after that and source code for everything was available for poking at.

Anyway, Pascal was very common in education back then, and Apple was very common in education... ergo, Apple and Pascal went together a lot of the time.

11
Lerc 1 day ago 0 replies      
Does the p-code compiler self-host? I've been working on an emulator for an imaginary 8-bit machine (AVR instruction set), and have been looking for language options to run on it.

In the last few days, I've gotten FreePascal compiling for it, but I would also like to have languages that I can compile or interpret on the machine itself.

12
swombat 21 hours ago 0 replies      
I learned to program with Turbo Pascal on my PC back in the early 90s. It's a light language, and fun.

And yet, when I clicked on this, part of me was really just hoping it referred to Nvidia's Pascal architecture, a hint that maybe they were finally dropping the Radeon line and getting some decent video cards into their machines.

One can but dream, I guess.

13
EvanAnderson 1 day ago 1 reply      
When opened without JavaScript, you see only the first paragraph and the timeline at the bottom. I almost skipped over this because I thought there wasn't anything interesting there.
14
carapace 1 day ago 0 replies      
Please, everybody, if you haven't read it, stop now, get a copy of "The Humane Interface", spend the weekend reading it, and then come in on Monday and apply it.
       cached 23 July 2017 10:02:01 GMT