Hacker News | vsingh's comments

The "Chinese mother" approach to raising children is based around motivations at the second-highest level of Maslow's hierarchy:

http://en.wikipedia.org/wiki/File:Maslow%27s_Hierarchy_of_Ne...

While this aggressive approach to parenting can be made to sound right on a certain dispassionate level, to some people it just feels intensely wrong in a way that's hard to explain. Why is that?

What happens is that children raised to heavily optimize "Esteem" have a hard time switching gears into "Self-actualization". It's no surprise that the "Chinese mother" disallows her child from starring in the school play. That would be a means of self-expression; it would throw a monkey wrench in the whole works.

I've found many times in life that in order to self-actualize further, I've had to give up things that others praised. I think that in quitting Google and joining a startup (despite her parents' likely disapproval), the author has taken a big step towards self-actualization.


This is not intended to exactly disagree, but when people cite the Maslow hierarchy, they often take it as an axiom that the hierarchy is exactly how people work, and that to lead better lives, people must fulfill exactly these needs in this order.

As someone who presumably subscribes to the hierarchy, would you agree that you seem fairly certain that this is how it works? And, if so, can you say why?

At minimum, I'd say that it's not obvious to me that the order specified by the hierarchy is really in evidence. For example, I believe I've witnessed a fair number of people I'd say were self-actualized and esteemed who are fairly short on the friendship/family/sexual-intimacy front.


No, I do not believe in a strict order imposed by the hierarchy. My view, which I've come to through studying both Maslow's Hierarchy and Dabrowski's Theory of Positive Disintegration, is that you're able to operate at the highest level you've so far achieved during your lifetime, even if one of the lower levels is missing. For example, one who had all five levels fulfilled during childhood will be capable of self-actualization in later life even during periods of physical or fiscal insecurity.


Some clarifications:

1. Presumably you don't have to satisfy everything in each level to be considered satisfying that level; otherwise if your entire family dies in a plane crash, you can never achieve MH3 again, and if you never got MH5 before that, you're permanently fucked.

2. If you get to MH4, but then your town gets hit by a neutron bomb, killing your wife, family, and friends, you get your MH3 knocked out, but your MH1, MH2, and MH4 are all still intact (MH4 because you telecommute most of the time, so your career is intact). Are you immediately eligible for MH5, or do you have to regenerate your MH3 first?

3. Interesting corner case: Say you get to MH3 by virtue of being raised in a rich American family. At age 18, you fly somewhere to visit a college by yourself, and your entire city gets blown away by a neutron bomb. You now have no friends or family, but life insurance on your family pays off handsomely, so you are at MH2. You decide to become a soulless driven businessman, and you do fine; you get promoted, everybody respects your competence, you respect other people's competence, etc, but you have no friends and no companionship. It would appear that you have reached MH4, but you never got there while you had MH3; does this count as MH4? Or do you deny the possibility of this scenario? (I made it up, so that's perfectly valid.)

(These are not leading questions; I'm curious as to how the Maslow viewpoint deals with corner cases.)


What's the point of a "hierarchy" then, if the "upper" levels do not depend on the "lower" ones? Basically what you're saying here is a tautology: one is able to operate at a "level" that, well, one is able to operate at.


It's clear that vsingh claims that you need to fulfill the lower levels to reach the higher ones for the first time.


>Some sort of ethnic autonomy that says Indian rule of India is better regardless of the lot of the common man?

I'm surprised you find this hard to understand. People throughout history have fought and died for self-rule.

Hell, anyone who's been a kid knows how much it sucks to be told what to do, especially when the person doing the telling thinks it's "what's best for you".


Hell, anyone who's been a kid knows how much it sucks to be told what to do, especially when the person doing the telling thinks it's "what's best for you".

It is not uncommon for the kid to eventually grow up and come to the understanding: "my parents knew what was best for me after all".


This may be a case of stretching a metaphor too far. Are you aware of any countries saying "Please, Britain, make us one of your colonies again!"?


Oh, I understand that. I just remembered some childhood experiences.

But while I certainly do not want to move back in with my parents and let them tell me what to do, I'm doing some things now that they tried to teach me when I was young and I thought those things to be stupid at that time.


Hong Kong.


And do you think that the people who shout for self-rule because of the biases of the primate brain are taking into account the death toll or looking at moving averages of standard of living for the common man?


It's not that interesting, but my sleep habits have been pretty poor in 2010. My resolution for 2011 is to be in bed by 1AM every single day, and to be up at 8AM on every non-weekend day.


Could anyone explain the "self-healing" algorithm in simplistic terms?

From what I gathered, when they compact a page of memory, moving all the objects within it to different locations, they will set a marker on all pointers to be "unset". Then, while program execution is still going on, the GC thread will be busily going through the pointers and correcting them to their new locations as necessary, then setting the marker flag. If, during this period, the executing code tries to use an unmarked pointer, a "read barrier" is hit in the VM, and the GC code corrects that pointer ("self-heals"), sets the marker, then allows execution to continue.

Do I have this right? What about the initial unsetting of all these markers? It would seem to require going through all pointers before you want to compact a page, and I would suspect they're being more clever than that.


You don't mark pointers, you unmap a page of virtual memory. This instantly invalidates all pointers to that page without touching them, and allows you to immediately reuse that physical memory by mapping it to a new virtual address. When you eventually dereference a pointer to an unmapped page the processor's MMU throws a page fault. The VM catches the fault and fixes up the pointer on the spot.

What I'd like to know is how you can guarantee that all garbage is eventually collected in a system like this, and how you can guarantee that you've fixed up all pointers to an unmapped virtual page so you can reuse it.


> how you can guarantee that you've fixed up all pointers to an unmapped virtual page so you can reuse it.

Maybe they don't.

amd64 effectively has a 48-bit addressing limit. You can unmap 1,000 4K pages each second for over two years before you need to reuse a page address.


If I'm reading the article correctly, they do fix up all the pointers on the next gc run. For every scanned pointer just check first whether it points to a stale page (and update it), then do the liveness marking. btw, they're using 2mb pages to cut down on page faulting overhead. And they're talking about terabytes of allocations per second (because they're compacting the first gc generation with this scheme, too), so virtual memory deallocation needs to happen somewhat timely. Furthermore at least the linux kernel never deallocates page tables, so no longer used virtual memory is definitely not free.


An interesting idea, but I'd imagine the tables you need to maintain to fixup pointers would become prohibitively large before you ran out of address space.


Perhaps the VM doesn't use tables but a pointer remapping algorithm, something along the lines of (page_address * large_prime_number) mod 2^48.


The problem is you're not just moving whole pages, you're moving and compacting the objects within that page. Each object in the page has a new offset in the new page (or pages).


Only if you had a very large number of undead per live instance.


So does that mean there are conceivable advantages to using the full 64-bit addressing scheme? What about 128 bits?


No, that means that only having 48 bits of physically addressable memory gives you the ability to have many times that amount of virtual memory -- and that virtual memory can be used to implement these fun page-faulting shenanigans without worrying about taking up any real memory address space. You have enough virtual memory for all your physical memory, and these tricks, and then some.


The GC process does a full mark-and-sweep collection, so eventually it will have processed the entire heap. At that point, all garbage that existed when the collection started will have been collected, although new garbage will have been created by the program in the meantime. Similarly, all objects that survived the collection will have had their pointers fixed, and new objects will have had the correct pointers in the first place.

The purpose of the read barrier is to allow the program to keep running while collection is happening. It lets the VM trap access to the parts of memory that the GC is working on, and do little bits of the collection algorithm in the program threads so they don't have to stop and wait for the GC thread to finish what it's doing.


I guess I don't understand how the mark phase can work while the program is running. If the program is constantly modifying references then how does the GC know when it's done marking?

If the program modifies an object that's already been scanned during this mark phase, does the GC have to go back and re-scan that object? If so, then how can you guarantee the mark phase will complete without stopping the program at some point?


That's definitely the hard part. :-)

From the interview: "We will find all the live objects in the heap in a single pass. We will never have to revisit any reference." So, no the GC never has to revisit any object, and it completes when it's scanned all live objects.

The read-barrier is what prevents the program from interfering with the marking phase. That bit about unmapping virtual memory happens later, when marked objects are being moved. As the GC walks through memory it marks references by setting (or clearing) a bit in the pointer. Meanwhile the program goes about modifying objects by copying references - first reading them into a register and then writing them back out to the object's storage on the heap. The read triggers the read-barrier. If the reference being read has already been marked, great. If not, the trap handler marks the reference before allowing the program to continue. The write operation then writes a marked reference on to the heap, so there's no need for the GC thread to return to that object.

Make sense?


How do you mark a single reference? Don't you have to mark all its children recursively, in the worst case scanning the whole heap (if this cycle just started and the GC thread hasn't gotten far yet) before returning control to the program?


Ah good catch. That detail isn't apparent from the interview. See this paper for more: https://www.usenix.org/events/vee05/full_papers/p46-click.pd.... Basically, references contain flags that indicate whether the GC thread has already visited that reference. When the read-barrier traps, the handler checks the flag. If the reference has been visited already, fine. If not, it gets added to the GC thread's queue of objects to visit. Then the program marks the reference as visited, and continues. New objects are always marked visited, and aren't collected until the following GC cycle, so the current mark phase is guaranteed to complete.


So when you're marking, you're working off a mark stack. You pop an object off the mark stack, scan it and mark each of its children, then put any newly-marked children onto the mark stack. Note that the mark stack separates when an object gets marked from when it is scanned and its children marked. So when the read barrier triggers, you can mark the object and put it on the mark stack, as if you'd just scanned its parent. The object's children will get recursively marked when the marker processes the object from the mark stack, but that can happen independently of the read barrier fault.


Marking is concurrent. The read barrier trap only has to mark the immediate reference. If the mutator that triggered the trap proceeds to reference one of the reference's children before the marker thread has gotten around to it, that will trigger another trap.


BTW, you can read more in Cliff Click's USENIX paper from 2005: https://www.usenix.org/events/vee05/full_papers/p46-click.pd...


Ah, that makes things clearer - thanks.

> What I'd like to know is how you can guarantee that all garbage is eventually collected in a system like this.

Me too. If you can't guarantee that, it seems that when you unmap a page of virtual memory, you'd have to make sure never to use that page again. You'd also have to keep around your table of "mappings from old pointers to new pointers" forever, just in case you encounter a lingering bad pointer and need to correct it.


Yes, exactly, in fact I was just editing my post to add that concern! Their garbage collector constantly scans the heap fixing up pointers so you don't have to wait for every reference to hit a page fault, but it seems difficult to guarantee that you're done if the program is twiddling references while the collector is doing its work. Perhaps there is also a write barrier which makes sure to never write a pointer to a collected page.


Most generational GCs (including HotSpot's, which is the origin of Azul's JVM) will have the JIT insert a write barrier for every pointer store. This is to keep track of cross-generational references (so you don't have to scan the entire heap).

The hard part has always been dealing with reads (which are much more common, and expensive to put a software barrier around), and Azul has quite brilliantly figured out a way to handle this both in their specialized hardware and now in their VM.


You're probably right. Although he doesn't mention the write barrier, it is usually required for a generational GC.


I think the trick is that after you complete the next mark phase, you have visited every possible pointer to the unmapped page.


Only if the program hasn't written a new reference to the unmapped page in that time. You need to check on writes too.


He gave a TED talk recently which covers these ideas: http://www.ted.com/talks/steven_johnson_where_good_ideas_com...


That's really neat. Bravo!

I've hit a roadblock in my attempt to install the proper Clojure SLIME environment in Emacs. I've got clojure-mode, SLIME, swank-clojure, and leiningen installed, and I get as far as the slime-repl showing up properly, but my Clojure forms seemingly get ignored by the swank-clojure process. It's frustrating. From my readings of various Google Groups, people often seem to have trouble getting this whole machinery up and running. While the developers have done their best to set up ways to install the whole shebang automatically, that tends to make it more difficult for those of us who like to download systems piece by piece and put them together ourselves.

For that reason, a new way to interact with Clojure excites me. However, I may not take the leap of installing this project immediately. The main thing I like about the SLIME environment is that you get the full power of your text editor even in the REPL, which is invaluable for playing around with complex forms. With this system, it seems that you can either evaluate expressions in Textmate (which is not a REPL and forces you to context-switch to another window) or connect to cake in a terminal (which doesn't give you the flexibility of a text editor).

Perhaps a special Textmate buffer that automatically pastes the output of the eval'd form after the cursor would give me what I want. I will clone the project repo and start looking into what I can do. Thanks for the work you've done.


Thanks for the feedback! Feel free to fork and try things out. I think you'll find that the project is quite small and manageable and each command is pretty self contained. We would love to hear about alternate interaction models.

One exciting possibility is nREPL, http://github.com/cemerick/nREPL. Since WebKit now supports WebSockets I have a feeling we could add many interesting behaviors into the Bundle w/o descending into Objective-C.


"Today, he was deep in his own personal maelstrom of defensiveness and hostility. His head was frequently down; in fact his whole posture betrayed his unhappiness. He frequently hid his hands behind his back — a classic defensive posture — when he wasn't clasping them in front of his stomach (another defensive posture)."

It was a defensive press conference, by nature. It's not like he's announcing a new product or something. Do you expect, or even want Steve to summon fake enthusiasm on command?

Say what you want about the "reality distortion field", but Steve is a profoundly honest guy, in the sense that he won't convey any impression other than that which he really feels. I bet he couldn't give a Stevenote about a toaster no matter how hard he tried.


The closest he came was when he introduced and demoed the Motorola ROKR. And even then he was visibly displeased when it failed on him on stage.


Is Arc a success on the same scale as Viaweb or Y Combinator?

In the academic sense ("was it a design that influenced others?") I think the answer is yes. pg's essays about Arc and the language itself got a lot of people thinking about how to improve Lisp. Rich Hickey, creator of Clojure, was influenced to some extent by Arc.

In the practical sense ("is the community active and thriving?") I think the answer is no, so far. #arc on freenode is dead quiet and nearly empty, and there are only 20 new posts on arclanguage.org in the last 40 days. There's nothing wrong with that. It's just not "on fire", that's all. Not yet, anyway.

But it's only fair to give it time. pg said:

"Number one, expect change. Arc is still fluid and future releases are guaranteed to break all your code. It was mainly to aid the evolution of the language that we even released it."

So it's not surprising that an active, thriving community of library creators hasn't sprung up yet.


I prefer it to Scheme or Common Lisp. But while I've spent a decent amount of time on news.arc, I haven't spent much lately on the underlying language.

It turns out I can do a maximum of 2 things at once. I can't work on YC, writing, and hacking. And since YC is a given, that means I have to choose between writing and hacking. Over the last couple years I've mostly chosen writing, but that might change.

Which reminds me, I really should do a new release of news.arc. It's significantly better now.


I'll be interested to see if having a child increases your capacity for simultaneous serious projects. It had that effect on me. I think it does that because you're forced to do lots of context switching, and eventually get better at it.


So far that hasn't happened.


It turns out I can do a maximum of 2 things at once.

Huh. That reminds me of a rule I figured out a long time ago: you have two cards to play. As in: Job, school, family, startup, pick 2. (Though startup really wants both.)


I was just thinking about how much faster HN has been lately. It's pretty rare to get a timeout, or even have to wait for a page to load. It's one of those things that I guess you don't really notice until the other sites you visit start getting slow again (ahem, reddit).

Anyways, thanks for keeping it fast!


Has any of the news.arc goodness trickled down into arc.arc? It'd be cool to see arc3.2.


"If we believe something about the world, we are more likely to passively accept as truth any information that confirms our beliefs, and actively dismiss information that doesn’t."

This is important. Darwin, on his journeys, was very strict on himself about noting down any information that seemed to contradict his theories, because he knew that the natural tendency of the brain is to turn a blind eye to any such inconvenient facts.


