Hacker News: ethanpoole's comments

At the University of Minnesota, those with degrees from the university can audit courses at a greatly reduced rate. I'm not sure whether this applies to online courses, but it's very likely that it does. During my undergraduate education there, I encountered many such post-degree students; they were quite a joy to have in class.


That's awesome. Question: Could those post-degree auditors get any credit for those classes? Or was it simply for the content?


It's simply for the content, not credit. I assume that if they wanted to get a second degree, they would have to reenroll. I should point out that university employees could also take classes for free, while earning credit; though there have been some cuts to this program recently, if I recall.


> If a computer's handling of a language can vary so drastically, imagine how the human brain must deal with expressing or receiving complex thoughts through these.

Orthography and language are not the same thing. Having a complex orthography has no bearing on processing complexity. If a language has a complex orthography, it may be hard to read and write, which are learned skills, but it won't be hard to speak and listen (which are naturally acquired skills given exposure to a language in the critical period).


That assumes that each of these properties exists in a vacuum, which is surely not the case. There's an obvious interaction between written and spoken language, and even without that, hundreds or thousands of years of differences in written interaction and in the data transferred through reading within a culture could have an immense impact on collective experiences and ways of thinking.


These are all complaints about orthography, not language.


Foreigners are taught to use the -vat form when building the conditional in order to get the correct consonant gradation. It's not relevant for juoda because the stem is juo-, with no consonants.


Considering that there is so much evidence against linguistic relativity, I do not understand why these types of stories keep appearing on HN.


I agree that the public obsession with this idea is strange. There is probably still room for debate and further research, but public credulity for this idea far outstrips any possible reality.

If you have studied linguistics much at all, you know that the strongest form of the hypothesis that language governs human perception has been "debunked". And it is appropriate to say debunked rather than disproved because of the way that the public latches on to the idea again and again without being presented with any convincing evidence, just little anecdotes and lazy science journalism.

I don't think the poster above is responsible for searching the Internet for everyone else to validate his claim, which is fairly easy to do. Here is a starting point:

http://www1.icsi.berkeley.edu/wcs/

One can also look into the Sapir–Whorf hypothesis.

Again, further research into this type of thing is justifiable, but the evidence so far has been pretty convincing that linguistic relativity only exists in a fairly shallow sense, if at all. We don't live in Samuel Delany's Babel-17 universe. Objective reality, biology and even cultural influences will likely be seen to trump linguistic relativism again and again as factors that govern our perception of time, space, color and quantity.


The only reason that I didn't provide links in my original post is because I have had this discussion over and over. I get tired of having it.


What? What about the studies that show that reading Arabic literally uses different parts of your brain than reading other languages does, or even the evidence presented in the article?

Even intuitively, it would make sense that different languages/cultures might distinguish or understand different concepts differently...since this is HN, imagine asking a trial of 100 people who have programmed in nothing but COBOL their whole life to explain monads or continuations intuitively, and how they'd use them in practice, compared to a trial of 100 who understood Scheme and Haskell. I think the results would be pretty obvious. Why would structures and concepts in human language be any different?

I really don't understand the political posturing behind denying linguistic relativism (not that you're necessarily doing that, of course).

src: http://www.bbc.co.uk/news/health-11181457


Reading and language are not the same thing, e.g. you can speak a language without being literate. You might therefore expect different writing systems to use different capabilities. Of course, "different parts of your brain" does not mean different capabilities. They could be the same capabilities instantiated differently in the brain.

It may very well be that speakers of a language with feature A can perform some task better than speakers of a language without feature A. However, it does not mean that the speakers have different cognitive capabilities (e.g. you can provide sufficient task training), which is the ultimate argument of people supporting linguistic relativity.

I cannot comment on the political motivations to deny linguistic relativism. I am just a linguist.


If a writing system doesn't count (it's a fair point, and I understand what you're saying), then how does one counter the evidence in the article that speakers of one language do a task better than speakers of a different language?

If my language is, for example, French or (older) Welsh, and my counting system is vigesimal (base 20), why is it wrong to claim that a person who speaks French would have an easier time with the basic idea of base 20 math, or just counting in terms of 20's, than a native English speaker, assuming no extra training on the part of either? Why is it wrong to assume that a language with a case system would yield native speakers who were better at explicitly identifying the subject and object of a sentence than speakers of languages that don't have a case system?

I just have a hard time grasping how people conditioned to think a certain way via their (or, a) language couldn't have a better understanding of some concepts than others.

>I cannot comment on the political motivations to deny linguistic relativism. I am just a linguist.

Sorry if you felt that I was pigeonholing you, thanks for your input.


How does French qualify as using a base-20 counting system? Intuitively, I'd expect that to mean

- special words for the numbers 1-19

- special words for 20, 40, 60, 80, 100, 120, ..., 380

- special words for 400, 8000, 160,000, etc.

Instead, what we actually see in French is

- special words for the numbers 1-16

- special words for 20, 30, 40, 50, 60, and 80

- special words for 100, 1000, 1,000,000, etc.

This is a base-10 system (with the odd quirk), as far as I can see.


It is actually referred to as a "base 20" counting system, although I agree that it isn't a very good one. It's definitely not a "pure" base-20 system, but many numbers are expressed as a multiple of twenty plus a remainder.
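To make that "multiple of twenty plus a remainder" pattern concrete, here's a toy Python sketch of how French names the numbers 60-99 (the function name and decomposition are my own simplification, not a full French numeral speller):

```python
def french_style(n):
    """Decompose 60 <= n <= 99 the way French number names do.

    75 -> soixante-quinze (60 + 15); 91 -> quatre-vingt-onze (4x20 + 11).
    """
    if 60 <= n <= 79:
        return (60, n - 60)  # "soixante" plus a remainder of 0..19
    if 80 <= n <= 99:
        return (80, n - 80)  # "quatre-vingt(s)" plus a remainder of 0..19
    raise ValueError("this sketch only covers 60-99")

print(french_style(75))  # (60, 15)
print(french_style(91))  # (80, 11)
```

So 70-79 and 90-99 lean on a base-20 remainder (soixante-dix, quatre-vingt-dix), while the rest of the system stays decimal.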

Perhaps a better example of a feature in a language that would provide more prominent cognitive differences would be one of the languages that requires users to mark a statement based on how they know the fact (witness, second-hand account, etc.), whose names escape me. Ah well.


> Even intuitively, it would make sense that different languages/cultures might distinguish or understand different concepts differently...since this is HN, imagine asking a trial of 100 people who have programmed in nothing but COBOL their whole life to explain monads or continuations intuitively, and how they'd use them in practice, compared to a trial of 100 who understood Scheme and Haskell. I think the results would be pretty obvious. Why would structures and concepts in human language be any different?

I'm not a linguist, but have recently started to become familiar with parts of it. I don't think you can reasonably make the comparison between engineered computer languages and our innate language abilities. There's ample evidence for a universal grammar, and that languages are not engineered so much as match patterns we're wired to recognize and be able to manipulate. I can stretch your analogy only as far as recognizing the different grammatical constructs that different languages employ (e.g. English is subject-verb-object, and native English speakers at first have trouble understanding and reproducing object-subject-verb constructions but eventually get it).

Human languages have nothing truly comparable to the situation you describe. Just because computer languages are called "languages" doesn't mean they actually map to human languages.


I was using the computer languages as a facetious example to illustrate a point, the idea being that a context one is immersed in colors perceptions, and I was simply stating that applied to a human language, I don't see why that wouldn't be the case.

There is actually quite a lot of evidence against a universal grammar (http://en.wikipedia.org/wiki/Universal_grammar#Criticisms), and I don't understand how one could even begin to say that everyone necessarily develops the same grammatical concepts to any useful degree of practicality. The idea of a subjunctive tense only exists implicitly in English, yet is prevalent in many Romance languages (and others). In Ancient Greek, the concepts of singular and plural were joined distinctly by the concept of a pair, which currently exists in no language that I can think of (and certainly isn't popular if it exists). In English, nouns are not explicitly marked to indicate their role in the sentence (that's left up to ordering and ultimately subjective, implicit interpretation), which is easily taken care of in other languages like German, Russian and Hungarian with a case system. Hell, English doesn't even have a stand-alone future tense (i.e., a verb conjugation)...how exactly can we make any argument for a universal grammar when there exist so many grammatical concepts in other languages that aren't actually possible in any explicit sense even in English? If I've misunderstood your usage of the phrase "Universal Grammar", I apologize for my ignorance.


A common paradigm in the 1990s version of universal grammar (I have not kept up) is the principles and parameters framework. In this framework, the universal grammar consists of a collection of principles (such as "syntactic movement is always sensitive to hierarchical structure") that can be configured via a collection of parameters (traditionally, these were thought of as binary or ternary switches). A summary of this framework is available at http://web.uconn.edu/snyder/papers/CELS.pdf .

The theory is that the principles of universal grammar arise directly from the structure of the brain, while the parameters are set during language acquisition.

From the perspective of an adherent of the principles and parameters variety of universal grammar, the examples of cross-linguistic variation you mention are not all that radical. A much stronger challenge to universal grammar is the existence of the Pirahã language, which (it has been argued) cannot express recursion, which would otherwise be a linguistic universal.


I'm a guy who likes abstract stuff, and that definition seems extraordinarily abstract to me, to the point of obfuscating the actual meaning of what we're talking about. Am I right that the definition of grammar there is an almost-meta abstraction of structure?

I still can't help but believe that, in practice the 'less radical' points I mentioned, like tense, mood and case (which I think would be switches, right?) arise differently in different contexts, if at all, owing to differences in cognition in various groups, and thus reinforces that mode of cognition in someone who learns the language.

The difference in morphology between, say, Ancient Greek and Modern English is pretty drastic, so what gives? In fact, I don't think English has ever been as morphologically complex as Ancient Greek, and yet some hypothesize that they even descended from the same Indo-European language...if both languages start out from the same point but evolve with completely different grammatical structures that make different concepts explicit or notable, why would that happen in light of a true universal grammar that is inherent to everyone?

It's one thing to say that everyone at a certain intellectual level has the capacity to understand and use certain grammatical constructs, but to claim them as inherent to all people seems sketchy...

>The theory is that the principles of universal grammar arise directly from the structure of the brain, while the parameters are set during language acquisition.

Wouldn't that prove that language inherently affects cognition, and thus proves linguistic relativism? I don't think you actually made a point in favor of linguistic relativism, but that was the theme of this whole discussion thread.


English and the protolanguages preceding it certainly were morphologically complex. English, though, has gone through a couple-three near-pidginizing events along the way that have made it a much simpler, more telegraphic language than it, in most ways, should be. Some aspects of the Celtic languages that were on the ground before English hit Great Britain have worked their way into the grammar (present progressive and do-support). The evidence seems to show that much of the inflective complexity was on its way out after the Vikings made the scene (the languages, Old Norse and Old English, were close enough to allow a degree of mutual comprehension, but differences in case markings, verb endings and the always-arbitrary grammatical gender made a hard-enough task more difficult than it needed to be). And one can hardly say that the influence of Norman French on the language was slight. Remember that the purpose of language is to communicate, so the long-term linguistic homogeneity of the group largely controls how complex the language can become. "City" languages tend to be much simpler (on average) than "forest" languages because outsiders rarely have to learn that 700-speaker language.

The surface grammar — what actually comes out of the speaker's mouth — doesn't directly reflect the underlying grammar/syntax, though. The 25-syllable Cree word will tree in pretty much the same way as the 18-word English sentence it replaces. The tonal variations a native English speaker uses convey the same information as the pragmatic particles a Mandarin speaker has to include at the end of every utterance. (We literates are often prone to forgetting that these squiggly marks are to language what choreographic notation is to dance or staff notation is to music.) We might not always be aware of the mechanics of including things like evidential marking, ergativity or pragmatics, but we do it nonetheless. And we've replaced (for the moment, at least) most of our tense and case markings with entire words. The syntax is structurally the same; we just add the word "from" instead of adding the ablative case ending. And really, does anybody actually need grammatical gender? (See Twain's The Awful German Language for a reductio ad absurdum.)


Well, I was making the point that, at some point, Ancient Greek was more morphologically complex than any variation of English (which is true to my knowledge).

I agree that we can accomplish specific meanings without explicit grammatical structure, but the rabbit hole we were tumbling down was the simple idea that the prominence of a grammatical construct might facilitate a different kind of thinking than in a language that doesn't contain said construct.

Grammatical gender is a rarely-useful concept, I agree.


> Wouldn't that prove that language inherently affects cognition, and thus proves linguistic relativism?

No, it wouldn't. It's not the language that is affecting how the brain works here; it's the other way around. The theory of universal grammar can be viewed in part as an explicit rejection of linguistic relativity.

> I don't think you actually made a point in favor of linguistic relativism, but that was the theme of this whole discussion thread.

I was responding to your comment:

> If I've misunderstood your usage of the phrase "Universal Grammar", I apologize for my ignorance.

I had thought you might be interested in more information on what "universal grammar" is supposed to mean. I was not attempting to take a position on linguistic relativism.


> No, it wouldn't. It's not the language that is affecting how the brain works here; it's the other way around. The theory of universal grammar can be viewed in part as an explicit rejection of linguistic relativity.

I thought you said certain parameters were set during language acquisition, which would mean that the language inherently modifies how you think.

> I had thought you might be interested in more information on what "universal grammar" is supposed to mean. I was not attempting to take a position on linguistic relativism.

I understand, thank you.


English has a subjunctive, but it's only ever visible in the 3rd person singular.

While English may not have a dual number, a case system (anymore, though it's still seen vestigially in the pronouns), or a verb conjugation expressing the future tense, the point is that English speakers are not constrained in what they are capable of expressing.

For the dual number, we simply say "two cows" or "both cows". To express the future, we have "will", "be going to", or a present (progressive) future construction like:

"We play tennis on Saturday" / "We are playing tennis on Saturday"

Whereas other languages use case to express certain semantic/grammatical relationships, English fills this void with word order and a slew of prepositions.

So at worst English speakers expend a few extra syllables on certain ideas. But there is no limitation on expression or cognition, which is the claim made by a strong version of Sapir-Whorf.


I definitely do not buy the strong Sapir-Whorf at all.

I do not believe someone is explicitly constrained by the language, but I do believe that the explicit existence of certain concepts can facilitate a different kind of thinking more naturally than in other languages, which is the only real point I'm trying to back up.


> The idea of a subjunctive tense only exists implicitly in English

I can't say I understand what you mean by implicit, but this is a very strange thing to say. The following sentences use a subjunctive verb:

1. I demand that you return my saw.

2. I will ask that he not show overt disrespect in my class.

As you can see, the English subjunctive form (subjunctive is not a tense; it is a mood) is productive and applies to any verb in the language (and is different from the present tense for every verb in the language), which places it on roughly the same level of existence as, say, the past tense.


There are vestiges of the dual number in many Slavic languages, and it's alive and well in Slovenian.


I had no idea it was actively used in Slovenian...I think even Modern Greek has "vestiges" of the dual, though.


Mark Baker's _The Atoms of Language_ and Steven Pinker's _The Language Instinct_ talk about the evidence against linguistic relativism. There is also a debate between Mark Baker (I believe) and some psychologist published online, but I couldn't find the link. I don't know of any other popular literature on the topic.


Do you know enough to be able to write a great rebuttal article? Or to link to a great rebuttal article written by an expert?

I'd like to read that.


I agree. The article is titled "How Language Seems To Shape One's View Of The World", whereas the scientific consensus for the past half century has been the opposite, and very little evidence has come to light since to change this. The beginning of the article is given to the ideas of a heterodox associate professor at UCSD. Further down in the article, John McWhorter affirms the orthodox view (his speciality is slightly tangential to the question).

There is a mountain of evidence supporting the orthodox view, and the Boroditskys, the Daniel Everetts, etc. have not come up with much conflicting evidence or compelling alternative hypotheses. Is there any prominent scientist in this or a related field who subscribes to these Sapir-Whorf type theories? The heyday of this idea was before World War II, and the alternative theories may get play on NPR and the like, but not within the scientific community. Until, as I said, compelling evidence to the contrary is seen.


But the article and the whole hype are not purely about the science of linguistics. While I do think that randomly quoting some not-so-relevant research is a bit discomforting, it's still a super interesting topic. Almost anyone who knows at least two languages very well will notice interesting differences in using them. Just as in that Nabokov anecdote. It's not necessarily a statement about some inherent properties of a language. It's something many individuals consistently experience: that using different languages yields different results even though the goal might be the same. I myself experience that almost every day. It raises interesting questions. I don't know anything about linguistics, much less about what linguistic relativity means. But I would love to find explanations for this phenomenon. And I guess that's where the popularity comes from?


care to share some of that evidence with the rest of us?


It falls somewhere around psycholinguistics or cognitive science if you squint hard enough while looking at it, so then it becomes interesting.

Once someone feels they've mastered the hard sciences, the squishy sciences often become new, seductive mistresses.


Although you are right to be skeptical of this claim because near-native competency is always possible, L2 competency (learning a language as a foreign language) will always be less than L1 competency (native). For instance, there are grammatical constructions with very low input frequency about which native speakers have strong intuitions, which L2 speakers will not have. The difference in L1 and L2 intuitions may be remarkably subtle, often semantic/pragmatic, but you don't have to work in the linguistic community for long to come across them.


The difference between L1 and L2 language competency is deeper than whether you acquired multiple languages in "parallel" or "serially". The competency level, whether you are fluent or near-fluent, is based on the critical period, which ends roughly at the onset of puberty. Nothing rules out acquiring multiple L1s in serial, which I suspect to be the case in many instances of multilingual children.

Moreover, the author's anecdotal support is in contradiction with the fact that children do acquire multiple languages largely without problem. There are effects of one language on the other, but they are different in nature from, and far less frequent than, crossover effects in L2 language learning, e.g. learning Spanish in high school.


The relationship between language and thought is unclear. To be polarising: psychologists believe that language influences thought/culture, and linguists believe that thought/culture influences language. There is evidence to support both conclusions, but most of the debate in popular media focuses on examples that are severely misinterpreted. For example, if Language A does not distinguish green and blue, it does not necessarily mean that the speakers of Language A cannot distinguish them, but only that Language A does not distinguish them linguistically. A more concrete example: many languages, such as Chinese, lack linguistic tense (e.g. past, present, future), using instead aspectual markers (e.g. perfective, imperfective), but this does not mean that Chinese speakers cannot understand temporal relations (nor that they could not still express such notions linguistically). Similarly, Russian has separate words for light blue and dark blue, but English speakers can still express these two distinct shades despite not having two separate, distinct words. In short, one has to be extremely cautious about making such broad generalisations without fully understanding the empirical data and, ideally, linguistic theory. See the debate in the Economist between Boroditsky and Liberman for more on this topic: http://www.economist.com/debate/overview/190


Exactly. I used to hear people say "Eskimos" have some double-digit number of words to describe different kinds of snow, which makes sense if you realize they live in snow. On the other hand, for many purposes English, as well as other languages, has ways to describe that snow; it just doesn't have one-word nouns for it. It's not as though we'd be dumbfounded finding a way to describe this snow. Now, since it's not every day we see snow, yeah, we might not at first glance find it apparent that there is a difference between fresh snow and day-old snow (a bit of melting, evaporation), snow drift, snowpack, falling snow, fallen snow, etc.

With regard to grammatical gender in German, I was told that it was futile to try to derive grammatical gender from the attributes of objects, that it was pretty random for the most part. Mark Twain once remarked, "In German, a young lady has no sex, while a turnip has."


The Inuit snow thing is actually a common misconception, and, like you mention, it comes from a difference in our understanding of what a 'word' is. The Inuit language typically uses suffixes to modify nouns. Also, the Inuit language can use one 'word' for what would otherwise be a phrase in English.

Wiki has more on it: http://en.wikipedia.org/wiki/Eskimo_words_for_snow


Reading Chaucer is a good exercise for looking at how words are used. In "The Canterbury Tales" the word "horse" is rarely (if ever, I'm not sure) used; instead, very specific words are used to describe the type of horse in question: palfrey, charger, etc.

In the modern world, we are the same with motorized vehicles. I drive what my wife likes to call a junker, beater, jalopy, hoopty car.

We make the distinctions that are important to us; that doesn't mean that we aren't capable of making finer distinctions if we need to.

It's like Huffman encoding...
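To make the analogy literal: Huffman coding assigns the shortest codes to the most frequent symbols, just as languages give their most common distinctions the shortest words. A quick Python sketch (the tiny "corpus" is made up for illustration):

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: code length in bits} for a Huffman code over freqs."""
    # Each heap entry: (total frequency, tiebreaker, {symbol: depth so far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

words = Counter("car car car car snow snow sleet".split())
print(huffman_code_lengths(words))  # the most frequent word, "car", gets the shortest code
```

The frequent concept ends up cheap to say; the rare ones cost more bits (or syllables).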


> To be polarising, psychologists believe that language influences thought/culture and linguists believe that thought/culture influences language. There is evidence to support both conclusions(...)

I sometimes get the feeling that people in such debates don't understand the concept of feedback loops. It is entirely all right for language to influence culture and to be influenced by it at the same time. And it's not even that mind-blowing or difficult to reason about, one just needs to step one level higher on the ladder of abstraction.


I completely agree. For the most part, people do accept the bilateral influence between culture and language. However, the debate generally focuses on the relationship between language and thought/cognition/mind (with culture mistakenly lumped in there) where bilateral influence doesn't make as much sense, at least intuitively. Consequently, the language-culture and language-thought relationships should be analysed separately, although not in isolation of course.


If you casually asked a native Chinese speaker "are leaves and the sky the same color?" would they answer differently than if you asked a native English speaker? If so (i.e. if Chinese speakers tend to say Yes while English speakers tend to say No), then I would argue that there's something more going on than simple hue labels, since the labels themselves aren't being asked for or provided.


Chinese speakers don't use 青qing(blue/green) in daily speech, only 蓝lan(blue) and 绿lu(green). The crossover where blue becomes green is different for Chinese speakers than for English speakers, though.

If I see a Chinese person wearing what I call "aqua-green" clothing, I often ask them "What color sweater are you wearing, blue or green?" They always answer "blue".


Even if a Chinese speaker casually considers the leaves and the sky to be the same colour, it does not mean that s/he could not make the distinction if prompted. It would be similar to asking an English speaker to distinguish different shades of the same colour. For example, historically, English lacked a word for "orange" (which was borrowed from French), such that "red" and "yellow" each covered a greater spectrum. Clearly, it wasn't that English speakers couldn't distinguish red, yellow, and orange, because modern-day English speakers can do so perfectly.


The problem with that test is that the notion of "same" is too fuzzy. If you asked me "are leaves and traffic lights the same color?", I might answer yes. After all, they're both green. On the other hand, I might say no, since one is dark green and the other is bright green. To test if something is going on mentally, it'd be better to take my personal opinions out of the equation. For instance, show a traffic light colored square on a leaf colored background, followed by a sky colored circle on a leaf colored background.


Absolutely. That's why I mentioned that it was a casual question. Since everyone presumably has similar hue thresholds regardless of language, I was wondering if language would affect their answer, according to whatever definitions of "color" and "sameness" they themselves use.

In English, if you casually ask the color of the sky or the color you would use for water on a map, most people will simply say "blue" even though they could probably detect hue difference if presented printed samples of the two. Accordingly, I think most English speakers would say Yes if casually asked whether the sky is the same color as water on a map. But is that the same response and reasoning you would get if you asked a speaker of a language that doesn't distinguish between blue and green whether the sky and leaves are the same color?


I don't think so. I think it would be similar to asking someone if the sky was the same color as the Blue Angels planes flying around the same sky (on a regular blue-sky day).


> if Language A does not distinguish green and blue, it does not necessarily mean that the speakers of Language A cannot distinguish them, but only that Language A does not distinguish them linguistically.

True, but just to be clear: the first link in the article shows how people speaking languages with no blue/green distinction are significantly slower at selecting the 'odd one out' when given a set of blue shapes and one green shape. Furthermore, this effect is only significant in their right visual field, whose images are processed by the left (more language-adept) part of the brain.


Yes, but this evidence does not necessarily say very much about language or the relationship between language and thought. For example, if a culture does not teach its children to swim and therefore the culture at large does not know how to swim, those people could still swim, albeit slowly. Does this say anything about their bodies? No. If some sort of conditioning by one's culture is required to distinguish colours (which seems to be the case), this is very interesting, but it says very little about language.


That's quite an interesting "debate". Liberman concedes the entire thing in his first sentence!


If you're interested in natural language processing (NLP), but don't have a linguistics background, I would suggest reading Steven Pinker's The Language Instinct. It will introduce you to the necessary terminology and concepts for NLP in an easy-to-digest way. (The NLTK book has been free online for quite some time as well.)


The Language Instinct is a great book, but unless the content of newer editions has changed significantly, it's more of an overview of linguistics in general, and language acquisition in particular. There's not much -- if any -- practical NLP. For example, looking at Amazon's statistically-improbable phrases (SIPs), I see nothing related to NLP, nor do I see any terms related to practical NLP during a quick glance of the book's index. The index includes references to a few pages on "statistics of language", but I honestly don't remember what those were about.

Also, The Language Instinct was written in the early 1990s, so although new editions have been released, it's a bit dated. It's a classic read for linguists and people interested in language, but I wouldn't recommend it as an introduction to NLP.


Oh no, I didn't intend The Language Instinct to be an introduction to NLP, but a basic introduction to the fundamentals of linguistics. I mainly had in mind terms like morpheme, phoneme, scope, etc. A basic understanding of these concepts will make reading the NLTK book much easier, although it isn't necessary.


Ok, that makes more sense. I read your comment as being a recommendation of The Language Instinct to learn the fundamentals of NLP. But point taken, it's a good book about language.


No. From what I understand, any material caught in the accretion disk is merely funnelling down through the black hole's event horizon toward the centre. It will not affect any orbiting bodies, such as our solar system, but it will make the black hole pseudo-visible to astronomers because the material in the accretion disk will emit some light.


It will add mass to the black hole though, right? So that will affect orbits, no? Or is that just so infinitesimally small that it can reasonably be ignored?


The mass is already roughly there. One thing to bear in mind - if you replaced the Sun with a black hole of equal mass, planets wouldn't change orbits. It's the same gravity and centre of mass, just a different density/volume.
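To illustrate the point, here's a quick Newtonian sanity check (a sketch only, not the general-relativistic treatment): the acceleration a planet feels at distance r depends only on the enclosed mass, not on how densely that mass is packed.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # Earth-Sun distance, m

def acceleration(mass_kg, r_m):
    """Gravitational acceleration toward a point (or spherically symmetric) mass."""
    return G * mass_kg / r_m**2

# The Sun as-is vs. a solar-mass black hole at the same distance:
a_sun = acceleration(M_SUN, AU)
a_black_hole = acceleration(M_SUN, AU)

print(a_sun, a_black_hole)  # identical, ~5.9e-3 m/s^2
```

Same mass, same distance, same pull; the density/volume of the central body never enters the formula.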


This is true for Newtonian gravity, but only approximately true for general relativity (or so I understand it).


This includes the effects of general relativity. As long as the mass is the same, the orbits of planets will remain the same.


According to Wikipedia[1] the converse of the Shell Theorem is (nearly) true:

Suppose there is a force F between masses M and m, separated by a distance r of the form F = Mmf(r) such that any spherically symmetric body affects external bodies as if its mass were concentrated at its centre. Then what form can the function f take?

The form of f allows Newtonian gravity but not Einsteinian.

[1] http://en.wikipedia.org/wiki/Shell_theorem#Converses_and_gen...
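If I recall the linked article correctly, the allowed solutions are a linear combination of an inverse-square term and a term linear in r:

\[
F = Mm\,f(r), \qquad f(r) = -\frac{a}{r^2} + b\,r,
\]

so Newtonian gravity (the b = 0 case) has the shell property, while general relativity's field equations don't take this form at all.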


The key point there is "spherically symmetric". The Sun isn't. It bulges around its equator thanks to rotation, as does everything. That does have effects on planetary motion; an object in an inclined orbit spends a bit more time a bit farther away from a bit of the Sun's mass. Replace the Sun with a black hole of equal mass and you don't have that oblateness. The effect on planetary orbits would be very small, so macroscopically the Solar System would still be the same, just with very slight differences in orbital speeds and periods.


Interesting. However, I would have thought that planets are far out enough that classical is accurate enough. This sort of thing is why I specified planets as opposed to orbits in general.


All that mass is already orbiting (on a galactic scale) right on top of the mass. It'll affect our solar system's orbits in the same way that the sun moving a few microns affects them, I'd imagine.


The black hole is estimated around 4 million solar masses, the gas cloud around 3 earth masses.
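A back-of-the-envelope check of just how negligible that is (using the figures above; a rough sketch, not precise astronomy):

```python
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg

black_hole = 4e6 * M_SUN   # ~4 million solar masses
gas_cloud = 3 * M_EARTH    # ~3 Earth masses

fractional_change = gas_cloud / black_hole
print(fractional_change)   # ~2.3e-12, i.e. a few parts per trillion
```

So even if the entire cloud were swallowed, the black hole's mass (and hence its gravitational influence) would change by a few parts per trillion.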


So by "spring to life" the writer meant "we may, if we're lucky, finally see a glint of something". It's very exciting, yet the writer put it in terms that will scare the living hell out of any uneducated person like me. Glad there's always HN.


I was thinking of the exact same thing.

It's ironic when a piece of writing that tries to present you with factual information begins with a deceptive title, all in the name of attention-grabbing.

