Wikipedia:Reference desk/Science



Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


February 16

Genus species

Perhaps kinda RD/L, but: is there a term for a critter for whom the genus name is also the species name? Bufo bufo, for example, or iguana iguana, or gorilla gorilla? --jpgordon::==( o ) 00:16, 16 February 2015 (UTC)Reply

Not 100% of the time, but these are usually the Type species. Some type species do not have identical species names to genus names, but AFAIK, the converse is more often true: if the names match, then it is usually the type species. --Jayron32 00:35, 16 February 2015 (UTC)Reply
See tautonym. (Not a term for a kind of critter, though, but one for that kind of Linnean binomial.) Deor (talk) 01:00, 16 February 2015 (UTC)Reply
 
[Image: Preserved display of an alligator gar head]
Here's the thread: https://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Archives/Science/2012_September_24#Black_rat_taxonomy μηδείς (talk) 04:12, 16 February 2015 (UTC)Reply
Such as Octopus octopus. Also, the great white shark is Carcharodon carcharias. Is the species name a variant on the genus name? ←Baseball Bugs What's up, Doc? carrots05:02, 16 February 2015 (UTC)Reply
In the case of the great white, its name means sharp-toothed maneater. The karkhar root is found in each word. μηδείς (talk) 05:18, 16 February 2015 (UTC)Reply
So the root karchar means both 'sharp' and 'maneater'? —Tamfang (talk) 09:14, 16 February 2015 (UTC)Reply
  • According to EO, karkhar- means "sharp", adj form karkharos, and by combination with odon- "tooth" we get Carcharodon, "sharp toothed". Carcharias is a word derived from the same root meaning maneating. An imperfect analogy would be if we called them something like "Sharptoothed sharpies".
The word seems limited to Greek, and OR implies it is either a direct borrowing from outside Greek, an imitative root, or a pre-Greek ghar-ghar > karkhar. Ghar- is not a known PIE root, although, interestingly, it would likely have become "gar" in English (a sharp-toothed predatory fish more common to the Germans than the shark).
Yet the normal derivation for the garfish is from *ghaiso- "spear".(Calvert Watkins), so this connection is my speculation. μηδείς (talk) 18:02, 16 February 2015 (UTC)Reply


horseshoe crabs and their medical products

I read about the horseshoe crabs and their medical products. I'm looking for a generic name or commercial name of their products or antibiotics. Thanks. 149.78.227.128 (talk) 02:11, 16 February 2015 (UTC)Reply

You can try limulus amebocyte lysate or "LAL test". Dragons flight (talk) 18:14, 16 February 2015 (UTC)Reply

cesium-137 Question

Hello Wikipedia

I wonder why only foreign accidents are discussed in the cesium-137 article when we have had accidents here in the USA. Santa Susana, located in California, was the site of a big accident.

http://www.latimes.com/business/hiltzik/la-fi-hiltzik-20140613-column.html#page=1 — Preceding unsigned comment added by 2602:306:BC27:E060:58E6:FE3D:6167:F8E6 (talk) 03:21, 16 February 2015 (UTC)Reply

Because you (yes YOU are to blame) didn't update the articles. Wikipedia only exists because people who care add information to it. If you find information you care about is not in Wikipedia, you literally have no one else to blame except yourself that it isn't in Wikipedia. So, we're all eagerly awaiting your additions to Wikipedia. What are you waiting for. Get on it. --Jayron32 03:34, 16 February 2015 (UTC)Reply
When someone uses terms such as "foreign" and "here in the US/USA" in a post it is certain proof that they are a victim of the delusion that this website is "usa.wikipedia.org". There is no "here" or "foreign" on WP. Roger (Dodger67) (talk) 06:34, 17 February 2015 (UTC)Reply
Please assume good faith. If I say "here in Canada" I am not under the delusion that WP is about Canada. --70.49.169.244 (talk) 14:45, 17 February 2015 (UTC)Reply

Species ID

 
[Image: Flower species??]

Can someone please identify this species? The photo was taken in Hyderabad, India. Nikhil (talk) 03:43, 16 February 2015 (UTC)Reply

 
[Image: Euphorbia milii]
Our pleasure. μηδείς (talk) 22:31, 16 February 2015 (UTC)Reply

What determines the direction of movement in a Rotor ship?

If a cylinder is oriented vertically and spinning clockwise or anti-clockwise, why wouldn't it move the ship backwards?--Fend 83 (talk) 14:37, 16 February 2015 (UTC)Reply

 
Firstly, these craft are essentially powered by the wind - like a sailing ship. They merely replace the sail with a spinning cylinder. The cylinder produces a force that's at roughly right angles to the airflow, using the Magnus effect (see that article for a diagram). Spinning the cylinder effectively speeds up the airflow on one side of the cylinder, while slowing it down on the other - and that's what creates the force at right angles to the airflow. If the wind is blowing from the left, and the cylinder is spinning clockwise, then the thrust will be forwards. If the cylinder is spinning counter-clockwise, with the wind coming from the left, then there will be a rearward thrust. But if the wind were coming from the right, the reverse would be the case. So a part of what makes these ships work is the ability to alter the direction of rotation of the cylinders depending on the wind direction. A rotor ship has to be piloted much like a sailing ship - with a keel to enable it to sail at angles closer to the wind, and a requirement to tack in order to make progress directly upwind.
If there were no wind whatever, then spinning the cylinder would have no effect (other than to cause the entire ship to slowly spin in the opposite direction!)...and it wouldn't matter which way the cylinder were rotated.
SteveBaker (talk) 15:03, 16 February 2015 (UTC)Reply
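For anyone who wants to check the geometry of the answer above, here is a minimal Python sketch (not part of the original reply; the coordinate convention and numbers are illustrative assumptions) showing that the thrust direction follows the cross product of the apparent wind and the rotor's spin vector, which reproduces the "wind from the left, clockwise spin gives forward thrust" case:

# Direction of the Magnus force on a Flettner rotor.
# Assumed convention: z is up, x is the ship's forward direction, y is to
# port (left); "clockwise" means as seen from above, so the spin vector
# points in -z.  Magnitudes are arbitrary.
import numpy as np

def magnus_force_direction(apparent_wind, spin):
    """Unit direction of the Magnus force, proportional to wind x spin."""
    f = np.cross(apparent_wind, spin)
    n = np.linalg.norm(f)
    return f / n if n else f

wind = np.array([0.0, -5.0, 0.0])       # wind blowing from the port side
spin_cw = np.array([0.0, 0.0, -10.0])   # rotor spinning clockwise seen from above

print(magnus_force_direction(wind, spin_cw))   # ~[1, 0, 0]: thrust forward
print(magnus_force_direction(wind, -spin_cw))  # ~[-1, 0, 0]: thrust backward

Reversing either the wind side or the spin direction flips the thrust, which is why rotor ships need to be able to reverse the rotation of their cylinders.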

What is the advantage of Wi-Fi over GSM?

Is it true that high-frequency electromagnetic radio waves always have higher kinetic inertia (energy) than low-frequency electromagnetic radio waves?--85.141.234.70 (talk) 16:15, 16 February 2015 (UTC)Reply

WiFi is only effective over relatively short distances, and doesn't pass through walls very well. That means that when you use a WiFi transmitter, your signal isn't going to interfere with those of people in adjacent buildings (or at least not by much). This is limiting, because you can't (for example) use your WiFi laptop to talk to your printer at home while you are at work, 10 miles away...but it does mean that a LOT of people can share that frequency band without problems. GSM on the other hand has a much longer range - which makes it useful for mobile phones. The downside is that the use of that frequency has to be heavily regulated and you can't have everyone using it at the same time.
There isn't inherently more energy in one frequency than another - it's just that the legal requirements for using those frequency bands are different, and the distance that a signal will travel depends on both the energy AND the frequency. Low frequency signals can travel very long distances with little power. High frequencies need more power to cover the same distance. High frequency signals are also capable of carrying more information than low frequencies - which is also important in telephony and computing applications.
SteveBaker (talk) 16:38, 16 February 2015 (UTC)Reply
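To put a rough number on the range difference (an illustrative sketch, not part of the original reply; the frequencies and distances are example values, and real coverage also depends on walls, antennas, and the very different power limits mentioned above), the standard free-space path-loss formula can be evaluated in a few lines of Python:

# Free-space path loss, FSPL = (4*pi*d*f/c)^2, expressed in dB.
import math

def fspl_db(distance_m, freq_hz):
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for d in (100, 10_000):  # 100 m (typical Wi-Fi) vs 10 km (typical GSM cell)
    print(d, round(fspl_db(d, 2.4e9), 1), round(fspl_db(d, 900e6), 1))
# At any given distance the 2.4 GHz signal arrives ~8.5 dB weaker than the
# 900 MHz one (20*log10(2400/900)), one reason Wi-Fi covers less ground.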
But there is more energy in high-frequency photons (E=hν, where E is energy, h is Planck's constant and the Greek letter ν (nu) is the photon's frequency). Dja1979 (talk) 17:23, 16 February 2015 (UTC)Reply
True - but what stops you from just sending more photons? There is nothing inherently more energetic about radio signals at one frequency versus another. SteveBaker (talk) 15:01, 17 February 2015 (UTC)Reply
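For readers who want the arithmetic behind that exchange (an illustrative addition; the two frequencies are just typical examples), E = hν gives both bands photon energies around a millionth of an electronvolt, far below the ~eV scale of chemical bonds, and total radiated power is set by how many photons per second you send:

# Photon energy E = h*nu for typical Wi-Fi and GSM carrier frequencies.
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # joules per electronvolt

for name, freq in (("Wi-Fi 2.4 GHz", 2.4e9), ("GSM 900 MHz", 900e6)):
    e_photon = h * freq
    print(f"{name}: {e_photon:.2e} J = {e_photon / eV:.1e} eV, "
          f"{1 / e_photon:.2e} photons/s from a 1 W transmitter")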

Many thanks to you, SteveBaker. Did I understand correctly that satellite (sputnik) Wi-Fi did not exist, and that there has only been satellite (sputnik) GSM?--83.237.241.65 (talk) 17:34, 16 February 2015 (UTC)Reply

GSM is a specific set of technologies and radio frequencies used for cell phones. Neither GSM nor Wifi existed at the time of Sputnik (1957). GSM was introduced in the 1980s, while modern Wifi dates to the 1990s. GSM reaches much farther than Wifi, but it is not designed to reach space. Sputnik transmitted at 20 and 40 MHz which are considered high frequency (HF) and very high frequency (VHF) radio and would have been detectable by many ham radio operators of that day. GSM and Wifi both operate at higher frequencies than this. Dragons flight (talk) 18:38, 16 February 2015 (UTC)Reply
Many thanks! Does Wi-Fi use GPRS, or does only GSM use GPRS? I am thinking that it is only fully useful with GPRS!--85.141.237.205 (talk) 20:31, 16 February 2015 (UTC)Reply
Wi-Fi was designed for the transmission of packet data in the form of ethernet frames (really I think most would have expected it to be generally IP which had become the dominant ethernet protocol for most purposes by the time). It has no need for GPRS. Similarly LTE or really I think most or all proposals for 4G protocols are designed to provide an all IP entirely packet-switched network. (In other words, even voice calls will always be packet switched if over these networks, e.g. VoLTE.) Nil Einne (talk) 03:20, 17 February 2015 (UTC)Reply
Thank you. What determines whether using Wi-Fi or GSM pays off (is useful or useless)? I am thinking that the software implementations of TCP/IP make the difference in usefulness between Wi-Fi and GSM, but is it the radio distance that does it?--83.237.215.228 (talk) 10:54, 17 February 2015 (UTC)Reply
In some countries GSM is billed very expensively, while attempts to establish a billing system for WiFi failed by default due to the number and range of cells. WiFi transmits more data per unit time than GSM. EDGE and 3G (UMTS belongs here) were followed by 4G (LTE), which only just covers the data volumes of older Wi-Fi based on the IEEE 802.11 standards. --Hans Haase (有问题吗) 13:49, 17 February 2015 (UTC)Reply
I understood that the phone servers are responsible for the transit of data, and that the radio cells do not do this; is that right?--85.141.239.56 (talk) 10:06, 18 February 2015 (UTC)Reply
I understood that the radio cells do not transit data without the phone servers; is that right? So logical radio cells do not exist!--83.237.244.20 (talk) 15:23, 18 February 2015 (UTC)Reply
I believe that the phone servers manage (operate) the radio cells, and that the phones (client devices) do not do that. So the phone servers always see all the phones (client devices) that are in the network. Am I right?--83.237.195.152 (talk) 19:18, 18 February 2015 (UTC)Reply
So all this makes me think that cell telephony is a simple local network.--85.141.234.3 (talk) 02:49, 19 February 2015 (UTC)Reply
It is well known that the structure and architecture of simple local networks is always determined by the servers.--83.237.209.57 (talk) 04:15, 19 February 2015 (UTC)Reply
So one way or another, the phone servers always keep count of all network clients, and the radio cells do not do this, because it is a simple local network! Whether a radio antenna could operate the radio network, I do not know.--83.237.220.88 (talk) 05:10, 19 February 2015 (UTC)Reply
Anyhow, all telephony is a network of different logical levels (of different skill levels).--83.237.212.29 (talk) 07:11, 19 February 2015 (UTC)Reply

Drinking water vs. drinking water with sodium electrolytes

Since drinking "too much water" can cause hyponatremia, would it be safer to drink the same amount of water but with sodium electrolytes added? Will that still be enough to kill a human being, or will the electrolytes stave off death by keeping the body in isotonic equilibrium? How much is "too much" then, if electrolytes had been added? 66.213.29.17 (talk) 18:16, 16 February 2015 (UTC)Reply

There is no type of water that can be consumed in infinite quantity (obviously), but adding some sodium will increase the amount that can be tolerated. Adding some potassium and a bit of glucose will increase it even more. The optimal result: Gatorade! (Approximately.) Looie496 (talk) 18:38, 16 February 2015 (UTC)Reply
You still haven't discussed the mechanism that prevents consumption of massive quantities per unit time or what the limit is. So, my question still stands. 66.213.29.17 (talk) 19:08, 16 February 2015 (UTC)Reply
The mechanism is basically that the body maintains differences in ionic concentration between the interior and exterior of cells -- high potassium inside, high sodium outside. Those concentration gradients are crucial for cellular function. If they break down, the cells in the body die. Our membrane potential article describes the mechanism in more detail. Looie496 (talk) 19:53, 16 February 2015 (UTC)Reply
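As a concrete illustration of those gradients (an added sketch using typical textbook concentrations, not anything patient-specific), the Nernst equation from the membrane potential article can be evaluated in a few lines of Python:

# Nernst equilibrium potential E = (RT/zF) * ln([out]/[in]), in millivolts.
import math

R, F, T = 8.314, 96485.0, 310.0   # gas constant, Faraday constant, body temp (K)

def nernst_mV(conc_out_mM, conc_in_mM, z=1):
    return 1000 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

print(f"K+ : {nernst_mV(5, 140):+.0f} mV")    # about -89 mV
print(f"Na+: {nernst_mV(145, 12):+.0f} mV")   # about +67 mV
# Hyponatremia dilutes extracellular Na+, shifting these potentials and
# driving osmotic water movement into cells (notably in the brain).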
What if you drink large quantities of water+sodium+potassium per unit time? Can't ligand-gated and voltage-gated sodium and potassium channels just allow the ions to permeate through the membrane, creating an equilibrium? If it's possible to sustain equilibrium, then can you drink massive quantities? 66.213.29.17 (talk) 22:23, 16 February 2015 (UTC)Reply
Well, if you managed to avoid any ion problems, the next problem would be the physical space the water takes up. That is, the stomach can only hold so much water, and the body can only process water into urine so quickly. For most people, trying to drink more water than their stomach can handle would result in vomiting. For those who lack a functioning regurgitation response, it might actually be possible to rupture the stomach and kill themselves in that manner. StuRat (talk) 22:31, 16 February 2015 (UTC)Reply
No, with Gatorade you will lose too many nutrients to barfing! --Stephan Schulz (talk) 18:48, 16 February 2015 (UTC)Reply
Actually the optimal result is more like Pedialyte, which is used for treatment of severe diarrhea, especially in children. Looie496 (talk) 19:01, 16 February 2015 (UTC)Reply
BTW, you would normally get the electrolytes you need, to go along with your water, by eating food that contains them. It's only drinking lots of water without eating (or without eating much) that is likely to cause an electrolyte imbalance. In fact, the typical Western diet is so heavy in salt, that you aren't likely to suffer from any lack of sodium from drinking water. However, exercising in hot weather is a typical case where you drink lots of water (to replace the lost sweat), without eating, so do need to worry about the loss of electrolytes. StuRat (talk) 18:58, 16 February 2015 (UTC)Reply
I've never had any problems with Gatorade. If Stephan does, he should consider talking to his doctor about it. ←Baseball Bugs What's up, Doc? carrots19:44, 16 February 2015 (UTC)Reply
Four gallons of water will kill an adult drunk at one sitting. Adding electrolytes helps, but not much, since the body already compensates. The problem is that the water cannot be passed quickly enough. The idea of Pedialyte (a wonderful thing) is to replace the lost water as it is lost, not to prevent one from drinking it to the point of causing water toxicity. μηδείς (talk) 22:30, 16 February 2015 (UTC)Reply
Medeis, what do you mean? "...kill an adult if drunk at one sitting...", or does water intoxication occur more easily in those already impaired by alcohol intoxication? Nyttend (talk) 04:37, 17 February 2015 (UTC)Reply
Drunk is the adjectival past participle of drink. "Drunken" is a restricted adjective meaning having drunk an intoxicating amount of alcohol, which is not relevant here--at least in my dialect. E.g. the "drunken sailor" versus the amount of water that had been drunk. μηδείς (talk)
Okay. The only difficulty was the noun use of "drunk", e.g. "there's a drunk, lying in the gutter". Nyttend (talk) 04:52, 17 February 2015 (UTC)Reply
Well, the point is to replace the lost water and electrolytes. StuRat (talk) 23:11, 16 February 2015 (UTC)Reply
If one were able to maintain tonicity through constant imbibition, there'd be no need for IVs and saline with glucose. I am tempted to say we should offer a reward to see who can drink the most hillbilly gatorade without dying, but that seems unethical, and a Google search on the topic is not helpful. μηδείς (talk) 01:43, 17 February 2015 (UTC)Reply
Well, most of us don't need IV's most of the time. That's only for when we can't drink normally, or when they are administering other meds. StuRat (talk) 03:05, 17 February 2015 (UTC)Reply
Or when you've lost a lot of blood but not enough to warrant blood transfusion or have gotten dehydrated to the point where it's deemed a necessity. At least in Israeli hospitals. (Voice of experience, sadly) Sir William Matthew Flinders Petrie | Say Shalom! 28 Shevat 5775 05:15, 17 February 2015 (UTC)Reply
Wouldn't those fall under the category of "when we can't drink normally" ? StuRat (talk) 16:39, 18 February 2015 (UTC)Reply
How could a human stomach hold four gallons of water? Can it stretch that much? And keeping in mind the weight also, which would be like 32 pounds. ←Baseball Bugs What's up, Doc? carrots02:57, 17 February 2015 (UTC)Reply
You're not thinking this through. If it remained in the stomach, it wouldn't be a problem would it?! Water doesn't stay in the stomach for long - it heads through the intestines, where it's rapidly absorbed into the blood...and THAT's where the problem occurs. So stomach capacity doesn't enter into it. The only route for water to emerge as urine is via the blood, and through the kidneys - and that's where the problem occurs. SteveBaker (talk) 14:56, 17 February 2015 (UTC)Reply
That case was 2 gallons of water plus 2 gallons of gatorade - so it doesn't answer the question of whether gatorade alone would have helped...for that, you need my next response.
Gatorade and Pedialyte are not sufficiently balanced to prevent problems. Our article has a reference that proves the exact point [1]:
"Both before and during the game, Wilbanks drank Gatorade and Pedialyte, beverages with sodium concentrations that are higher than in water but lower than what is naturally found in the body".
...the problem being that there aren't sufficient electrolytes even in those supposedly balanced drinks to prevent problems. In this case, the kid's brain swelled up and caused his tragic death. My bet is that if you had a solution with sufficient salts and sugars in it to prevent the problem, it would be hard to keep it down without vomiting it back up again....but that's just a guess. SteveBaker (talk) 14:50, 17 February 2015 (UTC)Reply

February 17

Modern battering rams

Let's say you take a modern battering ram to a locked metal door and break it in: how does it work? Moments after I took this picture, the firefighters did exactly that; I assumed that it was a simple wooden door, but the neighbors standing next to me said that it was metal. My uninitiated impression is that it would just bounce off, or simply cause the door to buckle and not fit in the doorway anymore (if you cover something with a piece of aluminium foil and then poke it in the middle, it no longer covers everything), but in this case, the door appeared to swing open as if they'd used a key and turned the knob. Nyttend (talk) 04:28, 17 February 2015 (UTC)Reply

We have the articles Battering ram#Modern use and Door breaching, which cover this a bit, although perhaps not the details of your question. I'm not sure about firefighters, but AFAIK, whenever I have seen videos or pictures of real-life or training (i.e. not fictional) situations of battering-ram door breaching by SWAT-like teams or special forces or whatever, they generally use the battering ram near the lock, which I presume increases the likelihood of breaking it. See for example the picture in our article or [2] or [3] [4]. Or even these failures [5] [6]. Or perhaps even this [7]. Note, whether special forces or firefighters, they probably don't care if the door itself breaks; the point is that the lock is, I imagine, generally a weak point compared to the door itself, or even the hinges. (Our article does mention that either the lock or the hinges are generally the target locations for ballistic breaching, i.e. using a firearm of some sort, with shotguns and the lock generally being the preferred options.) If the lock itself doesn't break and they have to break the door to some extent, then it would seem that targeting another location usually wouldn't have helped. In fact their attempts to breach the lock may break enough that they can try to use a crowbar or some other tool to help them breach, as the failures somewhat show. (I imagine, particularly for firefighters who, unlike SWAT and special forces, aren't often faced with doors intentionally designed to be hard to breach, the percentage of times when the lock breaks, the frame holding the lock breaks, or the door otherwise opens at the lock is high. Even for SWAT and special forces teams, I would guess they would normally try to prepare if possible and figure out whether their battering ram is going to work or whether they should use some other method.) Nil Einne (talk) 06:57, 17 February 2015 (UTC)Reply
Thanks for the detailed response; I'd seen the battering ram article (I read it immediately before coming here), but not the door breaching article. I hadn't really considered the lock itself (I'd really only considered ancient/mediaeval ramming, which splinters wood and cracks masonry), thus the confusion. Nyttend (talk) 07:23, 17 February 2015 (UTC)Reply
It's all about defeating the weakest point. The door itself is unlikely to be that - generally, the place where the bolt passes through the frame is easily the weakest point. So applying a lot of force for a brief amount of time (which is what a battering ram does) is likely to fracture the material on the far side of the bolt. This is why modern dead-bolts come with 6" long screws that go through the striker plate and into the largest, heaviest pieces of timber in the door frame. Those screws are what have to be defeated here - the screws either have to bend or break, allowing the wood holding them in place to splinter and the bolt to pop out. Probably the best defense against that kind of effort to open the door would be one that opens outwards because then the entire frame can support the door and prevent the force of the ram from being concentrated onto the door bolt and striker plate. The battering ram works because it weighs over 100kg - and since Force equals Mass times Acceleration, the more solidly the lock resists motion, the higher the deceleration of the ram at the moment of impact, and the larger the force applied to the bolt and hence through the striker plate to the frame.
Destroying the lock itself may not actually have much effect because the bolt is still likely to pass through the striker plate in the frame and to continue to prevent the door from opening. SteveBaker (talk) 14:39, 17 February 2015 (UTC)Reply
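To put rough numbers on the F = ma point (a back-of-the-envelope sketch; the swing speed and stopping time are assumed values chosen only to show the scaling, not measurements):

# Impulse estimate for a ram strike: F = m * (change in speed) / (stopping time).
m_ram = 16.0     # kg, roughly the one-man "Enforcer" quoted just below
v_impact = 5.0   # m/s, assumed swing speed
dt_stop = 0.005  # s, assumed stopping time against a stiff lock and frame

force = m_ram * v_impact / dt_stop
print(f"~{force / 1000:.0f} kN, i.e. ~{force / 9.81 / 1000:.1f} tonnes-force")
# Halving the stopping time doubles the force: the more rigidly the lock
# resists, the harder it gets hit.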
The model used by London's Metropolitan Police is called an Enforcer; it weighs 16 kg (35 lb) and exerts an impact force equivalent to 3 tonnes. There's a whole playlist on YouTube called Breaching Methods; all the ones that I looked at seemed to be targeting the lock area of the door. Alansplodge (talk) 18:12, 17 February 2015 (UTC)Reply
Hmmm...the kind I've seen in use in the US military is carried by four guys - evidently intended for chunkier doors. Clearly there are multiple types in use. SteveBaker (talk) 04:58, 18 February 2015 (UTC)Reply
I should clarify that when I said lock above, I was including everything that makes up the lock, including the dead bolt and the entry of the dead bolt into the door frame. Nil Einne (talk) 05:24, 18 February 2015 (UTC)Reply
Outward-opening doors are not something I really considered in my above answer. From what I can tell from sources like [8] [9] [10] [11] (Yahoo Answers, but with people who sound like they have experience) [12] (doesn't really discuss it that well) [13] [14], other methods will normally be used, such as prying the door open or breaking the hinges. A battering ram could still be used, either to try to damage the door or to force in the Halligan bar (although a sledgehammer or axe or something else may be used instead of the battering ram, I mean). A notable point with outward-opening doors is that the hinges may be exposed and so could be a weak point (although I think that is more likely to be exploited by law enforcement, by which I'm including special forces etc., and criminals). One thing I was thinking about but didn't mention is doors with a metal bar or similar across them. There is some discussion of how to deal with these in some of the sources. Also, I was a little wrong above in suggesting there won't be any concern about breaking the door. I was primarily thinking of a real emergency. In less important cases, like where there is an alarm in an apparently locked and empty office, an attempt may be made to minimise damage [15]. (And another point I was thinking of but didn't mention very well is that in some cases law enforcement may not necessarily care so much about time taken, as opposed to a successful breach without detection until the breach. Of course, sometimes it's not clear what was being attempted .... [16] (from the above playlist).) With both firefighters and others, the sources generally remind you to check and consider all possible points of entry (including checking that you even need to force entry), and to have a quick look before forcing, to try to determine the best method of doing so. Edit: Should clarify that even in the case of outward-opening doors, if you aren't trying to compromise the hinges, you're normally still targeting the lock area. It's just that you still want to open the door outwards if possible rather than trying to force it inwards. Although it may be slightly above or below the lock, presuming you can't break the bolt as suggested below. Edit2: [17] may also be of interest. It doesn't go much into breaching methods, at least that I noticed, but does mention the tools firefighters may use, at least in whatever area the person preparing it is from, I presume. Nil Einne (talk) 06:44, 18 February 2015 (UTC)Reply
Nobody has yet mentioned a Halligan bar, which is a very effective axe-like tool that can be used to shear a bolt. Here is a video clip of USAF tactical door breaching using this tool. Nimur (talk) 07:45, 18 February 2015 (UTC)Reply
Actually I did, above, although I didn't go into detail about how it may be used and in particular didn't mention this sort of usage. Most of the examples I saw used it to pry rather than to shear the bolt, I presume because it was decided that shearing wouldn't work, because of a lack of training, and/or because it would probably take longer than the alternatives. Nil Einne (talk) 12:50, 18 February 2015 (UTC)Reply
My apologies - Nil Einne definitely gets credit for mentioning this first. Sorry I hadn't read carefully enough! Nimur (talk) 16:31, 18 February 2015 (UTC)Reply

About biological computer simulations...

Could a virtual human model be constructed by making a molecular dynamics simulation of the DNA using the appropriate conditions like making a virtual womb or a virtual fertilized egg?

similar to this: http://www.cell.com/abstract/S0092-8674(12)00776-3

41.235.27.248 (talk) 08:39, 17 February 2015 (UTC)Reply

That model does not use Molecular dynamics. It just updates a large number of cell variables using a combination of "28 Submodels of Diverse Cellular Processes": "For example, metabolism was modeled using flux-balance analysis (Suthers et al., 2009), whereas RNA and protein degradation were modeled as Poisson processes." Obviously (and this cannot be avoided) the model is a gross oversimplification of the actual molecular processes taking place in a cell. It would theoretically be possible to make a similar model that takes into account cell multiplication and the growth of an organism, but the processes that give rise to human development are highly complicated and some aspects are not well understood.
Simulating a human being at a truly molecular level however is far beyond the capabilities of our computers and will probably never be feasible. - Lindert (talk) 10:34, 17 February 2015 (UTC)Reply
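As a toy illustration of what "modeled as Poisson processes" means in that kind of simulation (a sketch far simpler than the published whole-cell model; the rate constant and copy number are arbitrary assumptions):

# RNA degradation as a Poisson process: in each time step the number of
# molecules lost is Poisson-distributed with mean k * N * dt.
import numpy as np

rng = np.random.default_rng(0)
k_deg, n_rna, dt = 0.01, 1000, 1.0   # 1/s, molecules, s (assumed values)

trajectory = [n_rna]
for _ in range(300):
    events = min(rng.poisson(k_deg * n_rna * dt), n_rna)
    n_rna -= events
    trajectory.append(n_rna)

print(trajectory[::60])   # roughly exponential decay, half-life ~69 s

The real model couples 28 such submodels together at every time step, which is where the complexity (and the difficulty of scaling it up to a whole organism) comes from.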
In fact it breaks down at a very basic level. DNA contains the recipes for building the body's proteins. The function of a protein depends on the way it folds up into a 3-dimensional shape. Currently we don't have any computationally tractable way of computing the folding pattern of an arbitrarily chosen protein. See protein structure prediction for more information on the "protein folding problem". Looie496 (talk) 14:00, 17 February 2015 (UTC)Reply


It's certainly possible in principle. Whether it's currently (or ever) likely to be practical is a matter of degree. If you tried to simulate every chemical reaction path at the atomic level, then the computational complexity would be insane. Just figuring out how one single protein will fold up given its chemical structure (See: De novo protein structure prediction) is a task that is so complex that it's currently unsolved. Such partial solutions as we have require hefty amounts of super-computer time and are not yet 100% reliable (See Protein structure prediction). So it's clear that doing it reliably at the level of atoms is not likely to happen for a very long time.
However, we can start at a higher level by simulating the known chemical pathways - that provides a considerable speedup, but at the cost of a loss of fidelity. Since an individual's genome will have small differences in their DNA compared to the average of the entire population, some of their proteins are guaranteed to have subtly different chemical structures, to fold slightly differently than the 'standard model' for these chemical pathways - and to get that right, you're back into the problems of protein folding. So it's likely that until we can solve the problem at the atomic level, we won't be able to reproduce the exact processes that (for example) produce a particular facial appearance or predict brain structure or whatever.
The approach of attacking the problem at the level of chemical pathways would allow us to skirt the issue of exactly how the chemical structure of one organic compound interacts with another, and would greatly reduce the computational complexity to the point where I think we could say that it might one day be possible. But doing that will inevitably produce a 'generic' human from the virtual womb because tiny differences in a real person's genome would change some of those pathways in subtle (or perhaps not-so-subtle) and un-researched ways. But even the more modest goal of simulating a generic human would depend upon us knowing every single chemical pathway in sufficient detail to reproduce it...and clearly we don't yet have that knowledge because new pathways are discovered all the time. But there is a good chance that studies of human biology will eventually uncover 100% of those pathways for a 'generic' human, leading to at least the possibility of coming up with a workable (albeit generic) simulation.
But if you want to predict (for example) how an unborn child will turn out given just the DNA, that level of abstraction is useless.
We could go yet more levels higher - perhaps understanding various cell types and how they replicate and interact - but that starts to look a lot like we're "rigging" the results to come out right...and would require yet more generic information and therefore more generic results.
So I think the answer is basically "No" right now...and probably "No" for the immediate future. But the processes involved are (at least statistically) amenable to computation - so we probably just need a MUCH bigger, faster computer to run it all on. Whether we ever get a computer that large and fast is hard to know - but the limits on the size of transistors is known - and making a computer physically larger always makes it slower (because the speed of light is finite) - so there are limits to how powerful they can get...and getting powerful enough to fully solve de-novo protein folding by brute force in anything like fast enough time to produce a simulation that would run to completion within decades may never be possible - so we're left hoping for an algorithmic breakthrough that may or may not happen. Protein folding is very likely to be an NP-hard problem...so we shouldn't expect solutions that are both fast and accurate.
Bottom line, best guess: "No, it'll never happen".
SteveBaker (talk) 14:24, 17 February 2015 (UTC)Reply
One of the problems is that we don't have a full understanding of how MUCH of the stuff in the DNA works to create biological traits. As noted, DNA only does one thing, make proteins. But there are lots of second-, third-, fourth-, and umpteenth-order stuff going on here. Some bit of DNA may make some tiny protein, which directs the assembly of some other bit, which itself directs the assembly of some other bit, and so on down the line recursively. We're really only good at the first step, that is telling what specific amino acid sequence DNA codes for, basically what we call primary structure. We don't know entirely how such proteins can reliably fold to higher levels of structure. Take that out ten or twenty levels of "we don't know what happens next" and you see how far we are from constructing life from first principles using ONLY the DNA code. And we haven't even gotten into things like epigenetics, which is a fruitful area of research looking into heritable traits which are NOT even coded for in nucleic acids. --Jayron32 17:30, 17 February 2015 (UTC)Reply
Erm no, one thing DNA patently doesn't do is make protein. It's a template to make RNAs, many of which then get turned into protein, but many of which also never get to be protein, and are perfectly happy doing their work as RNA (ribosomal RNA, micro RNA, snoRNA etc etc etc etc)Fgf10 (talk) 21:44, 17 February 2015 (UTC)Reply
Fair enough. I skipped a few steps for the sake of brevity. It's difficult to teach an entire class on cell molecular biology in the space of a few lines on a website. Forgive me. --Jayron32 03:43, 18 February 2015 (UTC)Reply
There's no forgiveness in Hell. μηδείς (talk) 22:58, 18 February 2015 (UTC)Reply

MIT study about living on Mars

I didn't understand this limitation put forth by the MIT study:

"For example, if all food is obtained from locally grown crops, as Mars One envisions, the vegetation would produce unsafe levels of oxygen, which would set off a series of events that would eventually cause human inhabitants to suffocate. To avoid this scenario, a system to remove excess oxygen would have to be implemented — a technology that has not yet been developed for use in space."

Thanks! DRosenbach (Talk | Contribs) 21:45, 17 February 2015 (UTC)Reply

Is it something to do with oxygen toxicity? The plants produce too much oxygen in an enclosed space and create an atmosphere that's too rich to support human life? --Kurt Shaped Box (talk) 21:55, 17 February 2015 (UTC)Reply
Your link was to a press release from MIT, but here is the full study, thirty five pages (presented as a conference paper at the 65th International Astronautical Congress, Toronto, Canada). An Independent Assessment Of The Technical Feasibility Of The Mars One Mission Plan.
It appears the primary concern is that there is no sustainable way to control the molar fraction of oxygen or its partial pressure. State-of-the-art control capabilities depend on ready access to large amounts of nitrogen gas - and when that runs out, the atmosphere becomes uncontrolled. Around that time, the mole fraction of oxygen will be above the level considered safe from a standpoint of fire hazard, and the partial pressure of oxygen will be below the level considered safe from a standpoint of hypoxia. The study authors know exactly how much gas such atmospheric control units need, because NASA has already built state-of-the-art spaceships (like the Space Shuttle and the International Space Station). Gas leakage from such environments has been well-parameterized.
Cited source #30 is A Cabin Air Separator for EVA Oxygen; it was published in 2011 to investigate an oxygen partial-pressure management system suitable for International Space Station; and it explains details of the current state of the art technology. Significant risks include whether the outlet air is safe to breathe; whether the machine is a fire hazard; and the hazardous noise level that the machine may produce.
Nimur (talk) 22:09, 17 February 2015 (UTC)Reply
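The mole-fraction versus partial-pressure distinction is just Dalton's law; here is a small Python sketch (the habitat numbers are invented purely to illustrate how both failure modes can appear at once, and are not from the report):

# Total pressure and O2 mole fraction from the two partial pressures.
def o2_stats(p_o2_kpa, p_n2_kpa):
    total = p_o2_kpa + p_n2_kpa
    return total, p_o2_kpa / total

print(o2_stats(21.0, 79.0))   # Earth-like start: 100 kPa total, 21% O2
print(o2_stats(23.0, 40.0))   # crops added O2, leaked N2 never replaced: ~37% O2
# The O2 fraction is now well above a fire-safe level even though its partial
# pressure has barely risen; venting O2 to reduce the fire risk would push the
# partial pressure down toward hypoxic levels instead.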
It's quite apparent that a lot was left out of lecture during my periodontics training. :) Is this seen as a fundamental flaw in the idea, or is this just with, what we might call, current technology, such that all of this may be dealt with by whatever is developed over the next 10, 20 or 50 years? DRosenbach (Talk | Contribs) 02:52, 18 February 2015 (UTC)Reply
The report primarily concludes two things: that the cost will actually be much much higher (perhaps many orders of magnitude higher) than the proposed cost put forward by the Mars One team; and secondly, that the Technology Readiness Level is "relatively low." TRL is a methodology that organizations like NASA use to estimate technology evolution on a decades-long timescale. Very low technology readiness levels imply that the concept has not even been demonstrated in principle - and that means that there is not presently a path to spaceflight capability for it. In other words, we don't yet even know what we need to work on - so it's fruitless to try to estimate a schedule. Will it be ready in the next 10, 20, or 50 years? Reputable scientists decline to even speculate!
The way I read this report, the MIT authors are not actually saying that a Mars mission is fundamentally flawed. Rather, the MIT authors are taking a somewhat conservative position that any such mission - even using future technologies, so long as we're confident we could build them - will be much more expensive and much more difficult than the Mars One team believes - and then presenting 35 pages of details why. If NASA does decide to commit to such a mission in the near future, you can bet that there will be a lot more study put into these and other details!
Nimur (talk) 03:08, 18 February 2015 (UTC)Reply

Tacking an untethered lighter than air vehicle?

We all know a hot air balloon goes where the wind takes it, controllable only by gaining/lowering altitude or dragging a rope on the ground. Whereas a kytoon, tethered to the ground, is capable of much more control. Still, I wonder... the inertia of a balloon, or in particular its heavy gondola, means that a kite tethered to the center of mass ought to exert a pull on it whenever the velocity vector of the wind changes. Has there ever been a successful use of this inhomogeneity to offer meaningful steering to a lighter than air vehicle without relying on propulsion or outside force? This peculiar offering is the result of one of my playing-VR-in-a-parallel-universe dreams; the design I dreamed involved a streamlined row of methane balloons supporting a base with fermentation tanks from which a telescoping tail of concentric rings of fabric louvers could be extended, drooping downward and in the direction of the wind; the operator, aboard the tail, used the controls so that it pulled at an angle to the wind. When the wind was completely unfavorable it could be retracted by using the weight of the tanks, with variable mechanical advantage, to retract the hawser from which the tail hung; the methane also provided some weak propulsion by propellers and allowed regeneration of lift gas permitting frequent changes in altitude. The dream was made more visually impressive in that, I thought, it was necessary to stay very near the ground to have the most variation in wind velocity... Wnt (talk) 23:37, 17 February 2015 (UTC)Reply

When in doubt, consult the Balloon Flying Handbook. The FAA classifies a "balloon" officially as a non-steerable aircraft, while using other terminology (for example, "thermal airship") to refer to aircraft that are lighter than air, kept aloft by hot air, and are steerable by some means (most typically, powered propulsion). There are also weight-shift-control aircraft, powered parachutes, and so on. For the purposes of regulatory classification and categorization, these aircraft are not balloons.
Refer to 14 CFR §1.1 Definitions for more information. It is important to know exactly how steerable an aircraft is, because a steerable airship has different right-of-way rules than a balloon or a glider or a weight-shift-controlled aircraft (14 CFR §91.113).
Nimur (talk) 00:20, 18 February 2015 (UTC)Reply
A hot air balloon moves with its surrounding air, so there is no difference in air movement for the kytoon to work with. It might work if the kytoon could reach a different air layer, but the goal of changing direction is much more easily reached simply by driving the hot air balloon up or down into that layer. --Kharon (talk) 04:02, 18 February 2015 (UTC)Reply
@Kharon: maybe I should have started with a simpler question: when you're on the ground, it's quite common to see the weathervane twist this way and that, and for gusts of wind to pick up and die out. This is wind shear, AFAIK. How much does wind shear affect a lighter-than-air vehicle that is close to the ground? (contour flying, as Nimur's FAA manual describes it) The question then is whether this provides enough force, when properly tapped, to significantly alter its course from that of the overall wind. Wnt (talk) 13:56, 18 February 2015 (UTC)Reply
Wind shear is hazardous to all aircraft, and it is particularly hazardous to a balloon. Wind shear can cause a balloon to fly in unusual attitudes; it can induce lift that tugs the balloon downwards (in the opposite direction to that which pilots normally want to be lifted!); low level wind shear can make take-off and landing dangerous. Pilots of light aircraft - and lighter-than-air aircraft - typically try to avoid flight into known conditions of wind shear. Wind that abruptly changes its direction and magnitude is dangerous, unpredictable, and most importantly, invisible.
In fact, if you read the chapter on contour flying, you'll see how very very strongly the handbook emphasizes that it must be conducted safely and legally. "All aircraft should be operated so as to be safe, even in worst-case conditions. Every good pilot is always thinking “what if...,” and should operate accordingly." What if flying into known wind-shear causes the balloon envelope to collapse? What if the wind shear or turbulence inverts the aircraft and dumps the pilot or passenger out of the gondola? If you can't guarantee control of the situation, you aren't making great aeronautical decisions. It would be unwise to design or operate an aircraft whose entire principle of operation runs counter to well-established common-sense guidelines. Nimur (talk) 21:46, 18 February 2015 (UTC)Reply
I greatly appreciate your realistic pilot's common sense! All this is true. I was thinking of this more theoretically (specifically, in terms of a simulated "first propelled aircraft" from a dreamy parallel universe) After all, I was dreaming of a historical computer game, my parallel self riding in the swinging tail of the gondola pulling on the rigging to twist the louvers this way and that to put the tail's pull at an angle to the wind, the other member of the team sitting in the gondola controlling fins on the main body and running the tanks. And not a very serious gamer either ... the dream started with him inserting in free-fall heading right at the spot to control the tail where I had set up, and me dodging and getting knocked out against one of the smaller square fabric louvers barely hanging on. And later on he fouled it by somehow tilting too far out of horizontal and getting air backed up into the fermentation tanks, which did something to foul up the lift gas regeneration... it was a good dream :) More seriously, in the days of computer control of things - including perhaps unmanned lighter than air vehicles - and in any case perhaps allowing better real-time control than a human can do, with better sensing of oncoming wind changes - many of these safety considerations might be substantially reduced. Wnt (talk) 00:53, 19 February 2015 (UTC)Reply
I don't know much about balloons, but I've logged lots of hours with 1-, 2-, and 4-line kites. The single-line fighter kites in particular are a fascinating example of control through dynamic instability. I see no reason why a balloon with a heavy gondola shouldn't be able to use two or more quad-line kites to achieve some amount of tacking or otherwise upstream movement that is not the exact same as the average wind velocity for the local region. Think of the SkySails system, but with more kites and variable length lines. You'd hypothetically be able to tap into different airstreams, and also fly each kite near the edge of its Flight_envelope (see here [18] for kite-specific discussion) to achieve various torques and forces, that could then interact with whatever foils are on the gondola and balloon. A sort of intermediate step between a fixed anchor and a free-floating kite/balloon would be kite surfing, and those guys can definitely do some weird things in the air. There's also some mildly related stuff at Kite_applications. SemanticMantis (talk) 15:18, 18 February 2015 (UTC)Reply

February 18

Cell Phone for Use in Europe

If this isn't the right desk, then should I post to the Miscellaneous desk? I will be going to Rome over the night of Thursday, 20 February through Friday, 21 February, via Frankfurt and want to be able to call or at least text my daughter and other family members to let them know that I have arrived safely. I spoke to my cellular carrier, Verizon, and they tried to be helpful, but were not. They said that I should have called them earlier so that they could have sent me a temporary phone. They said that I can buy a pre-paid phone in Europe. That is all right, but I asked if I could buy a pre-paid phone that will work in Europe and in North America while I am still in North America. They said no. Is that correct? Do I really have to wait until I get to Frankfurt to let my family know that I am in Europe? Was my carrier being straight with me in saying that I can't do anything until I get to Europe, or can I get a pre-paid phone in the US that I can use in Europe to text or call the United States? Robert McClenon (talk) 03:14, 18 February 2015 (UTC)Reply

Phones that work in both the U.S. and Europe are increasingly common (I have a couple). What model phone do you have now? It just might work in Europe. Short Brigade Harvester Boris (talk) 03:20, 18 February 2015 (UTC)Reply
The model that I have is a flip-phone that my carrier said will not work in Europe. I believe them on that point. My real question is only whether I can get a phone at the airport that will work in Europe. The technical services person said that I can't get one here, and have to get it in Europe. She may be right, but I would guess that she doesn't know something, which is why I am asking. Robert McClenon (talk) 03:36, 18 February 2015 (UTC)Reply
You absolutely can get a phone here that will work in Europe.
A little technical mumbo-jumbo: Your Verizon phone uses a technology called CDMA. Europe (and almost everywhere else in the world) uses a technology called GSM. That's why your current phone won't work over there.
In the U.S. both AT&T and T-Mobile use GSM. Although they use different frequency bands than European carriers it's common to find "quad band" phones that work in both Europe and the U.S. One such is the Moto E available at BestBuy for $119. There are two models of the Moto E: a "global GSM" and a "U.S. GSM" model. Both have the same voice frequencies and should work for voice, text, and 2G data in both Europe and the U.S. (I have no experience with this model but according to the specs it should.) The difference is that the U.S. model replaces the European 3G frequency bands with T-Mobile's oddball frequency bands for 3G data.
If you don't care about high-speed data overseas I'd go with the "U.S. GSM" model and T-Mobile's pre-paid plan. From what I can tell it will let you send text messages from overseas for $0.50. Voice calls from overseas are very expensive -- they almost always are when roaming internationally with a prepaid plan -- but are tolerable for a quick emergency or safe-arrival call (full rate card here).
Hope this helps. I haven't actually used the Moto E but travel internationally a good deal, and from the specs this is how it should work (i.e., I'd be willing to spend my own money on it). You might try calling T-Mo to check, but often the front-line support staff don't know much about these more uncommon issues, as you have found with Verizon. Short Brigade Harvester Boris (talk) 03:59, 18 February 2015 (UTC)Reply
There's always the easier option of finding the nearest Wifi and using VoIP/WhatsApp or internet-based messaging services. Fgf10 (talk) 07:47, 18 February 2015 (UTC)Reply
...or, thinking outside the box, use a phone booth. They get rarer, but should still be around in very central locations like airports. --Stephan Schulz (talk) 09:48, 18 February 2015 (UTC)Reply
Finding wifi doesn't help you if you don't have a device which can use it. Or to put it a different way, if the OP is planning to take a laptop or tablet or something, perhaps this will work. However, if all they're taking is their flip phone, I'm not sure this will have wifi, in which case finding wifi will be of no use. They could buy a cheap smart phone with wifi, but unless this is something they want, it's probably more cost effective to either just get a cheap phone with quad band GSM, or to get a phone in Europe, as suggested below. (You could probably also buy a dual band or triple band phone that is designed for European markets in the US, but I strongly suspect buying a quad band phone in the US would be easier and probably cheaper too.) By cheap phone I'm thinking something probably under USD25. Nil Einne (talk) 13:05, 18 February 2015 (UTC)Reply

What busters cancelled the regime (status) of plug and play in telephony?--83.237.219.81 (talk) 09:23, 18 February 2015 (UTC)Reply

If you only want a basic mobile phone (no web browsing, and no or a very poor camera), then you can get one for €20, plus a pay-as-you-go SIM. Lyca and Lebara offer cheap international calls back to the US/Canada. If you only want to make calls in the European country you are in, then pick any local SIM. In any case, you will probably be asked for your address; you can normally use your hotel's address. LongHairedFop (talk) 11:00, 18 February 2015 (UTC)Reply

Internet connection: charging by the time or amount?

Why is Internet access (when it is not flat rate) charged by the amount of data, and not by connection time? I am thinking of 3G and 4G plans. --Fend 83 (talk) 13:10, 18 February 2015 (UTC)Reply

Because "connections" are (nearly) free, while data transfer is not. The Internet Protocol is packet-oriented. You don't "rent a line" to "the internet" that is exclusively yours, but you rather send an addressed chunk of data through the net. It's more like the postal service, where you buy stamps for each message you send, not like a cable provider that gives you an exclusive cable (and signal), wether you use it or not. --Stephan Schulz (talk) 15:05, 18 February 2015 (UTC)Reply
In a phone call, you're sending data continuously at a fixed rate - so the amount of time is a direct measure of the amount of data you're sending. When you're surfing the web, you're typically only consuming data when (for example) you load a new page - once it's loaded you can have it on the screen for minutes, hours or days without consuming any more data, so charging by time doesn't make any sense. So, in effect, you're being charged for the amount of data you send and receive in both cases - it's just that in the case of a phone call, it's easier to think of it in terms of "minutes" rather than megabytes.
The network doesn't really care how long something takes - only how much of its limited data capacity you're consuming...so that's what you're paying for.
SteveBaker (talk) 15:06, 18 February 2015 (UTC)Reply
But there was a time when dial-up internet ruled. And if I am right, this was charged by the minute, not the KB. Couldn't they have allowed users to connect all the time they wanted, but charged them by the data, to limit its use? Fend 83 (talk) 15:51, 18 February 2015 (UTC)Reply
Yes, but that would have kept the phone line open whenever they were logged on, making it unavailable for anyone else. This would result in people leaving their PCs online all the time, but doing very little data transfer, since the price per data transfer would have to be very high. Also, monitoring the amount of data transfer wouldn't be automatic with such a system, you'd basically have to add hardware to eavesdrop on the line to figure out how much data was being transferred. StuRat (talk) 15:57, 18 February 2015 (UTC)Reply
There seems to be some confusion here. There are two components that may arise for dialup internet connection charges. One is the cost you may pay to your telecommunications provider, in other words the company providing you your phone line. The other is what you pay to the internet service provider, the company who provide your internet connection. Depending on a variety of local stuff, you may only pay one of these, the companies may be the same, or the fees may be combined. But in terms of your internet service provider, there would be no need to eavesdrop on any line, and no reason why monitoring the amount of data transfer would be any more or less automatic (whatever you mean by that) than mobile connections, or wifi, or DSL or whatever. It would basically work in a similar fashion (monitor data transfer through some router). Now if your telco wanted to charge by data transferred (which makes almost no sense; at least your ISP charging makes a small amount of sense), they would need to either get this info from the ISP, or eavesdrop on the line, but I'm not sure if that's what the OP was suggesting. Nil Einne (talk) 16:11, 18 February 2015 (UTC)Reply
I believe the usual arrangement was for it to be a local call from the home to the ISP, and local calls were typically charged per call rather than per minute, at least in the US. At that point the ISP paid for the call from there on, and billed the customer accordingly. Since they would have used long distance phone lines in those days, and have been billed per minute by the carrier, they would naturally want to pass that per minute rate on to their customers (usually they did this by billing a fixed amount for a given number of minutes, then either cut you off or charged more if you went over). StuRat (talk) 16:24, 18 February 2015 (UTC)Reply
More generally, whenever you price something in a way that doesn't match the true cost to the supplier, you will get problems. For example, if a landlord includes utilities "for free", then tenants will be wasteful of those utilities, doing things like leaving windows open in winter in rooms they want a little cooler. Ultimately this will lead the landlord to raise the rent, or limit heat to everyone, so everybody loses in the end. Then competition will kick in, and another landlord with a more sensible billing policy will get the tenants moving in there, forcing the first landlord to change his policy or go bankrupt. StuRat (talk) 15:57, 18 February 2015 (UTC)Reply
...and this is already happening. Voice call rates are generally priced *WAY* higher than the amount of data they actually produce/consume. That causes customers to use VOIP services (things like Skype and Magic Jack) that convert your voice call into data packets that the phone company cannot easily distinguish from other kinds of data transfer. Telephone-quality speech generally consumes around 8k bits per second. A single image on a web page can easily require a hundred to maybe a thousand times that. If you have a couple of gigabytes of data per month included for free in your phone service agreement - then that's equivalent to maybe 500 HOURS of voice calls. By any reasonable standards, voice calls over digital networks should be free. SteveBaker (talk) 20:20, 18 February 2015 (UTC)Reply
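To put a rough number on that comparison (a back-of-the-envelope sketch only, using the 8 kbit/s voice figure and the 2 GB allowance mentioned above):

# Back-of-the-envelope check of the "roughly 500 hours" figure above.
voice_bitrate_bps = 8_000          # 8 kbit/s telephone-quality speech, as stated above
allowance_bytes = 2 * 10**9        # 2 GB monthly data allowance (decimal gigabytes)

seconds_of_voice = (allowance_bytes * 8) / voice_bitrate_bps
print(seconds_of_voice / 3600, "hours")   # ~556 hours, i.e. on the order of 500 hours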

MH370 radio

I haven't yet noticed much about the radio signals aspect in the MH370 story, so thought of two things (seemingly not mentioned in our article). First, can ATC controllers actually tell whether the radio transmitter used for communication with ATC has been switched off in the aircraft, similar to phone networks telling the calling person whether a particular phone is busy or turned off?

Secondly, since radio signals from ATC bounce off the aircraft regardless of whether the onboard radio is switched off or not, was it possible for ATC to track the flight via the Doppler effect by repeatedly contacting MH370, measuring the radio signals that bounce off it, and doing the related math? Brandmeistertalk 18:12, 18 February 2015 (UTC)Reply

The Doppler effect would only tell you the speed at which the plane was moving towards or away from the observer, and I doubt if they have it set up to detect even that. StuRat (talk) 18:17, 18 February 2015 (UTC)Reply
Besides, that's what radar does...they already have radar systems - why have their voice radio duplicating that function? The problem with radar (and using voice radio to perform the same trick) is a matter of range. For a voice transmission to get someplace, it has to cover some distance or other...for radar to work, the signal has to be bounced off of the aircraft and reflected back to the transmitter. Right there, you've doubled the range that the signal has to travel...which means that you need four times as much power to make it detectable. In practice, it's worse than that because the curved surfaces of the aircraft tend to scatter the signal all over the place, so the amount of the reflected signal getting back to the source is small. Radar systems are designed with enough power - and at a frequency - where these problems are minimized - voice radio is not.
As for whether the ATC operators can tell if the radio is turned on or not - well, no - they could only possibly tell if the radio was actively transmitting (like if the pilot has his thumb on the "TALK" button)...and ordinarily, pilots are trained to avoid transmitting when it's not strictly necessary in order to avoid interference with other aircraft - so an always-on transmitter would be a bad idea for long-range communication. SteveBaker (talk) 20:08, 18 February 2015 (UTC)Reply
As somebody who lives and breathes the RADAR equation, I'm obliged to remind our esteemed Steve Baker that RADAR power goes as R^-4. At twice the distance, you need sixteen times as much transmitter power for the same signal-to-noise ratio. This order of magnitude is not something to scoff at. Nimur (talk) 21:54, 18 February 2015 (UTC)Reply
Hypoxia may have set in - that only applies to a passive target. The transmitter's signal decays at R^-2. So if the base station pings the a/c and it broadcasts in response, the R^-2 equation applies. Greglocock (talk) 22:14, 18 February 2015 (UTC)Reply
I thought we were talking about primary returns... but yes, if you're including a transponder or other secondary surveillance radar, ... but then your transmitter power needs to be set to the power of the aircraft's squawk box, not the gigantic megawatt-scale ground-station ! Nimur (talk) 22:22, 18 February 2015 (UTC)Reply
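For readers following the exchange above, a minimal numeric sketch of the two scalings being debated (this ignores antenna gains, radar cross-section and every other term of the full radar equation; it only illustrates the exponents):

# Passive (skin) return: received power falls as 1/R**4, so required transmit
# power scales as R**4. One-way link or transponder reply: exponent 2 instead.
def extra_power_factor(range_ratio, exponent):
    """Factor by which transmit power must grow to keep the same received power."""
    return range_ratio ** exponent

print(extra_power_factor(2, 4))   # passive radar return: 16x for double the range
print(extra_power_factor(2, 2))   # one-way / transponder: 4x for double the range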
Yes, and radar requires a direct line of sight to work, so the curvature of the Earth gets in the way. Some forms of radio, though, like ham radio can bounce between the sky and the surface repeatedly to go halfway around the world. StuRat (talk) 22:03, 18 February 2015 (UTC)Reply
"radar requires a direct line of sight to work" more bollox from the man who knows everything. JORN. Greglocock (talk) 02:25, 20 February 2015 (UTC)Reply

Planting milkweed and Monarch butterfly--when and where is it helpful?

Lots of us are planting milkweed to try to prevent the monarch butterfly from going extinct. But I have read on the web that this can sometimes be counterproductive. Is it a location or timing thing? Where in the United States is it helpful or at least harmless, and where is it harmful? Exactly what is the reason that planting milkweed sometimes paradoxically harms monarch butterflies? 155.97.8.213 (talk) 22:42, 18 February 2015 (UTC)Reply

My understanding is that milkweed needs to be planted in sufficient quantity and in a large enough area in order for it to be useful. One plant in a mostly tree-shaded backyard will be of little use. (I read this within the last week or so, but don't have a source.) In NYC this is pointless, but my parents live in a formerly rural suburban area with a lot of open space. I'd contact the local municipality and ask if they have open space areas like they do in NJ. μηδείς (talk) 22:55, 18 February 2015 (UTC)Reply
See Monarch butterfly decline: Monsanto’s Roundup is killing milkweed.
Wavelength (talk) 23:04, 18 February 2015 (UTC)Reply
You are defaming Monsanto. Roundup is an herbicide designed to kill plants. There is nothing wrong with their product. It's like blaming Ginsu knives for stabbings. μηδείς (talk) 01:19, 19 February 2015 (UTC)Reply
Where I come from, milkweed is considered a noxious weed. As to the Roundup complaint, an even better analogy is that it's like complaining if Raid were to kill your pet preying mantis. ←Baseball Bugs What's up, Doc? carrots01:27, 19 February 2015 (UTC)Reply
For the correct spelling, see wikt:praying mantis.—Wavelength (talk) 02:01, 19 February 2015 (UTC)Reply
Probably praying it doesn't get hit by Raid. ←Baseball Bugs What's up, Doc? carrots02:43, 19 February 2015 (UTC)Reply
And your grandfather didn't consider clover a weed. Weed is a cultural definition, not a scientific one. SemanticMantis (talk) 15:06, 19 February 2015 (UTC)Reply
Yes, cultural, hence the "where I come from." And how do you know what either of my grandfathers thought of clover? ←Baseball Bugs What's up, Doc? carrots01:52, 20 February 2015 (UTC)Reply
Surely you are joking. In case you are not - What utter dross. We cannot defame with scientific observations. Glyphosate is applied at extremely high rates across the USA, over 91k tons per year (NASS data here [19]). "Roundup ready" crops developed by Monsanto lead to the creation of glyphosate resistance in weeds ([20]), which leads to even more roundup usage, in a vicious cycle of positive feedback. Gene escape ([21]) has been discovered in many Brassica strains, including rapeseed. Roundup is relatively safe in small applications, but dousing half the country with it every year tends to fuck things up, with respect to our soil, our native plants, our waterways, and many other natural systems. These are all documented in our article, along with many citations to peer-reviewed sources. As I said, Roundup is not the worst pesticide ever. But the incredibly high usage rates are still the cause of many negative impacts to our agricultural systems and natural ecosystems. Remember, the dose makes the poison, and our dose of glyphosate in the USA is extremely high. Worldwide, roundup is applied to roughly the area of two Californias, and most of that is in the USA, as related by Genetically_modified_crops. So I think I have plenty of room to criticize both Monsanto, and the farmers who use RR crops, all while staying very clear of "defamation". SemanticMantis (talk) 15:06, 19 February 2015 (UTC)Reply
I am not going to respond to that directly, I just don't want my silence to imply I concede or agree μηδείς (talk) 17:51, 19 February 2015 (UTC)Reply
Can you show us a source that says it's counterproductive? I'm somewhat familiar with recent research in conservation biology, restoration ecology and even butterfly gardens, and I have never heard that claim. Another thing to point out is that Monarchs aren't really in danger of going extinct in the near future, it's the migration that is endangered. See e.g. here [22], which quotes the WWF scientist as saying
Anyway, burden of proof applies here, and without a specific claim, much less supporting argument and citation, it's hard for me to refute. I will give some general info that might be helpful though -
If planted naturally, the "timing" probably isn't an issue. This would come under phenology. Now, there are concerns that phenological patterns may be breaking down due to climate change, but each individual plant and butterfly will be following its own cues for timing, and planting milkweed can't really change that.
The native range of Asclepias is huge in the USA e.g. [23], so I don't think there will be any problems on that front.
Now, the one thing that could conceivably be true is this: if you were to grow lots of milkweed in greenhouses, and make sure there were always some in bloom, some always bolting, etc., then that artificial change to the phenology could possibly screw up the butterflies' desire to migrate based on food scarcity. So maybe we shouldn't intentionally muck around trying to confuse the Monarchs.
The main point is, I believe, for most any home owner in the USA, planting common milkweed on their property can't actually harm the monarch species. SemanticMantis (talk) 23:55, 18 February 2015 (UTC)Reply
Oh, sorry is it this [24]? It says that planting non-native species could be a problem, for pretty much exactly the reasons I said above - exotic milkweeds will bloom all year, and make migration less likely. Good news is, this is very simple, just make sure you plant common milkweed -- you'll have some nice flowers, help the monarch migration persist, and can't really go wrong :)
SemanticMantis (talk) 23:58, 18 February 2015 (UTC)Reply
See The beautiful reason you should plant milkweed | MNN - Mother Nature Network for instructions.
Wavelength (talk) 00:04, 19 February 2015 (UTC)Reply
See Ask the Monarch Butterfly Expert: Dr. Karen Oberhauser.
Wavelength (talk) 00:07, 19 February 2015 (UTC)Reply
See http://monarchwatch.org/waystations/ - they produce a "Creating a Monarch Waystation" guide, and if there is a problem they will know about it. Richerman (talk) 11:57, 19 February 2015 (UTC)Reply

February 19

mixture of hair colours

What hair colour will a boy or a girl have as a result of his/her dad being a brunette and mom being a blonde? What hair colour will a boy or a girl have as a result of his/her dad being a redhead and mom being a blonde? What hair colour will a boy or a girl have as a result of his/her dad being a black hair person and mom being a blonde? What hair colour will a boy or a girl have as a result of his/her dad being a redhead and mom being a brunette? What hair colour will a boy or a girl have as a result of his/her dad being a black hair person and mom being a brunette? What hair colour will a boy or a girl have as a result of his/her dad being a black hair person and mom being a redhead? Will it be a mix of those two colours? Please answer and no discussion. Thanks. — Preceding unsigned comment added by 70.29.32.68 (talk) 01:45, 19 February 2015 (UTC)Reply

Read This section of our article on human hair color, paying special attention to the sentence that reads "The genetics of hair colors are not yet firmly established." There are several genes which determine human hair color, and your questions cannot be answered knowledgeably based only on knowing the hair colors of the parents. There are some general trends of dominance (for example, dark hair tends to be dominant and lighter hair tends to be recessive), but there are really WAY too many factors at play which cannot be accounted for, and you simply CANNOT reliably predict, in a deterministic way, the hair color of a child solely from knowing the hair color of their parents. --Jayron32 02:35, 19 February 2015 (UTC)Reply
meta
The following discussion has been closed. Please do not modify it.
Good answer. Oops, that might count as "discussion." The OP is just a tad pushy for a drive-by. ←Baseball Bugs What's up, Doc? carrots02:42, 19 February 2015 (UTC)Reply
You know, when you call all IP users "drive by", you sound like a bigot. It's also denigrating someone based solely on their online appearance, which sounds a bit like racism. Just a little FYI. SemanticMantis (talk) 15:10, 19 February 2015 (UTC)Reply
That comment belongs on Bugs's talk page, SM, not here where it makes you look like a scold. Bugs actually might have a justification here, given all the racialist questions that come from this range of IP's including ones asking me as a negress if my intelligence were substandard. μηδείς (talk) 20:15, 19 February 2015 (UTC)Reply
Then where does Bugs' baseless name calling belong? Or what about your comment to me that I commented in the wrong place? These could all have been posted elsewhere, or not at all. If Bugs wants to WP:BITE here, I can say here that I think that's rude and uncalled for. WP:BITE, WP:AGF, etc. SemanticMantis (talk) 22:01, 19 February 2015 (UTC)Reply
"Baseless"? Exactly one edit for that IP in the last six years. That's a pretty strong basis. Would you prefer I said, "Probable sock of the Toronto racist troll"? And where's your criticism of the OP's snippy comment, "Please answer and no discussion"? ←Baseball Bugs What's up, Doc? carrots22:11, 19 February 2015 (UTC)Reply
If you knew the hair colors of the grandparents or even great-grandparents, that would help to establish if the parents have recessive hair colors which don't show. For example, if both grandmothers are blonde, even though the parents have dark hair, they may have blonde recessive genes, and each of their children may get 2 copies of those blonde recessive genes and therefore have blonde hair. StuRat (talk) 15:18, 19 February 2015 (UTC)Reply
Well, except that the 2-gene hair color theory (one dominant gene that codes for black melanin and one recessive gene that codes for red melanin) works reasonably well, but still cannot account for the full variation of human hair color. Usually (but not reliably so), two red-headed parents have red-headed kids (as would be explained by lacking any of the dominant alleles at those two loci), but often enough two red-haired parents will have a dark-haired child, or some such. It's a complex melange of factors. Usually, the standard Mendelian dominant/recessive thing works out, but it fails often enough that we can't make rock-solid, 100% predictions about the likely hair color (or eye color, or the like) based solely on knowing the hair colors of one's recent family tree. --Jayron32 02:44, 20 February 2015 (UTC)Reply
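To make the hidden-recessive point above concrete, here is a toy single-locus Punnett-square calculation (a deliberate oversimplification, since as noted several genes are actually involved; 'B' for dominant dark and 'b' for recessive blonde are illustrative labels, not real gene names):

# Toy one-locus model only; real hair-colour genetics involves several genes.
from itertools import product

def offspring_phenotypes(parent1, parent2):
    """Cross two genotypes, e.g. 'Bb' x 'Bb'; 'B' (dark) is dominant over 'b' (blonde)."""
    counts = {"dark": 0, "blonde": 0}
    for allele1, allele2 in product(parent1, parent2):
        counts["dark" if "B" in allele1 + allele2 else "blonde"] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

print(offspring_phenotypes("Bb", "Bb"))  # {'dark': 0.75, 'blonde': 0.25}

In this toy model, two dark-haired carrier parents have a 1-in-4 chance of a blonde child, which is the scenario StuRat describes; the caveats above explain why real predictions are far less tidy.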

72 hour pattern discovered

This is not a request for medical advice, so please don't offer it. This is a question about putative meteorological effects on physiological systems. To illustrate the problem, I will use myself as an example. Approximately 72 hours before we receive rain in Hawaii, I get a distinctive weather pain in certain areas of my body. I am not looking for any kind of medical advice. I am looking for an answer as to why this pattern of 72 hours consistently applies. Why not 24 or 48, for example? In other words, I get a certain weather pain 72 hours before it rains. Can anyone explain the significance of this 72 hour pattern? For example, what is happening meteorologically speaking at the 72 hour mark before the storm arrives and how does it impact living systems? I get the sense that I'm not alone, as my cat seems to feel it too. Viriditas (talk) 02:14, 19 February 2015 (UTC)Reply

Not meteorological, psychological. Some combination of confirmation bias and placebo effect is probably at work here. --Jayron32 02:29, 19 February 2015 (UTC)Reply
I specifically linked to weather pains to prevent this kind of response. Have you read it? The meteorological effects on physiology are a scientific fact, neither psychological nor related to confirmation bias or placebo. I'm curious, on what basis would you make such an ignorant statement? Viriditas (talk) 03:39, 19 February 2015 (UTC)Reply
As said below, the weather pains article doesn't support your claims. It suggests some people with certain conditions may be affected by certain weather changes. It doesn't suggest anyone is able to reliably predict rain exactly and always 72 hours in advance, instead of 71 hours or whatever, or 73 hours or whatever. Nil Einne (talk) 11:19, 19 February 2015 (UTC)Reply
I never said any or insisted on any of those things. I think you are interpreting my words a bit too literally. I specifically asked about meteorological conditions at approximately 72 hours leading up to the storm and their effects, which StuRat directly addressed in his reply. Viriditas (talk) 19:17, 19 February 2015 (UTC)Reply
Whenever anyone reports a phenomenon based solely on their personal experience, confirmation bias is the only reasonable conclusion. Scientific reliability requires things like falsifiability and double blind trials and things like that. Beyond that, we have NO WAY of explaining why you experience the unexplained thing you are experiencing. You're aware of weather pain, which is a documented phenomenon. However, when we're dealing with the experience of one person, the only reasonable thing to do is focus on what can explain a unique experience of one person. One's own unique experience can only be explained by first looking for the most likely explanation: that your unique and heretofore unexplained experience is due to an internal process in your head (confirmation bias!) and not due to any scientific universal. If it were due to a scientific universal it would have already been established as such by the multitude of people who had the same experience and would be reported in articles like weather pain already. --Jayron32 02:36, 20 February 2015 (UTC)Reply
This sounds like folk tales (possibly true) of people whose arthritic knees would flare up when a storm was approaching. I think a lot of that is attributed to subtle changes in atmospheric pressure, humidity level, etc. As regards 72 hours, I don't trust any weather forecast that's more than a few hours old. ←Baseball Bugs What's up, Doc? carrots02:40, 19 February 2015 (UTC)Reply
Weather pains are not folk tales. Did you read the link I provided? Clearly, you did not. The question about 72 hours regards the exact meteorological conditions and why it would start 72 hours out. It may be, for example, due to Hawaii's proximity in the Pacific and the coastal effects due to the volcanoes (orographic precipitation). I don't know the answer, but the replies up above are absurd. Viriditas (talk) 03:39, 19 February 2015 (UTC)Reply
Your link calls it "folklore", which might be the better term. Either way, folk wisdom is not necessarily false. Let me ask you this: How far ahead would you consider local, conventional weather forecasts to be reliable for Hawaii? I would think more so than in the mainland. ←Baseball Bugs What's up, Doc? carrots04:05, 19 February 2015 (UTC)Reply
Hawaii weather is pretty predictable. In other words, partly cloudy with scattered windward and mauka showers. It's very boring. Actual storms are rare...and painful. Viriditas (talk) 09:42, 19 February 2015 (UTC)Reply
See above and below. There's a big difference between weather pains which our article discusses and mentions does have some scientific support, and claiming someone can reliably predict rain 72 hours in advance, but not 71 hours or less or 73 hours or more for which you've so far presented zero evidence in support. The responses here seem mostly fair. Noting that while BB did call them folk tales, they also suggested they may be true. It may have been best if BB read the article before responding, but still what's more absurd is to make a claim without evidence and refuse to accept suggestions you may be wrong. Nil Einne (talk) 11:19, 19 February 2015 (UTC)Reply
There's nothing to be right or wrong about, so your comment doesn't make sense. My question specifically has to do with meteorological conditions approximately 72 hours before the storm, which has been answered by StuRat. What evidence am I supposed to provide in this discussion other than the study indicating it is possible? I'm not here to ask about the evidence, only the meteorologic effects. Viriditas (talk) 19:17, 19 February 2015 (UTC)Reply
No physical process reliably predicts rain three days in advance. Weathermen would love it if there was. That said, there might be environmental changes that occur a few days before rain often enough that you notice it. Given its mild climate, I suspect you would already have noticed any temperature swings large enough to be important. So if we discount ambient temperature, the most likely physiological agent is probably pressure variations. I would suggest looking up the barometric pressure at times when you feel "weather pains" and comparing those values to the typical pressure over longer time periods to see if you can see a pattern. If you already have a list of times you've experienced it in the past, you could also look up historical values for pressure. Dragons flight (talk) 04:06, 19 February 2015 (UTC)Reply
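A minimal sketch of the comparison Dragons flight suggests, assuming you have a log of pain-episode timestamps and hourly barometric pressure readings for the same period (the numbers below are placeholders, not real Hawaiian data):

# Placeholder data: hourly pressure readings (hPa) and indices of hours at
# which "weather pain" was logged. Replace with real observations.
from statistics import mean

hourly_pressure_hpa = [1016, 1015, 1014, 1012, 1009, 1008, 1010, 1013, 1016, 1017]
pain_hours = [3, 4, 5]   # hypothetical indices of logged pain episodes

pain_pressures = [hourly_pressure_hpa[i] for i in pain_hours]
print("mean pressure during pain episodes:", round(mean(pain_pressures), 1), "hPa")
print("overall mean pressure:", round(mean(hourly_pressure_hpa), 1), "hPa")
# A consistent offset, or a consistent drop in the days before recorded rain,
# would be the kind of pattern worth looking for.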
This study supposedly shows people with joint pain predicting rain. Does anyone have access to it? Viriditas (talk) 09:35, 19 February 2015 (UTC)Reply
Probably everyone with a web browser, internet access and a PDF viewer? The full text PDF is linked in that page and seems to be available for free and isn't behind a paywall as far as I can tell. It doesn't seem to suggest anyone is able to reliably predict weather 72 hours in advance and for that matter exactly 72 hours and not 74 hours or 70 hours. Nor does our article. So your responses above to other respondents are fairly confusing. Nil Einne (talk) 11:06, 19 February 2015 (UTC)Reply
You seem to be getting hung up on the time frame for some reason. That study shows people using their pain to reliably predict rain. In my case, my question has to do with meteorological effects on the body 72 hours before the rain occurs. Viriditas (talk) 19:17, 19 February 2015 (UTC)Reply
I wonder why 72 hours rather than, say, 73 or 66. Even on Star Trek pretty much everything happens in multiples of 12 hours. —Tamfang (talk) 04:47, 19 February 2015 (UTC)Reply
It wasn't meant to be taken literally. I see my audience has once again had a predictable reaction. I used the word "approximately" for good reason. Viriditas (talk) 19:17, 19 February 2015 (UTC)Reply
The heading doesn't say "approximately". And you would have been better off saying "about 3 days" instead, as 72 hours sounds a little too specific. ←Baseball Bugs What's up, Doc? carrots20:04, 19 February 2015 (UTC)Reply
Low pressure associated with rain could certainly cause pain, but that should only be a few hours before the rain arrives. At 72 hours before, you might be at the height of the high pressure between storms. Maybe you react like that to high pressure. Now rain systems here are rather chaotic in timing, but maybe there they follow a more predictable pattern. StuRat (talk) 07:06, 19 February 2015 (UTC)Reply
It's fairly well established that some people feel joint pain when the air pressure drops abruptly. The issue here is whether rain occurs 72 hours after such a pressure drop at a sufficiently high probability to convince our OP. Certainly meteorologists can measure air pressure just as effectively as someone with joint pain - since we know that they cannot (In general) predict with that kind of accuracy three days in advance, it cannot be that our OP can reliably predict rain in that manner. HOWEVER, it might be that in some very simple weather situation (and an island in the middle of a large ocean is a fairly simple thing, meteorologically-speaking), this degree of reliable prediction is possible. It's certainly plausible that (say) three days ahead of landfall of a rainstorm in Hawaii, there is a 90% probability of a pressure drop. That wouldn't help in predicting the weather in the middle of a large land-mass (for example) - which explains why (on average) the meteorologists can't make that prediction - but perhaps, in this case, it does.
That changes this question into one of: Are rainstorms in Hawaii reliably preceded by a drop in pressure - and how long before the rain event does that typically occur?
Sadly, I can't answer that one - but that is the key to understanding what our OP is saying here. SteveBaker (talk) 20:53, 19 February 2015 (UTC)Reply

Banana spiders - season for spinning webs

Banana spiders, such as Golden silk orb-weaver spin their webs seasonally. In what months (in the northern hemisphere) do they spin their webs? Bubba73 You talkin' to me? 02:28, 19 February 2015 (UTC)Reply

Are Chicory and belgian endive one and the same?

The article Chicory contains pictures of Belgian endive with root, and yet both seem very different from the few apparent images... So... What's going on here? Can someone please explain in simple words?... [PLEASE sign your posts, Ben-Natan]

Yes it's variable, but the witloof form is produced by forcing the chicons. To do this, you put a chimney pot over them so they grow white (as they're not exposed to sunlight they don't produce chlorophyll), crisp and compact. Unforced chicory grows into the form in the left hand picture. The process is known as etiolation I believe. --TammyMoet (talk) 10:56, 19 February 2015 (UTC)Reply
Ah, kind of like making goose-liver pâté. There is a term for organisms that can take on radically different morphologies depending on the conditions in which they develop, but I can't think of the term off the top of my head. μηδείς (talk) 17:39, 19 February 2015 (UTC)Reply

Nice!

I had the feeling that they use such stencils... Anyways, one more reason I asked the question was that I read somewhere (in Hebrew I think) that the Witloof root isn't edible, unlike "Chicory" root, but either I remember false information or what I read was just wrong. Ben-Natan (talk) 13:24, 19 February 2015 (UTC)Reply

Well as witloof = chicory I don't know where you got that from. Chicory root, if ground up, makes a passable coffee substitute. I wouldn't like to use it as a potato substitute though.--TammyMoet (talk) 11:01, 20 February 2015 (UTC)Reply

How to collect water vapor in the desert?

Deserts are known to be hot and dry, but in the evening the air seems to become cool and humid, with some amount of water vapor in the atmosphere. Can there be any way of trapping this water vapor in huge quantities which would be beneficial to people living or traveling in the deserts? Momoh G. Musa (talk) 06:22, 19 February 2015 (UTC)Reply

There's only a little water vapor in the desert, but the temperature does drop dramatically at night, potentially producing dew or frost. A large plastic sheet in the form of a funnel with a bottle at the bottom can therefore collect some water. However, it's not "huge quantities". You'd need a very large surface area of plastic sheeting just to provide enough water for one person. Also note that this collection method may bring dust and sand into the bottle, as well, so you may need to filter and/or sterilize the water. And if there's a sandstorm, the entire apparatus will be blown away or destroyed. StuRat (talk) 06:51, 19 February 2015 (UTC)Reply
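As a rough illustration of why the yield is small (both figures below are assumptions made only for the sake of the arithmetic: an optimistic passive dew yield and a minimal drinking-water requirement; real desert yields will usually be lower):

# Assumed figures, for illustration only.
dew_yield_l_per_m2_night = 0.3   # optimistic passive-collector yield; deserts often give far less
water_need_l_per_day = 3.0       # rough minimum drinking water for one person

sheet_area_m2 = water_need_l_per_day / dew_yield_l_per_m2_night
print(round(sheet_area_m2), "m^2 of collecting surface per person, under generous assumptions")
# i.e. ten square metres or so at best, and realistically several times that.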
It's called a fog fence.--Shantavira|feed me 09:14, 19 February 2015 (UTC)Reply
See also solar still and atmospheric water generator. Tevildo (talk) 09:16, 19 February 2015 (UTC)Reply
  • That method is actually used along the Pacific coast of South America, where there is virtually no rainfall but dense fogs often form. One way of turning the fog into water is a fog fence, as Shantavira mentioned, but needle-leaved trees are also effective. If you stand under a tall pine tree in a dense fog it can feel like it is raining. Our article on fog collection discusses some of this, although it is written in a rather unencyclopedic way. Looie496 (talk) 17:48, 19 February 2015 (UTC)Reply

Is it possible satellite (sputnik) tracking for general radio and TV broadcasting?

Did it been matter for satellite (sputnik) tracking for general radio or TV broadcast on high-frequency or low-frequency waves range is being the radio or TV broadcasting?--83.237.216.141 (talk) 11:21, 19 February 2015 (UTC)Reply

Hi Alex. Are you asking whether Radio frequency broadcasting interferes with satellite tracking? (I expect it does, but only to a very limited extent.) Or are you asking whether broadcasts from satellites are used for radio and TV? (the answer of course is yes, see Satellite television). Or are you asking about the broadcasting of Satellite watching? (probably not). Or are you asking about using Global Positioning System satellites for tracking or a Vehicle tracking system or other GPS tracking unit? Or something else? Dbfirs 15:51, 19 February 2015 (UTC)Reply
I was tempted to respond, but I was waiting for a translation into English. ←Baseball Bugs What's up, Doc? carrots16:30, 19 February 2015 (UTC)Reply
I tried translating into Russian and back into English, but Google wasn't helpful: "Is not this was the case for the satellite ( Sputnik ) track the total radio or television broadcasting in high-frequency or low-frequency waves vary currently vremyaradio or television broadcasting ?". Dbfirs 17:54, 19 February 2015 (UTC)Reply
For further info on trying to translate from English nonsense to Russian sense, see GIGO. ←Baseball Bugs What's up, Doc? carrots19:01, 19 February 2015 (UTC)Reply
My interpretation was "Did Sputnik broadcast it's signal at a frequency which could be picked up on radios or TV ?", but with Alex, anyone's guess is as good as mine. (The answer, BTW, is yes, it could be picked up on radios as it flew over.) StuRat (talk) 18:04, 19 February 2015 (UTC)Reply
Sputnik broadcast a signal on 20 and 40 MHz. 20 MHz will often be found on a shortwave receiver dial. So it could be received. 40 MHz is at the low end of VHF, and not on frequencies used for radio or TV broadcasting around here. Graeme Bartlett (talk) 22:12, 19 February 2015 (UTC)Reply
Thankful! In general question, did it been matter the radio frequency waves range for solving technical problems?--83.237.209.24 (talk) 12:28, 20 February 2015 (UTC)Reply

To those with knowledge in Biochemistry, Endocrinology, or Internal Medicine

Please see Talk:Hyperinsulinemia. Ben-Natan (talk) 13:26, 19 February 2015 (UTC)Reply

My only specialty here is being a type II diabetic and having a BA in biology, although I focused on botany. μηδείς (talk) 22:03, 19 February 2015 (UTC)Reply

Boltzmann brain

What exactly is this article talking about? Is this some sort of thought experiment in the same vein as Einstein's relativistic train or an actual proposed possibility? This sounds like what happens when physicists try their hands at philosophy. — Melab±1 15:09, 19 February 2015 (UTC)Reply

It's a thought experiment, but unlike e.g. Schrodinger's cat, Boltzmann Brains might well be out there. We just can't know. Same goes for Russell's Teapot or the Invisible Pink Unicorn. The Boltzmann Brain can also be considered as a paradox, though there are different interpretations and potential resolutions. If you like this sort of thing, you may like to read about Roko's basilisk ([25]). But now that you know about it, you might suffer in the future if you do not support AI research ;) SemanticMantis (talk) 15:42, 19 February 2015 (UTC)Reply
In an infinite worlds universe, Boltzmann brains should exist in some of them (in fact, in an infinite number). StuRat (talk) 16:34, 19 February 2015 (UTC)Reply
That's not true, Stuart. An infinite set of infinite sets does not necessarily contain any one specific object. Let's say the Boltzmann Brain configuration could be described as the number π8,546,952,522,455,111,230,709,123,576,420,007. You could easily have an infinite number of infinite sets that do not include this number. An infinite number of Chimps at an infinite number of typewriters over an infinite time will never get you a full play by Shakespeare, since they will type an infinite number of things like complaints to OSHA about their working conditions instead. There's also the fact that the BB idea presumes consciousness is a simulation, but a simulation of a brain will be no more conscious than a simulation of a hurricane will blow your house down. The whole project is rife with (ironically) epistemological rationalism, reductive materialism, question begging, stolen concepts and category mistakes.— Preceding unsigned comment added by Medeis (talkcontribs)
That's kinda true - but in an infinite set of infinite sets of objects-created-by-quantum-fluctuations, the probability of any specific object not being there approaches zero. So, sure there is a chance that there are none - but it's an infinitesimal chance that we may safely neglect - and in fact, there is only an infinitesimal chance that there aren't an infinite number of boltzmann brains (and also boltzmann elephants, boltzmann plays by Shakespeare, etc). It would be different if there were laws of physics preventing such a thing - but there aren't - it's just a matter of chance. As for the chimps - even in an infinite universe, the chances of OSHA actually *doing* something is still zero, so that play will still appear an infinite number of times. SteveBaker (talk) 19:37, 19 February 2015 (UTC)Reply
No, because approaching zero (as well as infinity) is undefined in actual existence. I can have an infinite set of texts that even vaguely resemble parts of what Shakespeare wrote, and still have for each of them an infinitely infinite set of texts which don't. Infinity in no way guarantees the actual existence of any specific concrete/token/text/member. Good OSHA joke, however. The one thing they do do is issue fines. μηδείς (talk) 20:10, 19 February 2015 (UTC)Reply
Suppose I roll a million dice, and don't get a six - unlikely, but it could happen. Suppose I roll a googolplex of dice and don't get a six - really, REALLY unlikely but still possible. Sure, it's possible to never get a six (or a Shakespeare play) with an infinite number of die rolls - but the probability of never getting one is infinitesimal. Specifically, the odds are 1/infinity - which isn't zero, but it's definitely close enough that we can ignore the chance that you won't get a 6. Since we're hypothesizing that boltzmann brains come about through random quantum fluctuations, the odds of NOT getting one with an infinite number of rolls of the quantum dice is infinitesimal too. You're right, there isn't an absolute guarantee that you'll get a six on that infinite pile of dice - but the probability that you won't is UTTERLY negligible, no matter how you slice it. So, the odds that there isn't a boltzmann brain in an infinite universe and Medeis is right is 1/infinity - and the odds that there is at least one and Steve is right is 1-(1/infinity). I imagine that everyone knows who to bet on here! SteveBaker (talk) 20:42, 19 February 2015 (UTC)Reply
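To put a number on the dice analogy: the probability of never rolling a six in n independent rolls of a fair die is (5/6)^n, which collapses toward zero very quickly.

# Probability of never rolling a six in n fair-die rolls: (5/6)**n
for n in (10, 100, 1_000, 1_000_000):
    print(n, (5 / 6) ** n)
# n = 10 gives about 0.16, n = 100 about 1e-8, n = 1000 about 7e-80, and by
# n = 1,000,000 the value underflows ordinary floating point to 0.0; in the
# limit of infinitely many rolls the probability is exactly zero ("almost never").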
You are assuming a defined universe of six states, and asking what the chances are, in 1,000,000 independent iterations of that universe, of not rolling a six. That's a very small, well-defined problem for which an easy calculation is quite available. But we don't even have a definition of what a brain or consciousness is in the brain scenario (BS). First, only a mathematician would think that consciousness is a set of numbers, or a physicist that it's an arrangement of particles.
A brain, so far as we know by facts, is an extremely complex organ, in a body, standing in a certain relation to physical reality. A brain in a vat is a specimen in a morgue or a museum. Consciousness is a property of a living embodied brain in relation to its environment over time, after a long period of ontological development. Your consciousness now of what I am writing was in part caused by your being taught to read around age 6, perhaps decades ago.
To call consciousness a mathematically compressed snapshot of some current brain state is absurd in the extreme, and totally divorced from anything known to neurologists and physicians. The BS "experiment" assumes a philosopher knows the nature of brains and consciousness, when he doesn't. The BS "experiment" assumes a very limited digital simulation at one instant, rather than real particles in real time. The BS is simply not worth talking about if one is interested in empirical reality, let alone mathematics. μηδείς (talk) 22:35, 19 February 2015 (UTC)Reply
If we flip infinite coins, we can say that one of them comes up heads, almost surely. Likewise, flipping infinite coins and getting no heads happens almost never. This is the standard language of mathematical probability, axiomatically constructed from measure theory, and we're essentially saying that the bits of the sample space that correspond to the event of no heads has measure zero. Unfortunately, we can't really use axiomatic probability for this thought experiment, that's why physicist-philosophers like to argue about it. SemanticMantis (talk) 21:24, 19 February 2015 (UTC)Reply
Hmmm, this is confusing. On one hand it's easier to say that a fluctuation could more readily produce a lone disembodied brain than a whole self-consistent universe. But on the other, well... suppose you have a superposition of all possible universes that could have formed, and you haven't learned anything about it. Now you find out one fact, that there's a brain in it, with the "state-vector collapsing" to reflect that fact. Well, shouldn't that brain be most likely to have the most likely history by which it could have arisen, i.e. via a process of evolution as part of a living organism, rather than a fabulously unlikely random coalescence of particles? So I'd like to see some more about how that statement about probability came to be made. Wnt (talk) 19:20, 19 February 2015 (UTC)Reply
The thing is, if we exist merely as thoughts in a boltzmann brain, then the universe is basically just a minor variation on the simulation hypothesis - which basically says that our universe is just a simulation running in a computer somewhere in the real world. Similar arguments apply - if we were able to create a simulation of a universe (something we're starting to do fairly routinely in video games) - then it is highly likely that we'd make more than one of them...so there would be more simulated universes than real ones...since simulated universes can in turn evolve to the point where they make their own simulated simulated-universes (ad nauseam), the probability that our universe is the real one is likely to be very small. I actually think that the evidence for this is quite compelling. Things like quantum randomness, the arbitrary speed-of-light limitation and the metric expansion of space are all EXACTLY the kinds of thing you'd build into a simulation in order to make it less computationally expensive. Then there is THIS interesting new work suggesting that the energy density of the universe may depend on the amount of data required to describe it...which would fit very well with us being in a virtual universe where only the amount of data it takes to describe it is what matters. People complain that the "real" computer that runs all of this would have to be quite utterly insanely large and fast - but the laws of physics in the "real" universe might make that easily possible (eg, if the speed of light were infinite there). So, whether we're an imaginary universe being imagined by a boltzmann brain - or a video game running on a laptop in some spotty teenage alien's bedroom on the third moon of Zirktron-5 in a universe which (in turn) is being run by a bored office worker over lunchtime in his accounting job on the perpetual iridium slopes of the high mountains of Phtttbbbtphttttf-19 makes very little difference. SteveBaker (talk) 20:20, 19 February 2015 (UTC)Reply
I don't really understand where you're getting your probabilities from either. Suppose all we know about the cosmos is that there's a brain which is getting the sensations of typing on a keyboard, looking out the window, seeing crows fly past and so forth. Now a priori, is there some reason you can express why it's more likely that these things are produced by some randomly created elaborate computer simulation mechanism to make the impulses corresponding to these things out of some sort of virtual reality, than for there simply to be a planet with buildings and windows and crows and a self-consistent, material world history behind them? I feel like you're making the assumption that "you can't move around anything big" to get the universe to match the brain, but that seems unreasonable. The moment you open the universe-in-a-box you're making it so the stars are here, not there (if there are really any stars that is); you're completely affecting the entire distribution of everything throughout it, I'd think. Wnt (talk) 21:10, 19 February 2015 (UTC)Reply

Sean Carroll's three-sentence summary of the argument is:
  • Certain cosmological scenarios predict that it’s more likely for a brain like yours or mine to arise as a random fluctuation, rather than through orderly evolution.
  • Our brains aren’t like that.
  • Therefore, those scenarios are not correct.
The reason some cosmologies (specifically eternal inflation) make that prediction is that they have regions of vacuum that are much larger than the regions where evolution could take place (such as the one we're in). -- BenRG (talk) 21:43, 19 February 2015 (UTC)Reply
The second bullet point is not quite right, actually. We don't know we aren't like that so much as we assume it in everything we do, including science. E.g. the existence of dinosaur fossils is consistent with dinosaurs having existed but also with that arrangement of atoms having arisen by chance with no preceding living animal. If a cosmology says that the latter possibility is a priori more likely than the former, then you may as well reject that cosmology, because if it's wrong then you're right to reject it, and if it's right then science is meaningless and it doesn't matter what you reject. -- BenRG (talk) 22:55, 19 February 2015 (UTC)Reply

Ignorance is not a claim to knowledge; no claim follows a priori from "We don't know." μηδείς (talk) 00:07, 20 February 2015 (UTC)Reply

Human teeth and spider silk

I'm obviously missing something here. This University of Portsmouth graph cited by The Independent (slightly below the headline) shows that spider silk is about 8 times stronger than human teeth. However, since tensile strength also means stretching stress, a thread of spider silk can be fairly easily torn apart, unlike human teeth. What's going on actually? Brandmeistertalk 16:26, 19 February 2015 (UTC)Reply

You are comparing different amounts. They are saying, I believe, that if you had a spider web braided to the thickness of a tooth, it would take 8 times as much force to break it as a tooth. StuRat (talk) 16:30, 19 February 2015 (UTC)Reply
From tensile strength
-- emphasis mine. Also note that the graph is in pascals, which have units of newtons per square meter, also indicating that the tensile strength takes cross-sectional area into account. But, in a sense, your point is still correct. If we want to apply 4 GPa to a human tooth, that will be much more force (in newtons), than applying 4 GPa to a strand of spider silk. (Also, let's forget about braiding. Braids, twists, knots, etc. all can radically change the tensile strength of cordage.) It's a shame we don't have an article Orders_of_magnitude_(tensile strength), perhaps someone would like to use the data in the linked article to start a stub. SemanticMantis (talk) 19:33, 19 February 2015 (UTC)Reply
I think this comparison to teeth is a bit of pop science, and this is why. Tooth cementum is reported to have widely different mineral contents (I just found it listed as 65% mineral content by weight in the Lindhe perio textbook, whereas the AAP (American Academy of Periodontology) in-service exam lists it as 40-50%. The problem here is that bone is reported to have a mineral content of 60% by weight and these two reported values for cementum are on opposite sides of this value. Be that as it may, in dentistry, it's a fairly commonly discussed phenomenon that cementum is harder than bone, dentin (the internal core component of teeth in humans and most other animals) is harder than that (usually listed at somewhere near 70% mineral content) and enamel, which covers the crowns of teeth in almost all species, is even harder than that at around 95% mineral content by weight. Whatever the exact data is, it's a thoroughly established scientific fact that teeth are the hardest substance in the human body under normal physiologic conditions (see this for something even harder, but this hasn't been demonstrated in humans and would not be physiologic for humans even if it were, but I digress). The point being that teeth are known to be very hard and so are said to be very strong, but teeth are hardly uniform in component thickness and there is much more enamel on, say, a molar than there is on a mandibular incisor. And the discussion above seems to focus on tensile strength and not at all on compressive strength. I'm no engineer or materials scientist, but from what we learn in dental school, the two are very different and, again, in general, the compressive strength yield is either always or usually much greater than the tensile strength yield. But who's trying to crush spider silk? Maybe if you try to make a bullet proof vest out of it, but as I consider this question, it seems that we try to pull at spider silk but crush teeth, and so we're very likely referring to tension here and compression there, and that's not really a fair comparison. But for pop science, people like to have quick quote facts to spit out, like saying "did you know that spider silk is X times stronger than steel?" I mean, fine -- that's a cool talking point, but I think it largely misses the real science of it, and your question seemed to touch upon the real science of it, so I thought I'd throw this into the mix. DRosenbach (Talk | Contribs) 20:57, 19 February 2015 (UTC)Reply
So if I got it correctly, the spider silk outperforms the teeth by strain magnitude? But if the UTS value doesn't depend on the size of the test specimen, it turns out that comparing an entire tooth and a strand of spider silk is fair? Brandmeistertalk 21:18, 19 February 2015 (UTC)Reply
The silk would outperform the tooth in UTS, which is in pascals, regardless of size. However, the force necessary to pull apart ordinary spider silk is less than the force necessary to pull apart the tooth. I think you're just confused about the units. Delivering 1 GPa to a 0.1 mm^2 cross section of silk strand requires less total force than delivering 1 GPa to a ~ 1cm^2 cross section of tooth. It would probably be a good exercise in dimensional analysis to come up with some answers for breaking force in newtons for both a strand of silk and a tooth, based on the data published and some estimated cross sections. If you want to compare UTS and get a clear winner in terms of force, then you have to consider samples of the same cross section. DR's points about inhomogeneity and variability of teeth still apply, we can probably safely assume that all of these UTSd data are mean values calculated over a range of samples. SemanticMantis (talk) 21:30, 19 February 2015 (UTC)Reply
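Following the dimensional-analysis suggestion above, a quick sketch using the same illustrative numbers (1 GPa as a round stress value; the cross-sections are the rough figures mentioned in this thread, not measurements):

# Force = stress x cross-sectional area.
stress_pa = 1e9            # 1 GPa, round illustrative ultimate tensile stress

silk_area_m2 = 0.1e-6      # 0.1 mm^2 silk strand (generous; real strands are far thinner)
tooth_area_m2 = 1e-4       # ~1 cm^2 tooth cross-section

print("silk strand:", stress_pa * silk_area_m2, "N")    # 100 N
print("tooth:      ", stress_pa * tooth_area_m2, "N")   # 100,000 N
# Same stress, very different total force, because the areas differ by a factor of 1000.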
And apparently the Limpet has them both beat. ←Baseball Bugs What's up, Doc? carrots21:26, 19 February 2015 (UTC)Reply

The trash or the recycle bin?

Plastics, aluminum, and paper go in the recycle bin - sometimes labelled "All-In-One". Food wastes - which are organic materials - usually go in the Trash bin. Now, if you find a deceased cat on the ground, and it is wearing a collar, is it safe to remove the collar and dispose of it in the plastics/aluminum/paper bin and the body in the Trash bin? What happens if you find a deceased human being wrapped in a blanket? Where should that go? Is the plastics/aluminum/paper bin the same thing as the compost bin for dead leaves and other organic matter on your front lawn? 140.254.136.182 (talk) 20:51, 19 February 2015 (UTC)Reply

You should leave the collar on a deceased cat until it gets tagged at the morgue so we know what its name is until it receives an assignment number. After that, the collar can be recycled by using it on another cat -- it needn't be melted down to form a spoon or a wire casement. DRosenbach (Talk | Contribs) 20:59, 19 February 2015 (UTC)Reply
... and, of course, deceased humans that you regularly find wrapped in blankets go to the police first, not in the compost bin with the dead leaves, nor in the separate recycling containers for plastic, aluminium and paper. Dbfirs 22:53, 19 February 2015 (UTC)Reply

A query about cell phones and signals

This often happens and I am not sure why. If the question has been asked and answered, please direct me there. So here goes: I am sitting in an area (in this case a basement or lower level office). I get a phone call or text and reply. Two minutes later, when I try to call, text or check my email via my phone, I get no signal. I haven't moved more than a foot or two but suddenly can't get a signal. Then, later, the signal suddenly comes back. Again, I haven't moved! Why do I lose, then regain, then lose again the signal (especially if I have not moved to another location)? 216.223.72.182 (talk) 20:55, 19 February 2015 (UTC)Reply

The wavelength of cell phone signals is only about 20 to 30 cm. The interference and changes in signal strength can easily change over 10 cm (a third of a foot). But not only that: other things moving nearby, like cars, can change reflections of the signal into the basement. Also, you may have been switched to a different frequency by the cell tower, which will have a different pattern of signal strengths. Graeme Bartlett (talk) 22:01, 19 February 2015 (UTC)Reply
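A quick check of that wavelength figure (assuming typical 2015-era cellular bands of roughly 850-1900 MHz):

# wavelength = c / f
c = 3.0e8                         # speed of light, m/s
for f_mhz in (850, 900, 1800, 1900):
    wavelength_cm = c / (f_mhz * 1e6) * 100
    print(f_mhz, "MHz ->", round(wavelength_cm), "cm")
# 850 MHz -> ~35 cm and 1900 MHz -> ~16 cm, on the order of the 20-30 cm quoted
# above, so moving even a few centimetres can shift a phone between constructive
# and destructive interference in a reflective environment like a basement.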

Thanks. I just realized, however, that I should have put this on the computer and tech desk! Sorry about that! 50.101.125.154 (talk) 04:22, 20 February 2015 (UTC)Reply

5 ways organize information

Richard Saul Wurman means that there are five, and only five, ways to organize information: category, time, relation, location, alphabet, and continuum (magnitude). Is there any other form to organize information besides these five?--Senteni (talk) 22:11, 19 February 2015 (UTC)Reply

From my book titled Filing Systems by Edward A Cope: Organisation could be by who is dealing with it; in tray, out tray and a to-sign tray; organisation could be arbitrary or random, with a register or index to help you find it. Your category and relation are pretty broad and could be considered to cover all possible ways of organising things anyway. This book calls your magnitude - "numerical", and this may include date or an amount of money. Alphabetical could be by subject or correspondent. It also talks about colour coding. One subset of location that may be important is "country". Information may have to be organised several ways at the one time, so it may need multiple indexes, or tags. You may also be interested in metadata. Graeme Bartlett (talk) 22:35, 19 February 2015 (UTC)Reply
Um, isn't "category, time, relation, location, alphabet, and continuum (magnitude)" six ways? --70.49.169.244 (talk) 00:18, 20 February 2015 (UTC)Reply
You are right, relation is not part of the list. Senteni (talk) 00:44, 20 February 2015 (UTC)Reply

Are Nobel prizes in science a Golden standard for quality?

Are Nobel committees more thorough than any other means? Senteni (talk) 00:41, 20 February 2015 (UTC)Reply

The committee tends to wait long enough to make absolutely sure the award goes to work from at least twenty years ago - and if the person dies - tough luck. The accomplishments honoured are major, but not recent as a rule. Collect (talk) 00:48, 20 February 2015 (UTC)Reply
That does not seem to be a real pattern. Plenty of people have received Nobel prizes for quite recent discoveries. Just looking at recent recipients of the Nobel Prize in Physics, there are e.g. George Smoot, who won the prize in 2006 for his <s>1998</s> 1992 discovery (14 years), Andre Geim in 2010 (after just 6 years), Adam Riess in 2011 (13 years), Theodor W. Hänsch in 2005 for work he did at the end of the 90s (~7 years) - Lindert (talk) 11:59, 20 February 2015 (UTC)Reply
From 1998 to 2006 there are 8 years not 14. I wonder how you did the math. LOL. Noopolo (talk) 12:32, 20 February 2015 (UTC)Reply
Thanks for spotting that, was a typo, should have been 1992. - Lindert (talk) 12:49, 20 February 2015 (UTC)Reply
I'm not sure what "Golden standard for quality" is intended to mean in this context, nore "more thorough". The fact that, at most, only three scientists can share a particular award in a given year means that there will always be important work that isn't acknowledged with a Prize. One presumes that the Nobel Committee will engage in a certain amount of behind-the-scenes horse-trading and compromise, and will even make mistakes from time to time. It's usually fair to say that the science Prizes are awarded for highly-significant work that is certainly among the most important in its field; whether or not it represents the absolute 'best' is probably unanswerable and will depend on one's choice of criteria.
To take a popular (and safely historically-distant) example, Albert Einstein won the 1921 Prize in Physics for, nominally, his 1905 work on the photoelectric effect. While that was definitely good, important physics, it overlooks Einstein's other landmark 1905 work—including the Special Theory of Relativity. The list at Nobel Prize controversies – which is by no means exhaustive – gives a taste of some of the missed opportunities and controversial calls.
As well, a Nobel Prize (in the sciences) generally recognizes a particular piece of work or scientific contribution; it doesn't necessarily represent an endorsement of the scientist's overall body of work, and certainly shouldn't be read as an imprimatur of future infallibility (or even competence). While I've seen and interacted with a number of Nobel laureates who have been excellent scientists and delightful people, there are a number of notorious exceptions. Kary Mullis (credited with the discovery of PCR) has associated himself with the AIDS denialist movement. Luc Montagnier (co-discoverer of HIV) got into homeopathy and the idea that viruses leave electromagnetic imprints in highly diluted water. Brian Josephson (prediction of quantum tunneling "Josephson effect") is into psychics, ESP, and cold fusion. TenOfAllTrades(talk) 13:24, 20 February 2015 (UTC)Reply
Not to forget about James Watson, who is shunned by academia. Noopolo (talk) 14:09, 20 February 2015 (UTC)Reply

Per act transmission of HIV/AIDS

what is the per act transmission rate of HIV for a man performing cunnilingus, both in general and if the man/woman whom he's giving it to has HIV? — Preceding unsigned comment added by Bubbly water31 (talkcontribs) 23:17, 19 February 2015 (UTC)Reply

Low per the CDC. [26] --Modocc (talk) 01:02, 20 February 2015 (UTC)Reply
Just a quick note: cunnilingus can only be performed on female genitalia. Fellatio is the corresponding act. Dismas|(talk) 11:18, 20 February 2015 (UTC)Reply

February 20

I put a six-pack of soda cans in a river, which was not frozen

And the liquid in the soda cans froze. How can water, which is above freezing temperature, freeze the liquid in the soda cans? Noopolo (talk) 12:56, 20 February 2015 (UTC)Reply

Are you sure the water was above freezing, not just "still liquid"? The temperature can be as low as the freezing point or even lower without freezing visibly occurring. And water being in motion rather than still can also prevent the physical freezing process. DMacks (talk) 14:03, 20 February 2015 (UTC)Reply