Wikipedia:Village pump (technical): Difference between revisions

Revision as of 19:10, 4 February 2011

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bugs and feature requests should be made at Bugzilla.

Newcomers to the technical village pump are encouraged to read these guidelines prior to posting here. Questions about MediaWiki in general should be posted at the MediaWiki support desk.


Why are PNGs limited to 12.5 million pixels?

As PNG is a web-safe and lossless format, we should give it full support. 12.5 megapixels is now the resolution of an affordable amateur camera. Is there any way to improve the thumbnail tool so it can support larger PNG files? - ʄɭoʏɗiaɲ τ ¢ 03:55, 20 January 2011 (UTC)[reply]

Basically this is just development work that needs to be done. The existing thumbnail tool reads the entire file into memory at once and has other scalability limitations. What's needed is a tool that works incrementally, efficiently, and with minimal memory overhead at any one time. This proves rather tricky because the pixels of a PNG are encoded in scan order, not using a quadtree. The best way to help is to build such a tool, integrate it with the latest MediaWiki, and submit a patch.
One interesting approach I just thought of is to create an "intermediate resolution" version that is cached - the intermediate resolution version would be slow to generate, but it would be under the current pixel limit and could be used to create any thumbnails smaller than itself. Dcoetzee 06:49, 20 January 2011 (UTC)[reply]
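A minimal sketch of that intermediate-resolution idea, in Python with the Pillow imaging library purely for illustration; the cache path, the 2048-pixel bound, and the function names are assumptions here, not anything MediaWiki actually does:

from PIL import Image  # Pillow; illustrative only, not MediaWiki's real pipeline

INTERMEDIATE = 2048  # longest side of the cached copy; an arbitrary choice

def get_thumb(src, cache, dest, width):
    # Render the slow intermediate-resolution copy once, then cut every
    # smaller thumbnail from it instead of re-decoding the full original.
    try:
        inter = Image.open(cache)
    except FileNotFoundError:
        inter = Image.open(src)                        # the one expensive full decode
        inter.thumbnail((INTERMEDIATE, INTERMEDIATE))  # downscale in place
        inter.save(cache)
    thumb = inter.copy()
    thumb.thumbnail((width, width))                    # cheap: works from the small copy
    thumb.save(dest)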
Heh, if I even knew where to begin, I would. I'm more of the one that takes the pictures; I'll leave coding to the experts. - ʄɭoʏɗiaɲ τ ¢ 18:50, 20 January 2011 (UTC)[reply]
Alas you rather highlight the issue - it's no-one's idea of fun :) So, in a volunteer-driven project, it's just not going to get done. Maybe one day the WMF could get it programmed (since they have that wonderful motivator known as "money"), I suppose, but I'm not holding my breath. Ah well. - Jarry1250 [Who? Discuss.] 19:57, 20 January 2011 (UTC)[reply]
Some people find coding fun. I think the fact is that much of this stuff is hidden under the hood. Where is the current thumbnail generator code located, for someone to pick apart and modify? - ʄɭoʏɗiaɲ τ ¢ 20:22, 20 January 2011 (UTC)[reply]
Thumbnailing is currently done by calling the ImageMagick utility, which is what generates the high memory usage (roughly 4 bytes per pixel). One would need to either modify ImageMagick to provide a more conservative memory mode (and get those modifications accepted by the ImageMagick developer community) or one would need to develop an alternative means for MediaWiki to generate thumbnails from large images, and get our developer community to use that alternative. Dragons flight (talk) 20:54, 20 January 2011 (UTC)[reply]
ImageMagick has support for tera-pixel image sizes; what's needed is probably just to invoke the right ImageMagick options from MediaWiki. Command line example from here: convert -define registry:temporary-path=/data/tmp -limit memory 16mb logo: -resize 250000x250000 logo.miff Nicolas1981 (talk) 05:48, 24 January 2011 (UTC)[reply]
Hhmmm, apparently "limit memory" was added in late 2007. I hadn't seen that before. You are right that there may now be a solution. Dragons flight (talk) 06:32, 24 January 2011 (UTC)[reply]
PS. Since it appears to substitute disk caching for memory, it might still be too burdensome under some circumstances, but it may be possible to adjust the limits. Dragons flight (talk) 06:40, 24 January 2011 (UTC)[reply]
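To illustrate, a wrapper might pass those limits when shelling out to ImageMagick along these lines; this Python sketch is built only from the command quoted above, and the file names and the 16mb cap are placeholders rather than MediaWiki's real configuration:

import subprocess

def make_thumbnail(src, dest, width, height, tmp_dir="/data/tmp"):
    # Cap ImageMagick's in-RAM pixel cache and let it spill to disk,
    # mirroring the -define/-limit options in the command above.
    subprocess.run([
        "convert",
        "-define", "registry:temporary-path=" + tmp_dir,
        "-limit", "memory", "16mb",
        src,
        "-resize", "%dx%d" % (width, height),
        dest,
    ], check=True)

make_thumbnail("large.png", "thumb.png", 800, 600)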
Even a small boost to 15 million would be enough for most standard cameras. That's enough to support 4230 x 3420. - ʄɭoʏɗiaɲ τ ¢ 15:50, 26 January 2011 (UTC)[reply]
Is there a better or more proper venue I could take this to? It's a small change to make a lot more detailed photos compatible. - ʄɭoʏɗiaɲ τ ¢ 17:28, 29 January 2011 (UTC)[reply]
The suggested fix has been included at bugzilla:9497. Maybe you could vote for that bug too. So far it has received only three votes, which isn't exactly much for such a long-standing and fairly serious limitation IMO. --Morn (talk) 13:04, 30 January 2011 (UTC)[reply]
There is no sign of a viable fix at that bug. As pointed out, pngds is subject to random crashes. OrangeDog (τε) 11:14, 31 January 2011 (UTC)[reply]

I'm just looking to get the current limit increased, not to recode or use new software. I'm guessing I'd do that at the Meta village pump? - ʄɭoʏɗiaɲ τ ¢ 15:03, 31 January 2011 (UTC)[reply]

No, at bugzilla. However, I imagine that the devs won't increase it by a large amount due to performance, and won't increase it by a small amount as there is little benefit. OrangeDog (τε) 15:40, 1 February 2011 (UTC)[reply]

I'm not sure if increasing the size is a good idea. There is a reason why the limit exists. ManishEarthTalkStalk 16:00, 1 February 2011 (UTC)[reply]

I know it's there for a reason. I only wish for a 2.5 million pixel increase, which at worst is a 20% increase in load. Two years ago, average consumer-level cameras were 8-12 megapixels. Now they are 12-14 megapixels. Increasing the limit to 15 megapixels would allow 4230 x 3420, which is 14.5 megapixels, your high-end consumer-level camera. Download sizes are irrelevant, because the image file is the same size regardless of whether the thumbnail program can process it. The difference is that an unprocessed thumbnail requires the user to view the full size (and several megabyte) picture, whereas a thumbnail is a smaller, several-kilobyte equivalent of the full image. I always upload the largest image size possible, regardless of whether it's going to show up in the articles as a grey square; that's wiki's issue, and a poor excuse for encouraging lower-quality content. - ʄɭoʏɗiaɲ τ ¢ 15:47, 2 February 2011 (UTC)[reply]
This should probably be a separate proposal, but it might be a good idea to allow pre-rendered reduced-size versions of images (similar to the text under any SVG image). It'd be neat if a PNG image said "This image rendered as PNG in other sizes: 25% 50% 75% 200% 400%". We could even have a bot running that optimizes lossless image compression. But again, these are separate proposals. I don't want to detract from this thread, merely point out that problems with large file size can be overcome. Do we really want the photographers with the most high-end equipment reducing their image size? ▫ JohnnyMrNinja 12:00, 3 February 2011 (UTC)[reply]
This is already done. Thumbnails are cached, and Commons has an option to download thumbnail images at a variety of thumbnail sizes. In any case there is a simple workaround for this bug for now: upload both a larger PNG version and a smaller one, and add links between the two, ideally using standardized templates for this purpose. This is how we have long dealt with the bug that the software cannot render JPEG thumbnails of PNG files (see commons:Template:JPEG version of PNG, commons:Template:PNG with JPEG version). Dcoetzee 17:05, 3 February 2011 (UTC)[reply]
Are there any consumer-level cameras that save images as anything but JPG? Mr.Z-man 23:29, 3 February 2011 (UTC)[reply]
I don't think many save to PNG, but many do offer a raw image format which is lossless and are often converted into TIFFs and PNGs. It makes sense, as long as you have the capacity. ▫ JohnnyMrNinja 01:21, 4 February 2011 (UTC)[reply]

Signature Length

Why can't the signature length box in "My preferences" be longer, so users can do more markup and such? CTJF83 02:45, 29 January 2011 (UTC)[reply]

Excessive signature length is usually seen as disruptive. More information on the signature guidelines regarding length is available at Wikipedia:SIG#Length. Nakon 02:52, 29 January 2011 (UTC)[reply]
How can I propose to make the available space slightly longer? I just wanna add and link "chat" on the end of my current one. CTJF83 03:14, 29 January 2011 (UTC)[reply]
You can probably condense your current signature by a bit with some CSS-foo. Nakon 03:21, 29 January 2011 (UTC)[reply]
If you can come up with something to help me out, I'd appreciate it. Please post to User_talk:Ctjf83#Sig. I'll look in 9 hours after work. CTJF83 03:26, 29 January 2011 (UTC)[reply]
I think you can omit quotemarks on word colors and every closing "</font>" when inside a span-tag (<span>), plus use color=gold (shorter than "yellow"), as: CTJF83, now with the shorter wikitext: [[User:Ctjf83|<font color=red>C<font color="#ff6600">T<font color=gold>J</font><font color=green>F<font color="#0000ff">8<font color="#6600cc">3]]. You might need one closing "/font" but not all of them. Each font color changes the prior color, "<font color=blue>" as: "This is blue text". HTML was NOT originally designed to demand a matching end-tag "</font>" for every font tag, but I don't know what the HTML fascists are planning.
The World Wide Web was designed by physicists, not by computer scientists, so it was peculiar that way: it allows "<center>" but not margin "<left>" or "<right>" which the center is between (so clueless). Now that computer experts are involved, it is still backward, and they should allow word "hue" (to avoid spelling bias against "colour"), plus denote non-nested tags perhaps as "<#" to use "<#font hue=red>" where the pound-sign "#" would indicate to omit the closing "/font" tag. Anyway, it seems like we need a WP essay about shortening signature text. -Wikid77 09:41, 29 January 2011 (UTC)
[reply]
Except that markup isn't closing the font tags properly, leaving the text in blue or green until they are closed. The colors in this signature fail per Web Content Accessibility Guidelines. ---— Gadget850 (Ed) talk 10:22, 29 January 2011 (UTC)[reply]
  • All of Wikipedia seems to fail the Web Content Accessibility Guidelines, based on the brightness formula: ((Red value * 299) + (Green value * 587) + (Blue value * 114)) / 1000. The brightness of tan is: 191.352 compared to the brightness of red: 76, a difference of 115, not the required 125, so redlinks would likely be considered invalid brightness in note boxes. The focus needs to be on "almost passing" the guideline, as an acceptable approach. -Wikid77 12:34, 29 January 2011 (UTC)[reply]
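For reference, the quoted formula in Python; the RGB triple used for tan is the common CSS value and is only an assumption, since the exact value behind the 191.352 figure above is not stated:

def brightness(rgb):
    # W3C perceived-brightness formula quoted above
    r, g, b = rgb
    return (r * 299 + g * 587 + b * 114) / 1000

red = (255, 0, 0)      # brightness 76.245, the "76" above
tan = (210, 180, 140)  # CSS tan, assumed here
diff = abs(brightness(tan) - brightness(red))
print(diff, diff >= 125)  # the guideline asks for a difference of at least 125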
We aren't discussing "all of Wikipedia", just this signature. If there are problems in other areas, then those need to be discussed and fixed as needed. ---— Gadget850 (Ed) talk 17:20, 29 January 2011 (UTC)[reply]
<font>...</font> is a deprecated element, use <span>...</span> instead, or <b>...</b> if you want to save some characters. You do need the tags to be properly nested; as Graham says, you're causing no end of problems for users of screen readers etc. otherwise. Wikid, the HTML 3.2 specification, which introduced the <font> tag, clearly indicates that it "Requires start and end tags"; I don't know where you got your HTML-history from, but it's not accurate. The most recent version of HTML, HTML5, has moved to reduce the number of tags which need to be matched, loosening the restriction for various block elements like <p>, but that logic cannot be applied to inline elements.
The signature string is stored in a 255-byte database field, and increasing the size would require a database schema change, which I very much doubt will happen even if you ask for it. Happymelon 11:11, 29 January 2011 (UTC)[reply]
I agree that HTML5 still has major design problems, but most browsers seem to keep working anyway. Hopefully, those screen readers will be fixed to match the real world, and accept changing a font without using the end-tag "</font>" as has been done for years. If the font is blue, and black is needed, just put "<font color=black>" and keep going. I realize the omission of end-tags thwarts the use of nested font styles, but it needs to be expected, and so span tags limit the font scope. The word "e-mail" has been misspelled as "email" in over 90% of webpages, so years ago, Google started matching either spelling as being the same word. It would make sense for all dictionaries to formally define "email" as a valid alternate spelling. If most people set their screen readers to announce font colors, then perhaps let users know that multi-color signatures will be cumbersome to hear. When a committee designs an unworkable language feature, it typically has a real-world patch to become usable, so expect font settings to continue in that manner. -Wikid77 12:34, 29 January 2011 (UTC)[reply]
Can I ask whether you've tried reading this page with the greenscreen gadget enabled? If not, please do so: go to your preferences and check the box labeled "Use a black background with green text on the Monobook skin". Then come back and read your comment above, and perhaps you'll understand what we mean by lack of accessibility. Black text is not part of the MediaWiki style, it is a browser default. Adding <font> is not resetting to the browser default, it is imposing black text, whether or not that's what users want, and regardless of whether that's even accessible to users. Happymelon 13:14, 29 January 2011 (UTC)[reply]
And we aren't here to fix HTML or correct popular spelling, but to write an encyclopedia. Let's fix what we can. ---— Gadget850 (Ed) talk 17:20, 29 January 2011 (UTC)[reply]
So....how do I get the colors, gray background, and a link to my talk page with the word "chat"? CTJF83 12:51, 29 January 2011 (UTC)[reply]
I would try to shorten your signature instead of expanding it. It already is four lines of text which is quite excessive. Garion96 (talk) 17:24, 29 January 2011 (UTC)[reply]

This is an encyclopedia. I'm sorry, but if you can't get a short enough signature yourself, you're probably obsessing over it too much. Let's keep building an encyclopedia! (says the user who spent a good hour trying to design his own sig. :P) /ƒETCHCOMMS/ 05:37, 1 February 2011 (UTC)[reply]

Impending watchlist bankruptcy

I'm headed for watchlist bankruptcy again. Rather than dumping everything, I'd really like to have some sort of filter, e.g., that dumps heavily-watched pages, but keeps things that are on few watchlists. Alternatively, I'd be happy dumping anything that I haven't edited either the article or the talk page for, say, 90 days (especially user pages). Is there anything out there along these lines? WhatamIdoing (talk) 05:07, 29 January 2011 (UTC)[reply]

Drop me an email with the list, I can run it through several filters. ΔT The only constant 05:09, 29 January 2011 (UTC)[reply]
There also is a way to easily check in your watchlist if articles are redirects. That always gets rid of a couple of hundred articles for me. Garion96 (talk) 17:33, 29 January 2011 (UTC)[reply]
How many are 'bankruptcy'? I'm heading towards 7,000. Dougweller (talk) 16:27, 31 January 2011 (UTC)[reply]
Watchlist bankruptcy isn't usually invoked because of a technical restriction, but when someone finds they are having trouble managing their watchlist (see Wikipedia:Don't overload your watchlist!). I believe there are some technical issues when you exceed a certain number - "9800" is mentioned on the previous link and Wikipedia:WATCHLIST#Size limitation - but I was well beyond that number (I think close to 17000) when I declared watchlist bankruptcy last October, so that figure is probably a bit dated. –xenotalk 16:38, 31 January 2011 (UTC)[reply]
I've had over 50k before. ΔT The only constant 16:41, 31 January 2011 (UTC)[reply]
Thanks. I figure that so long as I can see 12 hours worth of changes I'm ok. Dougweller (talk) 16:42, 31 January 2011 (UTC)[reply]
FWIW, the record was set by the now-departed User:Wik, who I believe had over 50,000 articles on his watchlist. (I may be wrong.[1]) He did so because he believed it was important to say, for example, Mongolia was in central Asia,[2] & would revert people endlessly to keep it so. (This is why he is a "now-departed user".) In any case, a short watchlist is a happy watchlist. -- llywrch (talk) 23:44, 2 February 2011 (UTC)[reply]
I gave up trying to manage my watchlist when it got past 2,000. It's one of the few major MediaWiki features I never use. The issue is that there are always some gadget(s) or program(s) that I don't realize has an auto-add-to-watchlist setting, and I end up with a couple thousand pages on it. /ƒETCHCOMMS/ 05:39, 1 February 2011 (UTC)[reply]
It's less than 2000 pages at the moment, but I'm not keeping up. I've done some manual trimming, but it's time-consuming. Garion, the redirects on my list are commonly things that I'm watching for a reason, e.g., to make sure that they stay redirects. Δ, thanks for the offer. I'll probably email the list to you later this month. WhatamIdoing (talk) 20:21, 2 February 2011 (UTC)[reply]
About redirects, the ones I still have I have for a reason. But old page-move redirects or vandalism redirects I can get rid of. See User:Garion96/monobook.css for the script. It just strikes through the redirects but doesn't remove them. Garion96 (talk) 22:38, 2 February 2011 (UTC)[reply]

en dashes and searching text

The MOS (in MOS:ENDASH) suggests en dashes rather than hyphens in certain constructions, including in article titles. The only real problem with that is that it makes it very difficult to search for the term within the article text. Is there any way for en dashes to be counted as hyphens when searching, just as case distinctions are ignored, or is that entirely browser dependent? — kwami (talk) 01:48, 30 January 2011 (UTC)[reply]

It's browser dependent. You'll probably need a browser that supports regular expression searches in order for it to do what you want, or you could just do each search manually. Gary King (talk · scripts) 02:15, 30 January 2011 (UTC)[reply]
I was thinking of general accessibility to our readers. This would be an argument against implementing the MOS. — kwami (talk) 12:18, 1 February 2011 (UTC)[reply]

Performance synergism gives amazing performance

Nearing the end of January, and the performance issues are really reaping major synergistic benefits. As you might know, that old, archaic essay "WP:Don't worry about performance" had become a negative mantra, casting a cloudy chill on improving performance issues. As part of the "Performance Resistance Movement", I have done the exact opposite now: to focus on performance in my spare minutes around Wikipedia. While re-writing some string-handling templates to avoid the 40-level expansion nest limit, I have again discovered:

  • Performance synergy: Improving just a few aspects, of total performance, will create a synergy which produces other major improvements. I wrote {{strlen_quick}} to focus on using only 5 expansion levels (rather than 9-to-14 deep); however, in the process, I discovered ways to make it 2x times (TWICE) the speed of the "fully optimized" {str_len}, as it runs 12x times shorter. An article now can use 37,000(!) instances of {strlen_quick}. An effort to make {Italic_title} 4x times faster (such as with "(film)" suffix) has synergized as 100x(!) faster. The synergism works that way: start trimming a few if-else branches and end with a template 12x shorter, or 100x faster, than before.
  • POV-pushing is overcome by multiple POVs: The cure for POV-pushing seems to be to allow multiple WP:POV forks (not delete them). Example: all algorithms for getting string-length counts had been deleted down to one POV, which used binary search of strings. I resurrected the other deleted variations, and thereby found techniques to transcend the one remaining POV method, with a hybrid POV method, running 2x times faster and 12x shorter. This concept was formalized in the Strategy Wiki, to have an Arab-POV version of article "Palestine" to overcome systemic bias in the so-called NPOV article. Meanwhile, that concept has led to massive improvements of string templates. The whole idea of POV-forks can be seen in the crucial Recovery of Aristotle from the Arab world, which had kept accurate texts of Aristotle, beyond those censored by the Church. As Pliny said, "There is always something new out of Africa."

Performance synergism will "bootstrap" to higher synergism: by improving the performance of string templates, they can be used in lists of string data containing 5,000 examples on 1 page, to analyze ways to further improve those string templates and others. I have discovered simple ways to streamline {Cite_web} and {Cite_book} to allow over 1,000 references within a separate list page, replete with the COinS metadata. We could even have a "references-population" template, which supplies current sources to multiple articles, all sharing from the same large, central {Cite_web} list, because improving performance had made keeping a large central template of current sources possible. The task began as a focus on improving template performance, and then led to the epiphany that POV forks are the solution (not the problem), while leading to a way to maintain current WP:RS sources within numerous articles at one time. Amazing performance synergism. -Wikid77 (talk) 12:58, 30 January 2011 (UTC)[reply]

Let me be the first to say that I don't really understand the connection between the Israel-Palestine conflict, and string processing templates... :D But let me also be the first to congratulate you on these performance enhancements.
You might want to read the recent wikitech-l thread on WP:PERF; I think you'll find it interesting. Happymelon 13:42, 30 January 2011 (UTC)[reply]
I guess the credit should go to 2006-2007 User:Polonium and others, for the alternate string-length algorithms. I had learned in college that performance can often be improved perhaps 5x faster (re: Donald Knuth), but the template speed increases of 12x, 100x or 1,000x faster are still a shock to me. Also, using many large parameters in a template consumes the "post-expand include size", so reducing a numeric formula parameter by using {#expr: <formula>} can shrink the post-expand or argument sizes as perhaps 5x smaller. I was stunned when {strlen_quick} reduced the post-expand by 12x, increasing capacity from 2,900 instances to allow 37,000 uses of {strlen_quick} per page! The other string algorithms had been deleted, or rather redirected, so there was only one POV for how to check string lengths. I guess the Strategy-Wiki issue for an Arab-POV fork of article "Palestine" would reveal insights not found in the main article, just as the faster string algorithms were gone from the main {str_len} template. A classic case of "POV funnel" is the Amanda Knox case, where many Europeans did not understand how she is in major TV news in America, every few months, as "will she get a fair re-trial and be set free" rather than wondering what motive she would have for killing her flatmate of 6 weeks. The MoMK article was reduced to omit Knox's "POV-boring" background as a guitar-playing, honors student who called her roommate about their Halloween costumes the day before the murder. See, the POV-boring details are what made the case notable in the U.S. as why would a "straight-A" student, who sings with guitar, work 7 jobs in Seattle to pay her way as an exchange student in Italy, then want to kill her British roommate of 6 weeks (whose rent money vanished) but leave no hair, fingerprint or DNA evidence, unless she was hit by police to give a false confession as she testified? Understandably, some European users always removed those boring details as insignificant, as WP:UNDUE POV-boring text compared to other details. Only when an article can focus on the POV-boring concepts of a "huggy bookworm" whose new friend died, can readers understand why millionaire Donald Trump advised boycotting Italy until Knox is freed. Perhaps that focus is similar to an Arab-POV article about Palestine, where seemingly POV-boring or only-an-Arab-would-care details are being omitted, but I'm not sure there. For checking string-length, the better solution was in the deleted (or redirected) WP:POV fork templates which had faster, shorter algorithms. Having multiple pages about an issue can lead to a better understanding of the all-encompassing (encyclopedic) viewpoints. That's the multi-template connection to multi-POV Israel-Palestine articles. -Wikid77 23:49, 30 January 2011 (UTC)[reply]
Please stop saying synergy. It makes me retch. OrangeDog (τε) 17:46, 30 January 2011 (UTC)[reply]
Perhaps I should say that performance improvements are a "win-win game" (!) where the "pie gets bigger" rather than users fighting over the pieces of the pie?!?! -Wikid77 23:49, 30 January 2011 (UTC)[reply]
Oh please, enough! I have to listen to that kind of b/s speak all day long! We need to realign our white space initiatives on a going forward basis. – ukexpat (talk) 16:36, 31 January 2011 (UTC)[reply]
  • Anyway, more progress: fearing the expansion depth of Template:Val (nested 30 levels for 14-digit decimals), I had been reluctant to use it in complex templates. So, I rewrote portions as 5-depth-level templates {{gapnum}} and {{gapnum/dec}} to put space-gaps in numbers. Then, using those optimized templates, with parameters set to use just 5 expansion-depth levels, I wrote Template:Convert/gaps to put space gaps in numeric conversions, as requested by users for metric measurements. -Wikid77 09:46, 4 February 2011 (UTC)[reply]

Having trouble editing/unintended vandalism

OK, so I was working on the 2010-11 Australian region cyclone season, and just adding the dissipation date for 06U (revision on 00:22 4 January), and although I had gotten the changes I wanted, I somehow exposed some formatting in the line of the table for TC Tasha. This was due to an infinite loop. What I did (if I remember correctly) was that I was submitting it, and it got stuck in the infinite loop. So, I either closed the computer (it was a laptop) or hit the Back button on the browser (I don't remember which), and then the formatting came out all screwy. I am using Safari version 5.0.3 on a Mac OSX Version 10.6.6 with a 2.4 GHz Intel Core 2 Duo processor. (The loop occurs even when I am editing small sections of a larger page, though the loop does not always happen, even when I am editing larger pages.)

(In fact, as I am typing this, the page for the 2010-11 South Pacific cyclone season is stuck in an infinite loop, while I am trying to edit that page.) — Preceding unsigned comment added by VeryPunny (talkcontribs) 19:12, 30 January 2011 (UTC)[reply]

What kind of infinite loop are you experiencing? Can you copy and paste any error messages that you see? Gary King (talk · scripts) 22:56, 30 January 2011 (UTC)[reply]
No, there are no error messages, the screen is just frozen, but otherwise works normally. Interestingly, this does not happen when I am simply browsing around without editing. — Preceding unsigned comment added by VeryPunny (talkcontribs) 17:12, 31 January 2011 (UTC)[reply]

Transparent table background.

For a long time, the plain table background was white, but this has changed to transparent in the upcoming 1.17 release of MediaWiki. Since there may be templates that rely on a white background, I'm putting the following code in Common.css, in order to spot any glitches that may pop up before 1.17 is deployed. Edokter (talk) — 21:08, 30 January 2011 (UTC)[reply]

/* Transparent table background. Remove when 1.17 is deployed */
table {
    background-color: transparent;
}
  • So, now, a table will default to transparent background, but a quotebox or preformat-box will remain white, as shown below:
This indented table now defaults to transparent, but class=wikitable will remain white.
This is a quotebox, indented by leading spaces.

However, the quotebox (above) & preformat-box (below) remain white.

This text is within the tags <pre></pre>.

Tables are often used for multiple columns in see-also sections.

So, people should change any unclassed tables which need to be white, by setting style="background:white". -Wikid77 10:06, 31 January 2011 (UTC)[reply]
Basically yes, but since all pages already have a white background (in Vector), it would not be absolutely necessary. (PS. Wikitables and pre-formatted boxes have a gray background.) Edokter (talk) — 19:29, 31 January 2011 (UTC)[reply]

William H. Ryan, Jr.

The William H. Ryan, Jr. Table of Contents has a line " * 4.1 Non-state Territories of the United States" appearing under References. The Commonwealth of Pennsylvania is one of the original thirteen states. Some technical person should investigate. Please tell me if I should be reporting this situation elsewhere, rather than here. --DThomsen8 (talk) 15:49, 31 January 2011 (UTC)[reply]

It's a badly-written navbox {{U.S. state attorneys general}} --Redrose64 (talk) 15:52, 31 January 2011 (UTC)[reply]
 Done - I've fixed it now. --NSH001 (talk) 16:34, 31 January 2011 (UTC)[reply]
Thank you. That was a really quick fix. --DThomsen8 (talk) 16:45, 31 January 2011 (UTC)[reply]

Thumbnail software

Which thumbnail software does Wikipedia use? According to MediaWiki, it should be either ImageMagick or GD. But I'm not sure which, and how it's implemented (on Commons, for an image, there's no thumb.php?size=xyz etc.). I'm thinking of fixing the bugs with Animated GIF thumbnailing. Thanks, ManishEarthTalkStalk 16:19, 31 January 2011 (UTC)[reply]

Wikipedia is probably using the recommended ImageMagick, since the ImageMap extension requires it, apparently, and it's installed here. Gary King (talk · scripts) 18:45, 31 January 2011 (UTC)[reply]
Yes, it's ImageMagick. The parameters for calling it will be in svn somewhere - you could check related bugzilla issues which may lead you there. OrangeDog (τε) 18:55, 31 January 2011 (UTC)[reply]
Thanks! ManishEarthTalkStalk 01:43, 1 February 2011 (UTC)[reply]

Keep getting logged out

Hi, I'm using Internet Explorer 8; I always check the stay logged in checkbox, but in the last few weeks Wikipedia won't remember my login for more than 20 minutes. I've tried clearing my cookies as suggested but that didn't fix the problem. Any ideas? Thanks. Some guy (talk) 20:28, 31 January 2011 (UTC)[reply]

You can try Firefox, Opera, or Chrome. Otherwise see if Update for Internet Explorer for Windows Vista (KB2467659) is what is causing your problem. – Allen4names 21:03, 31 January 2011 (UTC)[reply]

Is there a bot that can add WikiProject templates based on categories or stubs?

I wonder if there is a bot that could check which articles in a given category (ex. Polish singers) or using given templates (ex. Poland-bio-stub) are not categorized with a corresponding WikiProject assessment template (ex. WikiProject Poland template), and then add the wikiprojects template to those talk pages (preferably, assessing it as stub if the articles have a stub template)? That seems like something a bot should be able to do with a good success rate. --Piotr Konieczny aka Prokonsul Piotrus| talk 20:52, 31 January 2011 (UTC)[reply]

Sure, see Category:WikiProject tagging bots. –xenotalk 20:56, 31 January 2011 (UTC)[reply]

Footnote formatting flag will not go away when I changed an "ibid" to a footnote

Cleared up an "ibid" footnote issue on this page: http://en.wikipedia.org/wiki/Jacquie_Jordan

But the message following wikipedia flag/box has not gone away: "Constructs such as ibid. and loc. cit. are discouraged by Wikipedia's style guide for footnotes, as they are easily broken. Please improve this article by replacing them with named references (quick guide), or an abbreviated title."

I was wondering what I am missing RE formatting, etc...

Mark Parsons — Preceding unsigned comment added by Parsonseditor (talkcontribs) 21:13, 31 January 2011 (UTC)[reply]

The message is a note added by a user, not an automatically generated tag - it won't go away until explicitly removed. I've done so, now. In future, please do feel free to remove the notices when the issue's been resolved! Shimgray | talk | 21:24, 31 January 2011 (UTC)[reply]

This is bizarre

In Double (basketball), Quadruple-double section, NBA subsection, in the notes, it reads:

"Olajuwon was originally credited a quadruple-double as shown by the box score; however, the NBA stripped Olajuwon of one assist assist after reviewing the game tape.[52]"

However, when I went to remove one "assist", there was only one there. Is this my computer? ~EDDY (talk/contribs)~ 01:20, 1 February 2011 (UTC)[reply]

It shows only one for me in the rendered text. Have you tried clearing your browser cache? Ucucha 01:23, 1 February 2011 (UTC)[reply]
I see only one "assist", too. I can't think of any reason why that word would appear twice for you; it shouldn't be a cache problem, either, since "assist assist" hasn't appeared in the article before, at least not in the past few dozen edits. Gary King (talk · scripts) 03:03, 1 February 2011 (UTC)[reply]

Left floated image overlaps table of contents

TOC and image overlaps

While reading the Jingshan Park article I noticed that the TOC and image overlapped. I'm using Safari on Mac OS X and the bug only appears if the browser has a particular width, although at other widths the image border can overlap the text.--Salix (talk): 12:21, 1 February 2011 (UTC)[reply]

Too many floating elements will cause problems sometimes. I've moved the image to the right. Edokter (talk) — 12:45, 1 February 2011 (UTC)[reply]

Special:NewPages in reverse order

Currently, Special:NewPages lists new pages in chronological order from newest to oldest. Users have to make an effort to patrol at the back of the backlog. I have two questions.

  1. Is it possible to make Special:NewPages list pages in reverse chronological order from oldest to newest?
  2. Have we ever considered going in reverse order before?

Thanks. - Hydroxonium (H3O+) 15:19, 1 February 2011 (UTC)[reply]

What effort? There's a backlog link. Anyways, for a history page or NewPages, just add ?dir=prev (or &dir=prev if there already is a question mark in the URL) to the URL. ManishEarthTalkStalk 16:07, 1 February 2011 (UTC)
Or click on the "earliest" link to go to the earliest pages. Gary King (talk · scripts) 17:17, 1 February 2011 (UTC)[reply]
Oops. I should have clarified that. There are over 1,000 users patrolling the front of the list and only a handful patrolling the back of the list. So several hundred articles fall off the end and aren't reviewed each month. If the list is reversed, then maybe more people would patrol at the end instead of the front. - Hydroxonium (H3O+) 17:36, 1 February 2011 (UTC)[reply]
I doubt they'd be willing to change it. It's newest-to-oldest probably because every single page listing edits on Wikipedia is newest-to-oldest (Recent Changes, history pages, even nomination pages like WP:FAC, WP:RFA, etc.). And people are used to newest-to-oldest already. Gary King (talk · scripts) 17:38, 1 February 2011 (UTC)[reply]
and there's an advantage in trying to check the new ones, for this gets the worst problematic material deleted fastest. (Of course, it also gets some things deleted too fast when they're still under construction.) Perhaps we should make the option to go from the end a little more prominent, to get a better balance. DGG ( talk ) 05:20, 2 February 2011 (UTC)[reply]

Most of the Newpage backlog pages aren't patrolled as they are sticky subjects (notability, etc). We should tag and patrol with a special tag, so BLP/Notability experts can check it out. ManishEarthTalkStalk 11:50, 2 February 2011 (UTC) [reply]

or more simply, those who consider themselves such experts should be encouraged to go from the back. When I patrol, if I want to do something easy, I look at the front, if I'm up to actual thinking, I look at the back. DGG ( talk ) 20:22, 2 February 2011 (UTC)[reply]
But no one does. I've seen the same pages stuck there for months because they're BLP or sticky stuff. We should tell the people watching WP:BLP and WP:Notability to do something about it. ManishEarthTalkStalk 15:44, 3 February 2011 (UTC)[reply]
You can't "see the same pages stuck there for months" - they drop out of Special:Newpages after one month. --Redrose64 (talk) 15:50, 3 February 2011 (UTC)[reply]
This might be a good chance to advertise this new bot. Pretty good if you're having trouble staying on top of the new page list cut off. - Kingpin13 (talk) 15:53, 3 February 2011 (UTC)[reply]

Problem with contributions not moving after username change

Last year, Feb 22 2010, I RTV'd per CHU [3]. However, not all of my contributions moved over [4], even though my old userpage claims the username is not registered [5] and there is no "user contributions" link in the toolbox section. Can the remaining 300 or 400 contributions be moved over to the other username (see [6] for new user name on 2/22/2010)? If not, can they be moved to a similarly named username? Rgrds. --64.85.215.37 (talk) 19:29, 1 February 2011 (UTC)[reply]

Can't rename a user that doesn't exist. I'll post a note at bugzilla:17313 to see if we can get a dev to intervene. –xenotalk 19:35, 1 February 2011 (UTC)[reply]

Way to get the number of empty subcategories - API

I currently run a bot that checks for all the empty subcategories of Category:Wikipedia files with a different name on Wikimedia Commons and Category:Wikipedia files with the same name on Wikimedia Commons. The only way I know how to do this on the API is to actually run a separate query for each subcategory. That means that any time I want to run the update (~1-3 times per day), I have to make ~1200 queries to the server (which seems like an obnoxiously high number). Is there a way to do this any quicker? I note that the HTML version of the page shows 200 subcategories at a time (just click on either category and you'll see what I mean). Magog the Ogre (talk) 23:30, 1 February 2011 (UTC)[reply]

It sounds like prop=categoryinfo will give you the information you want about each subcategory. You could pass up to 500 subcategory names (separated by '|') in the titles= parameter, or you could use categorymembers on each parent cat something like this. Anomie 00:57, 2 February 2011 (UTC)[reply]
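A rough Python sketch of the batched titles= approach described above; error handling and query continuation are omitted, and the 500-title batch assumes apihighlimits (ordinary accounts are limited to 50):

import requests

API = "https://en.wikipedia.org/w/api.php"

def empty_subcats(titles):
    # Ask prop=categoryinfo about up to 500 categories per request and
    # report the ones with no members.
    empty = []
    for i in range(0, len(titles), 500):
        r = requests.get(API, params={
            "action": "query",
            "prop": "categoryinfo",
            "titles": "|".join(titles[i:i + 500]),
            "format": "json",
        })
        for page in r.json()["query"]["pages"].values():
            # an empty category may omit categoryinfo entirely
            if page.get("categoryinfo", {}).get("size", 0) == 0:
                empty.append(page["title"])
    return empty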

Thank you. File:Smily.png (talk) 04:03, 2 February 2011 (UTC)[reply]

OK, where did you figure out how to do that query? I am entirely unfamiliar with this whole "generator" syntax. Magog the Ogre (talk) 22:04, 2 February 2011 (UTC)[reply]

CAT:CSD

This category just had a number of pages added, mostly Boston Red Sox players such as Tim Wakefield, none of which belong. Anyone know why? (Where should such anomalies be reported - I saw some other mistakes earlier today)--SPhilbrickT 01:01, 2 February 2011 (UTC)[reply]

Probably caused by this. The articles incorrectly placed in the category should be gone from the category soon. Ucucha 01:03, 2 February 2011 (UTC)[reply]
By the way, you can get the articles to stop displaying in CAT:CSD by doing a null edit on them; I just did that for Wakefield. Ucucha 01:06, 2 February 2011 (UTC)[reply]
Oddly, I was just looking at that diff, but nothing sunk in. Adding a hangon to an article not speedied adds it to the list? Odd.--SPhilbrickT 01:08, 2 February 2011 (UTC)[reply]
all gone.--SPhilbrickT 01:10, 2 February 2011 (UTC)[reply]
That's quite purposeful. An extremely high percentage of users accidentally replace the CSD nomination with the hangon template, and keeping the category makes sure that admins notice.—Kww(talk) 01:11, 2 February 2011 (UTC)[reply]
I don't see the point of the category on {{hang on}} either, and have removed it. The hangon template already adds a different category, Category:Contested candidates for speedy deletion, so I don't see the need for another one. Ucucha 01:14, 2 February 2011 (UTC)[reply]
But that's just a subcategory--listing it there automatically lists it in the main category also. And so it should. Some of us who patrol speedy check the critical subcategories first, some go by the time the speedy was placed, some--like myself--go through it alphabetically. Just as Kww said, many new users who don't carefully read the instructions think that by placing the hangon tag they can remove the original speedy tag. They can't, but for the hangon tag to continue to list in the category makes sure we deal with the articles just the same. (The only problem is when someone adds a hangon tag to contest a prod or an AfD nomination, & thus gets the article listed in speedy also, which was certainly not their intent. When we admins see that, we just remove the tag & if necessary, explain what to do.) DGG ( talk ) 05:16, 2 February 2011 (UTC)[reply]
I've restored the category pending further discussion.—Kww(talk) 12:02, 2 February 2011 (UTC)[reply]

Expand Tag

I recently put {{Expand}} (hereinafter "Expand Tag") on the Reasonable doubt article. When I went to the article after saving it, it had Template:Expand where the Expand Tag box should be. Was the Expand Tag discontinued? Also, if possible, could someone put an alert on my talk page that my question was answered? Thank You, Etineskid (talk) 02:26, 2 February 2011 (UTC)[reply]

Yes, it was deleted after much debate. It was seen as being too general to be useful. See Wikipedia:Templates_for_discussion/Log/2010_December_16#Template:Expand. Fences&Windows 03:19, 2 February 2011 (UTC)[reply]

Article page is a dabpage, talk page is a redirect

Is there a bot which can detect when an article page is a dab page but the talk page associated with it is a redirect, and then fix it by replacing the talkpage redirect with the {{WikiProject Disambiguation}} template? DuncanHill (talk) 15:23, 2 February 2011 (UTC)[reply]

Wikipedia:Bot requests. OrangeDog (τε) 11:26, 3 February 2011 (UTC)[reply]
It wouldn't be a bad idea to tag regular dab pages in the process. ▫ JohnnyMrNinja 11:34, 3 February 2011 (UTC)[reply]

Page views

From WT:Invitation to edit: Does anyone know how to get the number of page views (per month is good enough) specifically served to readers who are not logged into an account? (We're talking about ~20 pages, no more than six [specific] months each.) WhatamIdoing (talk) 20:22, 2 February 2011 (UTC)[reply]

Add new features to show/hide content based on user's groups?

What do you guys think of creating CSS classes that can be used by editors to show/hide content based on a user's groups? For instance, I am a member of the "autoreviewer", "reviewer", "rollbacker", "user", and "autoconfirmed" groups. There are some templates, such as {{Invitation to edit}}, that could take advantage of this by only showing the template to anonymous editors. The first discussion that I saw mentioning this was by User:MSGJ, in a discussion that was six months ago, and then it was brought up again today. If you guys want an idea of how this might work, then install this script, and then wrap any code in <div class="for-sysop-only">Text goes here</div>. The class can use any existing group, which are all listed at the top of the script, with "anonymous" and "user" added as well, since they aren't explicitly defined by MediaWiki. You can also use <div class="not-for-sysop">Text goes here</div> as well. Both examples should be self-explanatory by their class names. Thoughts on integrating this or a variant in MediaWiki:Common.js? Wikipedia:Upload already has custom code written for it in Common.js that shows different content for logged-in/out users. Gary King (talk · scripts) 20:28, 2 February 2011 (UTC)[reply]

Vandalism that only shows up for non-logged-in users? Anomie 21:58, 2 February 2011 (UTC)[reply]
Perhaps hidden content could be marked with an icon? Or just tag anonymous edits including these classes. It could also be a gadget for those who want to hide templates such as {{Invitation to edit}}, so then by default nothing is changed. Gary King (talk · scripts) 22:26, 2 February 2011 (UTC)[reply]

Random article

Random article used to support the back button going to the previous random article, but now it takes the user back to the main page. Please restore the original behavior. —Preceding unsigned comment added by 66.14.154.3 (talk) 18:29, 2 February 2011 (UTC)[reply]

I have this problem too. Windows 7 and Chrome. Of course you can find it by looking at your browsing history... Ericoides (talk) 22:29, 2 February 2011 (UTC)[reply]
The problem seems to only be with the Vector skin. I don't have that problem in Monobook. Gary King (talk · scripts) 22:32, 2 February 2011 (UTC)[reply]
Note: The first comment in this section was originally posted to Talk:Main Page. Graham87 01:23, 3 February 2011 (UTC)[reply]
Thanks Graham. Gary, the problem happens irrespective of the skin (in Chrome). Ericoides (talk) 07:16, 3 February 2011 (UTC)[reply]

I have Windows 7 and four different browsers. For each, I was not logged in (therefore Vector skin), started at the main page, then went for "Random article" three times, then the "back" button once.

  • Google Chrome 8.0 - returns to main page
  • Mozilla Firefox 3.6.13 - returns to second random article
  • MS Internet Explorer 7.0 - returns to second random article
  • Opera 11.01 - returns to second random article

In Firefox and Chrome, you can right-click the "back" button to select from a list. This demonstrates that whilst successive articles reached through normal wikilinks are added to this list in both browsers, those reached through "Random article" are not added to the list in Chrome. --Redrose64 (talk) 14:24, 3 February 2011 (UTC)[reply]

EasyTimeline not working correctly?

In the distributed.net article, there is a timeline that shows the project's many milestones. However, the sections for OGR and DES had to be split into separate bars due to overlapping. The bars were previously named OGRa, OGRb, OGRc, DESa, DESb and DESc.

This seemed a little awkward, so I decided to use ditto marks instead:

bar:OGR from:start till:[date 2] text:OGR
bar:OGR from:[date 2] till:[date 3] text:&quot;
bar:OGR from:[date 3] till:end text:&quot;

However, this caused all of the data to be merged into one bar. It did not work like described in the help files. Am I doing something wrong, or is this a bug? --Ixfd64 (talk) 01:29, 3 February 2011 (UTC)[reply]

Help with cookies - please!

I really am stuck! Please see this sequence of HTTP request and response headers made from the wolfsbane toolserver. The first pair are from the use of login() in Wikibot.php5. The next is a GET which sends the expected six cookies apparently correctly. But the response to it has completely ignored the cookies. Can anybody spot what I am doing wrong? That report was generated with this call and the source of the script can be seen here. — RHaworth (talk · contribs) 01:54, 3 February 2011 (UTC)[reply]

enwiki_session (and possibly centralauth_session) should only be sent when making edits and doing other stuff that requires tokens - you get a new one every time you fetch an edit (or whatever) token. MER-C 03:05, 3 February 2011 (UTC)[reply]
No, they can be sent whenever you make a request, as long as you update them whenever the server happens to send back a new one. Most HTTP toolkits will actually handle that automatically for you, anyway, and every one of them will send those cookies with every request. Anomie 03:19, 3 February 2011 (UTC)[reply]
Oh well. I guess things have changed in the four years since I wrote that code. MER-C 08:50, 3 February 2011 (UTC)[reply]
(edit conflict) I see no indication in that transcript that the server ignored the cookies, and only the first query (for Special:WhatLinksHere/Template:Oscoor) would show any difference anyway. Although I do note that when I query that page logged out the Content-Length is 19924, while logged in it is 21329 due to extra Javascript variables and such; in your transcript the Content-Length is 22171, which might indicate that your login was successful. Redo the transcript with a query for http://en.wikipedia.org/wiki/Special:MyPage, that will tell us for sure whether you're logged in or not because it will serve an HTTP 302 redirect to either your userpage or the userpage of the IP address.
BTW, if you want to test if you're logged in from the script itself, fetch http://en.wikipedia.org/w/api.php?action=query&meta=userinfo (with the appropriate format= parameter, of course) and check the response, that's more straightforward for a bot than the Special:MyPage test. Anomie 03:17, 3 February 2011 (UTC)[reply]
I recommend you check whether you are still logged in in this fashion. It is pointless for read requests to attach cookies unless you want a token. MER-C 08:50, 3 February 2011 (UTC)[reply]
Very many thanks, I get it now. I was being silly. — RHaworth (talk · contribs) 10:44, 3 February 2011 (UTC)[reply]
If you don't pass the cookies for a read request, then MediaWiki won't see you as being logged in and you won't get the advantages of apihighlimits. Anomie 01:48, 4 February 2011 (UTC)[reply]
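For illustration, the meta=userinfo check in Python rather than the PHP used above; a logged-out response carries an "anon" flag, so its absence indicates a live login:

import requests

def is_logged_in(session):
    # meta=userinfo describes the user the server thinks is making the
    # request; the "anon" key is present only for logged-out requests.
    r = session.get("https://en.wikipedia.org/w/api.php", params={
        "action": "query",
        "meta": "userinfo",
        "format": "json",
    })
    return "anon" not in r.json()["query"]["userinfo"]

s = requests.Session()  # cookies from a login would ride along automatically
print(is_logged_in(s))  # False until this session has logged in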

Possible bug in moving files and listing in categories

Earlier, I did this file move [7]: from File:1985 Street.PNG to a more suitable and proper filename File:Back to the Future Part II & III.png with redirect. When I go to the category this file is listed under, Category:Screenshots of Nintendo Entertainment System games, this new filename is still listed under the "1" section, as if the software is still seeing the old filename (which is now a redirect). Has anyone else seen this issue, or did I just stumble upon a bug somewhere? –MuZemike 04:54, 3 February 2011 (UTC)[reply]

Looks like any subsequent edit to the categorized page (including null edits) will update a sort-key which defaults to {{PAGENAME}}. ―cobaltcigs 05:02, 3 February 2011 (UTC)[reply]

Automate stock information through RSS feeds

I am not an expert on matters of stock markets, but wouldn't it be possible to use a reliable RSS feed to guide a bot to update Template:Infobox company? We could place a flag like NASDAQ = ????, and we could start on only a few companies to begin with. The bot could update with most recent closing price and most recent stock volume, and the template could automatically generate a company value (assuming that those numbers would equal that, I'm not certain myself). Does this sound like something that would be feasible to do? ▫ JohnnyMrNinja 05:57, 3 February 2011 (UTC)[reply]

Sounds like the bot would have to make thousands of edits a day. It makes more sense to just put market capitalization values "as of January 1" or the last quarter, instead of transforming Wikipedia into an up-to-date stock ticker. Gary King (talk · scripts) 06:16, 3 February 2011 (UTC)[reply]
There are many bots that make thousands of edits a day. But sure, it could be made to only work once a month or once a week (weekly would probably be best at any rate). Many articles on publicly-traded companies contain well-out-of-date financial info. I'm more interested to know if it is technically possible, for a bot to respond to an RSS feed. ▫ JohnnyMrNinja 07:02, 3 February 2011 (UTC)[reply]
Ok, I brought this to Proposals to see if it can get support. ▫ JohnnyMrNinja 07:47, 3 February 2011 (UTC)[reply]
An RSS feed is just XML. Most languages with bot software should have an RSS parsing library somewhere out there (e.g. [8]), and if not then it is relatively easy to roll your own. MER-C 07:18, 3 February 2011 (UTC)[reply]
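As a minimal illustration of that point, an RSS 2.0 feed can be read with nothing but the Python standard library; the feed URL below is a placeholder, and real stock feeds will differ in structure:

import urllib.request
import xml.etree.ElementTree as ET

def latest_items(feed_url):
    # RSS 2.0 is plain XML: iterate over <item> elements and pull out
    # whichever child tags the feed provides.
    with urllib.request.urlopen(feed_url) as f:
        root = ET.parse(f).getroot()
    for item in root.iter("item"):
        yield item.findtext("title", ""), item.findtext("pubDate", "")

for title, date in latest_items("http://example.com/stocks.rss"):
    print(date, title)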
True that some bots can make a huge amount of edits, but I think that few of them constantly update the same page over and over again. I was just thinking how useless it might be to have a history page for a rarely edited article filled to the brim with updates by a single bot. (I do realize that some bots do constantly update pages over and over again, such as WP 1.0 statistics pages, but those exist explicitly for the bot to edit.) Gary King (talk · scripts) 16:09, 3 February 2011 (UTC)[reply]

You could store the information as key–value pairs within one template, using a {{#switch:}} statement. ―cobaltcigs 07:26, 3 February 2011 (UTC)[reply]
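A sketch of how a bot might render such a page; the template name and tickers are made up, and only the #switch structure comes from the suggestion above:

def switch_page(prices):
    # One wikitext page holding every ticker as a key-value pair; another
    # template could then look up {{stock prices|NASDAQ:AAPL}}.
    lines = ["{{#switch: {{{1|}}}"]
    lines += ["| %s = %s" % (k, v) for k, v in sorted(prices.items())]
    lines += ["| #default = unknown", "}}"]
    return "\n".join(lines)

print(switch_page({"NASDAQ:AAPL": "345.03", "NYSE:GE": "20.34"}))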

Yes but why? Constantly updating information like this is not what an encyclopaedia is for. If people want stock info they should go to a stock market website. If they want financial news they should consult a financial news agency. OrangeDog (τε) 11:23, 3 February 2011 (UTC)[reply]
I’m not going to argue about that. A specialized (non-encyclopedia) wiki may find this technique useful however. ―cobaltcigs 23:24, 3 February 2011 (UTC)[reply]

If done correctly, this can be achieved with one bot update every week/month. The bot will collect all the information into one file with a huge {{#switch:}} as cobaltcigs suggested above. I still would not support it, from reasons similar to what OrangeDog mentioned, but this belongs on the proposal, not on the technical discussion, so I'll voice it there. --Muhandes (talk) 16:30, 3 February 2011 (UTC)[reply]

JavaScript Standard Library Gadget

The JavaScript Standard Library gadget wouldn't make most of my user scripts work on IE 7 or IE 8, while they do work on Mozilla Firefox 4 Beta 10 and Firefox 3.5. --Yutsi Talk/ Contributions 14:08, 3 February 2011 (UTC)[reply]

Texas State Historical Association website

A shot in the dark here - maybe some Wikipedia editor has run across some information. It would appear that the servers at the Handbook of Texas Online have been down for a couple of days. I've tried it on both my Firefox and IE. Even the Main Page either brings up a message of "the connection has timed out", or "server not found". If the web site is down, there's no one to contact about this. Although, the timing of this also tends to coincide with the Egypt events slowing down the internet as a whole. Just wondering if anyone else has heard anything.

Also, I've tried View/Page Source to see their server name. When I do that, it's completely blank - nothing there.

If I go through U of North Texas at Denton, where this is supposedly based, everything on their site comes up. Except what they have for the UNT TSHA portal. Again, same messages and blank that shows no server.

Is it possible the Handbook is no longer available online?

Maile66 (talk) 16:50, 3 February 2011 (UTC)[reply]

OK, It's back, after two days in the cosmos. So, never mind. Maile66 (talk) 19:38, 3 February 2011 (UTC)[reply]
  • You might want to contact their webmaster, who might explain why the server was offline, in case there will be a repeat pattern. For instance, the Swedish webserver for the article hit-counters (stats.grok.se) has had space limits, which stopped logging hit-counts, and would lose new data. That problem occurred during extreme events, such as the release of new blockbuster films (where zillions of hits were logged), or when the webmaster went on summer vacation. The pattern predicted when to get stats before losing service. -Wikid77 09:46, 4 February 2011 (UTC)[reply]

Diff shows amendment but page text doesn't

See this diff; I've added a section anchor so that it goes to the "Wh" section. If I look down among those, I see this:

Station (Town, unless in station name) | Rail company | Year closed
Whitchurch Halt | GWR | 1959
Whitchurch South (Hampshire) | GWR | 1960
White Bear | Lancashire and Yorkshire Railway & Lancashire Union Railway joint | 1960

However, an examination of the diff at the very top of the page shows that I changed the second of these from [[Whitchurch (DN&S) railway station|Whitchurch South (Hampshire)]] to [[Whitchurch Town railway station|Whitchurch Town]] (Hampshire) so this row should show as:

Station (Town, unless in station name) | Rail company | Year closed
Whitchurch Town (Hampshire) | GWR | 1960

What's going on? --Redrose64 (talk) 18:02, 3 February 2011 (UTC)[reply]

I purged the page and it's fine now. Gary King (talk · scripts) 19:53, 3 February 2011 (UTC)[reply]

Trouble substituting a #switch block

I'd like to make {{WikiCup nomination}} substitutable. It has a #switch statement in it, which when substituted, should only output the result and not the entire block. However, I notice that {{subst:#switch:}} doesn't work. So, how do I go about substituting this block? Gary King (talk · scripts) 06:37, 4 February 2011 (UTC)[reply]

Are you sure it isn't working when the variables have been set? It looked like it worked for me, but I might not understand the issue. There is no default set, so it wouldn't work without variables... ▫ JohnnyMrNinja 06:57, 4 February 2011 (UTC)[reply]
(Note: the /doc prescribes and uses {{cupnom}} in the examples, which redirects to {{WikiCup nomination}}). I created Template:WikiCup nomination/sandbox (with subst:#switch) and Template:WikiCup nomination/testcases. It looks like the subst acts after processing the parameters, but before the #switch is performed. Without the subst: (as the main template is now) it works fine. -DePiep (talk) 09:47, 4 February 2011 (UTC)[reply]
And, if you want the whole template to be subst:-able (as you write), the template should be entered like this: {{subst:WikiCup nomination|some page|FAC}}. Seems to work (see testcases). -DePiep (talk) 11:25, 4 February 2011 (UTC)[reply]

Secure server and linking to Robots.txt.

I'm logged into the Secure Server. While looking at the Grub (search engine) article, I find a link to Wikipedia's Robots.txt file used as a source. Clicking on the link takes me to the secure server's Robots file instead of the en.wikipedia file. Is there a way to hardcode the link to prevent this behavior? -- RoninBK T C 08:28, 4 February 2011 (UTC)[reply]

But why is this such a problem? Anyhow, I don't think a "robots.txt" file is a reliable source, and, what's more, the link seems to have been removed from the article. — This, that, and the other (talk) 09:51, 4 February 2011 (UTC)[reply]
Yeah, it would seem to be moot at this point, wouldn't it? I suppose the easiest solution for a problem is to simply remove the instance of it. -- RoninBK T C 19:09, 4 February 2011 (UTC)[reply]

CSS: a lower diacritic is sometimes cut off (IPA)

In {{IPA vowel chart}} (and its sub {{IPA vowel chart/vowelpair}}), IPA symbols with a lower diacritic are used, e.g. . An editor notes that sometimes that lower diacritic is cut off. Possibly browser/skin/font/zoom related (I can only reproduce this myself by zooming ++ in FF).
My question is: how to set the (inline) box to prevent this? I tried using "vertical-align:" but did not get an effect, so probably I did not use or understand it correctly.
At the moment, sandboxes & vowelpair/doc are prepared and available for this: Template:IPA vowel chart/sandbox(edit talk links history), sub Template:IPA vowel chart/vowelpair/sandbox(edit talk links history). -DePiep (talk) 11:46, 4 February 2011 (UTC)[reply]

hold on: the sandbox showed it: lower opaque boxes obscure the lower part. From here it is easy. (But still don't understand "vertical-align:" correctly ...). -DePiep (talk) 12:05, 4 February 2011 (UTC)[reply]
The sequence: if I have a letter with no risers (nothing above "x" height, so no "X, h" etc), is there a way to cut off unoccupied top space of the inline-box where "e" is in? -DePiep (talk) 12:31, 4 February 2011 (UTC)

Page not loading

Can someone explain why National Register of Historic Places listings in Northwest Quadrant, Washington, D.C. is not loading for me? I've tried it once myself, and my bot has tried it about 20 times. It's a problem that seems isolated to this page. Is it temporary? Magog the Ogre (talk) 18:23, 4 February 2011 (UTC)[reply]

Interestingly, it's come up, just very slowly for me now. Apparently more slowly than 30 seconds (the bot timeout default). Magog the Ogre (talk) 18:27, 4 February 2011 (UTC)[reply]

It's a very long page, with 300+ images, which is the most likely source of your problem. I can't see anything obvious that could cause it to not load at all though. It might be worth investigating a possible split or reduction of that list. --Dorsal Axe 19:08, 4 February 2011 (UTC)