
Commons:Village pump/Proposals

From Wikimedia Commons, the free media repository

Shortcuts: COM:VP/P • COM:VPP

Welcome to the Village pump proposals section

This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2026/02.

Please note
  • One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
  • Have you read the FAQ?

 
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
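For readers unfamiliar with the archiving convention, the rule above can be sketched as a small decision function. This is an illustrative sketch only, not SpBot's actual code: a section qualifies for archiving 5 days after being tagged with {{Section resolved}}, or once its most recent comment is older than 30 days.

```python
# Illustrative sketch of the SpBot archiving rule described above.
# Not the bot's real implementation; timestamps are plain datetimes.
from datetime import datetime, timedelta

def is_archivable(resolved_tagged_at, last_comment_at, now):
    """resolved_tagged_at may be None if the section was never tagged."""
    if resolved_tagged_at is not None and now - resolved_tagged_at >= timedelta(days=5):
        return True
    return now - last_comment_at >= timedelta(days=30)
```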

Add autopatrol to file movers


At Special:ListGroupRights you can see which rights each user group has. The file mover group does not currently include autopatrol.

I briefly searched the archives and found the following: Commons:Village_pump/Proposals/Archive/2012/08#c-Philosopher-2012-08-04T23:26:00.000Z-Bundled_rights_(Filemover)_-_+1. A 2012 decision to do exactly this, but never acted upon?

Similarly, Jdx also suggested the same: Commons:Village_pump/Proposals/Archive/2019/02#c-Jdx-2019-03-18T08:24:00.000Z-Add_rights_from_the_autopatrollers_user_group_to_the_rollbackers_user_group:_vot RoyZuo (talk) 17:32, 5 February 2026 (UTC)

 Support for Filemovers as well as rollbackers. Shaan SenguptaTalk 15:16, 5 March 2026 (UTC)

Publicizing Commons:Uploading works by a third party as guideline


Hello,

I'd like to propose that Commons:Uploading works by a third party (COM:THIRD for short) be upgraded from an essay to a guideline. The text is already well developed, is quite widely used as a reference on the COM:Help desk among other places, and is built in a way that fits the description of a guideline very well: the page, IMHO, provides sound guidance for people toiling away at collecting third-party works; you may read about the do's and don'ts of that subject there. Such a consensual upgrade is surely warranted for this helpful work. Regards, Grand-Duc (talk) 20:03, 18 February 2026 (UTC)

Hi, Yes, but it should first be translated into the main languages, at least Spanish, French, Arabic, Chinese, German, Russian, etc. I will do French. Yann (talk) 20:06, 18 February 2026 (UTC)
I suggested marking the page for translation on its talk page back then, but as of this writing only one person has supported that. As for translations, I'll do Indonesian. HyperAnd [talk] 06:58, 19 February 2026 (UTC)
Should I mark only a section for translation (with the appropriate number of translation units) or the whole page? Abzeronow (talk) 04:50, 20 February 2026 (UTC)
  1. It is far too long in itself. That's not going to be helpful for its intended audience: people who don't have much knowledge of copyright matters. Newbies are not going to read the whole thing just to figure out whether they can upload a few photos, so they will either ignore it or give up uploading.
  2. I don't see how its content is not already covered by other policy or guideline pages, including COM:L and COM:DW.
  3. It contains unnecessary, problematic and esoteric jargon such as RTFM.
  4. In addition to trimming it down, I think it could be split into two pages: one that deals with copyright-expired/inherited material, the other for material whose authors can be contacted by the uploaders. They involve rather different procedures.
RoyZuo (talk) 15:12, 20 February 2026 (UTC)
I think the best way to educate newcomers is to write as succinctly as possible and to make short explanatory videos. That's far more engaging and informative than a long page of text. RoyZuo (talk) 15:30, 20 February 2026 (UTC)
  • OK, French is mostly done (thanks to Google). It needs proofreading.
  • First, thanks a lot to Jmabel for this huge help page. But from experience, I think that most people start with a wrong assumption. At least 95% of such pictures are of personalities, and people usually assume that they mostly need permission from the subject. While permission from the subject might sometimes be useful to keep the personality from requesting deletion, they first of all need permission from the copyright holder. So it needs a big warning as an introduction. Yann (talk) 23:28, 20 February 2026 (UTC)

Captcha editing?


Hello,

@GPSLeo, Jmabel, Yann: is there now a thingy that makes all anonymous users (mobile or otherwise) complete a CAPTCHA whenever they submit an edit to a file? I remember when the AbuseFilter for mobile edits (as in, this warning here) was implemented, but right now I'm editing on a desktop, not a mobile.

You see, the first and second times I edited File:Carthamus tinctorius 050709b.JPG today, I didn't have to solve a CAPTCHA, but when I made a third edit to it, I needed to solve one, so it seems this new check was introduced between 14:12 and 14:15 today. How come? ~2026-93563-4 (talk) 14:29, 24 February 2026 (UTC)

Then again, I did NOT have to solve one when I edited File:CSIRO ScienceImage 10707 Safflower plant.jpg just now at 14:30. What gives? ~2026-93563-4 (talk) 14:31, 24 February 2026 (UTC)
These are automatic checks built into MediaWiki itself, not abuse filters; they are configured by the server admins. We do not have detailed information on how they work. If you do not want to solve these CAPTCHAs, you have to create an account. GPSLeo (talk) 16:22, 24 February 2026 (UTC)

Mass upload proposal


I'm looking for a way to upload a big batch of pictures, either by doing it myself or with help from an experienced user.

The source website: catza.net

The licence: CC BY 3.0

The author: Heikki Siltala

The text from the website on attribution: All photos © Heikki Siltala. The photos are immediately available for both non-commercial and commercial uses under the Creative Commons Attribution 3.0 License. There is no need to get a more specific permission or to pay money. The attribution is Heikki Siltala or catza.net.

The ideal way would be to automatically generate each file page from the picture's description on the site. For example, this picture (https://catza.net/en/view/code/MCO_g_09_22/172054/) has the description: Escape's Rihanna, JW [MCO g 09 22] . album RuRok cat show Helsinki 2011-04-23 . cat Escape's Rihanna . breeder Escape's . FI . breed MCO . lens Sigma 85mm f/1.4 EX DG HSM . f/1.8 . 1/125 s . ISO 2000 . 85 mm . 12:21:57 . id 172054

So it can be uploaded as: Name: Escape's Rihanna, JW - MCO g 09 22.jpg

== {{int:filedesc}} ==
{{Information
| Description    = {{en|Escape's Rihanna, JW [MCO g 09 22] . album RuRok cat show Helsinki 2011-04-23 . cat Escape's Rihanna . breeder Escape's . FI . breed MCO . lens Sigma 85mm f/1.4 EX DG HSM . f/1.8 . 1/125 s . ISO 2000 . 85 mm . 12:21:57 . id 172054}}
| Date           = 2011-04-23
| Source         = https://catza.net/en/view/code/MCO_g_09_22/172054/
| Author         = [https://catza.net/ Heikki Siltala]
| Permission     = All photos © Heikki Siltala. The photos are immediately available for both non-commercial and commercial uses under the Creative Commons Attribution 3.0 License. There is no need to get a more specific permission or to pay money. The attribution is Heikki Siltala or catza.net. The earlier www.heikkisiltala.com is also allowed.
}}

== {{int:license-header}} ==
{{CC-BY-3.0}}

[[Category:Photographs by Heikki Siltala (Catza)]]
[[Category:EMS Code g 09 22]]
[[Category:Helsinki cat show 2011]]

If possible, the breed category could also be assigned using this code list: https://catza.net/en/list/breed/a2z/
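The mapping described above can be scripted before choosing an upload tool. The following is a minimal, hypothetical Python sketch (not an existing Commons tool): it assumes the site's descriptions always use " . " as a separator and that labels such as "album", "breeder", "breed", "lens" and "id" always prefix their values, which should be checked against more of the site before a real batch run.

```python
# Hypothetical sketch for the catza.net batch: parse the " . "-separated
# description into fields, derive a filename, and assemble the
# {{Information}} wikitext shown above. Field labels are assumptions
# taken from the single example description.

def parse_description(desc):
    """Split the ' . '-separated description into a dict of fields."""
    parts = [p.strip() for p in desc.split(" . ")]
    fields = {"title": parts[0]}          # first chunk carries no label
    labels = ("album", "cat", "breeder", "breed", "lens", "id")
    for part in parts[1:]:
        for label in labels:              # "breeder" is checked before "breed"
            if part.startswith(label + " "):
                fields[label] = part[len(label) + 1:]
                break
    return fields

def suggest_filename(fields):
    """Turn 'Name [EMS code]' into 'Name - EMS code.jpg' as in the example."""
    return fields["title"].replace(" [", " - ").replace("]", "") + ".jpg"

def build_wikitext(desc, source_url, date):
    """Assemble the file-description wikitext proposed above.

    Breed/EMS categories could additionally be mapped from the site's
    code list; that lookup is omitted here."""
    return (
        "== {{int:filedesc}} ==\n"
        "{{Information\n"
        "| Description    = {{en|" + desc + "}}\n"
        "| Date           = " + date + "\n"
        "| Source         = " + source_url + "\n"
        "| Author         = [https://catza.net/ Heikki Siltala]\n"
        "}}\n"
        "\n"
        "== {{int:license-header}} ==\n"
        "{{CC-BY-3.0}}\n"
        "\n"
        "[[Category:Photographs by Heikki Siltala (Catza)]]"
    )
```

A script like this could feed a Pattypan spreadsheet or a batch-upload request; the parsing assumptions would need to be validated on a larger sample first.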

What would be the best way to approach this upload? YukiKoKo (talk) 10:45, 25 February 2026 (UTC)

@YukiKoKo: Hi, and welcome. COM:BATCH would be a good place to start. Please see what Yann needed to do in Special:Diff/1171701501 to mitigate the effects of your headings and templates, and avoid that need in the future.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 13:04, 25 February 2026 (UTC)
@YukiKoKo: You indicated you wanted to try it yourself. I would recommend having a look at Commons:Pattypan. --Schlurcher (talk) 07:50, 26 February 2026 (UTC)
I've made a request for batch uploading (https://commons.wikimedia.org/wiki/Commons:Batch_uploading/Catza), so I will first wait to see how that turns out. But I will have a look at Pattypan in case batch uploading isn't possible. YukiKoKo (talk) 11:52, 27 February 2026 (UTC)
I would just manually upload useful photos instead. Photos like [1] aren't really useful, and photos like [2] and [3] require an evaluation of the local freedom of panorama laws. There are also a lot of duplicates like [4] and [5], with one just being a redundant (in terms of educational value) black-and-white version of the same image. Traumnovelle (talk) 22:22, 2 March 2026 (UTC)

Narrow scope for AI on Commons


With the recent adoption of Commons:AI images of identifiable people as a guideline, along with the increasing scrutiny and backlash against generative AI technology, I think we should consider restricting the uploading of AI to only situations where it is strictly necessary. More formally I propose adopting the following scope guidelines for AI generated content on Commons and amending Commons:AI-generated media to include and reflect the following:
Any AI generated or modified file on Commons must meet at least one of the following requirements:
1. It is an independently notable work or part of an independently notable work
2. It is currently being used per the principles of COM:INUSE
3. It is the only example of the output of a particular piece of software (for example, Sora or Grok) or type of output (for example, music or video). Dronebogus (talk) 01:50, 1 March 2026 (UTC)

 Oppose, I don't think it is a good idea for now, since it would require significant changes to Commons:AI images of identifiable people when it has only recently been adopted as a guideline, and specific aspects of the text are still being discussed on its talk page. Thanks. Tvpuppy (talk) 02:36, 1 March 2026 (UTC)
@Tvpuppy: with respect, that's a weak reason to oppose something. Obviously the old policy would be superseded by and folded into the new one, since COM:AIIP is very short and covers a narrower part of the same topic in a very similar way. Dronebogus (talk) 06:12, 2 March 2026 (UTC)
 Support per nom.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 03:01, 1 March 2026 (UTC)
 Oppose I still think something I proposed over a year ago would be very much in scope and should happen. I pretty much avoid using generative AI myself, so this is a "proposing someone else should do this," but here goes.
We should identify anywhere between half a dozen and 100 different reasonably specific things that a reasonable person might ask AI to generate, e.g. "a photorealistic depiction of New York's Times Square in 1965," "a photorealistic depiction of a macaque," "an anime-style representation of Oliver Twist," "a watercolor of a European dragon," "a 32-bar musical passage in the style of Beethoven." These could be more specific if that works better. Then roughly every three months, or when a particular engine puts out a new release, we would give these same queries to a number of currently available AI engines and upload both their initial creations and what possibly better result a human can get by tweaking in dialog with the AI, with that dialog being part of the documentation. Over time, I imagine we would develop a very good history of the evolution of this technology. I would think that should certainly be in scope, and much more useful than the haphazard stabs people have taken at this sort of thing.
This is an example of what would be precluded by the proposal here, and I imagine that is not the only thing that would be worth doing that would involve using AI. - Jmabel ! talk 03:41, 1 March 2026 (UTC)
@Jmabel: doesn't the current framework of policies and guidelines already provide that some AI-generated media are permitted on Commons in any case, even under the assumption that, in the future, new additions are unwanted for SCOPE reasons? Namely, I'm thinking along the lines of COM:IAR and COM:PORN. And isn't there a wording in legal texts that is only slightly more permissive than a direct and strict prohibition, something like a "shall not" vs. a "must not"?
So we could say that AI-generated media are generally unwanted / not allowed / out of scope (similar to the rule for new uploads in PORN) but with a comparably small exception, which would allow only evidently good material that actually enhances our collections, using such "shall"-based wording.
Your example of an upload series with an actual "storyboard" and a well-thought-out concept would and should be permitted in any case, as it is designed to provide actual technological knowledge, and not by a small amount (barring developments in court decisions which could outlaw AI for our purposes).
I'm not fundamentally opposed to AI tool usage. In fact, in my family, we have already used AI-generated imagery several times to enhance my son's homework to good effect (and the Microsoft Image Generator that we used is also good for laughs when it e.g. blocks a totally inconspicuous German prompt containing the word "Wolfsrudel", "wolf pack", I think because of Nazi associations; replacing it with "mehrere Wölfe", "several wolves", and leaving the remainder unchanged made the prompt work). But I would never think about using these tools to produce media for Commons; in my opinion, they simply don't fit with our aims, besides a few limited exceptions. Regards, Grand-Duc (talk) 05:31, 1 March 2026 (UTC)
I don't see where the proposal offers any leeway here. COM:PORN doesn't really say anything about limiting porn: "Low-quality images of x that do not contribute anything educationally useful to our existing collection of images are not needed on Wikimedia Commons." is true for any value of x. --Prosfilaes (talk) 05:46, 1 March 2026 (UTC)
(cross-posted) @Grand-Duc: unless I am misreading, and I do not think I am, Dronebogus's proposal here would absolutely bar what I am suggesting, so I am opposing the proposal. In terms of allowance for this sort of thing, "It is the only example of the output of a particular piece of software (for example, Sora or Grok) or type of output (for example, music or video)" is much narrower than what I am suggesting here.
As I've said before, at least at the current state of generative AI I'm pretty skeptical about the use of AI imagery to illustrate anything other than the topic of AI imagery, but Dronebogus's proposal seems possibly even a bit narrow for illustrating AI imagery in Wikipedia. Do we really mean to say that we can have no pool of illustrations of what can be done with a given AI tool beyond what is already in use in existing articles, not even something that illustrates a capability that might not otherwise be obvious? And is this going to be the one area in which Commons has virtually no interest in content of historical interest (the history of the development of generative AI)? Because that would seem to be a consequence of adopting this proposal as it stands. - Jmabel ! talk 05:53, 1 March 2026 (UTC)
@Jmabel: You are looking for unreasonable reasons to oppose a reasonable proposal. If someone actually did whatever you're proposing, they would presumably put it in an article, no? Then it would be COM:INUSE and not a violation. Dronebogus (talk) 05:54, 1 March 2026 (UTC)
@Dronebogus: No, they would not (mostly) be put in an article. I can't think of anywhere that files on Commons that amount to a large data set are all put in an article somewhere else. A good example of this (not AI-related) that I'm (slowly) curating at the moment is , an early 20th-century collection of mostly 19th-century photographs, mainly of Seattle, with comments by Thomas Prosch. Most of these will never make it into an article, partly because for many of them, if we wanted just the photographic image (not his hand-written notes), we have a better print elsewhere. If you want, I could provide numerous other examples of content we absolutely should have on Commons that is never likely to find its way into any of our "sister projects." - Jmabel ! talk 05:45, 2 March 2026 (UTC)
I think an exception for illustrating AI even if not INUSE could be added to the guidelines, but I'm not sure how to word it. I want Commons to be able to provide illustrations on the topic of AI art, but I don't want AI art to be used outside of AI-related topics. The purpose of this proposal is to try to stop the latter before it happens while acknowledging and working around the necessity of the former. Dronebogus (talk) 05:58, 2 March 2026 (UTC)
@Dronebogus: we can limit how AI-generated content on Commons is categorized, but we cannot limit how other projects use our content. - Jmabel ! talk 18:52, 2 March 2026 (UTC)
They won't use AI if we don't host it. Dronebogus (talk) 00:30, 3 March 2026 (UTC)
 Oppose "It is the only example of the output of a particular piece of software" feels absolutely punitive. There is basically no case, besides a unique 2D piece of artwork, where two examples aren't better than one. As Jmabel says, chronological and subject-based series are valuable views into how a generative AI produces files. We shouldn't demand that the one file an old version of Grok got hilariously wrong be the only image we'll store here. --Prosfilaes (talk) 05:46, 1 March 2026 (UTC)
@Prosfilaes: the "only one example" clause could be amended to include versions of a piece of software, i.e. baz by Grok 1.0.jpg is not incompatible with baz by Grok 1.7.jpg. Dronebogus (talk) 06:06, 2 March 2026 (UTC)
 Oppose I didn't understand item 3 in the requirements. Please rephrase it. Gryllida (talk) 07:37, 1 March 2026 (UTC)
I don’t know what doesn’t make sense. It states that one potential rationale for keeping an AI-generated or modified file would be that no other files exist demonstrating the output of the software used to generate it, and/or there are no other AI files of the same media type (ex. Audio or video). For example, if baz.jpg was the only file generated by foo.AI, or baz.mp4 was the only AI video on Commons, then it would be in scope because no other examples or foo.AI outputs were available on Commons or no other examples of AI videos were on commons. Dronebogus (talk) 20:29, 1 March 2026 (UTC)Reply
 Oppose No need for this censorship of a production method and tool increasingly common throughout society. No right to force the bias or opinions of a few as repressive restrictions onto all, instead of looking at the case(s) at hand via standard procedures and existing policies. Prototyperspective (talk) 21:19, 1 March 2026 (UTC)
No need for this imposition of non-human-created slop on a project that features human-created, human-curated works that provide an educational resource increasingly common throughout society. No right to force the pro-AI POV, bias, or opinions of one AI advocate to open the floodgates to all AI advocates.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 22:10, 1 March 2026 (UTC)
No need to use the files if you don't like them. And it's not pro-AI POV bias; I just don't wish for this novel, increasingly common production method to be censored indiscriminately. And "floodgates" is a false description. You could start working on the actual flood of 92,000 files in Category:All media needing categories as of 2021 instead of forcing your censor-things-I-don't-like attitude onto others when there is no genuine problem so far, nor any flood at all. Prototyperspective (talk) 22:15, 1 March 2026 (UTC)
The flood is already here; Category:AI-generation related deletion requests is just what we've been able to catch since 2022-12-03.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 22:48, 1 March 2026 (UTC)
If you look at how many AI files are on Commons overall, that's a tiny fraction, e.g. far fewer than the uncategorized files of just one year, or the various kinds of useless photos, such as blurry photos or mundane photos of nothing in particular showing things there are already thousands of photos of. Moreover, the policy proposed here would increase rather than reduce the amount of work, and for no reason. At the least, it wouldn't really help with this, and low-quality files by non-contributors can already be speedy deleted. There are also lots of low-quality drawings and logos, yet drawings and logos aren't all banned. Prototyperspective (talk) 10:32, 2 March 2026 (UTC)
@Prototyperspective: So let's just ban all AI-generated content - less work, much brighter line.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 19:15, 2 March 2026 (UTC)
Goes back to what I said there. Also, people don't need to make these DRs and spend any time on them. Second, I understand that you do not recognize any usefulness in media produced this way (basically what's called a bias), but that doesn't mean it doesn't exist. Third, we'd get far more uploads that are not declared and labelled as made using AI, so it could just as well be more work. Fourth, we don't ban lots of other things with more DRs, or where the fraction of useful files is low, such as Category:MobileUpload-related deletion requests, Category:Nudity and sexuality-related deletion requests, etc. Things can already be easily deleted, and often speedily. Why should we ban a notable organization's logo just because it was made with a low-budget method that uses novel tools, for example? But let's not continue this discussion. Prototyperspective (talk) 22:15, 2 March 2026 (UTC)
 Comment - at the present time, the biggest issue I'm seeing with AI-generated content is users "retouching" photos using ChatGPT, Gemini, Apple Photos Cleanup, or other similar AI tools before uploading them to Commons. What most needs to change right now is the user messaging around this issue, not policy - something as simple as "if you're going to upload an AI image, please upload the original first, and don't upload AI images of people" would be a huge help. Omphalographer (talk) 03:20, 4 March 2026 (UTC)
+1 - Jmabel ! talk 03:23, 4 March 2026 (UTC)
Maybe if this doesn't pass we should just ban AI enhancement? Dronebogus (talk) 11:12, 4 March 2026 (UTC)
The problem is not editing with AI tools itself. The problem is how people do this. Removing a lens flare or dirt on the sensor with an AI tool in Photoshop or CaptureOne is fine. Uploading a photo to ChatGPT for the same purpose is not, as ChatGPT might change anything and not just what you wanted to be changed. GPSLeo (talk) 18:37, 4 March 2026 (UTC)
I agree with GPSLeo. There are already a fair number of good, specialized, AI-based graphics tools, but the attempts at general-purpose tools have largely shown that it is relatively easy to build an artificial bullshit artist, and much harder (at least for now) to build an artificial expert. - Jmabel ! talk 21:42, 4 March 2026 (UTC)
@Jmabel: I still remember bullshit artist en:User:Bad article creation bot.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 23:11, 4 March 2026 (UTC)
Maybe then, if it's not already mandatory, make it required to upload the original alongside the retouched version and disallow overwriting a non-AI-modified image with an AI-modified one. Any AI-retouched image without the original available should be speedy deleted. Dronebogus (talk) 04:43, 5 March 2026 (UTC)
Uploading the original for every file only because someone routinely runs dust-spot removal over all files seems completely exaggerated. Such a rule would be hard to fit into the workflow of most photographers. GPSLeo (talk) 05:27, 5 March 2026 (UTC)
We could work out a common-sense exception for trusted users who upload professional-grade photography and provide detailed specifications on their hardware (i.e. cameras) and software (i.e. what AI tool they used and how). I think 99% of cases where it's even evident AI has been used are your average joe shmoe single-upload user putting a grainy 100px historical image through slop.ai to make it 200% more betterer and inadvertently adding Bigfoot into the image. Dronebogus (talk) 06:31, 5 March 2026 (UTC)
"Uploading the original for every file only": one could require them to upload the untouched original as the first version and only upload modified ones as new revisions of the file. In the file history section users can then still see the other version(s). Prototyperspective (talk) 11:56, 5 March 2026 (UTC)
"workflow of most photographers": still, all things being equal, barring copyright or personality rights issues, it is certainly best practice for documentary photography to make your original photo, straight from the camera, available (and, typically, overwrite that with the preferred version). I'll admit I'm not 100% on doing that myself, but I'm close. And that is entirely independent of AI-driven tools, which I don't use. Typical examples: File:Nicolae Tonitza - Portretul lui Gala Galaction (Omul unei lumi noi) (1919-1920).jpg, File:Ithaca, NY - W State Street, looking west from S Cayuga Street.jpg. I would not require this, but it certainly can be a lot clearer than a verbal description of retouching. - Jmabel ! talk 00:02, 6 March 2026 (UTC)
We have so many complaints from good photographers who want to contribute but fail because of the technical difficulties. Asking them to upload the original and then the edited version would make the process even more complicated. GPSLeo (talk) 07:35, 6 March 2026 (UTC)
"required to upload the original": that would completely eliminate anything from third parties. - Jmabel ! talk 23:49, 5 March 2026 (UTC)
I was referring to uploads where the original is by the uploading user or the user has access to it. If the unedited original is not available to them because only the modified version was posted online, that obviously makes it something that can't be expected of them. Users who forgot to do so could be asked to upload it as a new revision and then revert the revision. Prototyperspective (talk) 11:25, 6 March 2026 (UTC)
"much harder … to build an artificial expert": that's the wrong way to use these tools – they are not there for any of the expertise; the expertise should be about 100% in the human who uses these tools in often sophisticated ways, not in the tool. Prototyperspective (talk) 11:54, 5 March 2026 (UTC)
From what I've seen some users say in response to DRs, part of the problem is that many consumer AI tools (including, but not limited to, ChatGPT) simply don't behave predictably when processing images. Sometimes they'll do an acceptable job of retouching an image - e.g. removing dust and scratches, colorizing black and white photos, adjusting levels and contrast - and sometimes they'll go off the rails and completely recreate an image from "memory", introducing changes in the content of the image. It's not clear what controls how these tools will behave, or if it's even possible to reliably control them. And unless we can give users specific, reliable advice on how to use these tools responsibly, the safest option will be to advise against using them. Omphalographer (talk) 22:13, 4 March 2026 (UTC)
As an example: the uploader of File:201A Tube characteristics.png used a "text recognition" feature in (Microsoft) "Word with Copilot" which replaced all the labels in the chart with nonsense. (Worse: it wasn't even the usual unreadable text - most of it was contextually appropriate nonsense, making the problem harder to notice.) The original has been uploaded now, but you can compare it to the modified version in the file history. Omphalographer (talk) 03:19, 5 March 2026 (UTC)
"Removing a lens flare or dirt on the sensor with an AI tool in Photoshop or CaptureOne is fine. Uploading a photo to ChatGPT for the same purpose is not": this comes from inexperience with these tools – a valid point in principle, but there are now tools where you can select the part of the image to change and describe how, so they do the same as those other tools, just much more easily, at lower cost, quicker and often better. Prototyperspective (talk) 11:55, 5 March 2026 (UTC)
 Oppose I think we're doing OK with the slow accretion of guidelines and best practices regarding AI. This one goes far enough to be a non-starter. "It is currently being used" kind of doesn't make sense without an additional exception, as nothing is in use at the time of upload, but it must be in scope to be uploaded. — Rhododendrites talk 14:48, 5 March 2026 (UTC)
I agree the proposal is DOA in its current form, but this discussion has produced a lot of constructive criticism I'll apply to a revised version. I still absolutely believe Wikimedia needs to take a hard line against generative AI (just like crypto and all the other toxic, kleptocrat-driven web 3.0 bullshit being forced down our throats). But we also need to be able to talk about generative AI in an educational context. I want Commons to have a broadly anti-AI policy written down that also accommodates the necessity of hosting AI-generated content to illustrate and discuss such content, in a way that feels sensible and doesn't rely on being either extremely vague or extremely specific. Dronebogus (talk) 14:58, 5 March 2026 (UTC)
Millions of people, and lots of countries and their education systems, think differently. There is no reason to make Commons very biased in one way or the other, to exclude lots of content, or to take a political stance on this. Your view of this novel technology is your opinion. Prototyperspective (talk) 15:05, 5 March 2026 (UTC)
You are literally the only person I've ever encountered passionately defending generative AI garbage who doesn't appear to have an economic stake in it. The broad consensus of the general online public that actually bothers to voice an opinion is that nearly all generative AI technology and output sucks. I'd say it's a solution in search of a problem, but that's too generous. It's a "solution" to the "problem" of needing humans to produce creative works. And before you say "it gives people who can't do x a chance to do x": that's a feature of being human, not a bug. If you can't do x, you either learn or ask someone else! That's like the idea behind Wikimedia! Generative AI as it currently stands is directly contrary to this idea of human beings sharing knowledge and skills! Dronebogus (talk) 15:15, 5 March 2026 (UTC)
That doesn't surprise me – related concepts are 'echo chamber', 'filter bubble', and 'confirmation bias'. And that's not the online consensus at all, which is a bad way to assess consensus anyway. Generative AI as it currently stands is directly supportive of the idea of human beings sharing knowledge and skills, as more people have access to better idea/concept visualization and more media depictions can finally enter the public domain/creative commons. Prototyperspective (talk) 15:18, 5 March 2026 (UTC)
(Edit conflict) Well, this opinion is shared by a lot of people. We need to be very cautious about such generalizations. The dominant discourse is pro-AI, but it doesn't mean the majority of people are pro-AI. At the very least, most people I know are very skeptical or critical about AI. I don't know how we should formulate Commons policies about AI, but we should keep an independent and critical view about it. Yann (talk) 15:18, 5 March 2026 (UTC)Reply
The dominant discourse is pro-AI if by “dominant” you mean “rich and loud”. If you look at social media, comments sections, youtubers, artists, people on this very website, it’s overwhelmingly negative. Dronebogus (talk) 15:21, 5 March 2026 (UTC)Reply
If I look outside of reddit and Wikipedia, it's nuanced and/or positive. In any case, that's a bad way to gauge the public view; for example there are people stoking up divisions and polarizations, paid commenters, algorithms that drive disagreement and upset, etc. It doesn't matter either way what the majority opinion on this is. We don't censor lots of other things that people don't like – people are free to hate these things and not use them. “Rich and loud”? The loud ones are the ones being hyperbolic, non-nuanced haters of anything that has anything to do with generative AI. Prototyperspective (talk) 15:27, 5 March 2026 (UTC)Reply
“People stoking up divisions and polarizations”? Because they are voicing their honest dislike of this technology and what it’s doing to art and culture? “Paid commenters”? Yes, I’m sure there’s big money to be made trashing big tech’s new favorite thing in the whole world, something that basically prints money for free. “It doesn't matter either way what the majority opinion on this is”: public opinion does actually matter. “We don't censor lots of other things that people don't like”: I’m not saying we should censor it, but just like how w:wp:gratuitous states we shouldn’t use explicit images to illustrate non-explicit subjects, I don’t think we should use AI to illustrate topics unrelated to AI. “Hyperbolic nonnuanced haters of anything that has anyhow to do with generative AI”: I don’t hate or disapprove of generative AI 100%, if w:Neuro-sama counts as generative AI. And while I don’t exactly like that he used it, ZUN also used AI in the latest Touhou game and took pains to demonstrate how to use it in an ethical manner that doesn’t negate the importance of real, serious human contribution. Dronebogus (talk) 17:11, 5 March 2026 (UTC)Reply
I was giving examples of why it's not a good idea to base things on personal subjective impressions of online opinion. There are financial interests for and against various kinds of AI uses and AI use in general. And if we censored away everything we feel is widely disliked, we may be moving to censoring videos of sexual intercourse, homosexuality, fetishes, religious desecration, and political caricatures next. And claiming you are not saying/proposing something does not make it so. Prototyperspective (talk) 17:59, 5 March 2026 (UTC)Reply
FWIW, I'm pretty neutral on the long-term potential of generalized AI, but so far we are at a phase similar to when Ambrose Bierce remarked about electricity circa 1890 that so far it had been shown that it could pull a streetcar better than a candle and light a room better than a horse. - Jmabel ! talk 00:10, 6 March 2026 (UTC)Reply
Yes, I feel the same way. Only it’s worse than just inferior; it’s actively harmful. AI as a concept has potential, but right now it’s being applied fast-and-loose in places it doesn’t need to be applied, or places it could be applied responsibly but isn’t. It’s more like how back in the early-mid 20th century we thought we’d be warming our hands by a lump of radium in the fireplace— yeah, radioactivity is useful, but not like THAT. Thank god no-one started putting radium fireplaces in homes by default like every tech corporation is doing with AI in everything. Dronebogus (talk) 06:07, 6 March 2026 (UTC)Reply
Okay, so how much have you used the latest AI? I felt like this about LLMs (because they just parrot things to sound plausible, not accurate) but these aren't LLMs, and it's not about how we feel. I doubt you have used them for coding, diagrams, creative ideas you didn't have time for, or specific images you have in mind, spending hours to create them. In this area it often feels like people have super strong opinions and extensive advice to give but little experience or data underneath it. I'm not saying it's not harmful or that it isn't currently overdone, but knee-jerk reactions to e.g. companies scrambling to put AI into everything where it's not needed/wanted/useful, or sensationalist media coverage relating to some real issues, aren't helping and additionally would further the perception that they're entirely useless and a problem when reality is more nuanced than that. Prototyperspective (talk) 11:34, 6 March 2026 (UTC)Reply
I was just thinking about how generative AI is like nutrient paste in RimWorld: maybe you don’t care that relying on nutrient paste puts talented, passionate chefs out of a job because now everyone can be a “chef” at the push of a button. Maybe you can justify the space wasted by the room-sized dispenser by pointing out a regular stove uses slightly more electricity and is far less efficient in its output. Maybe you think a human cooking a delicious meal is functionally identical to the dispenser grinding up the ingredients into flavorless mush. Maybe you even like nutrient paste and know lots of people who do. But the fact is most people hate eating nutrient paste. They don’t like seeing a freezer stocked with nutrient paste meals. They don’t like biting into their food and finding out it’s actually just paste. They don’t like being forced out of the cooking jobs they spent years honing and getting replaced by “nutrient paste engineers” (which isn’t a real job in RimWorld, just like how “prompt engineer” isn’t a real job IRL). You can start your own colony with a cult of transhumanism that mandates that everyone eat nutrient paste, and attract lots of like-minded nutrient paste eaters to your colony, but most of us at the Wikimedia colony would just like to eat real human food. Dronebogus (talk) 12:07, 6 March 2026 (UTC)Reply
Why would one eat nutrient paste if the other tastes better?
If one has the option of both in a specific case – say a specific meal occasion (such as a lunch during travel on day xy) – I see no reason to pick it. Especially when both meals are equivalent, or the nutrient paste is better because e.g. it's healthier and tastes better, then why the heck should I be forced to only eat the manmade dish with other options being prohibited? If you think that, for cases where both are available, the latter is intrinsically better due to being manmade/handmade the traditional way, then you're free to have this opinion but shouldn't insist on everybody adopting the same view. Btw, the idea/philosophy has some resemblance to this. Prototyperspective (talk) 12:27, 6 March 2026 (UTC)Reply
That’s the thing: nutrient paste can technically meet your colonists’ raw nutritional requirements, and extremely efficiently too, but it tastes disgusting unless you are an ascetic who doesn’t care about taste or have adopted a pro-nutrient-paste ideology. To use a real world example: the w:dilberito, which was basically real-life nutrient paste. It was supposed to be the next big thing in food. It (supposedly) provided everything your body needed, but it apparently tasted awful. It was only acceptable fare to people who can eat without concern for taste (and maybe like two people who actually enjoyed it). The point is AI generated content may be able to technically meet the minimum requirements of whatever it’s being used for, but most people think it’s about as palatable as nutrient paste or a dilberito. And putting AI in an article or whatever is like putting nutrient paste in a meal at a restaurant: you can order something else, but if I want this meal I have to eat the paste as part of it. Dronebogus (talk) 12:50, 6 March 2026 (UTC)Reply
“Most people think it’s about as palatable as nutrient paste or a dilberito”: you think that. I don't. Millions and probably most people don't; in my country, I think, it seems to be most people. Regardless of what they think, we shouldn't censor things based on taste. There are countries where homosexuality is punished and acceptance of it is a minority view. That files are on Commons doesn't mean they have to be used. It's not technical requirements but holistic all-criteria requirements, which is broader than making some production-methodology criteria you personally are a fan of the critical decisive top criteria. Prototyperspective (talk) 12:54, 6 March 2026 (UTC)Reply
“Millions and probably most people”: uh, citation needed. I at least have anecdotal evidence a lot of people do not like AI. I can point out English Wikipedia, the biggest Wikimedia site by far and one of the biggest websites on the planet, has a laundry list of policies, essays, and guidelines on AI that are mostly negative. I could point out the lengthy “concerns” section on the AI boom article, or the existence of w:AI slop as a concept and term. I could point out the extremely negative reaction to uses of AI in the media, like w:It's the Most Terrible Time of the Year, or the backlash against w:Théâtre D'opéra Spatial. You are relying on a silent majority that possibly doesn’t even exist, and comparing hostility towards AI generated content (a new and highly controversial concept/technology) to intolerance of homosexuality (a natural, healthy behavior among humans and animals that nevertheless results in people getting marginalized, hurt, and killed by ignorant individuals and societies). Dronebogus (talk) 13:10, 6 March 2026 (UTC)Reply
I'd say citation needed for your claims. Given that millions use these tools, it's not a stretch but near-self-explanatory. But again, it's not about and should not be about what the dominant or >50%-majority contemporary opinion on a subject is. I'm sadly well aware that many people believe the mere existence of the term "AI slop" can settle debates, or is a strong point, or at least slightly convincing. The majority goes about their day and either uses the tools at work or for fun or daily-life things and/or doesn't bother about how Wikimedia projects handle this. I'm not "relying" on them because, again, majority taste and sentiment aren't what matters. Prototyperspective (talk) 13:29, 6 March 2026 (UTC)Reply
Okay, let’s assume you’re right that, yes, a majority of people like or don’t care about AI. A non-trivial minority really does not like it. There is no offense to either camp in using exclusively human-made files in the vast majority of contexts. However, the anti-AI camp is offended by the use of AI and the pro-AI camp’s use and subsequent justification of it; both parties come out unsatisfied and hostile toward each other with no real benefit to show for it. So human content is a win for both parties and AI is a loss for both parties. Dronebogus (talk) 13:44, 6 March 2026 (UTC)Reply

In addition to what Dronebogus says, with which I totally agree, we cannot say that AI-generated and human-generated content are a “free” choice. Because AI use is cheap and easy for the end-user, many people are tempted to use it. But it is neither free nor easy for society in general. This is not a competition on equal terms. Yann (talk) 13:51, 6 March 2026 (UTC)Reply

It's not a competition to begin with. Computer use is neither free nor easy for society in general. Prototyperspective (talk) 13:59, 6 March 2026 (UTC)Reply
It gets easier and, measured in money, cheaper every day - to the point where you can just talk to a device and ask it to have media generated along your guidelines. The upload to Wikimedia is just a formality. So, no argument here. Alexpl (talk) 21:52, 6 March 2026 (UTC)Reply
That is speculation and is currently not true, except if quality and accuracy are none of your criteria and/or if it's something quite simple. I did not make a new 'argument' there but just addressed two claims in the prior comment and showed how these are basically false. If it gets easier and cheaper to create good-quality, useful visual illustrations for subjects where these would be useful, then that's great. Prototyperspective (talk) 22:05, 6 March 2026 (UTC)Reply
That is the core of your misconception— that AI art is good, even disregarding personal taste. AI art is frequently full of errors and “hallucinations”. Even if accurate, it simply doesn’t inspire trust among anyone with critical thinking skills who has seen the utter BS it has spit out in the past. So going back to the nutrient paste analogy, it’s like there being a non-zero chance of the nutrient paste containing toxic waste to make it seem more substantial. A human chef might cook food badly or improperly, resulting in anything from a lousy meal to food poisoning, but they won’t put toxic waste in your food and lie about it meeting your nutritional requirements. Dronebogus (talk) 11:00, 7 March 2026 (UTC)Reply
I don't want Prototyperspective and his ilk piping in the electronic version of toxic waste.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 11:08, 7 March 2026 (UTC)Reply
For the venue of a revised proposal mentioned above (14:58, 5 March 2026 (UTC)), I would encourage making a global RFC page. With the right timing and preparation, you can use m:CentralNotice, too. I think a broad restriction (or acceptance) of AI-generated media is a wider issue than just Commons. Of course, Commons editors can be invited to comment on it. I'm happy to help drafting an RFC like that. whym (talk) 09:32, 10 March 2026 (UTC)Reply

Proposal: Allow file movers to delete single-revision redirects during file moves

[edit]

I would like to propose adding the delete-redirect right to the file mover user group on Wikimedia Commons. This would allow file movers to delete single-revision redirects when they block a file move.

Background

[edit]

On Wikimedia Commons, file renaming is performed by users with the file mover or sysop right. However, when the destination title already exists as a redirect, the move can fail even if that redirect is trivial.

In such situations, file movers must request administrator assistance to delete the redirect and complete the move. In many cases, these redirects are:

  • created automatically by previous file moves
  • redirects with only one revision
  • redirects with no meaningful history or content

Despite being technically trivial, these situations require administrator intervention, which creates unnecessary delays and additional administrative work.

Existing precedent

[edit]

Similar issues have been discussed in the context of page moves on other Wikimedia projects. MediaWiki development work has recognized that single-revision redirects generally have no meaningful history and can safely be removed when they block a move operation.

The purpose of the delete-redirect capability is not to grant general deletion powers, but to allow the system to remove trivial redirects automatically during a move action.

Proposed change

[edit]

Grant the delete-redirect user right to the file mover group on Wikimedia Commons.

In practice, this would allow file movers to delete redirects only when all of the following conditions are met:

  1. The file is a redirect.
  2. The redirect has only one revision.
  3. The deletion occurs as part of a file move operation.
  4. The redirect would otherwise block the move.

This would not grant file movers general file/page deletion rights.
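For reference, the requested grant amounts to a single permission line. A rough sketch in plain LocalSettings.php form (illustrative only; on Wikimedia wikis the actual change is made in the central site configuration through a Phabricator request, not in LocalSettings.php):

```php
// Sketch only: grants the existing MediaWiki "delete-redirect" right,
// which allows deleting a single-revision redirect that blocks a page
// move, to the filemover group.
$wgGroupPermissions['filemover']['delete-redirect'] = true;
```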

Benefits

[edit]

This change would:

  • reduce routine administrator workload
  • speed up routine file renaming
  • eliminate many trivial admin requests
  • make the file mover workflow more efficient

Commons contains millions of files and frequent renaming requests. Allowing file movers to resolve these minor redirect conflicts directly would streamline maintenance without introducing meaningful risk.

Safeguards

[edit]

The proposal is intentionally limited:

  • Only single-revision redirects can be removed.
  • The deletion occurs only within the move process.
  • File movers would not gain general deletion rights.

Request

[edit]

I would like to gather community feedback on whether the file mover group should be granted the delete-redirect right for this limited purpose.

If there is consensus, a configuration change could be requested via Phabricator. Regards, ZI Jony (Talk) 08:44, 5 March 2026 (UTC)Reply

Comments

[edit]
  • Just a few questions:
  • Which problem would be solved? Unlike articles on WP file names can be / are trivial on Commons. There may exist a zillion files of a woodpecker, differing by a number, situation, action of the bird, etc. If renaming is blocked, one could add a number to the file name.
  • Can this have disadvantages? Such as wheelwarring about a filename? Regards, Ellywa (talk) 11:32, 7 March 2026 (UTC)Reply
Ellywa, thanks for raising these points.
I agree that this situation is probably not very common, and the proposal is not meant to solve a large systemic problem. It is more about handling those occasional cases where a technically trivial redirect blocks a move and requires unnecessary admin intervention.
For example, in the current request to rename File:2020 New Jersey Question 1 results by county.svg to File:2020 New Jersey Question 1 results map by county.svg, the destination title already exists as a redirect pointing back to the original file. Even though this redirect has no meaningful history, the move cannot proceed unless an administrator deletes it first, or the administrator does so themselves.
This is exactly the kind of situation the proposal tries to address. The redirect is simply a leftover technical artifact, but resolving it still requires admin involvement.
Of course, a file mover could choose a slightly different name instead, but in cases where the requested title is the most accurate or natural one, it would be helpful if trivial single-revision redirects like this could be removed as part of the move process.
So while the case may be rare, the idea is to make these small maintenance tasks smoother and reduce minor admin requests when the redirect involved has no real content or history. Regards, ZI Jony (Talk) 06:51, 8 March 2026 (UTC)Reply
  • A filemover could move the redirect itself to an intermediate name (without leaving another redirect), then move the original file (again without leaving a redirect), then move the intermediate-name redirect to the original source name of the move, changing it to point to the new name. Certainly, being able to delete that redirect and then do a normal move is easier, and maybe leaves a better history, so if the safeguards can be implemented to not delete redirects with history (I have little idea about that) it's probably fine. But it seems to me like it's still possible to avoid involving admins even now. Carl Lindberg (talk) 19:40, 8 March 2026 (UTC)Reply
    Carl Lindberg, thank you for explaining that workaround. You are correct that it is technically possible to complete the move without admin involvement by moving the redirect to an intermediate title and then performing a sequence of moves. However, in practice that approach has a few drawbacks.
    First, it requires several additional steps compared to a normal move. Instead of one straightforward move, the file mover has to perform multiple moves and carefully manage redirects in between. For routine file renaming work this quickly becomes cumbersome.
    Second, it can make the page history less clear. Multiple intermediate moves may create a more complicated history that is harder to follow later, whereas deleting a trivial single-revision redirect and performing a normal move keeps the history cleaner and easier to understand.
    Third, while the workaround avoids direct admin involvement at that moment, it still creates extra maintenance work overall. File movers need to spend additional time performing the workaround, and sometimes the intermediate redirects created during the process may later require cleanup anyway.
    The intention of this proposal is simply to allow file movers to resolve these very limited situations in a straightforward way when the blocking redirect has only a single revision and no meaningful history. It would not grant general deletion rights, but would remove the need for workarounds or small admin requests in these cases.
    So while the workaround exists, the proposal aims to make the workflow simpler and cleaner for those occasional cases where a trivial redirect blocks a file move. Regards, ZI Jony (Talk) 13:11, 9 March 2026 (UTC)Reply
  • Can the software even detect (at the relevant time) that there is a single-revision redirect and allow a user who does not normally have deletion rights to make a deletion? If this requires a non-trivial software change by a WMF engineer, that would seem to me to be wildly out of proportion to any benefit here, especially given how little resource WMF is devoting to support for Commons overall. - Jmabel ! talk 20:59, 9 March 2026 (UTC)Reply
    @Jmabel: I know for sure, from my DE-WP experience, that you could for years overwrite a redirect page that consists of only a single revision when doing a page move. Look at this recent example from a few minutes ago. So, any needed code is extant, I think. Regards, Grand-Duc (talk) 21:11, 9 March 2026 (UTC)Reply
    Jmabel, As far as I understand, this would not require any new or complex software development. MediaWiki already has logic to detect when the target page of a move is a redirect and how many revisions it has. The proposal is simply about allowing the existing delete-redirect capability to be used by the file mover group during a move operation.
    In other words, the software already checks the revision count of redirects when handling moves. The idea here is that if the redirect has only one revision and it blocks the move, the system could allow the redirect to be removed automatically as part of the move action. If the redirect has more than one revision, then the normal process would still apply, and an administrator would be required to delete it.
    So the scope is intentionally very narrow: it would only work during a move action and only for single-revision redirects. It would not grant general deletion rights.
    For related background and technical discussion, see T239277. Regards, ZI Jony (Talk) 10:15, 10 March 2026 (UTC)Reply
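To make the intended gating concrete, here is a minimal sketch of the decision described above (illustrative only, not the actual MediaWiki code; the real check lives in core's page-move logic, see T239277):

```python
# Illustrative sketch of the four conditions from the proposal; the real
# check is implemented inside MediaWiki core's page-move handling.
def mover_may_remove_target(is_redirect: bool, revision_count: int,
                            part_of_move: bool) -> bool:
    """A file mover may only remove the blocking target when it is a
    single-revision redirect and the removal happens during the move."""
    return is_redirect and revision_count == 1 and part_of_move

# A redirect with edit history keeps requiring an administrator:
print(mover_may_remove_target(True, 3, True))  # False
```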
  • Ellywa, Jmabel, and Carl Lindberg: would you like to give your opinions? Regards, ZI Jony (Talk) 00:30, 16 March 2026 (UTC)Reply
    • As long as it isn't a big resource ask (and it seems it isn't), I don't care a lot. I just hope that anyone who starts deleting stuff actually knows what they are doing. I wouldn't give a lot of slack to someone who did this and deleted redirects that should have been kept. - Jmabel ! talk 02:12, 16 March 2026 (UTC)Reply
      My question about wheelwarring around a file name has not been answered. Currently there are 1,754 filemovers. If a few of them start warring, it will result in more workload for the admins than doing some handwork for renaming. There is no way we can be certain that all these people will stick to the rules about renaming. Perhaps filemovers who need more possibilities can apply for adminship. Ellywa (talk) 07:39, 16 March 2026 (UTC)Reply
      If a few of them start warring they will lose their filemover privileges. Period. - Jmabel ! talk 17:31, 16 March 2026 (UTC)Reply
      Thanks Jmabel, that’s a fair point, and I agree. The intention here is to keep this limited to very clear-cut cases, so it stays low-risk and doesn’t create extra work. Regards, ZI Jony (Talk) 09:27, 17 March 2026 (UTC)Reply
      Ellywa, thanks for following up. Let me address the wheel-warring concern directly. This proposal is limited to single-revision redirects with no meaningful history, which are typically just technical leftovers from earlier moves/creation. In that sense, it does not expand the scope of *what* can be contested in naming, only removes a technical step (admin deletion) in cases where the redirect itself has no substantive value. If filemovers were to start warring over filenames, that would already be an issue under current practice, regardless of this change. This proposal does not introduce a new type of conflict, but only changes how a trivial redirect is handled during a move. As with other filemover actions, expectations around appropriate use remain the same, and misuse (including repeated or contested moves) can already lead to the removal of the permission. So the safeguard here is behavioural enforcement, which already exists today. In short, the proposal reduces a small amount of technical friction, but does not change the underlying rules or increase the scope for disputes. Regards, ZI Jony (Talk) 09:27, 17 March 2026 (UTC)Reply

Support

[edit]
  1.  Support Schlurcher (talk) 08:17, 8 March 2026 (UTC)Reply
  2.  Support with the proposed safeguards. Tvpuppy (talk) 11:55, 8 March 2026 (UTC)Reply
  3.  Support.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 11:59, 8 March 2026 (UTC)Reply
  4.  Support. Rehman 15:43, 8 March 2026 (UTC)Reply
  5.  Support --Wolfy13399 (talk) 14:54, 11 March 2026 (UTC)Reply
  6.  Support HurricaneZetaC 21:15, 11 March 2026 (UTC)Reply
  7.  Support Shaan SenguptaTalk 03:07, 12 March 2026 (UTC)Reply
  8.  Support, I'm already used to this feature from DE-WP. It's hardly ever a source of problems AFAIK, and I'm willing to support changes that better the processes surrounding file moves (especially when speaking about redirects left after a COM:FR#FR3 "erroneous filename" move, like wrong species names in biology). Grand-Duc (talk) 11:57, 17 March 2026 (UTC)Reply

Oppose

[edit]
  1. --

Neutral

[edit]
  1. --

Feature Request: Revert to the original Vector 2010 design.

[edit]

The new layout is horrible. I understand that it came out around late 2022 for new users to read the website better, but let's face it, it came out during a mass pandemic back when everyone was stuck inside and DEPRESSED. Also most people still have family computers (including my family), so if the redesign is a response to everyone trying to access wikipedia from their iphones, there's an APP for that. Also the entire point of the website is for education, right? So the new design actually defeats the point of using the website to begin with. The 2022 layout actually removes side links and those fold-out bars on the bottom of wikipedia pages so that you can't learn more about a topic, which prevents you from learning more about something, and re-adding them would not only look horrible but, given the current design, might not even be possible. Plus more people have started going back to the original color design by repainting their homes, so why should wikipedia be any different?

Plus, changing the current design to the vector 2010 skin would be extremely easy and wouldn't require that much effort.


If you want to support this argument, do this: Download the old wiki or old wiki redirect extension on either google chrome or mozilla firefox.


See what layout you find better, the original one with the quick links on the side and the information tabs at the bottom of the website, or the current design.


Ok, let me make a point about a few arguments I might get.

"You're just resistant to change"

Depending on where you live, you may have noticed other people repainting their houses with the color design for the same reason, because they couldn't deal with the grey modern design that just looks horrible.

Also there's a psychological effect of the more minimalist designs, and even if the claim is that the design helps new users read because there's less space, like I mentioned before, wikipedia came out with their own app on the iphone YEARS ago.

There's no difference between a high school student using wikipedia for history class on the Gilded Age and my annoying younger cousins learning how to use the website on the family computer like I did when I was really little.

"The current skin helps reading comprehension for new (younger) users because there's less stuff on the website"

Actually, there's no difference between someone in high school using wikipedia for class and newer users using it for school. I learned how to use the internet on the family computer, so what's the difference?

"You can just change the website back to the old design on your account"

Not everyone is good with computers and knows how to do that. Plus, wikipedia stops people from creating an account on ANY public internet, so even if you go to your local library and try to create an account or do that at school, it doesn't work.

If you have to use the built in email feature on wikipedia to create a new account so that you can change the design and the new design slows down you from learning anything, you're probably going to end up with your parents getting pissed off because you got an F on your report card as a result.


"The people that work at the company that maintains wikipedia can just add the features found in the original design and use that on the current skin"

Not Exactly.

The new design not only prevents you from adding the links on the side of the website, but it would also look horrible.

Which slows you down from learning anything, where the original design from 2010 didn't, and had features like side links or popout tabs on the bottom of the page.

Also when was the last time you actually saw someone using wikipedia in high school? I haven't seen anyone use it at my school. Jelleyjelly (talk) 02:18, 6 March 2026 (UTC)Reply

Only a small fraction of people use the app instead of mobile Web and this is Commons, not Wikipedia where an even lower fraction uses the Commons app. I think proposals would have more likelihood of getting implemented if you were requesting specific changes to the new skin or some new configurability for it by which Commons could adjust how it looks. Could you describe/name very briefly (this is a long post), which exact things you don't like about the new UI? The sidebar is there by default unless it has been hidden. Prototyperspective (talk) 11:39, 6 March 2026 (UTC)Reply
The current design is, generally speaking, insanely difficult to navigate, where the original is much easier to use and isn't in your face. I don't think there's a good way to fix the current design.
Original (vector2010):[6]
Current:[7] ~2026-14584-69 (talk) 00:44, 7 March 2026 (UTC)Reply
Well the TOC on the side does make it easier to navigate and closing the right or both panels in your screenshots would solve the narrow space issue. Prototyperspective (talk) 21:32, 8 March 2026 (UTC)Reply
"2022 layout actually removes side links and those fold out bars on the bottom of wikipedia pages": @Jelleyjelly perhaps you are on mobile view? I assume you are referring to English Wikipedia, and I can still see the "side links" and the "fold out bars on the bottom" using the desktop view of the new 2022 layout, so they definitely did not remove them. Thanks. Tvpuppy (talk) 15:29, 6 March 2026 (UTC)Reply
Yeah but the vector 2010 skin had a directory of links, not a drop down menu with a ton of stuff removed and that made it easy to use ~2026-14584-69 (talk) 00:48, 7 March 2026 (UTC)Reply
@~2026-14584-69: It didn't support dark mode either. However, if you want to go back to using it like it was in 2010-2021, you may append "useskin=vector" to the page URL (with "?" or "&" as appropriate) or set your appearance/rendering preferences as a logged-in user in Special:Preferences#mw-prefsection-rendering. YMMV as a temporary account; incognito, I get "Please create an account to change preferences".   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 02:21, 7 March 2026 (UTC)Reply
Ok yeah but what about everyone else that's either not great with computers or doesn't know how to do that? Why not just make it the default, not just for people like me that use the useskin=vector all the time? ~2026-14584-69 (talk) 03:18, 7 March 2026 (UTC)Reply
@~2026-14584-69: That would not be progress. I resisted the new looks for a while in favor of Monobook, but dark mode won me over.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 10:38, 7 March 2026 (UTC)Reply
Yeah, but not all browsers have dark mode. Also, even if that were true, why not make an extension that adds dark mode to the Vector 2010 skin? ~2026-14678-29 (talk) 13:50, 7 March 2026 (UTC)Reply
this would be sick. -Nard (Hablemonos) (Let's talk) 14:34, 7 March 2026 (UTC)Reply
 Oppose You can change your preferred skin in preferences. Regarding sidelinks and "fold out bars" (I guess you are talking about navigation boxes): you can still move the sidelinks to the sidebar, and navboxes are still visible on Vector 2022. Nemoralis (talk) 12:54, 9 March 2026 (UTC)Reply
 Oppose Seeing it from a UX POV: I agree that Wikipedia/WMC needs a more modern UI to give users the impression that Wikipedia is becoming more modern (psychological effect: people often perceive something as modern when it looks modern). Anyway, if you prefer another design, you can change it as proposed above --PantheraLeo1359531 😺 (talk) 16:29, 9 March 2026 (UTC)Reply

Possible upload: Leipzig address books

[edit]

Hi all,

I’ve compiled a list of public-domain Leipzig address books from the digital collections of the SLUB Dresden. They cover 1830–1937 and total about 100 PDFs. The books grow with the city's population; the 1937 edition contains about 2,000 pages.

My plan would be to upload them to Commons (with attribution to SLUB) as PDFs with an embedded OCR text layer (they are set in Fraktur, so they require some fiddling in Tesseract; I still haven't gotten it to recognize ligatures like tz, etc.). Just being able to search them would, I think, be tremendously useful. I would then like to create Wikisource index pages so that the OCR can be improved.

Before starting, I wanted to check whether:

  • these are already uploaded somewhere I may have missed
  • there is a preferred format (PDF vs DjVu)
  • there are recommendations for batch upload tools or workflows.

I am working from a data set that looks like:

https://gist.github.com/amundo/85d2cbff9efc7e17e384c767a310b1d4


Thanks! Babbage (talk) 15:51, 6 March 2026 (UTC)Reply
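For the batch-upload side of the question, the naming and description step can be scripted from the data set. Below is a minimal sketch: the field names ("year", the source URL) are assumptions about the gist's structure, the file-name pattern and the {{Information}} wording are only a starting point, and the permission tag is deliberately left as a placeholder to be filled with the appropriate PD rationale.

```python
# Sketch: derive a Commons file name and an {{Information}} description
# from one row of the address-book data set. Field names are hypothetical.

def commons_name(year: int) -> str:
    """Build a consistent Commons file name for one address book."""
    return f"Leipzig address book {year}.pdf"

def description_wikitext(year: int, source_url: str) -> str:
    """Build the file description page; the permission tag is a placeholder."""
    return (
        "=={{int:filedesc}}==\n"
        "{{Information\n"
        f"|description={{{{de|1=Leipziger Adressbuch, {year}}}}}\n"
        f"|date={year}\n"
        f"|source=[{source_url} SLUB Dresden, Digitale Sammlungen]\n"
        "|author=unknown\n"
        "|permission=<!-- appropriate PD tag here -->\n"
        "}}"
    )

print(commons_name(1937))  # → Leipzig address book 1937.pdf
```

The generated name/wikitext pairs could then be fed to a mass-upload tool such as OpenRefine or a Pywikibot script.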

@Babbage There are already some files from the SLUB in Category:Documents by Sächsische Landesbibliothek – Staats- und Universitätsbibliothek Dresden --PantheraLeo1359531 😺 (talk) 17:23, 10 March 2026 (UTC)Reply
Thanks PantheraLeo1359531 😺. I looked through the category but didn’t find any address books there. Babbage (talk) 14:16, 11 March 2026 (UTC)Reply
I don't know if there is an official format preference, but PDF is far more widespread. OpenRefine is a mass-upload tool, but it needs some knowledge to use --PantheraLeo1359531 😺 (talk) 17:23, 10 March 2026 (UTC)Reply
Thanks again. My current plan is to use Tesseract with Fraktur to try to get decent OCR and then upload the output as PDFs. Babbage (talk) 14:18, 11 March 2026 (UTC)Reply
If it is not about several tens of thousands of files, it would not be a huge problem if there are some duplicates. They can be deleted afterwards. OCR is highly appreciated :) --PantheraLeo1359531 😺 (talk) 15:02, 11 March 2026 (UTC)Reply
Cool. I am working to figure out how to get the OCR to handle columns better, pretty persnickety! Babbage (talk) 17:18, 11 March 2026 (UTC)Reply