From: Ángel González
Subject: Re: [Bug-wget] bad filenames (again)
Date: Sun, 23 Aug 2015 16:15:04 +0200
User-agent: Thunderbird
On 20/08/15 04:42, Eli Zaretskii wrote:
>> From: Ángel González
>> On 19/08/15 16:38, Eli Zaretskii wrote:
>>> Indeed. Actually, there's no need to allocate memory dynamically, neither with malloc nor with alloca, since Windows file names have a fixed size limitation that is known in advance. So each conversion function can use a fixed-sized local wchar_t array. Doing that will also avoid the need for 2 calls to MultiByteToWideChar, the first one to find out how much space to allocate.
>> Nope. These functions would receive full path names, so there's no maximum length.
> Please see the URL I mentioned earlier in this thread: _all_ Windows file-related APIs are limited to 260 characters, including the drive letter and all the leading directories.
Wrong. I can work with a larger one by using a UNC path.
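For instance, something like this (an untested sketch; the directory names are hypothetical, and I'm using the \\?\ extended-length prefix, which is also how the \\?\UNC\server\share form works):

#include <windows.h>
#include <stdio.h>
#include <wchar.h>

int main (void)
{
  /* Build a path well past MAX_PATH (260).  With the \\?\ prefix the
     wide-char file APIs pass the name through verbatim, and the
     effective limit grows to roughly 32767 WCHARs. */
  wchar_t path[600] = L"\\\\?\\C:\\tmp";
  for (int i = 0; i < 40; i++)
    wcscat (path, L"\\0123456789");   /* ~450 characters in total */

  HANDLE h = CreateFileW (path, GENERIC_READ, FILE_SHARE_READ, NULL,
                          OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
  if (h == INVALID_HANDLE_VALUE)
    wprintf (L"CreateFileW: error %lu\n", GetLastError ());
  else
    CloseHandle (h);
  return 0;
}

The point is that the API accepts the name; a fixed 260-element buffer would already have truncated it.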
>> * _Some_ Windows systems, when using _some_ filesystems / APIs, have fixed limits, but there are ways to produce larger paths...
> The issue here is not whether the size limits differ, the issue is whether the largest limit is still fixed. And it is, on Windows.
I had tried to skip over the specific details in my previous mail. I didn't mean that the limit would be bigger, but that there isn't one (that you can rely on, at least). On Windows 95/98 you had this 260-character limit, and you currently still do, depending on the API you are using. But that's not a system limit any more.
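That's why I'd keep the dynamic two-call pattern for the conversion functions instead of a fixed wchar_t array. A minimal sketch of what I mean (the function name and the CP_UTF8 code page are my assumptions, untested):

#include <windows.h>
#include <stdlib.h>

/* The first call asks MultiByteToWideChar how many WCHARs are needed
   (the count includes the terminating nul when the input length is -1);
   the second call performs the conversion. */
static wchar_t *
utf8_to_wchar (const char *utf8)
{
  int n = MultiByteToWideChar (CP_UTF8, 0, utf8, -1, NULL, 0);
  if (n <= 0)
    return NULL;

  wchar_t *w = malloc (n * sizeof (wchar_t));
  if (w == NULL)
    return NULL;
  if (MultiByteToWideChar (CP_UTF8, 0, utf8, -1, w, n) <= 0)
    {
      free (w);
      return NULL;
    }
  return w;   /* caller frees */
}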