- 01 Apr, 2022 1 commit
-
-
Rob Swindell authored
When loadfiles() calls sortfiles(), only the files' index records have been read in, so trying to sort on any header field won't work. This bug wasn't observable when sorting by date ascending, since that's already the natural index order of the files (the order imported/added); it was only observed when sorting by date descending (newest at the top).
-
- 30 Mar, 2022 1 commit
-
-
Rob Swindell authored
(I'm looking at you, mist1221.zip) ... so first try to extract DIZ files from the root of the archive, then try again searching nested directories too. <sigh>
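A minimal sketch of that two-pass search, assuming a flat list of archive entry paths; is_diz() and find_diz() are hypothetical helpers for illustration, not the actual extraction code (which operates on the archive itself):

```c
#include <stdio.h>
#include <string.h>
#include <strings.h>    /* strcasecmp() (POSIX) */

static int is_diz(const char* path, int allow_nested)
{
    const char* base = strrchr(path, '/');
    if(base != NULL && !allow_nested)
        return 0;                               /* nested entry: skip on the root-only pass */
    base = (base != NULL) ? base + 1 : path;
    return strcasecmp(base, "FILE_ID.DIZ") == 0;
}

static const char* find_diz(const char* entries[], size_t count)
{
    for(int pass = 0; pass < 2; pass++)         /* pass 0: root only, pass 1: nested too */
        for(size_t i = 0; i < count; i++)
            if(is_diz(entries[i], pass))
                return entries[i];
    return NULL;
}

int main(void)
{
    const char* entries[] = { "art/nested/FILE_ID.DIZ", "FILE_ID.DIZ", "readme.txt" };
    printf("%s\n", find_diz(entries, 3));       /* the root copy wins */
    return 0;
}
```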
-
- 24 Mar, 2022 1 commit
-
-
Rob Swindell authored
Credits and daily free credits are accurate to the byte up to a maximum of 18446744073709551615 (that's 16 Exbibytes minus 1 byte). Users' upload and download byte stats are now similarly extended in maximum range, but the accuracy is only "to the byte" for values less than 10,000,000,000. Beyond that value the accuracy declines, but is generally pretty damn accurate (to 4 decimal places beyond the nearest multiple of a power of 1024), so I don't expect that to be an issue. This method of storing upload/download byte stats allowed me to use the same 10-character user record fields in the user.dat file. As a side-effect of this enhancement:
* User and file credit values are now expressed in multiples of powers of 1024 (e.g. 4.0G rather than 4,294,967,296).
* Free credits per day per security level has now been extended from 32 to 64 bits (to accommodate values >= 4GB).
* adjustuserrec() no longer takes the record length since we can easily determine that automatically and don't need more "sources of truth" that can be out-of-sync (e.g. the U_CDT field length going from 10 to 20 chars with this change).
* Setting the stage for locale-dependent thousands-separators (e.g. space instead of comma) - currently still hard-coded to comma.
* More/better support for files > 4GB in size (e.g. in the batch download queue).
* user_t ulong fields changed to either uint32_t or uint64_t - I didn't realize how many long/ulong's remained in the code (which are sometimes 32-bit, sometimes 64-bit) - ugh.
* Steve's ultoac() function renamed to u32toac(), and a C++ wrapper created that still uses the old name, for homage.
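A hedged sketch of the kind of encoding described above; pack_bytes() is a hypothetical helper illustrating the scheme (exact decimal below 10,000,000,000, otherwise a power-of-1024 multiple with four decimal places, all within a 10-character field), not the actual user.dat storage code:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void pack_bytes(uint64_t bytes, char field[11])
{
    if(bytes < 10000000000ULL) {                /* fits in 10 decimal digits: exact */
        snprintf(field, 11, "%" PRIu64, bytes);
        return;
    }
    const char* suffix = " KMGTPE";             /* index 1 = KiB ... index 6 = EiB */
    double value = (double)bytes;
    int i = 0;
    while(value >= 1024.0 && i < 6) {           /* scale down by powers of 1024 */
        value /= 1024.0;
        i++;
    }
    snprintf(field, 11, "%.4f%c", value, suffix[i]);
}

int main(void)
{
    char field[11];
    pack_bytes(4294967296ULL, field);           /* below 10^10: stored exactly */
    printf("%s\n", field);
    pack_bytes(18446744073709551615ULL, field); /* 2^64-1: approximated, ~"16.0000E" */
    printf("%s\n", field);
    return 0;
}
```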
-
- 21 Mar, 2022 2 commits
-
-
Rob Swindell authored
Used the reserved 16 bits in the file index record to extend the maximum supported file size from 4294967295 (4GB) to 281474976710655 (281TB). I think that's big enough for the foreseeable future. :-)
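A minimal sketch of the idea, using hypothetical field names rather than the real index record layout: the formerly reserved 16 bits become the high word of a 48-bit size.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct idx_size {               /* hypothetical stand-in for the index fields */
    uint32_t size_lo;           /* original 32-bit size */
    uint16_t size_hi;           /* previously reserved 16 bits */
};

static uint64_t idx_get_size(const struct idx_size* idx)
{
    return ((uint64_t)idx->size_hi << 32) | idx->size_lo;
}

static void idx_set_size(struct idx_size* idx, uint64_t size)
{
    idx->size_lo = (uint32_t)size;
    idx->size_hi = (uint16_t)(size >> 32);  /* sizes above 2^48-1 would truncate */
}

int main(void)
{
    struct idx_size idx;
    idx_set_size(&idx, 281474976710655ULL); /* 2^48 - 1, about 281 TB */
    printf("%" PRIu64 "\n", idx_get_size(&idx));
    return 0;
}
```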
-
Rob Swindell authored
Size is indexed, so might as well sort by it. This does have an issue with files >= 4GB in length however, so I'm looking at that next.
-
- 02 Mar, 2022 1 commit
-
-
Rob Swindell authored
-
- 23 Feb, 2022 1 commit
-
-
Rob Swindell authored
If the extended description is UTF-8, first convert it to CP437.
-
- 28 Jan, 2022 1 commit
-
-
Rob Swindell authored
This change is just for internal consistency and convenience right now: the lib_t.vdir is a "sanitized" copy of the lib's short name (spaces are converted to dots or underscores, based on the logic that the FTP server used in dotname()) and the dir_t.vdir is just a pointer to the dir's code_suffix. No other permutations are made (e.g. lower-casing the strings).

Although the virtual directory names of libraries will now appear in mixed case in the FTP server (previously, they were all lowercase), the directory names are actually treated case-insensitively, so it should not make any difference. If forced-lowercase is preferred for some reason, please speak up.

This change leads the way to eventually, possibly, making these virtual path elements sysop-configurable. For now, it's just better to have a *copy* of the lib's short name that is appropriately modified to make a suitable directory name and have that vpath element available globally (to all servers and services) in a consistent manner.

So Nelgin asked (about filebase access via HTTP): what if the library short name has a space in it? The answer now is, the spaces are replaced with a '.', or with '_' if there are already dots in the name.
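A minimal sketch of that space-to-separator substitution, using a hypothetical make_vdir() helper (the actual behavior follows the FTP server's dotname() logic):

```c
#include <stdio.h>
#include <string.h>

static void make_vdir(const char* shortname, char* vdir, size_t size)
{
    /* use '_' when the short name already contains dots, otherwise '.' */
    char sep = strchr(shortname, '.') ? '_' : '.';
    size_t i;
    for(i = 0; shortname[i] != '\0' && i < size - 1; i++)
        vdir[i] = (shortname[i] == ' ') ? sep : shortname[i];
    vdir[i] = '\0';
}

int main(void)
{
    char vdir[64];
    make_vdir("Main Library", vdir, sizeof vdir);
    printf("%s\n", vdir);               /* "Main.Library" */
    make_vdir("Ver 3.19 Files", vdir, sizeof vdir);
    printf("%s\n", vdir);               /* "Ver_3.19_Files" */
    return 0;
}
```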
-
- 27 Jan, 2022 1 commit
-
-
Rob Swindell authored
This fixes issue #328. The user actually *can* remove files from the batch queues in v3.19b, but you have to type the filename, which is not obvious from the prompt; the prompt implies you need to type the file's index position (e.g. '1' for the first file in the queue). In all prior Synchronet versions, you could only remove by number (and not by name). The fix is to allow either the number or the name of the file to be entered at the RemoveWhich prompt; either way, the file is removed from the queue successfully. Thanks Ragnarok!
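A hedged sketch of that prompt handling with hypothetical helpers (not the actual sbbs_t code): digits are treated as a 1-based queue position, anything else is matched against the queued filenames.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>    /* strcasecmp() (POSIX) */

static bool all_digits(const char* s)
{
    if(*s == '\0')
        return false;
    for(; *s != '\0'; s++)
        if(!isdigit((unsigned char)*s))
            return false;
    return true;
}

/* returns the 0-based queue index to remove, or -1 if no match */
static int find_batch_entry(const char* input, const char* queue[], int count)
{
    if(all_digits(input)) {                     /* e.g. "1" = first file in the queue */
        int n = atoi(input);
        return (n >= 1 && n <= count) ? n - 1 : -1;
    }
    for(int i = 0; i < count; i++)              /* otherwise match by filename */
        if(strcasecmp(input, queue[i]) == 0)
            return i;
    return -1;
}

int main(void)
{
    const char* queue[] = { "syncterm-1.1-setup.exe", "readme.txt" };
    printf("%d\n", find_batch_entry("2", queue, 2));            /* 1 (by position) */
    printf("%d\n", find_batch_entry("README.TXT", queue, 2));   /* 1 (by name) */
    return 0;
}
```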
-
- 23 Jan, 2022 1 commit
-
-
Rob Swindell authored
The file_list[] parameter was expected to contain only files, but the directory() function (used to create that file_list[]) returns a list of all directory entries, including sub-directories. I could (and maybe will) add an option to directory() to include only files or only dirs, but this seemed the more direct fix for the problem reported by DesotoFireflite (VALHALLA): TickIT's nodelist_handler.js appears to be creating and leaving behind a sub-directory of the temp directory, triggering this error:

1/23 11:36:56a QNET libarchive error -1 (13 opening c:\SBBS\temp\event\nodelist_handler/) creating c:\SBBS\data\VERT.REP

Why isn't the temp directory fully cleaned up after/between events? That's another thing to look into.
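A minimal sketch of the kind of filtering involved (hypothetical helper using POSIX stat(), not the actual fix): skip anything that isn't a regular file before adding it to an archive's file list.

```c
#include <stdio.h>
#include <sys/stat.h>

static int is_regular_file(const char* path)
{
    struct stat st;
    return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}

int main(int argc, char* argv[])
{
    /* pass candidate paths on the command line, e.g. the output of directory() */
    for(int i = 1; i < argc; i++)
        printf("%s %s\n", is_regular_file(argv[i]) ? "adding" : "skipping", argv[i]);
    return 0;
}
```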
-
- 16 Jan, 2022 2 commits
-
-
Rob Swindell authored
-
Rob Swindell authored
- file_area.web_vpath_prefix
- file-metadata-object (return value of FileBase.get()) .vpath
-
- 14 Jan, 2022 1 commit
-
-
Rob Swindell authored
CID 345291: It's actually a false positive because if an extension (".suffix") exists in filespec, it must also exist in newfilespec since it's a copy, but whatever. It's better form to check.
-
- 11 Jan, 2022 2 commits
-
-
Rob Swindell authored
Defaults to 64 characters. The maximum value is 65535 characters, but filenames longer than 64 characters may be problematic (e.g. searching for them, displaying them, security concerns), so only increase with caution. Shorter values are fine, but 0 will just revert to the default.
-
Rob Swindell authored
As discovered while making the Synchronet v3.18b feature video (https://www.youtube.com/watch?v=_IWzIV0_sZ4), when only a shortened version of a long filename is displayed (e.g. due to 80-column terminal width limitations), trying to download that file by typing the displayed name at the "Download File(s) Filespec [All Files]:" prompt can be problematic. For example (as seen in the video), the file "SyncTERM-1.1-setup.exe" is displayed as "SyncTERM.exe" on an 80-column terminal, yet trying to download "SyncTERM.exe" (or "syncterm.exe") using the 'D'ownload command would fail to find a file with that name (understandably, but frustratingly so).

This change transforms the requested filename, if it is at least 12 characters in length and contains no wildcards (* or ?), to include a filename-extending wildcard: "filename.txt" becomes "filename*.txt" and "longfilename" becomes "longfilename*". For requested filespecs of NULL (all files), specs containing wildcards, or specs (filenames) less than 12 characters in length, no transform takes place: so trying to list/download "a" doesn't match "apple.txt".
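A hedged sketch of the described transform, using a hypothetical widen_filespec() helper (not the actual code):

```c
#include <stdio.h>
#include <string.h>

static void widen_filespec(const char* spec, char* out, size_t size)
{
    const char* ext;
    /* leave NULL/short/wildcarded specs unchanged */
    if(spec == NULL || strlen(spec) < 12 || strpbrk(spec, "*?") != NULL) {
        snprintf(out, size, "%s", spec ? spec : "");
        return;
    }
    ext = strrchr(spec, '.');
    if(ext != NULL)     /* insert '*' before the extension */
        snprintf(out, size, "%.*s*%s", (int)(ext - spec), spec, ext);
    else                /* or append '*' when there is no extension */
        snprintf(out, size, "%s*", spec);
}

int main(void)
{
    char out[256];
    widen_filespec("SyncTERM.exe", out, sizeof out);    /* 12 chars, no wildcards */
    printf("%s\n", out);                                /* "SyncTERM*.exe" */
    widen_filespec("a", out, sizeof out);               /* too short: unchanged */
    printf("%s\n", out);
    return 0;
}
```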
-
- 04 Jan, 2022 2 commits
-
-
Rob Swindell authored
By extracting with with_path=true, the file_list matching won't match the nested DIZ files.
-
Rob Swindell authored
Previously, extracted files were always overwritten (so that is the "default" for Archive.extract() and mostly what I'm specifying in the C/C++ code by default now), but this caused a problem with DIZ extraction: for archives that contained multiple DIZ files (e.g. in sub-directories), the last one extracted would be used. A maximum of 3 DIZs can be extracted, so it would usually be the 3rd DIZ in the archive if there were that many. Another solution would be to *only* extract DIZ files from the root of the archive and I should look into that as well, but the always-overwrite behavior also seemed to be wrong, so that *also* needed fixing (allow the caller to control the behavior). This fixes issue #317, at least for archives where the root DIZ exists *before* any nested DIZ files. I'll have to try and create a purposeful archive to test the other conditions (where the root DIZ would appear *after* the nested DIZ(s)).
-
- 16 Nov, 2021 1 commit
-
-
Rob Swindell authored
%+ will now expand to the current user's real name, automatically enclosed in quotes if it contains any spaces.
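A minimal sketch of that expansion rule (hypothetical helper, not the actual command-line specifier code): quote the real name only when it contains a space.

```c
#include <stdio.h>
#include <string.h>

static void expand_realname(const char* name, char* out, size_t size)
{
    if(strchr(name, ' ') != NULL)
        snprintf(out, size, "\"%s\"", name);    /* enclose in quotes */
    else
        snprintf(out, size, "%s", name);
}

int main(void)
{
    char out[128];
    expand_realname("Rob Swindell", out, sizeof out);
    printf("%s\n", out);        /* "Rob Swindell" (with quotes) */
    expand_realname("digitalman", out, sizeof out);
    printf("%s\n", out);        /* digitalman (no quotes) */
    return 0;
}
```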
-
- 10 Jun, 2021 1 commit
-
-
Rob Swindell authored
It's anticipated that this will be used for JS-populated file metadata in JSON format in the future (and not just "archive contents" in .ini format). Also, fix the double-free issue that was occurring when moving files with extended file descriptions (sbbs_t::movefile()). This was actually the primary problem I was fixing here, but I noticed the metadata issue along the way: metadata would not have been moved along with the other file info between bases.
-
- 09 Jun, 2021 2 commits
-
-
Rob Swindell authored
Hopefully this helps get to the bottom of Ragnarok's reported problem creating ZIP QWK files with libarchive.
-
Rob Swindell authored
Not supported by default on Windows and perhaps not on all *nix systems. You can still support creation of tbz files if you like, but you'll need to set up an external "Compressible File Type" in SCFG to do it.
-
- 06 Jun, 2021 2 commits
-
-
Rob Swindell authored
5 options:
- Safest Subset
- Most ASCII, Excluding Spaces (the default)
- Most ASCII, Including Spaces
- Most CP437, Excluding Spaces
- Most CP437, Including Spaces
-
Rob Swindell authored
sbbs_t::checkfname() now checks the file.can too.

New filedat.c functions:
- safest_filename() - not currently used
- illegal_filename() - returns true for a highly-suspicious (e.g. hack attempt) filename
- allowed_filename() - returns true if the filename is good for upload (assumed to have already been checked to be legal as well)

Importantly, filenames beginning or ending in a '.' are now disallowed:
- 'dot files' are hidden (by default) on *nix
- files ending in a '.' are problematic on Windows
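A hedged sketch of the kinds of checks described above; looks_illegal() and looks_allowed() are illustrative stand-ins, not the actual filedat.c implementations.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool looks_illegal(const char* fname)        /* hack-attempt style names */
{
    return fname[0] == '\0'
        || strpbrk(fname, "/\\:") != NULL           /* path separators / drive letters */
        || strstr(fname, "..") != NULL;             /* path traversal */
}

static bool looks_allowed(const char* fname)        /* upload-worthy names */
{
    size_t len = strlen(fname);
    if(looks_illegal(fname))
        return false;
    if(fname[0] == '.' || fname[len - 1] == '.')    /* hidden on *nix / broken on Windows */
        return false;
    return true;
}

int main(void)
{
    printf("%d\n", looks_allowed("good-file.zip"));     /* 1 */
    printf("%d\n", looks_allowed(".hidden"));           /* 0 */
    printf("%d\n", looks_allowed("../../etc/passwd"));  /* 0 */
    return 0;
}
```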
-
- 23 May, 2021 1 commit
-
-
Rob Swindell authored
Resolve error reported on irc with Ubuntu (don't know what version): <rjwboys> ok now i get filedat.c:896:3: error: unknown type name ‘la_int64_t’
-
- 16 May, 2021 2 commits
-
-
Rob Swindell authored
CID 330960, 330967, 330988
-
Rob Swindell authored
This may have contributed to plt's file editing woes.
-
- 13 May, 2021 3 commits
-
-
Rob Swindell authored
Do this in JS and use JSON format instead of .ini.
-
Rob Swindell authored
This was really slowing down upgrade_to_v319 and there's no current consumer of the data. Consider adding it back in JSON format later, or just leave it to JS scripts to use JSON-formatted metadata.
-
Rob Swindell authored
Don't use iniSet* since we know we're not updating existing ini entries. Use strListAppendFormat() instead.
-
- 05 May, 2021 1 commit
-
-
Rob Swindell authored
-
- 04 May, 2021 1 commit
-
-
Rob Swindell authored
Some archives contain exactly the same files as others, but in a different order. Believe it or not.
-
- 03 May, 2021 1 commit
-
-
Rob Swindell authored
e.g. when using the JS FileBase.update() method
-
- 02 May, 2021 1 commit
-
-
Rob Swindell authored
This will allow fast/easy display of archive contents without actually reading the archive files.

Introduces some new functions:
- list_archive_contents()
- smb_adddfile_withlist()

A new SMB convenience variable ("tail", aliased as "content" for a file). A new file detail level ("file_detail_content", exposed in JS as FileBase.DETAIL.CONTENTS) which adds a "content" array property to file metadata objects for JS FileBase.get().

Files already added to the new filebases won't have this archive content automatically - I'm looking into that now (likely a new or updated JS script to run).
-
- 25 Apr, 2021 1 commit
-
-
Rob Swindell authored
-
- 24 Apr, 2021 1 commit
-
-
Rob Swindell authored
Inspired by Blocktronics (and other ANSI art group) packs' FILE_ID.DIZ/ANS files:
* Support (and prioritize) FILE_ID.ANS
* Convert ANSI color/attribute sequences in DIZ files to Ctrl-A equivalents (uses SAUCE width and ICE color, if specified)
* Don't treat the DIZ as a series of lines; they're not always, nowadays
* New putmsg() mode: P_INDENT to print files indented by the current column
* Display full (up to 64-char) filenames in lists when using a 132+ column terminal
* Use the Author, Group, and Title fields from the SAUCE if present/non-blank
* 2 new text.dat strings: 301 (FiAuthor) and 302 (FiGroup)
* Also fix a bug with the repeated Cost header field on bulk-uploaded files

I know this'll break the *nix build (sauce.c dependency), but I'll fix that next.
-
- 22 Apr, 2021 2 commits
-
-
Rob Swindell authored
-
Rob Swindell authored
Increase total extended description length from 1024 to 4000 characters. Perhaps this should be configurable?
-
- 21 Apr, 2021 1 commit
-
-
Rob Swindell authored
Extracting a file_id.diz would fail if the archive contained any disallowed filenames before the DIZ, e.g.:

Error: disallowed filename '_blockmen_res[v]olution.ans' (after extracting 0 items successfully)
-
- 17 Apr, 2021 1 commit
-
-
Rob Swindell authored
I forget who it was that said they were still using this feature in v3.18, but here you go, it's working again (the /D and /U commands). I'm not migrating any file sender/recipient info from v3.18, so only files added after upgrading to this will be downloadable from the "user" directory (if you have one). Something I never implemented before, but noticed is missing, is the removal (or dereferencing) of user-to-user files that were sent from/to a user who is later deleted. So that's still a TODO item.
-
- 08 Apr, 2021 1 commit
-
-
Rob Swindell authored
When only reading the index (detail = file_detail_index), smb_getfile() just sets the file->name convenience pointer to point to the name in the index. When loadfiles() would then sort the list, these pointers would not be adjusted (so they would point to the wrong names), resulting in a corrupted file list (e.g. name/size mismatches and no logical sort order). The solution is to call smb_getfile() on each file *after* the read index records have been sorted. This also means that the sort-by-name routines needed to always sort using the index name and not the convenience pointer (which is NULL in this case).

While fixing this, I noticed there was no bounds checking in the loadfiles() and loadfilenames() read loops, so if the index happened to be longer than the total_files value from the status header, a buffer under-alloc/overflow would occur and likely a crash as a result. So stop reading the index when the expected maximum number of index records have been read.
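A hedged sketch of the approach, using hypothetical stand-in types rather than the real smblib structures: cap the read loop at total_files, sort on index fields only, and resolve the full records afterwards.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {                /* stand-in for a file index record */
    uint32_t time_added;
    char     name[65];
} fileidx_t;

static int cmp_time_desc(const void* a, const void* b)  /* newest first */
{
    const fileidx_t* fa = a;
    const fileidx_t* fb = b;
    return (fa->time_added < fb->time_added) - (fa->time_added > fb->time_added);
}

static size_t load_index(FILE* fp, fileidx_t* list, size_t total_files)
{
    size_t count = 0;
    /* stop at total_files even if the index file contains more records */
    while(count < total_files && fread(&list[count], sizeof *list, 1, fp) == 1)
        count++;
    /* sort using index fields only; header-only fields aren't loaded yet */
    qsort(list, count, sizeof *list, cmp_time_desc);
    /* the full records (descriptions, etc.) would be read here, post-sort */
    return count;
}

int main(void)
{
    fileidx_t recs[] = { {1000, "oldest.zip"}, {3000, "newest.zip"}, {2000, "middle.zip"} };
    fileidx_t list[3];
    FILE* fp = tmpfile();
    if(fp == NULL)
        return 1;
    fwrite(recs, sizeof recs[0], 3, fp);
    fwrite(recs, sizeof recs[0], 1, fp);    /* simulate an extra trailing index record */
    rewind(fp);
    size_t n = load_index(fp, list, 3);     /* total_files from the status header */
    for(size_t i = 0; i < n; i++)
        printf("%s\n", list[i].name);       /* newest.zip, middle.zip, oldest.zip */
    fclose(fp);
    return 0;
}
```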
-