1. 30 Mar, 2022 1 commit
  2. 24 Mar, 2022 1 commit
    • Support user credits and transfer stats > 4GB in total · 1cac2c8a
      Rob Swindell authored
      Credits and daily free credits are accurate to the byte up to a maximum of 18446744073709551615 (that's 16 exbibytes minus 1 byte, roughly 18.4 exabytes).
      
      User's upload and download byte stats are now similarly extended in maximum range, but the accuracy is only "to the byte" for values less than 10,000,000,000. Beyond that value, the accuracy declines, but is generally pretty damn accurate (to 4 decimal places beyond the nearest multiple of a power of 1024), so I don't expect that to be an issue. This method of storing upload/download byte stats allowed me to use the same 10-character user record fields in the user.dat file.
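      The commit message doesn't show the actual on-disk encoding, so purely as a hedged sketch of the scheme described for the upload/download stat fields (exact decimal below 10,000,000,000; a power-of-1024-scaled value with a unit suffix, good to roughly 4 decimal places, above that), something like the following could round-trip a 64-bit byte count through a 10-character field. The function names and format here are illustrative, not Synchronet's code:

```c
/* Illustrative only: one way to fit a 64-bit byte count into a 10-character
 * record field -- exact below 10^10, approximate (suffix-scaled) above it.
 * Not the actual Synchronet encoding. */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

static void pack_bytes(char buf[11], uint64_t bytes)
{
    static const char suffix[] = "KMGTPE";

    if(bytes < 10000000000ULL) {        /* fits exactly in 10 digits */
        snprintf(buf, 11, "%" PRIu64, bytes);
        return;
    }
    double value = (double)bytes;       /* always >= 10^10 here, so at least 'G' scale */
    int unit = -1;
    while(value >= 1024.0 && unit + 1 < (int)sizeof(suffix) - 1) {
        value /= 1024.0;
        unit++;
    }
    snprintf(buf, 11, "%.4f%c", value, suffix[unit]);   /* e.g. "9.3132G" */
}

static uint64_t unpack_bytes(const char* buf)
{
    static const char suffix[] = "KMGTPE";
    char* end = NULL;
    double value = strtod(buf, &end);

    for(int i = 0; end != NULL && suffix[i] != '\0'; i++) {
        if(*end == suffix[i]) {
            for(int j = 0; j <= i; j++)
                value *= 1024.0;
            break;
        }
    }
    return (uint64_t)value;
}
```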
      
      As a side-effect of this enhancement:
      * User and file credit values are now expressed in multiples of powers of 1024 (e.g. 4.0G rather than 4,294,967,296).
      * Free credits per day (per security level) has been extended from 32 to 64 bits (to accommodate values >= 4GB).
      * adjustuserrec() no longer takes the record length since we can easily determine that automatically and don't need more "sources of truth" that can be out-of-sync (e.g. the U_CDT field length going from 10 to 20 chars with this change).
      * setting the stage for locale-dependent thousands-separators (e.g. space instead of comma) - currently still hard-coded to comma (see the sketch after this list)
      * more/better support for files > 4GB in size (e.g. in the batch download queue)
      * user_t ulong fields changed to either uint32_t or uint64_t - I didn't realize how many long/ulong's remained in the code (which are sometimes 32-bit, sometimes 64-bit) - ugh
      * Steve's ultoac() function was renamed to u32toac(), and a C++ wrapper that still uses the old name was created, as an homage
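      Regarding that last locale item: a hedged sketch of where this could eventually go, pulling the separator from localeconv() and falling back to the hard-coded comma. The u64toac_sep() name and behavior are hypothetical, not the actual ultoac()/u32toac() code:

```c
/* Hypothetical sketch: thousands-separated formatting of a 64-bit value,
 * with the separator taken from the current locale (falling back to ','). */
#include <inttypes.h>
#include <locale.h>
#include <stdio.h>
#include <string.h>

static char* u64toac_sep(uint64_t value, char* dest, char sep)
{
    char digits[21];
    size_t len;
    char* out = dest;

    snprintf(digits, sizeof digits, "%" PRIu64, value);
    len = strlen(digits);
    for(size_t i = 0; i < len; i++) {
        if(i > 0 && (len - i) % 3 == 0)
            *out++ = sep;
        *out++ = digits[i];
    }
    *out = '\0';
    return dest;
}

int main(void)
{
    char buf[32];
    setlocale(LC_NUMERIC, "");
    struct lconv* lc = localeconv();
    char sep = (lc->thousands_sep != NULL && *lc->thousands_sep != '\0')
        ? *lc->thousands_sep : ',';                         /* still default to comma */
    printf("%s\n", u64toac_sep(4294967296ULL, buf, sep));   /* "4,294,967,296" */
    return 0;
}
```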
  3. 21 Mar, 2022 2 commits
  4. 02 Mar, 2022 1 commit
  5. 23 Feb, 2022 1 commit
  6. 28 Jan, 2022 1 commit
    • Add 'vdir' (virtual directory name) member to lib_t and dir_t · 51ab0a7f
      Rob Swindell authored
      This change is just for internal consistency and convenience right now: the lib_t.vdir is a "sanitized" copy of the lib's short name (spaces are converted to dots or underscores based on the logic that the FTP server used in dotname()) and the dir_t.vdir is just a pointer to the dir's code_suffix. No other permutations are made (e.g. lower-casing the strings). Although the virtual directory names of libraries will now appear in mixed case in the FTP server (previously, they were all lowercase), the directory names are actually treated case-insensitively, so it should not make any difference. If forced-lowercase is preferred for some reason, please speak up.
      
      This change leads the way to eventually, possibly, making these virtual path elements sysop-configurable. For now, it's just better to have a *copy* of the lib's short name that is appropriately modified to make a suitable directory name and have that vpath element available globally (to all servers and services) in a consistent manner.
      
      So Nelgin asked (about filebase access via HTTP): what if the library short name has a space in it? The answer now is that spaces are replaced with '.', or with '_' if the name already contains dots.
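      A minimal sketch of that sanitization rule (spaces become '.', or '_' when the short name already contains dots), using a hypothetical helper name rather than the actual lib_t/dotname() code:

```c
/* Illustrative only: build a virtual directory name from a library's
 * short name, per the rule described above. */
#include <string.h>

static void make_vdir_name(char* vdir, size_t size, const char* short_name)
{
    /* if the name already contains a dot, use '_' for spaces; else use '.' */
    char fill = (strchr(short_name, '.') != NULL) ? '_' : '.';
    size_t i;

    for(i = 0; short_name[i] != '\0' && i + 1 < size; i++)
        vdir[i] = (short_name[i] == ' ') ? fill : short_name[i];
    vdir[i] = '\0';
}
```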
  7. 27 Jan, 2022 1 commit
    • Allow files to be removed from batch queues by number · 86e39e82
      Rob Swindell authored
      This fixes issue #328.
      
      The user actually *can* remove files from the batch queues in v3.19b, but you have to type the filenames, which is not obvious from the prompt; the prompt implies you need to type the file index position (e.g. '1' for the first file in the queue). In all prior Synchronet versions, you could only remove by number (and not by name).
      
      The fix is to allow either the number or the name of the file to be entered at the RemoveWhich prompt; either way, the file is removed from the queue successfully.
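      A hedged sketch of the accept-number-or-name idea (hypothetical names and structures, not the actual sbbs_t batch-queue code): if the input is all digits, treat it as a 1-based queue position; otherwise match it against the queued filenames.

```c
/* Illustrative only: resolve a prompt response to a batch-queue entry. */
#include <ctype.h>
#include <stdbool.h>
#include <stdlib.h>
#include <strings.h>    /* strcasecmp() is POSIX; Windows builds would use stricmp() */

static bool all_digits(const char* s)
{
    if(*s == '\0')
        return false;
    for(; *s != '\0'; s++)
        if(!isdigit((unsigned char)*s))
            return false;
    return true;
}

/* Returns the 0-based index of the entry to remove, or -1 if not found */
static int find_batch_entry(const char* input, const char* filename[], size_t count)
{
    if(all_digits(input)) {                     /* e.g. "1" = first file in the queue */
        long n = strtol(input, NULL, 10);
        return (n >= 1 && (size_t)n <= count) ? (int)(n - 1) : -1;
    }
    for(size_t i = 0; i < count; i++)           /* otherwise, match by filename */
        if(strcasecmp(input, filename[i]) == 0)
            return (int)i;
    return -1;
}
```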
      
      Thanks Ragnarok!
  8. 23 Jan, 2022 1 commit
    • create_archive() will skip directories in supplied file_list · 77e2d88e
      Rob Swindell authored
      The file_list[] parameter was expected to contain only files, but the directory() function (used to create that file_list[]) returns a list of all directory entries, including sub-directories. I could (and maybe will) add an option to directory() to only include files or dirs, but this seemed the more direct fix for the problem reported by DesotoFireflite (VALHALLA):
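      As a hedged sketch of the filtering idea (POSIX stat() shown for brevity; the real create_archive() code is cross-platform and not reproduced here), directory entries are simply skipped when walking the supplied list:

```c
/* Illustrative only: skip sub-directories in a NULL-terminated file list. */
#include <stdbool.h>
#include <stddef.h>
#include <sys/stat.h>

static bool is_directory(const char* path)
{
    struct stat st;
    return stat(path, &st) == 0 && S_ISDIR(st.st_mode);
}

static size_t count_archivable(const char* const file_list[])
{
    size_t added = 0;
    for(size_t i = 0; file_list[i] != NULL; i++) {
        if(is_directory(file_list[i]))
            continue;               /* e.g. a leftover temp sub-directory */
        /* ...the real code would add file_list[i] to the archive here... */
        added++;
    }
    return added;
}
```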
      
      TickIT's nodelist_handler.js appears to be creating and leaving behind a sub-directory of the temp directory, triggering this error:
       1/23  11:36:56a  QNET libarchive error -1 (13 opening c:\SBBS\temp\event\nodelist_handler/) creating c:\SBBS\data\VERT.REP 
      
      Why isn't the temp directory fully cleaned up after/between events? That's another thing to look into.
  9. 16 Jan, 2022 2 commits
  10. 14 Jan, 2022 1 commit
  11. 11 Jan, 2022 2 commits
    • Allow maximum uploaded filename length to be configured · eb8114bd
      Rob Swindell authored
      Defaults to 64 characters. The maximum value is 65535 characters, but filenames longer than 64 characters may be problematic (e.g. searching for them, displaying them, security concerns), so only increase this with caution. Shorter values are fine, but 0 will just revert to the default.
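      A minimal sketch of the described behavior (hypothetical names; 0 reverts to the 64-character default, and the upper bound of 65535 suggests a 16-bit field, which is an assumption here):

```c
/* Illustrative only: resolve the configured maximum uploaded-filename length. */
#include <stdint.h>

#define DEFAULT_MAX_FNAME_LEN  64
#define UPPER_MAX_FNAME_LEN    65535

static uint16_t effective_max_fname_len(uint32_t configured)
{
    if(configured == 0)
        return DEFAULT_MAX_FNAME_LEN;       /* 0 just reverts to the default */
    if(configured > UPPER_MAX_FNAME_LEN)
        return UPPER_MAX_FNAME_LEN;         /* assumed clamp; increase with caution */
    return (uint16_t)configured;
}
```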
    • loadfiles() will perform liberal filename matching when len > 12 chars · 60e4d7af
      Rob Swindell authored
      As discovered while making the Synchronet v3.18b feature video (https://www.youtube.com/watch?v=_IWzIV0_sZ4), when only a shortened version of a long filename is displayed (e.g. due to 80 column terminal width limitations), trying to download that filename by specifying the filename at the Download File(s) Filespec [All Files]: prompt can be problematic.
      
      For example (as seen in the video), the file "SyncTERM-1.1-setup.exe" is displayed as "SyncTERM.exe" (on an 80-column terminal), yet trying to download "SyncTERM.exe" (or "syncterm.exe") using the 'D'ownload command would fail to find a file with that name (understandably, but frustratingly so).
      
      This change transforms the requested filename-to-load, if it is at least 12 characters in length and contains no wildcards (* or ?), to include a filename-extending wildcard: "filename.txt" becomes "filename*.txt" and "longfilename" becomes "longfilename*".
      
      For requested filespecs of NULL (all files), specs containing wildcards, or specs (filenames) less than 12 characters in length, no filespec transform takes place: trying to list/download "a" doesn't match "apple.txt".
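      A hedged sketch of that transform (not the actual loadfiles() code): leave short or wildcard-containing specs alone, otherwise insert a '*' before the extension or append one.

```c
/* Illustrative only: liberalize a requested filespec per the rules above. */
#include <stdio.h>
#include <string.h>

static void liberal_filespec(char* out, size_t size, const char* name)
{
    size_t len = strlen(name);
    const char* ext = strrchr(name, '.');

    if(len < 12 || strpbrk(name, "*?") != NULL) {       /* leave as-is */
        snprintf(out, size, "%s", name);
        return;
    }
    if(ext != NULL)     /* "filename.txt" -> "filename*.txt" */
        snprintf(out, size, "%.*s*%s", (int)(ext - name), name, ext);
    else                /* "longfilename" -> "longfilename*" */
        snprintf(out, size, "%s*", name);
}
```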
  12. 04 Jan, 2022 2 commits
    • The simpler fix to issue #317 (nested DIZ files) · 0553ef9b
      Rob Swindell authored
      By extracting with with_path=true, the file_list matching won't match the nested DIZ files.
    • Add overwrite argument to extract_file_from_archive and Archive.extract · 16cfe2d3
      Rob Swindell authored
      Previously, extracted files were always overwritten (so that is the "default" for Archive.extract() and mostly what I'm specifying in the C/C++ code by default now), but this caused a problem with DIZ extraction: for archives that contained multiple DIZ files (e.g. in sub-directories), the last one to be extracted would be used. A maximum of 3 DIZs can be extracted, so it would usually be the 3rd DIZ in the archive if there were that many.
      
      Another solution would be to *only* extract DIZ files from the root of the archive and I should look into that as well, but the always-overwrite behavior also seemed to be wrong, so that *also* needed fixing (allow caller to control behavior).
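      The general idea, sketched with a hypothetical helper (not the actual extract_file_from_archive()/Archive.extract() signature): when overwrite is false, an existing destination file wins, so the first DIZ extracted is the one kept.

```c
/* Illustrative only: decide whether to extract over an existing file. */
#include <stdbool.h>
#include <stdio.h>

static bool should_extract(const char* dest_path, bool overwrite)
{
    FILE* fp;

    if(overwrite)
        return true;                /* previous (always-overwrite) behavior */
    if((fp = fopen(dest_path, "rb")) == NULL)
        return true;                /* doesn't exist yet, so extract it */
    fclose(fp);
    return false;                   /* exists and overwrite==false: keep the first one */
}
```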
      
      This fixes issue #317, at least for archives where the root DIZ exists *before* any nested DIZ files. I'll have to try and create a purposeful archive to test the other conditions (where the root DIZ would appear *after* the nested DIZ(s)).
  13. 16 Nov, 2021 1 commit
  14. 10 Jun, 2021 1 commit
    • Standardize on "metadata" as the description of a file's "tail" dfield · 3549be9f
      Rob Swindell authored
      It's anticipated that this will be used for JS-populated file metadata in JSON format in the future (and not just "archive contents" in .ini format).
      
      Also, fix the double-free issue that was occurring when moving files with extended file descriptions (sbbs_t::movefile()). This was actually the primary problem I was fixing here, but I noticed the metadata issue along the way: metadata would not have been moved along with the other file info between bases.
  15. 09 Jun, 2021 2 commits
  16. 06 Jun, 2021 2 commits
    • Give sysop more control over characters allowed in uploaded filenames · 755452d7
      Rob Swindell authored
      5 options:
      - Safest Subset
      - Most ASCII, Excluding Spaces (the default)
      - Most ASCII, Including Spaces
      - Most CP437, Excluding Spaces
      - Most CP437, Including Spaces
    • More uniform safe/illegal/allowed filename (for upload) determination · 06fff14d
      Rob Swindell authored
      sbbs_t::checkfname() now checks the file.can too.
      new filedat.c functions:
      - safest_filename() - not currently used
      - illegal_filename() - returns true for a highly-suspicious (e.g. hack attempt) filename
      - allowed_filename() - returns true if the filename is good for upload (the filename is assumed to have already been checked for legality as well)
      
      Importantly, filenames beginning or ending in a '.' are now disallowed:
      - 'dot files' are hidden (by default) on *nix
      - files ending in a '.' are problematic on Windows
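      A hedged sketch of the kinds of checks described (merged into one function here for brevity; the real safest_filename()/illegal_filename()/allowed_filename() rules are more complete and are not reproduced here):

```c
/* Illustrative only: reject obviously-problematic upload filenames. */
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

static bool plausibly_bad_filename(const char* name)
{
    size_t len = strlen(name);

    if(len == 0)
        return true;
    if(name[0] == '.' || name[len - 1] == '.')  /* leading dot: hidden on *nix;
                                                   trailing dot: problematic on Windows */
        return true;
    if(strpbrk(name, "/\\:") != NULL)           /* path characters suggest a hack attempt */
        return true;
    for(size_t i = 0; i < len; i++)
        if(iscntrl((unsigned char)name[i]))     /* control characters never allowed */
            return true;
    return false;
}
```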
  17. 23 May, 2021 1 commit
    • Use int64_t instead of la_int64_t · 92fa411c
      Rob Swindell authored
      Resolve error reported on irc with Ubuntu (don't know what version):
      <rjwboys> ok now i get filedat.c:896:3: error: unknown type name ‘la_int64_t’
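      The portable pattern, sketched: keep the calling code on the standard int64_t from <stdint.h>, since the la_int64_t typedef apparently isn't exposed by the libarchive headers on that Ubuntu version.

```c
#include <stdint.h>
#include <archive.h>
#include <archive_entry.h>

/* Use the standard fixed-width type rather than libarchive's typedef. */
static int64_t entry_size(struct archive_entry* entry)
{
    return (int64_t)archive_entry_size(entry);
}
```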
  18. 16 May, 2021 2 commits
  19. 13 May, 2021 3 commits
  20. 05 May, 2021 1 commit
  21. 04 May, 2021 1 commit
  22. 03 May, 2021 1 commit
  23. 02 May, 2021 1 commit
    • Store contents (list) of archive files in filebase (in the "msg tail") · 5374a113
      Rob Swindell authored
      This will allow fast/easy display of archive contents without actually reading the archive files.
      
      Introduces some new functions:
      - list_archive_contents()
      - smb_adddfile_withlist()
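      As a hedged sketch of what an archive-listing helper can look like with libarchive (list_archive_contents() itself isn't shown in this commit message, so names and details here are illustrative): enumerate the entry headers without extracting any data.

```c
/* Illustrative only: print each archive entry's pathname and size. */
#include <inttypes.h>
#include <stdio.h>
#include <archive.h>
#include <archive_entry.h>

static int list_archive(const char* path)
{
    struct archive* ar = archive_read_new();
    struct archive_entry* entry;
    int count = 0;

    archive_read_support_filter_all(ar);
    archive_read_support_format_all(ar);
    if(archive_read_open_filename(ar, path, 10240) != ARCHIVE_OK) {
        archive_read_free(ar);
        return -1;
    }
    while(archive_read_next_header(ar, &entry) == ARCHIVE_OK) {
        printf("%s\t%" PRId64 "\n",
            archive_entry_pathname(entry),
            (int64_t)archive_entry_size(entry));
        archive_read_data_skip(ar);     /* just enumerate, don't extract */
        count++;
    }
    archive_read_free(ar);
    return count;
}
```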
      
      A new SMB convenience variable ("tail", aliased as "content" for a file).
      A new file detail level ("file_detail_content", exposed in JS as FileBase.DETAIL.CONTENTS) which adds a "content" array property to file metadata objects for JS FileBase.get().
      
      Files already added to the new filebases won't have this archive content automatically - I'm looking into that now (likely a new or updated JS script to run).
  24. 25 Apr, 2021 1 commit
  25. 24 Apr, 2021 1 commit
    • DIZ enhancements: Read/use SAUCE data, support ANSI, increase max 1->4K · 2a8e1c11
      Rob Swindell authored
      Inspired by Blocktronics (and other ANSI art group) packs' FILE_ID.DIZ/ANS files:
      * Support (and prioritize) FILE_ID.ANS
      * Convert ANSI color/attribute sequences in DIZ files to Ctrl-A equivalent (uses SAUCE width and ICE color, if specified)
      * Don't treat DIZ files as a series of lines; they aren't always line-oriented nowadays.
      * New putmsg() mode: P_INDENT to print files indented by current column
      * Display full (up to 64-char) filenames in lists when using 132+ column terminal.
      * Use the Author, Group, and Title fields from the SAUCE record if present/non-blank (see the sketch after this list)
      * 2 new text.dat strings: 301 (FiAuthor) and 302 (FiGroup)
      * Also fix bug with repeated Cost header field on bulk-uploaded files.
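      Regarding the SAUCE items above, a hedged sketch of pulling the Title/Author/Group fields out of the fixed 128-byte SAUCE trailer (the actual sauce.c handles much more, e.g. comments, TInfo, width/ICE flags, and trimming of the space-padded fields):

```c
/* Illustrative only: read Title/Author/Group from a SAUCE record
 * (last 128 bytes of the file; fields are space-padded, not trimmed here). */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct sauce_fields {
    char title[36];
    char author[21];
    char group[21];
};

static bool read_sauce(FILE* fp, struct sauce_fields* out)
{
    unsigned char rec[128];

    if(fseek(fp, -128L, SEEK_END) != 0 || fread(rec, 1, sizeof rec, fp) != sizeof rec)
        return false;
    if(memcmp(rec, "SAUCE", 5) != 0)            /* no SAUCE record present */
        return false;
    memcpy(out->title,  rec + 7,  35); out->title[35]  = '\0';
    memcpy(out->author, rec + 42, 20); out->author[20] = '\0';
    memcpy(out->group,  rec + 62, 20); out->group[20]  = '\0';
    return true;
}
```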
      
      I know this'll break the *nix build (sauce.c dependency), but I'll fix that next.
  26. 22 Apr, 2021 2 commits
  27. 21 Apr, 2021 1 commit
  28. 17 Apr, 2021 1 commit
    • Restore the user-to-user file transfer feature · bc883458
      Rob Swindell authored
      I forget who it was that said they were still using this feature in v3.18, but here you go, it's working again (the /D and /U commands). I'm not migrating any file sender/recipient info from v3.18, so only files added after upgrading to this will be downloadable from the "user" directory (if you have one).
      
      Something I never implemented before, but noticed is missing, is the removal (or dereferencing) of user-to-user files that were sent from/to a user who is then deleted. So that's still a TODO item.
  29. 08 Apr, 2021 1 commit
    • Sorted loadfiles() results were corrupted when detail was < normal · 4391ca75
      Rob Swindell authored
      When only reading the index (detail = file_detail_index), smb_getfile() just sets the file->name convenience pointer to point to the name in the index. Then when loadfiles() would sort the list, these pointers would not be adjusted (so they would point to the wrong names) resulting in a corrupted file list (e.g. name/size mismatches and no logical sort order).
      
      The solution is to call smb_getfile() on each file *after* the read index records have been sorted.
      
      This also means that the sort-by-name routines needed to always sort using the index name and not the convenience pointer (which is NULL in this case).
      
      While fixing this, I noticed there was no bounds checking in the loadfiles() and loadfilenames() read-loops, so if the indexes happened to be longer than the total_files value from the status header, a buffer under-alloc/overflow would occur and a likely crash as a result. So stop reading the index when the expected maximum number of index records have been read.
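      A hedged sketch of the resulting pattern (types and names are illustrative, not the actual smblib structures): read no more index records than the expected total, sort using the index's own copy of each name, and only then resolve the full file details.

```c
/* Illustrative only: bounds-checked read, sort by index name, resolve after. */
#include <stdlib.h>
#include <string.h>

struct idx_rec { char name[65]; unsigned number; };

static int cmp_idx_name(const void* a, const void* b)
{
    /* compare the names stored in the index records themselves,
     * never via convenience pointers into other elements */
    return strcmp(((const struct idx_rec*)a)->name,
                  ((const struct idx_rec*)b)->name);
}

static size_t load_sorted(struct idx_rec* list, size_t max_expected,
                          int (*read_next)(struct idx_rec*))
{
    size_t count = 0;

    while(count < max_expected && read_next(&list[count]))  /* stop at total_files */
        count++;
    qsort(list, count, sizeof *list, cmp_idx_name);
    /* resolve full details (e.g. via smb_getfile()) for each entry here,
     * *after* the sort, so nothing points at stale/re-ordered memory */
    return count;
}
```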
  30. 04 Apr, 2021 1 commit