mirror of https://github.com/MarlinFirmware/Marlin.git synced 2024-11-22 18:25:18 +00:00

🧑‍💻 Support files updates

Scott Lahteine 2023-12-15 17:37:36 -06:00
parent 5f84e7f43b
commit a18045a96a
53 changed files with 5504 additions and 332 deletions


@ -1,19 +1,29 @@
# editorconfig.org
root = true
[*]
trim_trailing_whitespace = true
insert_final_newline = true
[{*.patch,syntax_test_*}]
trim_trailing_whitespace = false
[{*.c,*.cpp,*.h,*.ino,*.py,Makefile}]
end_of_line = lf
[{*.c,*.cpp,*.h,*.ino}]
charset = utf-8
[{*.c,*.cpp,*.h,*.ino,Makefile}]
trim_trailing_whitespace = true
insert_final_newline = true
end_of_line = lf
indent_style = space
indent_size = 2
[{Makefile}]
indent_style = tab
indent_size = 2
[*.md]
# Two spaces at the end of the line means newline in Markdown
trim_trailing_whitespace = false
[{*.py}]
indent_style = space
indent_size = 4
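These rules are enforced by EditorConfig-aware editors, but they are easy to spot-check. A minimal sketch (hypothetical helper, not part of this commit) that flags violations of the trailing-whitespace and final-newline rules:

```python
#!/usr/bin/env python3
# Hypothetical spot-checker for two of the .editorconfig rules above:
# trim_trailing_whitespace and insert_final_newline.
from pathlib import Path

def check(path: Path):
    text = path.read_text(encoding='utf-8', errors='replace')
    problems = []
    if any(line != line.rstrip() for line in text.split('\n')):
        problems.append('trailing whitespace')
    if text and not text.endswith('\n'):
        problems.append('missing final newline')
    return problems

for p in Path('Marlin/src').rglob('*.cpp'):
    for issue in check(p):
        print(f"{p}: {issue}")
```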


@ -28,15 +28,9 @@ Project maintainers are responsible for clarifying the standards of acceptable b
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [marlinfirmware@github.com](mailto:marlinfirmware@github.com). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by following GitHub's [reporting abuse or spam article](https://docs.github.com/en/communities/maintaining-your-safety-on-github/reporting-abuse-or-spam). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances.
## Attribution


@ -26,11 +26,12 @@ The following is a set of guidelines for contributing to Marlin, hosted by the [
## Code of Conduct
This project and everyone participating in it is governed by the [Marlin Code of Conduct](code_of_conduct.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to [marlinfirmware@github.com](mailto:marlinfirmware@github.com).
This project and everyone participating in it is governed by the [Marlin Code of Conduct](code_of_conduct.md). By participating, you are expected to uphold this code. Please report unacceptable behavior by following GitHub's [reporting abuse or spam article](https://docs.github.com/en/communities/maintaining-your-safety-on-github/reporting-abuse-or-spam).
## I don't want to read this whole thing I just have a question!!!
> **Note:** Please don't file an issue to ask a question. You'll get faster results by using the resources below.
> [!NOTE]
> Please don't file an issue to ask a question. You'll get faster results by using the resources below.
We have a Message Board and a Facebook group where our knowledgeable user community can provide helpful advice if you have questions.
@ -55,7 +56,8 @@ This section guides you through submitting a Bug Report for Marlin. Following th
Before creating a Bug Report, please test the "nightly" development branch, as you might find out that you don't need to create one. When you are creating a Bug Report, please [include as many details as possible](#how-do-i-submit-a-good-bug-report). Fill out [the required template](ISSUE_TEMPLATE/bug_report.yml); the information it asks for helps us resolve issues faster.
> **Note:** Regressions can happen. If you find a **Closed** issue that seems like your issue, go ahead and open a new issue and include a link to the original issue in the body of your new one. All you need to create a link is the issue number, preceded by #. For example, #8888.
> [!NOTE]
> Regressions can happen. If you find a **Closed** issue that seems like your issue, go ahead and open a new issue and include a link to the original issue in the body of your new one. All you need to create a link is the issue number, preceded by #. For example, #8888.
#### How Do I Submit A (Good) Bug Report?

5
.gitignore vendored

@ -25,6 +25,9 @@ bdf2u8g.exe
genpages.exe
marlin_config.json
mczip.h
language*.csv
out-csv/
out-language/
*.gen
*.sublime-workspace
@ -130,7 +133,9 @@ spi_flash.bin
fs.img
# CMake
buildroot/share/cmake/*
CMakeLists.txt
!buildroot/share/cmake/CMakeLists.txt
src/CMakeLists.txt
CMakeListsPrivate.txt
build/


@ -6,6 +6,7 @@
"platformio.platformio-ide"
],
"unwantedRecommendations": [
"ms-vscode-remote.remote-containers",
"ms-vscode.cpptools-extension-pack"
]
}


@ -5,6 +5,7 @@ CONTAINER_IMAGE := marlin-dev
help:
@echo "Tasks for local development:"
@echo "* format-pins: Reformat all pins files
@echo "* tests-single-ci: Run a single test from inside the CI"
@echo "* tests-single-local: Run a single test locally"
@echo "* tests-single-local-docker: Run a single test locally, using docker"
@ -27,7 +28,7 @@ help:
tests-single-ci:
export GIT_RESET_HARD=true
$(MAKE) tests-single-local TEST_TARGET=$(TEST_TARGET)
$(MAKE) tests-single-local TEST_TARGET=$(TEST_TARGET) PLATFORMIO_BUILD_FLAGS=-DGITHUB_ACTION
.PHONY: tests-single-ci
tests-single-local:
@ -57,3 +58,12 @@ tests-all-local-docker:
setup-local-docker:
$(CONTAINER_RT_BIN) build -t $(CONTAINER_IMAGE) -f docker/Dockerfile .
.PHONY: setup-local-docker
PINS := $(shell find Marlin/src/pins -mindepth 2 -name '*.h')
.PHONY: $(PINS)
$(PINS): %:
@echo "Formatting $@" && node buildroot/share/scripts/pinsformat.js $@
format-pins: $(PINS)
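For reference, the same discovery-and-format loop as the new `format-pins` target, sketched in Python (assumes `node` and the formatter path shown above):

```python
#!/usr/bin/env python3
# Sketch of `make format-pins`: find every pins header at least two
# directories below Marlin/src/pins and run the Node formatter on it.
import subprocess
from pathlib import Path

for header in sorted(Path('Marlin/src/pins').glob('*/**/*.h')):
    print(f"Formatting {header}")
    subprocess.run(['node', 'buildroot/share/scripts/pinsformat.js', str(header)], check=True)
```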


@ -63,8 +63,8 @@ HARDWARE_MOTHERBOARD ?= 1020
ifeq ($(OS),Windows_NT)
# Windows
ARDUINO_INSTALL_DIR ?= ${HOME}/Arduino
ARDUINO_USER_DIR ?= ${HOME}/Arduino
ARDUINO_INSTALL_DIR ?= ${HOME}/AppData/Local/Arduino
ARDUINO_USER_DIR ?= ${HOME}/Documents/Arduino
else
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
@ -82,11 +82,11 @@ endif
# Arduino source install directory, and version number
# On most linuxes this will be /usr/share/arduino
ARDUINO_INSTALL_DIR ?= ${HOME}/Arduino
ARDUINO_VERSION ?= 106
ARDUINO_INSTALL_DIR ?= ${HOME}/AppData/Local/Arduino # C:/Users/${USERNAME}/AppData/Local/Arduino
ARDUINO_VERSION ?= 10819
# The installed Libraries are in the User folder
ARDUINO_USER_DIR ?= ${HOME}/Arduino
ARDUINO_USER_DIR ?= ${HOME}/Documents/Arduino
# You can optionally set a path to the avr-gcc tools.
# Requires a trailing slash. For example, /usr/local/avr-gcc/bin/
@ -656,18 +656,18 @@ ifeq ($(HARDWARE_VARIANT), $(filter $(HARDWARE_VARIANT),arduino Teensy Sanguino)
# Old libraries (avr-core 1.6.21 < / Arduino < 1.6.8)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/SPI
# New libraries (avr-core >= 1.6.21 / Arduino >= 1.6.8)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/SPI/src
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/arduino/avr/1.8.6/libraries/SPI/src
endif
ifeq ($(IS_MCU),1)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/cores/arduino
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/arduino/avr/1.8.6/cores/arduino
# Old libraries (avr-core 1.6.21 < / Arduino < 1.6.8)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/SPI
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/SoftwareSerial
# New libraries (avr-core >= 1.6.21 / Arduino >= 1.6.8)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/SPI/src
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/SoftwareSerial/src
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/arduino/avr/1.8.6/libraries/SPI/src
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/arduino/avr/1.8.6/libraries/SoftwareSerial/src
endif
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/LiquidCrystal/src
@ -681,17 +681,17 @@ ifeq ($(WIRE), 1)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/Wire
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/Wire/utility
# New libraries (avr-core >= 1.6.21 / Arduino >= 1.6.8)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/Wire/src
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/libraries/Wire/src/utility
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/avr/1.8.6/libraries/Wire/src
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/avr/1.8.6/libraries/Wire/src/utility
endif
ifeq ($(NEOPIXEL), 1)
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/Adafruit_NeoPixel
endif
ifeq ($(U8GLIB), 1)
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib/csrc
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib/cppsrc
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib/fntsrc
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib-HAL
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib-HAL/src
# VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib
# VPATH += $(ARDUINO_INSTALL_DIR)/libraries/U8glib/src
endif
ifeq ($(TMC), 1)
VPATH += $(ARDUINO_INSTALL_DIR)/libraries/TMCStepper/src
@ -700,9 +700,9 @@ endif
ifeq ($(HARDWARE_VARIANT), arduino)
HARDWARE_SUB_VARIANT ?= mega
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/arduino/avr/variants/$(HARDWARE_SUB_VARIANT)
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/avr/1.8.6/variants/$(HARDWARE_SUB_VARIANT)
else ifeq ($(HARDWARE_VARIANT), Sanguino)
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/marlin/avr/variants/sanguino
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/avr/1.8.6/variants/sanguino
else ifeq ($(HARDWARE_VARIANT), archim)
VPATH += $(ARDUINO_INSTALL_DIR)/packages/ultimachine/hardware/sam/1.6.9-b/system/libsam
VPATH += $(ARDUINO_INSTALL_DIR)/packages/ultimachine/hardware/sam/1.6.9-b/system/CMSIS/CMSIS/Include/
@ -718,7 +718,7 @@ else ifeq ($(HARDWARE_VARIANT), archim)
LDLIBS = $(ARDUINO_INSTALL_DIR)/packages/ultimachine/hardware/sam/1.6.9-b/variants/archim/libsam_sam3x8e_gcc_rel.a
else
HARDWARE_SUB_VARIANT ?= standard
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/$(HARDWARE_VARIANT)/variants/$(HARDWARE_SUB_VARIANT)
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/avr/1.8.6/variants/$(HARDWARE_SUB_VARIANT)
endif
LIB_SRC = wiring.c \
@ -733,7 +733,7 @@ endif
ifeq ($(HARDWARE_VARIANT), Teensy)
LIB_SRC = wiring.c
VPATH += $(ARDUINO_INSTALL_DIR)/hardware/teensy/cores/teensy
VPATH += $(ARDUINO_INSTALL_DIR)/packages/arduino/hardware/teensy/cores/teensy
endif
LIB_CXXSRC = WMath.cpp WString.cpp Print.cpp SPI.cpp
@ -880,7 +880,7 @@ AVRDUDE_WRITE_FLASH = -Uflash:w:$(BUILD_DIR)/$(TARGET).hex:i
ifeq ($(shell uname -s), Linux)
AVRDUDE_CONF = /etc/avrdude/avrdude.conf
else
AVRDUDE_CONF = $(ARDUINO_INSTALL_DIR)/hardware/tools/avr/etc/avrdude.conf
AVRDUDE_CONF = $(ARDUINO_INSTALL_DIR)/packages/arduino/tools/avrdude/6.3.0-arduino17/etc/avrdude.conf
endif
AVRDUDE_FLAGS = -D -C$(AVRDUDE_CONF) \
-p$(PROG_MCU) -P$(AVRDUDE_PORT) -c$(AVRDUDE_PROGRAMMER) \


@ -2,7 +2,7 @@
Marlin Firmware
(c) 2011-2020 MarlinFirmware
(c) 2011-2024 MarlinFirmware
Portions of Marlin are (c) by their respective authors.
All code complies with GPLv2 and/or GPLv3
@ -27,7 +27,7 @@ Configuration
- https://github.com/MarlinFirmware/Configurations
Example configurations for several printer models.
- https://www.youtube.com/watch?v=3gwWVFtdg-4
- https://youtu.be/3gwWVFtdg-4
A good 20-minute overview of Marlin configuration by Tom Sanladerer.
(Applies to Marlin 1.0.x, so Jerk and Acceleration should be halved.)
Also... https://www.google.com/search?tbs=vid%3A1&q=configure+marlin


@ -3,10 +3,50 @@
# config.ini - Options to apply before the build
#
[config:base]
#
# ini_use_config - A comma-separated list of actions to apply to the Configuration files.
# The actions will be applied in the listed order.
# - none
# Ignore this file and don't apply any configuration options
#
# - base
# Just apply the options in config:base to the configuration
#
# - minimal
# Just apply the options in config:minimal to the configuration
#
# - all
# Apply all 'config:*' sections in this file to the configuration
#
# - another.ini
# Load another INI file with a path relative to this config.ini file (i.e., within Marlin/)
#
# - https://me.myserver.com/path/to/configs
# Fetch configurations from any URL.
#
# - example/Creality/Ender-5 Plus @ bugfix-2.1.x
# Fetch example configuration files from the MarlinFirmware/Configurations repository
# https://raw.githubusercontent.com/MarlinFirmware/Configurations/bugfix-2.1.x/config/examples/Creality/Ender-5%20Plus/
#
# - example/default @ release-2.0.9.7
# Fetch default configuration files from the MarlinFirmware/Configurations repository
# https://raw.githubusercontent.com/MarlinFirmware/Configurations/release-2.0.9.7/config/default/
#
# - [disable]
# Comment out all #defines in both Configuration.h and Configuration_adv.h. This is useful
# to start with a clean slate before applying any config: options, so only the options explicitly
# set in config.ini will be enabled in the configuration.
#
# - [flatten] (Not yet implemented)
# Produce a flattened set of Configuration.h and Configuration_adv.h files with only the enabled
# #defines and no comments. A clean look, but context-free.
#
ini_use_config = none
# Load all config: sections in this file
;ini_use_config = all
# Disable everything and apply subsequent config:base options
;ini_use_config = [disable], base
# Load config file relative to Marlin/
;ini_use_config = another.ini
# Download configurations from GitHub
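Because config.ini is plain INI, the directive can be inspected with Python's standard configparser. A minimal sketch of how the ordered action list is read (path assumed relative to the repo root):

```python
#!/usr/bin/env python3
# Sketch: read Marlin/config.ini and list the requested actions,
# mirroring how configuration.py interprets ini_use_config.
import configparser

cp = configparser.ConfigParser()
cp.read('Marlin/config.ini')
actions = [a.strip() for a in cp.get('config:base', 'ini_use_config', fallback='none').split(',')]
print("Actions to apply, in order:", actions)
```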


@ -32,6 +32,9 @@ $SED -i~ -e "20,30{/#error/d}" Marlin/Configuration.h
rm Marlin/Configuration.h~
unset IFS; set +f
# Suppress fatal warnings
echo -e "\n#define NO_CONTROLLER_CUSTOM_WIRING_WARNING" >> Marlin/Configuration.h
echo "Building the firmware now..."
$HERE/mftest -s -a -n1 || { echo "Failed"; exit 1; }


@ -9,7 +9,8 @@ SED=$(which gsed sed | head -n1)
shift
while [[ $# > 1 ]]; do
PIN=$1 ; VAL=$2
eval "${SED} -i '/^[[:blank:]]*\(\/\/\)*[[:blank:]]*\(#define \+${PIN}\b\).*$/{s//\2 ${VAL}/;h};\${x;/./{x;q0};x;q9}' Marlin/src/pins/$DIR/pins_${NAM}.h" ||
(echo "ERROR: pins_set Can't find ${PIN}" >&2 && exit 9)
FOUT="${DIR}/pins_${NAM}.h"
eval "${SED} -i '/^[[:blank:]]*\(\/\/\)*[[:blank:]]*\(#define \+${PIN}\b\).*$/{s//\2 ${VAL}/;h};\${x;/./{x;q0};x;q9}' Marlin/src/pins/${FOUT}" ||
(echo "ERROR: pins_set Can't find ${PIN} in ${FOUT}" >&2 && exit 9)
shift 2
done
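For readers who prefer Python to sed, a rough equivalent of the substitution above; the pins file and pin name in the usage line are hypothetical:

```python
#!/usr/bin/env python3
# Sketch of the pins_set edit: set the value of a #define in a pins
# file (uncommenting it if needed), failing if the pin isn't found.
import re, sys
from pathlib import Path

def pins_set(path, pin, val):
    src = Path(path).read_text()
    pat = re.compile(rf'^(\s*)(?://)?\s*(#define\s+{pin}\b).*$', re.M)
    out, n = pat.subn(rf'\g<1>\g<2> {val}', src)
    if n == 0:
        sys.exit(f"ERROR: pins_set Can't find {pin} in {path}")
    Path(path).write_text(out)

pins_set('Marlin/src/pins/ramps/pins_RAMPS.h', 'X_MIN_PIN', '3')
```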


@ -4,9 +4,10 @@
#
TMPDIR=`mktemp -d`
HERE=`dirname "$0"`
# Reformat a single file to tmp/
if uncrustify -l CPP -c ./buildroot/share/extras/uncrustify.cfg -f "$1" >$TMPDIR/uncrustify.out ; then
if uncrustify -l CPP -c "$HERE/../share/extras/uncrustify.cfg" -f "$1" >$TMPDIR/uncrustify.out ; then
cp "$TMPDIR/uncrustify.out" "$1" ; # Replace the original file
else
echo "Something went wrong with uncrustify."


@ -8,28 +8,37 @@
# use_example_configs release-2.0.9.4:Creality/CR-10/CrealityV1
#
# If a configpath has spaces (or quotes) escape them or enquote the path
# If no branch: prefix is given, use configs based on the current branch name.
# e.g., For `latest-2.1.x` name the working branch something like "my_work-2.1.x."
# The branch or tag must first exist at MarlinFirmware/Configurations.
# The fallback branch is bugfix-2.1.x.
#
which curl >/dev/null && TOOL='curl -L -s -S -f -o wgot'
which wget >/dev/null && TOOL='wget -q -O wgot'
CURR=$(git branch 2>/dev/null | grep ^* | sed 's/\* //g')
[[ $CURR == "bugfix-2.0.x" ]] && BRANCH=bugfix-2.0.x || BRANCH=bugfix-2.1.x
REPO=$BRANCH
case "$CURR" in
bugfix-2.*.x ) BRANCH=$CURR ;;
*-2.1.x|2.1.x ) BRANCH=latest-2.1.x ;;
*-2.0.x|2.0.x ) BRANCH=latest-2.0.x ;;
*-1.1.x|1.1.x ) BRANCH=latest-1.1.x ;;
*-1.0.x|1.0.x ) BRANCH=latest-1.0.x ;;
* ) BRANCH=bugfix-2.1.x ;;
esac
if [[ $# > 0 ]]; then
IFS=: read -r PART1 PART2 <<< "$@"
[[ -n $PART2 ]] && { UDIR="$PART2" ; REPO="$PART1" ; } \
[[ -n $PART2 ]] && { UDIR="$PART2" ; BRANCH="$PART1" ; } \
|| { UDIR="$PART1" ; }
RDIR="${UDIR// /%20}"
echo "Fetching $UDIR configurations from $REPO..."
echo "Fetching $UDIR configurations from $BRANCH..."
EXAMPLES="examples/$RDIR"
else
EXAMPLES="default"
fi
CONFIGS="https://raw.githubusercontent.com/MarlinFirmware/Configurations/$REPO/config/${EXAMPLES}"
CONFIGS="https://raw.githubusercontent.com/MarlinFirmware/Configurations/$BRANCH/config/${EXAMPLES}"
restore_configs
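The branch-selection logic, restated as a Python sketch (same fallback; `git branch --show-current` stands in for the grep/sed pipeline):

```python
#!/usr/bin/env python3
# Sketch of the new branch mapping: a bugfix-2.x.x working branch uses
# itself; *-2.1.x style names map to latest-*; anything else falls
# back to bugfix-2.1.x.
import re, subprocess

curr = subprocess.run(['git', 'branch', '--show-current'],
                      capture_output=True, text=True).stdout.strip()

if re.fullmatch(r'bugfix-2\.\d+\.x', curr):
    branch = curr
elif m := re.fullmatch(r'(?:.*-)?((?:2\.[01]|1\.[01])\.x)', curr):
    branch = f'latest-{m[1]}'
else:
    branch = 'bugfix-2.1.x'
print(f"Using MarlinFirmware/Configurations branch: {branch}")
```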


@ -45,6 +45,15 @@
//"program": "${workspaceRoot}/.pio/build/simulator_windows/MarlinSimulator",
//"targetArchitecture": "arm64",
"MIMode": "lldb"
},
{
"name": "Launch Sim (Windows gdb)",
"request": "launch",
"type": "cppdbg",
"cwd": "${workspaceRoot}",
"program": "${workspaceRoot}/.pio/build/simulator_windows/debug/MarlinSimulator.exe",
"MIMode": "gdb",
"miDebuggerPath": "C:/msys64/mingw64/bin/gdb.exe"
}
]
}


@ -1,7 +1,7 @@
MEMORY
{
ram (rwx) : ORIGIN = 0x20000000, LENGTH = 64K - 40
rom (rx) : ORIGIN = 0x08007000, LENGTH = 512K - 28K
rom (rx) : ORIGIN = 0x08007000, LENGTH = 512K - 64K
}
/* Provide memory region aliases for common.inc */


@ -9,7 +9,7 @@ if pioutil.is_pio_build():
board = marlin.env.BoardConfig()
def calculate_crc(contents, seed):
accumulating_xor_value = seed;
accumulating_xor_value = seed
for i in range(0, len(contents), 4):
value = struct.unpack('<I', contents[ i : i + 4])[0]
@ -68,7 +68,7 @@ if pioutil.is_pio_build():
uid_value = uuid.uuid4()
file_key = int(uid_value.hex[0:8], 16)
xor_crc = 0xEF3D4323;
xor_crc = 0xEF3D4323
# the input file is expected to be in chunks of 0x800
# so round the size
@ -123,4 +123,4 @@ if pioutil.is_pio_build():
fwpath.unlink()
marlin.relocate_firmware("0x08008800")
marlin.add_post_action(encrypt);
marlin.add_post_action(encrypt)
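The calculate_crc() helper touched above consumes the firmware as little-endian 32-bit words starting from a seed (0xEF3D4323 in this script). A self-contained sketch; the per-word update is assumed to be a plain XOR here, since the real accumulation step is outside this hunk:

```python
#!/usr/bin/env python3
# Sketch of calculate_crc(): consume the file as little-endian 32-bit
# words starting from a seed. The XOR update is an assumption; the
# real accumulation step is not shown in the diff above.
import struct

def calculate_crc(contents: bytes, seed: int) -> int:
    acc = seed
    for i in range(0, len(contents) - 3, 4):
        value = struct.unpack('<I', contents[i:i + 4])[0]
        acc = (acc ^ value) & 0xFFFFFFFF  # assumed update rule
    return acc

print(hex(calculate_crc(b'\x01\x00\x00\x00\x02\x00\x00\x00', 0xEF3D4323)))
```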


@ -53,10 +53,11 @@ if pioutil.is_pio_build():
# Get a reference to the FEATURE_CONFIG under construction
feat = FEATURE_CONFIG[feature]
# Split up passed lines on commas or newlines and iterate
# Add common options to the features config under construction
# For lib_deps replace a previous instance of the same library
atoms = re.sub(r',\s*', '\n', flines).strip().split('\n')
# Split up passed lines on commas or newlines and iterate.
# Take care to convert Windows '\' paths to Unix-style '/'.
# Add common options to the features config under construction.
# For lib_deps replace a previous instance of the same library.
atoms = re.sub(r',\s*', '\n', flines.replace('\\', '/')).strip().split('\n')
for line in atoms:
parts = line.split('=')
name = parts.pop(0)
@ -91,7 +92,7 @@ if pioutil.is_pio_build():
val = None
if val:
opt = mat[1].upper()
blab("%s.custom_marlin.%s = '%s'" % ( env['PIOENV'], opt, val ))
blab("%s.custom_marlin.%s = '%s'" % ( env['PIOENV'], opt, val ), 2)
add_to_feat_cnf(opt, val)
def get_all_known_libs():
@ -213,7 +214,7 @@ if pioutil.is_pio_build():
#
def MarlinHas(env, feature):
load_marlin_features()
r = re.compile('^' + feature + '$')
r = re.compile('^' + feature + '$', re.IGNORECASE)
found = list(filter(r.match, env['MARLIN_FEATURES']))
# Defines could still be 'false' or '0', so check
@ -226,6 +227,8 @@ if pioutil.is_pio_build():
elif val in env['MARLIN_FEATURES']:
some_on = env.MarlinHas(val)
#blab("%s is %s" % (feature, str(some_on)), 2)
return some_on
validate_pio()
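The new atom-splitting line normalizes Windows '\' path separators before splitting feature options on commas or newlines. A standalone sketch with a made-up options fragment:

```python
#!/usr/bin/env python3
# Sketch of the option-splitting change above: convert Windows '\'
# path separators to '/', then split entries on commas or newlines.
import re

flines = r"""
  lib_deps = LiquidCrystal@1.5.0, arduino-libraries\SD
  build_flags = -DFOO
"""
atoms = re.sub(r',\s*', '\n', flines.replace('\\', '/')).strip().split('\n')
for line in atoms:
    name, _, rest = line.strip().partition('=')
    print(name.strip(), '=>', rest.strip())
```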

51
buildroot/share/PlatformIO/scripts/configuration.py Normal file → Executable file

@ -1,8 +1,9 @@
#!/usr/bin/env python3
#
# configuration.py
# Apply options from config.ini to the existing Configuration headers
#
import re, shutil, configparser
import re, shutil, configparser, datetime
from pathlib import Path
verbose = 0
@ -43,6 +44,7 @@ def apply_opt(name, val, conf=None):
if val in ("on", "", None):
newline = re.sub(r'^(\s*)//+\s*(#define)(\s{1,3})?(\s*)', r'\1\2 \4', line)
elif val == "off":
# TODO: Comment more lines in a multi-line define with \ continuation
newline = re.sub(r'^(\s*)(#define)(\s{1,3})?(\s*)', r'\1//\2 \4', line)
else:
# For options with values, enable and set the value
@ -88,9 +90,38 @@ def apply_opt(name, val, conf=None):
elif not isdef:
break
linenum += 1
lines.insert(linenum, f"{prefix}#define {added:30} // Added by config.ini\n")
currtime = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
lines.insert(linenum, f"{prefix}#define {added:30} // Added by config.ini {currtime}\n")
fullpath.write_text(''.join(lines), encoding='utf-8')
# Disable all (most) defined options in the configuration files.
# Everything in the named sections. Section hint for exceptions may be added.
def disable_all_options():
# Create a regex to match the option and capture parts of the line
regex = re.compile(r'^(\s*)(#define\s+)([A-Z0-9_]+\b)(\s?)(\s*)(.*?)(\s*)(//.*)?$', re.IGNORECASE)
# Disable all enabled options in both Config files
for file in ("Configuration.h", "Configuration_adv.h"):
fullpath = config_path(file)
lines = fullpath.read_text(encoding='utf-8').split('\n')
found = False
for i in range(len(lines)):
line = lines[i]
match = regex.match(line)
if match:
name = match[3].upper()
if name in ('CONFIGURATION_H_VERSION', 'CONFIGURATION_ADV_H_VERSION'): continue
if name.startswith('_'): continue
found = True
# Comment out the define
# TODO: Comment more lines in a multi-line define with \ continuation
lines[i] = re.sub(r'^(\s*)(#define)(\s{1,3})?(\s*)', r'\1//\2 \4', line)
blab(f"Disable {name}")
# If the option was found, write the modified lines
if found:
fullpath.write_text('\n'.join(lines), encoding='utf-8')
# Fetch configuration files from GitHub given the path.
# Return True if any files were fetched.
def fetch_example(url):
@ -130,7 +161,7 @@ def fetch_example(url):
def section_items(cp, sectkey):
return cp.items(sectkey) if sectkey in cp.sections() else []
# Apply all items from a config section
# Apply all items from a config section. Ignore ini_ items outside of config:base and config:root.
def apply_ini_by_name(cp, sect):
iniok = True
if sect in ('config:base', 'config:root'):
@ -194,7 +225,7 @@ def apply_config_ini(cp):
cp2 = configparser.ConfigParser()
cp2.read(config_path(ckey))
apply_sections(cp2, sect)
ckey = 'base';
ckey = 'base'
# (Allow 'example/' as a shortcut for 'examples/')
elif ckey.startswith('example/'):
@ -206,7 +237,17 @@ def apply_config_ini(cp):
fetch_example(ckey)
ckey = 'base'
if ckey == 'all':
#
# [flatten] Write out Configuration.h and Configuration_adv.h files with
# just the enabled options and all other content removed.
#
#if ckey == '[flatten]':
# write_flat_configs()
if ckey == '[disable]':
disable_all_options()
elif ckey == 'all':
apply_sections(cp)
else:
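The commenting performed by [disable] is a single regex substitution that preserves indentation; a sketch of the transform on two sample lines:

```python
#!/usr/bin/env python3
# Sketch of the [disable] transform: comment out an enabled #define
# while keeping indentation, one regex substitution per line.
import re

for line in ('#define PIDTEMP', '  #define BED_MAXTEMP 150'):
    print(re.sub(r'^(\s*)(#define)(\s{1,3})?(\s*)', r'\1//\2 \4', line))
# -> //#define PIDTEMP
# ->   //#define BED_MAXTEMP 150
```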


@ -14,7 +14,7 @@ if pioutil.is_pio_build():
assets_path = Path(env.Dictionary("PROJECT_BUILD_DIR"), env.Dictionary("PIOENV"), "assets")
def download_mks_assets():
print("Downloading MKS Assets")
print("Downloading MKS Assets for TFT_LVGL_UI")
r = requests.get(url, stream=True)
# the user may have a very clean workspace,
# so create the PROJECT_LIBDEPS_DIR directory if it doesn't exist
@ -25,7 +25,7 @@ if pioutil.is_pio_build():
fd.write(chunk)
def copy_mks_assets():
print("Copying MKS Assets")
print("Copying MKS Assets for TFT_LVGL_UI")
output_path = Path(tempfile.mkdtemp())
zip_obj = zipfile.ZipFile(zip_path, 'r')
zip_obj.extractall(output_path)


@ -23,7 +23,7 @@ if pioutil.is_pio_build():
assert isfile(original_file) and isfile(src_file)
shutil.copyfile(original_file, backup_file)
shutil.copyfile(src_file, original_file);
shutil.copyfile(src_file, original_file)
def _touch(path):
with open(path, "w") as fp:


@ -5,7 +5,8 @@
# the appropriate framework variants folder, so that its contents
# will be picked up by PlatformIO just like any other variant.
#
import pioutil
import pioutil, re
marlin_variant_pattern = re.compile("marlin_.*")
if pioutil.is_pio_build():
import shutil,marlin
from pathlib import Path
@ -30,10 +31,11 @@ if pioutil.is_pio_build():
}
platform_name = framewords[platform.__class__.__name__]
else:
platform_name = PackageSpec(platform_packages[0]).name
if platform_name in [ "usb-host-msc", "usb-host-msc-cdc-msc", "usb-host-msc-cdc-msc-2", "usb-host-msc-cdc-msc-3", "tool-stm32duino", "biqu-bx-workaround", "main" ]:
platform_name = "framework-arduinoststm32"
spec = PackageSpec(platform_packages[0])
if spec.uri and '@' in spec.uri:
platform_name = re.sub(r'@.+', '', spec.uri)
else:
platform_name = spec.name
FRAMEWORK_DIR = Path(platform.get_package_dir(platform_name))
assert FRAMEWORK_DIR.is_dir()
@ -44,15 +46,20 @@ if pioutil.is_pio_build():
variant = board.get("build.variant")
#series = mcu_type[:7].upper() + "xx"
# Prepare a new empty folder at the destination
variant_dir = FRAMEWORK_DIR / "variants" / variant
if variant_dir.is_dir():
shutil.rmtree(variant_dir)
if not variant_dir.is_dir():
variant_dir.mkdir()
# Only prepare a new variant if the PlatformIO configuration provides it (board_build.variant).
# This check is important to avoid deleting official board config variants.
if marlin_variant_pattern.match(str(variant).lower()):
# Prepare a new empty folder at the destination
variant_dir = FRAMEWORK_DIR / "variants" / variant
if variant_dir.is_dir():
shutil.rmtree(variant_dir)
if not variant_dir.is_dir():
variant_dir.mkdir()
# Source dir is a local variant sub-folder
source_dir = Path("buildroot/share/PlatformIO/variants", variant)
assert source_dir.is_dir()
# Source dir is a local variant sub-folder
source_dir = Path("buildroot/share/PlatformIO/variants", variant)
assert source_dir.is_dir()
marlin.copytree(source_dir, variant_dir)
print("Copying variant " + str(variant) + " to framework directory...")
marlin.copytree(source_dir, variant_dir)
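The effect of the new guard, in isolation: only variant names matching the marlin_ pattern (Marlin's own copies) are ever replaced inside the framework directory. A tiny sketch with two hypothetical variant names:

```python
#!/usr/bin/env python3
# Sketch of the variant guard: only "marlin_*" variants (our own
# copies) may be replaced inside the framework directory.
import re

marlin_variant_pattern = re.compile("marlin_.*")
for variant in ('MARLIN_F103Rx', 'NUCLEO_F767ZI'):
    safe = bool(marlin_variant_pattern.match(variant.lower()))
    print(variant, '-> replace' if safe else '-> leave framework copy alone')
```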


@ -32,4 +32,4 @@ if pioutil.is_pio_build():
fw_path.rename(fws_path)
import marlin
marlin.add_post_action(addboot);
marlin.add_post_action(addboot)


@ -70,4 +70,4 @@ def encrypt_mks(source, target, env, new_name):
fwpath.unlink()
def add_post_action(action):
env.AddPostAction(str(Path("$BUILD_DIR", "${PROGNAME}.bin")), action);
env.AddPostAction(str(Path("$BUILD_DIR", "${PROGNAME}.bin")), action)


@ -1,7 +1,7 @@
#
# offset_and_rename.py
#
# - If 'build.offset' is provided, either by JSON or by the environment...
# - If 'board_build.offset' is provided, either by JSON or by the environment...
# - Set linker flag LD_FLASH_OFFSET and relocate the VTAB based on 'build.offset'.
# - Set linker flag LD_MAX_DATA_SIZE based on 'build.maximum_ram_size'.
# - Define STM32_FLASH_SIZE from 'upload.maximum_size' for use by Flash-based EEPROM emulation.
@ -60,6 +60,10 @@ if pioutil.is_pio_build():
def rename_target(source, target, env):
from pathlib import Path
Path(target[0].path).replace(Path(target[0].dir.path, new_name))
from datetime import datetime
from os import path
_newpath = Path(target[0].dir.path, datetime.now().strftime(new_name.replace('{date}', '%Y%m%d').replace('{time}', '%H%M%S')))
Path(target[0].path).replace(_newpath)
env['PROGNAME'] = path.splitext(_newpath)[0]
marlin.add_post_action(rename_target)
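The rename hook now expands {date} and {time} placeholders in the configured firmware name. A sketch with a hypothetical name template:

```python
#!/usr/bin/env python3
# Sketch of the new rename behavior: expand {date} and {time} in the
# configured firmware name using the current build timestamp.
from datetime import datetime

new_name = 'firmware-{date}-{time}.bin'
expanded = datetime.now().strftime(
    new_name.replace('{date}', '%Y%m%d').replace('{time}', '%H%M%S'))
print(expanded)  # e.g. firmware-20231215-173736.bin
```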


@ -72,7 +72,7 @@ if pioutil.is_pio_build():
result = check_envs("env:"+build_env, board_envs, config)
if not result:
err = "Error: Build environment '%s' is incompatible with %s. Use one of these: %s" % \
err = "Error: Build environment '%s' is incompatible with %s. Use one of these environments: %s" % \
( build_env, motherboard, ", ".join([ e[4:] for e in board_envs if e.startswith("env:") ]) )
raise SystemExit(err)
@ -90,7 +90,7 @@ if pioutil.is_pio_build():
# Find the name.cpp.o or name.o and remove it
#
def rm_ofile(subdir, name):
build_dir = Path(env['PROJECT_BUILD_DIR'], build_env);
build_dir = Path(env['PROJECT_BUILD_DIR'], build_env)
for outdir in (build_dir, build_dir / "debug"):
for ext in (".cpp.o", ".o"):
fpath = outdir / "src/src" / subdir / (name + ext)


@ -2,8 +2,14 @@
#
# schema.py
#
# Used by signature.py via common-dependencies.py to generate a schema file during the PlatformIO build.
# This script can also be run standalone from within the Marlin repo to generate all schema files.
# Used by signature.py via common-dependencies.py to generate a schema file during the PlatformIO build
# when CONFIG_EXPORT is defined in the configuration.
#
# This script can also be run standalone from within the Marlin repo to generate JSON and YAML schema files.
#
# This script is a companion to abm/js/schema.js in the MarlinFirmware/AutoBuildMarlin project, which has
# been extended to evaluate conditions and can determine what options are actually enabled, not just which
# options are uncommented. That functionality will eventually be migrated to this script for standalone use.
#
import re,json
from pathlib import Path
@ -85,7 +91,8 @@ def extract():
NORMAL = 0 # No condition yet
BLOCK_COMMENT = 1 # Looking for the end of the block comment
EOL_COMMENT = 2 # EOL comment started, maybe add the next comment?
GET_SENSORS = 3 # Gathering temperature sensor options
SLASH_COMMENT = 3 # Block-like comment, starting with aligned //
GET_SENSORS = 4 # Gathering temperature sensor options
ERROR = 9 # Syntax error
# List of files to process, with shorthand
@ -94,6 +101,8 @@ def extract():
sch_out = { 'basic':{}, 'advanced':{} }
# Regex for #define NAME [VALUE] [COMMENT] with sanitized line
defgrep = re.compile(r'^(//)?\s*(#define)\s+([A-Za-z0-9_]+)\s*(.*?)\s*(//.+)?$')
# Pattern to match a float value
flt = r'[-+]?\s*(\d+\.|\d*\.\d+)([eE][-+]?\d+)?[fF]?'
# Defines to ignore
ignore = ('CONFIGURATION_H_VERSION', 'CONFIGURATION_ADV_H_VERSION', 'CONFIG_EXAMPLES_DIR', 'CONFIG_EXPORT')
# Start with unknown state
@ -107,6 +116,7 @@ def extract():
line_number = 0 # Counter for the line number of the file
conditions = [] # Create a condition stack for the current file
comment_buff = [] # A temporary buffer for comments
prev_comment = '' # Copy before reset for an EOL comment
options_json = '' # A buffer for the most recent options JSON found
eol_options = False # The options came from end of line, so only apply once
join_line = False # A flag that the line should be joined with the previous one
@ -143,9 +153,13 @@ def extract():
if not defmatch and the_line.startswith('//'):
comment_buff.append(the_line[2:].strip())
else:
last_added_ref['comment'] = ' '.join(comment_buff)
comment_buff = []
state = Parse.NORMAL
cline = ' '.join(comment_buff)
comment_buff = []
if cline != '':
# A (block or slash) comment was already added
cfield = 'notes' if 'comment' in last_added_ref else 'comment'
last_added_ref[cfield] = cline
def use_comment(c, opt, sec, bufref):
if c.startswith(':'): # If the comment starts with : then it has magic JSON
@ -162,6 +176,15 @@ def extract():
bufref.append(c)
return opt, sec
# For slash comments, capture consecutive slash comments.
# The comment will be applied to the next #define.
if state == Parse.SLASH_COMMENT:
if not defmatch and the_line.startswith('//'):
use_comment(the_line[2:].strip(), options_json, section, comment_buff)
continue
else:
state = Parse.NORMAL
# In a block comment, capture lines up to the end of the comment.
# Assume nothing follows the comment closure.
if state in (Parse.BLOCK_COMMENT, Parse.GET_SENSORS):
@ -178,19 +201,19 @@ def extract():
state = Parse.NORMAL
# Strip the leading '*' from block comments
if cline.startswith('*'): cline = cline[1:].strip()
cline = re.sub(r'^\* ?', '', cline)
# Collect temperature sensors
if state == Parse.GET_SENSORS:
sens = re.match(r'^(-?\d+)\s*:\s*(.+)$', cline)
if sens:
s2 = sens[2].replace("'","''")
options_json += f"{sens[1]}:'{s2}', "
options_json += f"{sens[1]}:'{sens[1]} - {s2}', "
elif state == Parse.BLOCK_COMMENT:
# Look for temperature sensors
if cline == "Temperature sensors available:":
if re.match(r'temperature sensors.*:', cline, re.IGNORECASE):
state, cline = Parse.GET_SENSORS, "Temperature Sensors"
options_json, section = use_comment(cline, options_json, section, comment_buff)
@ -216,15 +239,19 @@ def extract():
# Comment after a define may be continued on the following lines
if defmatch != None and cpos > 10:
state = Parse.EOL_COMMENT
prev_comment = '\n'.join(comment_buff)
comment_buff = []
else:
state = Parse.SLASH_COMMENT
# Process the start of a new comment
if cpos != -1:
comment_buff = []
cline, line = line[cpos+2:].strip(), line[:cpos].strip()
if state == Parse.BLOCK_COMMENT:
# Strip leading '*' from block comments
if cline.startswith('*'): cline = cline[1:].strip()
cline = re.sub(r'^\* ?', '', cline)
else:
# Expire end-of-line options after first use
if cline.startswith(':'): eol_options = True
@ -295,32 +322,33 @@ def extract():
}
# Type is based on the value
if val == '':
value_type = 'switch'
elif re.match(r'^(true|false)$', val):
value_type = 'bool'
val = val == 'true'
elif re.match(r'^[-+]?\s*\d+$', val):
value_type = 'int'
val = int(val)
elif re.match(r'[-+]?\s*(\d+\.|\d*\.\d+)([eE][-+]?\d+)?[fF]?', val):
value_type = 'float'
val = float(val.replace('f',''))
else:
value_type = 'string' if val[0] == '"' \
else 'char' if val[0] == "'" \
else 'state' if re.match(r'^(LOW|HIGH)$', val) \
else 'enum' if re.match(r'^[A-Za-z0-9_]{3,}$', val) \
else 'int[]' if re.match(r'^{(\s*[-+]?\s*\d+\s*(,\s*)?)+}$', val) \
else 'float[]' if re.match(r'^{(\s*[-+]?\s*(\d+\.|\d*\.\d+)([eE][-+]?\d+)?[fF]?\s*(,\s*)?)+}$', val) \
else 'array' if val[0] == '{' \
else ''
value_type = \
'switch' if val == '' \
else 'bool' if re.match(r'^(true|false)$', val) \
else 'int' if re.match(r'^[-+]?\s*\d+$', val) \
else 'ints' if re.match(r'^([-+]?\s*\d+)(\s*,\s*[-+]?\s*\d+)+$', val) \
else 'floats' if re.match(rf'({flt}(\s*,\s*{flt})+)', val) \
else 'float' if re.match(f'^({flt})$', val) \
else 'string' if val[0] == '"' \
else 'char' if val[0] == "'" \
else 'state' if re.match(r'^(LOW|HIGH)$', val) \
else 'enum' if re.match(r'^[A-Za-z0-9_]{3,}$', val) \
else 'int[]' if re.match(r'^{\s*[-+]?\s*\d+(\s*,\s*[-+]?\s*\d+)*\s*}$', val) \
else 'float[]' if re.match(r'^{{\s*{flt}(\s*,\s*{flt})*\s*}}$', val) \
else 'array' if val[0] == '{' \
else ''
val = (val == 'true') if value_type == 'bool' \
else int(val) if value_type == 'int' \
else val.replace('f','') if value_type == 'floats' \
else float(val.replace('f','')) if value_type == 'float' \
else val
if val != '': define_info['value'] = val
if value_type != '': define_info['type'] = value_type
# Join up accumulated conditions with &&
if conditions: define_info['requires'] = ' && '.join(sum(conditions, []))
if conditions: define_info['requires'] = '(' + ') && ('.join(sum(conditions, [])) + ')'
# If the comment_buff is not empty, add the comment to the info
if comment_buff:
@ -383,25 +411,35 @@ def main():
if schema:
# Get the first command line argument
# Get the command line arguments after the script name
import sys
if len(sys.argv) > 1:
arg = sys.argv[1]
else:
arg = 'some'
args = sys.argv[1:]
if len(args) == 0: args = ['some']
# Does the given array intersect at all with args?
def inargs(c): return len(set(args) & set(c)) > 0
# Help / Unknown option
unk = not inargs(['some','json','jsons','group','yml','yaml'])
if (unk): print(f"Unknown option: '{args[0]}'")
if inargs(['-h', '--help']) or unk:
print("Usage: schema.py [some|json|jsons|group|yml|yaml]...")
print(" some = json + yml")
print(" jsons = json + group")
return
# JSON schema
if arg in ['some', 'json', 'jsons']:
if inargs(['some', 'json', 'jsons']):
print("Generating JSON ...")
dump_json(schema, Path('schema.json'))
# JSON schema (wildcard names)
if arg in ['group', 'jsons']:
if inargs(['group', 'jsons']):
group_options(schema)
dump_json(schema, Path('schema_grouped.json'))
# YAML
if arg in ['some', 'yml', 'yaml']:
if inargs(['some', 'yml', 'yaml']):
try:
import yaml
except ImportError:
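The rewritten classifier is one long conditional expression over the define's value. A condensed, slightly reordered sketch covering a few of its branches (not the script's full list):

```python
#!/usr/bin/env python3
# Condensed sketch of the value-type classification above for a
# #define value, covering a subset of the branches.
import re

flt = r'[-+]?\s*(\d+\.|\d*\.\d+)([eE][-+]?\d+)?[fF]?'

def classify(val: str) -> str:
    return 'switch' if val == '' \
      else 'bool'   if re.match(r'^(true|false)$', val) \
      else 'int'    if re.match(r'^[-+]?\s*\d+$', val) \
      else 'float'  if re.match(f'^({flt})$', val) \
      else 'string' if val[:1] == '"' \
      else 'array'  if val[:1] == '{' \
      else 'enum'   if re.match(r'^[A-Za-z0-9_]{3,}$', val) \
      else ''

for v in ('', 'true', '-3', '4.5f', '"Ender-3"', '{ 1, 2 }', 'PIDTEMP'):
    print(repr(v), '->', classify(v))
```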

47
buildroot/share/PlatformIO/scripts/signature.py Normal file → Executable file

@ -1,3 +1,4 @@
#!/usr/bin/env python3
#
# signature.py
#
@ -44,35 +45,35 @@ def compress_file(filepath, storedname, outpath):
zipf.write(filepath, arcname=storedname, compress_type=zipfile.ZIP_BZIP2, compresslevel=9)
#
# Compute the build signature. The idea is to extract all defines in the configuration headers
# to build a unique reversible signature from this build so it can be included in the binary
# We can reverse the signature to get a 1:1 equivalent configuration file
# Compute the build signature by extracting all configuration settings and
# building a unique reversible signature that can be included in the binary.
# The signature can be reversed to get a 1:1 equivalent configuration file.
#
def compute_build_signature(env):
if 'BUILD_SIGNATURE' in env:
return
build_path = Path(env['PROJECT_BUILD_DIR'], env['PIOENV'])
marlin_json = build_path / 'marlin_config.json'
marlin_zip = build_path / 'mc.zip'
# Definitions from these files will be kept
files_to_keep = [ 'Marlin/Configuration.h', 'Marlin/Configuration_adv.h' ]
build_path = Path(env['PROJECT_BUILD_DIR'], env['PIOENV'])
# Check if we can skip processing
hashes = ''
for header in files_to_keep:
hashes += get_file_sha256sum(header)[0:10]
marlin_json = build_path / 'marlin_config.json'
marlin_zip = build_path / 'mc.zip'
# Read existing config file
# Read a previously exported JSON file
# Same configuration, skip recomputing the build signature
same_hash = False
try:
with marlin_json.open() as infile:
conf = json.load(infile)
if conf['__INITIAL_HASH'] == hashes:
# Same configuration, skip recomputing the building signature
same_hash = conf['__INITIAL_HASH'] == hashes
if same_hash:
compress_file(marlin_json, 'marlin_config.json', marlin_zip)
return
except:
pass
@ -125,9 +126,6 @@ def compute_build_signature(env):
# Remove all boards now
if key.startswith("BOARD_") and key != "BOARD_INFO_NAME":
continue
# Remove all keys ending by "_NAME" as it does not make a difference to the configuration
if key.endswith("_NAME") and key != "CUSTOM_MACHINE_NAME":
continue
# Remove all keys ending by "_T_DECLARED" as it's a copy of extraneous system stuff
if key.endswith("_T_DECLARED"):
continue
@ -196,7 +194,7 @@ def compute_build_signature(env):
outfile.write(ini_fmt.format(key.lower(), ' = ' + val))
#
# Produce a schema.json file if CONFIG_EXPORT == 3
# CONFIG_EXPORT 3 = schema.json, 4 = schema.yml
#
if config_dump >= 3:
try:
@ -207,7 +205,7 @@ def compute_build_signature(env):
if conf_schema:
#
# Produce a schema.json file if CONFIG_EXPORT == 3
# 3 = schema.json
#
if config_dump in (3, 13):
print("Generating schema.json ...")
@ -217,7 +215,7 @@ def compute_build_signature(env):
schema.dump_json(conf_schema, build_path / 'schema_grouped.json')
#
# Produce a schema.yml file if CONFIG_EXPORT == 4
# 4 = schema.yml
#
elif config_dump == 4:
print("Generating schema.yml ...")
@ -243,8 +241,9 @@ def compute_build_signature(env):
#
# Produce a JSON file for CONFIGURATION_EMBEDDING or CONFIG_EXPORT == 1
# Skip if an identical JSON file was already present.
#
if config_dump == 1 or 'CONFIGURATION_EMBEDDING' in defines:
if not same_hash and (config_dump == 1 or 'CONFIGURATION_EMBEDDING' in defines):
with marlin_json.open('w') as outfile:
json.dump(data, outfile, separators=(',', ':'))
@ -255,9 +254,10 @@ def compute_build_signature(env):
return
# Compress the JSON file as much as we can
compress_file(marlin_json, 'marlin_config.json', marlin_zip)
if not same_hash:
compress_file(marlin_json, 'marlin_config.json', marlin_zip)
# Generate a C source file for storing this array
# Generate a C source file containing the entire ZIP file as an array
with open('Marlin/src/mczip.h','wb') as result_file:
result_file.write(
b'#ifndef NO_CONFIGURATION_EMBEDDING_WARNING\n'
@ -274,3 +274,8 @@ def compute_build_signature(env):
if count % 16:
result_file.write(b'\n')
result_file.write(b'};\n')
if __name__ == "__main__":
# Build required. From command line just explain usage.
print("Use schema.py to export JSON and YAML from the command-line.")
print("Build Marlin with CONFIG_EXPORT 2 to export 'config.ini'.")


@ -27,4 +27,4 @@
#ifdef USBCON
USB_DM = PA_11,
USB_DP = PA_12,
#endif
#endif


@ -413,7 +413,7 @@ const PinMap PinMap_USB_OTG_HS[] = {
*/
{NC, NP, 0}
};
#endif
#ifdef HAL_SD_MODULE_ENABLED
WEAK const PinMap PinMap_SD[] = {
@ -430,4 +430,3 @@ WEAK const PinMap PinMap_SD[] = {
{NC, NP, 0}
};
#endif
#endif


@ -100,12 +100,12 @@
/*
* SDIO Pins
*/
#define BOARD_SDIO_D0 PC8
#define BOARD_SDIO_D1 PC9
#define BOARD_SDIO_D2 PC10
#define BOARD_SDIO_D3 PC11
#define BOARD_SDIO_CLK PC12
#define BOARD_SDIO_CMD PD2
#define BOARD_SDIO_D0 PC8
#define BOARD_SDIO_D1 PC9
#define BOARD_SDIO_D2 PC10
#define BOARD_SDIO_D3 PC11
#define BOARD_SDIO_CLK PC12
#define BOARD_SDIO_CMD PD2
/* Pin aliases: these give the GPIO port/bit for each pin as an
* enum. These are optional, but recommended. They make it easier to


@ -100,12 +100,12 @@
/*
* SDIO Pins
*/
#define BOARD_SDIO_D0 PC8
#define BOARD_SDIO_D1 PC9
#define BOARD_SDIO_D2 PC10
#define BOARD_SDIO_D3 PC11
#define BOARD_SDIO_CLK PC12
#define BOARD_SDIO_CMD PD2
#define BOARD_SDIO_D0 PC8
#define BOARD_SDIO_D1 PC9
#define BOARD_SDIO_D2 PC10
#define BOARD_SDIO_D3 PC11
#define BOARD_SDIO_CLK PC12
#define BOARD_SDIO_CMD PD2
/* Pin aliases: these give the GPIO port/bit for each pin as an
* enum. These are optional, but recommended. They make it easier to


@ -1,4 +1,4 @@
cmake_minimum_required(VERSION 2.8)
cmake_minimum_required(VERSION 3.5)
#====================================================================#
# Usage under Linux: #
# #
@ -24,21 +24,67 @@ set(SCRIPT_BRANCH 1.0.2) #Set to wanted marlin-cmake release tag or branch
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake)
file(DOWNLOAD https://github.com/tohara/marlin-cmake/archive/${SCRIPT_BRANCH}.tar.gz
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-src.tar.gz SHOW_PROGRESS)
file(DOWNLOAD https://github.com/tohara/marlin-cmake/archive/${SCRIPT_BRANCH}.tar.gz
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-src.tar.gz SHOW_PROGRESS)
execute_process(COMMAND ${CMAKE_COMMAND} -E tar -xvf ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-src.tar.gz WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR})
file(RENAME ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-${SCRIPT_BRANCH} ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake)
file(REMOVE ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-src.tar.gz)
execute_process(COMMAND ${CMAKE_COMMAND} -E tar -xvf ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-src.tar.gz WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR})
file(RENAME ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-${SCRIPT_BRANCH} ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake)
file(REMOVE ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake-src.tar.gz)
endif()
if(WIN32 AND NOT EXISTS ${CMAKE_BINARY_DIR}/make.exe)
file(COPY ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/resources/make.exe DESTINATION ${CMAKE_BINARY_DIR}/)
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/modules/Arduino_SDK.cmake)
file(DOWNLOAD https://raw.githubusercontent.com/tohara/marlin-cmake/master/modules/Arduino_SDK.cmake
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/modules/Arduino_SDK.cmake SHOW_PROGRESS)
endif()
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/modules/marlin_cmake_functions.cmake)
file(DOWNLOAD https://raw.githubusercontent.com/tohara/marlin-cmake/master/modules/marlin_cmake_functions.cmake
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/modules/marlin_cmake_functions.cmake SHOW_PROGRESS)
endif()
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/Platform/Arduino.cmake)
file(DOWNLOAD https://raw.githubusercontent.com/tohara/marlin-cmake/master/Platform/Arduino.cmake
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/Platform/Arduino.cmake SHOW_PROGRESS)
endif()
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/settings/marlin_boards.txt)
file(DOWNLOAD https://raw.githubusercontent.com/tohara/marlin-cmake/master/settings/marlin_boards.txt
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/settings/marlin_boards.txt SHOW_PROGRESS)
endif()
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/toolchain/ArduinoToolchain.cmake)
file(DOWNLOAD https://raw.githubusercontent.com/tohara/marlin-cmake/master/toolchain/ArduinoToolchain.cmake
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/toolchain/ArduinoToolchain.cmake SHOW_PROGRESS)
endif()
if(WIN32)
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/resources/make.exe)
file(DOWNLOAD https://raw.githubusercontent.com/tohara/marlin-cmake/master/resources/make.exe
${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/resources/make.exe SHOW_PROGRESS)
endif()
endif(WIN32)
if(NOT EXISTS ${CMAKE_CURRENT_LIST_DIR}/arduino-1.8.19)
file(DOWNLOAD https://downloads.arduino.cc/arduino-1.8.19-windows.zip
${CMAKE_CURRENT_LIST_DIR}/arduino-1.8.19-windows.zip SHOW_PROGRESS)
execute_process(COMMAND ${CMAKE_COMMAND} -E tar -xvzf ${CMAKE_CURRENT_LIST_DIR}/arduino-1.8.19-windows.zip WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR})
file(REMOVE ${CMAKE_CURRENT_LIST_DIR}/arduino-1.8.19-windows.zip)
endif()
# Print CMake version
message("-- Running CMake version: " ${CMAKE_VERSION})
# Replace the CMake Ver. in the Arduino.cmake
file(READ "${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/Platform/Arduino.cmake" ORIGINAL_FILE_CONTENTS)
string(REPLACE "cmake_minimum_required(VERSION 2.8.5)" "cmake_minimum_required(VERSION 3.5)" NEW_FILE_CONTENTS "${ORIGINAL_FILE_CONTENTS}")
file(WRITE "${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/Platform/Arduino.cmake" "${NEW_FILE_CONTENTS}")
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/modules)
#====================================================================#
@ -46,9 +92,10 @@ set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_LIST_DIR}/marlin-cma
# It can also be set from command line. eg.: #
# cmake .. -DARDUINO_SDK_PATH="/path/to/arduino-1.x.x" #
#====================================================================#
#set(ARDUINO_SDK_PATH ${CMAKE_CURRENT_LIST_DIR}/arduino-1.6.8)
set(ARDUINO_SDK_PATH ${CMAKE_CURRENT_LIST_DIR}/arduino-1.8.19)
#set(ARDUINO_SDK_PATH /Applications/Arduino.app/Contents/Java)
#set(ARDUINO_SDK_PATH $HOME/ArduinoAddons/Arduino_1.6.x)
#====================================================================#
# Set included cmake files #
#====================================================================#
@ -62,6 +109,19 @@ set(CMAKE_TOOLCHAIN_FILE ${CMAKE_CURRENT_LIST_DIR}/marlin-cmake/toolchain/Arduin
#====================================================================#
# Setup Project #
# #
# If you receive this error: #
# 'Unknown CMake command "_cmake_record_install_prefix".' #
# #
# Go to the file in your CMake directory. #
# #
# For Windows: cmake\Modules\Platform\WindowsPaths.cmake #
# For Linux: cmake/Modules/Platform/UnixPaths.cmake #
# #
# Comment out "_cmake_record_install_prefix()" #
# - OR - #
# Add "include(CMakeSystemSpecificInformation)" above the line. #
# #
#====================================================================#
project(Marlin C CXX)
@ -79,7 +139,6 @@ project(Marlin C CXX)
print_board_list()
print_programmer_list()
#====================================================================#
# Get motherboard settings from Configuration.h #
# setup_motherboard(TARGET Marlin_src_folder) #
@ -105,9 +164,9 @@ set(${PROJECT_NAME}_SRCS "${SOURCES};../../../Marlin/Marlin.ino")
# cmake .. -DUPLOAD_PORT=/dev/ttyACM0 #
#====================================================================#
if(UPLOAD_PORT)
set(${PROJECT_NAME}_PORT ${UPLOAD_PORT})
set(${PROJECT_NAME}_PORT ${UPLOAD_PORT})
else()
set(${PROJECT_NAME}_PORT /dev/ttyACM0)
set(${PROJECT_NAME}_PORT /dev/ttyACM0)
endif()
#====================================================================#


@ -1,6 +1,6 @@
/**
* Marlin 3D Printer Firmware
* Copyright (c) 2021 MarlinFirmware [https://github.com/MarlinFirmware/Marlin]
* Copyright (c) 2023 MarlinFirmware [https://github.com/MarlinFirmware/Marlin]
*
* Based on Sprinter and grbl.
* Copyright (c) 2011 Camiel Gubbels / Erik van der Zalm


@ -0,0 +1,5 @@
/**
* $(function) : Description pending
*
* $(javaparam)
*/

File diff suppressed because it is too large


@ -144,7 +144,7 @@ if [[ $ACTION == "init" ]]; then
find config -name "Conf*.h" -print0 | while read -d $'\0' fn ; do
fldr=$(dirname "$fn")
blank_line=$(awk '/^\s*$/ {print NR; exit}' "$fn")
$SED -i~ "${blank_line}i\\\n#define CONFIG_EXAMPLES_DIR \"$fldr\"\\ " "$fn"
$SED -i~ "${blank_line}i\\\n#define CONFIG_EXAMPLES_DIR \"$fldr\"" "$fn"
rm -f "$fn~"
done
}


@ -8,7 +8,7 @@ Marlin Firmware Commands:
firstpush ... Push and set-upstream the current branch to 'origin'
ghpc ........ Push the current branch to its upstream branch
ghtp ........ Set the transfer protolcol for all your remotes
ghtp ........ Set the transfer protocol for all your remotes
mfadd ....... Fetch a remote branch from any Marlin fork
mfclean ..... Attempt to clean up merged and deleted branches
mfdoc ....... Build the website, serve locally, and browse
@ -25,4 +25,22 @@ Marlin Firmware Commands:
Enter [command] --help for more information.
Build / Test Commands:
mftest ............... Run a platform test locally with PlatformIO
build_all_examples ... Build all configurations of a branch, stop on error
Modify Configuration.h / Configuration_adv.h:
opt_add .............. Add a configuration option (to the top of Configuration.h)
opt_disable .......... Disable a configuration option
opt_enable ........... Enable a configuration option
opt_set .............. Set the value of a configuration option
use_example_configs .. Download configs from a remote branch on GitHub
Modify pins files:
pins_set ............. Set the value of a pin in a pins file
pinsformat.js ........ Node.js script to format pins files

Binary file not shown (image, 26 KiB).


@ -11,11 +11,14 @@ import sys
import datetime
import random
try:
import heatshrink
import heatshrink2 as heatshrink
heatshrink_exists = True
except ImportError:
heatshrink_exists = False
try:
import heatshrink
heatshrink_exists = True
except ImportError:
heatshrink_exists = False
def millis():
return time.perf_counter() * 1000
@ -72,7 +75,7 @@ class Protocol(object):
self.device = device
self.baud = baud
self.block_size = int(bsize)
self.simulate_errors = max(min(simerr, 1.0), 0.0);
self.simulate_errors = max(min(simerr, 1.0), 0.0)
self.connected = True
self.response_timeout = timeout
@ -234,8 +237,8 @@ class Protocol(object):
# checksum 16 fletchers
def checksum(self, cs, value):
cs_low = (((cs & 0xFF) + value) % 255);
return ((((cs >> 8) + cs_low) % 255) << 8) | cs_low;
cs_low = (((cs & 0xFF) + value) % 255)
return ((((cs >> 8) + cs_low) % 255) << 8) | cs_low
def build_checksum(self, buffer):
cs = 0
@ -267,7 +270,7 @@ class Protocol(object):
def response_ok(self, data):
try:
packet_id = int(data);
packet_id = int(data)
except ValueError:
return
if packet_id != self.sync:
@ -276,7 +279,7 @@ class Protocol(object):
self.packet_status = 1
def response_resend(self, data):
packet_id = int(data);
packet_id = int(data)
self.errors += 1
if not self.syncronised:
print("Retrying syncronisation")
@ -327,7 +330,7 @@ class FileTransferProtocol(object):
return self.responses.popleft()
def connect(self):
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.QUERY);
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.QUERY)
token, data = self.await_response()
if token != 'PFT:version:':
@ -349,7 +352,7 @@ class FileTransferProtocol(object):
timeout = TimeOut(5000)
token = None
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.OPEN, payload);
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.OPEN, payload)
while token != 'PFT:success' and not timeout.timedout():
try:
token, data = self.await_response(1000)
@ -360,7 +363,7 @@ class FileTransferProtocol(object):
print("Broken transfer detected, purging")
self.abort()
time.sleep(0.1)
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.OPEN, payload);
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.OPEN, payload)
timeout.reset()
elif token == 'PFT:fail':
raise Exception("Can not open file on client")
@ -369,10 +372,10 @@ class FileTransferProtocol(object):
raise ReadTimeout()
def write(self, data):
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.WRITE, data);
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.WRITE, data)
def close(self):
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.CLOSE);
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.CLOSE)
token, data = self.await_response(1000)
if token == 'PFT:success':
print("File closed")
@ -385,7 +388,7 @@ class FileTransferProtocol(object):
return False
def abort(self):
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.ABORT);
self.protocol.send(FileTransferProtocol.protocol_id, FileTransferProtocol.Packet.ABORT)
token, data = self.await_response()
if token == 'PFT:success':
print("Transfer Aborted")
@ -393,18 +396,19 @@ class FileTransferProtocol(object):
def copy(self, filename, dest_filename, compression, dummy):
self.connect()
compression_support = heatshrink_exists and self.compression['algorithm'] == 'heatshrink' and compression
if compression and (not heatshrink_exists or not self.compression['algorithm'] == 'heatshrink'):
print("Compression not supported by client")
#compression_support = False
has_heatshrink = heatshrink_exists and self.compression['algorithm'] == 'heatshrink'
if compression and not has_heatshrink:
hs = '2' if sys.version_info[0] > 2 else ''
print("Compression not supported by client. Use 'pip install heatshrink%s' to fix." % hs)
compression = False
data = open(filename, "rb").read()
filesize = len(data)
self.open(dest_filename, compression_support, dummy)
self.open(dest_filename, compression, dummy)
block_size = self.protocol.block_size
if compression_support:
if compression:
data = heatshrink.encode(data, window_sz2=self.compression['window'], lookahead_sz2=self.compression['lookahead'])
cratio = filesize / len(data)
@ -419,17 +423,17 @@ class FileTransferProtocol(object):
self.write(data[start:end])
kibs = (( (i+1) * block_size) / 1024) / (millis() + 1 - start_time) * 1000
if (i / blocks) >= dump_pctg:
print("\r{0:2.0f}% {1:4.2f}KiB/s {2} Errors: {3}".format((i / blocks) * 100, kibs, "[{0:4.2f}KiB/s]".format(kibs * cratio) if compression_support else "", self.protocol.errors), end='')
print("\r{0:2.0f}% {1:4.2f}KiB/s {2} Errors: {3}".format((i / blocks) * 100, kibs, "[{0:4.2f}KiB/s]".format(kibs * cratio) if compression else "", self.protocol.errors), end='')
dump_pctg += 0.1
if self.protocol.errors > 0:
# Dump last status (errors may not be visible)
print("\r{0:2.0f}% {1:4.2f}KiB/s {2} Errors: {3} - Aborting...".format((i / blocks) * 100, kibs, "[{0:4.2f}KiB/s]".format(kibs * cratio) if compression_support else "", self.protocol.errors), end='')
print("\r{0:2.0f}% {1:4.2f}KiB/s {2} Errors: {3} - Aborting...".format((i / blocks) * 100, kibs, "[{0:4.2f}KiB/s]".format(kibs * cratio) if compression else "", self.protocol.errors), end='')
print("") # New line to break the transfer speed line
self.close()
print("Transfer aborted due to protocol errors")
#raise Exception("Transfer aborted due to protocol errors")
return False;
print("\r{0:2.0f}% {1:4.2f}KiB/s {2} Errors: {3}".format(100, kibs, "[{0:4.2f}KiB/s]".format(kibs * cratio) if compression_support else "", self.protocol.errors)) # no one likes transfers finishing at 99.8%
return False
print("\r{0:2.0f}% {1:4.2f}KiB/s {2} Errors: {3}".format(100, kibs, "[{0:4.2f}KiB/s]".format(kibs * cratio) if compression else "", self.protocol.errors)) # no one likes transfers finishing at 99.8%
if not self.close():
print("Transfer failed")


@ -9,6 +9,29 @@
# If no language codes are specified then all languages will be checked
#
langname() {
case "$1" in
an ) echo "Aragonese" ;; bg ) echo "Bulgarian" ;;
ca ) echo "Catalan" ;; cz ) echo "Czech" ;;
da ) echo "Danish" ;; de ) echo "German" ;;
el ) echo "Greek" ;; el_CY ) echo "Greek (Cyprus)" ;;
el_gr) echo "Greek (Greece)" ;; en ) echo "English" ;;
es ) echo "Spanish" ;; eu ) echo "Basque-Euskera" ;;
fi ) echo "Finnish" ;; fr ) echo "French" ;;
fr_na) echo "French (no accent)" ;; gl ) echo "Galician" ;;
hr ) echo "Croatian (Hrvatski)" ;; hu ) echo "Hungarian / Magyar" ;;
it ) echo "Italian" ;; jp_kana) echo "Japanese (Kana)" ;;
ko_KR) echo "Korean" ;; nl ) echo "Dutch" ;;
pl ) echo "Polish" ;; pt ) echo "Portuguese" ;;
pt_br) echo "Portuguese (Brazil)" ;; ro ) echo "Romanian" ;;
ru ) echo "Russian" ;; sk ) echo "Slovak" ;;
sv ) echo "Swedish" ;; tr ) echo "Turkish" ;;
uk ) echo "Ukrainian" ;; vi ) echo "Vietnamese" ;;
zh_CN) echo "Simplified Chinese" ;; zh_TW ) echo "Traditional Chinese" ;;
* ) echo "<unknown>" ;;
esac
}
LANGHOME="Marlin/src/lcd/language"
[ -d $LANGHOME ] && cd $LANGHOME
@ -20,7 +43,7 @@ TEST_LANGS=""
if [[ -n $@ ]]; then
for K in "$@"; do
for F in $FILES; do
[[ "$F" != "${F%$K*}" ]] && TEST_LANGS+="$F "
[[ $F == $K ]] && TEST_LANGS+="$F "
done
done
[[ -z $TEST_LANGS ]] && { echo "No languages matching $@." ; exit 0 ; }
@ -28,20 +51,54 @@ else
TEST_LANGS=$FILES
fi
echo "Missing strings for $TEST_LANGS..."
echo "Finding all missing strings for $TEST_LANGS..."
WORD_LINES=() # Complete lines for all words (or, grep out of en at the end instead)
ALL_MISSING=() # All missing languages for each missing word
#NEED_WORDS=() # All missing words across all specified languages
WORD_COUNT=0
# Go through all strings in the English language file
# For each word, query all specified languages for the word
# If the word is missing, add its language to the list
for WORD in $(awk '/LSTR/{print $2}' language_en.h); do
# Skip MSG_MARLIN
[[ $WORD == "MSG_MARLIN" ]] && break
LANG_LIST=""
((WORD_COUNT++))
# Find all selected languages that lack the string
LANG_MISSING=" "
for LANG in $TEST_LANGS; do
if [[ $(grep -c -E "^ *LSTR +$WORD\b" language_${LANG}.h) -eq 0 ]]; then
INHERIT=$(awk '/using namespace/{print $3}' language_${LANG}.h | sed -E 's/Language_([a-zA-Z_]+)\s*;/\1/')
if [[ -z $INHERIT || $INHERIT == "en" ]]; then
LANG_LIST+=" $LANG"
LANG_MISSING+="$LANG "
elif [[ $(grep -c -E "^ *LSTR +$WORD\b" language_${INHERIT}.h) -eq 0 ]]; then
LANG_LIST+=" $LANG"
LANG_MISSING+="$LANG "
fi
fi
done
[[ -n $LANG_LIST ]] && printf "%-38s :%s\n" "$WORD" "$LANG_LIST"
# For each word store all the missing languages
if [[ $LANG_MISSING != " " ]]; then
WORD_LINES+=("$(grep -m 1 -E "$WORD\b" language_en.h)")
ALL_MISSING+=("$LANG_MISSING")
#NEED_WORDS+=($WORD)
fi
done
echo
echo "${#WORD_LINES[@]} out of $WORD_COUNT LCD strings need translation"
for LANG in $TEST_LANGS; do
HED=0 ; IND=0
for WORDLANGS in "${ALL_MISSING[@]}"; do
# If the current word is missing from the current language then print it
if [[ $WORDLANGS =~ " $LANG " ]]; then
[[ $HED == 0 ]] && { echo ; echo "Missing strings for language_$LANG.h ($(langname $LANG)):" ; HED=1 ; }
echo "${WORD_LINES[$IND]}"
fi
((IND++))
done
done

View File

@ -0,0 +1,153 @@
#!/usr/bin/env python3
'''
languageExport.py
Export LCD language strings to CSV files for easier translation.
Use importTranslations.py to import CSV into the language files.
'''
import re
from pathlib import Path
from languageUtil import namebyid
LANGHOME = "Marlin/src/lcd/language"
# Write multiple sheets if true, otherwise write one giant sheet
MULTISHEET = True
OUTDIR = 'out-csv'
# Check for the path to the language files
if not Path(LANGHOME).is_dir():
print("Error: Couldn't find the '%s' directory." % LANGHOME)
print("Edit LANGHOME or cd to the root of the repo before running.")
exit(1)
# A limit just for testing
LIMIT = 0
# A dictionary to contain strings for each language.
# Init with 'en' so English will always be first.
language_strings = { 'en': 0 }
# A dictionary to contain all distinct LCD string names
names = {}
# Get all "language_*.h" files
langfiles = sorted(list(Path(LANGHOME).glob('language_*.h')))
# Read each language file
for langfile in langfiles:
# Get the language code from the filename
langcode = langfile.name.replace('language_', '').replace('.h', '')
# Skip 'test' and any others that we don't want
if langcode in ['test']: continue
# Open the file
f = open(langfile, 'r', encoding='utf-8')
if not f: continue
# Flags to indicate a wide or tall section
wideflag, tallflag = False, False
# A counter for the number of strings in the file
stringcount = 0
# A dictionary to hold all the strings
strings = { 'narrow': {}, 'wide': {}, 'tall': {} }
# Read each line in the file
for line in f:
# Clean up the line for easier parsing
line = line.split("//")[0].strip()
if line.endswith(';'): line = line[:-1].strip()
# Check for wide or tall sections, assume no complicated nesting
if line.startswith("#endif") or line.startswith("#else"):
wideflag, tallflag = False, False
elif re.match(r'#if.*WIDTH\s*>=?\s*2[01].*', line): wideflag = True
elif re.match(r'#if.*LCD_HEIGHT\s*>=?\s*4.*', line): tallflag = True
# For string-defining lines capture the string data
match = re.match(r'LSTR\s+([A-Z0-9_]+)\s*=\s*(.+)\s*', line)
if match:
# Name and quote-sanitized value
name, value = match.group(1), match.group(2).replace('\\"', '$$$')
# Remove all _UxGT wrappers from the value in a non-greedy way
value = re.sub(r'_UxGT\((".*?")\)', r'\1', value)
# Multi-line strings get one or more bars | for identification
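# e.g., _UxGT(MSG_2_LINE("A", "B")) is stored as |A|B after the steps below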
multiline = 0
multimatch = re.match(r'.*MSG_(\d)_LINE\s*\(\s*(.+?)\s*\).*', value)
if multimatch:
multiline = int(multimatch.group(1))
value = '|' + re.sub(r'"\s*,\s*"', '|', multimatch.group(2))
# Wrap inline defines in parentheses
value = re.sub(r' *([A-Z0-9]+_[A-Z0-9_]+) *', r'(\1)', value)
# Remove quotes around strings
value = re.sub(r'"(.*?)"', r'\1', value).replace('$$$', '""')
# Store all unique names as dictionary keys
names[name] = 1
# Store the string as narrow or wide
strings['tall' if tallflag else 'wide' if wideflag else 'narrow'][name] = value
# Increment the string counter
stringcount += 1
# Break for testing
if LIMIT and stringcount >= LIMIT: break
# Close the file
f.close()
# Store the array in the dict
language_strings[langcode] = strings
# Get the language codes from the dictionary
langcodes = list(language_strings.keys())
# Print the array
#print(language_strings)
# Report the total number of unique strings
print("Found %s distinct LCD strings." % len(names))
# Write a single language entry to the CSV file with narrow, wide, and tall strings
def write_csv_lang(f, strings, name):
f.write(',')
if name in strings['narrow']: f.write('"%s"' % strings['narrow'][name])
f.write(',')
if name in strings['wide']: f.write('"%s"' % strings['wide'][name])
f.write(',')
if name in strings['tall']: f.write('"%s"' % strings['tall'][name])
if MULTISHEET:
#
# Export a separate sheet for each language
#
Path.mkdir(Path(OUTDIR), exist_ok=True)
for lang in langcodes:
with open("%s/language_%s.csv" % (OUTDIR, lang), 'w', encoding='utf-8') as f:
lname = lang + ' ' + namebyid(lang)
header = ['name', lname, lname + ' (wide)', lname + ' (tall)']
f.write('"' + '","'.join(header) + '"\n')
for name in names.keys():
f.write('"' + name + '"')
write_csv_lang(f, language_strings[lang], name)
f.write('\n')
else:
#
# Export one large sheet containing all languages
#
with open("languages.csv", 'w', encoding='utf-8') as f:
header = ['name']
for lang in langcodes:
lname = lang + ' ' + namebyid(lang)
header += [lname, lname + ' (wide)', lname + ' (tall)']
f.write('"' + '","'.join(header) + '"\n')
for name in names.keys():
f.write('"' + name + '"')
for lang in langcodes: write_csv_lang(f, language_strings[lang], name)
f.write('\n')

View File

@ -0,0 +1,219 @@
#!/usr/bin/env python3
"""
languageImport.py
Import LCD language strings from a CSV file or Google Sheets
and write Marlin LCD language files based on the data.
Use languageExport.py to export CSV from the language files.
Google Sheets Link:
https://docs.google.com/spreadsheets/d/12yiy-kS84ajKFm7oQIrC4CF8ZWeu9pAR4zrgxH4ruk4/edit#gid=84528699
TODO: Use the defines and comments above the namespace from existing language files.
Get the 'constexpr uint8_t CHARSIZE' from existing language files.
Get the correct 'using namespace' for languages that don't inherit from English.
"""
import sys, re, requests, csv, datetime
from languageUtil import namebyid
LANGHOME = "Marlin/src/lcd/language"
OUTDIR = 'out-language'
# Get the file path from the command line
FILEPATH = sys.argv[1] if len(sys.argv) > 1 else None
download = FILEPATH == 'download'
if not FILEPATH or download:
SHEETID = "12yiy-kS84ajKFm7oQIrC4CF8ZWeu9pAR4zrgxH4ruk4"
FILEPATH = 'https://docs.google.com/spreadsheet/ccc?key=%s&output=csv' % SHEETID
if FILEPATH.startswith('http'):
response = requests.get(FILEPATH)
assert response.status_code == 200, 'GET failed for %s' % FILEPATH
csvdata = response.content.decode('utf-8')
else:
if not FILEPATH.endswith('.csv'): FILEPATH += '.csv'
with open(FILEPATH, 'r', encoding='utf-8') as f: csvdata = f.read()
if not csvdata:
print("Error: couldn't read CSV data from %s" % FILEPATH)
exit(1)
if download:
DLNAME = sys.argv[2] if len(sys.argv) > 2 else 'languages.csv'
if not DLNAME.endswith('.csv'): DLNAME += '.csv'
with open(DLNAME, 'w', encoding='utf-8') as f: f.write(csvdata)
print("Downloaded %s from %s" % (DLNAME, FILEPATH))
exit(0)
lines = csvdata.splitlines()
#print(lines) # Uncomment for debugging
reader = csv.reader(lines, delimiter=',')
gothead = False
columns = ['']
numcols = 0
strings_per_lang = {}
for row in reader:
if not gothead:
gothead = True
numcols = len(row)
if row[0] != 'name':
print('Error: first column should be "name"')
exit(1)
# The rest of the columns are language codes and names
for i in range(1, numcols):
elms = row[i].split(' ')
lang = elms[0]
style = ('Wide' if elms[-1] == '(wide)' else 'Tall' if elms[-1] == '(tall)' else 'Narrow')
columns.append({ 'lang': lang, 'style': style })
if not lang in strings_per_lang: strings_per_lang[lang] = {}
if not style in strings_per_lang[lang]: strings_per_lang[lang][style] = {}
continue
# Add the named string for all the included languages
name = row[0]
for i in range(1, numcols):
str = row[i]
if str:
col = columns[i]
strings_per_lang[col['lang']][col['style']][name] = str
# Create a folder for the imported language outfiles
from pathlib import Path
Path.mkdir(Path(OUTDIR), exist_ok=True)
FILEHEADER = '''
/**
* Marlin 3D Printer Firmware
* Copyright (c) 2023 MarlinFirmware [https://github.com/MarlinFirmware/Marlin]
*
* Based on Sprinter and grbl.
* Copyright (c) 2011 Camiel Gubbels / Erik van der Zalm
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <https://www.gnu.org/licenses/>.
*
*/
#pragma once
/**
* %s
*
* LCD Menu Messages
* See also https://marlinfw.org/docs/development/lcd_language.html
*
* Substitutions are applied for the following characters when used in menu items titles:
*
* $ displays an inserted string
* { displays '0'....'10' for indexes 0 - 10
* ~ displays '1'....'11' for indexes 0 - 10
* * displays 'E1'...'E11' for indexes 0 - 10 (By default. Uses LCD_FIRST_TOOL)
* @ displays an axis name such as XYZUVW, or E for an extruder
*/
'''
# Iterate over the languages which correspond to the columns
# The columns are assumed to be grouped by language in the order Narrow, Wide, Tall
# TODO: Go through lang only, then impose the order Narrow, Wide, Tall.
# So if something is missing or out of order everything still gets built correctly.
f = None
gotlang = {}
for i in range(1, numcols):
#if i > 6: break # Testing
col = columns[i]
lang, style = col['lang'], col['style']
# If we haven't already opened a file for this language, do so now
if not lang in gotlang:
gotlang[lang] = {}
if f: f.close()
fn = "%s/language_%s.h" % (OUTDIR, lang)
f = open(fn, 'w', encoding='utf-8')
if not f:
print("Failed to open %s." % fn)
exit(1)
# Write the opening header for the new language file
#f.write(FILEHEADER % namebyid(lang))
f.write('/**\n * Imported from %s on %s at %s\n */\n' % (FILEPATH, datetime.date.today(), datetime.datetime.now().strftime("%H:%M:%S")))
# Start a namespace for the language and style
f.write('\nnamespace Language%s_%s {\n' % (style, lang))
# Wide and tall namespaces inherit from the others
if style == 'Wide':
f.write(' using namespace LanguageNarrow_%s;\n' % lang)
f.write(' #if LCD_WIDTH >= 20 || HAS_DWIN_E3V2\n')
elif style == 'Tall':
f.write(' using namespace LanguageWide_%s;\n' % lang)
f.write(' #if LCD_HEIGHT >= 4\n')
elif lang != 'en':
f.write(' using namespace Language_en; // Inherit undefined strings from English\n')
# Formatting for the lines
indent = ' ' if style == 'Narrow' else ' '
width = 34 if style == 'Narrow' else 32
lstr_fmt = '%sLSTR %%-%ds = %%s;%%s\n' % (indent, width)
# Emit all the strings for this language and style
for name in strings_per_lang[lang][style].keys():
# Get the raw string value
val = strings_per_lang[lang][style][name]
# Count the number of bars
if val.startswith('|'):
bars = val.count('|')
val = val[1:]
else:
bars = 0
# Escape backslashes, substitute quotes, and wrap in _UxGT("...")
val = '_UxGT("%s")' % val.replace('\\', '\\\\').replace('"', '$$$')
# Move named references outside of the macro
val = re.sub(r'\(([A-Z0-9]+_[A-Z0-9_]+)\)', r'") \1 _UxGT("', val)
# Remove all empty _UxGT("") that result from the above
val = re.sub(r'\s*_UxGT\(""\)\s*', '', val)
# No wrapper needed for just spaces
val = re.sub(r'_UxGT\((" +")\)', r'\1', val)
# Multi-line strings start with a bar...
if bars:
# Wrap the string in MSG_#_LINE(...) and split on bars
val = re.sub(r'^_UxGT\((.+)\)', r'_UxGT(MSG_%s_LINE(\1))' % bars, val)
val = val.replace('|', '", "')
# Restore quotes inside the string
val = val.replace('$$$', '\\"')
# Add a comment with the English string for reference
comm = ''
if lang != 'en' and 'en' in strings_per_lang:
en = strings_per_lang['en']
str = en[style][name] if name in en[style] else en['Narrow'][name] if name in en['Narrow'] else ''
if str:
cfmt = '%%%ss// %%s' % (50 - len(val) if len(val) < 50 else 1)
comm = cfmt % (' ', str)
# Write out the string definition
f.write(lstr_fmt % (name, val, comm))
if style == 'Wide' or style == 'Tall': f.write(' #endif\n')
f.write('}\n') # End namespace
# Assume the 'Tall' namespace comes last
if style == 'Tall': f.write('\nnamespace Language_%s {\n using namespace LanguageTall_%s;\n}\n' % (lang, lang))
# Close the last-opened output file
if f: f.close()

View File

@ -0,0 +1,41 @@
#!/usr/bin/env python3
#
# marlang.py
#
# A dictionary to contain language names
LANGNAME = {
'an': "Aragonese",
'bg': "Bulgarian",
'ca': "Catalan",
'cz': "Czech",
'da': "Danish",
'de': "German",
'el': "Greek", 'el_CY': "Greek (Cyprus)", 'el_gr': "Greek (Greece)",
'en': "English",
'es': "Spanish",
'eu': "Basque-Euskera",
'fi': "Finnish",
'fr': "French", 'fr_na': "French (no accent)",
'gl': "Galician",
'hr': "Croatian (Hrvatski)",
'hu': "Hungarian / Magyar",
'it': "Italian",
'jp_kana': "Japanese (Kana)",
'ko_KR': "Korean",
'nl': "Dutch",
'pl': "Polish",
'pt': "Portuguese", 'pt_br': "Portuguese (Brazil)",
'ro': "Romanian",
'ru': "Russian",
'sk': "Slovak",
'sv': "Swedish",
'tr': "Turkish",
'uk': "Ukrainian",
'vi': "Vietnamese",
'zh_CN': "Simplified Chinese", 'zh_TW': "Traditional Chinese"
}
def namebyid(id):
if id in LANGNAME: return LANGNAME[id]
return '<unknown>'

View File

@ -10,6 +10,11 @@
const fs = require("fs");
var do_log = false
function logmsg(msg, line='') {
if (do_log) console.log(msg, line);
}
// String lpad / rpad
String.prototype.lpad = function(len, chr) {
if (!len) return this;
@ -27,8 +32,17 @@ String.prototype.rpad = function(len, chr) {
return s;
};
// Concatenate a string, adding a space if necessary
// to avoid merging two words
String.prototype.concat_with_space = function(str) {
const c = this.substr(-1), d = str.charAt(0);
if (c !== ' ' && c !== '' && d !== ' ' && d !== '')
str = ' ' + str;
return this + str;
};
const mpatt = [ '-?\\d{1,3}', 'P[A-I]\\d+', 'P\\d_\\d+', 'Pin[A-Z]\\d\\b' ],
definePatt = new RegExp(`^\\s*(//)?#define\\s+[A-Z_][A-Z0-9_]+\\s+(${mpatt[0]}|${mpatt[1]}|${mpatt[2]}|${mpatt[3]})\\s*(//.*)?$`, 'gm'),
definePatt = new RegExp(`^\\s*(//)?#define\\s+[A-Z_][A-Z0-9_]+\\s+(${mpatt.join('|')})\\s*(//.*)?$`, 'gm'),
ppad = [ 3, 4, 5, 5 ],
col_comment = 50,
col_value_rj = col_comment - 3;
@ -38,11 +52,11 @@ for (let m of mpatt) mexpr.push(new RegExp('^' + m + '$'));
const argv = process.argv.slice(2), argc = argv.length;
var src_file = 0, src_name = 'STDIN', dst_file, do_log = false;
var src_file = 0, dst_file;
if (argc > 0) {
let ind = 0;
if (argv[0] == '-v') { do_log = true; ind++; }
dst_file = src_file = src_name = argv[ind++];
dst_file = src_file = argv[ind++];
if (ind < argc) dst_file = argv[ind];
}
@ -56,6 +70,7 @@ else
// Find the pin pattern so non-pin defines can be skipped
function get_pin_pattern(txt) {
var r, m = 0, match_count = [ 0, 0, 0, 0 ];
var max_match_count = 0, max_match_index = -1;
definePatt.lastIndex = 0;
while ((r = definePatt.exec(txt)) !== null) {
let ind = -1;
@ -65,12 +80,15 @@ function get_pin_pattern(txt) {
return r[2].match(p);
}) ) {
const m = ++match_count[ind];
if (m >= 5) {
return { match: mpatt[ind], pad:ppad[ind] };
if (m > max_match_count) {
max_match_count = m;
max_match_index = ind;
}
}
}
return null;
if (max_match_index === -1) return null;
return { match:mpatt[max_match_index], pad:ppad[max_match_index] };
}
function process_text(txt) {
@ -79,13 +97,14 @@ function process_text(txt) {
if (!patt) return txt;
const pindefPatt = new RegExp(`^(\\s*(//)?#define)\\s+([A-Z_][A-Z0-9_]+)\\s+(${patt.match})\\s*(//.*)?$`),
noPinPatt = new RegExp(`^(\\s*(//)?#define)\\s+([A-Z_][A-Z0-9_]+)\\s+(-1)\\s*(//.*)?$`),
skipPatt1 = new RegExp('^(\\s*(//)?#define)\\s+(AT90USB|USBCON|(BOARD|DAC|FLASH|HAS|IS|USE)_.+|.+_(ADDRESS|AVAILABLE|BAUDRATE|CLOCK|CONNECTION|DEFAULT|FREQ|ITEM|MODULE|NAME|ONLY|PERIOD|RANGE|RATE|SERIAL|SIZE|SPI|STATE|STEP|TIMER))\\s+(.+)\\s*(//.*)?$'),
skipPatt1 = new RegExp('^(\\s*(//)?#define)\\s+(AT90USB|USBCON|(BOARD|DAC|FLASH|HAS|IS|USE)_.+|.+_(ADDRESS|AVAILABLE|BAUDRATE|CLOCK|CONNECTION|DEFAULT|ERROR|EXTRUDERS|FREQ|ITEM|MKS_BASE_VERSION|MODULE|NAME|ONLY|ORIENTATION|PERIOD|RANGE|RATE|READ_RETRIES|SERIAL|SIZE|SPI|STATE|STEP|TIMER|VERSION))\\s+(.+)\\s*(//.*)?$'),
skipPatt2 = new RegExp('^(\\s*(//)?#define)\\s+([A-Z_][A-Z0-9_]+)\\s+(0x[0-9A-Fa-f]+|\\d+|.+[a-z].+)\\s*(//.*)?$'),
skipPatt3 = /^\s*#e(lse|ndif)\b.*$/,
aliasPatt = new RegExp('^(\\s*(//)?#define)\\s+([A-Z_][A-Z0-9_]+)\\s+([A-Z_][A-Z0-9_()]+)\\s*(//.*)?$'),
switchPatt = new RegExp('^(\\s*(//)?#define)\\s+([A-Z_][A-Z0-9_]+)\\s*(//.*)?$'),
undefPatt = new RegExp('^(\\s*(//)?#undef)\\s+([A-Z_][A-Z0-9_]+)\\s*(//.*)?$'),
defPatt = new RegExp('^(\\s*(//)?#define)\\s+([A-Z_][A-Z0-9_]+)\\s+([-_\\w]+)\\s*(//.*)?$'),
condPatt = new RegExp('^(\\s*(//)?#(if|ifn?def|else|elif)(\\s+\\S+)*)\\s+(//.*)$'),
condPatt = new RegExp('^(\\s*(//)?#(if|ifn?def|elif)(\\s+\\S+)*)\\s+(//.*)$'),
commPatt = new RegExp('^\\s{20,}(//.*)?$');
const col_value_lj = col_comment - patt.pad - 2;
var r, out = '', check_comment_next = false;
@ -101,74 +120,75 @@ function process_text(txt) {
//
// #define SKIP_ME
//
if (do_log) console.log("skip:", line);
logmsg("skip:", line);
}
else if ((r = pindefPatt.exec(line)) !== null) {
//
// #define MY_PIN [pin]
//
if (do_log) console.log("pin:", line);
logmsg("pin:", line);
const pinnum = r[4].charAt(0) == 'P' ? r[4] : r[4].lpad(patt.pad);
line = r[1] + ' ' + r[3];
line = line.rpad(col_value_lj) + pinnum;
if (r[5]) line = line.rpad(col_comment) + r[5];
line = line.rpad(col_value_lj).concat_with_space(pinnum);
if (r[5]) line = line.rpad(col_comment).concat_with_space(r[5]);
}
else if ((r = noPinPatt.exec(line)) !== null) {
//
// #define MY_PIN -1
//
if (do_log) console.log("pin -1:", line);
logmsg("pin -1:", line);
line = r[1] + ' ' + r[3];
line = line.rpad(col_value_lj) + '-1';
if (r[5]) line = line.rpad(col_comment) + r[5];
line = line.rpad(col_value_lj).concat_with_space('-1');
if (r[5]) line = line.rpad(col_comment).concat_with_space(r[5]);
}
else if (skipPatt2.exec(line) !== null) {
else if (skipPatt2.exec(line) !== null || skipPatt3.exec(line) !== null) {
//
// #define SKIP_ME
// #else, #endif
//
if (do_log) console.log("skip:", line);
logmsg("skip:", line);
}
else if ((r = aliasPatt.exec(line)) !== null) {
//
// #define ALIAS OTHER
//
if (do_log) console.log("alias:", line);
logmsg("alias:", line);
line = r[1] + ' ' + r[3];
line += r[4].lpad(col_value_rj + 1 - line.length);
if (r[5]) line = line.rpad(col_comment) + r[5];
line = line.concat_with_space(r[4].lpad(col_value_rj + 1 - line.length));
if (r[5]) line = line.rpad(col_comment).concat_with_space(r[5]);
}
else if ((r = switchPatt.exec(line)) !== null) {
//
// #define SWITCH
//
if (do_log) console.log("switch:", line);
logmsg("switch:", line);
line = r[1] + ' ' + r[3];
if (r[4]) line = line.rpad(col_comment) + r[4];
if (r[4]) line = line.rpad(col_comment).concat_with_space(r[4]);
check_comment_next = true;
}
else if ((r = defPatt.exec(line)) !== null) {
//
// #define ...
//
if (do_log) console.log("def:", line);
logmsg("def:", line);
line = r[1] + ' ' + r[3] + ' ';
line += r[4].lpad(col_value_rj + 1 - line.length);
line = line.concat_with_space(r[4].lpad(col_value_rj + 1 - line.length));
if (r[5]) line = line.rpad(col_comment - 1) + ' ' + r[5];
}
else if ((r = undefPatt.exec(line)) !== null) {
//
// #undef ...
//
if (do_log) console.log("undef:", line);
logmsg("undef:", line);
line = r[1] + ' ' + r[3];
if (r[4]) line = line.rpad(col_comment) + r[4];
if (r[4]) line = line.rpad(col_comment).concat_with_space(r[4]);
}
else if ((r = condPatt.exec(line)) !== null) {
//
// #if ...
// #if, #ifdef, #ifndef, #elif ...
//
if (do_log) console.log("cond:", line);
line = r[1].rpad(col_comment) + r[5];
logmsg("cond:", line);
line = r[1].rpad(col_comment).concat_with_space(r[5]);
check_comment_next = true;
}
out += line + '\n';

View File

@ -0,0 +1,272 @@
#!/usr/bin/env python3
#
# Formatter script for pins_MYPINS.h files
#
# Usage: pinsformat.py [infile] [outfile]
#
# With no parameters convert STDIN to STDOUT
#
import sys, re
do_log = False
def logmsg(msg, line):
if do_log: print(msg, line)
col_comment = 50
# String lpad / rpad
def lpad(astr, fill, c=' '):
if not fill: return astr
need = fill - len(astr)
return astr if need <= 0 else (need * c) + astr
def rpad(astr, fill, c=' '):
if not fill: return astr
need = fill - len(astr)
return astr if need <= 0 else astr + (need * c)
# Pin patterns
mpatt = [ r'-?\d{1,3}', r'P[A-I]\d+', r'P\d_\d+', r'Pin[A-Z]\d\b' ]
mstr = '|'.join(mpatt)
mexpr = [ re.compile(f'^{m}$') for m in mpatt ]
# Corresponding padding for each pattern
ppad = [ 3, 4, 5, 5 ]
# Match a define line
definePatt = re.compile(rf'^\s*(//)?#define\s+[A-Z_][A-Z0-9_]+\s+({mstr})\s*(//.*)?$')
def format_pins(argv):
global do_log # Let '-v' enable logging in logmsg()
src_file = 'stdin'
dst_file = None
scnt = 0
for arg in argv:
if arg == '-v':
do_log = True
elif scnt == 0:
# Get a source file if specified. Default destination is the same file
src_file = dst_file = arg
scnt += 1
elif scnt == 1:
# Get destination file if specified
dst_file = arg
scnt += 1
# No text to process yet
file_text = ''
if src_file == 'stdin':
# If no source file specified read from STDIN
file_text = sys.stdin.read()
else:
# Open and read the file src_file
with open(src_file, 'r') as rf: file_text = rf.read()
if len(file_text) == 0:
print('No text to process')
return
# Read from file or STDIN until it terminates
filtered = process_text(file_text)
if dst_file:
with open(dst_file, 'w') as wf: wf.write(filtered)
else:
print(filtered)
# Find the pin pattern so non-pin defines can be skipped
def get_pin_pattern(txt):
r = ''
m = 0
match_count = [ 0, 0, 0, 0 ]
# Find the most common matching pattern
match_threshold = 5
for line in txt.split('\n'):
r = definePatt.match(line)
if r == None: continue
ind = -1
for p in mexpr:
ind += 1
if not p.match(r[2]): continue
match_count[ind] += 1
if match_count[ind] >= match_threshold:
return { 'match': mpatt[ind], 'pad':ppad[ind] }
return None
def process_text(txt):
if len(txt) == 0: return '(no text)'
patt = get_pin_pattern(txt)
if patt == None: return txt
pmatch = patt['match']
pindefPatt = re.compile(rf'^(\s*(//)?#define)\s+([A-Z_][A-Z0-9_]+)\s+({pmatch})\s*(//.*)?$')
noPinPatt = re.compile(r'^(\s*(//)?#define)\s+([A-Z_][A-Z0-9_]+)\s+(-1)\s*(//.*)?$')
skipPatt1 = re.compile(r'^(\s*(//)?#define)\s+(AT90USB|USBCON|(BOARD|DAC|FLASH|HAS|IS|USE)_.+|.+_(ADDRESS|AVAILABLE|BAUDRATE|CLOCK|CONNECTION|DEFAULT|ERROR|EXTRUDERS|FREQ|ITEM|MKS_BASE_VERSION|MODULE|NAME|ONLY|ORIENTATION|PERIOD|RANGE|RATE|READ_RETRIES|SERIAL|SIZE|SPI|STATE|STEP|TIMER|VERSION))\s+(.+)\s*(//.*)?$')
skipPatt2 = re.compile(r'^(\s*(//)?#define)\s+([A-Z_][A-Z0-9_]+)\s+(0x[0-9A-Fa-f]+|\d+|.+[a-z].+)\s*(//.*)?$')
skipPatt3 = re.compile(r'^\s*#e(lse|ndif)\b.*$')
aliasPatt = re.compile(r'^(\s*(//)?#define)\s+([A-Z_][A-Z0-9_]+)\s+([A-Z_][A-Z0-9_()]+)\s*(//.*)?$')
switchPatt = re.compile(r'^(\s*(//)?#define)\s+([A-Z_][A-Z0-9_]+)\s*(//.*)?$')
undefPatt = re.compile(r'^(\s*(//)?#undef)\s+([A-Z_][A-Z0-9_]+)\s*(//.*)?$')
defPatt = re.compile(r'^(\s*(//)?#define)\s+([A-Z_][A-Z0-9_]+)\s+([-_\w]+)\s*(//.*)?$')
condPatt = re.compile(r'^(\s*(//)?#(if|ifn?def|elif)(\s+\S+)*)\s+(//.*)$')
commPatt = re.compile(r'^\s{20,}(//.*)?$')
col_value_lj = col_comment - patt['pad'] - 2
col_value_rj = col_comment - 3
#
# #define SKIP_ME
#
def trySkip1(d):
if skipPatt1.match(d['line']) == None: return False
logmsg("skip:", d['line'])
return True
#
# #define MY_PIN [pin]
#
def tryPindef(d):
line = d['line']
r = pindefPatt.match(line)
if r == None: return False
logmsg("pin:", line)
pinnum = r[4] if r[4][0] == 'P' else lpad(r[4], patt['pad'])
line = f'{r[1]} {r[3]}'
line = rpad(line, col_value_lj) + pinnum
if r[5]: line = rpad(line, col_comment) + r[5]
d['line'] = line
return True
#
# #define MY_PIN -1
#
def tryNoPin(d):
line = d['line']
r = noPinPatt.match(line)
if r == None: return False
logmsg("pin -1:", line)
line = f'{r[1]} {r[3]}'
line = rpad(line, col_value_lj) + '-1'
if r[5]: line = rpad(line, col_comment) + r[5]
d['line'] = line
return True
#
# #define SKIP_ME_TOO
#
def trySkip2(d):
if skipPatt2.match( d['line']) == None: return False
logmsg("skip:", d['line'])
return True
#
# #else|endif
#
def trySkip3(d):
if skipPatt3.match( d['line']) == None: return False
logmsg("skip:", d['line'])
return True
#
# #define ALIAS OTHER
#
def tryAlias(d):
line = d['line']
r = aliasPatt.match(line)
if r == None: return False
logmsg("alias:", line)
line = f'{r[1]} {r[3]}'
line += lpad(r[4], col_value_rj + 1 - len(line))
if r[5]: line = rpad(line, col_comment) + r[5]
d['line'] = line
return True
#
# #define SWITCH
#
def trySwitch(d):
line = d['line']
r = switchPatt.match(line)
if r == None: return False
logmsg("switch:", line)
line = f'{r[1]} {r[3]}'
if r[4]: line = rpad(line, col_comment) + r[4]
d['line'] = line
d['check_comment_next'] = True
return True
#
# #define ...
#
def tryDef(d):
line = d['line']
r = defPatt.match(line)
if r == None: return False
logmsg("def:", line)
line = f'{r[1]} {r[3]} '
line += lpad(r[4], col_value_rj + 1 - len(line))
if r[5]: line = rpad(line, col_comment - 1) + ' ' + r[5]
d['line'] = line
return True
#
# #undef ...
#
def tryUndef(d):
line = d['line']
r = undefPatt.match(line)
if r == None: return False
logmsg("undef:", line)
line = f'{r[1]} {r[3]}'
if r[4]: line = rpad(line, col_comment) + r[4]
d['line'] = line
return True
#
# #if|ifdef|ifndef|elif ...
#
def tryCond(d):
line = d['line']
r = condPatt.match(line)
if r == None: return False
logmsg("cond:", line)
line = rpad(r[1], col_comment) + r[5]
d['line'] = line
d['check_comment_next'] = True
return True
out = ''
wDict = { 'check_comment_next': False }
# Transform each line and add it to the output
for line in txt.split('\n'):
wDict['line'] = line
if wDict['check_comment_next']:
r = commPatt.match(line)
wDict['check_comment_next'] = (r != None)
if wDict['check_comment_next']:
# Comments in column 50
line = rpad('', col_comment) + r[1]
elif trySkip1(wDict): pass #define SKIP_ME
elif tryPindef(wDict): pass #define MY_PIN [pin]
elif tryNoPin(wDict): pass #define MY_PIN -1
elif trySkip2(wDict): pass #define SKIP_ME_TOO
elif trySkip3(wDict): pass #else|endif
elif tryAlias(wDict): pass #define ALIAS OTHER
elif trySwitch(wDict): pass #define SWITCH
elif tryDef(wDict): pass #define ...
elif tryUndef(wDict): pass #undef ...
elif tryCond(wDict): pass #if|ifdef|ifndef|elif ...
out += wDict['line'] + '\n'
return re.sub('\n\n$', '\n', re.sub(r'\n\n+', '\n\n', out))
# Python standard startup for command line with arguments
if __name__ == '__main__':
format_pins(sys.argv[1:])

View File

@ -0,0 +1,142 @@
#!/usr/bin/env python3
#
# Utility to compress Marlin RGB565 TFT data to RLE16 format.
# Reads the existing Marlin RGB565 cpp file and generates a new file with the additional RLE16 data.
#
# Usage: rle16_compress_cpp_image_data.py INPUT_FILE.cpp OUTPUT_FILE.cpp
#
import sys,struct
import re
def addCompressedData(input_file, output_file):
ofile = open(output_file, 'wt')
c_data_section = False
c_skip_data = False
c_footer = False
raw_data = []
rle_value = []
rle_count = []
arrname = ''
line = input_file.readline()
while line:
if not c_footer:
if not c_skip_data: ofile.write(line)
if "};" in line:
c_skip_data = False
c_data_section = False
c_footer = True
if c_data_section:
cleaned = re.sub(r"\s|,|\n", "", line)
as_list = cleaned.split("0x")
as_list.pop(0)
raw_data += [int(x, 16) for x in as_list]
if "const uint" in line:
# e.g.: const uint16_t marlin_logo_480x320x16[153600] = {
if "_rle16" in line:
c_skip_data = True
else:
c_data_section = True
arrname = line.split('[')[0].split(' ')[-1]
print("Found data array", arrname)
line = input_file.readline()
input_file.close()
#
# RLE16 (run length 16) encoding
# Convert data from raw RGB565 to a simple run-length-encoded format for each word of data.
# - Each sequence begins with a count byte N.
# - If the high bit of N is set, the run contains (N & 0x7F) + 1 unique words.
# - Otherwise it repeats the following word N + 1 times.
# - Each RGB565 word is stored in MSB / LSB order.
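# A worked example (illustrative values, not taken from real image data):
#   raw words [A, A, A, B, C] encode to [0x02, A, 0x81, B, C]
#   (0x02 = repeat the next word 3 times; 0x81 = 2 distinct words follow)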
#
def rle_encode(data):
warn = "This may take a while" if len(data) > 300000 else ""
print("Compressing image data...", warn)
rledata = []
distinct = []
i = 0
while i < len(data):
v = data[i]
i += 1
rsize = 1
for j in range(i, len(data)):
if v != data[j]: break
i += 1
rsize += 1
if rsize >= 128: break
# If the run is one, add to the distinct values
if rsize == 1: distinct.append(v)
# If distinct length >= 127, or the repeat run is 2 or more,
# store the distinct run.
nr = len(distinct)
if nr and (nr >= 128 or rsize > 1 or i >= len(data)):
rledata += [(nr - 1) | 0x80] + distinct
distinct = []
# If the repeat run is 2 or more, store the repeat run.
if rsize > 1: rledata += [rsize - 1, v]
return rledata
def append_byte(data, byte, cols=240):
if data == '': data = ' '
data += ('0x{0:02X}, '.format(byte)) # 6 characters
if len(data) % (cols * 6 + 2) == 0: data = data.rstrip() + "\n "
return data
def rle_emit(ofile, arrname, rledata, rawsize):
col = 0
i = 0
outstr = ''
size = 0
while i < len(rledata):
rval = rledata[i]
i += 1
if rval & 0x80:
count = (rval & 0x7F) + 1
outstr = append_byte(outstr, rval)
size += 1
for j in range(count):
outstr = append_byte(outstr, rledata[i + j] >> 8)
outstr = append_byte(outstr, rledata[i + j] & 0xFF)
size += 2
i += count
else:
outstr = append_byte(outstr, rval)
outstr = append_byte(outstr, rledata[i] >> 8)
outstr = append_byte(outstr, rledata[i] & 0xFF)
i += 1
size += 3
outstr = outstr.rstrip()[:-1]
ofile.write("\n// Saves %i bytes\nconst uint8_t %s_rle16[%d] = {\n%s\n};\n" % (rawsize - size, arrname, size, outstr))
(w, h, d) = arrname.split("_")[-1].split('x')
ofile.write("\nconst tImage MarlinLogo{0}x{1}x16 = MARLIN_LOGO_CHOSEN({0}, {1});\n".format(w, h))
ofile.write("\n#endif // HAS_GRAPHICAL_TFT && SHOW_BOOTSCREEN\n".format(w, h))
# Encode the data, write it out, close the file
rledata = rle_encode(raw_data)
rle_emit(ofile, arrname, rledata, len(raw_data) * 2)
ofile.close()
if len(sys.argv) <= 2:
print("Utility to compress Marlin RGB565 TFT data to RLE16 format.")
print("Reads a Marlin RGB565 cpp file and generates a new file with the additional RLE16 data.")
print("Usage: rle16_compress_cpp_image_data.py INPUT_FILE.cpp OUTPUT_FILE.cpp")
exit(1)
output_cpp = sys.argv[2]
inname = sys.argv[1].replace('//', '/')
input_cpp = open(inname)
print("Processing", inname, "...")
addCompressedData(input_cpp, output_cpp)

View File

@ -0,0 +1,200 @@
#!/usr/bin/env python3
#
# Bitwise RLE compress a Marlin mono DOGM bitmap.
# Input: An existing Marlin mono DOGM bitmap .cpp or .h file.
# Output: A new file with the original and compressed data.
#
# Usage: rle_compress_bitmap.py INPUT_FILE OUTPUT_FILE
#
import sys,struct
import re
def addCompressedData(input_file, output_file):
ofile = open(output_file, 'wt')
datatype = "uint8_t"
bytewidth = 16
raw_data = []
arrname = ''
c_data_section = False ; c_skip_data = False ; c_footer = False
while True:
line = input_file.readline()
if not line: break
if not c_footer:
if not c_skip_data: ofile.write(line)
mat = re.match(r'.+CUSTOM_BOOTSCREEN_BMPWIDTH\s+(\d+)', line)
if mat: bytewidth = (int(mat[1]) + 7) // 8
if "};" in line:
c_skip_data = False
c_data_section = False
c_footer = True
if c_data_section:
cleaned = re.sub(r"\s|,|\n", "", line)
mat = re.match(r'(0b|B)[01]{8}', cleaned)
if mat:
as_list = cleaned.split(mat[1])
as_list.pop(0)
raw_data += [int(x, 2) for x in as_list]
else:
as_list = cleaned.split("0x")
as_list.pop(0)
raw_data += [int(x, 16) for x in as_list]
mat = re.match(r'const (uint\d+_t|unsigned char)', line)
if mat:
# e.g.: const unsigned char custom_start_bmp[] PROGMEM = {
datatype = mat[0]
if "_rle" in line:
c_skip_data = True
else:
c_data_section = True
arrname = line.split('[')[0].split(' ')[-1]
print("Found data array", arrname)
input_file.close()
#print("\nRaw Bitmap Data", raw_data)
#
# Bitwise RLE (run length) encoding
# Convert data from raw mono bitmap to a bitwise run-length-encoded format.
# - The first nybble is the starting bit state. Changing this nybble inverts the bitmap.
# - The following bytes provide the runs for alternating on/off bits.
# - A value of 0-14 encodes a run of 1-15.
# - A value of 15 indicates a run of 16-271 calculated using the next two values.
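# A worked example (illustrative, not from real bitmap data):
#   byte 0b11100001 -> nybbles [1, 2, 3, 0] -> packed bytes [0x12, 0x30]
#   (start state 1, then runs of 3 ones, 4 zeros, 1 one, each stored as run - 1)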
#
def bitwise_rle_encode(data):
def get_bit(data, n): return 1 if (data[n // 8] & (0x80 >> (n & 7))) else 0
def try_encode(data, isext):
bitslen = len(data) * 8
bitstate = get_bit(data, 0)
rledata = [ bitstate ]
bigrun = 256 if isext else 272
medrun = False
i = 0
runlen = -1
while i <= bitslen:
if i < bitslen: b = get_bit(data, i)
runlen += 1
if bitstate != b or i == bitslen:
if runlen >= bigrun:
isext = True
if medrun: return [], isext
rem = runlen & 0xFF
rledata += [ 15, 15, rem // 16, rem % 16 ]
elif runlen >= 16:
rledata += [ 15, runlen // 16 - 1, runlen % 16 ]
if runlen >= 256: medrun = True
else:
rledata += [ runlen - 1 ]
bitstate ^= 1
runlen = 0
i += 1
#print("\nrledata", rledata)
encoded = []
ri = 0
rlen = len(rledata)
while ri < rlen:
v = rledata[ri] << 4
if (ri < rlen - 1): v |= rledata[ri + 1]
encoded += [ v ]
ri += 2
#print("\nencoded", encoded)
return encoded, isext
# Try to encode with the original isext flag
warn = "This may take a while" if len(data) > 300000 else ""
print("Compressing image data...", warn)
isext = False
encoded, isext = try_encode(data, isext)
if len(encoded) == 0:
encoded, isext = try_encode(data, True)
return encoded, isext
def bitwise_rle_decode(isext, rledata, invert=0):
expanded = []
for n in rledata: expanded += [ n >> 4, n & 0xF ]
decoded = []
bitstate = 0 ; workbyte = 0 ; outindex = 0
i = 0
while i < len(expanded):
c = expanded[i]
i += 1
if i == 1: bitstate = c ; continue
if c == 15:
d = expanded[i] ; e = expanded[i + 1]
if isext and d == 15:
c = 256 + 16 * e + expanded[i + 2] - 1
i += 1
else:
c = 16 * d + e + 15
i += 2
for _ in range(c, -1, -1):
bitval = 0x80 >> (outindex & 7)
if bitstate: workbyte |= bitval
if bitval == 1:
decoded += [ workbyte ]
workbyte = 0
outindex += 1
bitstate ^= 1
print("\nDecoded RLE data:")
pretty = [ '{0:08b}'.format(v) for v in decoded ]
rows = [pretty[i:i+bytewidth] for i in range(0, len(pretty), bytewidth)]
for row in rows: print(f"{''.join(row)}")
return decoded
def rle_emit(ofile, arrname, rledata, rawsize, isext):
outstr = ''
rows = [ rledata[i:i+16] for i in range(0, len(rledata), 16) ]
for i in range(0, len(rows)):
rows[i] = [ '0x{0:02X}'.format(v) for v in rows[i] ]
outstr += f" {', '.join(rows[i])},\n"
outstr = outstr[:-2]
size = len(rledata)
defname = 'COMPACT_CUSTOM_BOOTSCREEN_EXT' if isext else 'COMPACT_CUSTOM_BOOTSCREEN'
ofile.write(f"\n// Saves {rawsize - size} bytes\n#define {defname}\n{datatype} {arrname}_rle[{size}] PROGMEM = {{\n{outstr}\n}};\n")
# Encode the data, write it out, close the file
rledata, isext = bitwise_rle_encode(raw_data)
rle_emit(ofile, arrname, rledata, len(raw_data), isext)
ofile.close()
# Validate that code properly compressed (and decompressed) the data
checkdata = bitwise_rle_decode(isext, rledata)
for i in range(0, len(checkdata)):
if raw_data[i] != checkdata[i]:
print(f'Data mismatch at byte offset {i} (should be {raw_data[i]} but got {checkdata[i]})')
break
if len(sys.argv) <= 2:
print('Usage: rle_compress_bitmap.py INPUT_FILE OUTPUT_FILE')
exit(1)
output_cpp = sys.argv[2]
inname = sys.argv[1].replace('//', '/')
try:
input_cpp = open(inname)
print("Processing", inname, "...")
addCompressedData(input_cpp, output_cpp)
except OSError:
print("Can't find input file", inname)

View File

@ -7,17 +7,6 @@ import serial
Import("env")
# Needed (only) for compression, but there are problems with pip install heatshrink
#try:
# import heatshrink
#except ImportError:
# # Install heatshrink
# print("Installing 'heatshrink' python module...")
# env.Execute(env.subst("$PYTHONEXE -m pip install heatshrink"))
#
# Not tested: If it's safe to install python libraries in PIO python try:
# env.Execute(env.subst("$PYTHONEXE -m pip install https://github.com/p3p/pyheatshrink/releases/download/0.3.3/pyheatshrink-pip.zip"))
import MarlinBinaryProtocol
#-----------------#
@ -168,7 +157,8 @@ def Upload(source, target, env):
marlin_string_config_h_author = _GetMarlinEnv(MarlinEnv, 'STRING_CONFIG_H_AUTHOR')
# Get firmware upload params
upload_firmware_source_name = str(source[0]) # Source firmware filename
upload_firmware_source_name = env['PROGNAME'] + '.bin' if 'PROGNAME' in env else str(source[0])
# Source firmware filename
upload_speed = env['UPLOAD_SPEED'] if 'UPLOAD_SPEED' in env else 115200
# baud rate of serial connection
upload_port = _GetUploadPort(env) # Serial port to use
@ -191,6 +181,21 @@ def Upload(source, target, env):
# "upload_random_name": generate a random 8.3 firmware filename to upload
upload_random_filename = upload_delete_old_bins and not marlin_long_filename_host_support
# Heatshrink module is needed (only) for compression
if upload_compression:
if sys.version_info[0] > 2:
try:
import heatshrink2
except ImportError:
print("Installing 'heatshrink2' python module...")
env.Execute(env.subst("$PYTHONEXE -m pip install heatshrink2"))
else:
try:
import heatshrink
except ImportError:
print("Installing 'heatshrink' python module...")
env.Execute(env.subst("$PYTHONEXE -m pip install heatshrink"))
try:
# Start upload job

View File

@ -11,7 +11,7 @@
".vscode"
],
"binary_file_patterns":
[ "*.psd", "*.png", "*.jpg", "*.jpeg", "*.bdf", "*.patch", "avrdude_5.*", "*.svg", "*.bin", "*.woff" ],
[ "*.psd", "*.png", "*.jpg", "*.jpeg", "*.bdf", "*.patch", "avrdude_5.*", "*.svg", "*.bin", "*.woff", "*.otf" ],
"file_exclude_patterns":
[
"Marlin/platformio.ini",
@ -30,6 +30,7 @@
"ensure_newline_at_eof_on_save": true,
"tab_size": 2,
"translate_tabs_to_spaces": true,
"trim_trailing_white_space_on_save": true
"trim_trailing_white_space_on_save": true,
"uncrustify_config" : "${project_dir}/../extras/uncrustify.cfg"
}
}

View File

@ -499,7 +499,7 @@ def get_starting_env(board_name_full, version):
possible_envs = None
for i, line in enumerate(pins_h):
if 0 < line.find("Unknown MOTHERBOARD value set in Configuration.h"):
invalid_board();
invalid_board()
if list_start_found == False and 0 < line.find('1280'):
list_start_found = True
elif list_start_found == False: # skip lines until find start of CPU list
@ -1103,7 +1103,7 @@ class output_window(Text):
else:
try:
temp_text = IO_queue.get(block=False)
except Queue.Empty:
except queue.Empty:
continue_updates = False # queue is exhausted so no need for further updates
else:
self.insert('end', temp_text[0], temp_text[1])
@ -1267,6 +1267,7 @@ def main():
global build_type
global target_env
global board_name
global Marlin_ver
board_name, Marlin_ver = get_board_name()

View File

@ -0,0 +1,234 @@
# Marlin Binary File Transfer (BFT)
Marlin is capable of transferring binary data to the internal storage (SD card) via serial when built with `BINARY_FILE_TRANSFER` enabled. The following is a description of the binary protocol that must be used to conduct transfers once the printer is in binary mode after running `M28 B1`.
## Data Endianness
All data structures are **little-endian**! This means that when constructing packets with multi-byte values, the lower-order bytes are packed first. For example, each packet should start with a 16-bit start token with the value `0xB5AD`, so the data on the wire begins with the byte `0xAD` followed by `0xB5`, and so on.
An example Connection SYNC packet, which is only a header and has no payload:
```
S S P P P H
t y r a a e
a n o c y a
r c t k l d
t o e o e
c t a r
o d
l t C
y l S
p e
e n
---- -- - - ---- ----
ADB5 00 0 1 0000 0103
```
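A quick check of the byte order in Python, using only the standard library:
```python
import struct
print(struct.pack('<H', 0xB5AD).hex())  # 'adb5' -> 0xAD goes on the wire first
```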
## Packet Header
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-------------------------------+---------------+-------+-------+
| Start Token (0xB5AD) | Sync Number | Prot- | Pack- |
| | | ocol | et |
| | | ID | Type |
+-------------------------------+---------------+-------+-------+
| Payload Length | Header Checksum |
+-------------------------------+-------------------------------+
```
| Field | Width | Description |
|-----------------|---------|---|
| Start Token | 16 bits | Each packet must start with the 16-bit value `0xB5AD`. |
| Sync Number | 8 bits | Synchronization value, each packet after sync should increment this value by 1. |
| Protocol ID | 4 bits | Protocol ID. `0` for Connection Control, `1` for Transfer. See Below. |
| Packet Type | 4 bits | Packet Type ID. Depends on the Protocol ID, see below. |
| Payload Length | 16 bits | Length of payload data. If this value is greater than 0, a packet payload will follow the header. |
| Header Checksum | 16 bits | 16-bit Fletcher's checksum of the header data, excluding the Start Token. |
## Packet Payload
If the Payload Length field of the header is non-zero, payload data is expected to follow.
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-------------------------------+-------------------------------+
| Payload Data ... |
+-------------------------------+-------------------------------+
| ... | Packet Checksum |
+-------------------------------+-------------------------------+
```
| Field | Width | Description |
|-----------------|------------------------|---|
| Payload Data | Payload Length bytes | Payload data. This should be no longer than the buffer length reported by the Connection SYNC operation. |
| Packet Checksum | 16 bits | 16-bit Fletcher's checksum of the header and payload, including the Header Checksum, but excluding the Start Token. |
## Fletcher's Checksum
Data packets require a checksum for the header and a checksum for the entire packet if the packet has a payload. In both cases the checksum does not include the first two bytes of the packet, the Start Token.
A simple example implementation:
```c++
uint16_t cs = 0;
for (size_t i = 2; i<packet.size(); i++) {
uint8_t cslow = (((cs & 0xFF) + packet[i]) % 255);
cs = ((((cs >> 8) + cslow) % 255) << 8) | cslow;
}
```
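The same checksum can be written in Python and used to reproduce the example SYNC packet from the Data Endianness section (a minimal sketch; the nybble packing of Protocol ID and Packet Type is inferred from that example):
```python
import struct

def fletcher16(data: bytes) -> int:
    cs = 0
    for b in data:
        cslow = ((cs & 0xFF) + b) % 255
        cs = ((((cs >> 8) + cslow) % 255) << 8) | cslow
    return cs

# Sync 0, Protocol ID 0 (Connection), Packet Type 1 (SYNC), Payload Length 0
header = struct.pack('<BBH', 0x00, (0 << 4) | 1, 0)
packet = struct.pack('<H', 0xB5AD) + header + struct.pack('<H', fletcher16(header))
print(packet.hex())  # 'adb5000100000103' -- matches the example SYNC packet above
```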
## General Responses
### ok
All packets **except** the SYNC Packet (see below) are acknowledged by an `ok<SYNC>` message. This acknowledgement only signifies that the client received the packet and that the header was well formed. An `ok` acknowledgement does not signify successful operation in cases where the client also sends detailed response messages (see details on packet types below). Most notably, with the current implementation the client will still respond `ok` when the host sends multiple packets with the same Sync Number, but it will not send the proper response or any errors.
**NOTE**: The `ok` acknowledgement is sent before any packet type specific output. The `SYNC` value should match the Sync Number of the last packet sent, and the next packet sent should use a Sync Number of this value + 1.
Example:
```
ok1
```
### rs
In the case of a packet being sent out of order, where the Sync Number is not the previous Sync Number + 1, an `rs<SYNC>` message will be sent with the last Sync Number received.
Example:
```
rs1
```
## Connection Control (`Protocol ID` 0)
`Protocol ID` 0 packets control the binary connection itself. There are only 2 types:
| Packet Type | Name | Description |
|---|---|---|
| 1 | SYNC | Synchronize host and client and get connection info. |
| 2 | CLOSE | Close the binary connection and switch back to ASCII. |
### SYNC Packet
A SYNC packet should be the first packet sent by a host after enabling binary mode. On success, a sync response will be sent.
**Note**: This is the only packet that is not acknowledged with an `ok` response.
Returns a sync response:
```
ss<SYNC>,<BUFFER_SIZE>,<VERSION_MAJOR>.<VERSION_MINOR>.<VERSION_PATCH>
```
| Value | Description |
|---|---|
| SYNC | The current Sync Number, this should be used in the next packet sent and incremented by 1 for each packet sent after. |
| BUFFER_SIZE | The client buffer size. Packet Payload Length must not exceed this value. |
| VERSION_MAJOR | The major version number of the client Marlin BFT protocol, e.g., `0`. |
| VERSION_MINOR | The minor version number of the client Marlin BFT protocol, e.g., `1`. |
| VERSION_PATCH | The patch version number of the client Marlin BFT protocol, e.g., `0`. |
Example response:
```
ss0,96,0.1.0
```
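A host might parse this response like so (a sketch):
```python
import re
m = re.match(r'ss(\d+),(\d+),(\d+)\.(\d+)\.(\d+)', 'ss0,96,0.1.0')
sync, buffer_size = int(m[1]), int(m[2])  # the next packet uses sync + 1
```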
### CLOSE Packet
A CLOSE packet should be the last packet sent by a host. On success, the client will switch back to ASCII mode.
## Transfer Control (`Protocol ID` 1)
`Protocol ID` 1 packets control the file transfers performed over the connection:
| Packet Type | Name | Description |
|---|---|---|
| 0 | QUERY | Query the client protocol details and compression parameters. |
| 1 | OPEN | Open a file for writing and begin accepting data to transfer. |
| 2 | CLOSE | Finish writing and close the current file. |
| 3 | WRITE | Write data to an open file. |
| 4 | ABORT | Abort file transfer. |
### QUERY Packet
A QUERY packet should be the second packet sent by a host, after a SYNC packet. On success a query response will be sent in addition to an `ok<sync>` acknowledgement.
Returns a query response:
```
PFT:version:<VERSION_MAJOR>.<VERSION_MINOR>.<VERSION_PATCH>:compression:<COMPRESSION_ALGO>(,<COMPRESSION_PARAMS>)
```
| Value | Description |
|---|---|
| VERSION_MAJOR | The major version number of the client Marlin BFT protocol, e.g., `0`. |
| VERSION_MINOR | The minor version number of the client Marlin BFT protocol, e.g., `1`. |
| VERSION_PATCH | The patch version number of the client Marlin BFT protocol, e.g., `0`. |
| COMPRESSION_ALGO | Compression algorithm. Currently either `heatshrink` or `none` |
| COMPRESSION_PARAMS | Compression parameters, separated by commas. Currently, if `COMPRESSION_ALGO` is `heatshrink`, this will be the window size and lookahead size. |
Example response:
```
PFT:version:0.1.0:compression:heatshrink,8,4
```
### OPEN Packet
Opens a file for writing. The filename and other options are specified in the Packet Payload. The filename can be a long filename if the firmware is compiled with support; however, the entire Packet Payload must not be longer than the buffer length returned by the SYNC Packet. The filename value must include a null terminator.
Payload:
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------+---------------+-------------------------------+
| Dummy | Compression | Filename |
+---------------------------------------------------------------|
| ... |
+-------------------------------+-------------------------------+
| ... | NULL (0x0) | Packet Checksum |
+-------------------------------+-------------------------------+
```
| Field | Width | Description |
|-----------------|------------------------|---|
| Dummy | 8 bits | A boolean value indicating if this file transfer should be actually carried out or not. If `1`, the client will respond as if the file is opened and accept data transfer, but no data will be written. |
| Compression | 8 bits | A boolean value indicating if the data to be transferred will be compressed using the algorithm and parameters returned in the QUERY Packet. |
| Filename | ... | A filename including a null terminator byte. |
| Packet Checksum | 16 bits | 16-bit Fletcher's checksum of the header and payload, including the Header Checksum, but excluding the Start Token. |
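For example, the payload for opening the (hypothetical) file `/test.gco` with compression on and dummy off could be assembled like this; the packet header and checksums described above still wrap it:
```python
dummy, compression = 0, 1
payload = bytes([dummy, compression]) + b'/test.gco' + b'\x00'  # NUL-terminated name
```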
Responses:
| Response | Description |
|---|---|
| `PFT:success` | File opened and ready for write. |
| `PFT:fail` | The client couldn't open the file. |
| `PFT:busy` | The file is already open. |
### CLOSE Packet
Closes the currently open file.
Responses:
| Response | Description |
|---|---|
| `PFT:success` | Buffer flushed and file closed. |
| `PFT:ioerror` | Client storage device failure. |
| `PFT:invalid` | No file open. |
### WRITE Packet
Writes payload data to the currently open file. If the file was opened with Compression set to 1, the data will be decompressed first. Payload Length must not exceed the buffer size returned by the SYNC Packet.
Responses:
On success, an `ok<SYNC>` response will be sent. On error, an `ok<SYNC>` response will be followed by an error response:
| Response | Description |
|---|---|
| `PFT:ioerror` | Client storage device failure. |
| `PFT:invalid` | No file open. |
### ABORT Packet
Closes the currently open file and removes it.
Responses:
| Response | Description |
|---|---|
| `PFT:success` | Transfer aborted, file removed. |
## Typical Usage
1. Send ASCII command `M28 B1` to initiate Binary Transfer mode.
2. Send Connection SYNC Packet, record Sync Number and Buffer Size.
3. Send Transfer QUERY Packet, using Sync Number from above. Record compression algorithm and parameters.
4. Send Transfer OPEN Packet, using last Sync Number + 1, filename and compression options. If error, send Connection CLOSE Packet and abort.
5. Send Transfer Write Packets, using last Sync Number + 1 with the file data. The Payload Length must not exceed the Buffer Size reported in step 2. On error, send a Transfer ABORT Packet, followed by a Connection CLOSE Packet and then abort transfer.
6. Send Transfer CLOSE Packet, using last Sync Number + 1.
7. Send Connection CLOSE Packet, using last Sync Number + 1.
8. The client is now in ASCII mode; the transfer is complete.
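Putting the steps together, a minimal host-side sketch might look like the following. Response reading and error handling are omitted for brevity; the serial port, baud rate, buffer size, and `test.gco` filename are assumptions, and the `MarlinBinaryProtocol` Python module used by the upload script remains the reference implementation.
```python
import struct, serial  # pyserial is assumed to be installed

def fletcher16(data: bytes) -> int:
    cs = 0
    for b in data:
        cslow = ((cs & 0xFF) + b) % 255
        cs = ((((cs >> 8) + cslow) % 255) << 8) | cslow
    return cs

def send_packet(port, sync, protocol, ptype, payload=b''):
    header = struct.pack('<BBH', sync, (protocol << 4) | ptype, len(payload))
    header += struct.pack('<H', fletcher16(header))  # header checksum
    pkt = struct.pack('<H', 0xB5AD) + header
    if payload:  # packet checksum covers header, header checksum, and payload
        pkt += payload + struct.pack('<H', fletcher16(header + payload))
    port.write(pkt)

port = serial.Serial('/dev/ttyUSB0', 115200, timeout=2)  # assumed port and baud
port.write(b'M28 B1\n')                                  # 1. enter binary mode
send_packet(port, 0, 0, 1)                               # 2. Connection SYNC
sync, bufsize = 1, 96                                    #    parse 'ss0,96,...' in real code
send_packet(port, sync, 1, 0); sync += 1                 # 3. Transfer QUERY
send_packet(port, sync, 1, 1, b'\x00\x00/test.gco\x00'); sync += 1  # 4. OPEN, dummy/compression off
data = open('test.gco', 'rb').read()
for i in range(0, len(data), bufsize):                   # 5. WRITE in buffer-size chunks
    send_packet(port, sync, 1, 3, data[i:i + bufsize]); sync += 1
send_packet(port, sync, 1, 2); sync += 1                 # 6. Transfer CLOSE
send_packet(port, sync, 0, 2)                            # 7. Connection CLOSE -> ASCII mode
```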