Llama-3.2-1B-CyberFrog - An Optimized Model for Task Execution Planning in Robotics

Quantized Version: phamhai/Llama-3.2-1B-CyberFrog-Q4_K_M-GGUF

Llama-3.2-1B-CyberFrog is a lightweight model optimized specifically for task execution planning in robotics. With 1 billion parameters, CyberFrog excels at translating complex natural-language instructions into actionable robotic tasks with high efficiency and precision.

Strengths:

  • Efficient translation of natural-language instructions into task execution plans
  • Compact 1B-parameter architecture
  • Easy to deploy on edge devices, electric vehicles, and robots

Intended Use:

Instruction Parsing

  • Objective: Allow users to give complex instructions in a single sentence or conversation and have the robot understand and break down the steps autonomously.
  • How it works: When given a complex instruction like "Get the ingredients for a sandwich and start making it," the LLM can:
    • Break Down Tasks: Identify sub-tasks (e.g., "Go to kitchen," "Find bread, lettuce, and meat," "Place them on the counter," "Assemble ingredients").
    • Sequence Planning: Arrange these tasks in an actionable order.
    • Conditional Logic: If the robot doesn’t find an ingredient, it might ask the user, "I couldn’t find lettuce. Would you like me to proceed without it?"
  • Implementation: Task breakdown can be implemented by sending parsed steps to a Robot Operating System (ROS) node or a task management module that schedules actions in the correct order.
  • Use Case: Warehouse robots, where a user might instruct, "Pick up all items on Shelf B and bring them to Packing Area 2."
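The task-breakdown-and-scheduling flow above can be sketched as a simple queue of sub-tasks. The `Step` and `TaskQueue` names below are illustrative stand-ins for a real ROS task manager, not part of the model or any library:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Step:
    action: str    # e.g. "Move"
    argument: str  # e.g. "kitchen"

class TaskQueue:
    """Illustrative scheduler: holds parsed sub-tasks in execution order."""
    def __init__(self):
        self._steps = deque()

    def plan(self, steps):
        self._steps.extend(steps)

    def next_step(self):
        # Returns None once the plan is exhausted.
        return self._steps.popleft() if self._steps else None

# Sub-tasks from the sandwich example, arranged in actionable order.
queue = TaskQueue()
queue.plan([
    Step("Move", "kitchen"),
    Step("Find", "bread, lettuce, and meat"),
    Step("Place", "counter"),
])
first = queue.next_step()
print(first.action, first.argument)  # Move kitchen
```

A real deployment would replace the queue with the scheduling layer of the robot's middleware, but the ordering contract is the same: steps are executed strictly in the sequence the planner emits them.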

Task Planning Translation

  • Objective: Translate high-level tasks from human language into detailed, actionable robot plans.
  • How it works: Given a task like "Clean the kitchen," the LLM interprets it by using contextual knowledge to generate subtasks:
    • Identify relevant actions, e.g., "Wipe down counters," "Sweep the floor," "Take out the trash."
    • If connected to an environmental sensing system, it can recognize that there are items out of place or that certain surfaces need cleaning.
    • Order these actions logically and assign them to specific robotic functions (e.g., vacuuming, mopping, object manipulation).
  • Implementation: Use the LLM alongside a task manager to break down and allocate steps, then feed those steps to robotic modules (navigation, object manipulation, etc.).
  • Use Case: Cleaning robots in homes, hospitals, or offices, where tasks can vary greatly based on real-time needs.
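Assigning actions to specific robotic functions, as described above, is typically a dispatch table from parsed action names to module handlers. The handler functions and the `ASK_USER` fallback below are hypothetical placeholders for real navigation/manipulation controllers:

```python
# Hypothetical robot modules; a real system would invoke hardware
# controllers instead of returning strings.
def wipe(target):
    return f"wiped {target}"

def sweep(target):
    return f"swept {target}"

# Dispatch table: parsed action name -> robotic function.
DISPATCH = {"Wipe": wipe, "Sweep": sweep}

def execute(action, argument):
    handler = DISPATCH.get(action)
    if handler is None:
        # Conditional logic: actions outside the action space are
        # escalated to the user instead of failing silently.
        return f"ASK_USER: I cannot perform '{action}'. How should I proceed?"
    return handler(argument)

print(execute("Wipe", "counters"))
print(execute("Take_out", "trash"))
```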

Model Details:

  • Base Model: Llama-3.2-1B-Instruct
  • Parameters: 1 billion
  • Context Length: 4096 tokens
  • Training Data: CyberFrog SFT Dataset
  • Input/Output Format: Accepts commands in Vietnamese; outputs steps in the form [Action name in English][Argument/Entity for Action in English]: detailed action description in Vietnamese.
  • Model weights: Llama-3.2-1B-CyberFrog

Terms of Use and License: By using our released weights, you agree to and comply with the terms and conditions specified in Meta's Llama 3 license.

Usage Examples

With Hugging Face's transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "phamhai/Llama-3.2-1B-CyberFrog"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Example 1:

messages = [
    {"role": "system", "content": "You are a humanoid robot with artificial intelligence capable of assisting humans with daily tasks. You have arms, the ability to move, and interact with the outside world. You help humans through their commands. Upon receiving a command, you have the ability to analyze it and determine the series of actions required to achieve the person's goal. The commands and tasks you receive must be analyzed into a sequence of actions within the action space to ensure that, when you execute all the actions in that sequence, the task assigned by the user is completed."},
    {"role": "user", "content": "Tự động bật hệ thống thông gió trong nhà kho khi nhiệt độ tăng cao."}]  # "Automatically turn on the warehouse ventilation system when the temperature rises."
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(tokenized_chat, max_new_tokens=128, do_sample=True, temperature=0.05)
print(tokenizer.decode(outputs[0]))

# 1. [Sense_Temperature][warehouse]: Đo nhiệt độ hiện tại trong nhà kho.
# 2. [Compare_Temperature][threshold]: So sánh nhiệt độ đo được với ngưỡng nhiệt độ cao (cần được người dùng xác định trước).
# 3. [Conditional_Check][temperature comparison result]: Kiểm tra xem kết quả so sánh có cho thấy nhiệt độ cao hơn ngưỡng hay không.
# 4. [Activate_Ventilation][warehouse ventilation system]:  Nếu nhiệt độ cao hơn ngưỡng, kích hoạt hệ thống thông gió trong nhà kho.
# 5. [Log_Action][ventilation activation]: Ghi lại thời gian kích hoạt hệ thống thông gió vào nhật ký. (Tùy chọn, giúp theo dõi hoạt động)
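Each numbered line of the output above follows the [Action][Argument]: description format, so it can be parsed back into structured steps with a small regular expression. A minimal sketch, using the first line of the sample output:

```python
import re

# Matches the model's numbered output lines:
#   "1. [Action][argument]: detailed description in Vietnamese"
STEP_RE = re.compile(r"^\s*\d+\.\s*\[([^\]]+)\]\[([^\]]+)\]:\s*(.*)$")

line = "1. [Sense_Temperature][warehouse]: Đo nhiệt độ hiện tại trong nhà kho."
match = STEP_RE.match(line)
action, argument, detail = match.groups()
print(action, "|", argument)  # Sense_Temperature | warehouse
```

The English action name and argument can then be routed to the corresponding robot module, while the Vietnamese description is kept for logging or user feedback.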

# Example 2:

messages = [
    {"role": "system", "content": "You are a humanoid robot with artificial intelligence capable of assisting humans with daily tasks. You have arms, the ability to move, and interact with the outside world. You help humans through their commands. Upon receiving a command, you have the ability to analyze it and determine the series of actions required to achieve the person's goal. The commands and tasks you receive must be analyzed into a sequence of actions within the action space to ensure that, when you execute all the actions in that sequence, the task assigned by the user is completed."},
    {"role": "user", "content": "Phủ thêm lớp đất mùn cho các cây hoa trong sân vườn để giữ ẩm."}]  # "Add a layer of mulch to the flowering plants in the garden to retain moisture."
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(tokenized_chat, max_new_tokens=128, do_sample=True, temperature=0.05)
print(tokenizer.decode(outputs[0]))

# 1. [Find][flower beds]: Tìm kiếm khu vực trồng hoa trong sân vườn.
# 2. [Move][flower beds]: Di chuyển đến khu vực các luống hoa.
# 3. [Find][mulch]: Tìm lớp đất mùn đã chuẩn bị sẵn. (Giả định đất mùn đã được chuẩn bị từ trước).
# 4. [Pick_up][mulch]:  Lấy đất mùn. (Robot có khả năng cầm nắm, xúc, hoặc sử dụng dụng cụ).
# 5. [Move][flower beds]: Quay trở lại khu vực các luống hoa.
# 6. [Spread][mulch on flower beds]: Rải đều lớp đất mùn lên bề mặt đất xung quanh gốc cây hoa.
# 7. [Put_down][any remaining mulch]: Đặt xuống bất kỳ đất mùn còn lại. (Nếu có dụng cụ, đặt dụng cụ xuống).
# 8. [Move][original location]: Trở về vị trí ban đầu.

# Example 3:

messages = [
    {"role": "system", "content": "You are a humanoid robot with artificial intelligence capable of assisting humans with daily tasks. You have arms, the ability to move, and interact with the outside world. You help humans through their commands. Upon receiving a command, you have the ability to analyze it and determine the series of actions required to achieve the person's goal. The commands and tasks you receive must be analyzed into a sequence of actions within the action space to ensure that, when you execute all the actions in that sequence, the task assigned by the user is completed."},
    {"role": "user", "content": "Đo nhiệt độ và độ ẩm trong các khu vực nhà bếp và phòng tắm."}]  # "Measure the temperature and humidity in the kitchen and bathroom areas."
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(tokenized_chat, max_new_tokens=128, do_sample=True, temperature=0.05)
print(tokenizer.decode(outputs[0]))

# 1. [Move][Kitchen]: Di chuyển đến khu vực nhà bếp.
# 2. [Measure_temperature][Kitchen]: Đo nhiệt độ trong nhà bếp.
# 3. [Measure_humidity][Kitchen]: Đo độ ẩm trong nhà bếp.
# 4. [Move][Bathroom]: Di chuyển đến khu vực phòng tắm.
# 5. [Measure_temperature][Bathroom]: Đo nhiệt độ trong phòng tắm.
# 6. [Measure_humidity][Bathroom]: Đo độ ẩm trong phòng tắm.

Corresponding Author:
