Version: 1.7
idl/grpc/service.proto
ActionTypeEffector
ActionTypeEffector is the message that represents the coefficients applied to the action types in the tree when calculating the predicted state evaluation.
Each coefficient should be 0.0 or greater. For example, if the evaluation of an action-state is 10, the action is a direct pass, and the value of direct_pass is 0.5, the final evaluation of the action-state will be 5.
Example in Python gRPC:
actions = []
action_type_effector = pb2.ActionTypeEffector(
    direct_pass=2.0,
    lead_pass=1.5,
    through_pass=1.0,
    short_dribble=1.0,
    long_dribble=1.0,
    cross=1.0,
    hold=1.0
)
planner_evaluation_effector = pb2.PlannerEvaluationEffector(
    action_type_effector=action_type_effector
)
planner_evaluation = pb2.PlannerEvaluation(
    effectors=planner_evaluation_effector,
)
helios_offensive_planner = pb2.HeliosOffensivePlanner(
    lead_pass=True,
    direct_pass=False,
    through_pass=True,
    simple_pass=True,
    short_dribble=True,
    long_dribble=True,
    simple_shoot=True,
    simple_dribble=False,
    cross=True,
    server_side_decision=False,
    max_depth=5,
    max_nodes=800,
    evaluation=planner_evaluation
)
actions.append(pb2.PlayerAction(helios_offensive_planner=helios_offensive_planner))
return pb2.PlayerActions(actions=actions)
Field | Type | Label | Description |
---|---|---|---|
direct_pass | float | | The coefficient of the direct pass action. |
lead_pass | float | | The coefficient of the lead pass action. |
through_pass | float | | The coefficient of the through pass action. |
short_dribble | float | | The coefficient of the short dribble action. |
long_dribble | float | | The coefficient of the long dribble action. |
cross | float | | The coefficient of the cross action. |
hold | float | | The coefficient of the hold action. |
AddArc
AddCircle
AddLine
AddMessage
AddPoint
AddRectangle
AddSector
AddText
AddTriangle
AttentionTo
AttentionToOf
Ball
Ball is the message that represents the ball in the soccer simulation.
Field | Type | Label | Description |
---|---|---|---|
position | RpcVector2D | | The position of the ball. |
relative_position | RpcVector2D | | The relative position of the ball to the agent who is sending the message. |
seen_position | RpcVector2D | | The position of the ball that the agent has seen. |
heard_position | RpcVector2D | | The position of the ball that the agent has heard. |
velocity | RpcVector2D | | The velocity of the ball. |
seen_velocity | RpcVector2D | | The velocity of the ball that the agent has seen. |
heard_velocity | RpcVector2D | | The velocity of the ball that the agent has heard. |
pos_count | int32 | | How many cycles ago the agent has seen or heard the ball. |
seen_pos_count | int32 | | How many cycles ago the agent has seen the ball. |
heard_pos_count | int32 | | How many cycles ago the agent has heard the ball. |
vel_count | int32 | | How many cycles ago the agent has seen or heard the velocity of the ball. |
seen_vel_count | int32 | | How many cycles ago the agent has seen the velocity of the ball. |
heard_vel_count | int32 | | How many cycles ago the agent has heard the velocity of the ball. |
lost_count | int32 | | How many cycles ago the agent has lost the ball. |
ghost_count | int32 | | |
dist_from_self | float | | The distance of the ball from the agent who is sending the message. |
angle_from_self | float | | The angle of the ball from the agent who is sending the message. |
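As a rough sketch of how the seen/heard counts can be used, the freshness of the seen position can decide which estimate to trust; max_seen_age below is an arbitrary illustrative threshold, not part of the API:
def estimated_ball_position(ball, max_seen_age=3):
    # `ball` is a Ball message. Prefer the recently seen position; otherwise fall
    # back to the filtered position estimate.
    if ball.seen_pos_count <= max_seen_age:
        return ball.seen_position
    return ball.position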
BallGoalieMessage
BallMessage
BallPlayerMessage
BestPlannerActionRequest
BestPlannerActionRequest.PairsEntry
BestPlannerActionResponse
Field | Type | Label | Description |
---|---|---|---|
index | int32 | | |
Bhv_BeforeKickOff
Bhv_BodyNeckToBall
Bhv_BodyNeckToPoint
Bhv_Emergency
Bhv_GoToPointLookBall
Bhv_NeckBodyToBall
Field | Type | Label | Description |
---|---|---|---|
angle_buf | float | | |
Bhv_NeckBodyToPoint
Bhv_ScanField
Body_AdvanceBall
Body_ClearBall
Body_Dribble
Body_GoToPoint
todo more variables
Body_GoToPointDodge
Body_HoldBall
Body_Intercept
Body_KickOneStep
Body_SmartKick
todo more variables
Body_StopBall
Body_StopDash
Field | Type | Label | Description |
---|---|---|---|
save_recovery | bool | | |
Body_TackleToPoint
Body_TurnToAngle
Field | Type | Label | Description |
---|---|---|---|
angle | float | | |
Body_TurnToBall
Field | Type | Label | Description |
---|---|---|---|
cycle | int32 | | |
Body_TurnToPoint
Catch
ChangePlayerType
Field | Type | Label | Description |
---|---|---|---|
uniform_number | int32 | | |
type | int32 | | |
ChangeView
CoachAction
CoachActions
Dash
Dash is the message that represents the dash action in the soccer simulation.
By using this action, the agent can dash (run or walk) in a direction with a given power.
The rcssserver calculates the next position and velocity of the agent based on the current position, velocity, power, and direction.
Field | Type | Label | Description |
---|---|---|---|
power | float | | The power of the dash action. The power can be between -100 and 100. If the power is negative, the agent dashes backward, consuming two times the stamina. |
relative_direction | float | | The relative direction of the dash action with respect to the body direction of the agent. The direction can be between -180 and 180. |
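As a minimal sketch, a dash action can be sent like the other examples in this document; the PlayerAction member name dash is assumed to mirror the Dash message name (check your generated pb2 module):
actions = []
# Dash forward at full power, straight along the current body direction.
actions.append(pb2.PlayerAction(dash=pb2.Dash(power=100.0, relative_direction=0.0)))
return pb2.PlayerActions(actions=actions)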
DebugClient
Field | Type | Label | Description |
---|---|---|---|
message | string | | |
DefenseLineMessage
Field | Type | Label | Description |
---|---|---|---|
defense_line_x | float | | |
DoChangeMode
Field | Type | Label | Description |
---|---|---|---|
game_mode_type | GameModeType | | |
side | Side | | Some game modes need to know the side. |
DoChangePlayerType
DoHeliosSayPlayerTypes
DoHeliosSubstitute
DoKickOff
DoMoveBall
DoMovePlayer
DoRecover
DribbleMessage
Empty
Focus_MoveToPoint
Focus_Reset
GoalieAndPlayerMessage
GoalieMessage
HeliosBasicMove
HeliosBasicOffensive
HeliosCommunicaion
HeliosFieldEvaluator
HeliosFieldEvaluator is the message that represents the field evaluator the proxy agent uses to evaluate each node (predicted state) in the planner tree.
If you do not set a field evaluator, the proxy agent will use the default field evaluator (HeliosFieldEvaluator) to evaluate each node in the planner tree.
This field evaluator calculates the value of the predicted state by using this formula:
value = x_coefficient * (ball.x + 52.5) + ball_dist_to_goal_coefficient * max(0.0, effective_max_ball_dist_to_goal - ball.dist(opponent goal center))
Example in Python gRPC:
actions = []
helios_field_evaluator = pb2.HeliosFieldEvaluator(
    x_coefficient=2.1,
    ball_dist_to_goal_coefficient=1.8,
    effective_max_ball_dist_to_goal=50.0
)
field_evaluator = pb2.PlannerFieldEvaluator(
    helios_field_evaluator=helios_field_evaluator,
)
planner_evaluation = pb2.PlannerEvaluation(
    field_evaluators=field_evaluator
)
helios_offensive_planner = pb2.HeliosOffensivePlanner(
    lead_pass=True,
    direct_pass=False,
    through_pass=True,
    simple_pass=True,
    short_dribble=True,
    long_dribble=True,
    simple_shoot=True,
    simple_dribble=False,
    cross=True,
    server_side_decision=False,
    max_depth=5,
    max_nodes=800,
    evaluation=planner_evaluation
)
actions.append(pb2.PlayerAction(helios_offensive_planner=helios_offensive_planner))
return pb2.PlayerActions(actions=actions)
Field | Type | Label | Description |
---|---|---|---|
x_coefficient | float | | The coefficient of the x-coordinate of the ball in the predicted state. The default value is 1. |
ball_dist_to_goal_coefficient | float | | The coefficient of the distance of the ball to the opponent goal center in the predicted state. The default value is 1. |
effective_max_ball_dist_to_goal | float | | The effective maximum distance of the ball to the opponent goal center in the predicted state. The default value is 40.0. |
HeliosGoalie
HeliosGoalieKick
HeliosGoalieMove
HeliosOffensivePlanner
HeliosOffensivePlanner is the message that represents the offensive planner of the agent in the soccer simulation.
The offensive planner is responsible for making decisions about the offensive actions of the agent by creating a tree of actions,
finding the best chain of actions, and executing the first action in the chain when the agent owns the ball.
The best action is the action that leads to the best predicted state.
The best predicted state is the state with the best evaluation value, calculated by default with this formula: value = ball.x + max(0.0, 40.0 - ball.dist(opponent goal center))
Because the non-simple actions are computationally expensive, they are only used in the first layer of the tree; the simple actions are used in the deeper layers.
To create the tree, the planner creates all possible edges (actions) from the current state and generates the next predicted state for each action.
The planner then creates the next layer of the tree from those predicted states, and continues until
the maximum depth of the tree or the maximum number of nodes is reached.
For more information, see the paper: HELIOS Base: An Open Source Package for the RoboCup Soccer 2D Simulation.
Creating the tree and finding the best predicted state and action
Field | Type | Label | Description |
---|---|---|---|
direct_pass | bool | | Whether the agent can make a direct pass or not. The direct pass is a pass action that the agent can pass the ball to the position of a teammate player. This action is just used in the first layer of the tree. |
lead_pass | bool | | Whether the agent can make a lead pass or not. The lead pass is a pass action that the agent can pass the ball to the position of a teammate player with a lead (very close to the teammate). This action is just used in the first layer of the tree. |
through_pass | bool | | Whether the agent can make a through pass or not. The through pass is a pass action that the agent can pass the ball into the space between the teammate and the opponent goal (the target may be close to or very far from the teammate). This action is just used in the first layer of the tree. |
short_dribble | bool | | Whether the agent can make a short dribble or not. The short dribble is a dribble action that the agent can dribble the ball to a position. This action is just used in the first layer of the tree. |
long_dribble | bool | | Whether the agent can make a long dribble or not. The long dribble is a dribble action that the agent can dribble the ball to a position. This dribble is longer than the short dribble. This action is just used in the first layer of the tree. |
cross | bool | | Whether the agent can make a cross or not. The cross is a kick action that the agent can kick the ball to a position close to a teammate, without ensuring that the teammate can control the ball. This action is just used in the first layer of the tree. |
simple_pass | bool | | Whether the agent can make a simple pass or not. The simple pass is a pass action that the agent can pass the ball to the position of a teammate player. This action is just used in the second or more layers of the tree. This action is not very accurate. |
simple_dribble | bool | | Whether the agent can make a simple dribble or not. The simple dribble is a dribble action that the agent can dribble the ball to a position. This action is just used in the second or more layers of the tree. This action is not very accurate. |
simple_shoot | bool | | Whether the agent can make a simple shoot or not. The simple shoot is a kick action that the agent can kick the ball to the opponent goal. This action is just used in the second or more layers of the tree. This action is not very accurate. |
server_side_decision | bool | | If this value is true, the proxy agent will create the tree and send all of the nodes to the playmaker server to choose the best action. If this value is false, the proxy agent will choose the best action by itself. The default value is false. |
max_depth | int32 | | The maximum depth of the tree. The first layer of the tree is created with the direct_pass, lead_pass, through_pass, short_dribble, long_dribble, and cross actions. The default value is 4, so if you do not set this value the tree is created with a depth of 4. Because 0 is the protobuf default, 0 means the default value. |
max_nodes | int32 | | The maximum number of nodes in the tree. The default value is 500, so if you do not set this value the tree is created with at most 500 nodes. Because 0 is the protobuf default, 0 means the default value. |
evaluation | PlannerEvaluation | | The evaluation methods to evaluate the actions (predicted states) in the tree. |
HeliosPenalty
HeliosSetPlay
HeliosShoot
InitMessage
InterceptInfo
InterceptInfo is the message that represents the information about an intercept action.
Field | Type | Label | Description |
---|---|---|---|
action_type | InterceptActionType | | The type of the intercept action. |
turn_steps | int32 | | The number of steps that the agent needs to turn to the ball. |
turn_angle | float | | The angle that the agent needs to turn to the ball. |
dash_steps | int32 | | The number of steps that the agent needs to dash to the ball. |
dash_power | float | | The power of the dash action. |
dash_dir | float | | The direction of the dash action relative to the player's body direction. |
final_self_position | RpcVector2D | | The final position of the agent after the intercept action. |
final_ball_dist | float | | The final distance of the ball from the agent after the intercept action. |
final_stamina | float | | The final stamina of the agent after the intercept action. |
value | float | | The value of the intercept action. TODO less is better or more is better? |
InterceptMessage
InterceptTable
InterceptTable is the message that represents the intercept table of the agent.
Field | Type | Label | Description |
---|---|---|---|
self_reach_steps | int32 | | The number of steps that the agent needs to reach the ball. |
first_teammate_reach_steps | int32 | | The number of steps that the first teammate needs to reach the ball. |
second_teammate_reach_steps | int32 | | The number of steps that the second teammate needs to reach the ball. |
first_opponent_reach_steps | int32 | | The number of steps that the first opponent needs to reach the ball. |
second_opponent_reach_steps | int32 | | The number of steps that the second opponent needs to reach the ball. |
first_teammate_id | int32 | | The ID of the first teammate. This ID is unique for each player object in each agent proxy. If the ID is 0, it means the agent has no first teammate. |
second_teammate_id | int32 | | The ID of the second teammate. This ID is unique for each player object in each agent proxy. If the ID is 0, it means the agent has no second teammate. |
first_opponent_id | int32 | | The ID of the first opponent. This ID is unique for each player object in each agent proxy. If the ID is 0, it means the agent has no first opponent. |
second_opponent_id | int32 | | The ID of the second opponent. This ID is unique for each player object in each agent proxy. If the ID is 0, it means the agent has no second opponent. |
self_intercept_info | InterceptInfo | repeated | The intercept information of the agent. |
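For illustration, the reach-step fields can be compared to decide whether chasing the ball is worthwhile. This sketch only assumes you already hold an InterceptTable message (how it is obtained from the world model is not shown here):
def should_intercept(intercept_table):
    # Chase the ball if we can reach it no later than the fastest opponent.
    return intercept_table.self_reach_steps <= intercept_table.first_opponent_reach_steps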
Kick
Field | Type | Label | Description |
---|---|---|---|
power | float | | |
relative_direction | float | | |
Log
MatrixFieldEvaluator
MatrixFieldEvaluator is the message that represents the matrix field evaluator of the proxy agent to evaluate each node (predicted state) in the planner tree.
If you do not set a field evaluator, the proxy agent will use the default field evaluator (HeliosFieldEvaluator) to evaluate each node in the planner tree.
This field evaluator calculates the value of the predicted state by using a matrix of float values.
| 10 | 20 | 30 | 40 |
| 15 | 25 | 35 | 45 |
| 10 | 20 | 30 | 40 |
In this example matrix, the value of each point in the opponent penalty area is 45.
Example in Python gRPC:
actions = []
matrix_field_evaluator = pb2.MatrixFieldEvaluator(
    evals=[
        pb2.MatrixFieldEvaluatorY(evals=[10, 15, 10]),
        pb2.MatrixFieldEvaluatorY(evals=[20, 25, 20]),
        pb2.MatrixFieldEvaluatorY(evals=[30, 35, 30]),
        pb2.MatrixFieldEvaluatorY(evals=[40, 45, 40]),
    ]
)
field_evaluator = pb2.PlannerFieldEvaluator(
    matrix_field_evaluator=matrix_field_evaluator
)
planner_evaluation = pb2.PlannerEvaluation(
    field_evaluators=field_evaluator
)
helios_offensive_planner = pb2.HeliosOffensivePlanner(
    lead_pass=True,
    direct_pass=False,
    through_pass=True,
    simple_pass=True,
    short_dribble=True,
    long_dribble=True,
    simple_shoot=True,
    simple_dribble=False,
    cross=True,
    server_side_decision=False,
    max_depth=5,
    max_nodes=800,
    evaluation=planner_evaluation
)
actions.append(pb2.PlayerAction(helios_offensive_planner=helios_offensive_planner))
return pb2.PlayerActions(actions=actions)
MatrixFieldEvaluatorY
Field | Type | Label | Description |
---|---|---|---|
evals | float | repeated | |
Move
Neck_ScanField
Neck_ScanPlayers
todo min/max_angle
Neck_TurnToBall
Neck_TurnToBallAndPlayer
Field | Type | Label | Description |
---|---|---|---|
side | Side | | |
uniform_number | int32 | | |
count_threshold | int32 | | |
Neck_TurnToBallOrScan
Field | Type | Label | Description |
---|---|---|---|
count_threshold | int32 | | |
Neck_TurnToGoalieOrScan
Field | Type | Label | Description |
---|---|---|---|
count_threshold | int32 | | |
Neck_TurnToLowConfTeammate
Neck_TurnToPlayerOrScan
Field | Type | Label | Description |
---|---|---|---|
side | Side | | |
uniform_number | int32 | | |
count_threshold | int32 | | |
Neck_TurnToPoint
Neck_TurnToRelative
Field | Type | Label | Description |
---|---|---|---|
angle | float | | |
OffsideLineMessage
Field | Type | Label | Description |
---|---|---|---|
offside_line_x | float | | |
OnePlayerMessage
OpponentEffector
OpponentEffector is the message that represents the effect of the opponent players on the predicted state evaluation in the tree.
Using this effector causes the predicted state evaluation to be decreased based on the distance or reach steps of the opponent players to the position of the ball in the predicted state.
Each list field in the message is a list of float values.
For example, if you want to decrease the predicted state evaluation when the distance of an opponent player to the ball is less than 5,
you can set the negetive_effect_by_distance field to [-9.0, -8.5, -7.2, -6.1, -3.8]. The predicted state evaluation will then be decreased by 9.0 if the distance is less than 1,
8.5 if it is less than 2, 7.2 if it is less than 3, 6.1 if it is less than 4, and 3.8 if it is less than 5.
Example in Python gRPC:
actions = []
opponent_effector = pb2.OpponentEffector(
    negetive_effect_by_distance=[-50, -45, -40, -30, -20, -15, -10, -5, -2, -1, -0.5, -0.1],
    negetive_effect_by_distance_based_on_first_layer=False,
    negetive_effect_by_reach_steps=[],
    negetive_effect_by_reach_steps_based_on_first_layer=False
)
planner_evaluation_effector = pb2.PlannerEvaluationEffector(
    opponent_effector=opponent_effector,
)
planner_evaluation = pb2.PlannerEvaluation(
    effectors=planner_evaluation_effector,
)
helios_offensive_planner = pb2.HeliosOffensivePlanner(
    lead_pass=True,
    direct_pass=False,
    through_pass=True,
    simple_pass=True,
    short_dribble=True,
    long_dribble=True,
    simple_shoot=True,
    simple_dribble=False,
    cross=True,
    server_side_decision=False,
    max_depth=5,
    max_nodes=800,
    evaluation=planner_evaluation
)
actions.append(pb2.PlayerAction(helios_offensive_planner=helios_offensive_planner))
return pb2.PlayerActions(actions=actions)
Field | Type | Label | Description |
---|---|---|---|
negetive_effect_by_distance | float | repeated | The list of float values that represents the negative effect of the distance of the opponent player to the ball in the predicted state. The values of this list should be negative numbers. |
negetive_effect_by_distance_based_on_first_layer | bool | | If this value is true, the negetive_effect_by_distance will be calculated based on the first action of each action chain. For example, if we have a chain of actions like [direct_pass, simple_pass, simple_dribble], the negetive_effect_by_distance will be calculated based on the direct_pass action for all of the actions. |
negetive_effect_by_reach_steps | float | repeated | The list of float values that represents the negative effect of the reach steps of the opponent player to the ball in the predicted state. |
negetive_effect_by_reach_steps_based_on_first_layer | bool | | If this value is true, the negetive_effect_by_reach_steps will be calculated based on the first action of each action chain. For example, if we have a chain of actions like [direct_pass, simple_pass, simple_dribble], the negetive_effect_by_reach_steps will be calculated based on the direct_pass action for all of the actions. |
OpponentMessage
PassMessage
PassRequestMessage
PenaltyKickState
PlannerEvaluation
PlannerEvaluation is the message that represents the evaluation methods used to evaluate the actions (predicted states) in the tree.
The predicted state evaluation is calculated by the field evaluators and then adjusted by the effectors.
PlannerEvaluationEffector
PlannerEvaluationEffector is the message that represents the effectors of the planner evaluation methods.
The proxy agent will update the predicted state evaluation based on the effectors.
Example in Python gRPC:
actions = []
teammate_effector = pb2.TeammateEffector(
    coefficients={2: 1.2, 5: 1.6},
    apply_based_on_first_layer=False
)
action_type_effector = pb2.ActionTypeEffector(
    direct_pass=2.0,
    lead_pass=1.5,
    through_pass=1.0,
    short_dribble=1.0,
    long_dribble=1.0,
    cross=1.0,
    hold=1.0
)
opponent_effector = pb2.OpponentEffector(
    negetive_effect_by_distance=[-50, -45, -40, -30, -20, -15, -10, -5, -2, -1, -0.5, -0.1],
    negetive_effect_by_distance_based_on_first_layer=False,
    negetive_effect_by_reach_steps=[],
    negetive_effect_by_reach_steps_based_on_first_layer=False
)
planner_evaluation_effector = pb2.PlannerEvaluationEffector(
    opponent_effector=opponent_effector,
    teammate_effector=teammate_effector,
    action_type_effector=action_type_effector
)
planner_evaluation = pb2.PlannerEvaluation(
    effectors=planner_evaluation_effector,
)
helios_offensive_planner = pb2.HeliosOffensivePlanner(
    lead_pass=True,
    direct_pass=False,
    through_pass=True,
    simple_pass=True,
    short_dribble=True,
    long_dribble=True,
    simple_shoot=True,
    simple_dribble=False,
    cross=True,
    server_side_decision=False,
    max_depth=5,
    max_nodes=800,
    evaluation=planner_evaluation
)
actions.append(pb2.PlayerAction(helios_offensive_planner=helios_offensive_planner))
return pb2.PlayerActions(actions=actions)
Field | Type | Label | Description |
---|---|---|---|
opponent_effector | OpponentEffector | | The effector of the opponent players. You can set the negative effect of the distance or reach steps of the opponent players to the ball in the predicted state. By using this effector, the proxy agent will decrease the predicted state evaluation based on the distance or reach steps of the opponent players to the ball in the predicted state. |
action_type_effector | ActionTypeEffector | | The effector of the action types. You can set the coefficients of the action types in the tree to calculate the predicted state evaluation. By using this effector, the proxy agent will update the predicted state evaluation based on the coefficients of the action types in the tree. |
teammate_effector | TeammateEffector | | The effector of the teammates. You can set the coefficients of the teammates in the tree to calculate the predicted state evaluation. By using this effector, the proxy agent will update the predicted state evaluation based on the coefficients of the teammates in the tree. |
PlannerFieldEvaluator
PlannerFieldEvaluator is the message that represents the field evaluator of the proxy agent to evaluate each node (predicted state) in the planner tree.
If you do not set a field evaluator, the proxy agent will use the default field evaluator (HeliosFieldEvaluator) to evaluate each node in the planner tree.
This field evaluator calculates the value of the predicted state by using the helios_field_evaluator and/or the matrix_field_evaluator.
Note: if you only use the matrix_field_evaluator, every target inside a given square of the matrix has the same value, which can cause the player to choose the hold-ball action instead of dribbling in that area.
To avoid this issue, you can use the helios_field_evaluator together with the matrix_field_evaluator, as shown below.
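A sketch of combining both evaluators in one PlannerFieldEvaluator, following the note above; the coefficient and matrix values are illustrative only:
field_evaluator = pb2.PlannerFieldEvaluator(
    helios_field_evaluator=pb2.HeliosFieldEvaluator(
        x_coefficient=1.0,
        ball_dist_to_goal_coefficient=1.0,
        effective_max_ball_dist_to_goal=40.0
    ),
    matrix_field_evaluator=pb2.MatrixFieldEvaluator(
        evals=[
            pb2.MatrixFieldEvaluatorY(evals=[10, 15, 10]),
            pb2.MatrixFieldEvaluatorY(evals=[20, 25, 20]),
            pb2.MatrixFieldEvaluatorY(evals=[30, 35, 30]),
            pb2.MatrixFieldEvaluatorY(evals=[40, 45, 40]),
        ]
    )
)
planner_evaluation = pb2.PlannerEvaluation(field_evaluators=field_evaluator)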
Player
Player is the message that represents a player in the soccer simulation.
To get the type information of the player, you can use the type_id field together with the player type information.
Field | Type | Label | Description |
---|---|---|---|
position | RpcVector2D | | The position of the player. |
seen_position | RpcVector2D | | The position of the player that the agent has seen. |
heard_position | RpcVector2D | | The position of the player that the agent has heard. |
velocity | RpcVector2D | | The velocity of the player. |
seen_velocity | RpcVector2D | | The velocity of the player that the agent has seen. |
pos_count | int32 | | How many cycles ago the agent has seen or heard the player. |
seen_pos_count | int32 | | How many cycles ago the agent has seen the player. |
heard_pos_count | int32 | | How many cycles ago the agent has heard the player. |
vel_count | int32 | | How many cycles ago the agent has seen or heard the velocity of the player. |
seen_vel_count | int32 | | How many cycles ago the agent has seen the velocity of the player. |
ghost_count | int32 | | How many cycles ago the agent has lost the player. |
dist_from_self | float | | The distance of the player from the agent who is sending the message. |
angle_from_self | float | | The angle of the player from the agent who is sending the message. |
id | int32 | | The unique identifier of the player. |
side | Side | | The side of the player. It can be LEFT or RIGHT or UNKNOWN if the side is not known. |
uniform_number | int32 | | The uniform number of the player. |
uniform_number_count | int32 | | How many cycles ago the agent has seen the uniform number of the player. |
is_goalie | bool | | Whether the player is a goalie or not. |
body_direction | float | | The body direction of the player. |
body_direction_count | int32 | | How many cycles ago the agent has seen the body direction of the player. |
face_direction | float | | The face direction of the player. In soccer simulation 2D, face direction is the direction that the player is looking at. |
face_direction_count | int32 | | How many cycles ago the agent has seen the face direction of the player. |
point_to_direction | float | | The direction that the player is pointing to. |
point_to_direction_count | int32 | | How many cycles ago the agent has seen the point to direction of the player. |
is_kicking | bool | | Whether the player is kicking or not. |
dist_from_ball | float | | The distance of the player from the ball. |
angle_from_ball | float | | The angle of the player from the ball. |
ball_reach_steps | int32 | | How many cycles the player needs to reach the ball. |
is_tackling | bool | | Whether the player is tackling or not. |
type_id | int32 | | The type identifier of the player. |
PlayerAction
PlayerActions
Field | Type | Label | Description |
---|---|---|---|
actions | PlayerAction | repeated | |
ignore_preprocess | bool | | |
ignore_doforcekick | bool | | |
ignore_doHeardPassRecieve | bool | | |
ignore_doIntention | bool | | |
ignore_shootInPreprocess | bool | | |
PlayerParam
Field | Type | Label | Description |
---|---|---|---|
register_response | RegisterResponse | | |
player_types | int32 | | |
subs_max | int32 | | |
pt_max | int32 | | |
allow_mult_default_type | bool | | |
player_speed_max_delta_min | float | | |
player_speed_max_delta_max | float | | |
stamina_inc_max_delta_factor | float | | |
player_decay_delta_min | float | | |
player_decay_delta_max | float | | |
inertia_moment_delta_factor | float | | |
dash_power_rate_delta_min | float | | |
dash_power_rate_delta_max | float | | |
player_size_delta_factor | float | | |
kickable_margin_delta_min | float | | |
kickable_margin_delta_max | float | | |
kick_rand_delta_factor | float | | |
extra_stamina_delta_min | float | | |
extra_stamina_delta_max | float | | |
effort_max_delta_factor | float | | |
effort_min_delta_factor | float | | |
random_seed | int32 | | |
new_dash_power_rate_delta_min | float | | |
new_dash_power_rate_delta_max | float | | |
new_stamina_inc_max_delta_factor | float | | |
kick_power_rate_delta_min | float | | |
kick_power_rate_delta_max | float | | |
foul_detect_probability_delta_factor | float | | |
catchable_area_l_stretch_min | float | | |
catchable_area_l_stretch_max | float | | |
PlayerType
PointTo
PointToOf
RecoveryMessage
Field | Type | Label | Description |
---|
recovery | float | | |
RegisterRequest
RegisterRequest is the message that the client sends to the server to register itself.
The server will respond with a RegisterResponse message.
Field | Type | Label | Description |
---|---|---|---|
agent_type | AgentType | | The type of the agent. It can be PlayerT, CoachT, or TrainerT. |
team_name | string | | The name of the team that the agent belongs to. |
uniform_number | int32 | | The uniform number of the agent. |
rpc_version | int32 | | The version of the RPC protocol that the client supports. |
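A minimal sketch of building a registration request; how the AgentType enum values are exposed in the generated Python module (pb2.AgentType.PlayerT vs. pb2.PlayerT) depends on how the enum is declared, and the rpc_version value here is illustrative only:
register_request = pb2.RegisterRequest(
    agent_type=pb2.AgentType.PlayerT,  # enum access path assumed; check your pb2 module
    team_name="MyTeam",
    uniform_number=7,
    rpc_version=2                      # illustrative; use the version your proxy expects
)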
RegisterResponse
RegisterResponse is the message that the server sends to the client in response to a RegisterRequest message.
The server will respond with this message after receiving a RegisterRequest message.
The client should use the information in this message to identify itself to the server.
Field | Type | Label | Description |
---|---|---|---|
client_id | int32 | | The unique identifier assigned to the client by the server. |
agent_type | AgentType | | The type of the agent. It can be PlayerT, CoachT, or TrainerT. |
team_name | string | | The name of the team that the agent belongs to. |
uniform_number | int32 | | The uniform number of the agent. |
rpc_server_language_type | RpcServerLanguageType | | The language that the server is implemented in. |
RpcActionState
RpcCooperativeAction
RpcPredictState
RpcVector2D
RpcVector2D represents a 2D vector with additional properties.
If you want access to geometric operations, you can use the Vector2D class from the pyrusgeom package.
To use this class, install the pyrusgeom package, import the Vector2D class, and create a Vector2D object from the x and y values (see the sketch after the table below).
Field | Type | Label | Description |
---|---|---|---|
x | float | | The x-coordinate of the vector. |
y | float | | The y-coordinate of the vector. |
dist | float | | The magnitude (length) of the vector. |
angle | float | | The angle of the vector in degrees. In the soccer simulation 2D environment, 0 degrees points toward the opponent's goal. So, if your team plays on the left side, -90 degrees is up, 0 degrees is to the right (toward the opponent's goal), and 90 degrees is down. |
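A minimal conversion sketch, assuming the usual pyrusgeom import path (verify it against your installed pyrusgeom version); rpc_vec is any RpcVector2D message, e.g. a ball or player position:
from pyrusgeom.vector_2d import Vector2D  # import path may differ between pyrusgeom versions

def dist_to_opponent_goal(rpc_vec):
    # Convert an RpcVector2D message into a pyrusgeom Vector2D and measure the
    # distance to the opponent goal center at (52.5, 0).
    pos = Vector2D(rpc_vec.x, rpc_vec.y)
    return pos.dist(Vector2D(52.5, 0.0))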
Say
Self
Self is the message that represents the agent itself in the soccer simulation.
When an agent sends a message to the playmaker server, self contains the information about the agent itself.
Field | Type | Label | Description |
---|---|---|---|
position | RpcVector2D | | The position of the agent. |
seen_position | RpcVector2D | | The position of the agent that the agent has seen. (By using flags) |
heard_position | RpcVector2D | | The position of the agent that the agent has heard. (This is not very useful) |
velocity | RpcVector2D | | The velocity of the agent. |
seen_velocity | RpcVector2D | | The velocity of the agent that the agent has seen. (By using flags) |
pos_count | int32 | | How many cycles ago the agent has seen or heard itself. |
seen_pos_count | int32 | | How many cycles ago the agent has seen itself. |
heard_pos_count | int32 | | How many cycles ago the agent has heard itself. |
vel_count | int32 | | How many cycles ago the agent has seen or heard the velocity of itself. |
seen_vel_count | int32 | | How many cycles ago the agent has seen the velocity of itself. |
ghost_count | int32 | | How many cycles ago the agent has lost itself. |
id | int32 | | The ID number for this object in proxy. |
side | Side | | The side of the agent. It can be LEFT or RIGHT or UNKNOWN if the side is not known. |
uniform_number | int32 | | The uniform number of the agent. |
uniform_number_count | int32 | | How many cycles ago the agent has seen the uniform number of itself. |
is_goalie | bool | | Whether the agent is a goalie or not. |
body_direction | float | | The body direction of the agent. |
body_direction_count | int32 | | How many cycles ago the agent has seen the body direction of itself. |
face_direction | float | | The face direction of the agent. In soccer simulation 2D, face direction is the direction that the agent is looking at. This is a global direction. |
face_direction_count | int32 | | How many cycles ago the agent has seen the face direction of itself. |
point_to_direction | float | | The direction that the agent is pointing to. This is a global direction. |
point_to_direction_count | int32 | | How many cycles ago the agent has seen the point to direction of itself. |
is_kicking | bool | | Whether the agent is kicking or not. |
dist_from_ball | float | | The distance of the agent from the ball. |
angle_from_ball | float | | The angle of the agent from the ball. |
ball_reach_steps | int32 | | How many cycles the agent needs to reach the ball. |
is_tackling | bool | | Whether the agent is tackling or not. |
relative_neck_direction | float | | The relative neck direction of the agent to the body direction. |
stamina | float | | The stamina of the agent. This number is between TODO |
is_kickable | bool | | Whether the agent can kick the ball, i.e. the ball is within the agent's kickable area. |
catch_probability | float | | The probability of the agent to catch the ball. This number is important for goalies. |
tackle_probability | float | | The probability of the agent to tackle the ball. |
foul_probability | float | | The probability of the agent to foul. |
view_width | ViewWidth | | The view width of the agent. It can be NARROW, NORMAL, or WIDE. |
type_id | int32 | | The type identifier of the agent. The rcssserver generates 18 different player types. The coach is responsible for giving the type information to the agents. |
kick_rate | float | | The kick rate of the agent. This number is calculated by this formula: self.playerType().kickRate(wm.ball().distFromSelf(), (wm.ball().angleFromSelf() - self.body()).degree()). A higher kick rate means the agent can give the ball a higher initial speed in any direction. |
recovery | float | | The current estimated recovery value. TODO more info |
stamina_capacity | float | | The stamina capacity of the agent. This number is between 0 and ~130000, depending on the server parameters. |
card | CardType | | The card type of the agent. It can be NO_CARD, YELLOW, or RED. |
catch_time | int32 | | The time when the last catch command is performed. |
effort | float | | The effort of the agent. TODO more info |
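As an illustration of using these fields, dash power can be scaled by the remaining stamina. The PlayerAction member name dash and the full-stamina value of roughly 8000 are assumptions here, not taken from this document:
def stamina_aware_dash(self_state, direction=0.0):
    # Dash at full power only while stamina is comfortably high (~8000 is full under
    # default server params; treat this threshold as an assumption).
    power = 100.0 if self_state.stamina > 4000 else 50.0
    return pb2.PlayerAction(dash=pb2.Dash(power=power, relative_direction=direction))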
SelfMessage
ServerParam
SetplayMessage
Field | Type | Label | Description |
---|---|---|---|
wait_step | int32 | | |
StaminaCapacityMessage
Field | Type | Label | Description |
---|---|---|---|
stamina_capacity | float | | |
StaminaMessage
Field | Type | Label | Description |
---|---|---|---|
stamina | float | | |
State
State is the message that represents the state of the agent in the soccer simulation.
Field | Type | Label | Description |
---|---|---|---|
register_response | RegisterResponse | | The response of the agent registration. The agent should use this information to identify itself to the playmaker server. |
world_model | WorldModel | | The world model of the agent. The agent should use this information to make decisions. If the server is in full state mode, the world model will be full state without noise. |
full_world_model | WorldModel | | The full world model of the agent. This value will be set only if the server is in full state mode and proxy agent is in debug mode. TODO add more information |
need_preprocess | bool | | Whether the proxy agent needs to perform preprocessing actions for this cycle or not. If it does, the proxy agent will ignore the playmaker actions (see also the ignore_preprocess field of PlayerActions). |
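A sketch of a playmaker-side handler that respects need_preprocess; the function name and signature are illustrative, not the exact generated service API:
def get_player_actions(state: pb2.State) -> pb2.PlayerActions:
    if state.need_preprocess:
        # Let the proxy agent run its own preprocessing this cycle.
        return pb2.PlayerActions(actions=[])
    # Otherwise build actions from state.world_model, e.g. with the offensive planner.
    planner = pb2.HeliosOffensivePlanner(direct_pass=True, short_dribble=True, simple_shoot=True)
    return pb2.PlayerActions(actions=[pb2.PlayerAction(helios_offensive_planner=planner)])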
Tackle
Field | Type | Label | Description |
---|---|---|---|
power_or_dir | float | | |
foul | bool | | |
TeammateEffector
TeammateEffector is the message that represents the coefficients applied to the teammates in the tree when calculating the predicted state evaluation.
Each coefficient should be 0.0 or greater. For example, if the evaluation of an action-state is 10, the action is a direct pass to player 5,
and the coefficient of player 5 is 0.5, the final evaluation of the action-state will be 5.
Example in Python gRPC:
actions = []
teammate_effector = pb2.TeammateEffector(
    coefficients={2: 1.2, 5: 1.6},
    apply_based_on_first_layer=False
)
planner_evaluation_effector = pb2.PlannerEvaluationEffector(
    teammate_effector=teammate_effector
)
planner_evaluation = pb2.PlannerEvaluation(
    effectors=planner_evaluation_effector,
)
helios_offensive_planner = pb2.HeliosOffensivePlanner(
    lead_pass=True,
    direct_pass=False,
    through_pass=True,
    simple_pass=True,
    short_dribble=True,
    long_dribble=True,
    simple_shoot=True,
    simple_dribble=False,
    cross=True,
    server_side_decision=False,
    max_depth=5,
    max_nodes=800,
    evaluation=planner_evaluation
)
actions.append(pb2.PlayerAction(helios_offensive_planner=helios_offensive_planner))
return pb2.PlayerActions(actions=actions)
Field | Type | Label | Description |
---|---|---|---|
coefficients | TeammateEffector.CoefficientsEntry | repeated | The map of the coefficients of the teammates. The key of the map is the uniform number of the teammate, and the value is the coefficient of the teammate. Each value should be 0.0 or greater. |
apply_based_on_first_layer | bool | | If this value is true, the coefficients will be applied based on the target of the first action of each action chain. For example, if we have a chain of actions like [direct_pass to 5, simple_pass to 6, simple_pass to 7], the coefficient of player 5 will be applied for all of the actions. |
TeammateEffector.CoefficientsEntry
TeammateMessage
ThreePlayerMessage
TrainerAction
TrainerActions
Turn
Turn is the message that represents the turn action in the soccer simulation.
By using this action, the agent can turn toward a direction relative to its current body direction.
The rcssserver calculates the next body direction of the agent based on the current body direction, the relative direction, and the velocity of the agent.
Field | Type | Label | Description |
---|---|---|---|
relative_direction | float | | The relative direction of the turn action with respect to the body direction of the agent. The direction can be between -180 and 180. |
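A minimal sketch of sending a turn; the PlayerAction member name turn is assumed to mirror the Turn message name (check your generated pb2 module):
actions = []
# Turn the body by 90 degrees relative to its current direction.
actions.append(pb2.PlayerAction(turn=pb2.Turn(relative_direction=90.0)))
return pb2.PlayerActions(actions=actions)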
TurnNeck
Field | Type | Label | Description |
---|---|---|---|
moment | float | | |
TwoPlayerMessage
View_ChangeWidth
View_Normal
View_Synch