What is the default log storage limit for a Mule application deployed to CloudHub?
By default, Mule applications deployed to CloudHub can store up to 100 MB of log data per worker or up to 30 days of logs, whichever limit is reached first. This means:
Storage Limit: The total amount of log data your application can store on CloudHub is capped at 100 MB per worker.
Time-Based Limit: Even if the total log size doesn't reach 100 MB, CloudHub will automatically remove older logs after 30 days to maintain storage efficiency.
Important Considerations:
These limits apply per worker. If your application uses multiple workers (e.g., for horizontal scaling), the total log storage capacity increases proportionally (100 MB per worker).
The size of the log data depends on the verbosity level you configure for your Mule application logs. More detailed logging will consume storage space faster.
CloudHub doesn't offer a way to increase this default limit directly within the platform.
Here are some options to manage log storage in CloudHub:
Configure Log Levels: Adjust the logging level of your application to a more concise level (e.g., INFO instead of DEBUG) to reduce the amount of log data generated; see the log4j2.xml sketch after this list.
External Logging Solutions: Integrate your application with an external logging solution such as Splunk or the ELK Stack for more robust log management and potentially higher storage capacity; a possible appender configuration is sketched after this list.
Regular Log Rotation: Implement a mechanism to rotate logs periodically, archiving older logs to a separate storage location to free up space within CloudHub; the RollingFile appender in the sketch below is one way to do this at the worker level.
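As a rough illustration of the log-level and rotation points, here is a minimal log4j2.xml sketch for a Mule 4 application. It is not the only valid layout: the application name my-app, the rollover size, and the pattern are placeholder assumptions you would adapt to your project. The root level is set to INFO to cut log volume, and a RollingFile appender rotates the worker-local log file by size.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <!-- Rotate the worker-local log file by size so a single file never grows unbounded -->
        <RollingFile name="file"
                     fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app.log"
                     filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app-%i.log">
            <PatternLayout pattern="%-5p %d [%t] [event: %X{correlationId}] %c: %m%n"/>
            <SizeBasedTriggeringPolicy size="10 MB"/>
            <DefaultRolloverStrategy max="10"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- INFO instead of DEBUG keeps log volume (and the 100 MB quota) under control -->
        <AsyncRoot level="INFO">
            <AppenderRef ref="file"/>
        </AsyncRoot>
    </Loggers>
</Configuration>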
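For the external-logging option, one possible approach (a sketch, not the only way) is to add a Log4j2 Http appender that forwards log events to a Splunk HTTP Event Collector endpoint. The host, port, and token below are placeholders; sending logs from CloudHub to an external system may also require disabling the default CloudHub logging for the application in Runtime Manager, depending on how your organization is set up.

<!-- Inside the <Appenders> section of log4j2.xml: placeholder Splunk HEC endpoint and token -->
<Http name="splunk-hec"
      url="https://YOUR-SPLUNK-HOST:8088/services/collector/raw">
    <!-- Splunk HEC authenticates with an "Authorization: Splunk <token>" header -->
    <Property name="Authorization" value="Splunk ${sys:splunk.hec.token}"/>
    <PatternLayout pattern="%-5p %d [%t] %c: %m%n"/>
</Http>

<!-- And reference it from the root logger in <Loggers>, alongside the file appender -->
<AsyncRoot level="INFO">
    <AppenderRef ref="file"/>
    <AppenderRef ref="splunk-hec"/>
</AsyncRoot>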
Additional Resources:
View Log Data: https://docs.mulesoft.com/cloudhub/viewing-log-data
CloudHub Architecture: https://docs.mulesoft.com/general/