feat(is_pre_process): add config item
add `is_pre_process` and make it configurable
Dirreke committed Dec 7, 2023
1 parent 5b80116 commit e210ab7
Showing 11 changed files with 70 additions and 28 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -3,3 +3,4 @@ Cargo.lock
.idea/
*.iml
.vscode/
log/
2 changes: 1 addition & 1 deletion .markdownlint.yml
@@ -1 +1 @@
line-length: false
line-length: true
21 changes: 13 additions & 8 deletions docs/Configuration.md
@@ -174,8 +174,8 @@ other components, the default (and only supported) policy is `kind: compound`.
The _trigger_ field is used to dictate when the log file should be rolled. It
supports two types: `size`, and `time`. They both require a `limit` field.

For `size`, the `limit` field is a string which defines the maximum file size
prior to a rolling of the file. The limit field requires one of the following
units in bytes, case does not matter:

- b
@@ -192,7 +192,7 @@ trigger:
limit: 10 mb
```

For `time`, the `limit` field is a string which defines the time to roll the
file. The limit field supports the following units (second will be used if the
unit is not specified), case does not matter:

@@ -204,11 +204,11 @@ unit is not specified), case does not matter:
- month[s]
- year[s]

> note: The log file will be rolled at the integer time. For example, if the
`limit` is set to `2 day`, the log file will be rolled at 0:00 every other a
day, regardless of the time `log4rs` was started or the log file was created.
This means that the initial log file will be likely rolled before the limit
is reached.
> Note: The log file will be rolled at the integer time. For example, if the
> `limit` is set to `2 day`, the log file will be rolled at 0:00 every other
> day, regardless of when `log4rs` was started or when the log file was created.
> This means that the initial log file will likely be rolled before the limit
> is reached.

i.e.
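For instance, a time trigger that rolls the log every hour could be configured as follows (an illustrative sketch; the unit names follow the list above):

```yml
trigger:
  kind: time
  limit: 1 hour
```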

@@ -238,6 +238,11 @@ The _count_ field is the exclusive maximum index used to name rolling files.
However, be warned that the roller renames every file when a log rolls over.
Having a large count value can negatively impact performance.

> Note: If you use `trigger: time`, the log file will be rolled before the
> record is written, which ensures that logs are rolled at the correct position
> instead of leaving a single line of logs in the previous log file. However,
> this may cause a substantial slowdown if the `background` feature is not enabled.
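Putting the pieces together, a rolling-file policy driven by a time trigger might look like the following sketch (paths and limits are illustrative, modeled on the sample config changed in this commit):

```yml
policy:
  kind: compound
  trigger:
    kind: time
    limit: 1 minute
  roller:
    kind: fixed_window
    pattern: "log/old-rolling_file-{}.log"
    base: 0
    count: 2
```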

4 changes: 2 additions & 2 deletions examples/compile_time_config.rs
@@ -7,7 +7,7 @@ fn main() {
let config = serde_yaml::from_str(config_str).unwrap();
log4rs::init_raw_config(config).unwrap();

info!("Goes to console");
error!("Goes to console");
info!("Goes to console, file and rolling file");
error!("Goes to console, file and rolling file");
trace!("Doesn't go to console as it is filtered out");
}
6 changes: 3 additions & 3 deletions examples/sample_config.yml
@@ -8,12 +8,12 @@ appenders:
level: info
file:
kind: file
path: "log/log.log"
path: "log/file.log"
encoder:
pattern: "[{d(%Y-%m-%dT%H:%M:%S%.6f)} {h({l}):<5.5} {M}] {m}{n}"
rollingfile:
kind: rolling_file
path: "log/log2.log"
path: "log/rolling_file.log"
encoder:
pattern: "[{d(%Y-%m-%dT%H:%M:%S%.6f)} {h({l}):<5.5} {M}] {m}{n}"
policy:
@@ -22,7 +22,7 @@ appenders:
limit: 1 minute
roller:
kind: fixed_window
pattern: "log/old-log-{}.log"
pattern: "log/old-rolling_file-{}.log"
base: 0
count: 2
root:
45 changes: 31 additions & 14 deletions src/append/rolling_file/mod.rs
@@ -167,25 +167,39 @@ impl Append for RollingFileAppender {
// TODO(eas): Perhaps this is better as a concurrent queue?
let mut writer = self.writer.lock();

let is_pre_process = self.policy.is_pre_process();
let log_writer = self.get_writer(&mut writer)?;
let len = log_writer.len;

let mut file = LogFile {
writer: &mut writer,
path: &self.path,
len,
};
if is_pre_process {
let len = log_writer.len;

let mut file = LogFile {
writer: &mut writer,
path: &self.path,
len,
};

// TODO(eas): Idea: make this optionally return a future, and if so, we initialize a queue for
// data that comes in while we are processing the file rotation.

// TODO(eas): Idea: make this optionally return a future, and if so, we initialize a queue for
// data that comes in while we are processing the file rotation.
self.policy.process(&mut file)?;

//first, rotate
self.policy.process(&mut file)?;
let log_writer_new = self.get_writer(&mut writer)?;
self.encoder.encode(log_writer_new, record)?;
log_writer_new.flush()?;
} else {
self.encoder.encode(log_writer, record)?;
log_writer.flush()?;
let len = log_writer.len;

//second, write
let writer_file = self.get_writer(&mut writer)?;
self.encoder.encode(writer_file, record)?;
writer_file.flush()?;
let mut file = LogFile {
writer: &mut writer,
path: &self.path,
len,
};

self.policy.process(&mut file)?;
}

Ok(())
}
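The new branching above boils down to an ordering choice: pre-process triggers (time) roll the file first and then write the record into the fresh file, while post-process triggers (size) write first and roll afterwards. A self-contained sketch of that control flow, using simplified stand-ins rather than the real log4rs types:

```rust
// Sketch of the append-time ordering introduced by this commit.
// `Event` is a hypothetical stand-in; the real code works on LogFile/writer.

#[derive(Debug, PartialEq)]
enum Event {
    Roll,
    Write,
}

fn append(is_pre_process: bool) -> Vec<Event> {
    let mut events = Vec::new();
    if is_pre_process {
        // Time trigger: rotate first so the record lands in the new file.
        events.push(Event::Roll);
        events.push(Event::Write);
    } else {
        // Size trigger: write first, then roll if the limit was exceeded.
        events.push(Event::Write);
        events.push(Event::Roll);
    }
    events
}

fn main() {
    assert_eq!(append(true), vec![Event::Roll, Event::Write]);
    assert_eq!(append(false), vec![Event::Write, Event::Roll]);
}
```

This is why a time-triggered log never leaves a single stray line in the previous file: the roll happens before the write.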
@@ -410,6 +424,9 @@ appenders:
fn process(&self, _: &mut LogFile) -> anyhow::Result<()> {
Ok(())
}
fn is_pre_process(&self) -> bool {
false
}
}

#[test]
4 changes: 4 additions & 0 deletions src/append/rolling_file/policy/compound/mod.rs
@@ -107,6 +107,10 @@ impl Policy for CompoundPolicy {
}
Ok(())
}

fn is_pre_process(&self) -> bool {
self.trigger.is_pre_process()
}
}

/// A deserializer for the `CompoundPolicyDeserializer`.
5 changes: 5 additions & 0 deletions src/append/rolling_file/policy/compound/trigger/mod.rs
@@ -16,6 +16,11 @@ pub mod time;
pub trait Trigger: fmt::Debug + Send + Sync + 'static {
/// Determines if the active log file should be rolled over.
fn trigger(&self, file: &LogFile) -> anyhow::Result<bool>;

/// Returns the `is_pre_process` flag for the trigger.
///
/// Defaults to true for time triggers and false for size triggers.
fn is_pre_process(&self) -> bool;
}
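For a custom trigger, implementing the new method is a one-line choice. Below is a minimal, self-contained sketch of the trait shape (simplified: the real log4rs trait also requires `fmt::Debug + Send + Sync + 'static` and returns `anyhow::Result<bool>` from `trigger`), with a hypothetical byte-limit trigger that behaves like `SizeTrigger`:

```rust
// Simplified stand-ins for the log4rs types; names are illustrative.
struct LogFile {
    len: u64,
}

trait Trigger {
    /// Determines if the active log file should be rolled over.
    fn trigger(&self, file: &LogFile) -> bool;
    /// Pre-process triggers roll the file before the record is written.
    fn is_pre_process(&self) -> bool;
}

/// Hypothetical custom trigger: rolls at a byte limit, post-process.
struct ByteLimit {
    limit: u64,
}

impl Trigger for ByteLimit {
    fn trigger(&self, file: &LogFile) -> bool {
        file.len > self.limit
    }
    fn is_pre_process(&self) -> bool {
        false // size-style triggers roll after the write
    }
}

fn main() {
    let t = ByteLimit { limit: 1024 };
    assert!(t.trigger(&LogFile { len: 2048 }));
    assert!(!t.trigger(&LogFile { len: 10 }));
    assert!(!t.is_pre_process());
}
```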

#[cfg(feature = "config_parsing")]
4 changes: 4 additions & 0 deletions src/append/rolling_file/policy/compound/trigger/size.rs
@@ -117,6 +117,10 @@ impl Trigger for SizeTrigger {
fn trigger(&self, file: &LogFile) -> anyhow::Result<bool> {
Ok(file.len_estimate() > self.limit)
}

fn is_pre_process(&self) -> bool {
false
}
}

/// A deserializer for the `SizeTrigger`.
4 changes: 4 additions & 0 deletions src/append/rolling_file/policy/compound/trigger/time.rs
@@ -210,6 +210,10 @@ impl Trigger for TimeTrigger {
}
Ok(is_triger)
}

fn is_pre_process(&self) -> bool {
true
}
}

/// A deserializer for the `TimeTrigger`.
2 changes: 2 additions & 0 deletions src/append/rolling_file/policy/mod.rs
@@ -16,6 +16,8 @@ pub trait Policy: Sync + Send + 'static + fmt::Debug {
/// This method is called after each log event. It is provided a reference
/// to the current log file.
fn process(&self, log: &mut LogFile) -> anyhow::Result<()>;
/// Returns the configured `Trigger::is_pre_process` value.
fn is_pre_process(&self) -> bool;
}

#[cfg(feature = "config_parsing")]
