
If an option is omitted on an existing volume, it gets reset to the default. #48

Open
pcahyna opened this issue Oct 16, 2019 · 22 comments

@pcahyna
Member

pcahyna commented Oct 16, 2019

Omitted mount_point: the volume gets unmounted.

Type switching: if I omit fs_type:, the volume gets reformatted with the default type (xfs). Wouldn't it be more reasonable to keep the existing filesystem? (Note that with storage_safe_mode, introduced in #43, any destructive operation is prevented: the role errors out. Still, there is the question of whether even attempting to change the type to the default is reasonable.)

Originally posted by @pcahyna in #43 (comment)
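
To illustrate, a minimal sketch of role input that triggers both behaviors (the pool, disk, and volume names are hypothetical, and the Galaxy role name linux-system-roles.storage is assumed):

```yaml
# First run: create an ext4 volume and mount it.
- hosts: all
  roles:
    - role: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo                 # hypothetical pool
            disks: ['sdb']            # hypothetical disk
            volumes:
              - name: test1
                size: 3G
                fs_type: ext4
                mount_point: /opt/test1

# Second run: fs_type and mount_point are omitted, so the volume is
# unmounted and (unless storage_safe_mode stops it) reformatted to the
# default fs_type, xfs.
- hosts: all
  roles:
    - role: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: ['sdb']
            volumes:
              - name: test1
                size: 3G
```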

@pcahyna
Member Author

pcahyna commented Oct 16, 2019

Quoting @dwlehman 's reply:

> I'm not sure about that. I can see a case for either behavior. I have a somewhat difficult time imagining a user omitting fs_type and having a specific expectation about the result. Probably some people will complain whichever way we choose to interpret it.

Quoting my reply:

> The scenario I am imagining is that someone will take a storage configuration created earlier by some other tool (anaconda, manual setup...), already used for storing data, and will start using the storage role to manage it, i.e. to change its properties, most likely the size. It can be unexpected to see the storage role reformatting your volumes (or at least trying to remove them, with the changes in #43) and recreating them just because you specified only the size (which you want to change) and not the filesystem type (or any other attribute that should be left unchanged).
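
As a sketch of that scenario (all names hypothetical): the natural thing to write when you only want to grow a pre-existing volume is just the new size, which the role currently reads as "and reset everything else to the defaults":

```yaml
storage_pools:
  - name: existing_vg        # hypothetical VG created earlier by anaconda/manually
    disks: ['sdb']
    volumes:
      - name: data
        size: 6G             # the only thing the user actually wants to change
        # fs_type and mount_point omitted -> currently treated as
        # "use the defaults", not "keep whatever is already there"
```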

@dwlehman
Collaborator

There has to be a way to specify that a given volume not be mounted, obviously.

We may need more sophisticated default handling if we're going to need to differentiate between "the default" and "whatever is already there". By the way, the latter is invalid for a non-existent volume.

@pcahyna
Member Author

pcahyna commented Oct 17, 2019

> There has to be a way to specify that a given volume not be mounted, obviously.

Of course. ZFS uses mountpoint=none for this. I think this already works in the role.

> We may need more sophisticated default handling if we're going to need to differentiate between "the default" and "whatever is already there". By the way, the latter is invalid for a non-existent volume.

Sure. The latter could be defined as "the default for new volumes, whatever is already there for existing volumes". By the way, how exactly does size work now? Does the role change the size of a volume to a default if it is not specified? Or is there no default?

@dwlehman
Collaborator

size is required, although I think you found that we aren't explicitly checking that.

@robled

robled commented Oct 17, 2019

> We may need more sophisticated default handling if we're going to need to differentiate between "the default" and "whatever is already there". By the way, the latter is invalid for a non-existent volume.

At a high level, this is an important point, specifically the "whatever is already there" part. For a configuration management tool such as Ansible, it becomes exceedingly complex to account for (and especially to test for) a near-infinite number of pre-existing configurations. When Central CI adopted cinch Ansible automation, systems previously configured with Puppet were deleted and re-deployed from scratch using cinch, precisely to avoid the complexity being discussed here. The same applies to systems that were configured manually.

@dwlehman
Collaborator

The only snag I can think of here is that this logic will almost certainly have to live in the provider/backend module(s), since doing it in yaml is not going to be viable (at least not for the blivet provider). These are the main reasons I chose to store the defaults in yaml:

  1. it allows users to override the values (perhaps this should use other variables instead?)
  2. it allows various providers/backends to share them

I think these are both pretty important. Any feedback on that?

I guess I can pass the defaults into the blivet module, but it feels a bit gross. At any rate, this better/more-complex logic for filling in omitted values is going to have to live in something more capable than yaml. I'll work on prototyping it in the blivet module for now.
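
For context, the defaults in question are ordinary role variables, roughly along these lines (a simplified sketch; the actual variable names and contents of defaults/main.yml may differ):

```yaml
# defaults/main.yml (sketch)
storage_volume_defaults:
  fs_type: xfs        # applied when fs_type is omitted
  mount_point: ""     # applied when mount_point is omitted
  fs_label: ""
```

Because these defaults are merged into the volume spec before the provider module runs, the module currently cannot tell "the user asked for xfs" apart from "the user said nothing", which is exactly the distinction this issue needs.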

@t-woerner

The role should keep settings that are not being changed (for example mount points, labels, etc.) while you are resizing a filesystem. It should not expect all unchanged settings to be provided again.
That said, a special mode that resets all other settings to the defaults might be good to add for people who want or need that behavior.

@dwlehman
Collaborator

Volumes of type 'disk' present an interesting problem, since they always pre-exist. At this point we could differentiate between cases where the formatting type is recognized by blkid and cases where it is not, but that definitely leaves room for accidental data loss if the device contains data in a format that blkid does not recognize.
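
For example (hypothetical names), a volume of type disk refers to a whole pre-existing device, so there is no "create" path at all:

```yaml
storage_volumes:
  - name: bar
    type: disk
    disks: ['sdc']           # the whole disk; it necessarily already exists
    fs_type: xfs
    mount_point: /opt/bar
```

Whether applying fs_type here is safe hinges entirely on what blkid reports about the existing contents of sdc.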

@pcahyna
Member Author

pcahyna commented Oct 29, 2019

@dwlehman this is true, but how is it related to the subject of this issue? IIUC the problem of accidentally deleting data that is not recognized by blkid exists already, independently of the change suggested in this issue.

By the way, I found that the value of fs_label is already being preserved by the role, i.e. not reset to the default (the empty string), contrary to what I mistakenly claimed to @t-woerner and @tabowling. This is the behaviour I propose in this issue, but it is inconsistent with the handling of other attributes (like mount_point).

@pcahyna
Member Author

pcahyna commented Oct 29, 2019

Actually, the issue with the label seems rather to be that the role does not support changing it at all: it ignores fs_label even when it is given explicitly, unless it is creating a new filesystem.

@dwlehman
Collaborator

> @dwlehman this is true, but how is it related to the subject of this issue? IIUC the problem of accidentally deleting data that is not recognized by blkid exists already, independently of the change suggested in this issue.

In this case blkid's inability to recognize the metadata will lead us to destroy data while we claim to be careful not to do so. It's true that this problem is independent of device/volume type, but it is particularly troublesome for disk volumes, since it is impossible for them not to be pre-existing.

@pcahyna
Member Author

pcahyna commented Oct 30, 2019

I still don't see the relation to this issue. We already claim to be careful not to destroy existing data in #43 .

@pcahyna
Member Author

pcahyna commented Oct 30, 2019

Also, it is not really different from existing utilities that require some kind of "force" flag to overwrite what they consider to be existing data (think mkfs.xfs -f), but cannot detect existing data with 100% reliability either.

@dwlehman
Collaborator

I'm not saying that it only becomes a problem within the context of this issue, but volumes of type disk are particularly vulnerable to this limitation since they always use the logic that depends on detecting pre-existing metadata.

@t-woerner

t-woerner commented Oct 31, 2019

I think it would be good to have additional states like mounted and unmounted, to ensure that a file system is mounted at a given mount point, or not mounted there (or not mounted at all; this could depend on the value of the mount point parameter in the task, for example).
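
Something along these lines, as an entirely hypothetical syntax (state is not an implemented volume attribute):

```yaml
storage_volumes:
  - name: test1
    state: mounted      # hypothetical: ensure mounted, change nothing else
  - name: test2
    state: unmounted    # hypothetical: ensure not mounted, change nothing else
```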

@dwlehman
Collaborator

How would mounted be different from present?

@pcahyna
Member Author

pcahyna commented Oct 31, 2019

@t-woerner can you please show an example role input variable showing what you have in mind? It is not clear to me either.

@t-woerner

@pcahyna mounted would not change any file system parameters, but would make sure that the filesystem is mounted. Likewise, unmounted would not change the filesystem parameters, but would simply make sure that it is not mounted. Only the identifier would be needed for these actions; no other parameter would be given to the task.

How do you currently unmount a filesystem using the storage role without removing the volume? The role documentation does not provide this information.

@dwlehman
Collaborator

dwlehman commented Nov 4, 2019

> How do you unmount at the moment using the storage role without removing the volume?

You define the volume with a mount point of null or ''.
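
For example, a sketch of today's way to keep a volume but unmount it (names hypothetical):

```yaml
storage_pools:
  - name: foo
    disks: ['sdb']
    volumes:
      - name: test1
        size: 3G
        mount_point: ""    # or null: the volume stays, the mount goes away
```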

@pcahyna
Member Author

pcahyna commented Mar 17, 2020

@dwlehman I would suggest using none, which is the special string used for this in fstab, or perhaps unmounted (this is what ZFS uses), rather than null. We may want to reserve null as a special value for general use in system roles. Otherwise I agree.

@dwlehman
Collaborator

@pcahyna in general (AFAIK) ansible treats none, None, null, and "" all the same. I am not sure it's advisable for us to try to assign different meanings to them given the common usage within ansible. Of course I am not a long-time ansible person, so ICBW.

@pcahyna
Member Author

pcahyna commented Mar 17, 2020

AFAIK null is a special value in YAML, corresponding to Python's None. "none" and "None" should be just strings, but I will check; Ansible sometimes performs surprising conversions. (What is confusing is that Jinja2, unlike YAML, calls the null literal none, but that is valid only inside Jinja2 templates.)
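
For reference, this is how a YAML 1.1 parser such as PyYAML (which Ansible uses) reads these literals:

```yaml
a: null    # YAML null -> Python None
b: ~       # also null
c:         # an empty value is null as well
d: ""      # the empty string, not null
e: none    # the plain string "none"
f: None    # the plain string "None"; only Jinja2 expressions spell null as none
```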

@dwlehman added this to the 2.0 milestone Aug 24, 2020