@@ -3,27 +3,30 @@ title: Organisations
sidebar_position: 110
---

- The Organisations data migration option is useful when you want to migrate your Flagsmith Organisation from one location
- to another. It's not a useful tool to merge Flagsmith data into another Flagsmith instance, for that use-case consider
- [feature flag importing](/system-administration/importing-and-exporting/features).
+ You can import and export an entire Flagsmith organisation. This lets you:

- If, for example, you wanted to move from self hosting Flagsmith to our SaaS version, the process looks something like
- this:
+ * Migrate from self-hosted Flagsmith to SaaS, or the other way around.
+ * Migrate from one self-hosted Flagsmith instance to another.

- - **Step 1.** Contact Flagsmith support to confirm you would like to migrate from self hosted to cloud
- - **Step 2.** Generate a JSON file from your self hosted instance (more information below)
- - **Step 3.** Send the JSON file to Flagsmith support
- - **Step 4.** Flagsmith support will import the JSON file into our cloud offering
- - **Step 5.** Register and re-add your users and passwords (Flagsmith support will need to assign at least one
-   organisation administrator to the newly imported organisation)
+ :::note

- :::tip
-
- You can import and export from any combination of self-hosted/SaaS to and from self-hosted/SaaS. If you need to go to or
- from our SaaS platform, you will need to get in touch with us to operate that part of the process for you.
+ Merging data between existing Flagsmith organisations **is not** supported. A new Flagsmith organisation is created as
+ part of the import process.

:::

+ ## Prerequisites
+
+ Importing or exporting an organisation requires shell access to a machine or container that has Flagsmith installed
+ and can connect to your Flagsmith database.
+
+ Organisations can be imported or exported using the local file system or S3-compatible storage.
+
+ Importing or exporting an organisation does not require downtime. However, it is a one-time operation that does not
+ continuously migrate data. You should plan a convenient time to perform imports and exports.
+
+ **If you need to copy an organisation from or to Flagsmith SaaS, please contact Flagsmith support.**
+
## What is exported?

We **will** export the following entities:
@@ -33,150 +36,220 @@ We **will** export the following entities:
- Segments
- Identities
- Integrations
+ - Client-side and server-side SDK keys

We **will not** export the following entities:

- - Flagsmith Users that log into the Dashboard and manage Flagsmith
+ - Flagsmith users
+ - Flag analytics
- Audit logs
- Change requests
- Scheduled flag changes
+ - Admin API keys
+ - Groups and custom roles
+ - SAML configurations and login method restrictions

- ## Dealing with existing data
+ ## Running shell commands

- The data migration process is designed to import data into a completely new Organisation within the target Flagsmith
- instance. This target instance could be a completely new installation, or it could have existing data in it, in separate
- Organisations.
+ Importing or exporting is performed using shell commands on a Flagsmith container that has access to your Flagsmith
+ database. You can also create a new container just for this operation.
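+
+ As a minimal sketch of the dedicated-container approach using plain Docker: this assumes the public
+ `flagsmith/flagsmith` image and a `DATABASE_URL` that points at the same database as your existing deployment; adjust
+ both for your environment.
+
+ ```bash
+ # Start a throwaway container with an interactive shell, bypassing the image's normal entrypoint.
+ # The DATABASE_URL value is illustrative and must point at your Flagsmith database.
+ docker run --rm -it --entrypoint sh \
+   -e DATABASE_URL='postgresql://postgres:password@your-db-host:5432/flagsmith' \
+   flagsmith/flagsmith:latest
+ ```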

- This process does **not** support importing data into an existing Organisation.
+ <details>

- ## Exporting
+ <summary>Kubernetes</summary>

- The export process involves running a command from a terminal window. This must either be run from a running container
- in your self hosted deployment or, alternatively, you can run a separate container that can connect to the same database
- as your deployed fleet of flagsmith instances. To run the command, you'll also need to find the id of your organisation.
- You can do this through the django admin interface. Information about accessing the admin interface can be found
- [here](/deployment/configuration/django-admin.md). Once you've obtained access to the admin interface, if you browse to
- the 'Organisations' menu item on the left, you should see something along the lines of the following:
+ To run an interactive shell inside an existing API container, use `kubectl exec`, replacing `YOUR_API_SERVICE` with the
+ name of your Flagsmith API Kubernetes service:

- ![Image](/img/organisations-admin.png)
+ ```
+ kubectl exec -it service/YOUR_API_SERVICE --container flagsmith-api -- sh
+ ```

- The ID you need is the one in brackets after the organisation name, so here it would be 1.
+ To find your Flagsmith API Kubernetes service, you can use `kubectl get services`:

- Once you've obtained the ID of your organisation, you're ready to export the organisation as a JSON file. There are 2
- options as to where to output the organisation export JSON file. Option 1 - local file system, Option 2 - S3 bucket.
- These options are detailed below.
+ ```
+ kubectl get services --selector app.kubernetes.io/component=api
+ ```

- ### Option 1 - Local File System
+ Putting these two commands together, this one-liner will give you an interactive API shell:

- ```bash
- python manage.py dumporganisationtolocalfs <organisation-id> <local-file-system-path>
+ ```
+ kubectl exec -it $(kubectl get service --selector app.kubernetes.io/component=api --output name) --container flagsmith-api -- sh
```

- e.g.
+ </details>

- ```bash
- python manage.py dumporganisationtolocalfs 1 /tmp/organisation-1.json
- ```
-
- Since this will write to your local file system, you may need to attach a volume to your docker container to be able to
- obtain the file afterwards. There is an example docker-compose file provided below to help guide you to do this.
-
- ```yml
- version: '3'
- services:
-   postgres:
-     image: postgres:15.5-alpine
-     environment:
-       POSTGRES_PASSWORD: password
-       POSTGRES_DB: flagsmith
-     container_name: flagsmith_postgres
-     ports:
-       - '5434:5432'
-
-   flagsmith:
-     build:
-       dockerfile: ./Dockerfile
-       context: .
-     environment:
-       DJANGO_ALLOWED_HOSTS: '*'
-       DATABASE_URL: postgresql://postgres:password@postgres:5432/flagsmith
-       ENV: prod
-       ALLOW_REGISTRATION_WITHOUT_INVITE: 'True'
-     ports:
-       - '8000:8000'
-     depends_on:
-       - postgres
-     links:
-       - postgres
-
-   flagsmith-fs-exporter:
-     build:
-       dockerfile: ./Dockerfile
-       context: .
-     environment:
-       DATABASE_URL: postgresql://postgres:password@postgres:5432/flagsmith
-     command:
-       - 'dumporganisationtolocalfs'
-       - '1'
-       - '/tmp/flagsmith-exporter/org-1.json'
-     depends_on:
-       - postgres
-       - flagsmith
-     links:
-       - postgres
-     volumes:
-       - '/tmp/flagsmith-exporter:/tmp/flagsmith-exporter'
- ```
-
- ### Option 2 - S3 bucket
-
- The command you will need to run for this is slightly different as per the following.
+ <details>
+
+ <summary>Docker Compose</summary>
+
+ Use `docker compose exec` to get an interactive shell inside your API container, replacing `flagsmith` with your
+ Flagsmith API service name from your Compose definition:
+
+ ```
+ docker compose exec -it flagsmith sh
+ ```
+
+ </details>
+
+ <details>
+
+ <summary>SSH and local environments</summary>
+
+ If you have a shell inside a Flagsmith environment, check that you can run `python manage.py`. In containers running
+ Flagsmith images, the `manage.py` file is located in the `/app` directory:
+
+ ```
+ python /app/manage.py health_check
+ ```
+
+ </details>
+
+ ## Working with files in Flagsmith containers {#files}
+
+ You can use the [`kubectl cp`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cp/) or
+ [`docker cp`](https://docs.docker.com/reference/cli/docker/container/cp/) commands to copy files to and from Flagsmith
+ containers. This lets you import a previously exported organisation, or copy an organisation you just exported to a
+ secure location.
+
+ <details>
+
+ <summary>Kubernetes</summary>
+
+ From a Flagsmith API container shell, run the `hostname` command to get the current pod name. For example:
+
+ ```
+ $ hostname
+ flagsmith-api-59d68fd74d-4kw2k
+ ```
+
+ Then, from a different machine, use `kubectl cp` to copy the exported file for further processing. For example, this
+ command copies a file from your pod's `/tmp` directory to your local machine's home directory:

- ```bash
- python manage.py dumporganisationtos3 <organisation-id> <bucket-name> <key>
```
+ kubectl cp --container flagsmith-api YOUR_API_POD_NAME:/tmp/organisation-1234.json ~/organisation-1234.json
+ ```
+
+ <h3>Read-only file systems</h3>
+
+ In some situations, you may not be able to write to `/tmp` or any directory in the container's root file system. If
+ this is the case, attach a writable volume to your API pods. For example, if you are using the Flagsmith Helm chart,
+ these values will create an [emptyDir volume](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) on your
+ Flagsmith API pods that can be used for exporting data:
+
+ ```yaml title="values.yaml"
+ api:
+   extraVolumes:
+     - name: exports
+       emptyDir: {}
+   volumeMounts:
+     - name: exports
+       mountPath: /exports
+ ```
+
+ </details>
+
+ <details>
+
+ <summary>Docker</summary>
+
+ From a Flagsmith API container shell, run the `hostname` command to get the container ID. For example:
+
+ ```
+ $ hostname
+ 6893461b8a7e
+ ```
+
+ Then, from the host machine, you can copy files to/from this container using `docker cp`. For example, this command
+ copies an exported organisation from your container's `/tmp` directory into your host machine's current directory:
+
+ ```
+ docker cp 6893461b8a7e:/tmp/organisation-1234.json .
+ ```
+
+ </details>
+
+ ## Exporting
+
+ To export your Flagsmith organisation, you need to know its ID. To find an organisation's ID, use one of the
+ following methods:
+
+ * From the Flagsmith dashboard, click your organisation name in the top left. The organisation ID is displayed in
+   the URL bar: `https://flagsmith.example.com/organisation/YOUR_ORGANISATION_ID/...`.
+ * From [Django Admin](/deployment/configuration/django-admin), browse to the Organisations section in the sidebar.
+   Here you can see all of your organisations and their IDs.
+ * If you have an Admin API key, call the
+   [List Organisations API endpoint](https://api.flagsmith.com/api/v1/docs/#/api/api_v1_organisations_list). This
+   returns all the organisations that your API key is a member of (see the example request below).
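+
+ As a minimal sketch of the third option: the request below assumes the `Api-Key` authorization scheme used by
+ Flagsmith Admin API keys and an example self-hosted host name; adjust both for your installation. The `id` field of
+ each returned organisation is the ID used by the export commands below.
+
+ ```bash
+ # List the organisations visible to this Admin API key (key and host are placeholders).
+ curl -H 'Authorization: Api-Key YOUR_ADMIN_API_KEY' \
+   'https://flagsmith.example.com/api/v1/organisations/'
+ ```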

- e.g.
+ Once you have shell access and you know the organisation's ID, you can export it to the container's file system or
+ S3-compatible storage.
+
+ ### Exporting to the local file system
+
+ To export the organisation with ID 1234 to a JSON file in the local file system:

```bash
- python manage.py dumporganisationtos3 1 my-export-bucket exports/organisation-1.json
+ python manage.py dumporganisationtolocalfs 1234 /tmp/organisation-1234.json
```

- This requires the application to be running with access to an AWS account. If you're running the application in AWS,
- make sure whichever role you are using to run you container has access to read from and write to the given S3 bucket.
- Alternatively, you can provide the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to refer to an
- IAM user that has access to the S3 bucket.
+ Then, [copy the exported JSON file](#files) to a secure location.
+
+ ### Exporting to S3-compatible storage

- #### Using localstack to achieve local/test exports with s3 can be done using
+ To export the organisation with ID 1234 to a key named `1234.json` in the S3 bucket named `my-bucket`:

- [localstack](https://github.com/localstack/localstack) and the
- [service-specific endpoint](https://docs.aws.amazon.com/sdkref/latest/guide/feature-ss-endpoints.html) strategy.
+ ```bash
+ python manage.py dumporganisationtos3 1234 my-bucket 1234.json
+ ```

- Once running you are able to set a specific url for the s3 service `AWS_ENDPOINT_URL_S3` or for all services
- `AWS_ENDPOINT_URL`.
+ You can provide [additional S3 configuration](#s3-configuration) for authentication or to use services other than AWS
+ S3.

## Importing

- ### Option 1 - Local File System
+ You can import an organisation from the local file system or S3-compatible storage.

- This is coming soon - see https://github.com/Flagsmith/flagsmith/issues/2512 for more info.
+ ### Importing from the local file system

- ### Option 2 - S3 bucket
+ To import the organisation exported in the file `/tmp/org-1234.json`, run this command from a Flagsmith container:

- ```bash
- python manage.py importorganisationfroms3 <bucket-name> <key>
+ ```
+ python manage.py loaddata /tmp/org-1234.json
```

- e.g.
+ ### Importing from S3-compatible storage
+
+ To import the organisation exported in the key `org-1234.json` of the AWS S3 bucket named `my-bucket`, run this command
+ from a Flagsmith container:

```bash
- python manage.py importorganisationfroms3 my-export-bucket exports/organisation-1.json
+ python manage.py importorganisationfroms3 my-bucket org-1234.json
```

- #### Using localstack to achieve local/test imports with s3 can be done using
+ You can provide [additional S3 configuration](#s3-configuration) for authentication or to use services other than AWS
+ S3.
+
+ ### Accessing an imported organisation

- [localstack](https://github.com/localstack/localstack) and the
- [service-specific endpoint](https://docs.aws.amazon.com/sdkref/latest/guide/feature-ss-endpoints.html) strategy.
+ After you import an organisation, you will need to add your Flagsmith user to it. To do this, edit the imported
+ organisation from [Django Admin](/deployment/configuration/django-admin) and add your user to it with Admin permissions:

- Once running you are able to set a specific url for the s3 service `AWS_ENDPOINT_URL_S3` or for all services
- `AWS_ENDPOINT_URL`.
+ ![](django-admin.png)
+
+ ## Additional S3 configuration {#s3-configuration}
+
+ To provide credentials, set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables before running
+ any commands. For example:
+
+ ```
+ export AWS_ACCESS_KEY_ID='abc123'
+ export AWS_SECRET_ACCESS_KEY='xyz456'
+ ```
+
+ By default, all commands will interact with buckets hosted on AWS S3. To use other S3-compatible services such as Google
+ Cloud Storage, set the `AWS_ENDPOINT_URL_S3` environment variable:
+
+ ```
+ export AWS_ENDPOINT_URL_S3='https://storage.googleapis.com'
+ ```
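+
+ For example, a sketch that combines these settings with the export command from earlier; the credentials, endpoint,
+ and bucket name are illustrative:
+
+ ```bash
+ # Export organisation 1234 to an S3-compatible bucket using explicit credentials
+ # and a custom endpoint (for example, a MinIO server).
+ export AWS_ACCESS_KEY_ID='minio-access-key'
+ export AWS_SECRET_ACCESS_KEY='minio-secret-key'
+ export AWS_ENDPOINT_URL_S3='https://minio.example.com'
+ python manage.py dumporganisationtos3 1234 my-bucket 1234.json
+ ```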