Merged
20 changes: 10 additions & 10 deletions docs/tutorial-resources.rst
@@ -45,7 +45,7 @@ If you are working with ``resdk`` in an interactive session, the
logging feature prints useful messages. It will let you know
what is happening behind the scenes. Read more about
:ref:`how the logging is configured in resdk<resdk_resdk_logger>`
or about `Python logging`_ in general.

.. _`Python logging`: https://docs.python.org/2/howto/logging.html
.. _`Genialis Platform`: https://app.genialis.com
@@ -57,7 +57,7 @@ In Resolwe, meta-data is stored in the PostgreSQL database tables:
Data, Sample, Collection, Process, DescriptorSchema, Storage,
User, and Group. We support data management through the `REST API`_.
Each table is represented as a REST resource with a corresponding
endpoint. The Sample table is a special case, represented by the ``sample``
resource. More details on samples will be given later.

In ``resdk`` each REST API endpoint has a corresponding class, with
@@ -76,7 +76,7 @@ Data and Processes
The two most important resources in Resolwe are *process* and *data*.
A Process stores an algorithm that transforms inputs into outputs. It
is a blueprint for one step in the analysis. A Data object is an
instance of a Process. It is a complete record of the process step.
It remembers the inputs (files, arguments, parameters...), the process
(the algorithm) and the outputs (files, images, numbers...). In
addition, Data objects store some useful meta data, making it
@@ -87,10 +87,10 @@ easy to reproduce the dataflow and access information.
(``mm10_chr19.fasta``) with the *Bowtie* aligner. All you have to do is
create a Data object, setting the process and inputs. When the Data object
is created, the platform automatically runs the given process with
provided inputs, storing all inputs, outputs, and meta data.

You have already seen how to create and query data objects in the
`Download, upload, and annotations tutorial`_.
You will learn the details of running processes to generate new data objects in
the `Running processes tutorial`_. For now,
let's just inspect the Bowtie process to learn a little more about it:
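The process/data relationship described above can be captured in a plain-Python sketch (conceptual only; these are not resdk's real classes, and the output filename is made up):

```python
# Conceptual sketch: a Data object records the process (algorithm),
# its inputs, and its outputs -- a complete record of one analysis step.
class DataSketch:
    def __init__(self, process, inputs):
        self.process = process   # e.g. 'alignment-bowtie'
        self.input = inputs      # files, arguments, parameters...
        self.output = {}         # filled in when the process runs

    def run(self):
        # Stand-in for the platform running the process server-side;
        # the real platform stores outputs and meta data automatically.
        self.output = {'stats': '{}_stats.txt'.format(self.process)}
        return self.output


data = DataSketch('alignment-bowtie', {'genome': 'mm10_chr19.fasta'})
data.run()
```

In resdk itself you never call ``run()`` on a Data object; creating the Data object is what triggers the process on the platform.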
@@ -273,7 +273,7 @@ parameter is given, it will be interpreted as a unique identifier ``id`` or
``slug``, depending on if it is a number or string.

.. code-block:: python

# Get a Collection object by id
res.collection.get(128)

@@ -304,7 +304,7 @@ dataset using the ``process_type`` attribute:
data.process_type

# Filter data objects by type
-res.data.filter(type='data:genome:fasta:')
+res.data.filter(type='data:genome:fasta')

The following are some examples of filtering of collections, samples and data
objects:
@@ -336,7 +336,7 @@ objects:
case_1 = sample_list.get(name='case_1')

# select 'case_1' bam file
-bam = case_1.data.get(type='data:alignment:bam:')
+bam = case_1.data.get(type='data:alignment:bam')

# in which collections is sample 'case_1'
list_collections = case_1.collections
@@ -365,7 +365,7 @@ Python class attributes.

# Access a resource attribute
res.<resource>.<attribute>

# See a list of a resource's available attributes
res.<resource>.__dict__.keys()

6 changes: 3 additions & 3 deletions resdk/query.py
@@ -145,9 +145,9 @@ def __getitem__(self, index):
# pylint: disable=protected-access
if not isinstance(index, (slice,) + six.integer_types):
raise TypeError
-if ((not isinstance(index, slice) and index < 0) or
-(isinstance(index, slice) and index.start is not None and index.start < 0) or
-(isinstance(index, slice) and index.stop is not None and index.stop < 0)):
+if ((not isinstance(index, slice) and index < 0)
+or (isinstance(index, slice) and index.start is not None and index.start < 0)
+or (isinstance(index, slice) and index.stop is not None and index.stop < 0)):
raise ValueError("Negative indexing is not supported.")
if isinstance(index, slice) and index.step is not None:
raise ValueError("`step` parameter in slice is not supported")
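Pulled out of context, the rewritten guard in ``ResolweQuery.__getitem__`` behaves like this standalone sketch (Python 3, without the ``six`` compatibility shim the real code uses):

```python
def validate_index(index):
    """Sketch of the index checks in ResolweQuery.__getitem__ (simplified)."""
    if not isinstance(index, (slice, int)):
        raise TypeError("index must be an int or a slice")
    # Break-before-operator layout, the style this PR adopts:
    if ((not isinstance(index, slice) and index < 0)
            or (isinstance(index, slice) and index.start is not None and index.start < 0)
            or (isinstance(index, slice) and index.stop is not None and index.stop < 0)):
        raise ValueError("Negative indexing is not supported.")
    if isinstance(index, slice) and index.step is not None:
        raise ValueError("`step` parameter in slice is not supported")
    return index
```

Negative indices are rejected because a lazy query cannot know its length without an extra server round-trip.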
5 changes: 2 additions & 3 deletions resdk/resolwe.py
@@ -23,7 +23,7 @@
import yaml
# Needed because we mock requests in test_resolwe.py
from requests.exceptions import ConnectionError # pylint: disable=redefined-builtin
-from six.moves.urllib.parse import urljoin # pylint: disable=import-error
+from six.moves.urllib.parse import urljoin # pylint: disable=wrong-import-order

from .exceptions import ValidationError, handle_http_exception
from .query import ResolweQuery
@@ -344,8 +344,7 @@ def run(self, slug=None, input={}, descriptor=None, # pylint: disable=redefined
:return: data object that was just created
:rtype: Data object
"""
-if ((descriptor and not descriptor_schema) or
-(not descriptor and descriptor_schema)):
+if ((descriptor and not descriptor_schema) or (not descriptor and descriptor_schema)):
raise ValueError("Set both or neither descriptor and descriptor_schema.")

if src is not None:
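The collapsed-onto-one-line condition is an exclusive-or check: a descriptor is only meaningful together with its schema. A minimal standalone mirror of that validation:

```python
def check_descriptor_args(descriptor=None, descriptor_schema=None):
    """Mirror of the validation in Resolwe.run(): set both or neither."""
    if (descriptor and not descriptor_schema) or (not descriptor and descriptor_schema):
        raise ValueError("Set both or neither descriptor and descriptor_schema.")
```

Note the truthiness-based check treats an empty descriptor dict the same as ``None``, which matches the original code's behavior.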
8 changes: 4 additions & 4 deletions resdk/resources/base.py
@@ -138,10 +138,10 @@ def __setattr__(self, name, value):
more comprehensive check is called before save.

"""
-if (hasattr(self, '_original_values') and
-name in self._original_values and
-name in self.READ_ONLY_FIELDS and
-value != self._original_values[name]):
+if (hasattr(self, '_original_values')
+and name in self._original_values
+and name in self.READ_ONLY_FIELDS
+and value != self._original_values[name]):
raise ValueError("Can not change read only field {}".format(name))

super(BaseResource, self).__setattr__(name, value)
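The guard above intercepts every attribute assignment and blocks changes to server-managed fields. A self-contained Python 3 sketch of the same mechanism (simplified; the real ``BaseResource`` populates ``_original_values`` from the server response and goes through ``six``-style ``super()``):

```python
class ReadOnlyDemo:
    """Standalone sketch of the read-only guard in BaseResource.__setattr__."""

    READ_ONLY_FIELDS = ('id',)

    def __init__(self):
        self.id = 1
        # In resdk this dict is filled from the server's representation.
        self._original_values = {'id': 1}

    def __setattr__(self, name, value):
        # The hasattr check lets __init__ assign freely before
        # _original_values exists; afterwards, changing a read-only
        # field to a *different* value raises.
        if (hasattr(self, '_original_values')
                and name in self._original_values
                and name in self.READ_ONLY_FIELDS
                and value != self._original_values[name]):
            raise ValueError("Can not change read only field {}".format(name))
        super().__setattr__(name, value)
```

Re-assigning the same value is deliberately allowed, so round-tripping an object through save does not trip the guard.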
8 changes: 4 additions & 4 deletions resdk/resources/data.py
@@ -5,7 +5,7 @@
import logging

import requests
-from six.moves.urllib.parse import urljoin # pylint: disable=import-error
+from six.moves.urllib.parse import urljoin # pylint: disable=wrong-import-order

from .base import BaseResolweResource
from .descriptor import DescriptorSchema
@@ -210,9 +210,9 @@ def put_in_download_list(elm, fname):
field_name = 'output.{}'.format(field_name)

for ann_field_name, ann in self.annotation.items():
-if (ann_field_name.startswith('output') and
-(field_name is None or field_name == ann_field_name) and
-ann['value'] is not None):
+if (ann_field_name.startswith('output')
+and (field_name is None or field_name == ann_field_name)
+and ann['value'] is not None):
if ann['type'].startswith('basic:{}:'.format(field_type)):
put_in_download_list(ann['value'], ann_field_name)
elif ann['type'].startswith('list:basic:{}:'.format(field_type)):
8 changes: 4 additions & 4 deletions resdk/resources/sample.py
@@ -19,7 +19,7 @@ def get_reads(self):

def get_bam(self):
"""Return ``bam`` object on the sample."""
-return self.data.get(type='data:alignment:bam:')
+return self.data.get(type='data:alignment:bam')

def get_primary_bam(self, fallback_to_bam=False):
"""Return ``primary bam`` object on the sample.
@@ -30,7 +30,7 @@ def get_primary_bam(self, fallback_to_bam=False):

"""
try:
-return self.data.get(type='data:alignment:bam:primary:')
+return self.data.get(type='data:alignment:bam:primary')
except LookupError:
if fallback_to_bam:
return self.get_bam()
@@ -39,11 +39,11 @@ def get_macs(self):

def get_macs(self):
"""Return list of ``bed`` objects on the sample."""
-return self.data.filter(type='data:chipseq:macs14:')
+return self.data.filter(type='data:chipseq:macs14')

def get_cuffquant(self):
"""Return ``cuffquant`` object on the sample."""
-return self.data.get(type='data:cufflinks:cuffquant:')
+return self.data.get(type='data:cufflinks:cuffquant')


class Sample(SampleUtilsMixin, BaseCollection):
4 changes: 2 additions & 2 deletions resdk/scripts/reads.py
@@ -73,8 +73,8 @@ def upload_reads():
print("\nERROR: Incorrect file path(s).\n")
exit(1)
else:
-if (all(os.path.isfile(file) for file in args.r1) and
-all(os.path.isfile(file) for file in args.r2)):
+if (all(os.path.isfile(file) for file in args.r1)
+and all(os.path.isfile(file) for file in args.r2)):
resolwe.run('upload-fastq-paired', {'src1': args.r1, 'src2': args.r2},
collections=args.collection)
else:
4 changes: 2 additions & 2 deletions resdk/scripts/sequp.py
@@ -243,8 +243,8 @@ def parse_annotation_file(annotation_file):
if exp_type:
descriptor['experiment_type'] = exp_type
# Paired-end reads
-if (annotations[sample_n]['PAIRED_END'] == 'Y' and
-annotations[sample_n]['FASTQ_PATH_PAIR']):
+if (annotations[sample_n]['PAIRED_END'] == 'Y'
+and annotations[sample_n]['FASTQ_PATH_PAIR']):
rw_reads = annotations[sample_n]['FASTQ_PATH_PAIR'].split(',')
slug = 'upload-fastq-paired'
input_['src1'] = fw_reads
7 changes: 5 additions & 2 deletions resdk/tests/unit/test_relations.py
@@ -6,7 +6,7 @@

import unittest

-from mock import MagicMock
+from mock import MagicMock, patch

from resdk.resources.collection import Collection
from resdk.resources.relation import Relation
@@ -37,7 +37,10 @@ def test_samples(self):
relation.update()
self.assertEqual(relation._samples, None)

-def test_collection(self):
+# It appears it is not possible to deepcopy MagicMocks, so we just patch
+# the deepcopy functionality:
+@patch('resdk.resources.base.copy')
+def test_collection(self, copy_mock):
relation = Relation(id=1, resolwe=MagicMock())
collection = Collection(id=3, resolwe=MagicMock())
collection.id = 3 # this is overridden when initialized
9 changes: 8 additions & 1 deletion resdk/tests/unit/test_resolwe.py
@@ -424,15 +424,22 @@ def test_file_processing(self, resolwe_mock, data_mock):
input={"src": "/path/to/file1",
"src_list": ["/path/to/file2", "/path/to/file3"]})

+@patch('resdk.resolwe.copy')
@patch('resdk.resolwe.Resolwe', spec=True)
-def test_dehydrate_data(self, resolwe_mock):
+def test_dehydrate_data(self, resolwe_mock, copy_mock):
data_obj = Data(id=1, resolwe=MagicMock())
data_obj.id = 1 # this is overridden when initialized
process = self.process_mock

+# It appears it is not possible to deepcopy MagicMocks, so we just patch
+# the deepcopy functionality:
+copy_mock.deepcopy = MagicMock(return_value={"genome": data_obj})
result = Resolwe._process_inputs(resolwe_mock, {"genome": data_obj}, process)
self.assertEqual(result, {'genome': 1})

+# It appears it is not possible to deepcopy MagicMocks, so we just patch
+# the deepcopy functionality:
+copy_mock.deepcopy = MagicMock(return_value={"reads": data_obj})
result = Resolwe._process_inputs(resolwe_mock, {"reads": [data_obj]}, process)
self.assertEqual(result, {'reads': [1]})
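The patching trick these tests rely on can be shown with the standard library alone. The tests patch the ``copy`` name imported inside ``resdk.resolwe``; this illustration patches ``copy.deepcopy`` directly (and uses ``unittest.mock`` rather than the standalone ``mock`` package the test suite imports):

```python
import copy
from unittest import mock

sentinel = {'genome': 'mocked'}

# Inside the context manager, copy.deepcopy is replaced by a mock that
# always returns our sentinel, regardless of its argument.
with mock.patch.object(copy, 'deepcopy', return_value=sentinel):
    result = copy.deepcopy({'genome': 'real'})

# The original deepcopy is restored when the context manager exits.
restored = copy.deepcopy({'a': 1})
```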

2 changes: 1 addition & 1 deletion setup.cfg
@@ -17,7 +17,7 @@ ignore =
max-line-length=99
# Ignore E127: checked by pylint
# E127 continuation line over-indented for visual indent
-ignore=E127
+ignore=E127,W503

[pydocstyle]
match-dir = (?!tests|\.).*
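Suppressing W503 is what permits the new condition layout throughout this PR: W503 is the pycodestyle check that flags a line break *before* a binary operator. A small illustration of the now-allowed style:

```python
# With W503 ignored, long boolean expressions may break before the
# operator, putting `and`/`or` at the start of continuation lines:
def both_positive(a, b):
    return (a > 0
            and b > 0)
```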